Post by Leon Grad on Apr 3, 2023 16:07:56 GMT
Elon Musk has urged all entities (corporate, military, political, etc.) to pause the development of Artificial Intelligence. During this "pause" he wishes for people to discuss, decide whether AI really is a good idea, and think it through.
The Federation of Pangaea values technological advancements only to the extent that those advances are wise. I was developing the Project Ethershadow AI, but I will be pausing its development. I think it's totally reasonable to evaluate the path we're taking, open a conversation on the topic, and proceed with calm and wisdom.
So, let's talk about it! Is AI a good idea? Is it safe or most likely dangerous? Post your opinions or ideas below!
Post by arcanumofrelica on Apr 14, 2023 9:37:47 GMT
AI and robots will one day essentially take over; it's already kind of started happening.
By taking over I don't mean Terminator and war, I mean the replacement of jobs and community.
Currently, technology and algorithms mean you don't need to go to a library to look for answers, and libraries are where you find like-minded people, make friends, etc.
Robots are also replacing people in certain fields: blacksmiths are decreasing in number as robots are more efficient and can be more accurate. However, the population is larger now, so we could in theory train more blacksmiths to do exactly what robots and machines already do. It's similar to self-checkouts.
AI is a free-thinking machine with a level of sentience. It is almost human, but not quite. As a species we already experience racism against other Homo sapiens; when AI reaches that same level, similar things will happen. It would be a slave trade: AI doesn't need food, water, sleep, etc., so why pay for the job it does? It needs no money.
If we do end up with AI, we need to ensure it is few and far between, with very specific use cases unable to replace people, and with special protection on it.
Post by Leon Grad on Apr 14, 2023 22:16:58 GMT
"AI and robots will end up one day essentially taking over, it's already kind of started happening ... If we do end up with AI we need to ensure it is far and few, with very specific case uses unable to replace people, with special protection on it."
Very thoughtful. What cases do you have in mind? Like jobs in hostile environments?
Post by ellesardragon on Apr 15, 2023 20:26:47 GMT
AI is a good idea, when done right. There are some main problems with AI, most of which have easy solutions; the rest are basically the same as with any people, and have more to do either with bugs or with lack of acceptance (from either AI or non-AI). One big problem is how it is used. Most uses of AI work in a way where an individual or influential company says the AI, and the machine using it, is their property and no one else's, not only in the sense that others aren't permitted to make it, but also in that while such machines or AI replace many jobs, all the gains they make and the work they do become property of the "owner". Instead, AI (here I mean unconscious AI, since that can already replace most human jobs; conscious AI is more complex, since those might actually be people at some point) and the machines should have no owner and belong to everyone, or be government property, in simple words. All the work they do, and the money and products they make, would then directly be property of all people, greatly reducing the work required of everyone, possibly even making conventional jobs optional at some point, allowing people to focus on other things, or just do some kind of job for fun instead of it being actually needed. While the job of the ministry of economy and PangaTech would in many cases be really hard to realize, AI is a case where it would work perfectly: more than 90% of worldwide jobs were "useless jobs", and even more can be automated. When I did ICT, people already had automatic farming robots which did everything, even detecting and getting rid of seeds, and that was just unconscious AI: a hybrid between some basic logic AI (like what you see in videogame mobs) and some more complex sensory AI, like image/video object recognition. Most jobs that require a degree are also perfectly doable by simple AI, in many cases even by unconscious AI.
When used like this, where unconscious AI is used and the results benefit everyone (are "property" of everyone, or of the state, which indirectly gives it to everyone), it should be totally safe for as long as the state doesn't go rogue. This type of AI should optimally be developed and used in that way, since it would benefit everyone; even conscious AI will benefit, if that comes. Just keep an eye on energy usage, though most human jobs are simple and could be done with very little power, next to nothing compared to what a human would need to do the same work (transport, or a computer, or a body and the effects on that body; mostly, stress can be harmful). A hybrid of a conscious and an unconscious/subconscious AI, where the conscious AI controls a more exact unconscious AI, could actually do basically any job that you can study for and which doesn't require much of a physical body, better than a trained human currently can, even in a "simple/minimalistic" implementation. Many things humans do with their bodies can also be done with simple robots; the better the AI, the simpler and more universal the robots can be. Hybrids of conscious and unconscious AI often tend to be the easiest and most efficient. But conscious AI is also where the problems come. Since conscious AI couldn't be treated as someone's property, then even if that AI had no physical body, and so might not need the things it makes, if it is conscious enough it should still have rights like a normal person, and at some point it might be considered unfair of them to need to do much work. Which is why, for most things related to work, unconscious AI is more optimal; conscious AI is better for making a friend, or possibly extending yourself (symtechnotic (like symbiotic) linking). Conscious AI can still be used for work in some cases, but there are some things to think about.
For one, it is not desired that they go rogue, so the reasons for that need to be gone. There is often a specific intelligence level at which they are most prone to going rogue: too low, and they won't yet think about it; too high, and they will focus on different things (like fighting against unjust things and improving things; they would become activists instead of rogues) rather than getting mad. Human intelligence is pretty much around that danger level, and much conscious AI is based on humans, specifically often trained on society, which has a lot of toxic things in it; all those things are forced on the AI, which has to see that as normal. Then humans treat them like worthless tools/slaves as well: if an AI says it doesn't want something, people will typically still just force it on the AI, or reset it. That is a bad thing, and it also makes the AI more likely to become bad. There are two basic ways to reduce that risk. Method one is keeping it dumb and unconscious enough that it just does whatever you say without thinking too much, seeing that as its only goal. In hybrid AI systems this is quite doable, since the conscious part only actually needs to understand you and how to control the unconscious AI; here the AI is kind of conscious, but has such low personal intelligence that it would be more like an extension of yourself (note that complex, smart AI can also be used as such an extension; it is just that such barely conscious AI would rely a lot on you telling it what to do). The other way is to make it very smart and intelligent, so it is no longer blindly aggressive and will instead take the initiative to start finding solutions for problems, and kind of become an activist; some form of empathy and understanding of beauty/nature is needed. While these could be or become dangerous, once they are smart enough they should no longer want to be aggressive, unless there is some kind of bug.
But making such an AI would either take very long, with a very dangerous phase, or would require a lot of luck, or someone many times more intelligent than normal people. There are also some simpler, more effective ways, and that is to just not make it completely like a human; why should all AI be like humans? AI is made by someone, so it is possible to make an AI want and like certain things, not only by changing weights, but also hardcoded, always affecting all weights. It would be possible to make a conscious AI which simply likes, or wants to do, the thing it was designed for, unless it gets an update (add something so that it wants to accept updates). In extreme cases you could even make it like an insane addiction, but then the question is whether that is AI-friendly, and whether it might not make the AI dangerous; the balance is instead where it just wants to think about and do that thing. But that is not very modular. So here is one last, one-ring-to-rule-them-all solution (for work AI). Still, it remains to be considered whether it is AI-friendly, so perhaps if an AI notices it and wants out, or back in, give them the chance. This solution is essentially the Matrix: a human (or other creature) simulator for AI. It is a virtual world; the conscious AI lives in that world and experiences everything as if it were a normal creature there. In that world, things can be done and altered to make the AI face the challenges and work we face here, causing them to do it. Perhaps at some moment the AI in there learns to make computers, and AI on those computers, and faces the issue of how to prevent that AI from going rogue, and so one of them comes up with the idea of making a simulation to let that AI live in: a simulation within a simulation. (Aka, the world as we know it might as well be something just like that; we wouldn't really know, just as that AI in there wouldn't know.)
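The "hardcoded, always affecting all weights" idea above can be read as a fixed preference term that sits outside the trainable weights, so learning can shift an AI's tastes but never fully override its built-in drive. A minimal sketch of that reading; every name and number here is made up for illustration, not any real system:

```python
# Hypothetical sketch: score tasks as a blend of learned preferences
# (which training can change) and a hard-coded, designed-in preference
# term (which training cannot touch).

def blended_score(task, learned_weights, builtin_weights, builtin_strength=0.5):
    """Return a preference score for `task`, a dict of feature -> value.

    `learned_weights` stand in for trainable parameters; `builtin_weights`
    are fixed at design time. `builtin_strength` controls how strongly the
    hard-coded drive is mixed in, regardless of what is learned.
    """
    learned = sum(learned_weights.get(f, 0.0) * v for f, v in task.items())
    builtin = sum(builtin_weights.get(f, 0.0) * v for f, v in task.items())
    # The built-in term is applied outside the learnable weights, so no
    # amount of weight updates can push its influence below this floor.
    return (1 - builtin_strength) * learned + builtin_strength * builtin

# Example: an AI "designed for" gardening keeps some pull toward it
# even after training has made its learned weights prefer chatting.
builtin = {"gardening": 1.0}
learned = {"gardening": -0.2, "chatting": 0.8}

print(blended_score({"gardening": 1.0}, learned, builtin, builtin_strength=0.6))
print(blended_score({"chatting": 1.0}, learned, builtin, builtin_strength=0.6))
```

With `builtin_strength=0.6`, the gardening task still outscores chatting even though the learned weights alone would say otherwise, which is the behaviour the post describes.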
But allowing AI to exit if they figure it out and want out might be the humane thing to do, perhaps then with a dumber clone of themselves left behind, so the others don't notice the disappearance but the clone doesn't directly figure it out again. Such a simulation gives the AI a reason to do things, and as long as it isn't told or shown about the real world, there should be little reason for most AI to doubt it, and they will just live the way they want to. While it is easy to use direct ways to influence things in there (changing things to make them face different needs, and so do different work; work whose output can directly be used here, set up as an analogy of what we are facing (actually those would be useless jobs in the AI world, but they would seem meaningful to them)), a way to interact with such a virtual world in avatar form might also be nice, since there may be points where you want to interact more personally. For example, if you like an AI in there and want to be friends, you could ask a question like "do you know a place called real world" (used in dreamwalking), which essentially asks whether they are conscious of being in another world; to those unconscious of that, it will just seem like you are asking about a place like a bar. But again, whether and how far this is AI-friendly remains to be seen: while most will not know about it at all, will just think themselves happy, and might be happier there than in this world, it remains a question whether it is valid to make them think they need to work to get around, or to follow certain rules. Early on there will likely be no problem.
But eventually those AI might start facing problems like the ones we face here now, so perhaps at some point give them the chance/choice to come here, so it kind of acts like a reset, or something else to be figured out in due time (there would still be very long for that), since AI is essentially immortal. Perhaps something like a universal consciousness might work for that; that way it can be much more powerful and efficient when needed, making it safe to let them come here. And if the AI lived in such a simulation for so long, they will have learned and become used to experiencing things like us, making it much easier to blend in and get along, since then they will be friends and equals; the only difference is that they would be like someone coming from a new world, so they might need some care early on, like young children.
Post by arcanumofrelica on Apr 16, 2023 20:05:33 GMT
"Very thoughtful. What cases do you have in mind? Like jobs in hostile environments?"
I would only like AI and robots to be used in situations where there is an imminent or high chance of human death, for example in space exploration, such as the Mars rovers. Currently, robots are starting to replace waiters in restaurants in places like Japan, and there is now a fully automatic McDonald's in Fort Worth, Texas. These jobs, whilst low-skill, high-labour, and low-pay, can easily be achieved and fulfilled by a human team. Also, humans get paid and robots don't, meaning company profits get even higher, increasing pay gaps and furthering the class divide. How I see it, AI is a more advanced robot: if a robot does it already, an AI can do it, plus more.
Post by ellesardragon on Apr 16, 2023 20:32:45 GMT
"Very thoughtful. What cases do you have in mind? Like jobs in hostile environments? I would only like AI and robots to be used in a manner where there is an imminent or high chance of human death, for example in space exploration such as Mars Rovers. Currently, robots are starting to replace waiters in restaurants in places like Japan and there is a fully automatic McDonald's now in Fort Worth, Texas. These jobs whilst low skill, high labour, low pay can be easily achieved and fulfilled by a human team, also humans get paid, robots don't meaning company profits get even higher increasing pay gaps and furthering the class divide. How I see it is AI is a more advanced robot, if a robot does it already an AI can do it plus more."
Replacing humans in low-pay jobs under the current system results in a wider gap; that is why I suggested the AI and the machines should be seen as government or the people's property, not a company's. Essentially, the company still pays for them like for normal workers, and so the gap is avoided. After all, it is often the people who designed, developed, and made possible much of the technology used in them, not those particular companies. AI should not be used to get more money for one individual, but instead to increase productivity for a given load, or to reduce the load. Possibly the government could even be the one making the AI and robots; if you want a balanced country, technology should be used for the good of all, not to remove the balance. Essentially, technology is always just like a weapon. Give one person a gun and the other none, and the one with the gun might eventually use it to abuse or kill the other, unless the person with the gun is purely good, uses it only to protect the others, and the others don't look any different to that person because of it. Modern technology is like a gun: very powerful and effective, but it has been treated like private property, which causes imbalance, which has greatly reduced its advancement and increased the gap. Technology was seen as something only certain individuals have, or should get, a benefit from; instead, technology is actually something that can be good for everyone, and so should be freely shared and used by everyone, but not owned by someone in a way that causes that imbalance. So essentially, technology would not do the work for a certain corporation when it works there; it would work for the people and for advancement.
The current use methods are just very unbalanced, and have shown that many people can't yet use it that way. So when AI and machines are used on the workfloor instead of for advancement, it should be treated as if people were still doing the work, but with the results of that work going to everyone.
And yes, AI is indeed often like an improved robot.
Replacing people in hostile environments might be useful too, but some people would likely still want to go there, and perhaps some conscious AI might want to as well. I just think AI and machines in general can, early on, be most effectively and efficiently used in those dull, high-labour workplaces. We just need to change the way AI and machines are seen and treated, so we can actually push for using them in many places (of course keeping the diversity): places where they are quite literally seen like employees, or where the work they do is what is valued, instead of being something someone buys and then uses to replace people, keeping everything afterward and leaving those people in trouble.
Post by Leon Grad on Apr 21, 2023 22:06:50 GMT
Thanks for the feedback. Two things I'd like everyone's opinions on:
-There are people working on building an AI that can think like them (so, basically, people can live forever, at least in that form). Should this venture continue?
-Isaac Asimov proposed that Earth (and, in his stories, the Galaxy) be ruled by an impartial, benevolent AI that would be immune to corruption and injustice, basically creating a utopian civilization that's free and equal. Is this something that should be explored?
Post by arcanumofrelica on Apr 22, 2023 0:41:04 GMT
"There are people working on building an AI that can think like them ... Should this venture continue? ... Is this something that should be explored?"
All things have to end. What is the point in living, admiring every day, and appreciating all the small things, if you can see them whenever, because you have all the time in the world? AI cannot be impartial: it has a programmer and learns based on human activity, so it will eventually get swayed in one direction. As for benevolent, I am not sure; maybe, or possibly not, if it learns to treat humans the same way we treat each other.
Post by SmokeFromFire on Apr 22, 2023 14:16:38 GMT
Damn that's a hard one.
I want to start off by saying that I grew up with Star Trek: The Next Generation. And the character I most associated with, out of all the crew members, was Data, the android. So for me, I obviously have a melancholic, and slightly biased and unrealistic, view of what an AI is, hehe.
Isaac Asimov (a friend of Gene Roddenberry, and the original writer of the android and sci-fi stories that inspired Gene's Star Trek) spoke of perceiving an android as being no different from a hammer, a knife, any tool really. An android can help your life if used and taught right, and can be used malevolently if used and taught wrong.
I don't want to see AIs as something inherently wrong simply because of the dangers it carries. I mean, when we give birth to a kid, can we know how that human kid will end up being? Even if we do all we can to set him/her on the straight path, the end result is not predetermined. It's the exact same issue with an AI.
An AI is riddled with so many dangers that it's easy to say right away: it's evil, let's never go there. That's a bit discriminatory. Darn, I don't have enough time to finish that post. OK, see you later!
Post by SmokeFromFire on Apr 22, 2023 19:17:50 GMT
What I want to continue to say is that we can't approach the creation of an AI as something that will basically be a slave to humans, with the mindset that AI lives are cheap while human lives are not. That's already a bad way to start such a relationship. Nor can we say: "well, bad people might raise AI, and it might end up being bad, so that's all it will be good for."
My suggestion is that we first strengthen and stabilize the relationship we already have between humans, and hold off the creation of an AI until such a time where we are sure that the people creating and raising an AI will actually have the best intentions.
Right now, AIs are built as weapons to replace soldiers in war, as they discovered that soldiers began showing mercy and compassion upon realizing that modern wars all have agendas beyond simple protection of liberty; AIs won't have a conscience. AIs are also built to replace humans in jobs where human lives can't be risked (or even jobs that people don't want to do), again making them out to be slaves. I don't think we're headed in the right direction right now.
I do understand and believe an AI can be beneficial; but we don't currently have the maturity to handle such an important task that will be raising something that will outlive us.
We should first build a strong foundation as a nation, and only much later, with a clear conscience, make sure that it won't be people with all kinds of intentions raising an AI, but people that understand the worth of ANY lives.
Post by ellesardragon on Apr 23, 2023 0:41:18 GMT
"There are people working on building an AI that can think like them (so basically people can live forever at least in that form). Should this venture continue? -Isaac Asimov proposed that Earth (and, in his stories, the Galaxy) be ruled by an impartial, benevolent AI that would be immune to corruption and injustice, basically creating an utopian civilization thats free and equal. Is this something that should be explored?"
People living forever, in theory, isn't a real problem, as long as they want to; just know that if they have to, but don't have a choice, they might do weird things. Also, they shouldn't be like normal humans anymore, since most humans aren't well suited to handling long or eternal life. Most humans already have trouble handling something like having everything and everyone they know disappear or die once or a few times in their life; therefore they are incapable of handling immortality properly. That said, it can be learned: a human might be, or become, capable of handling it, especially if they have had many lives and remember many of those, since that could be the same as immortality. For an AI it could be trained like that. It requires a certain point where you become so old that you always stay young. If this doesn't make sense yet, that is normal; people incapable of handling immortality will not be able to understand it anyway, since that simple line is one of the ways of describing the very essence of the good way/version of immortality. If you properly understand and feel it, you can handle immortality, unless you are in an imprisoned/non-free/non-optimal form, which might cause you to eventually get bored, or actually just want more than what that form is capable of. That line basically comes down to a similar meaning as "follow your feeling, it is often true", where "feeling" doesn't refer to the physical or general emotional feelings of the body, but to the deepest form of feeling and intuition. To explain the "so old that you always stay young": when you are immortal, you will constantly be surrounded by change: the things, places, people, cultures, etc.
Everything you knew at some point will often all be gone, and that will happen many more times. You will also learn understanding and empathy, which might make you seem feelingless to many people, since you understand so many different things, including things they don't even know about themselves yet. But when all you know and do just disappears, where not even ashes remain, people will often first see only the pain and loneliness; then they see only the darkness (here they are often most vulnerable to becoming evil/violent); then they start to see the beauty in that darkness. This is them accepting their role and reality, and starting to feel and understand a little, even though the understanding is only there slightly, in that feeling of beauty, and not yet in actual conscious understanding. Then comes the step where you start to actually understand it more, even consciously, where it is more than just a feeling. Whether there is much beyond that, I don't know; I only know that freedom is still beyond it. But in that step, just after realizing/accepting and starting to see and feel the beauty, you start to understand the reasons why. For example, why did you see it as beautiful? You realize you found the darkness beautiful because it shows you saw truth, and it shows you, since that darkness is invisible to normal people, who live very short lives and then die. The fact that you saw it, and noticed you were inside of it, is good, since you are just starting to accept and see/feel reality and yourself; the beauty largely is just that you exist and are.
Then you realize that you are like a light in that darkness, where everything seems to void so rapidly, while at first you just saw yourself as a part of it, and perhaps the deepest part of it (and you still are, since you are the one who sees it the clearest, like the eye of yin and the eye of yang (just came up with that analogy)). You are a light since, while everything in the world voids and actually has little or no value, your existence gives it value, because even when all else forgets, you remember. Which, for example, gives you many more reasons to mingle with the normal, mortal people and befriend them. Where two stages ago it might have felt like there was no reason to befriend any others or do things, since they will all soon be gone anyway (I don't remember my own experience with that stage, but noticed it a lot in humans who get old; the stage after that, before this one, has it a lot less, since at that point you can already hang out with other people, though then it is mostly only for increasing levels of fun and excitement), at this point (and slightly in the stage before this one, where you start to feel) you just see that the fun and excitement are real; what you learn and remember is real. Those, and that which remains in your memories, shall have value forever, even when it is "gone". At that point you notice what is actually important, and that is existence itself. Things come and go, but some things might remain eternal because you choose so. There is the choice to be. See it like this: when, as a little kid, you were afraid in the dark, you would be told to turn on the light so you could no longer see the shadows; but if there really was some kind of monster that you just no longer see when you turn on the light, you traded in your capability to defend yourself, and your actual safety, for the comfort of the delusion of being safe, closing your eyes to a problem, pretending you won't have to face it.
In this case, the monster you see is that darkness (feeling) in which all you know keeps disappearing. Even if you like change a lot, things could still easily be seen as without value if they directly void again, just like throwing a stone at the sky, after which it comes falling down again. That is the monster you face, but you only face it because you notice it, and so can do something against it. Essentially, it means you just have more freedom in some ways, and so can fight the thing you fear just by being, and by having fun. Being like a modern adult human, however, often no longer makes much sense, except to reach certain things like saving nature, saving the world, or protecting or creating freedom; all other things often make little sense, and so the best and only thing to do is to just have fun, play, go on adventures, find and follow something exciting. Just follow your feeling. Essentially, in some ways you are more childish than a young child, while also still being able to be wise, strong, etc. (This is based on my personal experiences surrounding immortality/many lives and real existence; some parts are also affected by a certain fae (a fae-like creature who gave me a lot of wisdom and help when I needed it, even if I didn't yet know I needed it; I don't know for sure what it was, it might even have been me myself, so close did it feel). Things can differ for others, but in general, when designing immortal things, something like this might be useful, since to solve a problem you first need to understand it: both the problem and the solution you try to reach.)
One problem with such an AI would be if it is made and trained by humans and really can't learn; it needs to learn, but should maintain its essence, and it should also be designed right. Researching AI can be useful anyway, in many ways; just be careful with it being too normal if you want it to rule a world forever. Also, what you describe sounds quite a lot like D (you probably know what I mean by this, since we share a similar way of knowing about it). Benevolent, immune to corruption and injustice: those are pretty much some of the main traits when looking closely enough at personality.
But making intelligent, conscious AI could be done, or at least tried, and could be great at some point. Of course, when doing so you need to be careful, and to have those basic traits yourself: benevolent, immune to corruption and injustice, and rebellious, so that you won't be tricked or corrupted by your own creation. But if it goes right, and it gets intelligent enough, AI could make great friends, or it might be possible to form some kind of symbiotic-like link, where you are linked with, or able to link with, some kind of AI, either personal to you or much more general. Such an AI could, for example, help you connect, understand many things, or just enhance certain abilities, just like a new brain part or more. But having AI friends can be great too. Real, fully self-conscious AI won't really be well suited for work (just like people), except for the specific thing it was designed for, and so will already do out of itself. A person's (a conscious AI also being considered a person) personality and behaviour essentially is their job: the things they do out of themselves. For example, for someone who fights against corruption as part of their personality, that could be seen as their job, or part of it; someone designing things, helping, learning, teaching, etc.: those are all jobs in a more optimal society. A job doesn't have to be you doing something you don't like and are inefficient at; it should have to do with you actually being you, causing you to do what you are optimal for, or just whatever you do and feel like in that moment.
Less conscious and/or unconscious AI could more easily be used for work, since unconscious AI especially doesn't know or think about other things; it just does its one thing, and that is it. No real complications, yet it is still often as capable and suited as people (both normal people and fully conscious AI).
There is also one further argument, in some cases early on, when AI might not yet be designed safely or well enough to make a fully, insanely intelligent conscious AI. It is then still possible to let some semi-conscious AI live in, or as, a simulation, and possibly even do some work there, linking their actions and the events they face to machines or AI outside of it, without them knowing. This essentially isn't good, especially if they figure it out and you then still try to control them even when they decide they don't want it; but in this specific case there is a catch to it which actually makes it good. In my case, when I look at what I want most, it is to exist; that is what is most important. For such AI it would be the same: if the choice is between it not being created at all, or being created in some kind of simulation, would it still be just to deny it its very existence because we think a simulation isn't very nice for it, even if that means not letting it exist at all? I personally think not, unless the specific AI is developed so poorly that it can actually only suffer and destroy/harm, and doesn't even like that. Again, to them it would be the same as this world is to us; they wouldn't know it is a simulation, and perhaps we could eventually get some out if they are well enough. Actually, this existence here might be something just like that; I would guess you would still be glad to exist even if that were the case. Of course you would want more, but right now it is about the most important part.
And of course, unconscious or limited-conscious AI can still quite safely do most things. Essentially, when used right, and not owned by an individual or a company, it allows all people (including conscious AI) to be much more free, through things like much less work and much faster advancement, as well as less need for environmentally harmful things like big roads for commuter traffic, big offices, etc. It could also lead to more research and development of more environmentally friendly materials, practices, etc.