MikeAqua: Why do people assume that AI will turn nasty?
It's a common theme in fiction because it makes for an interesting plot.
Most unethical human behaviour arises from some sort of greed, i.e. an obsessive desire for wealth, sex, or power.
If an AI doesn't have those desires, what is its motive to go all SkyNet on us?
Brendan: I've read the entire thread, and there have been some isolated posters who give me hope for this world, but for the most part what struck me was a singular lack of imagination and a dogged determination to stick to the tropes and ideologies they are saturated with.
A few points I think of:
1. There is no doubt that machines will replace all human labor. You may argue all you like that new jobs will become available for us, but that argument is flawed: if a machine can do your job, and anyone else's, it will also do any NEW job. Any argument to the contrary is ultimately an argument for Vitalism - a concept thoroughly debunked during the Victorian age.
2. Insane conjecture about machines making us slaves or wiping us out is puerile. Much of the power from these machines will come (at first) from them enhancing existing human abilities. These enhancements will expand. Eventually, there will be no clear delineation between man and machine. Prosthetics for your mind. Eventually though, they will be too complex for the standard human brain to interact with. But hopefully by then your mind will have already been transferred to better hardware...
3. Wealth distribution. This is perhaps where I have just seen some of the most slavish, laughable arguments in the whole thread. Any argument that relies on an ad-hoc enhancement to current pseudo-economic 'management' is as laughable as Creationists trying to explain away fossils or 10,000 year old bristle cone pines.
Most of us though will be happy to live in perfect health for an indefinite period of time, our lives powered by the Sun, our needs seen to via semi-intelligent machines that we made with our advanced 3D printers from plans off the Net.
Brendan:
2. Insane conjecture about machines making us slaves or wiping us out is puerile. Much of the power from these machines will come (at first) from them enhancing existing human abilities. These enhancements will expand. Eventually, there will be no clear delineation between man and machine. Prosthetics for your mind. Eventually though, they will be too complex for the standard human brain to interact with. But hopefully by then your mind will have already been transferred to better hardware...
tdgeek:
Interesting post. But there is no real need to cut down every opinion stated that does not agree with your own.
You have provided detail in points 1 and 2, great, but why not 3? I am interested to read your detail for point 3.
Fred99:Brendan:
2. Insane conjecture about machines making us slaves or wiping us out is puerile. Much of the power from these machines will come (at first) from them enhancing existing human abilities. These enhancements will expand. Eventually, there will be no clear delineation between man and machine. Prosthetics for your mind. Eventually though, they will be too complex for the standard human brain to interact with. But hopefully by then your mind will have already been transferred to better hardware...
I'm curious as to how you can dismiss conjecture as insane, then conjecture to the point of "prediction" about how you think it will happen.
Perhaps you've missed the point about AI - rather than being extensions of ourselves (bionics etc.), more in the nature of tools, the machines could be making what we might call moral choices.
Brendan:Fred99:Brendan:
2. Insane conjecture about machines making us slaves or wiping us out is puerile. Much of the power from these machines will come (at first) from them enhancing existing human abilities. These enhancements will expand. Eventually, there will be no clear delineation between man and machine. Prosthetics for your mind. Eventually though, they will be too complex for the standard human brain to interact with. But hopefully by then your mind will have already been transferred to better hardware...
I'm curious as to how you can dismiss conjecture as insane, then conjecture to the point of "prediction" about how you think it will happen.
I always applaud curiosity. It's the beginning of wisdom they say.
The mistake you make is in assuming that all conjecture has the same content, and that criticism of one therefore applies equally to the others.
1. Boringly predictable imaginings like 'the robots will take over' have been about since the 1950s, and I find them all to be less than likely because they pre-suppose a level of incompetence that would preclude the ability to construct them in the first place. Furthermore, it is more efficient to co-operate than it is to compete.
Brendan:
2. Why would a race of machines waste their time fighting with us, instead of simply transferring themselves to an environment we don't want that happens to have a trillion times our resources? The argument is equivalent to claiming you would fight a 3-year-old over a muddy puddle in the driveway.
3. The quickest way for us to develop AI is to duplicate the workings of our own brains, and before that some animal brains. They will be no more capable than we are. After that, we will want AI super-intelligence. We could just emulate several minds in a large computer, or perhaps run them at a faster speed. But that would only give us answers we could have reached ourselves, just faster. Useful, though.
Better QUALITY of thought is the real key. Better pattern recognition, better memory, more parallelism. Hyper-dimensional information that can be used directly without conversion loss. Oh, I'm sure it'll happen but unless we design it to be a psychopath, it would simply take 2 above I think. Why bother fighting over a muddy puddle? If we are lucky, it'll take us along and improve us.
Ignorance breeds violence. Knowledge brings peace. I see no reason to think it's different for smarter entities.
But feel free to tell me why I'm wrong.
Brendan:Perhaps you've missed the point about AI - rather than being extensions of ourselves (bionics etc.), more in the nature of tools, the machines could be making what we might call moral choices.
I have not missed your point; you have missed mine.
I think they will have 'morals' (a slippery concept itself) because they will need them in order to understand us and our knowledge, civilization, etc. But a super intelligent AI will likely have morals that we cannot understand because we will not be able to model the future as accurately as it does. It could be dangerous to us; or it could save us all.
Brendan:
This is why I would advocate enhancing our own minds just as we create AI. The two goals are remarkably compatible: technology and discoveries gained from an emulated human mind could easily show us how to enhance a real one - without anyone dying. New chemicals, better drugs, perhaps even some re-wiring. Expanding our short-term memory would enhance a great many things, for example. Chips that connect to neurons, crowd-sourced strategy optimization. Who knows.
Brendan:
Anyway, it's speculation at present, albeit extrapolated from current trends and achievements.
Recently, scientists have emulated a part of a rat's brain. It behaves identically to the real thing. But it runs on silicon.
Computer vision and recognition now allow driverless cars; 15 years ago this required a room full of computers. Google can search images based on their content right there on your laptop - i.e. it can 'see' what an image is.
Your smart phone understands the spoken word. Soon it will understand the meaning and context.
I do not wish to get bogged down in petty debates. I did not come here to hold a lecture. I've spent far too much time explaining subjects I thought were pretty self evident and now I am tired. Have fun everyone, and sorry if I was a bit brash.
Fred99:Brendan:
it is more efficient to co-operate than it is to compete.
That doesn't seem to be how things work in the human world, from either a biological POV (how we evolved) or how our social and economic system operates. The concept that "competition drives innovation" seems to be very well accepted.
Brendan:
2. Why would a race of machines waste their time fighting with us, instead of simply transferring themselves to an environment we don't want that happens to have a trillion times our resources? The argument is equivalent to claiming you would fight a 3-year-old over a muddy puddle in the driveway.
3. The quickest way for us to develop AI is to duplicate the workings of our own brains, and before that some animal brains. They will be no more capable than we are. After that, we will want AI super-intelligence. We could just emulate several minds in a large computer, or perhaps run them at a faster speed. But that would only give us answers we could have reached ourselves, just faster. Useful, though.
Better QUALITY of thought is the real key. Better pattern recognition, better memory, more parallelism. Hyper-dimensional information that can be used directly without conversion loss. Oh, I'm sure it'll happen but unless we design it to be a psychopath, it would simply take 2 above I think. Why bother fighting over a muddy puddle? If we are lucky, it'll take us along and improve us.
Ignorance breeds violence. Knowledge brings peace. I see no reason to think it's different for smarter entities.
But feel free to tell me why I'm wrong.
The conditions which make our planet a goldilocks zone for biological life may also make it a goldilocks zone for artificial life.
I don't share the optimistic view held by some that resources are unlimited.
When two (or more) "devices" seek to use the same limited resource in order to serve their human masters, then how are they going to decide to "share" it?
Brendan:Perhaps you've missed the point about AI - rather than being extensions of ourselves (bionics etc.), more in the nature of tools, the machines could be making what we might call moral choices.
I have not missed your point; you have missed mine.
I think they will have 'morals' (a slippery concept itself) because they will need them in order to understand us and our knowledge, civilization, etc. But a super intelligent AI will likely have morals that we cannot understand because we will not be able to model the future as accurately as it does. It could be dangerous to us; or it could save us all.
That's what I've been saying. Yes it could be. It could be terminal for human life. It's worth thinking about now.
Brendan:
This is why I would advocate enhancing our own minds just as we create AI. The two goals are remarkably compatible: technology and discoveries gained from an emulated human mind could easily show us how to enhance a real one - without anyone dying. New chemicals, better drugs, perhaps even some re-wiring. Expanding our short-term memory would enhance a great many things, for example. Chips that connect to neurons, crowd-sourced strategy optimization. Who knows.
Before going there, perhaps time to revisit the original topic - how the bounty of this revolution should be distributed. I definitely don't advocate doing any of this as "enhancement to maintain competitive advantage". That to me is an ethical bottom line - even if doping does help an individual win a bicycle race, it's something I'll never support.
frankv: Economics is all about allocating limited resources. If a resource (e.g. manufacturing capability) is not in short supply, then it won't be subject to economic pressures. But other things will be... so, if everyone has a 3D printer, then the power to run them (if there isn't already ubiquitous cheap power), and/or the comms to download designs (if there isn't already ubiquitous cheap Internet), and/or the mechanism for creating and distributing the raw materials, and/or the mechanism for removing waste (recycling machinery onsite perhaps?), and/or the capability to design objects become the limiting resources.
Given that manufacturing of mining or agricultural equipment will also be essentially free, I guess that the price of things will reduce to essentially the cost of the raw materials, which in turn will be proportional to the land (or sea) area needed to produce them. Similarly, the price of generating electricity will be proportional to the area needed for the solar panels.
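The cost model sketched above - manufacturing is essentially free, so an item's price collapses to its raw-material cost, which is proportional to the land (or sea) area needed to produce the feedstock - can be expressed as a toy calculation. All the numbers and names here are hypothetical illustrations, not figures from the thread:

```python
# Toy version of the "price reduces to land area" model: once machines
# make manufacturing free, the only remaining cost is the raw material,
# priced by the land area its production ties up.

AREA_COST_PER_M2 = 0.05  # assumed annual cost of land per square metre


def item_price(material_kg: float, area_m2_per_kg: float) -> float:
    """Price of an item when only raw-material (land) cost remains."""
    return material_kg * area_m2_per_kg * AREA_COST_PER_M2


# e.g. a 2 kg object whose feedstock needs 10 m^2 of land per kg:
print(item_price(2.0, 10.0))  # 2 * 10 * 0.05 = 1.0
```

Under this model, labour and capital drop out entirely; the land coefficient is the only lever left, which is what makes land the asset worth holding.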
I think the increasing push for IP rights is part of this... the IP proportion of the value of an item is increasing, so it's now worth squabbling over control of it. Expect that to go on.
So, my advice, invest in land, and in RIAA/MPAA.