Over the weekend, either BBC Knowledge, the History Channel, or Discovery aired a segment on "Combining Artificial Intelligence and Robots" and posed the question of whether it poses a threat to mankind. Part of this discussion covered the moral issues associated with enabling "smart robots" to carry out autonomous killing and the like. Although I could not find the actual segment that aired, I did find another documentary on YouTube that discusses the same thing.
What I'd like to know are your thoughts on this subject. Will one country attempt to dominate the world through smart robots? Will we create our own demise if smart robots become self-aware (i.e., "Skynet" from the Terminator series)? Should controls be put in place to stop the development of smart robots for military purposes? Or is all this just a bunch of fear-mongering hype?
Obviously, this is a very open-ended topic, but we do know that companies all over the world are investing billions into AI. "Cognitive computing" is already a common term in organisations wanting to utilise it for business intelligence and big data.