mclean:
networkn: The difference is, people can be held accountable.
Absolutely, people will be held accountable when driverless cars crash. In a situation like this, the software will be smart enough to identify several possibilities: steer for a possible safe gap, steer towards a child or an animal, or simply carry on and hit the obstruction ahead. The manufacturer's software will make these life-or-death decisions, the logic will be clearly evident, and the manufacturer will be culpable.
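To make that concrete, here is a hypothetical sketch of the explicit, rule-based logic this comment assumes (the option names and cost numbers are invented, and a real planner would be vastly more complex). Because a person wrote every rule, the decision can be audited line by line after a crash:

# Hypothetical sketch of explicit, rule-based collision logic.
# Every option and its cost is written out by a programmer,
# so the decision trail is fully inspectable.

OPTION_COSTS = {
    "steer_to_safe_gap": 0,      # no harm expected
    "hit_obstruction": 50,       # property damage, possible occupant injury
    "swerve_toward_animal": 80,
    "swerve_toward_child": 1000,
}

def choose_action(available_options):
    """Pick the lowest-cost option; the reasoning is traceable."""
    return min(available_options, key=OPTION_COSTS.__getitem__)

if __name__ == "__main__":
    # If no safe gap exists, the logged decision shows exactly why
    # the car carried on into the obstruction.
    print(choose_action(["hit_obstruction", "swerve_toward_child"]))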
I'm not sure that will always be the case (or even that it is now). I don't believe people are programming an explicit set of rules; rather, they use machine learning and pattern recognition. As things progress, we may reach a point where we can't understand why a car made a particular decision. It would be similar to asking Google to explain how it recognises a picture of a cat, versus asking a child how they do the same thing.
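By contrast, here is a minimal sketch of the learned approach (assuming scikit-learn and NumPy are installed; the features, labels, and training data are entirely made up). The "decision logic" ends up encoded in hundreds of fitted weights rather than readable rules, which is exactly why explaining a particular choice is hard:

# Minimal sketch of a learned decision-maker on toy, invented data.
# Nobody writes the rules; they are implicit in the fitted weights.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 4))                   # pretend sensor features (made up)
y = (X[:, 0] + X[:, 1] > 1).astype(int)    # 0 = brake, 1 = swerve

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)

print(model.predict([[0.9, 0.8, 0.1, 0.4]]))   # a decision comes out...
# ...but the "why" is buried in hundreds of weights with no symbolic meaning:
print(sum(w.size for w in model.coefs_))

There is no line of code anyone can point to that says "swerve towards the child costs more than hitting the obstruction"; the behaviour emerges from training, which complicates the accountability story above.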