mdooher: The BIG problem is that the police always want to charge someone when there is a crash (they aren't even allowed to call them accidents). So who is in charge of an autonomous car? You? Google? The CPU?
dclegg: It's also interesting to ponder the ethics of autonomous cars, especially in scenarios where a crash is unavoidable and fatalities are guaranteed, and the software has to decide who lives and who dies.
mdooher: Without thinking too deeply, I'll go with this:
Assuming the two cars aren't having a conversation about who is going to "take the hit", I would say each car should work for its owner. I guess the people who can afford the better car/software get better protection: a bit like now, really.
dclegg: What if the scenario is different? An imminent collision that will kill the driver (and nobody else). The car can swerve, but it will then kill three children by the side of the road.
mdooher: Same, protect the driver. The software is working for the owner. Callous? Possibly, but that's the way I see it. The computer sees all external objects as just that: objects.
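That "work for the owner" stance is easy to state as a decision rule: rank the candidate maneuvers by risk to the occupant first, and only break ties on harm to anyone else. A minimal sketch of that rule, assuming hypothetical risk estimates already exist (Maneuver, occupant_risk and external_risk are all invented names, not any real system's API):

```python
# Sketch of the "car works for its owner" policy described above.
# All names here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_risk: float   # estimated probability of serious harm to the occupant
    external_risk: float   # estimated harm to people outside the vehicle

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Protect the owner first; only break ties on harm to others.
    return min(options, key=lambda m: (m.occupant_risk, m.external_risk))

if __name__ == "__main__":
    options = [
        Maneuver("brake straight", occupant_risk=0.9, external_risk=0.0),
        Maneuver("swerve left", occupant_risk=0.1, external_risk=0.8),
    ]
    print(choose_maneuver(options).name)  # -> "swerve left"
```

On dclegg's scenario this rule swerves into the children, exactly as described, because the occupant's risk dominates the sort key and external harm never outranks it.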
tdgeek: Maybe:
Avoid solid targets; if possible, aim for a softer target; if that target is human, avoid it; if it cannot be avoided, hit the human target with the front or side of the car, not a corner, if possible (rough sketch below).
or
if not under a bridge or in a tunnel, Eject
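Those priority rules read like a target-selection function. Here is a rough sketch of that ordering, where Obstacle and its solid/human/softness labels are invented for illustration and would have to come from a real perception system:

```python
# Sketch of the target-priority rules above; all fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Obstacle:
    name: str
    solid: bool      # e.g. a wall or tree, as opposed to a hedge or fence
    human: bool
    softness: float  # higher = softer target

def pick_target(obstacles: list[Obstacle]) -> tuple[Obstacle, str]:
    """Return the least-bad thing to hit and which part of the car to hit it with."""
    non_human = [o for o in obstacles if not o.human]
    if non_human:
        # Avoid solid targets first, then prefer the softest remaining one.
        best = min(non_human, key=lambda o: (o.solid, -o.softness))
        return best, "any"
    # A human cannot be avoided: use the front or side of the car, not a corner.
    best = min(obstacles, key=lambda o: -o.softness)
    return best, "front or side"

if __name__ == "__main__":
    scene = [
        Obstacle("brick wall", solid=True, human=False, softness=0.0),
        Obstacle("hedge", solid=False, human=False, softness=0.9),
        Obstacle("pedestrian", solid=False, human=True, softness=0.5),
    ]
    target, zone = pick_target(scene)
    print(target.name, zone)  # -> "hedge any"
```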
