SaltyNZ:DizzyD:ubergeeknz: Of course the cop-out option is that in this hypothetical extreme situation, the vehicle hands control to the human driver.
The assumption there is that the occupant can drive a car.
And that there are manual controls to take over with.
ubergeeknz: IMO the algorithm should try to cause the *least* harm overall. There are very few scenarios in reality where, when a car is being driven safely, avoiding e.g. a pedestrian would cause more harm to the vehicle occupants than the potential harm to the pedestrian. I think people are introducing a false dichotomy into the equation.
reven:ubergeeknz: IMO the algorithm should try to cause the *least* harm overall. There are very few scenarios in reality where, when a car is being driven safely, avoiding e.g. a pedestrian would cause more harm to the vehicle occupants than the potential harm to the pedestrian. I think people are introducing a false dichotomy into the equation.
Sure, but it has been shown that people would be less likely to buy an automated car if it would do this. They were happier for self-driving cars to behave that way as long as they weren't in them.
So if no one buys them....
And how do you quantify "least harm"? (a toy scoring sketch follows this post)
- Do you take age into consideration? Kill three 90-year-olds who have maybe 10 years left between them, or kill a 5-year-old who potentially has 90 years of life left?
- If the car crashes itself into a building, what if there are 30 school kids on the other side of the wall, so it saves 5 pedestrians but kills those 30 kids? (Because the car couldn't see the 30 kids, it thought only the car occupant would be killed.)
Luckily these situations should be few and far between; with all the sensors in these cars and how aware of their surroundings they should be, they should be able to stop safely in all but a few scenarios.
We're obviously not going to solve the issue here, but it's interesting to think about :)
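Purely as an illustration of the quantification problem reven raises (nothing here comes from any real vehicle; the lifespan and fatality probabilities are made-up knobs), an age-weighted "least harm" score based on expected life-years lost might look something like this:

```python
# Toy sketch of an age-weighted "least harm" score. Every constant
# here is an assumption, which is exactly the problem.

LIFE_EXPECTANCY = 90  # assumed average lifespan, purely illustrative

def expected_harm(ages, p_fatal):
    """Expected life-years lost for one candidate manoeuvre.

    ages    -- estimated ages of the people this manoeuvre endangers
    p_fatal -- estimated probability the manoeuvre kills them

    Note what is missing: anyone the sensors never saw (the 30 kids
    behind the wall) simply doesn't appear in `ages` at all.
    """
    years_left = sum(max(LIFE_EXPECTANCY - age, 1) for age in ages)
    return p_fatal * years_left

# reven's example: swerve into three 90-year-olds, or hit one 5-year-old?
print(expected_harm(ages=[90, 90, 90], p_fatal=0.9))  # ~2.7 life-years
print(expected_harm(ages=[5], p_fatal=0.9))           # ~76.5 life-years
```

The point of the toy is that choosing the lifespan, choosing the fatality probability, and deciding who even counts are all ethical judgements dressed up as parameters, which is why "least harm" is so hard to pin down.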
reven:ubergeeknz: Of course the cop-out option is that in this hypothetical extreme situation, the vehicle hands control to the human driver.
The human would have less time to make a decision and would probably just end up crashing, causing more harm.
ubergeeknz:reven:ubergeeknz: IMO the algorithm should try to cause the *least* harm overall. There are very few scenarios in reality where, when a car is being driven safely, avoiding e.g. a pedestrian would cause more harm to the vehicle occupants than the potential harm to the pedestrian. I think people are introducing a false dichotomy into the equation.
Sure, but it has been shown that people would be less likely to buy an automated car if it would do this. They were happier for self-driving cars to behave that way as long as they weren't in them.
So if no one buys them....
Whether people "want" to buy them or not becomes irrelevant once the majority of cars are self-drive. And how people say they would behave in a hypothetical situation has been shown many times to bear little relation to how they actually behave. I think it would be fair to say that people going out to buy a self-drive car are not (on the whole) going to even CONSIDER this.
freitasm:DizzyD:Rikkitic: This is a fascinating and serious subject and I was going to contribute, until I saw the stupid remark about cyclists. Grow up, people.
What if the cyclist is:
- pointing a gun at the car.
- a wanted criminal
- it's either the cyclist on the pavement or 10 kids standing in the road.
Would that make it OK?
You made a generalised comment before. This comment doesn't add to it.
Please stop trolling.
reven: Personally I think a self-driving car sounds cool, but I am nervous about it; I wouldn't want to get one for at least a decade after they become "stable". People can hack cars now, computers can crash, there's no historic safety data for these cars, etc.
I'd be more inclined to travel in an automated taxi/bus/plane than a human-driven one, however.
ubergeeknz:reven:ubergeeknz: Of course the cop-out option is that in this hypothetical extreme situation, the vehicle hands control to the human driver.
The human would have less time to make a decision and would probably just end up crashing, causing more harm.
Correct. But the blame would lie with the driver.
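To put rough numbers on reven's objection (the speed and takeover times below are assumed for illustration; simulator studies generally find a disengaged driver needs several seconds to regain situational awareness), the distance covered during a handover is substantial:

```python
# Back-of-envelope: how far does the car travel while a disengaged
# human "takes over"? All figures are assumed for illustration.

def distance_during_takeover(speed_kmh, takeover_s):
    return (speed_kmh / 3.6) * takeover_s   # km/h -> m/s, then distance

for takeover_s in (2, 5, 10):
    d = distance_during_takeover(100, takeover_s)
    print(f"{takeover_s:>2} s takeover at 100 km/h = {d:.0f} m travelled")
# 2 s -> ~56 m, 10 s -> ~278 m: by the time the human is "in control",
# the emergency that triggered the handover is already decided.
```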
ubergeeknz:reven: Personally I think a self-driving car sounds cool, but I am nervous about it; I wouldn't want to get one for at least a decade after they become "stable". People can hack cars now, computers can crash, there's no historic safety data for these cars, etc.
I'd be more inclined to travel in an automated taxi/bus/plane than a human-driven one, however.
I see that as the target market, initially.
There is some safety data: self-drive cars have been cruising around some parts for a while now and have an excellent safety record. But the only way you get more safety data is by putting more on the road :)
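As a rough sketch of why "more on the road" is the only route to convincing safety data (the human fatality rate below is assumed at about 1 per 100 million miles, and the "rule of three" is a standard statistical approximation for zero observed events):

```python
# Rule of three: after n event-free trials, the ~95% upper confidence
# bound on the event rate is roughly 3/n. Applied to road miles:

HUMAN_FATALITY_RATE = 1.0 / 100e6   # assumed: ~1 fatality per 100M miles

# Fatality-free miles needed before you can claim, at ~95% confidence,
# that the self-drive fleet is at least as safe as human drivers:
miles_needed = 3 / HUMAN_FATALITY_RATE
print(f"{miles_needed / 1e6:.0f} million fatality-free miles")  # 300
```

A handful of trial fleets would take decades to cover that distance, which is why the chicken-and-egg point below matters.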
scuwp: I don't think we are terribly far away from this already, and I'm just waiting for a driver to defend a charge of careless or dangerous driving causing a crash on the grounds that the electronic systems in the car prevented them from avoiding it. "It was the car's fault, your honor..." It's all very interesting.
reven:ubergeeknz: I see that as the target market, initially.
There is some safety data: self-drive cars have been cruising around some parts for a while now and have an excellent safety record. But the only way you get more safety data is by putting more on the road :)
Exactly, but if people don't buy them because they think they might die in one, then there won't be more on the road, meaning fewer people buying them due to low safety data :)
I see companies/governments using these before most consumers: taxi companies, public transport, etc.
reven:ubergeeknz: Of course the cop-out option is that in this hypothetical extreme situation, the vehicle hands control to the human driver.
The human would have less time to make a decision and would probably just end up crashing, causing more harm.
The only situations I can really think of that the car couldn't avoid are:
- things falling off the back of trucks, etc.
- rockslides, etc.
For pedestrians, other cars, etc., the automated cars should have enough sensors (well, the Google ones definitely appear to) to be fully aware of them and safely avoid any issue.
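As a rough sanity check on that sensor claim (friction, reaction time and the speeds are all assumed figures, not specifications of any real car), the car can only "safely stop" for things that appear outside its braking envelope:

```python
# Rough check, all constants assumed: how much road does a computer-
# driven car need to stop?

G = 9.81        # gravity, m/s^2
MU = 0.7        # assumed tyre/road friction on dry seal
REACT_S = 0.2   # assumed computer reaction time (humans: ~1.5 s)

def stopping_distance_m(speed_kmh):
    v = speed_kmh / 3.6                       # m/s
    return v * REACT_S + v**2 / (2 * MU * G)  # reaction + braking

for kmh in (50, 100):
    print(f"{kmh} km/h -> {stopping_distance_m(kmh):.0f} m to stop")
# ~17 m at 50 km/h, ~62 m at 100 km/h. A load falling off the truck
# ahead or a rockslide lands *inside* that envelope, which is why
# those are the cases no amount of sensing can brake for.
```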