Geekzone: technology news, blogs, forums
petes117
  #1697559 2-Jan-2017 13:19

richms:

petes117:

Yes that's true... but what about accidents? If a self-driving car swerves to avoid a collision and save its passenger's life, but kills a pedestrian, who is to blame? Or if the car knew it was going to be a choice between saving the passenger or the pedestrian, who should it save?

The person who owns it should be saved. Anyone thinking that the stuff will sell doing anything other than that is an idiot.

That makes sense from an economic standpoint, yes. But how far should the car go to protect its passengers? Where do you draw the line? What if the pedestrian who's killed is a mother and baby... could the owner live with themselves, even if it's the car that's to blame?

 

That's the issue with combining AI and robotics: life-changing split-second decisions like that would be taken out of our direct control, and not everyone would answer (or react, if they had control) the same way.

 

The same principle very much applies to military use: at what point is something deemed "acceptable losses", say with a drone strike? Humans have enough trouble making that decision, let alone an autonomous robot. I highly recommend the 2015 movie 'Eye in the Sky' if you're interested in how a situation like this might play out.




JimsonWeed



  #1697562 2-Jan-2017 13:37

Excellent question: "But how far should the car go to protect its passengers, where do you draw the line?"

 

Would you not agree that it should go at least as far as you would expect a human to go? To put it into perspective, assume a jet goes into a flat spin at low altitude and there's nothing the pilot can do to correct it: impact is imminent. You would expect a computer system to go at least as far as the pilot in trying to save himself, others, and the craft. But if the onboard computer predicted the spin 45 seconds before it happened, you wouldn't want it to say, "Welp, I'm outta here! Good luck with this one" :)

 

Good question though. I suspect the "government" would insist on a standard higher than what you would expect of a human response, but what is that threshold of tolerance? Age-old question: a log truck is coming down the hill with a load of logs. The brakes fail and there's a school bus up ahead. The driver can choose the cliff and take his own life to save the children, or he can mow them down like grass. Rest assured, it takes a lot of stones to give up your life in those situations, but people do it. Should the computer make that call for you?


frankv
  #1697564 2-Jan-2017 13:43

petes117:

That makes sense from an economic standpoint, yes. But how far should the car go to protect its passengers, where do you draw the line?

The obvious? next step is features that make the car safer for the passenger(s) but less safe for pedestrians. More mass, for example, as per the trend towards "safer" SUVs, at the expense of greater risk to everyone else on the road.

 

 

petes117: What if the pedestrian that's killed is a mother and baby... could the owner live with themselves, even if it's the car to blame?

 

 

I don't think that's an issue. If you're on a train that kills someone, there's no problem with living with yourself.

petes117
  #1697565 2-Jan-2017 13:43


Yes, your second example is exactly right, and quite timely, since just last April a fleet of self-driving trucks made their first trip across Europe! A situation like that with a self-driving truck is bound to occur one day.

 

Your first example with the plane is a bit more black and white, since the pilot and passengers are all in the same vehicle, so the AI should definitely do everything to save them all in that situation.


petes117
  #1697567 2-Jan-2017 13:50

frankv:

I don't think that's an issue. If you're on a train that kills someone, there's no problem with living with yourself.

Trains still have drivers, but they're a slightly different issue, since being on tracks removes the option to swerve in a life-or-death situation.


frankv
  #1697569 2-Jan-2017 14:05

petes117:

Trains still have drivers, but they are a slightly different issue considering they're on tracks which narrows your options to swerve in a life or death situation

I agree that it's an issue for train *drivers*, but not for train *passengers*. And the question was whether a *passenger* in a self-drive car would or should feel guilty about an accident caused by that vehicle.

geocom
  #1697570 2-Jan-2017 14:05

In what situation though would the car ever need to make a moral decision?

 

The car would attempt a controlled slow-down, as quickly as it safely can, so that the wheels don't lock up; and it would always maintain a safe distance from the car in front.

 

Every situation I have seen where people say cars need to make moral decisions uses examples that happen too fast for a human to comprehend anyway.
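To be concrete about what "maintain a safe distance" involves, here's a rough sketch in Python. The deceleration, reaction-time, and margin numbers are illustrative only, not from any real system:

```python
def stopping_distance(speed_mps, decel_mps2=6.0, reaction_s=0.1):
    """Distance travelled before stopping: reaction distance plus braking distance."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def should_brake(gap_m, own_speed_mps, lead_speed_mps, margin_m=5.0):
    """Brake when the current gap cannot absorb the difference between our
    stopping distance and the lead vehicle's stopping distance."""
    needed = stopping_distance(own_speed_mps) - stopping_distance(lead_speed_mps) + margin_m
    return gap_m < needed

# Example: 10 m behind a stationary obstacle at 30 m/s -> brake now.
print(should_brake(10, 30, 0))
```

The point being: this is plain geometry and physics evaluated continuously, no moral reasoning anywhere in the loop.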





Geoff E


frankv

  #1697573 2-Jan-2017 14:09

petes117:

Yes your second example is exactly right, and quite timely since just last April a fleet of self-driving trucks made their first trip across Europe! This situation with a self-driven truck is bound to occur one day.

Your first example with the plane is a bit more black and white since the pilot and passengers are all part of the same vehicle, so definitely the AI should do everything to save them all in that situation.

Yes, the correct answer is to create a system where harm to other people isn't possible. So self-driving logging trucks wouldn't be allowed to share roads with school buses. Or logging trucks would be fitted with redundant braking systems so that their brakes cannot fail.

 

Regrettably, a perfectly safe system would be infinitely expensive, which is why we take risks every day of our lives, especially on the roads.

 

 


Geektastic

  #1697574 2-Jan-2017 14:12

petes117:

frankv:

The issue isn't about robots and AI and so on. The issue is about *people* misusing them for their own ends.

Yes that's true... but what about accidents? If a self-driving car swerves to avoid a collision and save its passenger's life, but kills a pedestrian, who is to blame? Or if the car knew it was going to be a choice between saving the passenger or the pedestrian, who should it save?

Wider but similar.

What happens if the AI decides that X country or Y religion is responsible for too much discord/damage/pollution or whatever and calculates that the logical answer is to erase the cause, like amputating a gangrenous limb?






gzt
  #1697577 2-Jan-2017 14:16

petes117:

Yes that's true... but what about accidents? If a self-driving car swerves to avoid a collision and save its passenger's life, but kills a pedestrian, who is to blame? Or if the car knew it was going to be a choice between saving the passenger or the pedestrian, who should it save?


If an intelligence knows a person will die as a result of a decision, and the intelligence chooses that decision path, that is intentional killing / aka murder.

It would not be sensible or desirable for AI to make decisions requiring that.

petes117
  #1697585 2-Jan-2017 14:43

geocom:

In every situation i have seen where people say cars need to make moral decisions use examples that happen too fast for a human to comprehend anyway.

Just because the moral decision examples happen too fast for humans to comprehend doesn't mean they don't exist. 

 

 

 

gzt:

If an intelligence knows a person will die as a result of a decision, and the intelligence chooses that decision path, that is intentional killing / aka murder.

It would not be sensible or desirable for AI to make decisions requiring that.

 

In these hypothetical moral decision examples a person dies no matter what, with no third option (like a safe braking procedure, as geocom mentioned).

 

The AI might even make a (subjectively) better decision than a human ever could (e.g. saving the maximum number of people's lives possible), by being able to decide far quicker than a human.
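For illustration only, the "save the maximum number of people" rule reduces to something as blunt as this toy sketch. It's not anyone's real control system, and the action names and fatality estimates are made up; the genuinely hard part is producing those estimates at all, in milliseconds, from noisy sensors:

```python
def choose_action(predicted_fatalities):
    """Pick the action with the fewest expected deaths.

    predicted_fatalities: dict mapping a hypothetical action name to the
    expected number of fatalities if that action is taken.
    This is the crude 'utilitarian' rule in its entirety.
    """
    return min(predicted_fatalities, key=predicted_fatalities.get)

# choose_action({"swerve_left": 1, "brake_straight": 2})  # -> "swerve_left"
```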

 

 

 

There is no easy answer, unfortunately. No one is perfect, and neither will AI ever be.


 
 
 
 


gzt

  #1697587 2-Jan-2017 14:56

petes117: The AI might even make a (subjectively) better decision than a human ever could (e.g. saving the maximum number of people's lives possible), by being able to decide far quicker than a human.

That is where it is best used - making decisions to avoid those situations in the first place.

The vast majority of traffic accidents occur because of bad decisions about very simple things.

richms

  #1697633 2-Jan-2017 15:49

The AI also has access to much better sensor information, and if they ever sort out car-to-car communications it may know about things that aren't detectable from the car itself, like queues around a bend. I think it's the better information available that will be the biggest saver with autonomous cars.
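As far as I know there's no settled format for that kind of car-to-car message yet, but conceptually it only needs a few fields. A toy sketch (the type and field names are invented for illustration):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class HazardReport:
    lat: float        # position of the reporting car
    lon: float
    hazard: str       # e.g. "stationary_queue" for a queue around a blind bend
    speed_mps: float  # the reporting car's current speed
    timestamp: float  # when the observation was made

    def to_json(self) -> str:
        """Serialise for broadcast to nearby vehicles."""
        return json.dumps(asdict(self))

# A car stopped in a queue just past a blind bend broadcasts its situation:
report = HazardReport(lat=-36.8485, lon=174.7633,
                      hazard="stationary_queue", speed_mps=0.0,
                      timestamp=1483315200.0)
```

A following car receiving this a few hundred metres back could start slowing long before its own sensors could see anything.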





Richard rich.ms
