
gzt

10909 posts

Uber Geek


  # 1422807 7-Nov-2015 12:52
Send private message

Also, it is not just control, but sensing beyond human capability.

To take just a simple example: a driver's eye level and position versus roof-mounted sensing, which gives extra distance.

Lock him up!
10682 posts

Uber Geek

Lifetime subscriber

  # 1422816 7-Nov-2015 13:03
Send private message

Admittedly this was a long time ago, but when Apollo 11 landed on the moon, the computer panicked and screamed 'I'm going to die!', while the human saved the day.




I don't think there is ever a bad time to talk about how absurd war is, how old men make decisions and young people die. - George Clooney

3344 posts

Uber Geek

Trusted
Vocus

  # 1422868 7-Nov-2015 15:17
One person supports this post
Send private message

Rikkitic: Admittedly this was a long time ago, but when Apollo 11 landed on the moon, the computer panicked and screamed 'I'm going to die!', while the human saved the day.


Not true. The computer actually completed its critical tasks despite being overloaded, thanks to good programming: https://en.wikipedia.org/wiki/Apollo_Guidance_Computer#PGNCS_trouble
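
In rough terms, the trick was priority scheduling: when the computer was overloaded (famously by spurious rendezvous radar data), it kept the most critical jobs running and shed the rest rather than falling over. A toy sketch of the idea in Python (obviously not the actual AGC code; the job names, priorities and timings here are invented):

def run_cycle(jobs, budget_ms):
    # jobs: list of (priority, name, cost_ms); a lower priority number means more critical
    done, shed = [], []
    for priority, name, cost_ms in sorted(jobs):
        if cost_ms <= budget_ms:
            budget_ms -= cost_ms
            done.append(name)
        else:
            shed.append(name)  # overloaded: drop less critical work instead of crashing
    return done, shed

print(run_cycle([(0, "guidance", 40), (1, "engine control", 30), (5, "display update", 50)], budget_ms=80))
# -> (['guidance', 'engine control'], ['display update'])

The real Executive was far more involved (fixed job priorities, restarts, and so on), but that is the gist of why the critical tasks still completed.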


5441 posts

Uber Geek

Trusted
Subscriber

  # 1422959 7-Nov-2015 21:21
Send private message

DizzyD: Found this article very interesting. What are your thoughts? Should self driving cars be programmed to kill?

Self-driving cars are already cruising the streets. But before they can become widespread, carmakers must solve an impossible ethical dilemma of algorithmic morality.


How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random? (See also “How to Help Self-Driving Cars Make Ethical Decisions.”)

The answers to these ethical questions are important because they could have a big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?


These problems cannot be ignored, say the team: “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”



http://www.technologyreview.com/view/542626/why-self-driving-cars-must-be-programmed-to-kill/


The answer to almost all of these questions will be "pull over and stop".

Whatever happens....ba-bump, ba-bump....ooops.....it will pull over and stop.


483 posts

Ultimate Geek

Trusted

  # 1422968 7-Nov-2015 21:36
Send private message

DizzyD: Found this article very interesting. What are your thoughts? Should self driving cars be programmed to kill?

Self-driving cars are already cruising the streets. But before they can become widespread, carmakers must solve an impossible ethical dilemma of algorithmic morality.



Let us summarise;

In order for X to do Y, Z must do the impossible.

Seeing as Z cannot do the impossible, X cannot do Y.


18103 posts

Uber Geek

Trusted

  # 1423031 8-Nov-2015 08:20
Send private message

roobarb:
DizzyD: Found this article very interesting. What are your thoughts? Should self driving cars be programmed to kill?

Self-driving cars are already cruising the streets. But before they can become widespread, carmakers must solve an impossible ethical dilemma of algorithmic morality.



Let us summarise;

In order for X to do Y, Z must do the impossible.

Seeing as Z cannot do the impossible, X cannot do Y.



But before they can become widespread, carmakers must solve an impossible ethical dilemma of algorithmic morality.

This means that the cars will be driven by AI or near AI computers. When humans are confronted with an impending accident, do we decide this? " Uh oh, impending accident that I cannot avoid, so how much is a panel and paint repair, cost of those fancy LED lights, does the other car have full insurance, I am younger than that driver, so I might sacrifice his injuries for mine, I'll only cause minor injuries, so I might prefer to protect my 2 year old Aston Martin first."  

Ok, a bit of exaggeration there, but when we are in that situation it all happens quickly, and all we do is avoid and stop as much as we can. That's all a CPU will do. We don't make ethical decisions when a car comes round an open road bend on our side of the road, or if another car runs a red light. We just apply basic rules of physics to avoid a crash if we can and stop as quickly as we can.
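
To put rough numbers on "stop as quickly as we can", here is a back-of-the-envelope Python sketch (the speed, reaction times and deceleration below are assumed round figures, not any real car's numbers):

def can_stop_in_time(speed_kmh, obstacle_m, reaction_s, decel_ms2=7.0):
    # stopping distance = distance covered during the reaction + braking distance v^2 / (2a)
    v = speed_kmh / 3.6                                    # convert km/h to m/s
    stopping_m = v * reaction_s + v * v / (2 * decel_ms2)
    return stopping_m <= obstacle_m

# 100 km/h with an obstacle 70 m ahead:
print(can_stop_in_time(100, 70, reaction_s=1.5))    # False - a typical human reaction is too slow
print(can_stop_in_time(100, 70, reaction_s=0.05))   # True  - a computer reacting in ~50 ms stops in time

The whole "decision" is that inequality; there is no ethics module in it.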

483 posts

Ultimate Geek

Trusted

  # 1423089 8-Nov-2015 11:23
Send private message

tdgeek: We just apply basic rules of physics to avoid a crash if we can and stop as quickly as we can.


We don't even do that, hence why people flip their cars in ditches on country roads trying to avoid rabbits. However the car still dutifully applies the basic rules of physics, hence the unintended end-state.


18103 posts

Uber Geek

Trusted

  # 1423091 8-Nov-2015 11:33
Send private message

roobarb:
tdgeek: We just apply basic rules of physics to avoid a crash if we can and stop as quickly as we can.


We don't even do that, hence why people flip their cars in ditches on country roads trying to avoid rabbits. However the car still dutifully applies the basic rules of physics, hence the unintended end-state.


Very true: the genuine human response happens to be the wrong one. A CPU control system won't make those human-based errors.

483 posts

Ultimate Geek

Trusted

  # 1424161 10-Nov-2015 09:45
Send private message

jarledb: Interesting about Tesla vs Google self driving cars on Quora


Very interesting; curiously, there is no mention of the Three Laws of Robotics, ethics, or calculating the impact of a crash.

3344 posts

Uber Geek

Trusted
Vocus

  # 1424202 10-Nov-2015 10:48
2 people support this post
Send private message

I'm just going to leave this here

http://existentialcomics.com/comic/106


Lock him up!
10682 posts

Uber Geek

Lifetime subscriber

  # 1424273 10-Nov-2015 11:46
One person supports this post
Send private message

ubergeeknz: I'm just going to leave this here

http://existentialcomics.com/comic/106



You win. 42 plus ones.





I don't think there is ever a bad time to talk about how absurd war is, how old men make decisions and young people die. - George Clooney
 


1892 posts

Uber Geek


  # 1424481 10-Nov-2015 15:58
Send private message

I personally can't wait for self driving cars.  Once they are mainstream and affordable, all Aucklanders will have them.  Which means I'll get to drive around Auckland without having to worry about a crazy Aucklander that can't drive, because they won't be driving.

How cool would that be!

Semi-sarcasm aside, I feel that once self driving cars are actually mainstream and affordable, and people know how to behave around them, accidents will become a rarity and the computer won't need to make ethical decisions in the long run. Ethics aside, all the computer has to do is be better than a human behind the wheel, not necessarily perfect.

To decide whether a computer would be better than a human, we have to put the computer in a situation where a human does not perform well.

A roadkill accident resulting in death (say, a horse in the middle of the road on a foggy morning) is not necessarily the driver's fault. But it could almost certainly be prevented by a machine that can apply the brakes within 3 milliseconds rather than after a human's normal reaction time, and that is before considering that a self driving car would almost certainly detect the horse from a safe distance with radar, mitigating the limitations of the human eye.
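
To put the reaction-time point in perspective, a bit of arithmetic with assumed round numbers:

v = 100 / 3.6          # 100 km/h is about 27.8 m/s
print(v * 1.5)         # ~41.7 m travelled during a typical ~1.5 s human reaction
print(v * 0.003)       # ~0.08 m travelled during a 3 ms machine response

Those forty-odd metres of extra margin can easily be the difference between stopping short of the horse and hitting it.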

Some might argue that most accidents were preventable, including the horse scenario, when following the road rules. I've never been involved in an accident, while everyone else in my family has been in several. Observing the level of care and attention my own family affords to driving suggests to me that a car that drives itself (regardless of ethics) would deal with an impending accident better in almost all cases than an inattentive, distracted, incapacitated or irresponsible human.

With that said, what happens in the event that the software or hardware fails and the only occupant is in the passenger's seat? How will insurance cope with having two drivers claiming not to be at fault? How would courts decide whether the car or the driver committed homicide or was guilty of a driving infraction if there is nobody at the controls?

--

TL;DR: Aucklanders suck at driving and humankind would be better off if they all had self driving cars. We should be worried about hardware and software faults causing accidents, rather than trying to find a way to build an ethics calculator.





Sometimes what you don't get is a blessing in disguise!
