Geekzone: technology news, blogs, forums
2523 posts

Uber Geek
+1 received by user: 970

Lifetime subscriber

  Reply # 1980867 21-Mar-2018 09:29
One person supports this post

E3xtc:

tdgeek:

bmt:

One single death after how many miles of driverless operation? As compared to human driving?

I think this one death is very high.

NZ is tracking toward 400 road deaths per year. We have 2 million cars, and for that number in use over 365 days, 400 deaths works out to roughly one per day across 2 million cars.

How many driverless cars are being tested in the US? No idea, but as it's only a handful of locations, it's likely to be dozens, not millions.

We need to know more about this death. Did she step suddenly into its path, or did the car have ample warning to brake and divert? Before this testing hit the public roading system, the cars must have been tested against all manner of avoidable and unavoidable objects, so I want to know whether this was avoidable or at least reducible. If it was, why didn't the car react? And how much responsibility does the car have, and how much its human caretaker driver?

Yup, I agree, it's too high already at just one. I wonder, comparing my actual mileage against that driverless car, would I be worse or better? I haven't managed to kill anyone (or even crash) in about 30 years of driving. :\

It had to happen eventually at some stage, but it will be interesting to hear the details of the situation and the likely remedy to pave the way to a solution.

1 is not a good statistical sample to base anything on.
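For what it's worth, the quoted back-of-envelope rate does check out. A quick sanity check, using only the 400-deaths and 2-million-cars figures from the quoted post (the poster's figures, not verified statistics):

```python
# Sanity check of the NZ road-toll arithmetic quoted above.
annual_deaths = 400      # projected NZ road deaths per year (poster's figure)
cars = 2_000_000         # cars in use (poster's figure)

deaths_per_day = annual_deaths / 365          # ~1.1 deaths per day
cars_per_fatality = cars / annual_deaths      # 5,000 cars per fatality per year

print(f"{deaths_per_day:.2f} deaths per day")
print(f"one fatality per {cars_per_fatality:,.0f} cars per year")
```

So "more or less 1 per day" is right; put another way, one car in 5,000 is involved in a fatality in a given year.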

 

 




18762 posts

Uber Geek
+1 received by user: 5383

Trusted
Lifetime subscriber

  Reply # 1980870 21-Mar-2018 09:33
One person supports this post

cadman:

You can't stop a suicidal pedestrian from using your vehicle to their own end. The laws of physics still apply to driverless vehicles.

Is there some evidence to support the idea that she tried to kill herself?

I'd suggest that is speculation at best. If true, then yes, I'd likely agree, but I think it's much too early to be drawing that conclusion.
 


1718 posts

Uber Geek
+1 received by user: 838


  Reply # 1980912 21-Mar-2018 10:37
One person supports this post

networkn:

... it's much too early to be drawing that conclusion.

It's much too early to be drawing any conclusion.

Most of the trouble in the world is caused by people wanting to be important. (T.S. Eliot)


1530 posts

Uber Geek
+1 received by user: 790


  Reply # 1980927 21-Mar-2018 10:59

networkn:

Is there some evidence to support the idea that she tried to kill herself?

I'd suggest that is speculation at best. If true, then yes, I'd likely agree, but I think it's much too early to be drawing that conclusion.

Whether or not she deliberately attempted suicide, if she effectively killed herself by stepping into moving traffic, then there will definitely be instances where no driver or AI can prevent a fatal collision. It could be argued that AI has a better chance of avoiding one, because it can maintain constant situational awareness - for instance, it may be able to swerve instantly into an adjacent lane without having to check first.

I think it would be very unfortunate if self-driving technology were held back by isolated instances of humans doing suicidally stupid things. Quite correct that it's speculation at this stage, though.


13908 posts

Uber Geek
+1 received by user: 2530

Trusted

  Reply # 1980933 21-Mar-2018 11:06

Another factor is conflicts of interest.

The self-driving software provider won't want to be affected.

Same for the self-driving industry startups.

Same for the local or state government (I assume) that authorised testing in the wild.

It's easier to blame the woman for causing an unavoidable accident. But as stated, it's speculation until facts are released - if they are released, and if what is released is factual and not diluted for the above reasons.




18762 posts

Uber Geek
+1 received by user: 5383

Trusted
Lifetime subscriber

  Reply # 1980949 21-Mar-2018 11:51

Having spoken to a number of pilots over the years, they all believe that autopilot was industry-changing and most excellent. I have yet to speak to one who would prefer autopilot (or consider it safer) to be handling a plane in very unusual circumstances.

The issue I have with driverless cars, and automation of this nature in general, is that people become accustomed to their use and probably then fail to provide due care when behind the wheel. If you have to be vigilant enough to take over in an instant, how are you better off than just driving? I don't find driving particularly arduous.

Example: a bridge collapse or aerial obstruction. I, as a human, can a) see this and b) make a decision about how best to evade it, even if that means crashing into another car when it's unlikely to kill either of us, whereas without the evasion I am certainly dead. I am not certain where it would sit morally to choose to injure another to avoid your own death.

We have been test-driving new and recent cars over the past six weeks. Some of the driver aids are great. Front collision warning and 360-degree cameras are particularly fantastic, and VW has the best camera system I have ever seen. Its driving assist was superb. I was surprised how small a gap the autopilot parking was able to fit into.


2523 posts

Uber Geek
+1 received by user: 970

Lifetime subscriber

  Reply # 1980952 21-Mar-2018 11:59

networkn:

The issue I have with driverless cars, and automation of this nature in general, is that people become accustomed to their use and probably fail to provide due care when behind the wheel.

Driverless cars don't even have a wheel or pedals.

Self-driving cars are already on our roads - e.g. Tesla.


Glurp
8725 posts

Uber Geek
+1 received by user: 4011

Subscriber

  Reply # 1980960 21-Mar-2018 12:16

networkn:

Example: a bridge collapse or aerial obstruction. I, as a human, can a) see this and b) make a decision about how best to evade it, even if that means crashing into another car when it's unlikely to kill either of us, whereas without the evasion I am certainly dead.

This is an excellent point. A human could have seen that pedestrian bridge in the USA coming down and driven into another car to avoid going under it. Would AI be able to make that kind of judgement, or even notice the falling bridge as an issue (i.e., would it be looking out for obstacles from above)?



I reject your reality and substitute my own. - Adam Savage
 


2523 posts

Uber Geek
+1 received by user: 970

Lifetime subscriber

  Reply # 1980962 21-Mar-2018 12:20

So they would have shunted some other car into it?

The AI will obviously stop the car if there's a bloody great bridge blocking the road. People driving their cars failed to see it coming and avoid being squashed, so this point seems a bit nonsensical.




18762 posts

Uber Geek
+1 received by user: 5383

Trusted
Lifetime subscriber

  Reply # 1980968 21-Mar-2018 12:36

Rikkitic:

networkn:

Example: a bridge collapse or aerial obstruction. I, as a human, can a) see this and b) make a decision about how best to evade it, even if that means crashing into another car when it's unlikely to kill either of us, whereas without the evasion I am certainly dead.

This is an excellent point. A human could have seen that pedestrian bridge in the USA coming down and driven into another car to avoid going under it. Would AI be able to make that kind of judgement, or even notice the falling bridge as an issue (i.e., would it be looking out for obstacles from above)?

I don't believe that is possible at this stage; happy to be corrected.

There are a lot of ways automation is better than a human. If a computer and a person see an object in front of them at the exact same instant, I'd expect the computer to apply the brakes faster than the human 99.999999% of the time, which is safer. If the same scenario occurs but the safest response is evasion that sets off another chain of events, I'll put my money on the human.
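The reaction-time advantage can be made concrete with a rough stopping-distance sketch. The figures below are illustrative assumptions, not measured data: roughly 1.5 s human reaction time versus 0.1 s for a computer, at 50 km/h with about 7 m/s² of braking deceleration:

```python
# Rough stopping-distance comparison at 50 km/h.
# Assumed figures: human reaction ~1.5 s, computer ~0.1 s, braking at 7 m/s^2.
v = 50 / 3.6    # 50 km/h in m/s (~13.9 m/s)
decel = 7.0     # m/s^2, roughly dry-road hard braking

def stopping_distance(reaction_s: float) -> float:
    """Distance covered during the reaction time plus braking distance v^2 / (2a)."""
    return v * reaction_s + v ** 2 / (2 * decel)

human = stopping_distance(1.5)     # ~34.6 m
computer = stopping_distance(0.1)  # ~15.2 m
print(f"human: {human:.1f} m, computer: {computer:.1f} m")
```

Under those assumptions the car that reacts in 0.1 s stops in less than half the distance, which is the sense in which faster braking is simply safer; the harder question in the thread is evasion, not braking.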

 

In simple scenarios where all parameters are known and constant, computers will beat a human to a resolution; in other scenarios it would depend, but humans would win.

If a robot were hunting me, my first objective would be to make myself look as different as possible and move to terrain that is as difficult to navigate as possible.

 

 




18762 posts

Uber Geek
+1 received by user: 5383

Trusted
Lifetime subscriber

  Reply # 1980977 21-Mar-2018 12:38

kryptonjohn:

So they would have shunted some other car into it?

The AI will obviously stop the car if there's a bloody great bridge blocking the road. People driving their cars failed to see it coming and avoid being squashed, so this point seems a bit nonsensical.

Of course it's nonsensical when you don't assess the argument properly!

Read what I wrote (properly) and consider the other scenarios.


787 posts

Ultimate Geek
+1 received by user: 478


  Reply # 1980986 21-Mar-2018 13:19

Talk of humans making better assessments of evasive-action options is interesting. A lot of avoidable accidents occur because people take the wrong evasive action, or none at all. Many advanced driving courses aim to retrain people to take better evasive action in response to impending crashes (I did one that trained drivers to make better use of ABS to help avoid collisions).

When it comes to programming computers to do this, there is obviously going to be a priority list for when a collision is unavoidable - e.g. run over the dog to spare the human, run over the adult to spare the child, change lanes and collide with another car to avoid the pedestrian, etc.

Can you imagine the wrangling and protests people will have over the exact order of priority, and the lawsuits when an innocent party gets hurt as collateral damage in a crash? I can see the need to have the order of the priority list set in law to avoid endless litigation.

There is going to come a time when we forget how badly humans drove, and insurance companies sue car companies because their autopilots are statistically worse than others', or because the priority list "caused" the loss.

edit: clarifying a sentence.
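To be clear, no manufacturer publishes such a ranking; but purely as an illustration of what the post describes, a "priority list" would reduce to something like an ordering over the parties a collision could involve - which is exactly why the order would be so contested:

```python
# Hypothetical illustration only: a crude collision-target priority list of the
# kind discussed above. The categories and ordering are invented for the
# example; no real autonomous-driving system is known to work this way.
HARM_RANK = {       # lower number = more strongly protected
    "child": 0,
    "adult": 1,
    "dog": 2,
    "property": 3,
}

def least_bad_option(options: list[str]) -> str:
    """When every option involves a collision, pick the least-protected target."""
    return max(options, key=lambda target: HARM_RANK[target])

print(least_bad_option(["adult", "dog"]))       # -> dog
print(least_bad_option(["child", "property"]))  # -> property
```

Every line of a table like HARM_RANK is a moral judgement someone would litigate, which is the point about wanting the order fixed in law.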


2523 posts

Uber Geek
+1 received by user: 970

Lifetime subscriber

  Reply # 1980987 21-Mar-2018 13:20

networkn:

kryptonjohn:

So they would have shunted some other car into it?

The AI will obviously stop the car if there's a bloody great bridge blocking the road. People driving their cars failed to see it coming and avoid being squashed, so this point seems a bit nonsensical.

Of course it's nonsensical when you don't assess the argument properly!

Read what I wrote (properly) and consider the other scenarios.

I was replying to Rikkitic.

It remains that human drivers were unable to avoid the falling bridge by shunting another car out of the way. What matters is that the driverless car has to be no worse than a human driver.

I'm not following your point regarding autopilot landings and unusual circumstances, but a Cat IIIA/B/C approach and landing can only be done by an autopilot - a human isn't good enough. The only airline pilot I know (12,000 h, B777) has no doubt that Autoland is better than he is.

 

 


2772 posts

Uber Geek
+1 received by user: 555


  Reply # 1980994 21-Mar-2018 13:49
One person supports this post

Rikkitic:

networkn:

Example: a bridge collapse or aerial obstruction. I, as a human, can a) see this and b) make a decision about how best to evade it, even if that means crashing into another car when it's unlikely to kill either of us, whereas without the evasion I am certainly dead.

This is an excellent point. A human could have seen that pedestrian bridge in the USA coming down and driven into another car to avoid going under it. Would AI be able to make that kind of judgement, or even notice the falling bridge as an issue (i.e., would it be looking out for obstacles from above)?

I am not that sure that the 'humans' did a very good job of avoiding it.

AI could be made to monitor for that - are you sure the systems in place don't do that already?

The difference with AI over a person is that it is always awake and it is consistent.

You are assuming a human being will always make the right decision every time - clearly the road toll proves this not to be so.

The article I read about the Uber crash indicates that the person in charge of the vehicle said the lady just stepped out unexpectedly. Neither a person nor a computer can prevent that, if this is the case.

The legal system likes to place blame and have someone to blame - there is a problem with that when dealing with autonomous vehicles.





Nothing is impossible for the man who doesn't have to do it himself - A. H. Weiler



18762 posts

Uber Geek
+1 received by user: 5383

Trusted
Lifetime subscriber

  Reply # 1981001 21-Mar-2018 13:59

 

AI could be made to monitor for that - are you sure the systems in place don't do that already?

I am not sure systems aren't in place, but I would doubt it - not for all the situations that exist.

The difference with AI over a person is that it is always awake and it is consistent.

Consistent and awake within the parameters it understands.

You are assuming a human being will always make the right decision every time - clearly the road toll proves this not to be so.

No, I am assuming that in a COMPLEX and unusual situation there is a better chance a human would consider and try to execute moves a computer may not.

The article I read about the Uber crash indicates that the person in charge of the vehicle said the lady just stepped out unexpectedly. Neither a person nor a computer can prevent that, if this is the case.

It depends on lots of factors. Did the car brake? Brake and swerve? Consider hitting the median barrier as well as braking to slow the car down more? So many variables. I would also question the possibility of the human witness lying or being mistaken.

The legal system likes to place blame and have someone to blame - there is a problem with that when dealing with autonomous vehicles.

Agreed! Humans deal better with grief when there is someone to blame.

