For 3 years now, I have stated that self-driving cars can only be allowed on the road when human drivers are NOT allowed on the road. In other words, they won't mix well. The unpredictable driving behavior of humans will be the number one cause of accidents involving self-driving cars. When was the last time you saw a human driver follow ALL of the road signs and vehicle codes? (It's as rare as a Bigfoot sighting here in California.)
Also, you must take into account our struggling transportation maintenance system. CalTrans is the department that takes care of California's roads, and its backlog for road maintenance is miles long (172,138.66 miles to be precise, per 2010 data; PDF, page 10, if you are curious or have insomnia). With that in play, think about how many roads do not have fully adequate signage, markings, posted speed limits, etc.
City traffic areas may be better, since they are a much more controlled environment from a logistics perspective, but human drivers in cities are the worst. Cab drivers alone generate a risk factor that negates any chance for self-driving cars to be truly safe.
This eBay auction is relevant to this discussion, regarding the state of the art of self-driving vehicles: a smashed-in SICK laser scanner off of an automated forklift system.
For Parts or Repair: http://www.ebay.com/itm/Sick-S30A-6011DA-Safety-Laser-Scanner-1019600-For-Parts-or-Repair-/201149726173?pt=LH_DefaultDomain_0&hash=item2ed5753ddd
Could be quite the smoking deal if someone wants to gamble on fixing it. If this self-driving vehicle smashed itself up under controlled conditions, we might not be ready to hand the car keys over to Joe-Bot Johnny Cab just yet.
I thought it was pretty common knowledge that the Hindenburg's problem wasn't the hydrogen but the flammable fabric covering. Hydrogen is a much better and cheaper lifting gas than helium, and we have an inexhaustible supply of it.
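To put rough numbers on the lifting-gas comparison (a quick back-of-the-envelope in Python, using the standard gas densities at 0 °C and 1 atm):

```python
# Densities at 0 degrees C and 1 atm, in kg per cubic meter (standard figures).
AIR, HYDROGEN, HELIUM = 1.293, 0.0899, 0.1785

# Net lift per cubic meter is just the density difference with air.
lift_h2 = AIR - HYDROGEN   # ~1.203 kg/m^3
lift_he = AIR - HELIUM     # ~1.115 kg/m^3

print(f"hydrogen: {lift_h2:.3f} kg/m^3, helium: {lift_he:.3f} kg/m^3")
print(f"hydrogen lifts about {100 * (lift_h2 / lift_he - 1):.0f}% more")  # ~8%
```

So the lift advantage per cubic meter is only about 8%; the real wins are cost and supply, since hydrogen can be electrolyzed from water while helium has to be captured from natural-gas wells.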
This is the Dymaxion effect at work. It's been said that it wasn't the car itself that primarily caused the fatal accident that ultimately derailed the project (though the car's design was likely a contributing factor). The point is, once such a "disaster" occurs, it kills the efforts of those who follow. I always cringe when I hear about security robots running loose in a mall. One minor misstep, like a robot falling over and killing or seriously injuring a child, will ruin it for anyone else who has a better idea of how to do it.
Though not to diminish the human cost of the Hindenburg: as "disasters" go, fewer than half of the people on board died. Watching the newsreels of it, that seems impossible.
It's why I'm always a vocal dissident when it comes to the use of robotics in the midst of the general population. The geniuses developing these technologies are far too socially impaired (and I mean this lovingly) to understand the ramifications of their work when deployed out in public. We simply do not have the technology for it yet, though we do have the lawyers for it.
One may also want to consider what happens when a car driven by a HUMAN causes an accident involving a driverless car...one in which the passengers of the driverless car are killed.
You're looking at this wrong. If the autonomous car isn't certain it can proceed without hitting something, why should it ever move? Mostly I'm describing the testing scheme any autonomous car would have to pass for me to consider it safe. Basically, it should fail as safely as it can when one or more sensors are disabled at the worst possible time. It should be able to deal with random stuff appearing in front of it. It should correctly deal with a perspective-corrected person painted on the road. (And any other evil test you can think of that the best drivers on the road could also pass.)
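To make that concrete, here is a toy version of the sensor-failure test in Python. Everything in it is invented for illustration (a 1-D car, one range sensor, made-up braking numbers); the pass/fail criterion is the part I actually mean:

```python
import random

# Toy 1-D test: a car approaches a stopped obstacle, and its single range
# sensor dies at a random moment.  Every name and number here is invented;
# the point is the pass/fail criterion, not the physics.

def planner(range_reading, speed):
    """Fail-safe rule: no reading means brake hard; otherwise keep a stopping margin."""
    if range_reading is None:                        # sensor went dark: refuse to proceed
        return -8.0                                  # hard braking, m/s^2
    if range_reading < speed * speed / (2 * 6.0) + 10.0:
        return -6.0                                  # can't guarantee the margin: brake
    return 1.0                                       # road clear: gentle acceleration

def run_trial(fail_at):
    pos, speed, obstacle = 0.0, 20.0, 300.0          # meters, m/s, meters
    t, dt = 0.0, 0.1
    while t < 30.0:
        reading = None if t >= fail_at else obstacle - pos
        speed = max(0.0, speed + planner(reading, speed) * dt)
        pos += speed * dt
        assert pos < obstacle, f"collision (sensor died at t={fail_at:.1f}s)"
        t += dt
    assert speed == 0.0, "still moving while blind -- not fail-safe"

for _ in range(100):
    run_trial(fail_at=random.uniform(0.0, 15.0))
print("all trials ended in a controlled stop")
```

A real test suite would do the same thing across every sensor combination and scenario, but the criterion stays the same: lose a sensor at any moment and the car must end at rest, never in a collision.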
The insurance companies have tons of data on this. I'd expect any autonomous car would incorporate a lot of the crash safety data from the insurance companies into its crash mitigation decision tree.
And regarding your example of a car with kids whose only choice is to hit an old lady or a lamp post: the car should hit the lamp post. The car with kids has its passengers surrounded by several tons of ablative armor, so the likely injury is bruises, while the old lady is unarmored, so the likely injury is anything from broken bones to death. I would further say that, given the slow movement of most "old ladies", any autonomous car that couldn't anticipate and avoid said "old lady" is a failure and should be recalled as unsafe.
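A toy sketch of what that harm-minimizing choice might look like, with invented severity scores standing in for the insurers' real injury statistics:

```python
# Invented expected-injury scores (higher is worse), standing in for the kind
# of crash data insurers actually hold.  Keys are "what the car hits"; the
# scores account for who gets hurt and how protected they are.
EXPECTED_HARM = {
    "lamp post": 1.0,    # occupants belted inside tons of crumple zone: bruises
    "parked car": 2.0,
    "pedestrian": 50.0,  # unarmored human: broken bones up to death
}

def least_harmful(options):
    """Among unavoidable impacts, choose the one with the lowest expected harm."""
    return min(options, key=EXPECTED_HARM.__getitem__)

print(least_harmful(["lamp post", "pedestrian"]))  # -> lamp post
```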
I've done some driving downtown in various US cities (Chicago, St. Paul/Minneapolis, Madison, Milwaukee, etc.). Just observing what I can and avoiding anything that could hit me has served me well, with no accidents in ~15 years of driving. As far as I know, the current autonomous cars can do that right now.
Back on topic, I'd say that the owner of an autonomous car should be liable for any accident unless a clear manufacturing fault can be shown. Of course, if the car's auto-drive has been tampered with, then whoever tampered with it should be held liable (with potential criminal charges for tampering with a deadly weapon).
Maybe the question is..."Is society ready for it?"
How are they going to make that vehicle-to-vehicle communication secure? The guys who hang out at DEFCON are going to have a lot of fun with this!
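Even the bare minimum is instructive. Here is a sketch of authenticated, replay-resistant V2V messages using Python's standard hmac module; the shared key is the giveaway that this is a toy, since a real deployment would need per-vehicle certificates, revocation, and much more, which is exactly where the DEFCON crowd will go digging:

```python
import hashlib
import hmac
import time

SHARED_KEY = b"demo-only-key"   # toy assumption; real V2V needs per-vehicle certs

def sign(payload: bytes) -> bytes:
    """Prefix a timestamp (limits replay) and append an HMAC tag (stops forgery)."""
    msg = str(time.time()).encode() + b"|" + payload
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest().encode()
    return msg + b"|" + tag

def verify(wire: bytes, max_age: float = 0.5):
    """Return the payload if the tag checks out and the message is fresh, else None."""
    msg, _, tag = wire.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(expected, tag):
        return None                              # forged or corrupted in transit
    timestamp, _, payload = msg.partition(b"|")
    if time.time() - float(timestamp) > max_age:
        return None                              # too old: likely a replay
    return payload

wire = sign(b"hard braking ahead, lane 2")
print(verify(wire))                                # b'hard braking ahead, lane 2'
print(verify(wire.replace(b"lane 2", b"lane 3")))  # None: tampered message rejected
```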
http://news.yahoo.com/govt-wants-require-cars-talk-160548877.html
They claim this could save 1,000 traffic fatalities per year. I'm thinking implementing it will kill more people than that. The money that has to be piled into fitting every car with this otherwise useless feature has to come from somewhere, which ultimately detracts from the total wealth available to society, with a corresponding hit to spending on things like nutritious food, medicine, healthcare, and education, and a consequent increase in death rates from other causes.
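Back-of-the-envelope, with every number below invented purely for illustration (the real rulemaking would fight over all three inputs):

```python
# All inputs here are assumptions for illustration, not real figures.
cost_per_car = 350.0             # assumed V2V hardware + integration cost, dollars
new_cars_per_year = 15_000_000   # rough US annual light-vehicle sales
claimed_lives_saved = 1_000      # the figure being claimed

annual_cost = cost_per_car * new_cars_per_year   # $5.25 billion per year

# One common way to price the trade-off: the "value of a statistical life"
# used in US regulatory analysis, roughly $9 million.
value_per_statistical_life = 9_000_000
lives_equivalent = annual_cost / value_per_statistical_life

print(f"annual cost: ${annual_cost / 1e9:.2f}B ~= {lives_equivalent:.0f} statistical lives")
print(f"claimed savings: {claimed_lives_saved} lives/year")
# With these inputs it comes out ~583 vs 1,000; double the per-car cost and
# the conclusion flips.  The sign depends entirely on inputs nobody has
# pinned down, which is the point.
```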
Logic and common sense (or the lack thereof) never stop the politicians. When I lived in Florida 30 years ago, some senator or congressman introduced a bill to require automakers to add rain sensors to Florida cars (ha) which would automatically turn the windshield wipers on. Further, the bill said it "shouldn't cost more than a dollar".
Hot topic in the insurance sector...
https://www.google.com/search?q=governments+driverless+cars+insurance+companies&hl=en&gbv=2&oq=&gs_l=