
If a Self-Driving Car Gets in an Accident, Who—or What—Is Liable?


Comments

  • TtailspinTtailspin Posts: 1,326
    edited 2014-08-16 22:16
    Who has the deepest pockets?
  • LoopyBytelooseLoopyByteloose Posts: 12,537
    edited 2014-08-17 05:27
    The owner, of course.
    At least in California, it doesn't matter who drives your car. If something happens ... you and your insurance company (if you have one) are considered available for liability payouts.

    We don't have a Positronic Brain for robots quite yet.
  • Mark_TMark_T Posts: 1,981
    edited 2014-08-17 05:49
    Give it a siren and blue flashing lights and then it's clear who has to take avoiding action!

    More seriously, the insurance companies will get to determine this, won't they? The
    plentiful data-log from the vehicle will help establish the facts more reliably than human
    witnesses, so I suspect the rules will develop as experience is accrued.

    The way air-safety is managed is to learn from every piece of experience and develop
    and refine rules and regs. Such an approach is likely to work here too; perhaps a specialist
    agency for autonomous-vehicle accident and incident investigation is needed?
  • Heater.Heater. Posts: 21,230
    edited 2014-08-17 05:59
    Seems to me that if you put a massive high speed projectile on the public highway and it runs amok killing half a dozen school children waiting at a bus stop whilst you are busy texting and playing Angry Birds you are going to have a really hard time claiming it was not your fault.

    Today we can shift the responsibility to the manufacturers if your brakes fail or the throttle gets jammed. That is, of course, if you are licensed to drive, are sober, in control and at least trying to drive responsibly.

    One day perhaps self-driving car control software will be designed, written and tested with the same rigour as fly-by-wire aircraft software: with traceability and approvals, certified to mostly work most of the time. Then you can close the curtains on the windows of your vehicle and sleep soundly knowing that when your vehicle does get confused and mow down a crocodile of infants it will be Google's fault (or whoever) and not yours.
  • John AbshierJohn Abshier Posts: 1,116
    edited 2014-08-17 07:41
    Let's see, sue me and take everything I have and get less than a million dollars. Sue GM, Google, Intel, and the company that made the wire in the car and get multiple millions of dollars. Where will the lawyer go?

    John Abshier
  • LoopyBytelooseLoopyByteloose Posts: 12,537
    edited 2014-08-17 08:05
    Hmmm. It seems that the insurance companies want everyone to be insured and accidents settled by no-fault insurance settlements.

    On the other hand, people want fault determined and punitive damages.

    So I suppose the question of 'Who is liable?' depends on who is trying to get rich and how.

    Until the 1990s, punitive damages were tax free in the USA, so a big settlement was thought to create instant wealth. But then, one still had to pay a 1/3 contingency fee to their lawyers. And the lawyers might heap a pile of direct expenses atop that.

    ++++++++++++
    Frankly, I prefer the good old days of 'drive defensively'. If you have the car under your own control, you can always pull off the road when conditions grow absurd.
  • ValeTValeT Posts: 308
    edited 2014-08-17 09:03
    Realistically, if we get automated self-driving cars, they would not be responsible for accidents. If a self-driving car is on the road, it will always follow the laws of the road so if it gets into an accident, it will never be at fault. Do you see my point?
  • bill190bill190 Posts: 769
    edited 2014-08-17 09:09
    I think there have been cases where a car was stolen and then crashed into something, and the owner of the car and his insurance company had to pay for the damage.

    So perhaps a better question would be what happens if a self-driving car violates the law (speeding, running a red light, vehicular manslaughter, etc.).

    And in court, there is the sure fire defense of "I was elsewhere when it happened" (an alibi). "I was at work when my car ran over that guy in the wheelchair, it is not my fault, I was not there! I have witnesses that I was at work!"
  • Heater.Heater. Posts: 21,230
    edited 2014-08-17 09:14
    ValeT,
    Do you see my point?
    No.

    Such machines may well be designed to follow the "laws of the road". Such machines will have faults in design or manufacture or due to damage or old age or whatever.

    They may well be confused by some circumstances that have never been anticipated during their design. Would not be the first time a computer has behaved in bad ways given unexpected input.

    So really, when things go wrong who carries the can? The operator, the supplier, the designer. .....

    And what about odd cases, for example where a car spins off the road slamming into something massive and killing everyone inside whilst trying to avoid hitting that kid on a bike that just shot out in front of the car from nowhere?
  • potatoheadpotatohead Posts: 10,261
    edited 2014-08-17 09:39
    It's going to be on the person taking a risk for some reward, and in this case, it's the car owner, who will need to secure some insurance, and it's the creator of the car, who needs to do the same.

    The risk is new tech, and the reward is freeing up time and attention, and potentially more consistent driving with better outcomes.

    The creator of the tech has the traditional engineering liabilities and must invest and insure to manage those technical risks. Once that is done, the owner of the car must manage their risk associated with whatever intent they may have. Could be getting to work while prepping for work. Could be lesser, such as just wanting to let the car manage more of the task so they can observe at a higher level and be safer too.

    When bad things happen, those involved will be judged on intent. That's how law sorts these kinds of things out.

    Say you are driving, and a kid pops out, and the crash results in the death of the kid.

    Intent determines the legal dynamics.

    In the most basic sense, if your intent is good, you are being careful, and it just all comes together badly, then the legalities will reflect that. Perhaps warnings are put up, parents educated, etc... Insurers pay what they need to, and it's not criminal, just really unfortunate. You know your intent was good, so does the law and everyone involved. Emotionally, it's horrible, but rationally, it's understood to be unfortunate, not nefarious or criminal.

    If your intent is not good, say you are texting while driving, with everything else the same, it gets interesting. Say the physics of it means the kid ends up dead no matter what. Just a bad combination of world events. The intent to text while driving would incur more liability and potential criminality even though the events are exactly the same. Now being distracted, the intent brings with it criminal elements, in addition to the ones above.

    Should your intent be to get there quicker, speeding essentially, then it's deffo criminal, and there is the difficulty of understanding whether or not it was the speed, or just the combination of happenings that lead to a dead kid. Because the intent was to be less safe, it's gonna be criminal.

    Now add the self-driving car, and parsing intent should make clear what, say, Google needs to be doing and could be liable for, and what the driver intent needs to be, etc...

    Personally, I think the self-driving car can make these dynamics better for all involved. A computer, sufficiently advanced, may well respond to the bike much quicker than the person could, even with focused, good intent in place at the time. We may well invoke technology in various ways too. An airbag outside the car, for example? Or airbags inside, with the car able to do something serious, like engage the road surface with something that will stop the car at distances not possible without a computer helping to protect the driver through such an extreme change in movement. (where are those inertial dampeners we see in sci-fi? Oh yeah, airbags. Got it. )

    It could also be that big data may well help us with the cars too. It's entirely possible for Google to know a neighborhood is filled with kids as opposed to one that's not, and the machine could factor probabilities into its best-case navigation, and this is something humans do all the time. Going through some neighborhoods totally warrants a very moderate speed, right? There is one near me where I let the car idle through, 10MPH or something, because I know. And I avoid that area, just because the kids are active and who wants to mess any of that up? Easier to invest a little gas and avoid higher-risk areas.

    Self-driving cars could make this computation, just as people can.
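    A minimal sketch of how such a probability-weighted choice might look (all of the speeds and risk numbers below are invented for illustration, not anything a real car uses):

        # Hypothetical sketch: weight route segments by estimated pedestrian risk,
        # the way a human "idles through" a street full of kids.

        SPEED_LIMIT_MPH = 25       # posted limit (assumed)
        CAUTION_SPEED_MPH = 10     # crawl speed for high-risk segments (assumed)

        def segment_cost(length_miles, pedestrian_risk):
            """Cost of one road segment: travel time plus a penalty that grows
            with the estimated chance of a pedestrian conflict (0.0 to 1.0)."""
            speed = CAUTION_SPEED_MPH if pedestrian_risk > 0.5 else SPEED_LIMIT_MPH
            travel_time_hours = length_miles / speed
            risk_penalty = pedestrian_risk * 0.1   # arbitrary weighting (assumed)
            return travel_time_hours + risk_penalty

        def best_route(routes):
            """Pick the route with the lowest combined time-plus-risk cost.
            Each route is a list of (length_miles, pedestrian_risk) segments."""
            return min(routes, key=lambda r: sum(segment_cost(l, p) for l, p in r))

        # A short cut through a kid-heavy neighborhood vs. a longer arterial road.
        short_cut = [(0.5, 0.8), (0.5, 0.7)]
        arterial = [(1.0, 0.1), (0.8, 0.1)]
        print(best_route([short_cut, arterial]))   # prefers the arterial here

    Nothing magic about the weighting; the point is only that a navigation cost can carry the same neighborhood knowledge a careful human driver already uses.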

    Because we know stuff about computers, we worry about this technology. Honestly, because I know stuff about people, I worry about that technology too, and we've lived with those worries, risks and costs just fine, despite people being crappy and highly inconsistent in how they actually handle cars too.

    I predict there will come a time when manually operating a vehicle will be seen as higher risk than having it perform for us. But it will take iterations of the above for us to get there. Hopefully those costs will be low, and our rewards for getting it done maximized.

    Not automating driving at all is status quo, and we depend on people, their intent, skill, etc... to make it work. So far, that works crappy. Lots of bad things happen, and they happen with the best of intent in place too. Frequently, the best of intent isn't there, which makes it all that much worse.

    Semi-automating is the most dangerous scenario to me. This seems like it will maximize the risks, while only adding tepid rewards. "I'm texting because my car is better..." isn't the same as texting with a fully automated car, and the reason why is intent. A manufacturer who seeks to assist the driver needs to quantify risk augmentation on the part of the combination of the driver and car. Traction control, for example, ABS, etc... are all good things that enhance good drivers, but they also enhance poor ones with crappy judgement. That driver, who depends on the tech, gets into a car that isn't augmented, and suddenly presents a much higher risk profile. Not good. The manufacturer can say, "not our problem", etc... and limit their liability while capturing high value for augmented cars. I don't like this much, because it's a dodge, pushing costs onto people.

    Full automation, like Google is attempting, is IMHO, the very best intent. It means there are no "good enough" solutions, only innovation, feedback, more innovation, with the lofty goal of it being better every day no matter what. With this intent, it's always their problem. And the manufacturer is going to have to own driving entirely. That will push costs onto them, and if they are up for doing it, the result will be excellent, with driver and ordinary people risks diminishing as the manufacturer innovates their risks away. Smarter path, and of course being Google, no surprise. I like this a lot.

    IMHO, the intent of the manufacturer will help determine how the specific liabilities play out in real life when things go badly. And people will have to see this play out for a while before the dynamics are better understood.

    Finally, let's hope politics doesn't hose this up. Tesla is getting beat up for making viable, practical electric cars. We all know who is responsible for that, and it's the fuel based car manufacturers, who know they are gonna be disrupted and that means costs and risks for them, which dilute the value of their well established rewards. SpaceX is getting beat up by the same kinds of entities with similar dynamics. (poor Elon, right?)

    With the self-driving car, I bet it will play out in similar ways. Maybe rationality will prevail, maybe it won't. If people don't get educated and we don't make the investments, we won't see the benefits of automated cars. I used to think there weren't any, but given how the Google solution has performed, I'm not there anymore. Let's get this done, and free people to be people and improve the risk and cost exposure cars present to us.
  • ValeTValeT Posts: 308
    edited 2014-08-17 10:03
    Heater,

    I understand what you are saying. To switch gears to a much better idea :), another great way to determine who gets the blame would be to install black boxes in all self-driving cars. These black boxes would be the equivalent of an airplane's black box, just on a car. They would monitor speed, positions of objects around the car, etc. This way, blame could be determined on a case-by-case basis.
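    A rough sketch of what one record from such a black box might hold (the field names are purely illustrative, not a real standard):

        # Illustrative only: one hypothetical record from a car "black box".
        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class BlackBoxRecord:
            timestamp_s: float                       # seconds since trip start
            speed_mps: float                         # vehicle speed, metres per second
            gps: Tuple[float, float]                 # (latitude, longitude)
            steering_deg: float                      # commanded steering angle
            brake_pct: float                         # brake actuator position, 0-100
            nearby_objects: List[Tuple[str, float]] = field(default_factory=list)
                                                     # e.g. ("pedestrian", distance_m)

        log: List[BlackBoxRecord] = []
        log.append(BlackBoxRecord(12.4, 8.9, (37.42, -122.08), -2.0, 0.0,
                                  [("cyclist", 14.5)]))

    Replayed after an accident, a log like this would let investigators reconstruct events on a case-by-case basis, much like an aircraft recorder.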
  • potatoheadpotatohead Posts: 10,261
    edited 2014-08-17 10:10
    @Vale

    Yes. I do see your point, but it's more complex than that.

    People are regulated in these ways: Money and markets, law, physics, norms. Our behavior is influenced by those regulatory forces.

    None of them are absolute, save physics, and even then, our understanding of physics continues to make more possible.

    The law isn't a perfect thing. The laws of the road are structured to maximize the utility of the road and guide our choices to minimize risks. An automatic car would have to follow all the laws of the road, and some of those laws require judgement. Say you are in a high kid-traffic area. Technically, the speed is 25MPH, or maybe 20MPH. However, it may be reasonable and prudent to not drive there at all, or if one does drive there, drive in ways that maximize the safety of everybody involved. This may involve things like changing lanes, creeping instead of moving at speed, and/or other kinds of things.

    The law requires human drivers to make those judgements, but it also shields them based on intent. So if you really try, and it goes badly, your liabilities are smaller than when you didn't try.

    But there are liabilities regardless. Kids, for example, do not comply with laws always, or as well as adults do. It's because they aren't real people yet, still developing and they just do stuff. The law says to not jump into traffic too.

    These things are all balanced so that the cost of using cars is managed. We carry liability insurance, home insurance, and in many nations provide universal health care so that remedies are available to people when it goes badly.

    Because of this, there is always some intent in navigating the road, and that overlays the laws of the road, meaning the creator of an automatic car needs to create something that can participate on that basis, or simply not create it at all, as it's untenable given the dynamics of people, cars, law, physics, money, norms. (no good choices, or promoting more bad choices than good ones kind of thing)

    Here's a good example:

    So the law says the speed is X and you must stay in your lane, etc...

    It's known there are active kids in that neighborhood, and one gets hit. A driver who did nothing to avoid that scenario, complying perfectly with the law would be seen as not demonstrating the right sort of intent. The right intent is to get through without hurting anybody, not mere compliance with the law.

    A driver who drove more slowly, braked hard, tried to navigate away will have much lower liability potentials than the one who was merely compliant. Either way a kid gets hit, but the intent leading up to the kid getting hit is what determines what the liabilities actually are, not mere compliance with whatever law is in place.

    Law doesn't actually prevent anything. Law operates as a remedy, and the remedy takes place after intent has played out. People can, will, do break laws, and their intent in doing so determines a lot. Because we know what the law will do, we can use that as motivation to demonstrate good intent, but we don't have to. Law has no real teeth in prevention.

    Physics is different from law, in that it's the rules of the world. Consistent, to the limits of our understanding anyway. Money? That one actually does prevent things. If we cannot fund some choice, it's denied to us before the act.

    Anyway, mere compliance with the law isn't a shield from liabilities.
  • GordonMcCombGordonMcComb Posts: 3,366
    edited 2014-08-17 11:27
    This isn't about who was at fault for an accident, but merchantability and fitness of the product. It's why you don't see hydrogen filled dirigibles ferrying people across the Atlantic. Been there, done that.

    The maker of a product is directly responsible for its fitness for its intended use, even if disclaimed. If Google markets this car as self-driving, then it is exposing itself to tremendous liability.

    Though there are some municipalities expressing interest in the technology, Google knows it's not yet a practical product for use on public roads. Consider it a clever marketing ploy for a product that will eventually be used only in restricted environments, or -- despite Google's claim to pacifism -- on the battlefield. Don't think for a minute Google bought Boston Dynamics because it likes cuddly little animal robots.

    Finally, Google must be aware of the "Dymaxion effect," where because of a publicized mishap good technology gets a bad reputation right at the start of its life, and is snuffed out for future consideration, regardless of improvements.
  • Too_Many_ToolsToo_Many_Tools Posts: 765
    edited 2014-08-17 11:43
    One might consider how these issues were dealt with decades ago when on road transportation changed from horse drawn to internal combustion engine.

    Many times horse drawn transportation was autonomous in that the horse followed a regular route with the driver doing nothing to guide it.
  • W9GFOW9GFO Posts: 4,010
    edited 2014-08-17 11:54

    Finally, Google must be aware of the "Dymaxion effect," where because of a publicized mishap good technology gets a bad reputation right at the start of its life, and is snuffed out for future consideration, regardless of improvements.

    Like hydrogen filled dirigibles?
  • GordonMcCombGordonMcComb Posts: 3,366
    edited 2014-08-17 12:01
    W9GFO wrote: »
    Like hydrogen filled dirigibles?

    Yes, but those were never a good idea to begin with. The Germans didn't have much choice in the gas, though, as I believe at the time they had no way to make that much helium.
  • W9GFOW9GFO Posts: 4,010
    edited 2014-08-17 12:05
    Yes, but those were never a good idea to begin with. The Germans didn't have much choice in the gas, though, as I believe at the time they had no way to make that much helium.

    I thought it was pretty common knowledge that the Hindenburg's problem wasn't the hydrogen but the flammable fabric covering. Hydrogen is a much better and cheaper lifting gas than helium, and we have an inexhaustible supply of it.
  • Too_Many_ToolsToo_Many_Tools Posts: 765
    edited 2014-08-17 14:28
    Consider that drone technology and driverless cars are moving us closer to flying cars....

    http://en.wikipedia.org/wiki/Flying_car_(aircraft)

    I'll take mine in red....
  • LawsonLawson Posts: 870
    edited 2014-08-17 17:12
    Heater. wrote: »
    Then you can close the curtains on the windows of your vehicle and sleep soundly knowing that when your vehicle does get confused an mow down a crocodile of infants it will be Google's fault (or whoever) and not yours.

    So you're expecting the software to be designed by homicidal maniacs? If something confuses an autonomous car, it should stop. If it can't stop, it should avoid. And if something did such a good job of sneaking up on the car that it couldn't avoid it, I'd expect it to crash in the least dangerous way possible for everyone involved. (Note: most autonomous cars have 360 degree vision. Sneaking up on one will take effort.) Anything less and an autonomous car simply won't be approved for civilian use on public roads.

    AFAIK, one of the biggest challenges with autonomous cars right now is that they're TOO polite. Plop one down in inner-city traffic and it'll calmly wait for a clear lane while everyone else cuts it off. Another challenge is the hand-off between manual and autonomous driving. A hand-off should never happen accidentally or happen during a panic situation. At least initially, I expect an autonomous car will need to be parked to engage or disengage the auto-drive function.
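    A toy sketch of that hand-off rule, assuming a single "parked" interlock (the class and method names are made up for illustration):

        # Toy sketch: auto-drive may only be engaged or disengaged while parked,
        # so a hand-off can never happen accidentally or mid-panic.

        class DriveMode:
            def __init__(self):
                self.parked = True
                self.autonomous = False

            def set_parked(self, parked):
                self.parked = parked

            def request_mode_change(self, autonomous):
                """Grant a manual/auto hand-off only when the car is parked."""
                if not self.parked:
                    return False          # refuse the hand-off while moving
                self.autonomous = autonomous
                return True

        car = DriveMode()
        car.set_parked(False)
        assert car.request_mode_change(True) is False   # moving: request refused
        car.set_parked(True)
        assert car.request_mode_change(True) is True    # parked: hand-off allowed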

    Marty
  • Too_Many_ToolsToo_Many_Tools Posts: 765
    edited 2014-08-17 17:41
    FYI...

    http://news.msn.com/science-technology/look-no-hands-test-driving-a-google-car

    Your thoughts?

    I wonder if insurance premiums will be less for a Google car than one with a human driver?
  • Too_Many_ToolsToo_Many_Tools Posts: 765
    edited 2014-08-17 18:26
    And it would follow that driverless cars would be tracked...

    http://www.citylab.com/commute/2014/08/speed-is-enough-for-car-insurance-companies-to-track-you/375940/

    Thoughts?
  • potatoheadpotatohead Posts: 10,261
    edited 2014-08-17 18:39
    I would think the premium will eventually be cheaper. Once the cars are automatic, why even own one? Just ask for transportation, use it, and then continue on.

    Google, or somebody will buy the policy and sell transportation as a service.
  • ercoerco Posts: 20,256
    edited 2014-08-17 18:53
    Who's liable? The Arduino programmer/developer, obviously. He should have used a Propeller.
  • Too_Many_ToolsToo_Many_Tools Posts: 765
    edited 2014-08-17 23:05
    erco wrote: »
    Who's liable? The Arduino programmer/developer, obviously. He should have used a Propeller.

    Actually, I was waiting to hear a comment like this... will a programmer be criminally liable for a traffic death because of bad code?
  • GenetixGenetix Posts: 1,754
    edited 2014-08-17 23:09
    Erco, who is responsible for maintenance? Does the car need to drive itself in for maintenance if needed, or is the owner liable because the vehicle wasn't maintained?
    At some point you need to draw the line, or, like many things, will the rest of us have to pay for someone else's laziness or stupidity?

    Oh, and what should happen if your vehicle is cyber-jacked?
  • LoopyBytelooseLoopyByteloose Posts: 12,537
    edited 2014-08-17 23:36
    W9GFO wrote: »
    I thought it was pretty common knowledge that the Hindenburg's problem wasn't the hydrogen but the flammable fabric covering. Hydrogen is a much better and cheaper lifting gas than helium, and we have an inexhaustible supply of it.

    Well, the flammable fabric hypothesis has been rebutted by the possibility of a build-up of electrical charge that ignited gas leaking through ventilation ducts. There seems to be at least one eye-witness who saw a blue corona on the tail of the Hindenburg just prior to its bursting into flames.

    Still, the fabric was coated with aluminum paint, and aluminum powder alone has a tremendous combustible potential. I believe it has become a staple of solid rocket fuel these days.

    ~~~~~~~~~~~~
    American law allows anyone to sue without merit, and Melvin Belli, the King of Torts, pretty well established suing everyone involved as standard procedure and then letting the courts assign blame.

    In other words, when things go wrong ... everyone is sucked into the litigation. Nobody knows who is really liable until the courts have decided. It is all on a case-by-case basis.
  • Heater.Heater. Posts: 21,230
    edited 2014-08-18 02:13
    Lawson,
    So you're expecting the software to be designed by homicidal maniacs?
    No. Where did I say that?


    Software has bugs, designs have omissions, possible situations are not foreseen. That is the nature of not just software but everything else humans create.
    If something confuses an autonomous car, it should stop
    Well, if it's confused it may not know it. Why would it stop? That is the nature of confusion.
    If I can't stop it should avoid
    Indeed. Often avoiding hitting something involves the possibility of hitting other things around it. Do I go for the lamp post or that old lady on the sidewalk? How do I know which is which? Is the old lady worth more than the kids in the car or not?
    And if something did such a good job of sneaking up on the car that it couldn't avoid, I'd expect it to crash in the least dangerous way possible for everyone involved.
    Again, how does it decide the "least dangerous way"?
    (Note: most autonomous cars have 360 degree vision. Sneaking up on one will take effort.)
    Sounds like you have not been driving in places like London or other crowded cities. There are plenty of ways surprises can happen all around you, all the time, and at close quarters. 360 vision will not help you against that. The only safe thing is to make progress at a snail's pace. Which is probably a good idea no matter whether the driver is human or machine.
  • LoopyBytelooseLoopyByteloose Posts: 12,537
    edited 2014-08-18 03:16
    My dear grandfather always asserted, "Most people in the graveyard had the right of way."

    Not sure liability and blame matter as much as the ill effects of poor judgment. Any fool that gets into a self-driving car might just be getting what's coming to them.

    Maybe I should take a tramp freighter back to the USA next time I return rather than fly in an automated airliner.

    It is up to all of us to determine how much risk we are willing to take.
  • LawsonLawson Posts: 870
    edited 2014-08-18 11:22
    Heater. wrote: »
    Lawson,

    Well, if it's confused it may not know it. Why would it stop? That is the nature of confusion.

    You're looking at this wrong. If the autonomous car isn't certain it can proceed without hitting something, why should it ever move? Mostly I'm describing the testing scheme any autonomous car would have to pass for me to consider it safe. Basically it should fail as safely as it can when one or multiple sensors are disabled at the worst time. It should be able to deal with random stuff appearing in front of it. It should correctly deal with a perspective-corrected person painted on the road. (And any other evil test you can think of that the best drivers on the road could also pass.)
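    A minimal sketch of that testing idea, assuming a planner that must positively confirm both healthy sensors and a clear path before moving (everything here is invented for illustration):

        # Sketch of the fail-safe rule: with any sensor disabled, or anything
        # uncertain, the planner must refuse to proceed.

        def plan_motion(sensor_ok, path_clear):
            """Return 'go' only when every sensor reports healthy AND the path
            is positively confirmed clear; anything uncertain means 'stop'."""
            if not all(sensor_ok.values()):
                return "stop"
            return "go" if path_clear else "stop"

        # "Disable one or multiple sensors at the worst time":
        assert plan_motion({"lidar": False, "camera": True}, path_clear=True) == "stop"
        assert plan_motion({"lidar": True, "camera": True}, path_clear=False) == "stop"
        assert plan_motion({"lidar": True, "camera": True}, path_clear=True) == "go"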
    Heater. wrote: »
    In deed. Often aviding hitting something can involve the possibility of hitting other things around. Do I go for the lamp post or that old lady on the side walk? How do I know which is which? Is the old lady worth more than the kids in the car or not?

    Again, how does it decide the "least dangerous way"?

    The insurance companies have tons of data on this. I'd expect any autonomous car would incorporate a lot of the crash safety data from the insurance companies into its crash mitigation decision tree.

    And regarding your example of a car with kids whose only choice is to hit an old lady or a lamp post, the car should hit the lamp post. The car with kids has the passengers surrounded in several tons of ablative armor so the likely injury is going to be bruises, while the old lady is unarmored so the likely injury is something between broken bones and death. I would further say that given the slow movement of most "old ladies", any autonomous car that couldn't anticipate and avoid said "old lady" is a failure and should be recalled as unsafe.
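    As a sketch of what that decision tree might reduce to, here is a cost-minimizing choice over crash options; the harm scores are invented stand-ins for the injury statistics insurers actually collect:

        # Hypothetical expected-harm scores (invented numbers, illustrative only).
        EXPECTED_HARM = {
            "emergency_brake": 0.5,   # if there is room to stop at all
            "hit_lamp_post": 1.0,     # occupants bruised, car damaged
            "hit_pedestrian": 9.0,    # unprotected person: broken bones to death
        }

        def least_dangerous(options):
            """Choose the available manoeuvre with the smallest expected harm."""
            return min(options, key=EXPECTED_HARM.get)

        print(least_dangerous(["hit_lamp_post", "hit_pedestrian"]))  # hit_lamp_post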
    Heater. wrote: »
    Sounds like you have not been driving in places like London or other crowded cities. There are plenty of ways surprises can happen all around you, all the time, and at close quarters. 360 vision will not help you against that. The only safe thing is to make progress at a snails pace. Which is probably a good idea though no matter if the driver is human or machine.

    I've done some driving down town in various USA cities. (Chicago, St. Paul/Minneapolis, Madison, Milwaukee, etc.) Just observing what I can and avoiding anything that could hit me has served me well with no accidents in ~15 years of driving. As far as I know the current autonomous cars can do that right now.

    Back on topic, I'd say that the owner of an autonomous car should be liable for any accidents unless a clear manufacturing fault can be shown. Of course if the car's auto-drive has been tampered with, then whoever tampered with the auto-drive should be held liable. (With potential criminal charges for tampering with a deadly weapon.)

    Marty
  • Heater.Heater. Posts: 21,230
    edited 2014-08-18 13:17
    Lawson,
    You're looking at this wrong. If the autonomous car isn't certain it can proceed without hitting something, why should it ever move?

    Have you ever written any software?

    Programs produce incorrect outputs all the time. I might think I have a program that outputs 4 when given 2 and 2 as inputs and asked to add them. But when it produces 7.9087234987e11 it will never know it went wrong. It will be "confused" but not know it.
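    A trivial sketch of the point: a buggy program can return a wrong answer without raising any error, so it never "knows" it is confused:

        def add(a, b):
            return a * b        # the bug: '*' where '+' was intended

        print(add(2, 2))        # happens to print 4, so the bug hides in testing
        print(add(2, 3))        # prints 6: confidently wrong, no exception raised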
    And regarding your example of car with kids who's only choice is to hit an old lady or a lamp post, the car should hit the lamp post. The car with kids has the passengers surrounded in several tons of ablative armor so the likely injury is going to be bruises, while the old lady is unarmored so the likely injury is something between broken bones and death.
    I might agree. Are you asking me to believe that any computer system is going to have all that awareness of circumstances built into it any time soon? Or that it can make moral judgements in such situations. I think not.
    I've done some driving down town in various USA cities...As far as I know the current autonomous cars can do that right now.
    As far as I know they cannot. They don't even have the sensors available. Never mind the compute power and the algorithms.
    Back on topic, I'd say that the owner of an autonomous car should be liable for any accidents unless a clear manufacturing fault can be shown.
    Quite right too. I think that's what I have been saying. That in itself would dissuade me from ever riding in such a machine.