Are self-driving cars 33 times more deadly than regular cars?

Paul Kedrosky writes:

I’ve been mulling the noise over Uber’s pedestrian death.

While there are fewer pedestrian deaths so far from autonomous cars than non-autonomous (one in a few thousand hours, versus one every 1.5 hours), there is also, of course, a big difference in rates per passenger-mile. The rate for autonomous cars is now 1 per 3 million passenger-miles, while the rate for non-autonomous cars is 1 per 100 million passenger-miles. This raises the obvious question: If the rates are actually the same per passenger-mile, what’s the likelihood we would have seen that first autonomous-car pedestrian death in the first 3 million passenger-miles?

Initially I wanted to model this as a Poisson distribution, with outbreaks (accidents) randomly distributed through passenger-miles. Then I thought it should be a comparison of proportions. What is the best approach here?

I haven’t checked the above numbers so I’ll take Kedrosky’s word for them, for the purpose of this post.

My quick reply to the above question is that the default model would be exponential waiting time. So if the rate of the process is 1 for every 100 million passenger miles, then the probability of seeing the first death within the first 3 million passenger miles is 1 - exp(-3/100) = 1 - exp(-0.03) ≈ 0.03. So, yes, it could happen with some bad luck.
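
Spelled out in code, here is a minimal sketch of that calculation (assuming a constant-rate Poisson process, so the waiting time is exponential):

```python
import math

# Rates as quoted in the post: 1 pedestrian death per 100 million
# passenger-miles for conventional cars, 3 million autonomous miles so far.
rate = 1 / 100e6   # deaths per passenger-mile
miles = 3e6        # autonomous passenger-miles to date

# P(first death occurs within `miles`) under exponential waiting time
p = 1 - math.exp(-rate * miles)
print(f"P(first death within 3M miles) = {p:.3f}")  # ~0.030
```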

Really, though, I don’t think this approach is appropriate to this problem, as the probabilities are changing over time (maybe going up, maybe going down, I’m not really sure). I guess the point is that we could use the observed frequency of 1 per 3 million miles to get an estimated rate. But this one data point doesn’t tell us so much. In general I’d say we could get more statistical information using precursor events that are less rare—in this case, injuries as well as deaths—but then we could have concerns about reporting bias.
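
To put a number on how little that one data point says: here is a minimal sketch, assuming a constant-rate Poisson process and a flat prior on the rate (both assumptions of mine, not anything claimed above), under which one death in 3 million miles yields a Gamma(2, 3,000,000) posterior:

```python
from scipy import stats

exposure = 3e6  # passenger-miles observed
deaths = 1      # events observed

# Flat prior on the rate => posterior is Gamma(deaths + 1, exposure)
lo, hi = stats.gamma.ppf([0.025, 0.975], a=deaths + 1, scale=1 / exposure)
print(f"95% interval: one death per {1/hi:,.0f} to {1/lo:,.0f} miles")
# roughly one death per ~540,000 to ~12,400,000 miles -- very wide
```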

21 thoughts on “Are self-driving cars 33 times more deadly than regular cars?”

  1. People are making a mistake by letting the AV proponents set the discussion in terms of deaths per mile. Not every mile is created equal; country miles are obviously going to be easier to navigate than city miles for these cars, so simply by activating the system more often in those situations they can bias the results. Also this ignores non-fatal accidents (as Andrew said).

    Finally, I suspect the nature of these accidents is going to be much more disturbing to people. Usually accidents are considered avoidable, in the sense that they wouldn’t have happened if both parties had been “more careful”. I’d expect these accidents to be bizarre. Like a car that sees some road-associated pattern in the waves/horizon and then veers off a bridge into the water.

    • “People are making a mistake by letting the AV proponents set the discussion in terms of deaths per mile. Not every mile is created equal…”

      AV proponents are already including this in the discussion, see my answer below. Is it because they want to level the comparison to other AV products? Sure. But at the very least, it’s not completely ignored.

      Regarding your second point, I do believe the vast majority of AV accidents involve a non-AV car at fault. Actually, I think this is an area where AV cars may be “gaming” the metric. Why? Well, no human *perfectly* follows the driving rules, but it’s a lot easier to get an automated system to follow rules. So an AV car could do something unpredictable, but legal, causing an accident which would then be declared the non-AV car’s fault.

      • So an AV car could do something unpredictable, but legal, causing an accident which would then be declared the non-AV car’s fault.

        I think this is also likely. As an example, there is a left turn near me where *everyone* goes into the far lane and then merges back into the inside lane. I attempted to follow the law one time by turning into the near lane, and sure enough I nearly got in an accident. Basically the correct action needs to be learned by mimicking the other drivers, but without a lemming effect.

        Still, I think there will be some really strange accidents as a result, and this will disturb people more. I’ve played with training a network to drive (not in real life, in a videogame) and seen it go really well until it takes a really bizarre action; then it’s in unfamiliar territory, so things get worse from there. In this case it sounded like the network didn’t even know to slow down after hitting something (probably not many training examples of that).

      • Here is what appears to be an example:

        He also makes a startling claim — that before the crash, Walter complained “7-10 times the car would swivel toward that same exact barrier during auto-pilot. Walter took it into dealership addressing the issue, but they couldn’t duplicate it there.”

        Noyes: “The family is telling me they provided an invoice to investigators, that the victim took the car in because it kept veering at the same barrier. How important is that information?”
        O’Neil: “That information has been received by the CHP; they’ve been acting on it for some time now.”

        http://abc7news.com/automotive/i-team-exclusive-victim-who-died-in-tesla-crash-had-complained-about-auto-pilot/3275600/

        The network apparently thought one seemingly arbitrary barrier looked like a road.

        • The road is poorly marked there; the theory is that the software was treating one of the road lines as a lane separator when it was really part of the gore (the painted triangle where the lanes divide before a barrier).

      • Something like this happened, I think in Arizona, where an Uber came upon traffic backed up near an intersection but the right lane was clear. The car proceeded at the legal speed limit and crashed near the intersection into another vehicle that was turning.
        A cautious or professional driver would slow down, anticipating possible conflict; in fact, some jurisdictions prohibit passing on the right unless you exercise extra caution.

  2. Hi Andrew,

    I was wondering if you would weigh in on this. I’ve been posting about this on my own blog as well, using the negative exponential approach: http://faculty.washington.edu/dwhm/2018/03/19/are-ubers-autonomous-vehicles-safe/

    Some back-and-forth on Twitter made me consider the possibility that maybe we should be considering autonomous miles by all companies, not just Uber. After all, we would be having more or less the exact same conversation now if it had been a Waymo or GM vehicle involved in the crash. Estimating total autonomous mileage at 9.4M miles, the probability of the first crash occurring by this point is about 10%, if the crash rate were the same as for human drivers. Still doesn’t look good, but it could just be bad luck. (Leaving aside that the number-one selling point of AVs is supposed to be *better* safety.)

    I couldn’t find data on injury crashes for AVs (not that I looked very hard), but we do know that in CA in 2017, Waymo safety drivers intervened once every 5,600 miles, on average. The question this raises is: What would have happened if they hadn’t? If the answer is “a police-reported crash” in more than 1% of cases, then AV safety doesn’t look so good compared with the average of one crash every 490,000 miles for human drivers. (That’s for all police-reported crashes, not just fatal crashes.)
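
    For what it’s worth, here is a sketch of the arithmetic behind both of those numbers (using the figures quoted in this comment, which I haven’t verified independently):

    ```python
    import math

    # First-death probability at the human rate, over all-company AV miles
    total_av_miles = 9.4e6        # commenter's estimate, all companies
    human_fatal_rate = 1 / 100e6  # fatal rate per mile, human drivers
    p = 1 - math.exp(-human_fatal_rate * total_av_miles)
    print(f"P(first death by now) = {p:.2f}")  # ~0.09, i.e. about 10%

    # Break-even fraction: if more than this share of interventions would
    # otherwise have ended in a police-reported crash, the AV crash rate
    # exceeds the human-driver rate.
    miles_per_intervention = 5_600   # Waymo, CA, 2017
    miles_per_crash_human = 490_000  # police-reported crashes, humans
    print(f"threshold = {miles_per_intervention / miles_per_crash_human:.3f}")
    # ~0.011, the "more than 1% of cases" figure above
    ```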

    • I think it’s a mistake to be estimating static rates at this point, as Andrew alludes to above. Early phase implementation is not gonna be flawless and we probably expect huge improvements in safety per mile once AV systems go through initial testing and refinement in the wild.

  3. Along the lines of what you wrote at the end, one thing to remember is that the rate of accidents for self-driving cars is, with high probability, getting lower much faster than that of human-driven cars. That the rate of accidents could be *increasing* for human-driven cars is a totally believable hypothesis to me, although I would speculate that this rate of change would be near 0 compared with the rate of change for self-driving cars.

    So even if the rate of accidents for self-driving cars using available historical data was 33x higher than that of human-operated cars, I would be willing to guess that the ratio based on current technology is much, much better than that…and in a year would likely be significantly better still for self-driving cars.

    Of course, the flip side is that as self-driving cars get more reliable, they can be exposed to more and more difficult areas. For example, I recall reading an article about the fact that GM’s self-driving cars have a higher accident rate than, I think, Uber’s…but the GM tech team countered that Uber had only been testing in Phoenix, while GM was testing in SF. Having driven in both, I have more faith in GM’s technology just in that they would be confident enough to expose their cars to the SF driving world!!

    • +1. I think estimating static rates at this point in the game is likely to be way more misleading than helpful. We need to run the experiment long enough to figure out if we can drive (hah) AV accident rates way down and how quickly.

  4. The distinction in driving areas is, I think, very important. Where are most pedestrian deaths likely to occur? I would imagine in places with more pedestrians. Do autonomous cars drive more of their miles in areas with pedestrians than human-operated cars do? Is passenger-mile the best measure? What about vehicle-miles instead?

    • From what I’ve seen, most people are talking in terms of deaths per vehicle mile or (more appropriately for this case, IMO) fatal crashes per vehicle mile, not per passenger mile.

      In general, rural driving is more dangerous on a per-mile basis than urban driving. However (though I don’t have the numbers) I suspect pedestrian deaths are more common in urban areas.

      Another issue is that AVs may very well increase miles traveled, by reducing the perceived cost of travel. http://dx.doi.org/10.1016/j.tra.2015.12.001 If that happens, total deaths could increase unless the death rate per mile decreases enough to offset the increase in vehicle miles traveled (VMT).

      Some useful resources:
      http://www.iihs.org/iihs/topics/t/general-statistics/fatalityfacts/state-by-state-overview
      https://www.fhwa.dot.gov/policyinformation/statistics/2016/ (see section 12)

    • Where are most pedestrian deaths likely to occur? I would imagine in places with more pedestrians.

      Really? In my imagination (informed only by reading and remembering the newspapers unsystematically, and driving here and there), they’re most likely to occur in places where pedestrians are usually rare (but not unexampled) and perhaps formally forbidden, and where cars usually travel fast: for instance, places where people on foot may try to cross a divided highway (without any sort of cross-walk) in moderately heavy traffic. I’ve often been in such places (in both roles), and though I’ve never seen (or been…) a fatality, I’ve seen close calls, and (I think) read about deaths.

  5. This is much like my case, which I mentioned in a comment a while ago. I had to provide a failure rate when, after testing a single unit for a year, it had not failed. Not much to go on! But we were required to supply a number for a (non-US) government customer. I argued that if it had failed after a year, then we would estimate the failure rate as Andrew did. So it was probably lower than that. The standard deviation of the distribution equals the mean, so you could get some notion of the uncertainty, but it wouldn’t be worth much. What more could you possibly say?

    You can’t say much here either…
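
    A sketch of the math behind this comment: assuming an exponential failure model and a flat prior on the rate (my assumptions, though consistent with the “standard deviation equals the mean” remark above), zero failures in test time T give an Exponential posterior:

    ```python
    import math

    T = 1.0  # years of failure-free testing, as in the case described

    # With zero failures in time T and a flat prior, the posterior for the
    # failure rate is Gamma(1, T), i.e. Exponential with scale 1/T:
    posterior_mean = 1 / T          # failures per year
    posterior_sd = 1 / T            # equals the mean, as the comment notes
    upper_95 = -math.log(0.05) / T  # ~3/T, the classical "rule of three"
    print(posterior_mean, posterior_sd, upper_95)
    ```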

  6. The argument presented is somewhat like this: we may be failing, let’s stop. I can’t think of a good reason to treat that kind of argument seriously. It’s not like car accidents have never been seen, or that they cause radioactive fallout which damages or kills large numbers. It’s not like new things haven’t been introduced before and then improved. The argument is closer to ‘the typewriter will eliminate large numbers of clerk jobs, and the resulting mass unemployment will cause social unrest that will result in society being taken over by a communist/authoritarian menace’. Does that sound absurd? It should, but that’s the same quality of argument, just made obviously absurd (leaving aside the quality of minds that believe in conspiracy theories). Now of course the person isn’t actually saying we should stop and is, I assume, trying to be interesting by being ‘counter-narrative’, but I consider it confabulation of a rank sort to type that out and publish it without a disclaimer that ‘this is silliness bubbling out of me’.

    I appreciate a good ‘oh my God’ thought as well as the next person. I particularly enjoy ‘the sky is falling’ threads. It’s important to distinguish those from reality, particularly when clothing them in statistical pretensions. You may say, ‘That’s obviously not what he said’, to which I reply, ‘But that’s what it means’: to point out that the rates are higher is pointing out the negative case.

    It’s not even a clever case to make: every human being aware of autonomous cars is worried they will hit someone, so feeding that fear with pretend statistics is exactly the kind of impulse one should resist. It not only feeds fears but it feeds the inability to see through silly arguments: the more the fear is reinforced, the more any argument that reinforces it seems rational, and the less the fearful remain skeptical enough to question. Think of any number of repeated mantras that induce fear of the ‘other’, whether of kinds of humans, or rabid raccoons (they gonna bite ya!), or ideas that challenge your preconceptions. These rely on repeating things that only make sense when you are not skeptical enough to challenge them, and fear is so powerful that it strips skepticism both quickly and powerfully.

    • I couldn’t think of the right word at the time but by “non-public road” I meant test track.

      This type of accident, a person in the middle of the road or a kid running onto the road, should be tested for way before the car gets anywhere near a public road.

      • Because Uber was under huge pressure from upper management to put cars on the road so they could claim to be competitive with where the other automated-driving companies are in their development. There are a number of reports showing that Uber’s self-driving cars are orders of magnitude less safe (way more interventions per mile) than other companies’ cars. Worse, that same pressure caused them to move from two-person operation to single-driver operation.

        So basically, yeah, you are right.
