In the air the last few years has been the idea that the legal questions raised by so-called driverless cars might do the technology in before it has a chance to shine; see, for example, this MSN.com piece from last April titled “Will Lawsuits Kill the Autonomous Car?” It entertained the possibility that they just might. But a new paper from Brookings Institution senior fellow John Villasenor argues, in distilled form, that the worrying about lawsuits may be the greater threat to the future of computer-assisted driving.
Sure, the law might need to be tweaked a bit, writes Villasenor, “but broad new liability statutes aimed at protecting the manufacturers of autonomous vehicle technology are unnecessary.” The paper is a little dense, so if this is of only top-line interest to you, you might get all you need from this short explainer video:
In fuller form, Villasenor’s argument is that in most cases where automated-driving technology is called into question, the existing body of products liability law in the United States is more than capable of handling the dispute. Automatic braking technologies that don’t perform well on wet streets, a road condition any manufacturer should have foreseen, will trigger traditional negligence claims. Collision-prevention gizmos found to have a software flaw fall under the umbrella of manufacturing defects. A driver who argues that he or she wasn’t warned in time to retake control of the vehicle would raise the “risk-utility test” already used to determine whether design choices strike the right balance.
Villasenor is aware that the body of liability law he’s tackling here doesn’t cover everything. “Untangling fault for accidents,” he writes, “will sometimes involve complex questions of liability shared by both the human driver and autonomous vehicle technology providers.” But his paper provides a useful reality check on where we are now. The Google driverless car, for example, might seem mystifyingly futuristic, but we’ve been using autonomous driving technologies for some time, he reminds us, pointing to a 1958 brochure highlighting “an amazing new device that helps you maintain a constant speed and warns you of excessive speed.” Indeed, what is cruise control if not a robot of sorts helping us drive better?
Villasenor’s case that today’s liability law can handle tomorrow’s technology matters more than it might sound, because it short-circuits the idea that preemptive rule-making has to happen before our robot cars can be allowed to ferry us around. Things will happen, the argument goes. The law can handle them. Full speed — or the speed determined by our technological superiors to be the appropriate one — ahead.
And that’s a good thing, as Villasenor sees it, because it frees us to move past this strange era in which we accept that scores of violent car deaths — Villasenor cites a figure of more than 90 a day in the United States — are simply the price of living with cars. If we allow ourselves to embrace what autonomous driving technologies are very good at, “in the long run,” he writes, “this tradeoff will be viewed as a historical aberration, present only during the century or so when technology enabled the mass production of cars, but not of highly automated systems to help drive them safely and reliably.”
The Shared City is made possible with the support of The Knight Foundation.
Nancy Scola is a lead writer for the Washington Post’s tech blog, The Switch. Scola’s writing on the intersections of technology and politics has been published by The American Prospect, Capital, Columbia Journalism Review, New York, Reuters, Salon, Science Progress, Seed, and other publications.