
Autonomous Cars Can Be Safe or Mass-Marketable, Not Both


“UberImpact,” a questionable branding decision (Pride Parade in Seattle, Jun. 29, 2014). © Jiong Gong, posted to Flickr under a CC BY-NC-ND license, at https://www.flickr.com/photos/124869658@N02/16016091244/

You might remember a rash of people on social media becoming suddenly interested in “the Trolley Problem” a few years ago. A classic ethical thought experiment about the dilemma posed by a runaway trolley, it has seen contemporary interest revived as corporations and engineers (PDF) grapple with the programming and design decisions that will govern self-driving cars.

This post is in part a response and counterpoint to Michael Daigh’s recent post, which presented a hopeful view of self-driving cars that always (obviously!) comply with the law and never make unsafe maneuvers. In that post, the author supposed a world where the robots were programmed to adhere to Asimov’s three laws of robotics, which emphasize avoiding harm to humans. I think it’s important to understand that this outcome for our self-driving car future isn’t self-evident or guaranteed: it is going to take deliberate effort and vigilance, because there are already forces at work guiding our streets toward a different, less utopian outcome.

As a matter of practical realism, self-driving cars can and will be safe only in inverse proportion to their marketability. That is, you can have a safe autonomous vehicle, or you can have a mass-marketable one that drives the way contemporary American drivers and passengers want it to drive, but not both. The reason is simple: humans are unsafe drivers, and the customs of the roadway reflect that.

Exhibit 1: the self-driving Uber that killed a pedestrian in Arizona earlier this year had its emergency braking programming disabled to “reduce the potential for erratic vehicle behavior.” Here’s an analysis of what might have happened if the safety feature were enabled.

Self-driving cars that don’t drive the way a human would drive have a “potential for erratic behavior.” They’re prone to, say, driving under the speed limit, or slowing, stopping, and avoiding potential hazards that a human might disregard. That’s erratic in relation to the current culture of the road, which is to out-drive drivers’ realistic reaction time, use the vehicle’s mass and speed as a threat to keep the way clear, and assume things will all work out.

Modern car- and road-culture expectations assume unsafe driving. When people are forced to contemplate what safe driving really looks like, the way a properly programmed robot would do it, they will realize it’s not going to sell cars. Nobody will want to own one. Not if it’s going to stop at every intersection for pedestrians, even though that’s already the law. Not if it’s going to give bicycles sufficient following distance and no less than three feet of passing buffer, while refusing to cross unsafely into the lane of oncoming traffic and instead waiting patiently for a safe opportunity to pass, even though that’s all already the law. Not if it routinely drives under the speed limit to avoid reasonably foreseeable hazards or to adapt to road conditions, even though that’s already the law.
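To make the conflict concrete, here is a minimal, purely hypothetical sketch in Python of what it would mean to encode those legal duties as hard constraints in a driving policy, next to a market-tuned alternative that relaxes them. Every parameter name and value is invented for illustration; this is not any manufacturer’s actual code.

```python
# Purely hypothetical sketch: legal duties encoded as hard constraints
# in a driving policy. All names and values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class DrivingPolicy:
    stop_for_waiting_pedestrians: bool  # stop at every intersection for pedestrians
    min_bike_passing_buffer_ft: float   # lateral clearance when passing a cyclist
    may_exceed_speed_limit: bool        # ever drive over the posted limit?
    slow_for_foreseeable_hazards: bool  # drop below the limit for conditions

# What the law, as described above, already requires:
LAW_COMPLIANT = DrivingPolicy(
    stop_for_waiting_pedestrians=True,
    min_bike_passing_buffer_ft=3.0,  # "no less than three feet"
    may_exceed_speed_limit=False,
    slow_for_foreseeable_hazards=True,
)

# What a market-tuned product might ship instead, so the car feels
# less "erratic" to buyers accustomed to human driving:
MARKET_TUNED = DrivingPolicy(
    stop_for_waiting_pedestrians=False,  # roll past unmarked crossings
    min_bike_passing_buffer_ft=1.5,      # squeeze by rather than wait
    may_exceed_speed_limit=True,         # keep up with prevailing traffic
    slow_for_foreseeable_hazards=False,  # assume things will all work out
)
```

The point of the sketch is that nothing in the law-compliant column is technically hard to program. The hard part is selling it.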

American road culture includes all these ways that car drivers already break the law or otherwise drive unsafely. Self-driving car manufacturers, who want to sell to the public and have their product share the road with human drivers (something that will have to happen, unless and until we accomplish an overnight switch to ban human driving!), quickly discovered that their products need to match our cultural expectations.

A car following Asimov’s laws probably would never reach 30 mph on a typical Minneapolis street, for example, and would virtually always drive below the speed limit, completely unlike any human driver any of us have ever known. This is why the autonomous car manufacturers are in the early stages of a long, multi-faceted war to let the robots drive as unsafely as humans do.

Evidence of this war can be found in Elon Musk’s “playing the refs” efforts to discourage media coverage of autonomous car collisions. It can be seen in efforts to loosen and forestall regulations governing these cars, their sale and manufacture, and their use on the roads. And it can be seen in manufacturers’ brazen determination to put the cars on the roads with safety programming disabled, using public streets and everyone on them as guinea pigs.

An illustration captioned “Hedonist’s Trolley Problem”: the ethical choices available under the current proposition of autonomous cars.

The profit incentive to disregard laws and regulations is going to be just as strong as, if not stronger than, a human driver’s incentive to engage in the same unsafe and socially malignant behaviors to get where they want to go quickly. Only now it’s going to be baked, universally, into the design of the vehicles themselves. Unless, perhaps, autonomous car owners get to choose between “be safe” and “get me there quickly” to match their own personal ethical calculus.
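If that choice were ever handed to owners, it could amount to nothing more than a settings toggle. Here is a hypothetical sketch of that idea; the mode names and parameters are invented for illustration, echoing the policy sketch earlier, and are not any real product’s interface.

```python
# Hypothetical owner-facing toggle: a "personal ethical calculus"
# reduced to a settings menu. Everything here is illustrative.
POLICIES = {
    # stop for pedestrians, full 3 ft bike buffer, never exceed the limit
    "be_safe": {"ped_stops": True, "bike_buffer_ft": 3.0, "exceed_limit": False},
    # the market-tuned behavior described above
    "get_me_there_quickly": {"ped_stops": False, "bike_buffer_ft": 1.5, "exceed_limit": True},
}

def select_policy(mode: str) -> dict:
    """Return the driving parameters for the owner's chosen mode."""
    if mode not in POLICIES:
        raise ValueError(f"unknown mode: {mode}")
    return POLICIES[mode]

# The ethical dilemma becomes a factory default chosen by the manufacturer:
active_policy = select_policy("get_me_there_quickly")
```

Whichever mode ships as the factory default is the one most owners will never change.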

When a manufacturer decides it’s too “erratic” to stop for pedestrians who want to cross at an unmarked intersection, or unacceptable to drive at speeds slow enough to be safe (and the car that skips those things sells better), will existing law prevail, or will the allure of the market bootstrap unsafe driving into a new law of the road, hard-coded by car manufacturers to best suit their customers?

This is why we can’t take it for granted that autonomous vehicles will be safe; we will have to insist on it, repeatedly, and in many forums: legislative, regulatory, product engineering & design, and public opinion. As has already been amply demonstrated, compliance with existing law and local preference is not a given amid all the financial incentives at play in this developing industry.

If we aren’t careful, we will miss this opportunity to make roads safer by compelling the robots to drive more safely than we do. Instead, we risk codifying the unsafe things our culture is willing to tolerate from human drivers and, for marketing purposes, programming the cars to reproduce them.

Christa M

About Christa M

Attorney. I do law stuff, ride bikes, and paint murals. Member of Hourcar & Nice Ride, and customer of Freewheel Bike and The Hub Bike Co-op.