If a small tree branch pokes out onto a highway and there’s no oncoming traffic, we’d simply drift a little into the opposite lane and drive around it. But an automated car might come to a full stop, as it dutifully observes traffic laws that prohibit crossing a double-yellow line. This unexpected move would avoid hitting the object in front, but it could cause the human drivers behind to crash into it.
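To make the contrast concrete, here is a deliberately simplified sketch of how a planner that treats traffic law as an inviolable constraint might handle this scenario. The function, its parameters, and its rules are hypothetical and invented purely for illustration; they are not any real vehicle’s control logic.

```python
# Hypothetical illustration only: a planner that treats every traffic law
# as a hard constraint. All names and rules here are invented.

def plan_maneuver(obstacle_in_lane: bool,
                  oncoming_traffic: bool,
                  double_yellow_line: bool) -> str:
    """Return the action a strictly law-abiding car might choose."""
    if not obstacle_in_lane:
        return "continue"
    # A human driver would briefly cross the line when it is clearly safe.
    if not oncoming_traffic and not double_yellow_line:
        return "drift around obstacle"
    # Crossing a double-yellow line is illegal, so the rule-bound planner
    # refuses, even for a small branch, and stops in the travel lane.
    return "full stop"

# The branch scenario: no oncoming traffic, but a double-yellow line.
print(plan_maneuver(obstacle_in_lane=True,
                    oncoming_traffic=False,
                    double_yellow_line=True))  # prints "full stop"
```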
Should we trust robotic cars to share our road, just because they are programmed to obey the law and avoid crashes?
Our laws are ill-equipped to deal with the rise of these vehicles (variously called “automated”, “self-driving”, “driverless”, or “robot” cars; I will use these terms interchangeably). For example, is it enough for a robot car to pass a human driving test? Some commentators believe that, in licensing automated cars as street-legal, it would be unfair to hold manufacturers to a higher standard than humans, that is, to require an automated car to undergo a much more rigorous test than a new teenage driver.
But there are important differences between humans and machines that could warrant a stricter test. For one thing, we’re reasonably confident that human drivers can exercise judgment in a wide range of dynamic situations that don’t appear in a standard 40-minute driving test; we presume they can act ethically and wisely. Autonomous cars are a new technology and won’t have that track record for quite some time.
Moreover, as we all know, ethics and law often diverge, and good judgment could compel us to act illegally. For example, a driver might legitimately want to exceed the speed limit in an emergency. Should robot cars never break the law in autonomous mode? If robot cars faithfully follow laws and regulations, they might refuse to drive in autonomous mode whenever a tire is under-inflated or a headlight is broken, even in the daytime when it isn’t needed.