Driverless cars sound like a good idea. After all, who wouldn’t want to use technology to boost efficiency and improve safety? If only it were that simple.
The reality is that driverless does not mean humanless. Software and sensors may curb the need for human skill, but they seldom (if ever) eliminate that need entirely. The reason? Machines are imperfect. They err much like humans do. When a machine that flips burgers, picks fruit, or pours drinks breaks down, the impact is trivial. But when machines carry out tasks that put lives at stake, the results of errors can be deadly.
Take the airplane autopilot. First introduced in 1912, the system is designed to automatically keep an airplane steady and on course so human pilots don’t have to. The result is a smoother, safer ride for passengers. But autopilot can fail to work properly, and that raises serious safety concerns. In 1985, a jumbo jet nearly crashed after its autopilot failed to alert the crew to an imminent safety risk. The airplane went into a high-speed dive before the human pilots were able to intervene and avert disaster. Incidents like this are why, to this day, autopilot use still requires the human touch. We know that for all their virtues, machines can’t be trusted to get it right all the time, every time. Driverless cars are no different.