Self-driving cars will soon be able to hide in plain sight. The rooftop lidar sensors that currently mark many of them out are likely to become smaller. Mercedes vehicles with the new, partially automated Drive Pilot system, which carries its lidar sensors behind the car’s front grille, are already indistinguishable to the naked eye from ordinary human-driven vehicles.
Is this a good thing? As part of the Driverless Futures project at University College London, my colleagues and I recently concluded the largest and most comprehensive survey of citizens’ attitudes to self-driving vehicles and the rules of the road. One of the questions we decided to ask, after more than 50 in-depth interviews with experts, was whether autonomous cars should be labelled. The consensus among our sample of 4,800 UK citizens is clear: 87% agreed with the statement “It must be clear to other road users if a vehicle is driving itself” (only 4% disagreed, with the rest unsure).
We sent the same survey to a smaller group of experts. They were less convinced: 44% agreed and 28% disagreed that a vehicle’s status should be advertised. The question is not a straightforward one. There are valid arguments on both sides.
We could argue that, in principle, humans should know when they are interacting with robots. That was the argument made in 2017 in a report commissioned by the UK’s Engineering and Physical Sciences Research Council. “Robots are manufactured artefacts,” it states. “They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.” If self-driving cars on public roads are genuinely being tested, then other road users could be considered subjects in that experiment and should give something like informed consent. Another argument in favour of labelling, this one a practical matter, is that, as with a car driven by a learner driver, it is safer to give a wide berth to a vehicle that may not behave like one driven by an experienced human.
There are also arguments against labelling. A label could be seen as an abdication of innovators’ responsibility, implying that it is up to others to recognise and accommodate a self-driving vehicle. And it could be argued that a new label, without a clear shared understanding of the technology’s limits, would only add confusion to roads that are already saturated with distractions.
From a scientific perspective, labels also affect data collection. If a self-driving car is learning to drive, and others know this and behave differently, that could taint the data it collects. Something like this seemed to be on the mind of a Volvo executive who told a reporter in 2016 that, “just to be on the safe side”, the company would use unmarked cars for its proposed self-driving trial on UK roads. “I’m pretty sure that people will challenge them if they are marked by doing really harsh braking in front of a self-driving car or putting themselves in the way,” he said.
On balance, the arguments for labelling, at least in the short term, are more convincing. This debate is about more than just self-driving cars. It cuts to the heart of the question of how novel technologies should be regulated. Developers of emerging technologies, who often portray them as disruptive and world-changing at first, tend to recast them as merely incremental and unproblematic once regulators come knocking. But new technologies do not just fit into the world as it is; they reshape worlds. If we are to realise their benefits and make good decisions about their risks, we have to be honest about them.
To better understand and manage the deployment of autonomous cars, we need to dispel the myth that computers will drive just like humans, but better. Management professor Ajay Agrawal, for example, has argued that self-driving cars essentially do what drivers do, only more efficiently: “Humans have data coming in through the sensors – the cameras on our face and the microphones on the sides of our heads – and the data comes in, we process the data with our monkey brains and then we take actions, and our actions are very limited: we can turn left, we can turn right, we can brake, we can accelerate.”