Modern streets are designed to facilitate a complex transportation system that includes pedestrians, cyclists, transit users, and, predominantly, vehicles. As autonomous cars are developed, they are being taught to operate within the street design of today. Yet the speed of their development raises the question: if they become the major form of vehicle traffic on the road, will they change the design elements of urban streets? Should we be investing in different types of infrastructure? Traffic signals, speed humps, curbs, bike lanes, and bollards are all installed with human drivers in mind, offering visual or environmental cues about appropriate behavior. Autonomous cars sense the world differently than people do. Will these objects become more or less important in the future? More importantly, what role will they play in creating communication and safety between autonomous cars and other users, like pedestrians and cyclists?
The safety of all users is of paramount concern because, in general, people will be less predictable, and less identifiable, than other machines. (A pedestrian fatality caused by a driverless vehicle was in part due to this confusion—but also because the vehicle was depending on a human driver for emergency braking, and the driver was not paying attention.)
How do autonomous vehicles work?
Currently, autonomous vehicles have several “senses” they use to navigate the world, including cameras, radar, lidar, and GPS.
Traditional radar (radio detection and ranging) broadcasts radio waves and uses the returning echo to detect the range, velocity, and trajectory of objects. Radar is especially good at detecting metallic objects, like other vehicles, and is not blinded by fog, snow, or dust. By measuring Doppler shifts in the returned frequency, it gives an accurate picture of an object's speed relative to its antenna. However, it can be fooled when metal sits near non-metal things: a child near the front bumper of a car might be obvious to a human yet all but invisible to radar.
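The Doppler relationship described above can be sketched in a few lines of code. This is a minimal illustration, not any vendor's implementation; the 77 GHz carrier is an assumption, typical of automotive radar bands.

```python
# Minimal sketch: a target's radial speed from a radar Doppler shift.
# Assumes a monostatic (co-located transmit/receive) radar; the two-way
# reflection doubles the shift: shift = 2 * v * f0 / c.

C = 299_792_458.0  # speed of light, m/s

def radial_speed(tx_freq_hz: float, doppler_shift_hz: float) -> float:
    """Return the target's speed along the antenna's line of sight, in m/s.
    A positive shift (and result) means the target is closing."""
    return doppler_shift_hz * C / (2.0 * tx_freq_hz)

# A 77 GHz automotive-band radar observing a +14.3 kHz shift:
closing_speed = radial_speed(77e9, 14_300.0)  # about 27.8 m/s (roughly 100 km/h)
```

Note the word radial: radar measures only the component of motion toward or away from the antenna, which is one reason a pedestrian crossing directly in front of a car can register almost no Doppler signature at all.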
Lidar uses infrared laser light rather than radio waves, but a similar broadcast-and-return process maps a 3D image of the world, drawing a wireframe model that looks a bit like the world of the Tron movies. Lidar is better than radar at resolving the size and shape of objects, but it can be overwhelmed by too much particulate in the air scattering light back toward the sensor. Lidar's world-building gets assistance from GPS, which can tell a car where it is in relation to known infrastructure.
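A lidar unit reports each return as a range plus two angles; turning those into the points of that wireframe model is simple trigonometry. A minimal sketch, assuming a common axis convention (x forward, y left, z up):

```python
import math

def lidar_return_to_xyz(range_m: float, azimuth_deg: float,
                        elevation_deg: float) -> tuple:
    """Convert one lidar return (range, azimuth, elevation) into a 3D
    point in the sensor's own frame. The axis convention is an
    assumption: x forward, y left, z up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A return 10 m dead ahead at the sensor's horizon maps to (10, 0, 0):
point = lidar_return_to_xyz(10.0, 0.0, 0.0)
```

Sweep those angles a few hundred thousand times per second and the point cloud, the "wireframe" of the street, emerges.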
Cameras are also used, and are already useful in driver-assist applications like parking and rear-obstacle detection. The data coming from the visual world is very detailed, and that is both a strength and a weakness: detail takes a lot of processing power to interpret. Computer processors cannot yet read and prioritize all incoming visual information in real time, whereas a lot of human communication in driving is based on visual cues. Drivers wave each other through, sometimes contradicting legal right-of-way. Pedestrians make eye contact to make sure they're seen, and may behave unpredictably by dashing or stopping, depending on what they read in a driver's response.
For computers, processing all the information in even a limited field of vision takes enormous computational power. A subtle interaction of body language and eye contact may make the difference between a wave-through and a wave-hello. In truth, human brains also do not process all we see, and we have gaps in visual input and processing: we have simply evolved to know where to put our eyes, and how to combine and extrapolate from the information we get. Lidar, radar, and GPS help provide a similar world "model" for onboard intelligence, but without millions of years of evolved integration of sense and processing. Knowing where things are and how fast they're moving does not necessarily mean being able to make sense of them. How does a self-driving car recognize that a bollard is stationary, but that the child between two bollards might suddenly move? The problem, of course, is that real-world conditions are unpredictable. A driverless vehicle may do very well in most instances, but be tricked by a situation that would confuse no human. We recoil when our technology does something strange and reckless, like accelerating straight into a stationary vehicle.
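The bollard-versus-child question is, in part, a question of motion priors: a perception system can attach a worst-case speed to each object class it recognizes, and plan its clearance accordingly. A toy sketch; the class names and numbers are illustrative assumptions, not values from any real system:

```python
# Toy sketch: per-class motion priors for clearance planning.
# All class names and speed values are illustrative assumptions.

MAX_EXPECTED_SPEED = {   # worst-case speed in m/s, assumed per class
    "bollard": 0.0,      # fixed street furniture never moves
    "parked_car": 2.0,   # could pull out slowly
    "pedestrian": 3.0,   # could dash at any moment
}

def clearance_margin(obj_class: str, base_margin_m: float = 0.5,
                     horizon_s: float = 1.5) -> float:
    """Buffer to keep around an object: its worst-case travel within the
    planning horizon, added to a base margin. Unknown classes get the
    most cautious prior."""
    worst_speed = MAX_EXPECTED_SPEED.get(obj_class, 3.0)
    return base_margin_m + worst_speed * horizon_s
```

Under these assumed numbers, a bollard earns only the 0.5 m base margin, while a pedestrian standing perfectly still earns ten times that, because of what they might do next. Classification, not just detection, drives the behavior.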
Autonomous cars learn from watching human drivers, and millions of miles of driver data will help. The infrastructure and environment of the street may also adapt to assist them, just as it does with people.
Autonomous vehicles and city streets
Proponents of self-driving cars are already envisioning huge changes in city infrastructure. For example, researchers at the University of Texas at Austin propose that stoplights at intersections might be replaced by servers that control traffic flow. This simple model does not incorporate traffic not controlled by the server, which makes it unrealistic: any real-world application would have to adjust for the unexpected. Even if all cars were automated, objects might blow onto the roadway, people or animals could wander in, vehicles break down, communications go out, and accidents will happen. However, multiple points of observation and communication may make busy intersections without stoplights possible.
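The core job of such a server is conflict checking: grant a vehicle a time slot through the intersection only if no conflicting path already holds that slot. A toy sketch of the idea; the path names, slot granularity, and API are illustrative assumptions, not the UT Austin design:

```python
# Toy sketch of a reservation-based intersection server.
# Paths, time slots, and names are illustrative assumptions.

class IntersectionServer:
    def __init__(self):
        self.reservations = {}  # (path, time_slot) -> vehicle_id

    def request(self, vehicle_id: str, path: str, time_slot: int,
                conflicts: dict) -> bool:
        """Grant the slot only if neither this path nor any path that
        crosses it is already reserved for that slot."""
        for p in [path] + conflicts.get(path, []):
            if (p, time_slot) in self.reservations:
                return False  # denied: vehicle slows and asks again
        self.reservations[(path, time_slot)] = vehicle_id
        return True

# Northbound-through and eastbound-through cross each other:
CONFLICTS = {"N_through": ["E_through"], "E_through": ["N_through"]}

server = IntersectionServer()
server.request("car_a", "N_through", 12, CONFLICTS)  # granted
server.request("car_b", "E_through", 12, CONFLICTS)  # denied, slot taken
server.request("car_b", "E_through", 13, CONFLICTS)  # granted one slot later
```

A denial is not a crash, just a slowdown: the vehicle adjusts its speed and requests a later slot, which is how such a scheme replaces the all-stop phases of a traffic light.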
If the very concept of turn-taking grid traffic is to be re-evaluated, the streets of tomorrow may look very different. Will bike lanes be necessary if robotic drivers are able to observe and adapt to cyclists anywhere on the road? What about signs? With an automated network, some traffic guidance systems might not need to be large enough to catch human attention. After all, servers controlling intersections could be completely invisible. Signposts could be replaced with RFID tags announcing speed limits, changes in road conditions, and local traffic restrictions. A traffic grid that contained only self-driving vehicles could look very different from the grids of today.
The use of urban space could also become more flexible. Human drivers often depend on routine knowledge of an area to navigate; if a road was there yesterday, they'll expect it there tomorrow. City work crews post blinking signs and emblazon areas in safety orange when traffic changes occur. All an autonomous vehicle needs to adapt to change is an up-to-date map and GPS coordinates. In an urban future where self-driving, shared-use cars work around the clock, the need for parking will drop radically. Yet during pick-up and drop-off times, there will be an increased need for curb space. Mixed-use areas, perhaps marked by retractable bollards, could rise to provide extra pedestrian space during tourist and lunch hours, and lower to let autonomous vehicles drop off and pick up passengers at times of high through-volume. This variability will pose much less risk to the public than asking human drivers to adapt to hourly route changes.
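Scheduling those retractable bollards could be as simple as a time-of-day policy that every vehicle's map consumes. A sketch; the hours and mode names are illustrative assumptions:

```python
# Sketch: time-of-day curb allocation for a mixed-use block with
# retractable bollards. Hours and mode names are assumptions.

def curb_mode(hour: int) -> str:
    """Bollards up for pedestrians at midday, down for pick-up/drop-off
    during commute peaks, flexible otherwise."""
    if 11 <= hour < 14:
        return "pedestrian"   # bollards raised: lunch-hour plaza
    if 7 <= hour < 10 or 16 <= hour < 19:
        return "passenger"    # bollards lowered: high through-volume
    return "flexible"

mode_at_noon = curb_mode(12)  # "pedestrian"
```

The specific hours matter less than the principle: a machine-readable schedule can be pushed to every autonomous vehicle the moment it changes, with no safety-orange signage required.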
Of course, short of building a separate system just for autonomous vehicles, a la Elon Musk, or banning all human-driven automobiles, our street infrastructure will always need to support a diversity of transportation modes. Shared space will remain human space in our cities, with pedestrians, bikes, scooters, wheelchairs, strollers, and those with canes or walkers. Humans need human amenities. Street lights, bike racks, and bollards provide security. Benches, waste receptacles, washrooms, and water fountains support the use of public space. Planters, trees, art, and fountains invite participation. These human needs won’t change.
What will change is how street amenities are deployed. Proponents suggest automation, if done carefully, could allow more efficiency on the roads, allowing urban design to be more focused on human need, not less.
Ethics and decision-making in autonomous cars
One of the biggest ongoing conversations around self-driving cars is regarding the algorithmic programming of ethics. In split-second decisions, human beings often react on instinct. An autonomous car evaluates, rather than reacts, and makes a decision that can be known and understood. By understanding what choices are programmed into vehicles, urban designers can build pedestrian safety zones with intentionality.
In philosophy, there is a classic ethical thought experiment called "The Trolley Problem." It goes like this: imagine you are on a trolley on a track. The trolley has lost its brakes and is bearing down on five unaware people. It will certainly kill them if something is not done. In front of you is a switch, and you see that if you throw it, the trolley will take a junction, sparing the five lives. However, one person is currently on the far track, at the moment unaware and not in danger. If you pull the lever, he or she will certainly be killed.
Many people agree that it is preferable to kill one person instead of five. Even so, it is hard to imagine being the person who consigns another to death: this is called "participation in a moral wrong," and it is a gray ethical area. If you are a utilitarian, looking for the best outcome for the most people, killing one to save five is clearly the right choice. Yet the number of people willing to make this choice drops when the trolley problem is recast: what if you were standing beside the trolley tracks with a very large person, and physically shoving them in front of the trolley would stop its motion and save the other people? Does the answer change if you are comparing five elderly people with one child? To take the conundrum away from the trolley scenario entirely: what if you were a doctor, and you knew that if you kidnapped a healthy young person, you could save five lives with their organs?
People have an instinctive sense of moral wrong. Yet balancing moral wrong against utilitarian arguments is something philosophers struggle with, never mind the average person. How much harder will it be for a car? Programmers will not be able to consider every scenario the artificial intelligence may come across, even if they could empanel ethics experts to agree on the best outcome. So what “rules of thumb” should be used as the guide?
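To see how thin a programmed "rule of thumb" really is, consider naive utilitarianism written out as code. The option names and casualty counts are illustrative:

```python
# Sketch: naive utilitarianism as a programmed rule of thumb.
# Options and harm estimates are illustrative assumptions.

def least_harm(options: dict) -> str:
    """Choose the action with the fewest expected casualties.
    Ties, uncertainty in the estimates, passenger loyalty, and
    'participation in a moral wrong' are all invisible to this
    rule, which is precisely the problem."""
    return min(options, key=options.get)

choice = least_harm({"stay_on_track": 5, "pull_lever": 1})  # "pull_lever"
```

Every recast trolley scenario above breaks this rule in a different way: it would just as happily shove the large person off the bridge, or harvest the healthy patient's organs.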
Consumer intention also must be considered. An autonomous vehicle might know with certainty that throwing itself in front of the trolley could save a crosswalk full of pedestrians but kill its passenger. It’s been shown that few people would buy a car that would make this decision. Similarly, few pedestrians would feel comfortable with driverless cars sharing the streets if they thought autonomous cars would choose to run them over to prevent an on-road collision.
These ethical questions will be rare: the thought is that, in general, autonomous cars will tend to lower traffic accidents, rather than increase them. Yet it is uncomfortable to imagine our cars deciding who to sacrifice—ourselves, or those around us?
In some ways, our infrastructure can be part of these conversations. Intention can be built into pedestrian and bicycle zones: a bollard can stop an autonomous vehicle swerving out of the way of an accident just as well as it stops a human-driven car careening out of control.
Hacking and other human dangers
Of course, one of the constants in our increasingly digital world is the threat of digital crime. Hacking, viruses, and malware are problems for "the internet of things," including autonomous vehicles, just as they are for PCs, phones, and servers. With a widely distributed network of cars, terrorism may not just mean one car used as a weapon, but many.
Digital security efforts will of course provide most of the protection against this sort of malicious actor. Still, the anti-terrorism perimeters we use today will not become obsolete, even if no human gets behind the wheel of a vehicle a century from now. They will merely be modified to reflect what threats exist in the network of cars.
The streets of tomorrow
As we investigate the possibility of autonomous vehicles, we can imagine the landscape being completely transformed by their presence. We may imagine a technological dream that reduces congestion, prevents deaths, and lowers our individual need for car ownership.
Yet streetscapes and urban design will likely still be recognizable, just scaled differently. Old city streets, built before the rise of the automobile, show us that our streets have always answered human needs. Successful places allow people to sit, to socialize, and to walk, and provide safety and ease of passage. Although the streets of tomorrow may deploy street furniture differently, perhaps making some areas more variable and others more human-focused, the cityscape will still be recognizably human habitat, built for people to work, play, travel, and live.