Cameras Can Speed Cities to Improving Pedestrian Safety – Next City

Computer algorithms can track anything that moves through the intersection. (Credit: Tarek Sayed/University of British Columbia)

The great irony of Vision Zero (a goal to prevent all traffic-related deaths), if you ask Tarek Sayed, is that to know how to reduce pedestrian fatalities, you need data on pedestrian fatalities.

“We have to wait for collisions to happen before we can do anything. A fundamental ethical and practical problem which faces traffic engineers is, in order to improve safety, you need a certain number of collisions … which you would try to prevent later,” says the University of British Columbia civil engineering professor. “It’s very reactive.”

The traditional way to attack traffic safety is to identify places with a high number of crashes, make changes at those places, and then wait a few years to see if the changes reduce crashes. Traffic engineers generally agree that a baseline of around three years of crash data is needed for statistically significant results.

Sayed thinks there’s a better way. For the last 10 years, he’s been developing a system that uses video cameras to monitor intersections for near misses between moving objects, and computers to automatically track the results.

“You don’t have to wait for two, three or four years” to gather enough data on an intersection, he says. “You can do it in a matter of hours.”

The system, called, somewhat inelegantly, “computer vision and automated safety analysis,” uses off-the-shelf cameras, or cameras already installed in an area, to film a given intersection. Computer algorithms can track anything that moves through the intersection — cars, bikes, people — and can figure out quite a bit about each one. The computer knows whether the moving blip is a person or a car, how fast it’s moving, and how close it came to hitting another road user. The computer can even tell, with about 80 percent accuracy, whether a person is distracted by their phone while walking.
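The article doesn’t say how near misses are scored, but a standard surrogate safety measure in traffic-conflict research is time-to-collision (TTC): how long until two tracked road users would collide if each kept its current speed and heading. A minimal sketch under a constant-velocity assumption — the function, radius, and severity threshold below are illustrative, not details of Sayed’s system:

```python
import math

def time_to_collision(p1, v1, p2, v2, radius=1.0):
    """Seconds until two road users, each moving at constant velocity,
    close to within `radius` meters of each other; math.inf if they never do."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]       # relative position (m)
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]     # relative velocity (m/s)
    a = dvx * dvx + dvy * dvy
    c = dx * dx + dy * dy - radius * radius
    if a == 0:                                  # same velocity: gap never changes
        return 0.0 if c <= 0 else math.inf
    b = 2 * (dx * dvx + dy * dvy)
    disc = b * b - 4 * a * c
    if disc < 0:                                # paths never get within `radius`
        return math.inf
    t = (-b - math.sqrt(disc)) / (2 * a)        # earliest crossing time
    return t if t >= 0 else math.inf

# A car heading east at 10 m/s meets a pedestrian crossing north at 1.5 m/s:
ttc = time_to_collision((0, 0), (10, 0), (20, -3), (0, 1.5))
print(round(ttc, 2))  # ≈ 1.9 s; very low TTC values are flagged as severe conflicts
```

Run frame by frame over every pair of tracked objects, a measure like this turns hours of video into a count of near misses, with no collision required.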

“We can automatically identify whether somebody is texting,” Sayed says. People distracted by their phones take shorter steps, walk more slowly, and tend to have a more irregular gait than people who aren’t texting.
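Those three gait cues — step length, speed, and irregularity — can be combined into a simple classifier. A hypothetical sketch with invented thresholds (a real system would learn them from labeled video, and the article credits Sayed’s with about 80 percent accuracy):

```python
from statistics import mean, stdev

def looks_like_texting(step_lengths_m, speed_mps):
    """Heuristic sketch of the gait cues the article describes: texting
    pedestrians take shorter steps, walk more slowly, and show a more
    variable stride. All thresholds here are illustrative guesses."""
    short_steps = mean(step_lengths_m) < 0.6        # typical stride is ~0.7 m
    slow = speed_mps < 1.1                          # typical walking is ~1.3 m/s
    irregular = stdev(step_lengths_m) > 0.08        # high stride-length variability
    return sum([short_steps, slow, irregular]) >= 2 # flag if two of three cues fire

print(looks_like_texting([0.48, 0.55, 0.42, 0.58], speed_mps=0.9))  # True
print(looks_like_texting([0.72, 0.70, 0.74, 0.71], speed_mps=1.4))  # False
```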

The cameras typically can’t see inside cars, but the algorithms are reasonably good at estimating when a driver is distracted, based on how long it takes the car to start braking. (Sayed admits that being right 80 percent of the time isn’t an amazing rate, but given the price of sending a human to monitor for distracted drivers and pedestrians, the system offers huge cost savings.)
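The braking-delay idea can be sketched from a per-frame speed trace: find the first frame after a hazard appears where the car decelerates meaningfully, and measure the lag. The deceleration threshold and the “hazard frame” input below are assumptions for illustration, not details from Sayed’s system:

```python
def brake_reaction_time(speeds_mps, dt_s, hazard_frame, decel_threshold=1.0):
    """From a per-frame speed trace, return the delay in seconds between
    the hazard appearing and the first deceleration exceeding
    `decel_threshold` m/s^2, or None if the car never brakes."""
    for i in range(hazard_frame, len(speeds_mps) - 1):
        decel = (speeds_mps[i] - speeds_mps[i + 1]) / dt_s
        if decel > decel_threshold:
            return (i - hazard_frame) * dt_s
    return None

# 10 fps trace: car holds 12 m/s; hazard appears at frame 3; speed first
# drops between frames 10 and 11, so the reaction lag is 7 frames.
speeds = [12.0] * 11 + [11.5, 10.8, 10.0, 9.1]
rt = brake_reaction_time(speeds, dt_s=0.1, hazard_frame=3)
print(round(rt, 2))  # 0.7 s; unusually long lags may suggest a distracted driver
```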

Sayed’s system has been used in 10 countries, and is currently in use in Vancouver and Edmonton in Canada; New York City; and Brisbane, Australia. In Edmonton, redesigning one intersection after analyzing it with computer vision and automated safety analysis reduced collisions by 92 percent, says Gerry Shimko, Edmonton’s executive director of traffic safety. “We went from over 150 collisions over five years, to maybe four or five [total] since it was redesigned in 2010.” He says internal studies have estimated cost savings of $1 million annually.

Sometimes what the cameras show is not exactly rocket science. A New York study Sayed participated in analyzed the intersection at 28th Street and Park Avenue. After two hours of recording, the team had learned: New Yorkers jaywalk a lot.

Similarly, the Edmonton intersection was a known problem, where a road, bridge and train trestle all came together and created poor visibility for drivers. But even then, Shimko says he sees a benefit in being able to quantify improvements without having to wait three years for data to roll in.

The system solves a few other problems. Collision reports as written by police officers are inherently subjective, as they’re based on eyewitness and participant accounts. (The driver who was texting while behind the wheel might not want to admit to a police officer that she was doing so.) Cameras, and automated tracking, let traffic engineers see what actually happened. Did the pedestrian use a crosswalk? Was the cyclist swerving? Did the driver brake with enough time to spare?

Another advantage is simply that video is a better communication tool than written reports. “When you show that video to your elected body, they can clearly see what the issue is without a verbal explanation. They can see there’s a conflict, near misses,” Shimko says. He points to an example where a bus stop at an already busy intersection was blocking drivers’ view of the road ahead. Drivers would attempt to swerve around the bus while it was stopped, which was dangerous for pedestrians and cyclists. The obvious solution: Move the bus stop 100 yards down the street. “It’s very difficult to get a bus stop moved,” Shimko says. “But when we showed [the video] to our transit folks they said, ‘That makes perfect sense. It increases safety for our users as well as the bus driver.’”

In the case of jaywalking pedestrians, there isn’t as much traffic engineers can do, but in a similar situation in Vancouver, Sayed’s team recommended lowering the speed limit at a troublesome intersection. “So if we have conflicts, we don’t have severe conflicts,” he says.

Shimko says that this “next-generation tool … ethically, speaks to what Vision Zero is about. We shouldn’t be allowing people to die or be seriously injured as a result of crashes before we do something.” He added that the automated analysis is just one tool “in conjunction with other data and other techniques [that] allows us to really take traffic safety to the state of the art.”

As for Sayed, his next project is studying how cyclists and pedestrians interact on the Brooklyn Bridge.

Rachel Kaufman is a journalist covering transportation, sustainability, science and tech. Her writing has appeared in Inc., National Geographic News, Scientific American and more.

Tags: big data, cars, pedestrian safety