New York City has 11,412 pay phones, which currently do little more than serve as advertising space. Plans are underway to reinvent them as wi-fi hot spots. But if NYU music professor Tae Hong Park had his druthers, each stall would also contain a small sensor, velcroed to the booth. These sensors would pick up all the ambient sounds of the city — sirens, horns, street musicians — that pay phone users used to try to shout over.
Park is the creator of Citygram, a project dedicated to documenting urban “soundscapes.” He wants to collect acoustic data from the smartphones of residents as well as through fixed sensors — in pay phone stalls, trees or elsewhere. (Park is in discussion with one of the city’s potential industry partners about his vision.) The system will then measure sonic qualities — such as volume and “brightness” — and visually represent them on a dynamic, open-source map. Park and his collaborators, including CalArts and NYU’s Center for Urban Science and Progress, have just completed an initial prototype, with plans to begin using and building on it in the coming months.
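The sonic qualities Park mentions have standard signal-processing counterparts: perceived volume is commonly approximated by root-mean-square (RMS) energy, and “brightness” by the spectral centroid, the center of mass of the frequency spectrum. A minimal sketch of how such measurements might work, assuming a mono audio frame as a NumPy array (the function names are illustrative, not Citygram’s actual code):

```python
import numpy as np

def rms_volume(frame: np.ndarray) -> float:
    """Root-mean-square energy, a common proxy for loudness."""
    return float(np.sqrt(np.mean(frame ** 2)))

def spectral_centroid(frame: np.ndarray, sample_rate: int) -> float:
    """Center of mass of the magnitude spectrum, in Hz.

    Higher values correspond to perceptually 'brighter' sounds.
    """
    magnitudes = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = magnitudes.sum()
    if total == 0:
        return 0.0  # silent frame: centroid is undefined, report 0
    return float(np.sum(freqs * magnitudes) / total)

# Example: one second of a 440 Hz tone vs. a 4400 Hz tone.
sr = 44100
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 440 * t)
high = np.sin(2 * np.pi * 4400 * t)
```

The higher tone yields a higher centroid, which is the sense in which a screeching brake reads as “brighter” than a rumbling truck.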
What will we do with all of that information? Park has plenty of ideas, ranging from the practical to the whimsical. But the underlying goal is “to provide a platform of pure raw data,” says Park, for the community to use in ways he can’t predict.
The most obvious use would be to monitor noise pollution. It’s the number one complaint to 311, and is responsible for a variety of adverse health outcomes, from insomnia to high blood pressure. The soundscape map could give city officials a more comprehensive overview of the noise pollution in the city, equipping them to target resources more effectively. Frequent nuisances include cars, blaring music, loud talking and barking dogs. The Citygram system will be designed to recognize and categorize such noises, and then represent them on the map (picture a puppy icon).
Of course, some noises are more than just annoying. On the extreme end, police could also be instantaneously alerted to gunshots or blood-curdling screams anywhere in the city.
The information, Park hopes, will be useful not just to public officials but to ordinary citizens as well. He outlines the following hypothetical scenario. “It’s a Sunday afternoon, 1 p.m., maybe you have two children, and you’re looking to go to a park in New York City that’s quiet. How do you achieve that? You peruse through the maps and you see what the noise levels are.”
Park also envisions more playful uses, such as interpreting and classifying the moods of different locations. For example, people tend to like birdsong and other nature-based sounds (rustling trees, rainfall), as well as children playing. When those sounds are audible, it also indicates a lack of more noxious noise; the jackhammer isn’t drowning out the chirping of the sparrows.
To get a deeper understanding of human responses to various urban sounds, Park is currently working on a survey of musicians in Korea. The subjects hear two-minute excerpts of street sounds and indicate their emotional response. When the surveys are complete, Park will use the results to develop algorithms to automatically categorize moods. The maps may eventually display happy faces alongside the dog icons.
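One simple way such survey results could be turned into an automatic classifier — a sketch only, with hypothetical feature values and labels, not Park’s actual algorithm — is to average the features of the excerpts listeners assigned to each mood, then label new sounds by the nearest average:

```python
import math

# Hypothetical training data: per-excerpt feature vectors
# (e.g. normalized loudness, brightness) paired with the mood
# label that survey listeners assigned to the excerpt.
TRAINING = [
    ((0.9, 0.8), "stressful"),  # loud and bright, e.g. a jackhammer
    ((0.8, 0.7), "stressful"),
    ((0.2, 0.6), "calm"),       # quiet, e.g. birdsong
    ((0.1, 0.5), "calm"),
]

def centroids(data):
    """Average feature vector for each mood label."""
    sums, counts = {}, {}
    for features, label in data:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def classify_mood(features, data=TRAINING):
    """Assign the mood whose centroid is nearest to the features."""
    cents = centroids(data)
    return min(cents, key=lambda label: math.dist(features, cents[label]))
```

With this nearest-centroid approach, a loud, bright excerpt like `(0.85, 0.75)` would be labeled “stressful,” while a quiet one like `(0.15, 0.55)` would be labeled “calm” — the kind of judgment that could drive a happy-face icon on the map.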
On a similarly fanciful note, Park imagines collective performances with contributions from cities around the world. The soundscape of the street in each location would be sent to the Citygram server. A passing car, for example, might resemble a crescendo. The serendipitous urban din could integrate with intentional sounds such as clapping or conventional music. The idea is “to negotiate these different elements where some things are strategically placed or strategically scored. How do we also engage the environmental sounds from a specific place and do that globally?”
Finally, the data will not only be available in real time but will also accumulate in an archive. If all goes according to plan, a user in 2050 who wants to know what New York sounded like in 2014 will be able to enter a date and hour and see the map. (Though users will be able to hear the actual sounds in real time, the archive will consist only of the visual representation.)
But isn’t there something creepy about recording every sound throughout the city? Given the public’s wariness of surveillance, as well as a growing backlash against omnivorous recording, the project may not sit well with everyone. Park says he’s had the thought, “Huh, is this Big Brother? There is that danger for sure. I think the only way … to make this a community project is for the community to be contributing to it and make it transparent,” he says. He emphasizes that voices will be blurred so that conversations will not be intelligible.
The Citygram team just recently finished testing the system, including the sensors and the map. A handful of sensors are currently deployed in Brooklyn and on the campus of CalArts. They have also devised an app — which allows users to upload sound, or just check out the map — and offered tutorials at conferences on how to use it. In the months to come, they hope to set up many more sensors — ideally in the phone booths — and engage people to use the app on their smartphones.
For all the ambitions of this acoustic map, it is only the first stage of the Citygram project. Ultimately, Park and his colleagues intend to map other “non-ocular energies.” Like what? Electromagnetic waves from cell towers are one possibility — “Who knows what they’re doing to us?” Park says — and smell is another. Park conceives of an exhaustive olfactory map, from restaurants to garbage. “It would be really interesting,” he says, “to map the ebb and flow of the stinkiness of New York City.”
The Science of Cities column is made possible with the support of the John D. and Catherine T. MacArthur Foundation.
Rebecca Tuhus-Dubrow was Next City’s Science of Cities columnist in 2014. She has also written for the New York Times, Slate and Dissent, among other publications.