A historic accord was signed earlier this month at COP21, committing world leaders to slowing climate change. In addition to promising to cut emissions, all signatories agreed to report their progress through newly established universal methods.
Fans of the final COP21 document say the commitment to reporting is a victory for accountability, and one that will be a useful source for sharing what’s working and what’s not across the globe. An official reporting system still needs to be hammered out, but a hint of what transparency and knowledge sharing can accomplish can be found in “Climate Action in Megacities 3.0” (CAM 3.0), a report released in time for the Paris conference that examined what cities around the world are currently doing about climate change.
First, CAM 3.0 reveals a noteworthy role of behavioral science in the shift toward governments prioritizing climate change. The report concludes that 30 percent of the cities’ climate actions cited are a result of city-to-city collaboration. “Given that cities are typically very self-reliant, this is an exceptionally high figure,” the study asserts. And collaboration is, according to the study, accelerating the rate at which cities take actions for the climate. Call it urban peer pressure.
A few factors make it a leap from CAM 3.0’s urban collaboration to certainty about nations’ post-COP21 reporting. For one, C40 Cities Climate Leadership Group was behind the analysis (along with global consulting firm Arup), and as a global network of more than 80 megacities collaborating to address climate change, C40 wanted to highlight success, not failure.
Still, Arup and C40 call CAM 3.0 “the world’s most extensive quantitative study of city climate actions.” The study analyzed nearly 10,000 “actions” taken by 66 cities across 12 sectors, or categories. As countries work toward upholding the COP21 accord, it could prove a helpful model for accountability.
“The point is what people can learn from,” Tania Smith, senior strategy and economics consultant at Arup, says of CAM 3.0. “So the fact that [cities] not doing much aren’t involved is less important. We want to encourage those who might potentially do well to learn from others.”
Designing the survey itself was like doing a logic puzzle. First they chose 12 sectors: adaptation, buildings, community-scale development, energy supply, finance, food and agriculture, mass transit, outdoor lighting, private transport, waste, and water. Within the 12 sectors, actions were slotted into 50 thematic “action areas,” such as “water reclamation and recycling” within the “water” sector. The sectors and action areas had to be mutually exclusive so that, in the data, every survey answer could be identified as a unique response. In other words, for each answer, respondents could check only one box.
“The list had to be classified clearly and over time,” Smith says.
As new actions gain popularity, they must sometimes shake up the classification system. For example, increasingly popular LED streetlights (65 of the 66 C40 cities in the survey are introducing LED street lighting) fall within the “outdoor lighting” sector and the “reducing emissions from street lighting” action area. But not all LED lighting was treated the same. The “buildings” sector also contains LED retrofits for domestic, commercial and public buildings, which fall under the “energy efficiency/retrofit” action area. So similar initiatives in different sectors had to be classified distinctly so as never to be double-counted.
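The mutual-exclusivity rule can be sketched as a small data model. This is a hypothetical illustration: the sector and action-area names come from the report, but CAM 3.0’s actual database schema is not public.

```python
# Hypothetical sketch of CAM 3.0's mutually exclusive taxonomy:
# every reported action maps to exactly one (sector, action area) pair,
# so similar initiatives (e.g., LED lighting) are never double-counted.

TAXONOMY = {
    "outdoor lighting": {"reducing emissions from street lighting"},
    "buildings": {"energy efficiency/retrofit"},
    # ...remaining sectors and action areas omitted for brevity
}

def classify(action: str, sector: str, action_area: str) -> tuple:
    """Return the unique (sector, action_area) tag, or raise if invalid."""
    if sector not in TAXONOMY:
        raise ValueError(f"unknown sector: {sector!r}")
    if action_area not in TAXONOMY[sector]:
        raise ValueError(f"{action_area!r} is not in the {sector!r} sector")
    return (sector, action_area)

# LED streetlights and LED building retrofits land in different sectors,
# so tallying actions per (sector, action area) counts each only once.
streetlights = classify("LED streetlights", "outdoor lighting",
                        "reducing emissions from street lighting")
retrofit = classify("LED retrofit of public buildings", "buildings",
                    "energy efficiency/retrofit")
assert streetlights != retrofit
```

Because each answer checks exactly one box, a simple count over the tags needs no deduplication step.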
Then there was the data unit. Setting out to turn cities’ “actions” into a unit of quantitative data was a monumental task. In a smart but counterintuitive design, the definition of “action” didn’t slice very thinly. For this report, “climate action” is defined as “the measures and initiatives cities take to reduce the severity of climate change (mitigation), or their exposure to the effects of climate change (adaptation).” Examples of actions ranged from establishing incentives for building retrofits, to installing a green roof, to building bus rapid transit lines, to financing waste-to-energy projects — even to talking about financing waste-to-energy projects.
Adding to the difficulty of collecting and gauging this data, cities were self-reporting their actions. Because the definition of actions was so broad, the study designers devised a system to prevent over-reporting by cities that may have an interest in over-representing their climate efforts. (Such caution is warranted. A few months before COP21, China admitted that it had misreported its CO2 emissions due to an error in the types of coal considered.)
First of all, Arup simply asked for a lot of detail.
“If they’re reporting on a lot of actions but without good detail … it becomes quite obvious,” says Thomas Hurst, senior energy and climate change consultant at Arup. To further judge the significance of actions, they also asked cities to rank the scale of their actions with four labels: under consideration, pilot, significant or transformative (meaning already citywide). Of course, cities could still over-report, but, Smith says, “we look at the data and do quite a lot of statistical analysis. When we run queries, it’s clear where there are weird things going on.”
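The four-point scale, and the kind of query that might surface “weird things going on,” could look roughly like this. The report doesn’t describe Arup’s actual analysis pipeline; the field names and the flagging threshold below are illustrative assumptions.

```python
from collections import Counter

# The four scale labels from the survey, in increasing order of reach.
SCALES = ["under consideration", "pilot", "significant", "transformative"]

def scale_profile(actions):
    """Tally a city's reported actions by scale label."""
    counts = Counter(a["scale"] for a in actions)
    unknown = set(counts) - set(SCALES)
    if unknown:
        raise ValueError(f"unrecognized scale labels: {unknown}")
    return {s: counts.get(s, 0) for s in SCALES}

def looks_over_reported(actions, ratio=0.8):
    """Flag a city whose actions are overwhelmingly 'transformative' --
    the kind of pattern a sanity-check query might call out."""
    profile = scale_profile(actions)
    total = sum(profile.values()) or 1
    return profile["transformative"] / total >= ratio

reports = [{"scale": "transformative"}] * 9 + [{"scale": "pilot"}]
assert looks_over_reported(reports)  # 90% transformative gets flagged
```

The point of such a query isn’t to prove over-reporting, only to direct a closer look at implausible-looking profiles.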
Measuring collaboration, and how city actions resulted from cooperation, was another interesting element captured in the study. Arup asked cities what mechanism of knowledge transfer was involved in delivering actions. They found that one-third of all actions were the result of cooperation or networking. They used what they called “a rough definition of what collaboration might mean” because “it’s not something you want a strict definition for. It might be daily meetings or might be a shared report.” This is a gray area, of course. Just as with their definition of “action,” slicing broadly versus thinly produces different effects.
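The collaboration measure itself reduces to a simple share: the fraction of all reported actions whose delivery involved some knowledge-transfer mechanism. A rough sketch, with hypothetical field names:

```python
def collaboration_share(actions):
    """Fraction of actions delivered via a knowledge-transfer
    mechanism (city-to-city meetings, shared reports, networks...)."""
    if not actions:
        return 0.0
    collaborative = sum(1 for a in actions if a.get("knowledge_transfer"))
    return collaborative / len(actions)

sample = ([{"knowledge_transfer": "shared report"}] * 3
          + [{"knowledge_transfer": None}] * 6)
print(round(collaboration_share(sample), 2))  # → 0.33
```

Because the definition of “knowledge transfer” is deliberately loose, the share moves with wherever that line is drawn, which is exactly the gray area Smith describes.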
This will likely make anyone with a scientifically precise perspective uncomfortable. How accurate can we consider respondents’ claims? There’s another challenge of the survey sample that must be considered: Only C40 member cities participated, and they’re all already disposed to working on climate change. So the report doesn’t truly reflect global norms. (It’s not even a guaranteed capture of the norms within C40, because a largely inactive city wanting to avoid publicizing that lack of action could have opted out of the survey altogether.) COP21 follow-up will face a similar challenge. At the end of the day, the bigger the international buy-in, the bigger the chance of transparency and sound reporting. But clear reporting methods will also be needed to underpin those good intentions.
For his part, Hurst’s attitude is similar to that of those who left Paris with a sense of renewed hope.
“The whole point of CAM 3.0 is to allow decision-makers to make more informed decisions by knowing their wider universe, what they can learn and borrow from,” he says. “The idea is you’re measuring things and sharing information. It will hopefully bring us to better environmental outcomes.”
The Science of Cities column is made possible with the support of the John D. and Catherine T. MacArthur Foundation.