The Problem With Some of the Most Powerful Numbers in Modern Policing

Predictive policing shows potential, but not all models are alike.

Chicago police display confiscated firearms. The city is one of two being examined in a new study of the impact of predictive policing on law enforcement. (AP Photo/M. Spencer Green, File)


It’s been four years since a pioneering team of researchers at UCLA began working with the Los Angeles Police Department to test the power of big data analytics to transform the field of law enforcement.

Since then, municipalities of varying sizes have been quietly incorporating so-called “predictive policing” technology to better guide their crime-fighting strategies. The early returns look promising.

In the past year, cities with populations and crime profiles as varied as Alhambra, California, and Atlanta, Georgia, have reported reductions of as much as one-third in some property crimes after adopting predictive algorithms that forecast when and where offenses are likely to occur. Others, like Modesto, California, say they have used predictive policing to successfully offset otherwise devastating staffing cuts while simultaneously reducing crime.

Couple that with potential cost-saving benefits of predictive analytics (as much as $2 million for a single LAPD division by one estimate) and it’s tempting to call the technology a resounding success.

The problem for reporters and policymakers, however, is that the vast majority of what we know about predictive policing comes from data released unilaterally by individual police agencies, or by the firms peddling software to them. This not only makes it hard to compare results from city to city, but raises serious questions of data reliability.

“I think that there are very interesting anecdotes out there that should be followed up, but as researchers we try to get at the causal mechanisms of crime reductions, and a lot of these initial releases don’t get there,” RAND criminologist Jessica Saunders says about the value of current data on predictive policing.

Saunders and a team of researchers are now working to shed new light on the field with the first comprehensive, multi-site study on the impact of predictive policing.

The analysis is being funded by the National Institute of Justice (NIJ) and comprises two phases. Phase I, completed last year, focused on site identification and laying the groundwork for program implementation. For the second phase, which is currently underway, the researchers are employing randomized controlled trials to study the effectiveness of predictive modeling at pilot sites in Shreveport, Louisiana, and Chicago.

While Saunders says a final report will not be out until next spring, in November, RAND released preliminary findings from the Shreveport study, which undercut some of the more glowing reviews of predictive policing by revealing the limitations of the technology.

The Shreveport study analyzed six police districts that were part of an NIJ-backed pilot initiated in 2012. It found that while predictive modeling cut costs by up to 10 percent in some districts, there was “no statistically significant change in property crime in the experimental districts that applied the predictive models compared with the control districts.”

The poor overall performance is not necessarily an indictment of predictive policing. Rather, Saunders attributes it to a lack of uniformity in how district commanders applied the forecasts on the street, and to the absence of a more formalized approach to using the data.

“When you get predictive policing you may have better predictions, but how do you take that slightly better information and turn it into something? I think that’s something we really need to think about,” she says.

Jeff Brantingham is a cultural anthropologist at UCLA who helped establish the first predictive policing trials with the LAPD in 2010. He later parlayed his research into the successful software startup PredPol — now the market leader in predictive policing software.

Brantingham says the most reliable data available today on predictive policing stems from his own randomized controlled trial with the LAPD, which found a two-fold increase in policing efficiency in districts using his algorithms. Those findings are currently undergoing peer review for future publication.

As a social scientist, he too is troubled by the lack of transparency in the current data and welcomes the RAND effort.

“By and large, moving all public policy domain stuff in the direction of actual experimental treatment is the way to go,” he says. “Everyone has a compelling interest in knowing what works and what doesn’t and if something doesn’t work we shouldn’t be doing it.”

The Future of Data-Driven Policing

Data-driven policing was formalized in the 1990s with the launch in New York City of the CompStat program. But revolutionary advances in computing capacity have since opened the field to new possibilities. Skeptics worry that predictive technologies represent a slippery slope to the type of future envisioned by the movie Minority Report, in which suspects are arrested before they have even committed a crime.

Brantingham concedes that’s a reasonable concern, but says it’s important to distinguish between types of predictive policing to identify their usefulness and any potential pitfalls. On the one hand are strategies that focus on geographic areas most likely to be affected by crime (in the case of PredPol, this includes areas as small as two or three houses).

Then there are systems focused on the people most likely to commit crime. The most prominent example of this is the Chicago Police Department’s “Heat List” — which contains the names of 400 people most likely to be involved in violent crime, and has been the subject of criticism by civil liberties groups including the ACLU.

These “person-centric” models are problematic, Brantingham says, because they carry an elevated margin of error and can legitimize racial, gender-based and socioeconomic-driven profiling.

“As a scientist you better be damn sure the model of causality is right or else it’s going to lead to a lot of false positives,” he said.

Among geographic-based solutions, Brantingham questions the efficacy of strategies like “risk-terrain modeling” — which factors environmental variables such as the number of bars or churches in a neighborhood into its forecasts. While it’s possible to draw strong correlations from such information (generally, the more bars, the more crime), he questions the use of non-crime-related data, saying that it forces officers to spend at least some time in areas that may rarely or never experience crime.

By contrast, PredPol’s software includes no environmental or personal information, and relies on just three historical variables to make its predictions: type of crime, place of crime and time of day. Brantingham says this kind of modeling provides the best chance of success with the least potential for waste and abuse.
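To make that description concrete, here is a deliberately simplified sketch (not PredPol's proprietary algorithm, whose internals are not public) of how a forecast can be built from just those three historical fields: crime type, place and time of day. The place names and counts are invented for illustration.

```python
from collections import Counter

def hotspot_forecast(incidents, crime_type, top_n=3):
    """Rank (place, hour) cells by historical incident count for one crime type.

    incidents: list of (crime_type, place, hour) tuples -- the only three
    variables this toy model uses, mirroring the article's description.
    """
    counts = Counter(
        (place, hour)
        for ctype, place, hour in incidents
        if ctype == crime_type
    )
    return [cell for cell, _ in counts.most_common(top_n)]

# Invented sample history for illustration
history = [
    ("burglary", "block-14", 22),
    ("burglary", "block-14", 22),
    ("burglary", "block-14", 23),
    ("burglary", "block-07", 2),
    ("theft", "block-14", 12),
]

# The cell with the most past burglaries is flagged first
print(hotspot_forecast(history, "burglary", top_n=1))
```

A real system would weight recent incidents more heavily and model how one crime raises near-term risk nearby, but even this frequency count shows why such models need no personal or environmental data.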

A competing platform — HunchLab, by Azavea — says its use of risk-terrain modeling in conjunction with a host of other variables (it even includes weather data in its algorithm) sets it apart from first-generation offerings and leads to improved targeting. The solution was rolled out by police in Philadelphia last year, but a study of its efficacy has yet to be released.

It’s certainly not surprising that Brantingham would view his own product as the most effective; however, its simplicity does coincide with one of the fundamental tenets of data science: More does not necessarily mean better.

Indeed, as predictive policing matures, departments are coming to see that its real value lies not in its ability to foresee future crimes, but to prevent them by creating the opportunity for more focused community policing efforts.

Lt. Michael Fischer, who is currently overseeing the rollout of predictive policing software in Mountain View, California, says he is aware of the risk of his officers relying too much on data and is working to prevent that.

“This is just another tool in our arsenal to combat crime,” he said. “It’s about sending the message that we’re not using this to replace the instincts of the officers or getting out and talking to the community.”


Christopher Moraff writes on politics, civil liberties and criminal justice policy for a number of media outlets. He is a reporting fellow at John Jay College of Criminal Justice and a frequent contributor to Next City and The Daily Beast.

