KC police are already using tech to prevent crime. Include the public | Opinion
Predictive policing seeks to use large quantities of data to predict where crimes are likely to occur, along with who is likely to commit these acts. Police can then use these predictions to better allocate resources to prevent crime. While the idea of a machine guiding police movements might seem like a science fiction concept, the practice is used by many major police departments in the U.S., including Kansas City and St. Louis. The yearlong Kansas City Preventive Patrol Experiment, conducted in the early 1970s, is still studied today.
There are two basic types of predictive policing. The first seeks to identify which locations are most likely to experience crimes. The second focuses on predicting which individuals are likely to commit crimes.
There are three main arguments in favor of predictive policing. The first is that this method can effectively reduce crime rates by directing police presence to where it is most needed. For example, after the Kansas City Police Department implemented a predictive policing program a decade ago, murders dropped a reported 20% in 2014 from the previous year. In September 2025, Kansas City police reported an 18% reduction in violent crime from 2024, resulting in part from the Save KC initiative, which seeks to identify and direct resources toward people most likely to commit crimes.
The second, related argument is that predictive policing can efficiently allocate limited police resources, an option that becomes more attractive in times of budget cuts. A third justification is that the algorithms used in predictive policing could be less racially and ethnically biased than human officers. While there is evidence and reasoning to support each of these arguments, there are reasons to be cautious about the use of large-scale data analysis in policing.
The first concern is that several studies suggest predictive policing is simply not as effective in reducing crime as proponents claim. These studies vary in scope and method, but they indicate that sweeping claims about the effectiveness of predictive policing should be taken with a grain of salt. One part of this debate is that the models used in this process are only as accurate as the data they are trained on. As the computer programming saying goes, "Garbage in, garbage out." Because of the sheer number of data points used in some of these models, and the inherent presence of human error, it is difficult to verify that these models are actually being trained on accurate information.
A second, related concern is that while predictive policing models might not be inherently racially biased, if they are trained on data tainted by this bias, they can replicate it, reproducing and possibly entrenching these harms. More broadly, the issue of what counts as bias plays an important and unsettled role in this debate.
The third and final set of concerns centers on transparency and explanation. From a Fourth Amendment perspective, one of the key questions in the legality of policing is probable cause. It remains an open question whether predictive policing models can satisfy this requirement, especially because they identify high-risk locations and individuals through group-based statistical analysis of the broader population, not of the individual in question. These concerns are amplified by the secrecy surrounding the models used in predictive policing, as well as the data sets those models rely on. Moreover, effective policing depends on community trust. As the public backlash against a predictive policing program the Los Angeles Police Department launched in 2011 showed, a lack of transparency can erode public trust in police departments.
Predictive policing is likely too entrenched at this point to be abolished. However, several reforms could expand its benefits while limiting its harms. Legislators and community groups should pressure law enforcement to be more transparent about how, where and when predictive policing is used. Police departments, along with the companies that supply the technology, should have to explain the procedures they follow to minimize the risk that these models rely on false data. And more extensive studies should be conducted to better understand the effectiveness of predictive policing in reducing crime.
While decisions about policing are increasingly made by machines, crime — and therefore policing — remains human, with all the associated strengths and weaknesses.
Blaine Ravert is a Columbia resident whose work has been published in The Columbia Missourian, The Columbia Tribune, The St. Louis Post-Dispatch, The Cipher Brief and CSPC News Roundup.