Law enforcement in America is facing a day of reckoning over its systemic, institutionalized racism and ongoing brutality against the people it was designed to protect. Virtually every aspect of the system is now under scrutiny, from budgeting and staffing levels to the data-driven prevention tools it deploys. A handful of local governments have already placed moratoriums on facial recognition systems in recent months, and on Wednesday, Santa Cruz, California, became the first city in the nation to outright ban the use of predictive policing algorithms. While the privacy risks that facial recognition poses are easy to see, predictive policing programs have the potential to quietly erode our constitutional rights and exacerbate existing racial and economic biases in the law enforcement community.
Simply put, predictive policing technology uses algorithms to pore over massive amounts of data to predict when and where future crimes will occur. Yes, just like Minority Report. These algorithms can guesstimate the times and locations of crimes, the potential perpetrators, and even their upcoming victims based on a variety of risk factors. For example, if the system recognizes a pattern of physical altercations outside a bar every Saturday at 2am, it could suggest increasing police presence there at that time to prevent the fights from occurring.
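To make that bar-fight scenario concrete, here's a deliberately simplified sketch of the pattern-counting idea at the heart of these systems. The incident data and threshold are hypothetical, and commercial tools use far more elaborate statistical models, but the basic mechanic is the same: tally past incidents by place and time, then flag the combinations that recur.

```python
from collections import Counter

# Hypothetical incident log: each entry is (location, weekday, hour).
# Real systems ingest years of police reports, not four rows.
incidents = [
    ("Main St bar district", "Sat", 2),
    ("Main St bar district", "Sat", 2),
    ("Main St bar district", "Sat", 2),
    ("5th Ave parking lot", "Tue", 14),
]

# Count how often each (location, weekday, hour) pattern appears.
counts = Counter(incidents)

# Flag any pattern seen at least this many times as a "hotspot"
# (an arbitrary cutoff for illustration).
THRESHOLD = 3
hotspots = [pattern for pattern, n in counts.items() if n >= THRESHOLD]

print(hotspots)  # → [('Main St bar district', 'Sat', 2)]
```

The recurring 2am Saturday fights surface as a hotspot, which is exactly the kind of output that gets translated into a patrol recommendation. It also shows why critics worry: the system can only find patterns in the data it's fed, so neighborhoods that are already over-policed generate more reports and get flagged more often.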