Algorithms are ubiquitous in the criminal legal system. As we all move through the world, a mosaic of overlapping surveillance technologies collects and stores information about us. Law enforcement agencies, adjudicators and corrections officials use algorithms to filter through the billions of pieces of data those technologies gather and to make calculations about which neighborhoods to police, what offense to charge someone with, whether bail should be set, how long that person should be incarcerated and when they will be free again.
These algorithms don’t produce “neutral” or “objective” calculations. They are built with real-world data, which records and reflects the criminal legal system’s biases and abuses. That means, for example, that police relying on the output of a crime forecasting algorithm will patrol the same streets and target the same people they have in the past. The arrests and reports generated by those patrols then become new training data, which the algorithm reads as confirmation that it sent officers to the right place. In most cases, that feedback loop produces a persistent and disproportionate police presence in communities of color.
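To make that feedback loop concrete, here is a minimal Python sketch of the dynamic. Everything in it is an illustrative assumption, not a description of any real vendor’s system: two hypothetical neighborhoods with identical underlying rates of offense, a starting dataset skewed by historically uneven patrols, and a toy dispatch rule that sends police wherever past records are densest.

```python
import random

# Toy model of a predictive-policing feedback loop. Neighborhood names,
# rates and the dispatch rule are hypothetical, chosen only to
# illustrate the dynamic described above.

TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}  # identical underlying rates
observed_counts = {"A": 20, "B": 10}      # B starts under-recorded

def dispatch(counts):
    """Send the patrol wherever recorded incidents are highest."""
    return max(counts, key=counts.get)

for day in range(1000):
    patrolled = dispatch(observed_counts)
    # Incidents are only *recorded* where police are present,
    # even though both neighborhoods offend at the same true rate.
    if random.random() < TRUE_CRIME_RATE[patrolled]:
        observed_counts[patrolled] += 1

print(observed_counts)
# Typical result: neighborhood A accumulates nearly all new records
# while B stays frozen at 10, so the "data" keeps justifying the
# same patrols indefinitely.
```

In this sketch the algorithm never discovers that the two neighborhoods are identical, because it only learns from the records its own deployments generate. That is the sense in which biased inputs do not merely persist but compound.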
The companies that build the algorithms behind policing technologies take existing approaches to policing and punishment for granted, even as communities actively contest those approaches. What should the police’s role be? What is the real meaning of public safety? Are there better ways to prevent and redress harm? Those questions must remain part of ongoing democratic deliberation about justice. But policing algorithms codify one set of answers to them, thereby preserving the status quo.
Most of us are unaware of the ways that algorithmic tools, trained on information gathered through continuous mass surveillance, influence legal system actors who have enormous power over our lives. This project’s goal is to illuminate those influences and to begin making an opaque and disorienting system legible.