Pretrial jail

When the police tell you why they are arresting you, they don’t give you information about how you were identified. They also don’t tell you that they were monitoring you for months as an at-risk individual. While held in jail, you are placed under maximum supervision, but you are not told why, and you have no opportunity to challenge the decision.

[Image: a prisoner looking in a mirror with a police officer]

Risk assessment technologies: case management, planning, and supervision in jail

In jail, case managers use RATs to supplement the process by which they evaluate incarcerated people’s supposed risk of reoffending and what resources might reduce that supposed risk.


Crime forecasting

Crime forecasting algorithms use historical police data to calculate the probability of future police activity within a given geography or time frame. Law enforcement officials use those algorithms to label certain locations “at-risk” or certain people “likely” to commit or be the victim of a crime, and then they base decisions about policing and prosecution on those labels.


Face recognition technology

Face recognition technology, also called facial recognition, is a biometric tool used to determine the likelihood that two images are both representations of the same person’s face. Law enforcement agencies use face recognition software to attempt to identify people in photos or video footage.


more info on

Face recognition technology

There are generally two types of face recognition systems: one-to-one matching, which compares two photos to one another to verify they’re the same person, and one-to-many matching, which compares a photo of an unknown person to a database of known persons in order to identify someone. In the latter, officers upload a probe photo of a face to be identified, and an algorithm analyzes its features, compares it to a database of face photos and then issues a list of potential matches along with a numerical confidence score for each match.
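To make those mechanics concrete, here is a minimal sketch of both kinds of matching in Python. It is not any vendor’s actual product: it assumes face photos have already been reduced to fixed-length feature vectors (“embeddings”) by some model, and the function names, similarity measure and threshold are illustrative assumptions.

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(embedding_a, embedding_b, threshold=0.8):
    """One-to-one matching: decide whether two photos show the same person."""
    return similarity(embedding_a, embedding_b) >= threshold

def identify(probe, database, top_k=5):
    """One-to-many matching: rank enrolled people against a probe photo.

    Returns (name, confidence score) pairs; only people already enrolled
    in the database can ever be returned as a potential match.
    """
    scores = [(name, similarity(probe, emb)) for name, emb in database.items()]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:top_k]
```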

What face recognition tools are based on

Face recognition tools have two constituent parts: an algorithm applied to a face photo and a database to which analyzed face photos are compared. The algorithm, which is typically shielded from public view by copyright and trade secret law, is created by training a model on vast sets of labeled face photos. The more data the model processes, the better it gets at matching: it detects patterns in facial features and applies those patterns to novel images. The database is the collection of face photos — often a mugshot or driver’s license database — to which face recognition users (in this case police) compare the result of the algorithm’s analysis. It is the pool of people from which a match can be detected; only images within the database can be returned as a potential match.
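The training side can likewise be sketched. The snippet below is a toy stand-in, not a commercial pipeline: the dataset, network architecture and loss are all assumptions, but it shows the basic move of fitting a model to labeled face photos so that it learns features that distinguish identities.

```python
import torch
import torch.nn as nn

NUM_IDENTITIES = 1000   # hypothetical number of labeled people in the training set
EMBEDDING_DIM = 128     # length of the feature vector ("template") per face

class FaceEmbedder(nn.Module):
    """Maps a face image to a fixed-length feature vector."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for a real convolutional network
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, EMBEDDING_DIM),
        )
        # Identity classifier used only during training; deployed matching
        # compares embeddings, not class predictions.
        self.classifier = nn.Linear(EMBEDDING_DIM, NUM_IDENTITIES)

    def forward(self, images):
        return self.backbone(images)

model = FaceEmbedder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data: the model is pushed
# to tell identities apart, i.e. to detect patterns in facial features.
images = torch.randn(32, 3, 112, 112)             # a batch of face crops
labels = torch.randint(0, NUM_IDENTITIES, (32,))  # whose face each crop shows
optimizer.zero_grad()
loss = loss_fn(model.classifier(model(images)), labels)
loss.backward()
optimizer.step()
```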

How face recognition tools are used

Law enforcement agencies use face recognition for two primary purposes: to verify an individual’s identity and to identify an unknown individual. Verification uses one-to-one matching, whereas identification uses one-to-many. Officers use face recognition to try to identify people in the field, after they’ve been arrested, in a photo or video during an investigation, or on real-time video surveillance.
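Continuing the hypothetical sketch above, the two uses differ only in what the probe photo is compared against (random stand-in embeddings take the place of real face photos here):

```python
# Uses the similarity/verify/identify functions and numpy import sketched earlier.
rng = np.random.default_rng(0)
mugshot_database = {f"person_{i}": rng.standard_normal(128) for i in range(1000)}
probe = rng.standard_normal(128)

print(verify(probe, mugshot_database["person_0"]))      # verification (1:1)
for name, score in identify(probe, mugshot_database):   # identification (1:N)
    print(f"{name}: confidence {score:.2f}")
```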

Biases

Bias can exist at a number of key points in a face recognition system: the algorithm, the database of faces or the human implementation of the technology. The algorithm can be biased if it’s trained on data that favors lighter-skinned men — as many commercially available algorithms are — and performs worse on ethnic, racial and gender minorities. The comparison database can be biased if people of certain demographic groups are overrepresented. Because low-income, Black and other communities of color are subject to greater levels of law enforcement, they are overrepresented in mugshot photo databases; when such a comparison database is used, marginalized people are more likely to be identified or misidentified. Finally, face recognition can be implemented in harmful ways that reinforce bias. Officers can manipulate the probe photos they submit for searches: copying features from one face and pasting them onto another, heavily editing photos or submitting composite sketches or celebrity doppelgangers.
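Audits that expose these disparities typically compare error rates across demographic groups. A minimal sketch, with invented data and field names:

```python
from collections import defaultdict

# Each record: one identification attempt with known ground truth.
trials = [
    {"group": "lighter-skinned men", "correct": True},
    {"group": "darker-skinned women", "correct": False},
    # ... thousands more labeled trials in a real audit
]

tallies = defaultdict(lambda: [0, 0])  # group -> [mistakes, total trials]
for trial in trials:
    tallies[trial["group"]][1] += 1
    if not trial["correct"]:
        tallies[trial["group"]][0] += 1

for group, (mistakes, total) in tallies.items():
    print(f"{group}: {mistakes / total:.0%} misidentification rate ({total} trials)")
```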

Impacts of increased surveillance

Face recognition alters the balance of power between people and governments, giving government agents the ability to identify and track people in secret. By virtue of being enrolled in a face recognition database, people are part of a perpetual lineup, always a potential suspect in a criminal investigation. The fear of government surveillance has chilling effects. It makes people less likely to attend protests or engage in political speech for fear of being watched.

See the appendix for more information.

Crime forecasting

Crime forecasting algorithms use historical police data to calculate the probability of future police activity within a given geography or time frame. Law enforcement officials use those algorithms to label certain locations “at-risk” or certain people “likely” to commit or be the victim of a crime, and then they base decisions about policing and prosecution on those labels.

more info on

Crime forecasting

“Crime forecasting” refers to two separate but related processes: predictive policing and data-driven prosecution. Both use historical police data to calculate the probability that people will be arrested or that crimes will be reported in a given area in the future. Police use the algorithms’ calculations as a basis for targeting particular neighborhoods or particular people. Prosecutors use those calculations as a basis for charging decisions and sentencing recommendations.

What crime forecasting tools are based on

There are two types of predictive policing tools: place-based, which maps so-called “hot spot” areas based on historical police activity, and person-based, which uses historical police data to generate lists of individuals supposedly at risk of committing or being the victim of a crime.

Place-based policing algorithms rely primarily on police department data, including 911 calls and community and police reports of suspected crime. They may give weight to data points such as reports of property crime or vandalism, juvenile arrests, the presence of people on parole or probation, and disorderly conduct calls. Some algorithms even give weight in their calculations to things like weather patterns, the presence of liquor stores and population density. 
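At its core, a place-based tool does something like the following sketch: grid the city, count historical police records per cell and rank the cells. The coordinates, cell size and data are invented; the point is that the input is recorded police activity, not crime itself.

```python
import math
from collections import Counter

CELL_SIZE = 0.005  # grid resolution in degrees of latitude/longitude (assumed)

# Historical records: coordinates of past 911 calls, reports and arrests.
incidents = [(41.881, -87.623), (41.882, -87.624), (41.900, -87.650)]

def cell(lat, lon):
    """Snap a coordinate onto the grid."""
    return (math.floor(lat / CELL_SIZE), math.floor(lon / CELL_SIZE))

counts = Counter(cell(lat, lon) for lat, lon in incidents)
print(counts.most_common(10))  # cells ranked by recorded police activity
```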

Person-based algorithms rely on data collected about individual histories of interaction with the criminal legal system. These tools create lists of names and assign accompanying “risk scores” based on data from arrest records, inclusion in gang databases, parole and probation records, and police reports.
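A person-based score can be sketched as a weighted tally of police-generated records. The factors and weights below are invented for illustration, not drawn from any real instrument; note that every input is a record of past police contact, not of underlying behavior.

```python
WEIGHTS = {  # assumed rubric
    "prior_arrests": 5,
    "gang_database_entry": 20,
    "on_parole_or_probation": 10,
    "named_in_police_reports": 2,
}

def risk_score(record):
    return sum(WEIGHTS[factor] * record.get(factor, 0) for factor in WEIGHTS)

person = {"prior_arrests": 3, "gang_database_entry": 1, "named_in_police_reports": 4}
print(risk_score(person))  # 3*5 + 1*20 + 4*2 = 43
```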

“Data-driven prosecution” refers to a set of data analysis techniques that prosecutors use when making decisions about what charges to bring, what sentences to ask for and how to dispose of cases. As a process, it is relatively understudied, but in general it describes prosecutorial reliance on data from the criminal legal system, including law enforcement data about active cases, data about the locations of previously reported crimes, lists of individuals whom police have identified as “priority offenders” and data about probationers and parolees. More research is necessary to understand what specific algorithmic products prosecutors are using; how they are using them; and what policies, if any, guide or constrain these practices.

Risks and biases

Developers build crime forecasting algorithms with historical police data, meaning the algorithms’ outputs will reflect inequities in the criminal legal system. That includes data from decades of documented policing practices that are biased, corrupt and even unlawful: falsifying crime records, planting evidence, targeting Black and Brown communities and otherwise manipulating crime data. All of those biased decisions, originally made by humans, become part of the algorithm that corporations and law enforcement agencies claim can “predict” future crime. Police then spend disproportionate amounts of time targeting the same communities and individuals they have targeted in the past, creating a feedback loop. 
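A toy simulation, with invented numbers, shows how that loop perpetuates itself even when two neighborhoods have identical underlying offense rates:

```python
# Neighborhoods assumed to have identical true offense rates, but A starts
# with more recorded incidents because it was policed more heavily in the past.
recorded = {"A": 100, "B": 50}

for year in range(1, 4):
    total = sum(recorded.values())
    patrol_share = {n: c / total for n, c in recorded.items()}  # forecast mirrors records
    for n in recorded:
        # More patrols mean more stops and arrests get recorded, regardless
        # of how much crime actually occurs.
        recorded[n] += round(200 * patrol_share[n])
    print(year, recorded)
# A's two-to-one share of policing never shrinks: the biased starting data
# reproduces itself in every "objective" forecast that follows.
```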

Historical crime and arrest data don’t give reliable information about where the most crime is happening; they merely reveal where the most policing is happening. What gets measured, and subsequently fed into policing algorithms, are the incidents that are reported to police or that the police themselves report, and the people police arrest. But the crime that is reported to police varies by community — white and wealthier communities are less likely to report crimes to the police. And even where the data shows that Black and white people commit certain offenses at the same rate — for example, offenses relating to drug use and sale — in many cases Black people are significantly more likely than white people to be arrested or incarcerated. Crime forecasting tools take the data from this kind of biased policing and represent it as objective evidence of where future offenses will be committed and by whom.

See the appendix for more information.

Risk assessment technologies: case management, planning, and supervision in jail

In jail, case managers use RATs to supplement the process by which they evaluate incarcerated people’s supposed risk of reoffending and what resources might reduce that supposed risk.

more info on

Risk assessment technologies: case management, planning, and supervision in jail

When case managers estimate whether a person is at risk of being charged with misconduct while incarcerated or rearrested after leaving jail, they consider factors such as physical and mental health, records of substance abuse, education, housing situation and employment history. Some case managers rely on RAT risk scores to make those judgments.
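As a rough illustration, such an instrument might fold those factors into a single score with fixed cut-offs. The items and thresholds below are invented, not taken from any real RAT:

```python
NEEDS_ITEMS = (  # hypothetical instrument: one point per flagged item
    "unstable_housing",
    "unemployed",
    "substance_abuse_history",
    "no_high_school_diploma",
    "untreated_mental_health_need",
)

def rat_category(assessment):
    """Turn a case manager's yes/no answers into a supervision category."""
    points = sum(1 for item in NEEDS_ITEMS if assessment.get(item, False))
    if points >= 4:
        return "high risk"  # cut-offs are assumptions, not a real instrument's
    return "medium risk" if points >= 2 else "low risk"

print(rat_category({"unstable_housing": True, "unemployed": True}))  # "medium risk"
```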

See the appendix for more information.