Booking & bail

You are booked into jail. You are told that, based on your history and life circumstances, you are a flight risk and a risk to public safety and therefore denied bail.

Person being booked

Risk assessment technologies: bail

Defendants often first encounter RATs before trial during their bail hearing. Here, judges use RATs in one of two ways: to gauge the likelihood of an individual’s failure to appear at their trial or to assess the risk they pose to public safety.


Risk assessment technologies (RATs)

Risk assessment technologies (RATs) are a general class of algorithms that calculate the probability, based on past law enforcement data, that a person will have a particular interaction or outcome in the criminal legal system, such as being arrested or missing a court hearing. Typically, judges and corrections officials consult RATs when making decisions about the level of state control or supervision a person will receive — for example, how to set bail, how long a criminal sentence will be or whether to grant parole.

Risk assessment technologies: bail

One feature of the “tough on crime” politics that, beginning in the 1970s and 1980s, helped bring about our current era of mass incarceration is a focus on an individual’s supposed “dangerousness” to the community. That question has become a major point of deliberation for judges making bail decisions, and the companies that create RATs, along with the officials who use them, claim that RATs make those decisions easier and more objective by assigning defendants numerical risk scores. Neither “dangerousness” nor the “community” in question, however, is well defined.

See the appendix for more information.

Risk assessment technologies

Basics

Criminal legal system officials rely on RAT scores when making decisions about bail; sentencing; case management in jail, in prison, and on parole; discretionary release or parole; and conditions of probation. Depending on the jurisdiction, officials may use the same RAT product at several decision points (for example, in decisions about both sentencing and parole), or they may use different RAT products marketed for specialized purposes (for example, “predictive policing”).

What RATs are based on

RATs give predictive weight to correlations between previous defendants’ characteristics and their outcomes in the criminal legal system. Prior criminal history is by far the most common factor used in RATs: a person with an extensive history of contact with the criminal legal system, for example, is likely to be classified by an RAT as high risk. RATs also take into account data from law enforcement about an individual’s drug use, family and social support, community or neighborhood, and employment status.
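To make the mechanics concrete, the kind of weighted scoring described above can be sketched in a few lines. Everything here is invented for illustration: the feature names, weights, and score thresholds do not come from any real RAT product.

```python
# A toy risk-scoring sketch. All feature names, weights, and
# thresholds are hypothetical; no real RAT is reproduced here.

# Prior criminal history carries the most weight, mirroring the
# pattern described above.
FEATURE_WEIGHTS = {
    "prior_arrests": 0.8,
    "prior_failures_to_appear": 1.2,
    "under_23_at_first_arrest": 1.5,
    "unemployed": 1.0,
    "unstable_housing": 1.0,
}

def risk_score(features):
    """Weighted sum of defendant features, clamped to a 0-10 scale."""
    raw = sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())
    return max(0.0, min(10.0, raw))

def risk_label(score):
    """Bucket the numerical score into the labels a judge might see."""
    if score < 3.0:
        return "low"
    if score < 6.0:
        return "medium"
    return "high"

defendant = {
    "prior_arrests": 4,
    "prior_failures_to_appear": 1,
    "under_23_at_first_arrest": 1,
    "unemployed": 1,
    "unstable_housing": 0,
}
score = risk_score(defendant)
print(f"{score:.1f} {risk_label(score)}")  # prints: 6.9 high
```

Real products weight dozens of such factors, fit to historical law enforcement data rather than set by hand, but the structure is the same: features in, weighted score out, score bucketed into a label.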

Risks and biases

While originally marketed as a way to reduce mass incarceration, reform the bail process, combat bias in judicial decisions and more efficiently allocate scarce resources, RATs can actually fuel inequities. The problems with RATs can be broken down into six main issues.

  1. There is no consensus on what makes an RAT algorithm fair.
  2. RATs use real-life data, reflecting an inequitable and biased criminal legal system.
  3. Because of existing inequities in the criminal legal system and society more broadly, data that is on its face race-neutral, such as ZIP codes, can be used as a proxy for race.
  4. Judges’ interpretations of risk scores vary greatly, and there are no established criteria for distinguishing levels of risk in the first place.
  5. Many algorithms are developed by private actors and hidden from outside inspection, making them impossible to audit.
  6. RATs base their calculations of individual behavior on data about group behavior, which may raise equal protection concerns under the 14th Amendment.
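Point 3 can be demonstrated with a small, fully synthetic simulation (the population, ZIP codes, and arrest rates below are all invented, and the correlation between neighborhood and group is engineered on purpose): a scorer that never sees a person’s group can still produce systematically different scores across groups when a nominally neutral input like ZIP code correlates with group membership.

```python
import random

random.seed(0)

# Fully synthetic population. Group membership is never shown to the
# scorer; only ZIP code is. The residential split is engineered so
# that ZIP code correlates with group, as segregation does in practice.
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    if group == "A":
        zip_code = "10001" if random.random() < 0.9 else "10002"
    else:
        zip_code = "10002" if random.random() < 0.9 else "10001"
    people.append({"group": group, "zip": zip_code})

# Invented historical arrest rates by ZIP, reflecting where enforcement
# was concentrated rather than any difference in underlying behavior.
historical_arrest_rate = {"10001": 0.30, "10002": 0.10}

def score(person):
    # The scorer sees only the ZIP code, a nominally race-neutral input.
    return historical_arrest_rate[person["zip"]]

def mean_score(group):
    scores = [score(p) for p in people if p["group"] == group]
    return sum(scores) / len(scores)

print(f"mean score, group A: {mean_score('A'):.3f}")
print(f"mean score, group B: {mean_score('B'):.3f}")
```

Here group A averages roughly 0.28 and group B roughly 0.12, even though the scorer was only ever given a ZIP code.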

See the appendix for more information.