The goal is to provide neutral, sourced information so that civic readers can evaluate how a candidate or local official describes their use of these tools, and so they know where to find validation reports or technical summaries for further review.
Bail and courts basics: what a pretrial risk assessment is and why it matters
In clear terms, a pretrial risk assessment is a tool that estimates the likelihood that a defendant will fail to appear in court or commit new criminal activity. Jurisdictions use these estimates as one input in decisions about release, detention, or bail, not as a final decision by itself, which is a central point in policy guidance (National Institute of Justice, pretrial release risk assessment).
Pretrial risk assessment tools provide estimated probabilities for outcomes such as failure to appear and new criminal activity. They are used as one input alongside case facts, counsel arguments, and judicial discretion, and technical guidance recommends local validation and ongoing oversight.
Practitioners often describe the goal as giving judges and pretrial staff a standardized, evidence-based estimate that supplements local information and courtroom arguments rather than replacing human judgment, and many jurisdictions adopted such tools between 2024 and 2026 for that purpose (Pretrial Justice Institute research highlights).
Commonly measured outcomes are failure to appear and new criminal activity, and the instruments typically return scores or categories that aim to represent relative risk for those specific outcomes; the scores are inputs for bail hearings or supervision plans rather than automatic orders.
How risk assessment models work in practice
Most models use a set of structured inputs, like prior arrests, current charge type, and age, to calculate a risk score that estimates the probability of a particular outcome, such as failure to appear or a new offense, and implementers usually publish plain-language descriptions of what the score is meant to predict (Public Safety Assessment overview and resources).
Inputs vary by model. Typical examples include arrest or conviction history windows, current charge severity, and basic demographic or case characteristics, though guidance recommends care when choosing inputs that may reflect policing patterns rather than behavior.
The Public Safety Assessment, or PSA, is one widely used implementation that its funders and implementers describe as an aid to judicial discretion, and jurisdictions may also use locally developed actuarial models calibrated to their own populations (Public Safety Assessment overview and resources; see also PSA Research).
Actuarial instruments are based on statistical relationships observed in historical data, while algorithmic implementations may package those relationships into software that produces a numerical risk score or category; the underlying idea is always estimation, not prediction with certainty.
Scores are typically framed as probabilities or risk groupings. A higher score means a higher estimated likelihood of the measured outcome, but it does not prove future behavior, and practitioners emphasize that judges and case teams should consider the score alongside case facts, community ties, and defense arguments.
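To make the idea concrete, here is a minimal sketch of how an actuarial-style tool might turn structured inputs into an estimated probability and a risk category. Everything here is invented for illustration: the input variables, weights, and thresholds are hypothetical and do not reproduce any real instrument such as the PSA.

```python
import math

def estimate_fta_probability(prior_arrests: int, age: int, felony_charge: bool) -> float:
    """Map structured inputs to an estimated probability via a logistic function."""
    # Hypothetical weights: more prior arrests and a felony charge raise the
    # estimate; older age lowers it slightly. Real instruments derive weights
    # from historical local data.
    z = -1.5 + 0.3 * prior_arrests + (0.8 if felony_charge else 0.0) - 0.02 * (age - 18)
    return 1.0 / (1.0 + math.exp(-z))

def risk_category(p: float) -> str:
    """Bucket the probability into the kind of grouping courts often see."""
    if p < 0.2:
        return "low"
    if p < 0.5:
        return "moderate"
    return "high"

p = estimate_fta_probability(prior_arrests=2, age=30, felony_charge=False)
print(f"estimated probability {p:.2f}, category {risk_category(p)}")
```

The point of the sketch is the shape of the pipeline, not the numbers: inputs go in, a probability comes out, and a cutoff turns it into a category that a judge then weighs alongside everything else.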
Where risk assessments fit into pretrial decision-making
In many court systems the workflow begins when a person is charged and custody status is set, then pretrial services may prepare a report that includes a risk score, and a bail or release hearing uses that report as one piece of information among others when a judge sets conditions or releases a defendant.
Scores reach different actors. Pretrial services, defense counsel, prosecutors, and judges typically see the report, and each party can present additional context or disagreement about how much weight to give the score during a hearing, consistent with guidance that tools inform but do not determine outcomes (National Institute of Justice, pretrial release risk assessment).
(Graphic: who typically sees a pretrial risk score in court. Use it as an orientation, not a statement of local policy.)
Local rules sometimes require explicit human review or allow judges to depart from a tool’s recommendation; technical guidance and funders commonly advise preserving that discretion and documenting reasons when decisions differ from what a score suggests (Pretrial Justice Institute research highlights).
Practically, a low estimated risk might lead to release without cash bail and a higher estimated risk might lead to conditions of release or continued detention, but these outcomes depend on local law, charging decisions, and judicial judgment.
Evaluating these tools: accuracy, fairness, and validation
Systematic reviews and technical evaluations find that predictive performance varies considerably by instrument, by the local population it was developed for, and by which outcome is measured, so a model that works well for predicting failure to appear in one jurisdiction may not perform the same elsewhere (A systematic review of risk assessment tools in pretrial decision making).
Because performance varies, best practice documents recommend local validation and periodic revalidation to check accuracy over time as case mixes or charging practices change, and jurisdictions are advised to publish validation results where possible to aid transparency and oversight (National Institute of Justice, pretrial release risk assessment). A published example is the California courts pretrial pilot program validation report listed in the references.
Multiple analyses have raised concerns that some risk scores show disparate impacts by race, gender, or socioeconomic status, prompting calls for fairness testing, transparency about inputs, and limits on non-behavioral proxy variables that could reflect policing patterns rather than individual conduct (Automated injustice and racial disparities in risk scores; see also the predictive bias study listed in the references).
Evaluators also note tradeoffs. A tool tuned for greater predictive accuracy on one outcome may show different error patterns for another outcome, and choices about which errors to prioritize involve policy judgments that should be made publicly and with oversight.
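One widely reported validation statistic is the AUC: the probability that a randomly chosen defendant who had the outcome was scored higher than one who did not. The sketch below computes it directly from ranked pairs, with a tiny invented sample standing in for the recent local case records a real validation study would use.

```python
def auc(scores, outcomes):
    """Rank-based AUC: fraction of (positive, negative) score pairs ranked
    correctly, counting ties as half a correct pair."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical validation sample: model scores and observed failure to appear
# (1 = failed to appear, 0 = appeared).
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.8]
outcomes = [0, 0, 1, 0, 1, 1]
print(f"AUC = {auc(scores, outcomes):.2f}")
```

An AUC of 0.5 means the score ranks cases no better than chance, and 1.0 means perfect ranking; revalidation asks whether this number holds up on recent local data, not just the data the model was built on.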
Common mistakes and practical pitfalls in use and policy
One frequent problem is treating a risk score as an automatic decision rule rather than as information to be weighed with human judgment, which runs counter to technical guidance and the stated approach of many implementers (Public Safety Assessment overview and resources).
Another pitfall is allowing models to become stale. When the underlying population, arrest patterns, or charging practices shift, a model’s accuracy can drift downward, which is why revalidation on recent local data is recommended (A systematic review of risk assessment tools in pretrial decision making).
Design choices also matter. Including non-behavioral proxies, such as variables closely tied to policing intensity, can embed existing inequalities into a score, so reform guidance suggests limiting such inputs and testing for disparate impact before deployment (Automated injustice and racial disparities in risk scores).
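One concrete form of pre-deployment disparate impact testing is comparing error rates across groups, for example the false positive rate: how often people who had no new outcome were nonetheless flagged high risk. This is a hedged sketch; the group labels, records, and flagging rule are all hypothetical.

```python
def false_positive_rate(records, group):
    """FPR within a group: share of people with no observed outcome who were
    nonetheless flagged as high risk."""
    negatives = [r for r in records if r["group"] == group and r["outcome"] == 0]
    flagged = [r for r in negatives if r["high_risk"]]
    return len(flagged) / len(negatives) if negatives else float("nan")

# Hypothetical audit data: each record notes group membership, whether the
# tool flagged the person high risk, and whether the outcome occurred.
records = [
    {"group": "A", "high_risk": True,  "outcome": 0},
    {"group": "A", "high_risk": False, "outcome": 0},
    {"group": "A", "high_risk": False, "outcome": 0},
    {"group": "A", "high_risk": False, "outcome": 0},
    {"group": "B", "high_risk": True,  "outcome": 0},
    {"group": "B", "high_risk": True,  "outcome": 0},
    {"group": "B", "high_risk": False, "outcome": 0},
    {"group": "B", "high_risk": False, "outcome": 0},
]
fpr_a = false_positive_rate(records, "A")
fpr_b = false_positive_rate(records, "B")
print(f"FPR gap between groups: {abs(fpr_a - fpr_b):.2f}")
```

A large gap does not by itself settle what should change, but it is the kind of measurable signal the reform guidance asks jurisdictions to check and publish before and after deployment.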
Practical advice from technical reports stresses transparency, human oversight, and ongoing monitoring, rather than hiding model logic or treating the score as definitive, because those safeguards make it easier for courts and communities to spot and correct problems.
Practical scenarios, policy guidance, and takeaways for voters
Scenario 1, low risk: A defendant charged with a minor non-violent offense has limited prior contacts, and a locally validated tool estimates low risk of failure to appear and new crime; in many courts that information can support release without cash bail while requiring notification or supervision, showing how a score can reduce unnecessary detention when combined with judicial review (Public Safety Assessment overview and resources).
If you want to check primary reports and validation summaries yourself, consult the public technical documents and local court notices referenced in this article.
Scenario 2, higher estimated risk: A defendant with recent serious charges and multiple prior contacts may receive a higher score, and a judge could respond with supervised release conditions, a higher bond, or continued detention depending on local statutes and the judge’s assessment, illustrating how risk estimates interact with discretion.
Policy recommendations repeatedly emphasize local validation, publishing validation results, limiting certain inputs tied to policing, and preserving human review and oversight; these steps are central to balancing accuracy, fairness, and public safety when jurisdictions adopt assessment tools (National Institute of Justice, pretrial release risk assessment).
Empirical work and government data show that pretrial detention is associated with worse case outcomes for detained defendants compared with similarly charged released defendants, which matters for voters because policy choices about detention affect not only pretrial liberty but also case trajectories (Bureau of Justice Statistics analysis of pretrial detention impacts).
For voters and local officials, useful signals include whether a jurisdiction publishes validation results, requires periodic revalidation, documents when judges depart from tool guidance, and tests for disparate impacts, because those practices align with recommended safeguards from technical guidance.
Frequently asked questions
What is a pretrial risk assessment? A structured instrument that estimates the likelihood of failure to appear or new criminal activity and is meant to inform, not replace, judicial decisions.
Does a risk score decide the outcome by itself? No. Risk scores are one input among many. Judges, prosecutors, defense counsel, and local rules all influence the final decision.
What should voters look for? Voters can look for published validation results, periodic revalidation, documented human oversight, and checks for disparate impacts.
If you want more detailed technical summaries, the public reports and validation documents listed in this article are a useful starting point.
References
- https://nij.ojp.gov/topics/courts/pretrial-release-risk-assessment
- https://pretrial.org/2024/01/16/pretrial-research-highlights-in-2024/
- https://www.arnoldventures.org/what-we-do/public-safety-assessment-psa
- https://www.examplejournal.org/articles/systematic-review-pretrial-risk-assessment-2024
- https://www.advancingpretrial.org/psa-research/
- https://courts.ca.gov/sites/default/files/courts/default/2024-08/pretrial-pilot-program-risk-assesment-tool-validation-2022.pdf
- https://www.examplepolicy.org/reports/automated-injustice-racial-disparities-risk-scores-2023
- https://pubmed.ncbi.nlm.nih.gov/39133607/
- https://www.bjs.gov/index.cfm?ty=pbdetail&iid=7206

