1. Introduction
2. Pretrial Performance Defined and Expressed
3. Identifying Your Performance Measures
4. Reporting Your Performance Measures

Introduction

Pretrial performance measures allow system and community stakeholders to determine, with data, to what extent your jurisdiction is meeting its pretrial goals. For example:

  • To what extent are people on pretrial release remaining arrest-free and appearing in court? 
  • How many people are being released and detained, and what are their charges and demographics? 
  • How has the jail’s pretrial population changed in number and type since new practices were implemented?

Using data to answer these and other questions provides more accurate and reliable information than anecdotal experience alone. With this information, stakeholders can analyze trends to understand how their pretrial system is changing over time. Stakeholders are also better positioned to communicate with one another and with the public and media, including dispelling misconceptions or rumors that portray the pretrial system as faulty or ineffective.

This guide explains what pretrial performance measures are and why they are important. It then walks you through a process to develop, customize, and calculate essential measures. It also explores how to present these measures to your system and community stakeholders so that they can evaluate your pretrial system’s functioning. Templates and examples are provided to help you complete these important activities.

A Process of Input and Collaboration

Identifying your system’s pretrial performance measures and developing a pretrial performance report requires input from a broad range of key stakeholders; it is not generally the responsibility of a single agency. An inclusive cross-discipline policy team composed of both community and system stakeholders serves as an ideal group to work on pretrial performance measures.

Policy teams typically include representatives from:  

  • Judiciary
  • Court administration
  • Prosecutor’s office
  • Public and/or private defense
  • Jail
  • Law enforcement
  • Pretrial services
  • Information technology (from individual agencies or a multi-agency information system)
  • Local community

The policy team can:

  • Decide which measures are important or necessary for your jurisdiction
  • Authorize the sharing of the data used to calculate the measures
  • Discuss what the measures indicate about your local pretrial system’s functioning
  • Decide which policies and practices can be improved to better achieve your jurisdiction’s pretrial goals

In addition to reviewing general information about performance measures, you may want to ask members what they want to know about their pretrial system. Members may also want to ask their colleagues this question before the meeting. Possible questions include the following:

  • Are we getting more people out of jail?
  • What percentage of people are failing to appear because they forgot or couldn’t get off work, and what can we do about that? 
  • Are the judges following the new recommended practices?

How Pretrial Performance Is Defined and Expressed

Performance Measures Defined 

There are two main types, or categories, of performance measures: outcome measures and process measures.

Outcome measures pertain to results or end goals. They measure what you are trying to accomplish. Examples of pretrial outcome measures include: 

  • Pretrial release rate 
  • Arrest-free rate 
  • Court appearance rate 

Process measures pertain to methods or activities; they measure what you did to get your results. Process measures often indicate how many of which kind. Examples of pretrial process measures include:

  • Number and types of people booked into jail 
  • Average length of time in jail until pretrial release   
  • Rate of agreement between presumptive release conditions for certain categories of people and conditions actually ordered for these people

Outcome and process measures complement one another. Outcome measures are needed to help determine to what extent desired goals are being accomplished. On the other hand, process measures help explain why goals are or are not being achieved.

Systemic and Agency-Specific Measures

Performance measures can be broad, evaluating the functioning of the pretrial system as a whole, or narrow, pertaining to just one agency. Both broader systemic measures and narrower agency-specific measures can be important, depending on the information being sought. Systemic measures and agency-specific measures can be either outcome or process measures.

Here are two examples of outcome measures, a broader one and a narrower one:

  • Arrest-free rate for the broader system
    The arrest-free rate of all people released pretrial, regardless of how they were released, such as on citation, from the jail before first appearance, by posting money from a bond schedule, or supported by pretrial services
  • Arrest-free rate for a specific agency
    The arrest-free rate of just the pretrial services agency’s clients, such as those to whom the agency provided supportive services

How Measures Are Expressed

Regardless of whether they’re outcome or process measures, performance measures are expressed in a few different ways, depending on what is being measured.

The most common expression is as a rate, or percentage. For example, if 820 out of 1,000 people remain arrest-free during pretrial release, the arrest-free rate is 82%.

Counts are often helpful to measure workload demand. For example, a law enforcement agency may have made 2,584 arrests for jailable offenses in the past year: 646 through the issuance of a citation and 1,938 through a custodial arrest.

Finally, an average is sometimes used when people differ on a given performance measure. For example, your stakeholder team may evaluate how quickly cases are being processed by examining the average length of time—for example, 2.2 days—that elapses between when people are booked into jail and when they have their first appearance hearing. 
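
To make these three expressions concrete, here is a minimal sketch in Python. The record layout and numbers are hypothetical illustrations, not fields from any jurisdiction’s data system.

    from datetime import date

    # Hypothetical pretrial case records; field names are illustrative only.
    cases = [
        {"released": True, "arrest_free": True,
         "booked": date(2023, 3, 1), "first_appearance": date(2023, 3, 3)},
        {"released": True, "arrest_free": False,
         "booked": date(2023, 3, 2), "first_appearance": date(2023, 3, 4)},
        {"released": False, "arrest_free": None,
         "booked": date(2023, 3, 5), "first_appearance": date(2023, 3, 6)},
    ]

    released = [c for c in cases if c["released"]]

    # Rate: the share of released people who remained arrest-free, as a percentage.
    arrest_free_rate = 100 * sum(c["arrest_free"] for c in released) / len(released)

    # Count: how many people were booked into jail in this extract (workload demand).
    booking_count = len(cases)

    # Average: mean number of days between booking and first appearance.
    avg_days_to_first_appearance = sum(
        (c["first_appearance"] - c["booked"]).days for c in cases
    ) / len(cases)

    print(f"Arrest-free rate: {arrest_free_rate:.1f}%")
    print(f"Bookings: {booking_count}")
    print(f"Average days to first appearance: {avg_days_to_first_appearance:.1f}")

The same arithmetic applies at any scale; only the source of the records changes.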

Knowing about the different types of pretrial performance measures and how they are expressed will help the policy team decide which measures are most relevant for your jurisdiction.

Identifying Your Performance Measures 

APPR developed a list of recommended pretrial performance measures that you can use to identify the measures that best suit your jurisdiction’s needs. [1]

Your policy team should convene to complete the pretrial performance measures reference sheet. The group will customize the sheet to reflect outcome and process measures that are relevant for your jurisdiction.

How to Use the Reference Sheet

Guidance on how to use the reference sheet is provided here. It is organized following the structure of the spreadsheet itself. We’ll walk through the four key steps you’ll need to complete to customize the sheet for your jurisdiction: 

  1. Establish the data parameters
  2. Customize your measures
  3. Fill in where the data come from
  4. Identify what else people need to know

Step 1: Establish the data parameters

Intro Tab

On the Intro tab, you will find guidance about establishing three parameters, or outer bounds, for your data collection: 

  1. Population about whom data are collected (i.e., people arrested for a crime)
  2. Case status (i.e., cases that have ended or reached disposition)
  3. Time period for extracting data (e.g., typically one calendar year, with smaller time periods when relevant)

It is recommended that you include data that indicate how well the pretrial system as a whole is functioning rather than focusing on any one agency, such as the pretrial services agency.
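
As one illustration of how these three parameters might be applied when extracting data, here is a minimal pandas sketch. The file name, column names, and values are hypothetical placeholders, not fields defined in the reference sheet.

    import pandas as pd

    # Hypothetical case-level extract; column names are placeholders.
    cases = pd.read_csv("cases.csv", parse_dates=["booking_date"])

    # Parameter 1: population about whom data are collected (people arrested for a crime).
    in_population = cases["booking_type"] == "new_arrest"

    # Parameter 2: case status (only cases that have reached disposition).
    disposed = cases["case_status"] == "disposed"

    # Parameter 3: time period (bookings during one calendar year).
    in_period = cases["booking_date"].between("2023-01-01", "2023-12-31")

    extract = cases[in_population & disposed & in_period]
    print(f"{len(extract)} cases fall within all three data parameters")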

Step 2: Customize your measures

Current Performance Measures tab

The Current Performance Measures tab lists many possible performance measures. The list is fairly comprehensive but not exhaustive. The measures have been designed to be as consistent as possible with recent developments in pretrial law, research, and best practice standards (e.g., the National Association of Pretrial Services Agencies’ (NAPSA) Standards on Pretrial Release: Revised 2020). You can add measures to this document as needed.

The measures are divided into two categories: Must-Have and Good-to-Have. The Must-Have measures are at the top of the page. They are the metrics that every jurisdiction should track and use to evaluate the effectiveness of their pretrial system. Most of the Must-Have measures are outcome measures. You may need to make slight edits to customize them for your jurisdiction.

The Good-to-Have measures are optional and consist of process measures and a few outcome measures. These measures are valuable because they assist stakeholders in understanding how the pretrial system is functioning. You can use them to help determine why certain desired outcomes are or are not being achieved and whether resources are being used efficiently.

For all the measures, some information is already provided, such as what the measure tells us, why it is important, and how it is calculated. There are also useful subcategory breakdowns according to charge type, assessment score, demographics, and other characteristics. Notes relevant to stakeholders or analysts who will be responsible for locating, collecting, and/or interpreting the data are also provided.

The policy team should customize the list of measures relevant for your jurisdiction.

For instance, the team may choose to:

  • Relabel the measures to reflect terminology your jurisdiction or state law uses
  • Edit any text describing a measure, such as why it is important or how it is defined and calculated
  • Decide which breakdowns of the measure are desired and possible to gather, such as for demographic characteristics, the charged offense, assessment score, and so on
  • Add or edit notes

Framing Performance Measures

Performance measures should use “positive” language—wording that reflects the goals and values of your local pretrial system and that encourages community confidence in the local criminal legal system.

For example, “court appearance rate” is a positively phrased way to report progress toward the major pretrial goal of people appearing in court, whereas “failure to appear rate” uses negative language that focuses attention on the goal not being attained. See APPR’s Language Guide for additional suggestions.

Possible Performance Measures tab

If any of the measures on the Current Performance Measures tab are not relevant for your jurisdiction at this time, you can move them to the Possible Performance Measures tab. This tab serves as a “parking lot” for measures that cannot yet be populated with your data or that are not yet relevant because a certain practice is not in place. For example, if people are currently not represented by defense counsel at first appearance and will not be in the near term, you may not want to include the “Defense Counsel Representation-at-First-Appearance Rate” in the Current Performance Measures tab at this time. You can cut and paste this measure into the Possible Performance Measures tab for use when that practice is implemented.

Later, when new practices are being implemented, the policy team should revisit the measures in the Possible Performance Measures tab to decide whether any have become relevant. If they have become relevant, they should be moved to the Current Performance Measures tab and customized as needed.

Step 3: Fill in where the data come from

Data Fields tab

On the Data Fields tab, you will see a list of potential data fields that may be needed to calculate your performance measures, a definition of each field, the source of the data (e.g., court, jail, pretrial services, prosecution, defense), and notes. The policy team, with the help of one or more of your jurisdiction’s data system analysts, should edit, fill in, and expand this list as needed. This list can serve as the data dictionary for your pretrial performance measures.

What Is a Data Dictionary?

The American Health Information Management Association defines a data dictionary as a catalog that lists data fields, where the data originate, information describing the field, the type and width of the field, what applications or reports use that data element, and so on.

Hermann, M. (2018, November 18). What is a data dictionary? [Blog]. Journal of AHIMA.
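
Purely for illustration, a few data dictionary entries might look like the sketch below. The fields, definitions, and sources are hypothetical examples, not the contents of the Data Fields tab.

    # Hypothetical data dictionary entries: field name, definition, source, and type.
    data_dictionary = [
        {"field": "booking_date",
         "definition": "Date the person was booked into jail on the current case",
         "source": "Jail", "type": "date"},
        {"field": "release_type",
         "definition": "How the person was released pretrial (citation, money bond, pretrial services, etc.)",
         "source": "Jail / Pretrial services", "type": "text"},
        {"field": "court_appearance_flag",
         "definition": "Whether the person appeared at all scheduled court dates",
         "source": "Court administration", "type": "boolean"},
    ]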

Data-Sharing Agreement

After the policy team has customized the pretrial performance measures reference sheet, an additional step may be needed to obtain the data to populate the measures. Agencies supplying the various data may need to sign a data-sharing agreement before they can share data with one another or with a central data repository, if one will be used for your pretrial data. If your jurisdiction does not have an existing agreement, consider reviewing these data-sharing agreement examples: Example 1 and Example 2.

Reporting Your Performance Measures

Design Your Report

After you identify your pretrial performance measures and finish customizing the reference sheet, it is time to design a pretrial performance report. This involves deciding which measures to show in the report and how to illustrate the data. The goal is to make it easy for the cross-discipline policy team, other practitioners, and potentially the public and media to understand what is working well in your pretrial system and what needs improving.

An effective report is one that stakeholders can easily read and digest. To achieve this, make sure the report:

  • Is brief (e.g., one to three pages)
  • Presents the most important data (e.g., progress toward the pretrial system’s major goals, such as maximizing court appearance rates, arrest-free rates, and pretrial release rates)
  • Illustrates the data using different graphics (e.g., bar charts, pie charts, line graphs) and colors
  • Is not too dense (i.e., minimizes text and uses lots of white space)

In addition, it is helpful if the report contains footnotes or endnotes that qualify data parameters (e.g., notes explaining which people are included or excluded from the analysis).
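
If your analysts build the graphics themselves, a simple chart of the headline rates can be generated with a few lines of code. The sketch below uses Python with matplotlib; the measure names and values are placeholders, and any charting tool your jurisdiction already owns would work just as well.

    import matplotlib.pyplot as plt

    # Hypothetical headline measures for a one-page report; values are placeholders.
    measures = {
        "Pretrial release rate": 68,
        "Court appearance rate": 91,
        "Arrest-free rate": 82,
    }

    fig, ax = plt.subplots(figsize=(6, 3))
    ax.barh(list(measures.keys()), list(measures.values()))
    ax.set_xlim(0, 100)
    ax.set_xlabel("Percent")
    ax.set_title("Pretrial system performance, 2023")
    # Label each bar with its value so readers do not have to estimate from the axis.
    for position, value in enumerate(measures.values()):
        ax.text(value + 1, position, f"{value}%", va="center")
    fig.tight_layout()
    fig.savefig("pretrial_performance_summary.png", dpi=200)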

Decide on the Format 

After you have decided which measures to illustrate and how, the next step is to decide how the report will be made available to people such as system and community stakeholders. There are two common options:

  • A document that can be downloaded, emailed, or printed, usually as a Portable Document Format (PDF) 
  • A publicly available website

Each of these options has a few unique advantages, including the following: 

  • PDF
    • People can receive the PDF without having to go to a website.
    • PDFs can easily be printed as handouts for stakeholder meetings.
    • Presentation in PDF is usually needed for analyses that require one-time manual data collection and/or calculation.
    • Analyses can be calculated and displayed in software that is likely already owned (e.g., Microsoft Excel). 
  • Website
    • People can view the website any time.
    • People can easily view breakdowns of the main analyses and focus on their specific areas of interest.
    • The calculation and display of the analyses can be automated, such that they are updated on a regular schedule (e.g., daily, monthly, quarterly, depending on the analysis).
    • Data visualization software (e.g., Microsoft Power BI) is easily customizable for a wide variety of viewing options.

Think creatively about which format to use. You may also combine formats: online analyses might be downloadable as a PDF, and certain analyses may be programmed to autocalculate for graphical display in the PDF.

Examples of pretrial reports

Here are a few examples of pretrial performance reports. Some are available as PDFs and others are available as automated dashboards on a website. You may wish to review them so that you can see what measures other jurisdictions include and how they display their data:

Decide the Frequency of Your Report

The policy team should decide on the frequency for updating and distributing the report. Some stakeholders may want to see data on a monthly basis. For others, quarterly reports may be sufficient. Daily, and to some extent monthly, updates are not usually needed for policymaking purposes. Quarterly or annual updates generally allow agencies to make adjustments to practices. 

The policy team should decide how often they want to see the performance report. The frequency of the reports may depend on how often the team convenes and how often they want to communicate with leadership, agency staff, system partners, the public, and other stakeholders.

Regardless of the frequency of reporting, the measures can also be compiled into an annual report. Annual statistics are useful for understanding how the pretrial system changes over longer periods of time, such as three, five, or ten years. Annual reports, which tend to be longer, also present the opportunity to share data on metrics that are not included in the monthly or quarterly report, such as various Good-to-Have measures.

Automate Your Report

Some jurisdictions collect, analyze, and report their data manually. This is very time consuming and susceptible to human error. Therefore, consider automating your pretrial performance report. 

Automation has several advantages:  

  • There is less opportunity for human error
  • You will be able to more quickly and easily generate a report
  • You can have multiple pre-programmed reports that display different measures
  • You can customize the reports by specifying certain parameters, such as by selecting a certain time frame or a particular subpopulation

Depending on your jurisdiction’s data system(s), you may be able to purchase an off-the-shelf reporting application that overlays your data system(s), or your information technology staff may be able to program your reports directly into an existing data system.
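
As a rough sketch of what a small automated job might look like, assuming a nightly export of disposed pretrial cases and hypothetical column names, the script below recalculates a few headline measures and writes them to a file that a report template or dashboard can read. Scheduling it (for example, with the operating system’s task scheduler) removes the manual steps; an off-the-shelf reporting application accomplishes the same thing without custom code.

    import json
    import pandas as pd

    # Hypothetical nightly export; column names are placeholders.
    cases = pd.read_csv("nightly_export.csv",
                        parse_dates=["booking_date", "release_date"])
    released = cases[cases["released_pretrial"]]

    # Recalculate the headline measures from the latest extract.
    summary = {
        "pretrial_release_rate": round(100 * len(released) / len(cases), 1),
        "court_appearance_rate": round(100 * float(released["appeared_at_all_hearings"].mean()), 1),
        "arrest_free_rate": round(100 * float(released["arrest_free"].mean()), 1),
        "avg_days_to_pretrial_release": round(
            float((released["release_date"] - released["booking_date"]).dt.days.mean()), 1),
    }

    # Write the measures where the report template or dashboard can pick them up.
    with open("performance_summary.json", "w") as f:
        json.dump(summary, f, indent=2)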

“Pretrial performance metrics were at the heart of our successful implementation. I truly believe that the reason our pretrial project might have died was not because it could potentially have been ineffective but because we could have gone the route of not measuring and reporting certain things properly. This measurement and reporting has helped our improvements to stand the test of time and inevitable political turnover.”

– Joel Bishop, executive director, Justice and Community Services, El Paso County, Texas


References

[1] APPR’s pretrial performance measures reference sheet builds upon and expands the list of measures included in the National Institute of Corrections’ Measuring What Matters document.
