Originally published in the Harvard Political Review.
In the aftermath of the untimely deaths of Trayvon Martin, Eric Garner, and Freddie Gray, and in the wake of the Black Lives Matter movement and its numerous spin-off protest groups, one common message permeates America’s national dialogue: the criminal justice system is broken. Between these sobering stories of unnecessary death and the ever-growing mountain of data that reinforces them, the United States finds itself with a criminal justice system that unfairly preys on black and brown bodies.
According to recent studies, America’s prison population has quadrupled over the past 30 years; it now houses approximately 2.3 million people, around 25 percent of the world’s prisoners. Of those imprisoned in the United States, 58 percent are black or Hispanic, even though these groups make up only a quarter of the country’s total population. As protest groups and the Bernie Sanders campaign repeatedly acknowledged, startling racial disparities exist in sentencing as well as in police activity. Although five times as many white Americans use drugs as black Americans, black Americans are sent to prison for drug offenses at ten times the rate of whites. Once in the system, the accused don’t fare any better: a staggering 95 percent of criminal cases end in plea bargains, meaning defendants never exercise their constitutional right to a trial.
In its current state, the entire U.S. criminal justice system, from the way law enforcement locates and apprehends suspects to the way courts determine guilt and innocence, requires repair. However, most proposed solutions focus entirely on policy changes, grassroots efforts, or interventional programs for officers and judges. These attempts have yet to yield significant change, suggesting that a solution to our criminal justice issues may lie in more unconventional territory: artificial intelligence. In every situation, humans bring a personal, cultural framework filled with explicit or implicit biases, all of which shape their decision-making. Given the numerous highly publicized shootings of unarmed suspects, mistaken arrests, and false imprisonments, perhaps the United States criminal justice system should seek its remedy in technology.
The AI Revolution: Current Applications in Criminal Justice
Artificial intelligence is a catch-all term for digital technology that emulates human intelligence, judgment, and decision-making. Today, AI technologies are improving at a faster pace than ever before, yielding advanced machine learning algorithms for autonomous vehicles, facial recognition, medical diagnostics, and a number of other applications. Because of AI’s widespread success elsewhere, criminal justice has become a burgeoning new field for AI technology and research.
Most criminal justice applications today center on risk assessment tools, which analyze vast amounts of data thought to correlate with future criminal activity and use it to make predictions. In a practice known as predictive policing, for instance, police departments around the country are starting to use risk assessment tools to allocate resources to the most crime-ridden areas. Judicial systems are also using predictive tools to estimate a defendant’s probability of recidivism, allowing courts to more strategically determine bail amounts, sentence lengths, and parole opportunities.
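To see how such a tool works in miniature, consider the sketch below: a logistic regression trained on historical records to score a new defendant’s recidivism risk. Every feature, data point, and label here is hypothetical; commercial tools rely on proprietary models and far richer inputs.

```python
# A minimal sketch of how a recidivism risk-assessment tool might work.
# The features, records, and labels below are hypothetical; real tools
# use proprietary models and far more detailed inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age, prior_arrests, months_since_last_offense]
X_train = np.array([
    [19, 4, 2],
    [45, 0, 120],
    [23, 2, 6],
    [37, 1, 48],
    [29, 6, 3],
    [52, 0, 200],
])
# 1 = re-offended within two years, 0 = did not (hypothetical labels)
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression()
model.fit(X_train, y_train)

# Score a new defendant: the model outputs a probability of recidivism,
# which a court might translate into a bail or sentencing recommendation.
defendant = np.array([[24, 3, 4]])
risk = model.predict_proba(defendant)[0, 1]
print(f"Estimated recidivism risk: {risk:.0%}")
```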
Larry Schwartztol, the executive director of Harvard Law School’s Criminal Justice Policy Program, told the HPR that “risk assessment tools are used all over the place and will likely continue to grow in popularity and complexity given their widespread applicability.” “These tools have the potential to speed up processes and potentially decrease bias,” but, Schwartztol warns, “technological interventions are loaded with normative questions.” Indeed, while risk assessment should ideally ameliorate human bias, some new algorithms have received harsh criticism for how strongly their results correlate with race. Some of these biases may be rooted in the very data used to train the algorithms. Furthermore, risk assessment algorithms are still programmed by humans, so human decision-making, however small its role, can still introduce bias. Despite these criticisms, risk assessment tools are likely here to stay because of their efficiency and precision.
The AI-Enabled Future
As AI technology continues to improve over the next several decades, it will likely expand from simple risk assessment tools to more complex technologies with the potential to remove human bias in the field itself. For example, Axon, the smart weaponry company responsible for the Taser, is beginning to build AI-powered body cameras that recognize faces and objects, effectively creating a “personal assistant” for police officers in the field. As more police departments mandate the use of body cams, they will collect a plethora of data about suspect behavior, both in cases where the suspect poses a real threat and in those where the officer overreacts due to inherent biases. Eventually, technology companies, whether established businesses like Axon or startups like Visual Labs or Vievu, will be able to use this data to build machine learning models that predict, with a high degree of accuracy, how dangerous a suspect actually is and whether they committed a crime. If police officers around the country are armed with this technology, shootings of unarmed and innocent suspects may decrease sharply.
Though the use of AI in the criminal justice system currently focuses entirely on tools that give police officers, lawyers, and judges a second opinion, research suggests that AI will one day assist with, and potentially replace, much of their work. David Abrams, a lawyer and electrical engineering lecturer at Harvard, argues these advances are coming sooner than we expect: “Building algorithms that recognize subtleties in arguments and make decisions in the ways humans do is extremely difficult. However, advances in machine learning and expert systems suggest this technology is eventually coming and it’s important to be prepared right now.”
One such innovation, from researchers at University College London, is an advanced machine learning algorithm that takes on the role of a judge, ingesting all the evidence and delivering conclusions about the severity of a crime. An AI algorithm such as this, if crafted carefully, would deal solely with the facts of the case, ignoring prejudicial factors such as the accused person’s appearance, race, and socioeconomic status. One of the key issues with the justice system today is that the accused must often wait years for a trial and frequently cannot afford bail, languishing in prison in the meantime. Large bails, combined with the fear of mandatory minimum sentences, force many defendants to accept plea agreements for crimes they did not commit. AI-based judiciaries hold much of their potential in removing human bias, but perhaps more notably, their utility will come from an ability to speed up trials and to ensure that everybody receives their constitutional right to one. For those who are afraid of putting the entire U.S. justice system in the hands of a computer, Abrams reminds us: “The constitutional right to a jury will never disappear. Even if we have computers aiding in every part of our criminal justice system, the ultimate verdict will always be made by human beings.”
Incentivizing Socially Responsible AI
In its current form, AI technology gives many reasons to fear its widespread use in the criminal justice system. Chief among them is the very legitimate fear that AI will replicate, or even worsen, the race- and class-based discrimination that already exists. While we can currently blame these failures on a person or a department, assigning responsibility becomes much thornier when a black-box algorithm is making the decisions. Most recently, a risk-assessment algorithm created by Northpointe Inc., a private company, has raised the possibility of a Supreme Court hearing on algorithmic transparency.
As AI technology becomes more sophisticated and sees wider use in the criminal justice system, these questions of discrimination, responsibility, and transparency will only become more pressing. For AI technology to improve efficiency and remove human bias, it must be designed in a socially responsible manner. Doing so will take a concerted effort on the part of key stakeholders: policymakers, venture capitalists, entrepreneurs, and researchers. But that effort will also pay dividends for the future of the U.S. criminal justice system.
There are several key ideas all of these stakeholders must consider to ensure that AI used in the criminal justice system does not simply reproduce human bias in computerized form. The first step is to promote open data and algorithmic transparency. In other words, police departments, judicial systems, and private contractors must be encouraged to make their data available to the research community, allowing experts to observe which key features drive judicial outcomes. This will ensure that algorithms weighing in on an individual’s guilt or innocence have easily identifiable rationales behind their decisions.
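As a simplified illustration of what such transparency makes possible, the sketch below trains a model on synthetic data and ranks the features driving its predictions; the feature names and data are invented for the example. With real open data, the same inspection could reveal whether a tool leans on proxies for race, such as neighborhood-level statistics, rather than on individual conduct.

```python
# A minimal sketch of the kind of audit open data would enable: with access
# to a model's inputs and outputs, researchers can inspect which features
# drive its decisions. All feature names and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["prior_arrests", "age", "zip_code_crime_rate"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Hypothetical outcomes generated mostly from zip-code crime rate, a
# neighborhood-level feature that can act as a proxy for race in real data.
y = (X[:, 2] + 0.3 * X[:, 0] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Ranking coefficients shows that the neighborhood proxy, not individual
# conduct, dominates the model's predictions, which is exactly the pattern
# a transparency review is meant to catch.
for name, coef in sorted(zip(feature_names, model.coef_[0]),
                         key=lambda pair: -abs(pair[1])):
    print(f"{name:>22}: {coef:+.2f}")
```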
Second, the U.S. government should raise its standards for the technology companies it contracts with, or build this critical technology internally. Just as the National Highway Traffic Safety Administration (NHTSA) sets performance and durability standards for cars in the United States, the government should run extensive simulations on proprietary datasets before a contractor is selected, verifying that private companies’ algorithms produce no discriminatory outcomes. Chiraag Bains, a D.C.-based attorney and fellow at Harvard Law School’s Criminal Justice Policy Program, believes the solution to regulating AI for criminal justice lies in modeling a system after the Department of Justice’s Commission on Forensic Science. In an interview with the HPR, he posited: “There should be a special commission put in place to ensure these technologies are not biased in any way. They must encourage voluntary change and a norm of transparency at the local level, put in place a system of incentives through grant funding, and finally enforce certain standards from the top down.” Some of the most important regulatory questions of the next century will revolve around AI, so policymakers need to be prepared now.
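To make the proposed vetting concrete, the sketch below shows one test such a government audit might run: replaying a vendor’s risk flags against known outcomes and comparing false positive rates across demographic groups. The data, group labels, and flagging rates are all invented for illustration.

```python
# A minimal sketch of a pre-contract fairness audit: the government replays
# a vendor's risk-assessment flags on a held-out dataset and compares error
# rates across demographic groups. Everything here is hypothetical.
import numpy as np

def false_positive_rate(flagged_high_risk, reoffended):
    """Share of people who did not re-offend but were flagged high-risk."""
    did_not_reoffend = ~reoffended
    return (flagged_high_risk & did_not_reoffend).sum() / did_not_reoffend.sum()

rng = np.random.default_rng(1)
# Hypothetical audit data: demographic group, vendor's flag, actual outcome.
group = rng.choice(["A", "B"], size=1000)
# Simulated vendor output that flags group A far more often than group B.
flagged = rng.random(1000) < np.where(group == "A", 0.45, 0.23)
reoffended = rng.random(1000) < 0.30

for g in ["A", "B"]:
    mask = group == g
    fpr = false_positive_rate(flagged[mask], reoffended[mask])
    print(f"Group {g}: false positive rate = {fpr:.0%}")
# A large gap between groups would fail the audit before any contract is signed.
```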
Artificial intelligence technology may very well touch every part of our criminal justice system in the not-so-distant future. With AI’s applications ranging from body cam software to grand jury analysis, its potential benefits include fewer shootings of unarmed suspects, fewer false arrests, and faster, fairer trials. Most of the media attention given to AI and criminal justice today focuses on the potential for racial discrimination, and for good reason. But whether we like it or not, AI is coming, and many of its consequences for the U.S. criminal justice system could be positive. As long as key stakeholders take an active role in encouraging socially responsible design and implementation, AI can begin the complex process of reforming our criminal justice system, ultimately making it more efficient and less discriminatory.