Lady Blind Justice (Photo: Marc Treble/Flickr)
Overview

Algorithms play an increasingly prominent role in societal decision-making in a variety of settings. Online streaming services use them to recommend new music, movies, or television shows; criminal justice courts use them, controversially, to predict the future behavior of someone accused or convicted of a crime. Their proponents claim that they are objective and accurate, and they are often presented as sophisticated and mysterious. But they're not infallible: Even the most carefully designed algorithms may produce biased outcomes, and blind trust in those programs can cause, perpetuate, or even amplify societal problems.

We want to demystify algorithms and help everyone understand how they work in the real world. We are researchers from the Santa Fe Institute and the University of New Mexico with backgrounds in computer science, political science, mathematics, and law. We are available to provide expertise and guidance to policymakers, to help them understand algorithms and their policy implications and to decide whether, and under what circumstances, algorithms should be employed.

Our work centers on the need for transparency. We believe stakeholders should know an algorithm's strengths and weaknesses, as well as its best uses and limitations, in order to make the best decisions. What data was used to design and train it? Does this data mean what we think it does? How will we know if it works in practice, and how will we measure its performance? Can it be independently audited for accuracy and fairness? What kind of explanation or appeal is available to those affected by it? Will its use create unexpected feedback loops in human behavior?
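
As a concrete illustration of what an independent audit might look like, here is a minimal sketch in Python. It compares accuracy and false positive rates across two hypothetical groups; the records, group labels, and decision codes are invented for illustration, not drawn from any real system.

    # A minimal fairness-audit sketch, assuming access to an algorithm's
    # decisions, the eventual outcomes, and a group label per person.
    from collections import defaultdict

    # Each record: (group, prediction, actual outcome);
    # 1 = flagged "high risk" / denied, 0 = not flagged / approved.
    records = [
        ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 0),
        ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
    ]

    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, pred, actual in records:
        key = ("tp" if actual else "fp") if pred else ("fn" if actual else "tn")
        counts[group][key] += 1

    for group, c in sorted(counts.items()):
        accuracy = (c["tp"] + c["tn"]) / sum(c.values())
        # False positive rate: share of people flagged among those who in
        # fact had no bad outcome -- a commonly examined fairness metric.
        fpr = c["fp"] / (c["fp"] + c["tn"])
        print(f"group {group}: accuracy={accuracy:.2f}, false-positive rate={fpr:.2f}")

Even on these eight made-up records, the two groups end up with very different false positive rates; that is exactly the kind of disparity an independent audit should surface.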

Housing

Our first project focuses on the national issue of access to housing. Lenders, landlords, and brokers often use algorithms to decide whether to approve or deny loan and rental applications, but the historical and geographic data used to train those algorithms can give rise to bias against certain socioeconomic or racial groups. The Department of Housing and Urban Development (HUD), the government agency charged with improving access to homeownership, recently proposed amendments that would effectively allow lenders to circumvent anti-discrimination lawsuits and avoid liability by blaming the algorithm rather than the way they applied it.
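
To make that mechanism concrete, here is a minimal sketch of how a model that never sees race can still discriminate through a geographic proxy. The neighborhood names, rates, and scoring rule are hypothetical, chosen only to show how a feature learned from historical lending decisions can dominate an individual's score.

    # Historical approval rates by neighborhood, reflecting past lending
    # decisions (possibly discriminatory) rather than individual
    # creditworthiness. All values here are hypothetical.
    historical_approval_rate = {"north_side": 0.82, "south_side": 0.31}

    def score_applicant(income: float, neighborhood: str) -> float:
        # The model excludes race, but neighborhood acts as a proxy:
        # a term learned from biased historical outcomes sways the score.
        income_term = min(income / 100_000, 1.0)
        return 0.5 * income_term + 0.5 * historical_approval_rate[neighborhood]

    # Two applicants with identical finances receive different scores
    # solely because of where they live.
    for hood in ("north_side", "south_side"):
        print(hood, round(score_applicant(income=60_000, neighborhood=hood), 2))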

These changes fail to account for the subtleties of evaluating algorithms or recognizing their unintended consequences, and they relieve lenders and other defendants of responsibility. We have summarized our concerns and submitted them to the Federal Register, focusing on four key arguments for why understanding how algorithms are used in these decisions matters, along with recommendations for best practices.

Criminal justice

Within the criminal justice system, algorithms are being applied as "predictive" tools in a variety of scenarios, including pretrial detention and supervision, sentencing, housing classification in prison, and parole. Working group members Kathy Powers and Cristopher Moore recently gave a presentation to the New Mexico Legislature's subcommittee on criminal justice reform, in which they walked through four crucial questions about transparency that could help algorithms be applied more fairly and accurately.

  1. How does the algorithm work? Can everyone (defendants, prosecutors, judges) understand how a score was obtained?
  2. Can we validate its performance independently? How well does it work on our local population in New Mexico? (See the sketch after this list.)
  3. When should a human be in the loop? Should an algorithm ever be used for detention before a trial?
  4. What does the data really mean? Does a single zero or one capture the full story behind a failure to appear or rearrest?
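
As a concrete illustration of question 2, here is a minimal sketch of one simple local validation: a calibration check that bins cases by risk score and compares the tool's implied risk against outcomes actually observed in the local population. The scores and outcomes below are hypothetical placeholders, not New Mexico data.

    # Check whether a risk tool is calibrated on local cases: within each
    # score bin, does the observed rate of rearrest / failure to appear
    # track the score? Large gaps are a reason to distrust imported tools.
    from collections import defaultdict

    # Each case: (risk-score decile 1-10, outcome: 1 = rearrest or
    # failure to appear, 0 = neither). Hypothetical data.
    local_cases = [
        (2, 0), (2, 0), (2, 1), (3, 0),
        (7, 1), (7, 0), (8, 1), (8, 1),
    ]

    outcomes_by_bin = defaultdict(list)
    for decile, outcome in local_cases:
        outcomes_by_bin[decile].append(outcome)

    for decile in sorted(outcomes_by_bin):
        outcomes = outcomes_by_bin[decile]
        observed = sum(outcomes) / len(outcomes)
        print(f"decile {decile}: n={len(outcomes)}, observed rate={observed:.2f}")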

The full presentation can be viewed on the website of the criminal justice reform subcommittee.

A second presentation to the NM Black Lawyers Association, titled "Risk Assessment Algorithms: Discrimination vs. Transparency," can be viewed on YouTube.

Future work

Future projects will focus on the spectrum of ways that governments, corporations, and institutions are increasingly relying on algorithms, with the constant goal of boosting transparency. Some people see algorithms as miraculous crystal balls; others see them as malevolent attempts to control our lives. For the most part, neither of these extremes is true, but only by demanding transparency can we find the most beneficial ways to use these powerful tools. 

Our members include Mahzarin Banaji, Elizabeth Bradley, Tina Eliassi-Rad, G. Matthew Fricke, Mirta Galesic, Joshua Garland, Cristopher Moore, Alfred Mathewson, Melanie Moses, Kathy Powers, Sonia M. Gipson Rankin, and Gabriel R. Sanchez.