A Blog by Jonathan Low

 

Mar 7, 2020

How Automation Bias Encourages the Use of Flawed Algorithms

Humans tend to believe that computers are smarter and more objective than other humans, which encourages the use of algorithmic decision-making. It can also absolve those who use such systems of responsibility.

But the data reveal that many algorithms are inaccurate and biased, which is why employees and citizens are increasingly demanding more transparency about how the algorithms are programmed - and about the assumptions on which that coding is based. JL


Chloe Hadavas reports in Slate:

Risk-assessment algorithms have been in use for decades. One reason for the dependence on algorithms is “automation bias,” in which humans attribute more weight than is deserved to computer decisions. This “veneer of objectivity and certainty” is particularly attractive to government agencies. “Governments make hard decisions every day. They use tools that help them do it more efficiently, consistently and accurately.” (But) given the ubiquity of these algorithms and the secrecy in which they’re allowed to operate, it becomes more necessary to hold agencies accountable for their algorithmic practices.
From 2013 to June 2017, the U.S. Immigration and Customs Enforcement’s New York Field Office determined that about 47 percent of detainees designated as “low risk” should be released while they waited for their immigration cases to be resolved, according to FOIA data obtained by the New York Civil Liberties Union. But something changed in the middle of 2017. From June 2017 to September 2019, that figure fell to 3 percent: Virtually all detainees, the data shows, had to wait weeks or even months in custody before their first hearing, even if they posed little flight risk.
All that time, ICE used the same software to determine a detainee’s fate: the Risk Classification Assessment tool, which is supposed to consider an individual’s history—including criminal history, family ties, and time in the country—to recommend whether that person should be detained or released within 48 hours of arrest. When ICE introduced the algorithm in 2013, the Intercept reported Monday, it gave four options: detention without bond, detention with the possibility of release on bond, release, or referral to an ICE supervisor. In 2015, the algorithm was edited to remove the option for bond. Then, it was changed again after the 2016 election to remove the release output. According to the NYCLU and Bronx Defenders, the possibility of bond or release has been “all but eliminated.” (ICE personnel can still technically override the recommendations of the tool, which may explain why that 3 percent of low-risk detainees were still released.)
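To make the mechanics concrete: a scoring tool whose set of permitted recommendations is narrowed over time will stop recommending release no matter what the underlying score says. The sketch below is purely hypothetical - the field names, scoring rule, and thresholds are invented for illustration and are not ICE's actual RCA logic - but it shows how removing the bond and release outputs pushes even a clearly low-risk case toward supervisor referral or detention.

```python
# Hypothetical illustration (not ICE's actual Risk Classification Assessment code).
# A toy risk-classification tool whose recommendation depends on which outputs
# are still allowed: removing "release" and "bond" collapses low-risk cases
# into detention or supervisor referral, regardless of the risk score.
from dataclasses import dataclass


@dataclass
class Detainee:
    criminal_history_score: int  # 0 = none; higher = more serious (assumed scale)
    community_ties_score: int    # higher = stronger family/community ties
    years_in_country: int


def risk_score(d: Detainee) -> int:
    """Toy scoring rule, invented for this example only."""
    return d.criminal_history_score * 3 - d.community_ties_score - min(d.years_in_country, 10)


def recommend(d: Detainee, allowed_outputs: set) -> str:
    """Return the least restrictive recommendation that is still an allowed output."""
    score = risk_score(d)
    if score <= 0 and "release" in allowed_outputs:
        return "release"
    if score <= 3 and "bond" in allowed_outputs:
        return "detention with possibility of bond"
    if score <= 3 and "supervisor_referral" in allowed_outputs:
        return "refer to ICE supervisor"
    return "detention without bond"


low_risk = Detainee(criminal_history_score=0, community_ties_score=5, years_in_country=12)

# 2013-style configuration: all four outputs available -> "release"
print(recommend(low_risk, {"release", "bond", "supervisor_referral", "detention"}))

# Post-2017-style configuration: bond and release outputs removed
# -> "refer to ICE supervisor"; release is no longer a possible recommendation.
print(recommend(low_risk, {"supervisor_referral", "detention"}))
```

The point of the sketch is that the score itself never changes; only the menu of outputs does, which is why overall release rates can collapse even when the population being assessed looks the same.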
On Feb. 28, the NYCLU and Bronx Defenders filed a lawsuit alleging that as a result of this change, ICE has illegally detained virtually all the thousands of people its New York Field Office has arrested over the past three years. ICE, the NYCLU argues, has a legal obligation to consider whether release is appropriate within 48 hours of arrest.
The lawsuit aims to restore due process and end what it calls “ICE’s manipulation of the legal process,” by requiring ICE to make individual assessments and abandon its “hijacked algorithm.” “If the New York Field Office were actually conducting individualized determinations pursuant to its stated criteria,” the lawsuit says, according to the Intercept, “the percentage of people released should have actually increased since 2017 because more people arrested qualified for release.”
ICE’s reliance on a risk assessment tool is not unusual, even as it becomes increasingly clear that algorithmic biases tend to affect marginalized communities and people of color. Risk-assessment algorithms in particular have been in use for decades and, more recently, have become an integral part of the criminal justice system, from policing to evidence to sentencing. Algorithms have started to replace bail hearings to help determine who goes to jail, for instance, and police departments use them to predict future unlawful activity, which civil liberties groups say leads to heavier policing of communities of color. In large part, we still don’t know how these algorithms work—many of them are kept secret from the public, often in the name of protecting intellectual property.
One reason for the dependence on algorithms is “automation bias,” in which humans attribute more weight than is sometimes deserved to computer decisions, says Colleen Chien, a professor at Santa Clara University School of Law who researches innovation and the criminal justice system. “There’s been a lot of criticism of risk assessment tools, like this one, particularly in the bail and pre-trial contexts,” Chien said. “But the reality is that human beings like to get information from what they think is an objective source.”
This “veneer of objectivity and certainty,” as Chien puts it, is particularly attractive to government agencies. “The reality is that administrators in governments have to make hard decisions every day,” said Chien, who served in the Obama White House as a senior adviser on intellectual property and innovation. “And they are going to use tools that help them do it more efficiently, do it more consistently, and more accurately.”
Sometimes, as in the case of ICE’s Risk Classification Assessment, these tools can start to feel like an “algorithmic rubber stamp,” Chien said. But algorithms tend to reflect the systems and agencies that use and create them. In this context, the data behind ICE’s tool is perhaps unsurprising, since the agency has escalated its terror tactics and “become an arm of Donald Trump’s nativist agenda” since 2017.
Chien stressed, however, that algorithms also hold potential for making systems more efficient and for upholding the presumption of innocence. For instance, “[t]he criminal justice system is notoriously biased in terms of its history,” Chien said. So “when you look at what’s the impact of the algorithm, you need to take a baseline, and then you need to measure how are we changing from [that] baseline,” she said. Chien advocates for a clean slate policy, which would use algorithms to seal or clear Californians’ criminal records. She also pointed out the potential of facial recognition technologies, which are often criticized, for exonerating suspects.
“I think there’s a lot of ways in which algorithms can be very beneficial in criminal justice,” said Chien. “But I think the reality is, again, that they’re being used whether or not the public thinks they’re beneficial.” (A recent report by the Administrative Conference of the U.S. details how pervasive AI tools are across federal agencies.) “The stories you read about are just the tip of the iceberg,” said Chien.
Given the ubiquity of these algorithms—and the secrecy in which they’re currently allowed to operate—it only becomes more necessary to hold government agencies accountable for their algorithmic practices. The ICE case is just the latest, and most public, example of how algorithms can be weaponized, under the guise of impartial justice, to a certain end.
