A Blog by Jonathan Low

 

Dec 18, 2019

New York City Can't Understand Its Own Black Box Algorithm. So Now What?

Keep calm and think of Mark Zuckerberg? JL

Rebecca Heilweil reports in Vox:

In 2017, the city council decided that the public should have some insight into black-box calculations — which can influence everything from policing to where your kid goes to school — and passed a law creating a task force to investigate the city’s “automated decision systems” (ADSs) and propose ideas for their regulation. Nearly two years later, the task force failed to unearth much about how these systems actually work. “If New York City, which has the largest municipal budget in the world, couldn’t do a decent job, it sends a negative signal to other cities who may want to do something but may be fearful about their actual capacity.”
Algorithms make decisions that could impact nearly 9 million New Yorkers, but we don’t actually know much about them. So in 2017, the city council decided that the public should have some insight into these black-box calculations — which can influence everything from policing to where your kid goes to school — and passed a law creating a task force to investigate the city’s so-called “automated decision systems” (ADSs) and propose ideas for their regulation.
Nearly two years later, the task force largely failed to unearth much about how these systems actually work. While the city says it provided some examples of ADSs used by agencies — specifically, five — it did not produce the full list of automated decision systems used by its agencies that some activists and advocates had hoped for. And while some systems have already been identified, it’s very likely that the city uses algorithmic systems the public isn’t aware of.
Algorithms and artificial intelligence can influence much of a city government’s operations. Predictive models and algorithms have been used to do everything from improving public school bus routes and predicting a home’s risk of fire to determining the likelihood of whether a child has been exposed to lead paint. In New York City, it’s publicly known that such systems have been used to predict which landlords are likely to harass their tenants, in the evaluation of teacher performance, and in the DNA analysis used by the criminal justice system, examples that were flagged by the research nonprofit AI Now.
In its concluding report released last month, the task force recommended that the city create a centralized “structure” for agencies using automated decision systems that could also help determine best management practices. The task force further called for more public education on algorithm-based systems and for the development of protocols for publicly revealing some information about ADSs, among other broad proposals.
But critics complain that it’s still unclear which “automated decision systems” are already used by city agencies, a knowledge gap that they say has frustrated any possibility of real transparency and set a concerning precedent, as reported by CityLab.

So now what?

Through an executive order, Mayor Bill de Blasio established a novel algorithms management and policy officer role — a position expected to be filled in the coming months — inspired by the task force’s findings.
The officer will be charged with helping city agencies to responsibly use and assess ADSs and educating the public on their use.
But Rashida Richardson, the policy research director of AI Now, says that position isn’t nearly enough, as the role has neither the mandate nor the power to reveal what automated systems are in use. A city spokesperson says the officer will “maintain a platform where some information about relevant tools and systems will be made available to the public.”
Richardson emphasizes that transparency isn’t just a value in itself, but is also key for investigating the social, ethical, and fairness concerns that government use of algorithms can raise. Critics are especially concerned that algorithms can cause a disparate impact on members of protected groups, who — without information about how many of these tools actually work — have few avenues of recourse against an unfair or biased decision.
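To make that concern concrete, here is a minimal sketch of one common, rough check for disparate impact: comparing favorable-outcome rates across groups, in the spirit of the “four-fifths” rule of thumb used in US employment law. The numbers and the screening system below are entirely made up for illustration; this is not any actual New York City tool, and it only shows why auditors would need the underlying decision data in the first place.

def favorable_rate(outcomes):
    # Share of decisions in a group that were favorable (1 = favorable, 0 = not).
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected_outcomes, reference_outcomes):
    # Ratio of the protected group's favorable rate to the reference group's.
    return favorable_rate(protected_outcomes) / favorable_rate(reference_outcomes)

# Hypothetical (invented) decisions from an automated screening system, 1 = approved.
protected = [1, 0, 0, 1, 0, 0, 0, 1]   # favorable rate = 0.375
reference = [1, 1, 0, 1, 1, 0, 1, 1]   # favorable rate = 0.75

ratio = disparate_impact_ratio(protected, reference)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.50
if ratio < 0.8:
    print("Below the four-fifths rule of thumb; worth a closer look.")

Without knowing which systems exist and what decisions they produce, outside advocates cannot run even a simple check like this.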
“If New York City, which has the largest municipal budget in the world, couldn’t even do a decent job, then it sends a really negative signal to a lot of other cities who may, in earnest, want to do something but may be fearful about their actual capacity,” Richardson says. “The concern is that this signals that cities don’t need to do much, or don’t have to.”

A shadow report on New York’s AI might chart a path forward

Earlier this month, Richardson and AI Now released a shadow report critiquing and analyzing the task force’s work, and highlighted its own recommendations for moving forward. Those include recommended policy safeguards and procedures, as well as tailored proposals for individual city agencies, including the New York Police Department, the Administration for Children’s Services, and the Department of Education, on their use of algorithm-based systems.
In a response to AI Now’s shadow report, mayoral spokesperson Laura Feyer said the task force “brought together nearly two dozen leaders in computer science, civil liberties, the law, and public policy to tackle a highly complex set of questions that no other jurisdictions have been able to fully untangle.” Recode reached out to several city council members who sponsored the initial legislation authorizing the task force, but did not hear back by the time of publication.
It’s not a surprise critics are dissatisfied. As the task force proceeded, there were complaints about city agencies failing to share information about the algorithm-based systems already in use, insufficient engagement with the public, and a struggle to define what, exactly, qualified as an automated decision system. And at a time when establishing standards for algorithmic transparency is of growing interest to local and state policymakers, it’s not encouraging that one of the US’s first algorithmic transparency efforts feels like a flop.
In New York City, attention is now turning to legislation introduced by council member Peter Koo, which could accomplish some of what the task force couldn’t. His proposal would have city agencies report annually to the mayor (who would then report that information to the city council) and describe every automated decision or algorithm system that they’ve used and what it’s used for, as well as other information.
“Many advocates feel the task force fell short in its duty to provide a meaningful accounting of how the city uses algorithms,” Koo said in a statement to Recode. “We drafted this bill so there could be no more misinterpretations of what we believe should be a full and transparent accounting of how the city uses these automated decision systems.”
In New York’s state Senate, there’s also proposed legislation that would create a task force to look at the “automated decision systems” used by state agencies.

State governments are leading the charge for greater AI transparency

Across the country, there are other efforts at promoting transparency in government use of algorithms. Vermont is in the midst of a study that, in part, focuses on the AI used by its state government. Alabama has established a commission on artificial intelligence whose report is due in May of next year.
Massachusetts, meanwhile, is currently considering legislation that would create a commission analyzing the use of automated decision systems in the commonwealth. That commission’s responsibilities would include creating “a complete and specific survey of all uses of automated decision systems by the commonwealth of Massachusetts and the purposes for which such systems are used,” according to draft legislation.
In Washington, there’s hope that algorithmic accountability legislation could be revived. The proposal would establish standards and transparency guidelines for algorithmic systems used by government agencies. But the legislation has struggled in the face of some of the challenges also encountered by New York City’s task force. For one thing, getting agencies to report on their use of automated decision systems is difficult, and expensive, says Shankar Narayan, the Washington ACLU’s technology and liberty project director. (Narayan worked on the bill.)
He explains that the bill was pegged with a hefty cost estimate because agencies anticipated struggling to figure out what systems they were already using and how they worked.
“Often, people within agencies really don’t understand how these systems make decisions. They’re marketed to by vendors behind the scenes, who come with proprietary tools that aren’t available for scrutiny or [aren’t] capable of being scrutinized,” Narayan says. “Potentially, even the vendors themselves may not know if they’re having a disproportionate impact on protected groups.”
He says the definition of automated decision-making systems used in the bill will likely be changed and possibly narrowed in order to make the legislation easier for agencies to understand and comply with.
In the meantime, Richardson warns against a game of “policy chicken.” If government officials take a wait-and-see approach and hold off to see how other municipalities and states address these questions, they risk ignoring harms that algorithmic systems may already be causing.
For New York, Richardson pointed to Koo’s bill as a next step, stressing that transparency is a “diagnostic tool” that advocates need in order to propose interventions for government algorithms’ potential fairness and bias problems.
“We only have a taste, or an idea, of what’s at risk and what’s at stake,” she says. “But we’ll have a better understanding when we understand the landscape of uses.”
