A Blog by Jonathan Low

 

Apr 2, 2019

The Reason for the Increasingly Angry AI Ethics Debate in Silicon Valley

As AI plays a greater role in decisions with financial, legal and socio-cultural implications, the question of not just who is designing the code, but who is overseeing their efforts, becomes more important - and more volatile - as the stakes get higher. JL


Sam Levin reports in The Guardian:

Major tech corporations have launched AI “ethics” boards that not only lack diversity, but include powerful people with interests that don’t align with the ethics mission. The result is what some see as a systemic failure to take AI ethics concerns seriously, despite widespread evidence that algorithms, facial recognition, machine learning and other automated systems replicate and amplify biases and discriminatory practices.
When Stanford announced a new artificial intelligence institute, the university said the “designers of AI must be broadly representative of humanity” and unveiled 120 faculty and tech leaders partnering on the initiative.
Some were quick to notice that not a single member of this “representative” group appeared to be black. The backlash was swift, sparking discussion on the severe lack of diversity across the AI field. But the problems surrounding representation extend far beyond exclusion and prejudice in academia.
Major tech corporations have launched AI “ethics” boards that not only lack diversity, but sometimes include powerful people with interests that don’t align with the ethics mission. The result is what some see as a systemic failure to take AI ethics concerns seriously, despite widespread evidence that algorithms, facial recognition, machine learning and other automated systems replicate and amplify biases and discriminatory practices.
This week, Google also announced an “external advisory council” for AI ethics, including Dyan Gibbens, the CEO of a drone company, and Kay Coles James, the president of a rightwing thinktank who has a history of anti-immigrant and transphobic advocacy.
For people directly harmed by the fast-moving and largely unregulated deployment of AI in the criminal justice system, education, the financial sector, government surveillance, transportation and other realms of society, the consequences can be dire.
“Algorithms determine who gets housing loans and who doesn’t, who goes to jail and who doesn’t, who gets to go to what school,” said Malkia Devich Cyril, the executive director of the Center for Media Justice. “There is a real risk and real danger to people’s lives and people’s freedom.”
Universities and ethics boards could play a vital role in counteracting these trends. But they rarely work with people who are affected by the tech, said Laura Montoya, the cofounder and president of the Latinx in AI Coalition: “It’s one thing to really observe bias and recognize it, but it’s a completely different thing to really understand it from a personal perspective and to have experienced it yourself throughout your life.”
It’s not hard to find AI ethics groups that replicate power structures and inequality in society – and altogether exclude marginalized groups.
The Partnership on AI, an ethics-focused industry group launched by Google, Facebook, Amazon, IBM and Microsoft, does not appear to have black board members or staff listed on its site, and has a board dominated by men. A separate Microsoft research group dedicated to “fairness, accountability, transparency and ethics in AI” also excludes black voices.
Axon, the corporation that manufactures Tasers, launched an AI ethics board last year. While its makeup is racially diverse, it includes a number of leaders from law enforcement, the sector that has faced growing scrutiny over how it uses Axon products in discriminatory and fatal ways.
A major joint AI ethics research initiative of Harvard and Massachusetts Institute of Technology (MIT) has one woman on its board, and the five directors from the Harvard Berkman Klein Center whose research is tied to the initiative are all white men. (Tim Hwang, an MIT director for the initiative, said inclusion was “one of the primary objectives” of the program and was integral to its grant process and research.)
After facing an uproar, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) added several black members to its webpage. A spokesperson told the Guardian the initial site was an incomplete list and that the additional names were not new partners.
Still, out of 20 people on the leadership team, only six are women.

Kristian Lum, the lead statistician at the Human Rights Data Analysis Group and an expert on algorithmic bias, said she hoped Stanford’s stumble would make the institution think more deeply about representation.
“This type of oversight makes me worried that their stated commitment to the other important values and goals – like taking seriously creating AI to serve the ‘collective needs of humanity’ – is also empty PR spin and this will be nothing more than a vanity project for those attached to it,” she wrote in an email.
When new AI ethics projects fail at diversity from the start, it makes it challenging to recruit different voices without tokenizing people, said Nicole Sanchez, a tech diversity advocate and the founder of Vaya Consulting.
“They just lost credibility,” Sanchez said of Stanford, which she attended as a student. “How would you feel if you’re one of the handful of black folks who are called now?”
Rediet Abebe, a computer science researcher and the cofounder of Black in AI, said it was encouraging that many in the field spoke out about Stanford: “It has been gratifying to see how quickly many caught this, called it out and are looking to work with folks at Stanford to fix it. I don’t know that the discourse would have been the same 10 years ago, or even two years ago,” she said.
An HAI spokesperson told the Guardian in an email that “we acknowledge that there’s progress to be made” and that Stanford was “committed to bringing in new voices and perspectives to this conversation”. The institute would be hiring 20 additional faculty members and recruiting fellows.
Google’s tactic, however, seems to be to ignore the backlash to the makeup of its AI group. The company has not responded to the Guardian’s repeated requests for comment.
Google’s AI partnership with James, the president of the rightwing Heritage Foundation, was particularly disturbing to some critics, given that she is anti-abortion, has fought LGBT protections and has promoted Trump’s proposed border wall.
Os Keyes, a PhD student at the University of Washington’s data ecologies laboratory, said the appointment of James was a “transparent calculation” that had nothing to do with ethics and was meant to appease conservatives in Washington DC in an effort to avoid regulations.
“This is a person who hates me and hates my community and is trying to cause us harm,” said Keyes, who is trans, adding that they felt “visceral horror” when they saw the announcement.
It was further evidence that corporate ethics initiatives like this are futile, Keyes added. “They can’t be trusted with self-policing. They shouldn’t be allowed to self-regulate.”
Sanchez said the decision to partner with James was “not even a dog whistle – that’s a bullhorn”, adding that there was no such thing as “neutral” AI: “The idea that you can do AI or technical ethics without a point of view is silly … The bias is deep inside the code. Whose values are embedded in the bias?”
The Heritage Foundation did not respond to requests for comment.
Mecole Jordan, a Chicago-based community organizer who is part of Axon’s ethics group, said she appreciated the opportunity to be involved, given that black communities are so often forced to fight damaging technology after it’s already been adopted.
“These things are done in a vacuum and rolled out, and we have to just live with it and respond to it, as opposed to being a part of the conversation,” she said.
