A Blog by Jonathan Low

 

Nov 4, 2018

The Reason Creating An AI Code of Ethics Is Harder Than People Think

A global economy does not necessarily mean acceptance of universal values or ethical choices.

And human behavior is based on evolving cultural norms. JL


Karen Hao reports in MIT Technology Review:

Establishing ethical standards doesn’t necessarily change behavior. Technology often highlights people's differing ethical standards. MIT asked millions of people from around the world to weigh in on variations of the classic "trolley problem" by choosing who a car should try to prioritize in an accident. The results show huge variation across different cultures.
Over the past six years, the New York City Police Department has compiled a massive database containing the names and personal details of at least 17,500 individuals it believes to be involved in criminal gangs. The effort has already been criticized by civil rights activists who say it is inaccurate and racially discriminatory.
"Now imagine marrying facial recognition technology to the development of a database that theoretically presumes you’re in a gang," Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense fund, said at the AI Now Symposium in New York last Tuesday.
Lawyers, activists, and researchers emphasize the need for ethics and accountability in the design and implementation of AI systems. But this often ignores a couple of tricky questions: who gets to define those ethics, and who should enforce them?
Photo: Sherrilyn Ifill (NAACP Legal Defense Fund), Timnit Gebru (Google), and Nicole Ozer (ACLU) in conversation at the AI Now 2018 Symposium. (Andrew Federman for AI Now Institute)
Facial recognition is not only imperfect: studies have shown that the leading software is less accurate for dark-skinned individuals and women. By Ifill’s estimation, the police database is between 95 and 99 percent African American, Latino, and Asian American. "We are talking about creating a class of […] people who are branded with a kind of criminal tag," Ifill said.
Meanwhile, police departments across the US, the UK, and China have begun adopting facial recognition as a tool for finding known criminals. In June, the South Wales Police released a statement justifying its use of the technology because of the "public benefit" it provides.
Indeed, technology often highlights people's differing ethical standards—whether it is censoring hate speech or using risk assessment tools to improve public safety.
In an attempt to highlight how divergent people’s principles can be, researchers at MIT created a platform called the Moral Machine to crowd-source human opinion on the moral decisions that should be followed by self-driving cars. They asked millions of people from around the world to weigh in on variations of the classic "trolley problem" by choosing who a car should try to prioritize in an accident. The results show huge variation across different cultures.
Establishing ethical standards also doesn’t necessarily change behavior. In June, for example, after Google agreed to discontinue its work on Project Maven with the Pentagon, it established a fresh set of ethical principles to guide its involvement in future AI projects. Only months later, many Google employees feel those principles have fallen by the wayside amid a bid for a $10 billion Department of Defense contract. A recent study out of North Carolina State University also found that asking software engineers to read a code of ethics does nothing to change their behavior.
Philip Alston, an international legal scholar at NYU’s School of Law, proposes a solution to the ambiguous and unaccountable nature of ethics: reframing AI-driven consequences in terms of human rights. "[Human rights are] in the constitution," Alston said at the same conference. "They’re in the bill of rights; they’ve been interpreted by courts," he said. If an AI system takes away people’s basic rights, then it should not be acceptable, he said.
Photo: Philip Alston (NYU School of Law), Virginia Eubanks (University at Albany, SUNY), and Vincent Southerland (Center on Race, Inequality, and the Law at NYU) on stage at the symposium. (Andrew Federman for AI Now Institute)
Alston isn’t the only one who has come up with this solution. Less than a week before the Symposium, the Data & Society Research Institute published a proposal for using international human rights to govern AI. The report includes recommendations for tech companies to engage with civil rights groups and researchers, and to conduct human rights impact assessments on the life cycles of their AI systems.
"Until we start bringing [human rights] into the AI discussion," added Alston, "there’s no hard anchor."
