A Blog by Jonathan Low


Jun 18, 2019

The Reason Investors Are Pressuring AI Startups To Address Biases, Ethics

Common sense: fewer biased outcomes or controversial applications mean a better reputation, more potential customers and higher sales. JL 

Jared Council reports in the Wall Street Journal:

Backers of artificial-intelligence startups are paying more attention to ethics and rooting out potential biases embedded in algorithms that power AI systems, a focus that is starting to affect the earliest stages of company development. (Some) encourage the companies (they) invest in to incorporate feedback into their algorithms to neutralize potential bias. “A company that opens themselves up to all of their users and customers will generate less-biased outcomes.”
Backers of artificial-intelligence startups are paying more attention to ethics and rooting out potential biases embedded in algorithms that power AI systems, a focus that is starting to affect the earliest stages of company development.
Executives at venture-capital firms and tech accelerators have spurred three main improvements at startups: a code of ethics that guides the AI startup’s operations, a tool that explains how an algorithm makes its decisions, and a set of best practices that includes consistent and open communication as well as immediate feedback about an algorithm’s output.
The changes come amid increased regulatory scrutiny of AI software, including allegations by the Department of Housing and Urban Development in March that Facebook Inc.’s algorithms helped advertisers violate fair-housing laws and San Francisco’s decision last month to ban the municipal use of facial-recognition systems. Debate about responsible use of AI is growing: This summer, the European Commission plans to assess a summary of ethical guidelines for AI technology.
Andreas Roell, managing partner at San Diego-based Analytics Ventures, said his two-year-old firm started seriously discussing AI ethics about a year ago, after being pitched by a group of horse breeders that wanted to use AI to identify the best thoroughbreds to breed for racing.
Analytics Ventures didn’t make the investment, but Mr. Roell said the incident got him thinking about the ethics of using AI to breed animals. That prompted his firm to create a code of ethics and to start embedding ethical practices in its AI companies. Analytics Ventures is a venture studio, meaning it not only invests in companies but builds them from scratch in-house and manages their administrative tasks.
Analytics Ventures’ eight startups use a tool called Klear, built internally by a team of roughly 20 AI scientists, that forensically explains why an AI system made the decisions it did.
“If a human cannot understand the logic behind an [AI system’s] action, it’s much more difficult to contain,” Mr. Roell said. “So I see explainability as a core component of having an ethical guardrail around AI.”
Eventually, “What I would like to see is that every single entity of ours has a designated AI ethics officer,” Mr. Roell said.
The leaders of High Alpha, an enterprise-software venture studio based in Indianapolis, said they have received pitches that didn’t pass ethical muster, including a startup that sought to rank the value of shoppers at retail stores in part by using facial recognition.
Partner Eric Tobias said the firm began considering the ethical implications of AI about two years ago, after the backlash against Chicago-based Geofeedia, a maker of social-media geo-tagging software that law-enforcement agencies could use to identify people at protests or large gatherings. High Alpha didn’t invest in Geofeedia but all four of its partners did so as individuals.
The controversy prompted High Alpha leaders to examine the ethical implications of its portfolio companies’ software during the company creation process. They began inviting nontechnologists such as marketing specialists or designers to meetings about software under development to bring in different perspectives, Mr. Tobias said.
The strategy proved fruitful at a 2017 meeting about portfolio company Pattern89, which uses AI to recommend images and copy that marketers should use in social ads. The company collects information on past ads and uses an Amazon.com Inc. service to tag the images to understand what they are showing. Pattern89 then assesses the relationship between tags and an ad’s performance.
A High Alpha designer at the meeting asked about the kinds of tags that fed Pattern89’s algorithms, said Mark Clerkin, High Alpha’s vice president of data science. Her underlying concern, Mr. Clerkin said, was whether those attributes could be proxies for race or religion. “One attribute that came back was afro hairstyles...so we decided not to utilize that one,” Mr. Clerkin said—meaning the algorithm wouldn’t take afros into account when building its recommendations for what to show in ads.
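The practice Mr. Clerkin describes, removing image tags that could serve as proxies for protected attributes before they feed the model, can be sketched roughly as follows. This is a minimal illustration, not Pattern89's actual pipeline; the tag names and the blocklist are assumptions drawn from the article's one example.

```python
# Hypothetical sketch: drop tags flagged as potential demographic proxies
# before they are used as features in an ad-performance model.
# The blocklist below is illustrative, not a real product's list.
PROXY_TAGS = {"afro hairstyle"}  # the one example named in the article

def filter_features(ad_tags):
    """Return only the tags that are not on the proxy blocklist."""
    return [t for t in ad_tags if t.lower() not in PROXY_TAGS]

tags = ["smiling person", "Afro Hairstyle", "outdoor", "laptop"]
print(filter_features(tags))  # ['smiling person', 'outdoor', 'laptop']
```

The key design choice is that the exclusion happens at the feature level, so the recommendation model never sees the sensitive attribute at all, rather than trying to correct for it after training.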
A tech accelerator run by Pittsburgh-based Innovation Works this year introduced a voluntary ethics component to its 27-week program for startups, in collaboration with Carnegie Mellon University. All 12 companies taking part in the program chose to participate in the ethics pilot, which touches on topics including bias and data privacy and asks founders to produce an ethical-values document, said Ilana Diamond, managing director of hardware at Innovation Works.
Sometimes, the bias in an algorithm isn’t obvious. Gordon Ritter, founder and general partner of San Francisco-based venture-capital firm Emergence Capital, gave an example from a portfolio company, Seattle-based Textio Inc., whose software generates language for companies to use in recruiting emails and job postings. With feedback from millions of user interactions across its platform, Textio has been able to identify words such as “synergy” and “stakeholders” that have been shown to deter minority candidates from applying for jobs.
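The kind of feedback loop described here, surfacing words in a job posting that data has shown to deter some candidates, can be sketched as a simple text scan. The word list and the matching logic below are assumptions for illustration; Textio's actual models are far more sophisticated than a static blocklist.

```python
# Illustrative sketch: flag words in a job posting that feedback data
# suggests deter some candidates. The word set uses only the two
# examples named in the article; it is not Textio's real list.
DETERRING_WORDS = {"synergy", "stakeholders"}

def flag_words(posting):
    """Return the flagged words found in the posting, sorted."""
    words = posting.lower().replace(",", " ").replace(".", " ").split()
    return sorted(set(words) & DETERRING_WORDS)

posting = "Drive synergy across teams and report to key stakeholders."
print(flag_words(posting))  # ['stakeholders', 'synergy']
```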
Mr. Ritter said that over the past two years, Emergence Capital, which has $1.35 billion in assets under management, has encouraged the companies it invests in to incorporate such feedback into their algorithms to neutralize potential bias.
“A company that opens themselves up to all of their users and customers will generate less-biased outcomes,” he said.
