A Blog by Jonathan Low

 

Feb 10, 2014

Inside Google's Mysterious Ethics Board

Just as societies can't legislate morality, neither can organizations outsource responsibility for their values.

Google set a high bar for itself when it declared early in its history that its core founding principle was 'don't be evil.'

That proposition has been challenged and sometimes mocked as the company's decisions about doing business in China, cooperating with US government intelligence agencies and even affecting housing values in the Bay Area have come under scrutiny. The problem in most of these cases has been that Google is caught in the no-man's land between conflicting societal values.

The news that Google has acquired an artificial intelligence lab in the UK was accompanied by notice that an ethics board would be established to monitor the use of its products and services, presumably with an eye to their impact on personal privacy as well as related rights and liberties.

This is all good. Thank goodness someone is thinking about these things from a proactive, institutional standpoint rather than waiting to duck and cover when the inevitable problems arise. But as the following article explains, the institutional decision to address ethics can take many forms and has numerous hard-to-predict outcomes. The only certainty is that, as in every other aspect of organizational behavior, if there is a significant gap between public pronouncement and private performance, it will eventually be revealed to the operational - and reputational - detriment of all involved. JL

Patrick Lin and Evan Selinger comment in Forbes:

Just as poking unnecessarily into a hornet’s nest is dangerously foolish, companies are afraid that probing into issues beyond what is legally required may compromise plausible deniability and open up new possibilities for litigation.
The technology world was abuzz last week when Google announced it spent nearly half a billion dollars to acquire DeepMind, a UK-based artificial intelligence (AI) lab. With few details available, commentators speculated on the underlying motivation.
Is the deal linked to Google’s buying spree of seven robotics companies in December alone, including Boston Dynamics, “a company holding contracts with the US military”? Is Google building an unstoppable robot army powered by AI? Does Google want to create something like Skynet? Or, is this just the busybody gossip that naturally fills an information vacuum? The deal could simply be about improving search engine functionality.
All this uncertainty is driving an unnerving question: What exactly is DeepMind so worried about that it insisted on creating an ethics board? Is it a basic preventative measure, or is it a Hail-Mary pass to save “humanity from extinction”? Whatever the answer, we don’t want to feed the rumor mill here. But as professional ethicists, we can throw some light on the mysterious nature of ethics boards and what good they can do.


It’s fair to assume that the smart folks at DeepMind have thought deeply about AI and its implications. AI is a very powerful technology that is largely invisible to the average person. Right now, AI controls airplanes, stock markets, information searches, surveillance programs, and more. These are important applications that can’t help but have a tremendous impact on society and ethics, increasingly so as futurists predict AI will become ever more pervasive in our lives.
AI developers are thus under pressure to get it right. Just as we’d want to make sure you knew how to be a responsible gun owner before we sold you one, DeepMind seems to have the same concern for commonsense responsibility as it sells potent AI technology and expertise. But because DeepMind is looking for ethical guidance from a review board, there are key cautionary issues to keep in mind as we follow its development.
1. Ethics Isn’t Just About Legal Risk
The first issue to be concerned with is the limits of ethics framed as legal advice.
We don’t know who will be invited to be on the ethics board, but we do know that “chief ethics officer” has been a popular role in business for more than a decade. That position has primarily been filled by lawyers focused on compliance issues, or on following extant law. Google exemplifies this trend with its Ethics & Compliance team, which works “with outside ethics counsel to ensure compliance with all relevant political laws and the associated filings and reports.”
This specific focus can lead to wonderful outcomes, such as decreasing consumer risk and improving public safety. But let’s not kid ourselves: the focus is dictated by the self-interested goal of minimizing corporate liability. Harms that aren’t currently prohibited by law therefore receive little consideration. This is a notoriously grey moral area for emerging technologies, since they are usually unanticipated and unaddressed by laws or regulations. Just as poking unnecessarily into a hornet’s nest is dangerously foolish, companies are afraid that probing into issues beyond what is legally required may compromise plausible deniability and open up new possibilities for litigation.
As it turns out, there isn’t much law that directly governs AI research—though the usual business laws about privacy, product liability, and so on still apply. So, DeepMind’s demand for an ethics board may be a signal that it is interested in more than legal risk-avoidance.
If this is the case, we hope the key players appreciate the full scope of ethics. Ethics isn’t just about dictating rules for what you should and should not do. Especially in the domain of technology ethics, the answers to pressing questions tend to be unclear: the law is often undefined; the applications of new technologies are uncertain; and social and political values conflict, both internally and with one another, in new ways.
A technology ethics board, therefore, can be an invaluable canary in the coal mine—scouting for explosive issues ahead of an emerging technology, before the law eventually turns its attention to these new problems and to the company itself. An ethics board might suggest that research and applications be taken in a direction that avoids such problems entirely. Or, it could recommend an open discussion to clarify and defuse toxic issues before a public backlash.
2. Internal vs. External Advisors: Pros And Cons
The second issue to be concerned with is the limits of setting up an internal ethics board.
On the one hand, an internal ethics board has special standing. It can potentially influence corporate leadership to a greater degree than an external board or advisor can. It may have access to privileged information on the inside, and it can provide on-demand guidance as needed. So, even if Google also consults with outside ethicists, there’s real value in creating in-house capabilities.
In-house ethics committees have been a mainstay in medicine for the past 30 years, since a US Presidential commission recommended them in 1983. Those committees include lawyers too, but also doctors, nurses, bioethicists, theologians, and philosophers—an approach far better equipped than mere risk-avoidance to tackle controversial procedures, such as ending life support and amputating healthy limbs.
Internal ethics boards—not just lawyers focused on legal compliance—are less common in other industries, though they seem to be trending up, given the rise of technology ethics in the last decade or so. Besides the Google-DeepMind deal, the automotive giant BMW recently told us that it (wisely) had an internal ethics team to help guide development of automated or self-driving cars.
On the other hand, external boards have unique benefits. They can be much more independent than internal advisors—unafraid to offend management and less inclined to pull their punches. Simply put, outsiders typically have greater freedom to call it like they see it without worrying about losing their jobs or being co-opted as hired guns. Having distance from the center of things, outsiders are also less likely to have drunk the metaphorical Kool-Aid.
This separation can result in more objective counsel and a greater capacity to look past individual items and see things holistically. For these and other reasons, ethicists like us are increasingly called upon to advise industry, government, and nongovernmental organizations, such as the US Department of Defense.

3. Lip-Service About Ethics
The third issue to be concerned with is ethical smokescreens.
News reports stated that DeepMind had “pushed” for and “insisted” on an ethics board, implying that Google was reluctant about the idea. This possibility raises questions about how long the arrangement will stick and how seriously Google will take it. Did Google agree to an ethics board merely to appease DeepMind, knowing that it would make for nice window-dressing? Or will the board have real opportunities to provide input?
This is a familiar worry: that organizations engage with ethics only as part of a public-relations checklist, to show that they care. We’ve heard it in connection with the defense community’s interest in weapons ethics, for instance, on drones and military human enhancements. But our experience is that these organizations really do care about ethics and account for it as much as they can in their decisions. The design of Stuxnet, as the Naval Postgraduate School’s professor George Lucas and others have noted, seems to have paid close attention to academic articles about the ethics of cyberweapons. Even if motivated by self-interest, inviting ethics in is still a positive step forward for society at large.
While we wish to avoid rumors, they can nonetheless be revealing; in the absence of official statements, speculation inevitably fills the void. One rumor suggesting that Google is taking ethics seriously concerns its possible withdrawal from the DARPA Robotics Challenge—a contest it is currently winning—which bestows not just prize money but also great international acclaim. Whether Google believes the military market is insufficiently profitable or simply doesn’t want to participate in the military-industrial complex, many anti-war campaigners are relieved either way.
Back to DeepMind… If the AI ethics board exists merely for show, Google will be missing an incredibly valuable opportunity to embody its often-repeated philosophy of “Don’t Be Evil.” Without an ethics board or other such experts to help define “evil” and identify evil activities, it will be difficult—as critics point out—to truly live up to that world-famous motto.
It’s not just Google’s soul at stake here; it’s also about the future of our increasingly wired world. Whether its ethics board is tasked mainly with privacy issues or with existential risks, even natural skeptics like us are encouraged by the news. We hope it inspires other technology leaders to be aware of the power they wield and their responsibility to us all.
