A Blog by Jonathan Low


Apr 26, 2017

Evaluating the Risks and Rewards of Artificial Intelligence

Powerful forces have been unleashed. Whether they can be contained or even managed remains to be seen.

Since machines and the algorithms that drive them reflect the inclinations and biases of their programmers, it may just be that the instinct for self-preservation will assert itself. We hope. JL

Howard Yu comments in US News and World Report:

Technological advancement always demands a broad-based societal negotiation. As connectivity grows ubiquitous, the scale and catastrophic potential of technological failures also multiply. A computer programmer won't be able to explain a machine's behavior by reading the software code any more than a neuroscientist can explain your hot dog craving just by staring at an MRI scan. "Both the promise and peril are deeply intertwined."
Elon Musk once declared that artificial intelligence could "be more dangerous than nukes."
The billionaire founder of Tesla and SpaceX is known for his grandiloquence. But the rapid advancements in computing power since the turn of the 21st century mean that algorithms now pose some of the most testing questions for humankind.
In a letter to shareholders last week, Amazon CEO Jeff Bezos argued that the future is often easy to predict. "Big trends are not that hard to spot, [because] they get talked and written about a lot," he explained. "We're in the middle of an obvious one right now: machine learning and artificial intelligence."
Indeed, machine algorithms are already used to predict how we click, buy or lie; companies have automated how they mail, call and offer discounts. Thanks to machine algorithms, credit card companies can detect in real time which transactions are likely to be fraudulent; insurers can identify which customers are likely to file a claim, or who is likely to die. Such breakneck development has the world's foremost economists, scientists, entrepreneurs and policymakers scrambling to understand the full societal and ethical implications. Their core concern is how these machines learn, and how inscrutable that learning can be.
In its most basic form, artificial intelligence, known as AI, is enabled by building a network of hardware and software that mimics the web of neurons in the human brain. Computers are programmed to seek positive reinforcement in the form of scores, just as we humans seek pleasure. A programmer can therefore correct a machine's errors by telling an image recognition system, "That's a dolphin, not a fish," the same way we would teach a toddler.
By forgoing code built on hardwired rules, reinforcement learning has made autonomous machines possible, and these machines have attained a stunning mimicry of the intuitive wisdom once found only in human brains.
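To make the "learning from scores" idea concrete, here is a minimal sketch in Python of tabular Q-learning, one classic reinforcement learning method. The five-cell corridor, the rewards and the parameters are all illustrative assumptions made for this post, not taken from any system described above.

# A toy illustration of reinforcement learning: an agent on a
# hypothetical 5-cell corridor learns, purely from numeric rewards
# ("scores"), to walk right toward a goal. Every value here is an
# illustrative choice, not drawn from any real system.
import random

N_STATES = 5          # cells 0..4; the goal sits at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action_index]: the agent's learned estimate of future reward
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit current beliefs; occasionally explore at random.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Positive reinforcement: nudge the estimate toward observed reward.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print([round(max(q), 2) for q in Q])  # value estimates grow toward the goal

After a few hundred episodes the agent reliably walks to the goal, even though no rule saying "walk right" was ever written down; that is the sense in which such systems forgo hardwired rules.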
Algorithms' trouncing of humans on the quiz show Jeopardy, at chess, at the ancient board game Go and, most recently, at Texas Hold'em poker is among the latest evidence of the rapid maturity of intelligent machines. "This is the moment, I think, when we have the highest level of anxiety because we can see advances in AI that are beyond what we had expected," said Marc Benioff, chief executive of Salesforce.com, at the World Economic Forum in Davos this year.
As machines such as driverless cars continue to learn autonomously, their human creators may no longer be able to tell precisely how a machine chooses to achieve a stated goal. We can see the data that go in and the action that comes out, but we can't grasp what happens in between. Put differently, a computer programmer won't be able to explain a machine's behavior by reading the software code any more than a neuroscientist can explain your hot dog craving just by staring at an MRI scan of your brain.

If only smart machines were all we had to confront.
The world is becoming more connected. About 1,000 devices were connected to the internet in 1984; one million came online by 1992, and 10 billion by 2008. Cisco, an electronics firm, predicts that 50 billion devices will be connected by 2020.
As connectivity grows ubiquitous, the scale and the catastrophic potential of technological failures also multiply. Former Wall Street trader and famed essayist Nassim Nicholas Taleb called such improbable, high-impact events black swans: all swans were thought to be white until the first black swan was sighted. Because of the rarity of black-swan incidents, human cognition has not evolved to foresee these low-probability occurrences, let alone prevent them from happening.
It took only one Wall Street investment bank – Lehman Brothers – turning insolvent in 2008 to trigger a worldwide financial meltdown, and the Federal Reserve hadn't seen it coming. Chairman Ben Bernanke summarized his experience in a 2015 interview: "I don't know if there was much more we could have done."
In studying the Three Mile Island nuclear accident, Yale sociologist Charles Perrow concluded that conventional engineering approaches to ensuring safety – building in more warnings and safeguards – will always fail in the face of increasing system complexity. He called the nuclear accident a "normal accident": in a system that is sufficiently complex and tightly coupled, some failures are inevitable. By that measure, the Chernobyl accident in 1986, the Space Shuttle Columbia disaster in 2003, the 2008 financial crisis and the Fukushima Daiichi nuclear disaster in 2011 were, in fact, perfectly normal. We just don't know when or how the next black swan will show up.
It is little wonder, then, that some of the pioneers of the information revolution are also the most vocal critics of AI. Musk's misgivings prompted him to donate millions to OpenAI, a nonprofit AI research lab, and to urge other billionaire techies like Facebook's Mark Zuckerberg and Google's Larry Page to proceed with caution on their myriad machine learning and robotics experiments. "The future is scary and very bad for people," Apple co-founder Steve Wozniak once gloomily proclaimed. "Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?" Stephen Hawking, the great theoretical physicist at the University of Cambridge, is even more ominous: "The development of full artificial intelligence could spell the end of the human race," he told the BBC.
While such disconsolate forecasts may be exaggerated, few can deny that as we relentlessly march into an age when men and machines are increasingly connected, we may be sowing the seeds of our own destruction. Ray Kurzweil, who often writes about the coming android invasion as part of his Singularity thesis, says, "Both the promise and peril are deeply intertwined."
Technological advancement always demands a broad-based societal negotiation. Whether we use nuclear power to generate electricity, to spur an arms race or abandon it wholesale is a human choice; technology itself has no say in the matter. As with the general-purpose technologies that preceded AI, how we conduct the most sweeping technological experiment of our time should not be left to a handful of Silicon Valley luminaries. The rest of us also need to engage in the debate, not just stare down at the tiny smartphone screens in our hands.
