A Blog by Jonathan Low


Aug 3, 2016

Why Instead of Asking If Robots Are Becoming More Human, We Should Be Asking If Humans Are Becoming More Robotic

Co-evolution asserts that closely associated species influence each other's development. Which is why, as machines attain more human-like characteristics, humans may be becoming more machine-like.

The issue is more than idle speculation because our interpersonal relationships, cultural mores, diplomatic exchanges and economic vitality depend to a great degree on our ability to interact with others, however they may be defined.

If we lose the spontaneity and irrationality that often inspires, infuriates and otherwise characterizes human interaction, we may be sacrificing a fundamental attribute of our civilization. JL

 Olivia Goldhill reports in Quartz:

We need an inverse Turing Test to determine to what extent humans are becoming indistinguishable from machines. Changes in technology and our environment are making humans more machine-like. Our obsession with efficiency fuels the infatuation with new technologies. Technology is changing our environment to make us more robotic. Growing surveillance and “nudges” are slowly transforming the way we behave.
For more than 65 years, computer scientists have studied whether robots’ behavior could become indistinguishable from human intelligence. But while we’ve focused on machines, have we ignored changes to our own capabilities? In a book due to be published next year, Being Human in the 21st Century, a law professor and a philosopher argue that we’ve overlooked the equally important, inverse question: Are humans becoming more like robots?
In 1950, computer scientist Alan Turing put forward what’s now known as the “Turing Test.” Essentially, Turing proposed that a key test of machine thinking is whether someone asking the same questions to both a human and a robot could tell which is which. This has since become an important method to evaluate artificial intelligence, with regular Turing Test competitions to determine the extent of robots’ growing ability to mimic human behavior.
But Brett Frischmann, professor at the Cardozo School of Law, and Evan Selinger, philosophy professor at Rochester Institute of Technology, argue that we need an inverse Turing Test to determine to what extent humans are becoming indistinguishable from machines. Frischmann, who has published a paper on the subject, says that changes in technology and our environment are slowly, but surely, making humans more machine-like.
You’ve probably heard people complain that technology is dehumanizing or that someone they know is acting “like a machine.” Earlier this year, US senator Marco Rubio was compared to a short-circuiting robot after he repeated the same scripted lines in a Republican debate. Frischmann also points out that it’s often hard to tell whether a call-center operator is human or robot at first, and Amazon warehouse employees have said that the degree of automated control involved in their work means, “We are machines, we are robots.”
These may seem like small examples, says Frischmann, but taken together they’re “meaningful.”

What does it mean to be human?

In order to test whether humans are becoming more machine-like, it’s important to define what makes us distinctively human. Philosophers have long considered this question, and often define human traits by comparing us to another category—typically, animals.
Frischmann and Selinger instead consider what distinguishes humans from machines. Several of these traits involve intelligence: common sense, rational thinking, and irrational thinking are all intrinsically human. Frischmann points out that, as humans, our emotions sometimes make us behave irrationally. “If we engineered an environment within which humans were always perfectly rational, then they’d behave like machines in a way we might be worried about,” he adds.
Another key category is autonomy and free will. The environment may influence our behavior, but it shouldn’t control it. “I have some range of choice about how I can be an author of my own life,” says Frischmann.
Frischmann and Selinger blame “techno-social engineering” for a growing machine-like behavior among humans, which is another way of saying that technology is changing our environment to make us behave in a more robotic way. Growing surveillance and “nudges” are slowly transforming the way we behave.
One seemingly innocuous example is electronic contracts: Those pages that ask you to click and agree to terms and conditions before proceeding with a download or update. “You see a little button that says ‘click to agree’ and what do you do? You click. Because it’s a stimulus response,” says Frischmann. “It’s easy to dismiss those things. But the fact that every day, you and I and millions of other people routinely respond to a stimulus and click and go without understanding what we’re getting ourselves into, we are behaving like machines. We’re being, in a sense, conditioned or programmed to behave that way.”
Frischmann also highlights Oral Roberts University in Oklahoma, which switched from asking students to keep a journal of physical activity to tracking their actions with Fitbit devices. This removed students’ ability to reflect on their own behavior, and their freedom to exaggerate or lie if they so chose. Your ability to reflect on your experiences is a key aspect of being human, says Frischmann, as is the ability to think about how you relate that behavior to others.
“For us, the Fitbit example is more about the culture of surveillance and the culture of a series of technologies that are tracking not just your activity in one context, but in a variety of contexts,” he says. “Before long, you’re not really thinking about your own activity.”

Why is this happening?

Dehumanization can’t simply be blamed on the growing use of technology. Instead, Frischmann says our fetishization of technology is behind the trend. We’re overly trusting and reliant on technological developments, mindlessly assuming that every new piece of tech must be beneficial.
The other key factor, he says, is our obsession with efficiency, which fuels the infatuation with new technologies. “If we can be made happy, cheaply, then what could be better?” he notes. “You don’t ask questions, you don’t resist. You want to minimize transaction costs. But sometimes being human is costly.”
Maintaining personal relationships, in particular, is a costly but ultimately valuable aspect of being human. “If we lose our ability to relate to each other along the way, because it’s efficient and cheap, we lose something of who we are.”
It’s entirely possible, says Frischmann, that humans and robots will become increasingly hard to distinguish, as much because of our machine-like behavior as because of robots’ human-like features. And could this eventually become the norm, with humans spending their entire lives acting like machines?
“I desperately hope we don’t get there,” he says. “I don’t think we’ll get there. But that’s kind of impossible to predict.”


Sorshe said...

Interesting article. Robots stole my job at ABC News and the robot takeover is going to be way worse than people realize: http://helpmebro.com/posts/xoU1EYBLnZ

Anonymous said...

Interesting thought at first, but none of these examples is a sign of any kind of dehumanization. A politician gets compared to a robot; Donald Trump is a dick (so where's the dick-ization of mankind?). Amazon workers are like robots? Watch Chaplin's Modern Times and you'll see the idea predates AI by decades. For the call-center example, define "at first": if it's 0.001s, then you also can't tell whether everyone you meet is a reptilian illuminati. Last but not least is the Agree-button example. It's exactly the opposite: we click straight away because we can't be bothered to read all that crap. It just shows humans are still lazy humans. I guess it could be a good reverse Turing test to see at which point someone stops giving a fuck about doing a boring task, since a machine never will. Yet this would define human behaviour as being lazy, which isn't true either. Machines are, for now, just tools: very complex hammers that we use to build logic. So, since we've been building houses for millennia, are humans becoming more hammer?

NB: For the surveillance thing, it's about the efficiency of data collection; reflecting on your actions is something you should do on your own.

Anonymous said...

This article seems to imply that too much rationality leads to a reduction in creativity. The opposite is actually true, in that almost every material thing humans have created has been created in a rational manner. Take the example of a big machine like a roller coaster, which is designed to provide humans with an emotional and physical thrill ride. I have heard some people say, as the coaster approaches the top of the first big drop, that they want to get off. It's too late for that, so the act of riding a roller coaster may be irrational for some people. However, the creative design and construction of the device must be conducted rationally, using the principles of statics and dynamics, so that the cars stay on the track and the passengers don't fall out. Any coaster with a loop must be properly designed or the rider's neck can be snapped.

Generally we should want people and thinking machines to act rationally.

Post a Comment