A Blog by Jonathan Low


Jan 1, 2012

Silicon Superego: How Much Personality Do We Want From Our Gadgets?

2011 was a year in which the primacy of design in human life became a fact rather than an argument.

Because technology is now central to the functional reality of our lives, thinking about how and why we use these devices has raised a broader question: what will deepen that dependency while also making it more useful - and more pleasurable?

One crucial element in this evolution is the role that engineered technological personality traits may play. We growl at the annoying but frequently helpful voice of the GPS device as we try to U-turn against traffic. And we scream at the CRM (customer relationship management) software that intermediates any attempt to seek help from our corporate 'service' providers. But we are also beguiled by the feminine Siri voice that invigorates the latest iPhone - and we expect more to come, despite knowing that it is a design element, not a functioning two-way interaction. Coevolution teaches us that behavior induces responses, which in turn further inform our own actions.

Our dependence on technology and disembodied relationships - primarily, for now, at the commercial level - may make the psychological and moral aspects of design an important new factor in our evolving interplay with both the tangible and intangible in our lives. JL

David Pogue comments in Scientific American:
The most buzzed-about new feature in the latest iPhone is Siri, the virtual minion. You can give her an amazing range of spoken commands, without any training or special syntax, and marvel as she does your bidding.

You can say, “Call my assistant” or “Wake me up at eight” or “Make an appointment with Dr. Woodward for Friday at 2 p.m.” You can say, “How do I get to the airport from here?” or “Play Taylor Swift” or “When I get to the office, remind me to file the Smithers report.” You can ask her how many fluid ounces there are in a liter or the distance to Mars or when George Washington was born.

In each case, Siri briefly contacts Apple’s servers and then responds in a calm female voice, simultaneously displaying the information you requested.

It didn’t take long, though, for Internet wiseacres to start asking her questions with less concrete answers—and marveling at her witty, sometimes snarky replies.

You: “Siri, I love you.” Siri: “That’s sweet, David. Now can we get back to work?”

You: “What’s the meaning of life?” Siri: “I can’t answer that now, but give me some time to write a very long play in which nothing happens.”

You: “Open the pod bay doors, Siri.” Siri: “I’m sorry, David, I’m afraid I can’t do that. [Pause] Are you happy now?”

Siri is a breakthrough in voice control, sure, but she’s also a breakthrough in computerized personality. The question is: Do we want our gadgets to have personality? Programmers and designers have always struggled with that question. The creators of every operating system have had to come up with a consistent syntax for communicating with people. Over the years various companies have flitted uncertainly from one philosophy to another.

Until Siri came along, Apple’s software had always avoided personal pronouns such as “I” and “you.” The result: some awkward passive-voice snarls like “The document could not be opened because it could not be found.”

Microsoft’s dialog-box English not only favors the passive voice but is usually aimed at programmers, not humans: “SL_E_CHREF_BINDING_OUT_OF_TOLERANCE: The activation server determined that the specified product key has exceeded its activation count.” Ah, of course!

Citibank’s automated-teller machines lie at the opposite end of the Emily Post spectrum. They take the “I”/“you” personal approach to an extreme. “Hello. How may I help you?” says the welcome screen. When you sign off, you get, “Thank you. It’s always a pleasure to serve you.” These machines even try to take the blame for your own dumb mistakes: “I’m sorry, I don’t recognize that password.”

Now, deep down—actually, not that far down—we all know that our computers are not really engaging us; every utterance they make was written by a programmer somewhere. So why do the software companies even bother? If everyone knows it’s just a trick, should we even care how personable our machines are?

Yes, we should.

The designers’ intention, no doubt, was to make their machines more user-friendly by simulating casual conversation with fellow humans. But there’s a side effect of that intention: in trying to program machines that speak like people, the programmers are forced to think like people.

In Citibank’s case, writing messages in that second-person conversational style forced the engineers to put themselves in the mind-set of real humans. You can’t write an “I” statement for your ATM without also considering the logic, the terminology and the clarity of those messages. Someone writing in that frame of mind would never come up with “The activation server determined that the specified product key has exceeded its activation count.”

The genius of Siri’s “personality,” meanwhile, is that she doesn’t care if you say, “Will it rain?” or “Will I need an umbrella?” or “What’s the forecast?” She is programmed to understand any wording. This time the payoff is more than user-friendliness; it’s happiness. When Siri does what you want, the first time, when you haven’t read any instructions or followed any rules, you feel a surge of pride at your instantaneous mastery.
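
To make that design point concrete: at its simplest, mapping many phrasings onto one underlying action is an "intent recognition" problem. The short Python sketch below is purely illustrative and assumes nothing about Apple's actual implementation - Siri's real pipeline runs server-side on statistical language models, and the pattern table and recognize_intent function here are hypothetical stand-ins.

    import re

    # Hypothetical phrasing patterns for a single "weather" intent.
    # Real assistants use statistical models; fixed patterns merely
    # illustrate how varied wordings can resolve to one action.
    INTENT_PATTERNS = {
        "weather_forecast": [
            r"will it rain",
            r"need an umbrella",
            r"what'?s the forecast",
        ],
    }

    def recognize_intent(utterance):
        """Return the first matching intent, or 'unknown'."""
        text = utterance.lower()
        for intent, patterns in INTENT_PATTERNS.items():
            if any(re.search(p, text) for p in patterns):
                return intent
        return "unknown"

    # All three phrasings resolve to the same intent:
    for q in ("Will it rain?", "Will I need an umbrella?", "What's the forecast?"):
        print(q, "->", recognize_intent(q))  # -> weather_forecast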

So yes, of course, machines that converse like people are a total fake-out, and we know it. But psychology is a funny thing—as when we’re watching a great magic show, we’re delighted even when we know it’s all a trick.
