A Blog by Jonathan Low


Jan 13, 2018

The Problem With Chatty Apps That AI May Solve

In their haste to create personalized connections between technology and its users, software developers often fail to design for context.

Artificial intelligence may prove better at figuring that out than humans have so far. JL


Sara Wachter-Boettcher reports in Wired:

Text strings are used in software to tailor a message to its context. But technology companies have become obsessed with bringing more “personality” to their products, making their copy cute, quirky, and “fun.” One of the key components of having a great personality is knowing when to express it, and when to hold back. That’s a skill most humans learn as they grow up but seem to forget as soon as they’re tasked with making a dumb machine “sound human.” Cutesy copy strings create false intimacy between us and the products we use.
One day in 2015, Dan Hon put his toddler, Calvin, on the scale. He was two and a half years old, and he clocked in at 29.2 pounds—up 1.9 pounds from the week before, and smack in the middle of the normal range for his age. Hon didn’t think twice about it.

But his scale did. Later that week, Hon received Calvin’s “Weekly Report” from Withings, the company that makes his “smart scale” and accompanying app. It told Calvin not to be discouraged about his weight gain, and to set a goal to “shed those extra pounds.”
“They even have his birth date in his profile,” Hon tweeted about the incident. “But engagement still needs to send those notifications!”
Withings specializes in “smart” scales, meaning internet-connected devices that save your data to an account you access using an app on your smartphone or other device. In the app, you can see your weight over time, track trends, and set goals.

There’s just one problem: the only goal Withings understands is weight loss.
Sometimes, like in Calvin’s case, the result is comically absurd: Most people would chuckle at the idea of a healthy two-year-old needing a weight goal. But in other cases, it might be downright hurtful. Like the default message that Withings sends if you weigh in at your lowest ever: “Congratulations! You’ve hit a new low weight!” the app exclaims. Hon’s family got that one too—this time, for his wife. She’d just had a baby, not met a goal. But Withings can’t tell the difference.
Have an eating disorder? Congratulations!
Just started chemo? Congratulations!
Chronically ill? Congratulations!
Withings is designed to congratulate any kind of weight loss—even if that’s not your goal.
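The flaw described above is easy to picture as a notification rule that checks only the number, never the person. Here is a minimal sketch, not Withings' actual code; the function names, the `goal` parameter, and the "lose" value are hypothetical, introduced purely to illustrate the difference between a context-blind rule and a context-aware one.

```python
# Context-blind rule: fires a congratulation on ANY new low weight,
# whether the user is dieting, postpartum, ill, or a toddler.
def naive_message(weight, history):
    if history and weight < min(history):
        return "Congratulations! You've hit a new low weight!"
    return None

# Context-aware rule: only congratulates when the user has actually
# set a weight-loss goal (the `goal` field here is hypothetical).
def context_aware_message(weight, history, goal=None):
    if goal == "lose" and history and weight < min(history):
        return "Congratulations! You've hit a new low weight!"
    return None

print(naive_message(120.0, [135.0, 128.0]))          # fires for anyone
print(context_aware_message(120.0, [135.0, 128.0]))  # silent without a stated goal
```

The fix costs one extra condition; the hard part, as the article argues, is remembering to ask whether the condition belongs there at all.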

Withings is far from the only service with this problem. Everywhere you turn online, you’ll find products that just can’t wait to congratulate, motivate, and generally “engage” you...no matter what you think about it.

Never Miss a Terrible Thing

One day in September 2016, Sally Rooney felt her phone buzz. She looked at the screen and saw a notification from Tumblr: “Beep beep! #neo-nazis is here!” it read.
Rooney’s not a neo-Nazi. She’s an Irish novelist. “I just downloaded the app—I didn’t change any of the original settings, and I wasn’t following that tag or indeed any tags,” she told me. “I had a moment of paranoia wondering if I’d accidentally followed #neo-nazis, but I hadn’t.”

Yet there Rooney was anyway, getting alerts about neo-Nazis, wrapped up in the sort of cutesy, childish little package you’d expect to hear in a preschool. How did this happen? After a screenshot of the notification went viral on Twitter, a Tumblr employee told Rooney that it was probably a “what you missed” notification. Rooney had previously read posts about the rise in fascism, and the notification system had used her past behavior to predict that she might be interested in more neo-Nazi content.
Now on to the copy. As you might guess, no one at Tumblr sat down and wrote that terrible sentence. They wrote a text string: a piece of canned copy into which any topic could be inserted automatically: “Beep beep! #[trending tag] is here!” (In fact, another Tumblr user shared a version of the notification he received: “Beep beep! #mental-illness is here!”)
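A copy string like that is just a template with a slot for the trending tag, which is exactly why it can go so wrong: the playful framing is fixed before anyone knows what will fill the slot. The sketch below is illustrative only; the `SENSITIVE` set and the fallback wording are assumptions of mine, not anything Tumblr has described.

```python
# A canned copy string with a slot for any trending tag, as described
# in the text. The tone is chosen before the topic is known.
def beep_notification(tag):
    return f"Beep beep! #{tag} is here!"

# One possible guard: reserve the playful copy for topics that can
# bear it, and fall back to plain wording otherwise. (The tag list
# and fallback phrasing here are hypothetical.)
SENSITIVE = {"neo-nazis", "mental-illness"}

def guarded_notification(tag):
    if tag in SENSITIVE:
        return f"New posts in #{tag}"
    return beep_notification(tag)

print(beep_notification("cats"))          # Beep beep! #cats is here!
print(guarded_notification("neo-nazis"))  # New posts in #neo-nazis
```

A static blocklist is a blunt instrument, of course; the deeper point is that any template with a free slot needs some notion of which fills are compatible with its tone.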
Text strings like these are used all the time in software to tailor a message to its context—like when I log into my bank account and it says, “Hello, Sara” at the top. But in the last few years, technology companies have become obsessed with bringing more “personality” into their products, and this kind of copy is often the first place they do it—making it cute, quirky, and “fun.”

I’ll even take a little blame for this. In my work as a content strategy consultant, I’ve helped lots of organizations develop a voice for their online content, and encouraged them to make their writing more human and conversational. If only I’d known that we would end up with so many inappropriate, trying-too-hard, chatty tech products.

One of those products is Medium. In the spring of 2015, Kevin M. Hoffman wrote a post about his friend Elizabeth, who had recently died of cancer. Hoffman works in technology, and he knew Elizabeth from their time spent putting on conferences together. So he wanted to share his memorial in a place his peers, and hers, would see it. Medium was an obvious choice.
A few hours after posting his memorial, he got an email from Medium letting him know how his post was doing, and telling him that three people had recommended it. And inserted in that email was the headline he had written for his post, “In Remembrance of Elizabeth,” followed by a string of copy: “Fun fact: Shakespeare only got 2 recommends on his first Medium story.”
It’s meant to be humorous—a light, cheery joke, a bit of throwaway text to brighten your day. If you’re not grieving a friend, that is. Or writing about a tragedy, or a job loss, or, I don’t know, systemic racial inequalities in the US criminal justice system.
When the design and product team at Medium saw Kevin’s screenshot, they cringed too—and immediately went through their copy strings, removing the ones that might feel insensitive or inappropriate in some contexts. Because, it turns out, one of the key components of having a great personality is knowing when to express it, and when to hold back. That’s a skill most humans learn as they grow up and navigate social situations—but, sadly, seem to forget as soon as they’re tasked with making a dumb machine “sound human.”

Fake Friends

The neo-Nazi Tumblr notification that Sally Rooney received struck a nerve: As I write this, her screenshot has been retweeted nearly seven thousand times, and “liked” more than twelve thousand times. It even caught the attention of Tumblr’s head writer, Tag Savage. “We talked about getting rid of it but it performs kinda great,” he wrote on Twitter, as Rooney’s screenshot went viral.
When Savage says the “beep beep!” message “performs,” he means that the notification gets a lot of people to open up Tumblr—a boon for a company invested in daily active users and monthly active users. And for most tech companies, that’s all that matters. Questions like, “is it ethical?” or “is it appropriate?” simply aren’t part of the equation, because ROI always wins out.
All these cutesy copy strings and celebratory features create a false intimacy between us and the products we use. We’re not actually friends with our digital products, no matter how great their personalities might seem at first. Real friends don’t try to tell you jokes when you’re in the middle of a crisis. They don’t force you to relive trauma, or write off hate speech, or any of the things tech products routinely do in the name of engagement. They simply care. It’s time for the tech industry to get better at that.
