A Blog by Jonathan Low

 

Mar 15, 2020

Can Algorithms Nudge People Away From Their Workplace Biases?

Yes, tentatively, though that presumes the people programming the algorithm believe in the benefits of the intended outcome and are competent enough to code accordingly. JL

Corinne Purtill reports in the New York Times:

A nudge is a design choice that changes people’s behavior in a predictable way, without taking away their right to choose. A.I. is only as smart as the input it gets. If biases are present in the data, machines will learn and replicate them on a faster, bigger scale than humans could do. (But) if A.I. can identify the decisions that end up excluding people, it can also spot those that lead to more diverse and inclusive workplaces. The nudge isn’t supposed to replace human decision-making. It suggests alternatives so subtly that employees don’t realize they’re changing their behavior.
In 2014, engineers at Amazon began work on an artificially intelligent hiring tool they hoped would change hiring for good — and for the better. The tool would bypass the messy biases and errors of human hiring managers by reviewing résumé data, ranking applicants and identifying top talent.
Instead, the machine simply learned to make the kind of mistakes its creators wanted to avoid.
The tool’s algorithm was trained on data from Amazon’s hires over the prior decade — and since most of the hires had been men, the machine learned that men were preferable. It prioritized aggressive language like “execute,” which men use in their CVs more often than women, and downgraded the names of all-women’s colleges. (The specific schools have never been made public.) It didn’t choose better candidates; it just detected and absorbed human biases in hiring decisions with alarming speed. Amazon quietly scrapped the project.
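As a toy illustration of how that failure mode arises (this is not Amazon’s system, whose details were never made public; every résumé, word and label below is invented), a scorer trained on skewed historical decisions will boost whatever vocabulary happens to correlate with past hires:

```python
# Hypothetical sketch: a naive resume scorer "trained" on biased historical
# hiring decisions. All data here is invented for illustration.
from collections import Counter

# Past resumes with the (biased) human decision: 1 = hired, 0 = rejected.
# Most past hires were men, whose resumes skew toward words like "executed".
history = [
    ("executed roadmap led engineering team", 1),
    ("executed migration managed infrastructure", 1),
    ("chess club captain executed product launch", 1),
    ("women's college graduate led research team", 0),
    ("women's chess club captain managed projects", 0),
    ("led community outreach managed volunteers", 0),
]

# "Training": count how often each word appears in hires vs. rejections.
hired, rejected = Counter(), Counter()
for text, label in history:
    (hired if label else rejected).update(text.split())

def score(resume):
    """Score a resume by summing per-word hire-minus-reject counts."""
    return sum(hired[w] - rejected[w] for w in resume.split())

# Two comparable candidates; only the proxy words differ.
print(score("executed product roadmap led team"))   # high: "executed" helps
print(score("women's college graduate led team"))   # low: "women's" hurts
```

Even this crude version shows the core problem: the scorer never sees gender directly, yet it penalizes proxy words like “women’s” because the historical labels it learned from were already skewed.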
Amazon’s hiring tool is a good example of how artificial intelligence — in the workplace or anywhere else — is only as smart as the input it gets. If sexism or other biases are present in the data, machines will learn and replicate them on a faster, bigger scale than humans could do alone.
On the flip side, if A.I. can identify the subtle decisions that end up excluding people from employment, it can also spot those that lead to more diverse and inclusive workplaces.
Humu Inc., a start-up based in Mountain View, Calif., is betting that, with the help of intelligent machines, humans can be nudged to make choices that make workplaces fairer for everyone, and make all workers happier as a result.
A nudge, as popularized by Richard Thaler, a Nobel-winning behavioral economist, and Cass Sunstein, a Harvard Law professor, is a subtle design choice that changes people’s behavior in a predictable way, without taking away their right to choose.
Laszlo Bock, one of Humu’s three founders and Google’s former H.R. chief, was an enthusiastic nudge advocate at Google, where behavioral economics — essentially, the study of the social, psychological and cultural factors that influence people’s economic choices — informed much of daily life.
Nudges showed up everywhere, like in the promotions process (women were more likely to self-promote after a companywide email pointed out a dearth of female nominees) and in healthy-eating initiatives in the company’s cafeterias (placing a snack table 17 feet away from a coffee machine instead of 6.5 feet, it turns out, reduces coffee-break snacking by 23 percent for men and 17 percent for women).
Humu uses artificial intelligence to analyze its clients’ employee satisfaction, company culture, demographics, turnover and other factors, while its signature product, the “nudge engine,” sends personalized emails to employees suggesting small behavioral changes (those are the nudges) that address identified problems.
One key focus of the nudge engine is diversity and inclusion. Employees at inclusive organizations tend to be more engaged. Engaged employees are happier, and happier employees are more productive and a lot more likely to stay.
With Humu, if data shows that employees aren’t satisfied with an organization’s inclusivity, for example, the engine might prompt a manager to solicit the input of a quieter colleague, while nudging a lower-level employee to speak up during a meeting. The emails are tailored to their recipients, but are coordinated so that the entire organization is gently guided toward the same goal.
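Humu treats the mechanics as proprietary (as the article notes below), so any concrete picture is speculative, but the coordination idea can be sketched as a simple rule: when a team-level survey signal drops below a threshold, each member receives a role-tailored nudge aimed at the same goal. Every field, threshold and message here is hypothetical:

```python
# Hypothetical sketch of coordinated, role-tailored nudges. Humu has not
# published how its nudge engine works; all names and values are invented.
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    role: str   # "manager" or "report"
    team: str

# Team-level survey scores on a 1-5 scale (invented numbers).
survey = {"platform": {"inclusivity": 2.1, "engagement": 3.8}}

# One goal, phrased differently for each role.
NUDGES = {
    "manager": "In your next meeting, invite a quieter colleague to weigh in.",
    "report": "Pick one agenda item this week and share your view on it.",
}

def pick_nudges(team, people, threshold=3.0):
    """If a team's inclusivity score is low, nudge every member toward it."""
    if survey[team]["inclusivity"] >= threshold:
        return []  # no identified problem, so no email
    return [(p.name, NUDGES[p.role]) for p in people]

team = [Employee("Ana", "manager", "platform"),
        Employee("Ben", "report", "platform")]
for name, nudge in pick_nudges("platform", team):
    print(f"to {name}: {nudge}")
```

The coordination lives in the return value: the manager and the report get different messages, but both pull toward the same team-level metric.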
Unlike Amazon’s hiring algorithm, the nudge engine isn’t supposed to replace human decision-making. It just suggests alternatives, often so subtly that employees don’t even realize they’re changing their behavior.
Jessie Wisdom, another Humu founder and former Google staff member who has a doctorate in behavioral decision research, said she sometimes heard from people who told her, “Oh, this is obvious, you didn’t need to tell me that.”
Even when people didn’t feel the nudges were helping them, she said, the data would show “that things have gotten better. It’s interesting to see how people perceive what is actually useful, and what the data actually bears out.”
In part that’s because the nudge “doesn’t focus on changing minds,” said Iris Bohnet, a behavioral economist and professor at the Harvard Kennedy School. “It focuses on the system.” The behavior is what matters, and the outcome is the same regardless of the reason people give themselves for doing the behavior in the first place.
Of course, the very idea of shaping behavior at work is tricky, because workplace behaviors can be perceived differently based on who is doing them.
Take, for example, the suggestion that one should speak up in a meeting. Research from Victoria Brescoll at the Yale School of Management found that people rated male executives who spoke up often in meetings as more competent than their peers; the inverse was true for female executives. At the same time, research from Robert Livingston at Northwestern’s Kellogg School of Management found that for Black American executives, the penalties were reversed: Black female leaders were not penalized for assertive workplace behaviors, but Black male executives were.
An algorithm that generates one-size-fits-all fixes isn’t helpful. One that takes into account the nuanced web of relationships and factors in workplace success, on the other hand, could be very useful.
So how do you keep an intelligent machine from absorbing human biases? Humu won’t divulge any specifics — that’s “our secret sauce,” Wisdom said.
It’s also the challenge of any organization attempting to nudge itself, bit by bit, toward something that looks like equity.
