A Blog by Jonathan Low

 

Jan 22, 2020

Can A Digital Avatar Who's Your Boss Really Fire You?

Lawyer up. Because the answer to this question is ultimately going to depend on legal definitions and contract language, rather than the underlying technology.

Assuming that the robot or digital avatar is acting on behalf of management, which has dotted its i's, crossed its t's and otherwise devolved authority to hire (already happening) and fire to such intangible overlords, the right to do so will likely be recognized eventually. Until that time - and due to public outrage, it may never come - using this time- and emotion-saving expedient raises questions about leadership legitimacy that no smart executive who values her bonus and stock options will want surfaced in an era of hair-trigger boards and declining CEO tenure. JL


John Brandon reports in Wired:

Are you entitled to ignore what a fake human tells you? An avatar is a collection of pixels programmed to trigger a visual pattern, one that we perceive as a human. Algorithms determine the response, so a human is always behind the response. A digital avatar is incapable of understanding the emotional experience of being fired. How do you program that? To be cognizant of the shock and surprise, the awkwardness of telling your loved ones later, the weirdness of telling coworkers you may never see again. Getting fired by an avatar is not valid because there are too many nuances.
You walk into the office and greet a digital avatar that replaced the company receptionist a few years ago. After sliding your badge into a reader, you smile and nod, even though you know “Amy” is not a real person. You sit down at your cubicle and start browsing the web.
Then the trouble starts.
You receive an email requesting a meeting. “Bob” wants to chat about your job performance. You fire up a Zoom chat and another digital avatar appears on the screen.

“I have some unfortunate news for you, today … ” says the middle-aged man wearing bifocals. He looks real and talks like a human, and all of his facial expressions seem realistic. There is no uncanny valley, just a bored-looking avatar who’s about to fire you.
Recently, at CES 2020, a company called Neon (which is owned by Samsung subsidiary Star Labs) introduced digital avatars called Neons. Based on real humans but fully digitized, they don't have the awkward, cartoon-like appearance of less-detailed replicants. Details were scarce, and the demo was highly controlled. But a press release trumpeted that "Neons will be our friends, collaborators, and companions, continually learning, evolving, and forming memories from their interactions." And among the avatars on display were a digital police officer, someone who looked like an accountant, an engineer, and a few office workers. Some looked authoritative, even stern.
I imagined, as some of Neon's potential clients may, one of them being a boss. Unless you look up close, you can't tell Neons are not real people. Maybe "Bob" and other bots will laugh, cough, roll their eyes, or furrow their brows.
Some might even act like they are in charge of something.
“I’m afraid I am going to have to let you go today. Do you have any questions?” he says.
Well, yes, many. The first one is: Does it really count?
Ethicists have argued for years that a digital avatar is not a real human and is not entitled to the same rights and privileges as the rest of us. You might wonder if that works both ways. Are you entitled to ignore what a fake human tells you? Let’s look at one possible not-so-distant scenario: Can a digital avatar fire you?
In the workplace, it’s not like an avatar needs a W-2 or a Herman Miller chair. What, exactly, is “Bob”? On the Zoom screen, it’s a collection of pixels programmed to trigger a visual pattern, one that we perceive as a human. Algorithms determine the response, so a human is always behind the response. Someone has to create the code to determine whether “Bob” gets angry or chooses to listen intently. In fact, Neon announced a development platform called Spectra that controls emotions, intelligence, and behavior.
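To make that point concrete, here is a minimal, purely hypothetical sketch in Python of a rule-based response selector: a human programmer, not the avatar, decides when "Bob" furrows his brow or listens intently. The class and field names are invented for illustration and are not based on Neon's actual Spectra platform.

# Illustrative only: a toy, rule-based sketch of how an avatar's "emotional"
# response might be chosen by code a person wrote. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    sentiment: float  # -1.0 (hostile) .. 1.0 (friendly), assumed to come from upstream analysis

class AvatarResponder:
    """Picks a canned emotional display for the avatar based on simple rules.
    The point: a human decides in advance how "Bob" will appear to react."""

    def react(self, utterance: Utterance) -> str:
        if utterance.sentiment < -0.5:
            return "furrow_brow"      # programmed "anger"
        if utterance.text.strip().endswith("?"):
            return "listen_intently"  # programmed "attentiveness"
        return "neutral_nod"

if __name__ == "__main__":
    bob = AvatarResponder()
    print(bob.react(Utterance("Can we discuss other options?", 0.1)))  # listen_intently
    print(bob.react(Utterance("This is completely unfair.", -0.8)))    # furrow_brow

However lifelike the display, the "reaction" is a branch in someone else's code, which is exactly why the next point matters.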
Yet, avatars (and robots) don’t understand the deep emotional connection we have to our jobs and our coworkers, or what it means to get fired.
They probably never will. More than algorithms and programming, human emotions are incredibly personal, derived from perhaps decades of memories, feelings, deep connections, setbacks, and successes.
Before starting a writing career, I was an information design director at Best Buy. At one time, I employed about 50 people. I loved the job. Over six years, I hired dozens of people and enjoyed interviewing them. I looked forward to getting to know them, to asking unusual questions about favorite foods just to see how they would respond.
My worst days were when I had to fire someone. Once, when I had to fire a project lead on my team, I stumbled over my words. I wasn’t nervous as much as I was terrified. I knew it would be devastating to him. I still remember the look on his face when he stood up and thanked me for the opportunity to work there.
A digital avatar is incapable of understanding the deeply emotional experience of being fired. How do you program that? To be cognizant of the shock and surprise, the awkwardness of telling your loved ones later on, the weirdness of telling coworkers you may never see again.
In my view, getting fired by an avatar is not valid. It doesn’t count, because there are too many nuances. Maybe the employee wants to discuss other options or a lesser role; maybe they want to explain a rather complex workplace issue that led to their poor performance.
More importantly, a digital avatar will always be a collection of pixels and some code. Avatars that greet you in the morning, inform you about a road closure, tell you a few jokes, or even notify you about a change in your cable service are all more valid than a bot that delivers bad news. News that’s personal and will have a major impact on you, your family, and your future.
My initial reaction to being fired by a digital avatar would be to find a real person. I would want to make it more official before I pack up a single stapler or office plant.
I’m OK with an avatar that teaches me yoga. Bring it on. I want to learn, and a real instructor would probably cost too much. Someday, an avatar might try to teach one of my kids how to drive in a simulator or a videogame, and that’s perfectly acceptable. If a digital avatar with way more patience than me handles the educational part before we hit the pavement, that’s fine.
But a “police officer” that hands me a ticket when I was obviously going the speed limit? A “doctor” that talks to me about cancer risks? Don’t even get me started on a bot that tries to teach a sex-ed class to teenagers in high school. Any avatar that delivers important news, requires actual credentials, or needs to understand the nuances of emotion won’t cut it.
That said, I know where this is heading. In most of the demos for the Neons, it became obvious to me that these are not meant to be mere assistants answering queries. One of the avatars that looked like an accountant started ruffling pages as though it were making a big business decision; another smirked and smiled like it was trying to get to know me.
That’s not a problem if I want to send one to a boring meeting. Bots work well for information exchange, for handling mundane tasks. They are pretty good fill-ins. I’d love to “send” one to a session discussing my taxes or to figure out which paint to use for my house.
But Neons and their ilk shouldn’t be a part of any discussion where actual emotions are involved. I might need to fire one of them.
