A Blog by Jonathan Low

 

Apr 1, 2024

How Intrusive AI Assessment Is Changing Team Dynamics In Organizations

AI is in your meeting, and it has firm opinions, assessments - and biases - about who is engaged, who is contributing, and who is not.

Increasingly, AI tools offer summaries of what was discussed, which ideas or comments were most helpful or strategic, and who was the most productive. While potentially beneficial, the downside is that knowing AI is taking notes and assigning scores may silence everyone who is not the boss, or anyone concerned that the criticism they offer could damage their career. It may also diminish contributions as a 'let the technology do it' mentality takes hold. To prevent unintended deleterious side effects of AI's use in such settings, smart leaders are mandating processes by which AI contributions are reviewed by humans for accuracy, fairness and impact. JL

Megan Reitz and John Higgins report in Harvard Business Review:

Features add AI-powered feedback to familiar tools, helping people who arrive late to a meeting catch up or summarizing key discussion points. Other AI tools record, transcribe, and analyze interactions, incorporating them into summaries while measuring meeting participants’ engagement, sentiment, and airtime. These tools offer productivity and feedback benefits, but if people outsource their listening to technology, understanding and commitment to act might be lacking. Criticism could also be (diminished) as people silence themselves because they fear the public record, especially if AI attaches subjective labels to summaries. AI could ingrain patterns where the most secure, powerful voices dominate.

Generative AI has the fastest take-up of any technology to date. Now, as AI applications are becoming immersed in workplace culture and power, we’re beginning to see how GenAI tools will impact our conversational habits, which direct what we say and who we hear.

Already, we’re seeing features that add AI-powered feedback to familiar tools. On Zoom there’s an “AI companion,” helping you catch up when you arrive late to a meeting, and on Teams, “Copilot” will help you summarize key discussion points. Read AI (a smart AI tool that records, transcribes, and analyzes interactions and incorporates them into summaries) goes further, measuring meeting participants’ engagement, sentiment, and airtime. These applications can be integrated into work so seamlessly that users begin to rely on them almost without realizing it.

These tools offer productivity and feedback benefits, but there are also downsides to them joining our conversations. If people outsource their listening wholesale to the technology, skipping the work of thinking through for themselves the key messages, then meetings may be efficient, but understanding and commitment to act might well be lacking.

From what we see, conversations about how we should have AI-enabled conversations are absent in many organizations, alongside discussions about mitigating risks and maximizing benefits. Incurious acceptance of the technology and uncritical implementation mean that missteps — when topics or people are silenced — are not “intelligent failures,” to use Harvard Business School professor Amy Edmondson’s phrase. Conversational failures are inevitable when we venture into new territory; however, they are only intelligent if we do our homework before we experiment, engage in thoughtful consideration as we make choices, and anticipate outcomes.

Building on our decade of research into speaking truth to power, this article champions the need to pay attention to how we talk when using AI and how we talk about AI. It highlights five highly intertwined areas of opportunity and potential problems arising from how technology is used in the specific context of an organizational culture with its habits, ingrained through perceptions of power differences, of who has the right to speak and be heard.

Who Speaks and Gets Heard

We all have suspicions about who gets airtime, gets interrupted, and interrupts in meetings. AI allows us to move from suspicion to fact by providing hard data on share of voice.

With curiosity and positive intent, those who take airtime might be motivated to dial it down, creating space for others. In a conversation Megan had with David Shim, CEO of Read AI, he told the story of a venture capital executive shown data through the app that revealed he had spoken for 80% of a pitch — too much, given the aim of the meeting was to hear about the potential investment in detail. His raised awareness meant he spoke less and listened more in subsequent conversations.
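To make that data concrete, share of voice is at bottom simple arithmetic: each participant's talk time divided by the total. The Python sketch below is a hypothetical illustration of that calculation, not Read AI's actual method; the (speaker, start, end) segment format is an assumption for this sketch.

from collections import defaultdict

# Hypothetical sketch, not Read AI's actual method: compute each speaker's
# fraction of total talk time from a speaker-labeled transcript.
def airtime_share(segments):
    # segments: iterable of (speaker, start_sec, end_sec) tuples
    talk_time = defaultdict(float)
    for speaker, start, end in segments:
        talk_time[speaker] += max(0.0, end - start)
    total = sum(talk_time.values()) or 1.0  # guard against an empty transcript
    return {speaker: t / total for speaker, t in talk_time.items()}

# Example echoing the anecdote above: a pitch where the investor dominates.
segments = [
    ("VC", 0, 400), ("Founder", 400, 500),
    ("VC", 500, 900), ("Founder", 900, 1000),
]
for speaker, share in airtime_share(segments).items():
    print(f"{speaker}: {share:.0%}")  # VC: 80%, Founder: 20%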

But the reasons for speaking a lot or a little are complicated. If you simply focus on the amount of contribution as a conversational goal, people who don’t want to speak, or who contribute by being active listeners, might get sucked into speaking when they have nothing they want to say.

The quiet ones might also be those who least trust being recorded and having their words made permanent. The AI tool could raise the stakes for speaking up, ingraining patterns where the most secure and powerful voices dominate.

What Gets Said and Heard

In virtual meetings, many people are busy taking notes, creating their own record of what has been said, and so miss visual signals while relying on their listening biases. Only afterwards do they realize there are multiple perspectives on what was said and agreed upon that don’t align with what is in their notebooks. Now an AI bot can do the note-taking, produce the summary, and list action points and responsibilities, and we can turn our full attention to the people on screen — with just one shared version of what was said available to all.

However, subjects such as failures, mental health, or criticism of strategy could be pushed underground as people silence themselves because they fear that their perspective will be put onto the public, permanent record. This may especially be the case if AI attaches subjective labels, such as “low sentiment,” which could be read as shorthand for critical or unenthusiastic, when someone raises a particular subject. While actionable positivity is important, so is tentative uncertainty, and our work frequently highlights situations where more doubt and humility would have improved the quality of conversation.

To address this, Read AI announces itself in the meeting and allows participants to opt out of recording and delete data before reports are generated. In a world without social hierarchy this would be fine, but in workplaces filled with it, the question is whether employees would feel empowered to make that choice when their boss is running the meeting.

When We Speak and Listen

As humans, our energy varies through the day and from day to day. AI can factor in these considerations, scheduling meetings for when participants are likely to be most engaged.

Read AI, for instance, can track your engagement through the week and recommend when organizers should schedule meetings to get the best of you. They might be told that your engagement is highest before 10 am — then goes downhill.

Tracking engagement may help break the pattern of “domino Zooms,” when one meeting is scheduled immediately after another. AI can mandate the breaks between meetings that we know we need but don’t take, because we collectively overestimate our capacity for attention.

AI may also make it possible to identify which meetings are unnecessary (through measuring engagement or analyzing action steps), helping create spaces in work where people can pause and think, rather than remain pathologically busy.

However, this relies on “engagement” metrics being credible. An issue with relying on GenAI is the misapplication of assumptions drawn from a limited range of datasets. Tracking engagement and sentiment remains difficult and can produce incorrect conclusions when cultural differences, neurodiversity, and introversion receive too little attention.

Looking away, pausing, frowning, or using humor (especially sarcasm) might lead AI to conclude you are disengaged or have low sentiment when in fact you may be thinking, imagining, or attempting to lighten the mood.

Where We Speak and Listen

In our global research on speaking truth to power, most of the 20,000 employees we’ve surveyed agree they are most guarded in formal work meetings (i.e., the settings where AI will be used most), compared with one-to-ones with bosses and informal conversations with colleagues. They may become even less willing to speak up in formal settings if their words are recorded and shared in unknown ways.

As reported in our research, leaders are often the ones who feel most comfortable in formal meetings and are likely to be in an “optimism bubble”: overestimating how approachable they are and how transparent others are, and oblivious to the fact that more junior people are only telling them what they think they can bear to hear.

AI-enabled meetings may exacerbate this, driving conversations offline, undermining collective sense-making, and adding to the conversational workload.

How We Speak and Listen

Many organizations espouse a feedback culture, without acknowledging power dynamics. Giving and receiving feedback is often awkward, if not career-limiting — few will call out their boss for scrolling emails while others are talking or admit to checking out from a meeting because it’s boring.

An AI bot isn’t afraid of speaking up and doesn’t get embarrassed. Many of us will soon be receiving feedback (even if we don’t elicit it) on our virtual presence and how to speak and listen: whether we keep interrupting (and whom); use body language that silences others; talk too quickly or use exclusionary language.

This could be immensely valuable. However, if employees feel they are under surveillance and being labeled on how they communicate, “performing” behaviors may take over as meeting participants game the system (i.e., use the words and body language they know get rated highly by the AI). According to David Shim, “people can’t keep that up” for too long; however, our research suggests employees readily learn the cultural rules of meetings, sustaining persistent disingenuity in communications.

A benefit AI could bring is in how we pay attention. When someone is fully present with us, listening nonjudgmentally, we may be more willing to speak up, reflect, and change, and performance improves. Virtual meetings are often used as multi-tasking opportunities, undermining the productive potential of bringing people together. Read AI measures eye movements and facial expressions to assess whether we are paying attention and engaged. This could mean that doing other work during meetings becomes trickier — or, if we do it, visible. AI could help us kick our multi-tasking habit by helping us eliminate meetings that aren’t productive and encouraging us to listen better in the ones that are.

The question is whether participants learn how to speak and listen skillfully or how to seem to speak and listen skillfully.

What determines whether AI will be for better or worse in our conversations?

Many positive outcomes described above assume that AI is authoritative (i.e., trustworthy and accurate), will be used generatively by those in power, and is implemented in a psychologically safe workplace. There are clear reasons to doubt these criteria will usually be met.

The balance of pros and cons of AI on the way we speak and listen comes down to three things:

  • The impact of power and status: How people’s sense of relative power will affect their trust in AI tools, their ability to opt out of AI tracking, and their influence over how data is used. The key is acknowledging what power culture exists. The main cultural divide, drawing on Joyce Fletcher’s work, is whether AI is used to control others so that they perform to expectation, or to support others in making their own choices.
  • What we count as knowledge: Whether we hand over to AI the responsibility to listen to and understand one another, or use AI to augment our ability to encounter one another generatively. If we do the former, over-relying on AI to source our knowledge, we could lose the muscles we need for learning and coaching, listening and speaking up, and so become ruled by the intentions of the technology suppliers and the datasets AI is learning from and then creating (even hallucinating).
  • Pausing to learn: Whether we create spaces to learn as we implement new technology and make wise choices about adoption. Within our current business philosophy, AI’s “productivity gains” could lead to more busyness, more tasks, and even less respite; within a philosophy that privileges attention and relationships, the time saved makes possible deeper bonds of trust and reflective thinking. There are implications for what gets said and who gets heard in both scenarios.

Wise application of AI invites a philosophical lens that compels us to confront our never-ending quest to do more tasks, faster. It requires restraint and an ability to lift our instrumental gaze, which is hypnotized by rationality and targets, and engage our relational gaze, which prompts us to see how deeply connected we are to one another and the world around us, and how our choices now frame the world we and our children will occupy. In this way, perhaps AI could help us to have the conversations that matter.
