A Blog by Jonathan Low

 

Nov 11, 2020

Will AI Force Humans To Become More Human?

It's just a theory at this point. But it could happen. JL

Bill Schmarzo reports in LinkedIn:

Yes. AI is going to make further inroads into what we have defined as intelligence (which) will no longer be defined by one’s ability to reduce inventory costs or improve operational uptime or detect cancer or prevent unplanned maintenance or flag at-risk patients and students. Those are all tasks at which AI models will excel. No human competitive advantage there anymore. The people that will thrive in the future will be the ones who excel with their ability to empathize, define/refine, ideate, prototype, test and learn about the human condition.

Will Artificial Intelligence (AI) create an environment where design thinking skills are more valuable than data science skills? Will AI alter how we define human intelligence? Will AI actually force humans to become more human?

Okay, these sound like questions one might expect from an episode of Rod Serling’s TV series “Twilight Zone” (which I preferred over the meaningless college football bowl games on New Year’s Day).  Instead of AI replacing humans, will AI actually make humans more human, and will the very human characteristics such as empathy, compassion and collaboration become the future high-value skills that are cherished by leading organizations?

Let’s explore these wild-assed questions a bit further, but as always, we need to start with some definitions.

AI, AI Rational Agents, and the AI Utility Function, Oh My!

Artificial intelligence (AI) is defined as the simulation of human intelligence. AI relies upon the creation of “AI Rational Agents” that interact with the environment to learn, where learning or intelligence is guided by the definition of the rewards associated with actions. AI leverages Deep Learning, Machine Learning and/or Reinforcement Learning to guide the “AI Rational Agent” to learn from the continuous engagement with its environment to create the intelligence necessary to maximize current and future rewards (see Figure 1).
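The agent-environment loop in Figure 1 can be sketched in a few lines of Python. This is a hypothetical illustration, not any particular framework or the author's own code: a simple agent that picks the action with the highest estimated reward, occasionally explores, and updates its estimates from the environment's feedback.

```python
import random

class RationalAgent:
    """A minimal rational agent: acts to maximize estimated reward,
    then learns from the reward the environment actually returns.
    (Hypothetical sketch; action names and rewards are made up.)"""

    def __init__(self, actions, learning_rate=0.1, explore=0.1):
        self.estimates = {a: 0.0 for a in actions}  # estimated reward per action
        self.learning_rate = learning_rate
        self.explore = explore  # occasionally try other actions to keep learning

    def choose(self):
        if random.random() < self.explore:
            return random.choice(list(self.estimates))
        return max(self.estimates, key=self.estimates.get)

    def learn(self, action, reward):
        # Nudge the estimate toward the observed reward.
        self.estimates[action] += self.learning_rate * (reward - self.estimates[action])

# Toy environment: "restock" reliably pays off more than "discount".
true_rewards = {"restock": 1.0, "discount": 0.3}
agent = RationalAgent(actions=list(true_rewards))
for _ in range(500):
    action = agent.choose()
    agent.learn(action, true_rewards[action])

print(max(agent.estimates, key=agent.estimates.get))
```

After enough interaction with the environment, the agent's reward estimates converge and its greedy choice settles on the higher-value action, which is exactly the "learn from continuous engagement to maximize rewards" loop described above.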


Figure 1: AI Rational Agent

The rewards that “AI rational agents” seek to maximize are framed by the definition of “value” in the AI Utility Function: the objective criterion that measures the progress and success of an AI rational agent’s behaviors. To ensure the creation of an “AI rational agent” that exhibits the intelligence necessary to make the “right” decision, the AI Utility Function must cover a holistic definition of “value” including financial, operational, customer, society, environmental and spiritual dimensions (see Figure 2).


Figure 2: “Why Utility Determination Is Critical to Defining AI Success”

To summarize, AI is driven by AI Rational Agents that seek to drive “intelligent” actions based upon “value” as defined by the AI Utility Function. To design a holistic AI Utility Function that drives “intelligence” (whether artificial or human), we need to start by defining, or redefining, what we mean by “intelligence.”

Re-Defining Intelligence

Intelligence is defined as the ability to acquire and apply knowledge and skills. 

Our educational institutions have created numerous tests (Iowa Basic Skills, ACT, SAT, GMAT) to measure one’s “intelligence.” And while these tests may try to measure “intelligence” today, there are a multitude of stories where the education system’s need to put people into “intelligence boxes” has actually stifled one’s true intelligence.

Sir Ken Robinson talks about how today’s educational systems’ efforts to put students into a specific “intelligence box” actually kill creativity. See his famous talk “Sir Ken Robinson: How Do Schools Kill Creativity?” and the story of Gillian Lynne, famous for changing the world of dance and choreography through musicals such as “Cats” and “Phantom of the Opera,” for more about the intelligence-stifling prowess of our educational institutions.  Anyone with children knows the horror of this “intelligence box” dilemma as our children cram, tutor and prepare for ACT and SAT tests that play an outsized role in deciding their future.

This archaic definition of “intelligence” is actually having the exact opposite impact: it reduces students (our children) to rote learning machines and drives out the very creativity and innovation skills that differentiate us from machines.

We have already experienced machines taking over some of the original components of intelligence. I mean, how many of you do long division, manually calculate square roots, or multiply numbers with more than two digits in your head? Traditional measures of intelligence are already under assault by machines.

And AI is going to make further inroads into what we have traditionally defined as intelligence.  Human intelligence will no longer be defined by one’s ability to reduce inventory costs or improve operational uptime or detect cancer or prevent unplanned maintenance or flag at-risk patients and students. Those are all tasks at which AI models will excel. No human competitive advantage there anymore.

We must focus on nurturing the creativity and innovation skills that distinctly make us human and differentiate us from analytical machines. We need a new definition of intelligence that nurtures those uniquely human creativity and innovation capabilities (said by the new Chief Innovation Officer at Hitachi Vantara, wink, wink).

What Is Innovation or Creativity?

Creativity is the application of imagination plus exploration, with a strong tolerance for learning through failure.

Innovation and creativity are the human ability and the willingness to ask provocative questions (like Tom Hanks in the movie “Big"), embrace diverse ideas and perspectives, blend these different ideas and perspectives into a new perspective (frame), and explore, test, fail and learn the relevance and applicability of the new blended perspective to real-world challenges.

Yeah, that definition doesn’t exactly fit into our ACT, SAT and GMAT tests of intelligence, and that is exactly the point! As AI models take over more of the tasks and jobs traditionally associated with intelligence, humans need to focus on those skills which make humans unique – humans need to become more human. Which is why I think Design Thinking is such a critical skill in a world where AI is going to eliminate rote-skill jobs (flipping burgers, operating a machine press, detecting cancer, replacing broken parts, driving cars).

Design thinking is a human-centric approach that creates a deep understanding of and empathy for users in order to generate ideas, build prototypes, share what you’ve made, embrace learning through failure, and put your innovative solution out into the world (see Figure 3).


Figure 3: “Design Thinking Humanizes Data Science”

The Empathize stage of Design Thinking is particularly critical, as it sets the frame around which we can apply creative and innovative thinking and exploration to come up with different, relevant and meaningful real-world solutions.

The Empathize stage captures what your users are trying to accomplish (i.e., tasks, roles, responsibilities, expectations, gains and pains). Walk in your users’ shoes by shadowing them, and where possible, actually become a user yourself. That involves understanding: What are their usage patterns and engagement characteristics? What are they trying to accomplish and why? What matters to this person? What are their gains or sources of value in their endeavor? What are their impediments (pains) to success? What frustrates them most?

Summary

So, let’s get back to those original questions, with my answers (you can grade me and send me my score so that I can see what colleges I am qualified to attend):

Will Artificial Intelligence (AI) create an environment where design thinking skills are more valuable than data science skills? 

Yes. As AI and its associated deep learning, machine learning and reinforcement learning capabilities continue to expand almost exponentially, the challenge with AI won’t be building AI models that work. The challenge with AI will be in defining and codifying the difference between “Right and Wrong” in order for the AI Rational Agent to make “intelligent” decisions. I expect that one day soon, AutoML / AutoAI capabilities will expand to the point where AI models can build themselves and will only rely upon humans to define the criteria (AI Utility Function) against which to optimize performance. And Design Thinking will play an indispensable role in ensuring that we are building holistic, coherent and intelligent AI Utility Functions.

Will AI alter how we define human intelligence? 

Yes. Since AI models can be shared, re-used, and learn without human intervention (see “Crossing the AI Chasm with Infographics” blog), this will allow humans the time and perspectives to build out the skills and capabilities that make humans human. And Design Thinking will play a critical role in helping humans blend, bend, reframe, ideate and innovate in ways that AI models cannot.

Will AI actually force humans to become more human?

Yes. The people who will thrive in the future will be the ones who excel in their ability to empathize, define/refine, ideate, prototype, test and learn about the human condition – the key capabilities that a Design Thinker, with their black bag of incantations, has mastered.  And the understanding, articulation and formulation of the human “ethics equation” will become even more important as AI forces humans to actually become more human.


Figure 4: “AI Ethics Challenge: Understanding Passive versus Proactive Ethics”

Will AI actually force humans to become more human? It is interesting that the technology that potentially threatens so many human jobs might actually be the technology that forces humans to become more human.


