A Blog by Jonathan Low

 

May 6, 2019

Why Studying AI the Way Social Scientists Study Humans Best Explains Machine-Human Interaction

As machines and algorithms become more autonomous - and as humans become more dependent on intelligent devices - the two are exerting a co-evolutionary influence on each other.

Leaving the analysis of machines to computer scientists and mathematicians in a siloed institutional structure limits our ability to understand the proverbial 'black box.' Incorporating disciplines that explain human behavior, by contrast, shows how those deploying these systems can optimize their performance while maintaining some degree of influence over the developmental process. JL


Karen Hao reports in MIT Technology Review:

A machine behaviorist is to a computer scientist what a social scientist is to a neuroscientist. The former looks to understand how an agent, whether artificial or biological, behaves in its habitat, when coexisting in groups, and when interacting with other intelligent agents. The latter seeks to dissect the decision-making mechanics behind those behaviors. “We’re seeing the rise of machines that are actors making decisions and taking actions autonomously.” Thus they need to be studied as a new class of actors with their own behavioral patterns and ecology. “We are one giant human-machine system. We need to acknowledge that and start studying it that way.”
Much ink has been spilled on the black-box nature of AI systems—and how it makes us uncomfortable that we often can’t understand why they reach the decisions they do. As algorithms have come to mediate everything from our social and cultural to economic and political interactions, computer scientists have attempted to respond to rising demands for their explainability by developing technical methods to understand their behaviors.
But a group of researchers from academia and industry is now arguing that we don’t need to penetrate these black boxes in order to understand, and thus control, their effect on our lives. After all, these are not the first inscrutable black boxes we’ve come across.
“We've developed scientific methods to study black boxes for hundreds of years now, but these methods have primarily been applied to [living beings] up to this point,” says Nick Obradovich, an MIT Media Lab researcher and co-author of a new paper published last week in Nature. “We can leverage many of the same tools to study the new black box AI systems.”
The paper’s authors propose to create a new academic discipline called “machine behavior.” It would study AI systems the same way we’ve always studied animals and humans: through empirical observation and experimentation.
In this way a machine behaviorist is to a computer scientist what a social scientist is to a neuroscientist. The former looks to understand how an agent—whether artificial or biological—behaves in its habitat, when coexisting in groups, and when interacting with other intelligent agents. The latter seeks to dissect the decision-making mechanics behind those behaviors.
“We’re seeing the rise of machines with agency, machines that are actors making decisions and taking actions autonomously,” Iyad Rahwan, another Media Lab researcher and lead author on the paper, said in a blog post accompanying the publication. Thus they need to be studied “as a new class of actors with their own behavioral patterns and ecology.”
This isn’t to suggest that AI systems have developed some kind of free will. (They certainly have not; they’re only glorified math models.) But it is meant to move away from viewing AI systems as passive tools that can be assessed purely through their technical architecture, performance, and capabilities. They should instead be considered active actors that change and influence their environments and the people and machines around them.
So, what would this even look like? A machine behaviorist might interrogate, for example, the impact of voice assistants on a child’s personality development. Or they might examine how online dating algorithms have changed how people meet and fall in love. Ultimately, they would study the emergent properties that arise from many humans and machines coexisting and collaborating.
“We are all one giant human-machine system,” says Obradovich. “We need to acknowledge that and start studying it that way.”
It’s important to note that most of these ideas aren’t new. Roboticists, for example, have long studied human-computer interaction. And the field of science, technology, and society has what’s known as “actor-network theory,” a framework for describing everything in the social and natural worlds—both humans and algorithms—as actors that relate to one another. But for the most part, each of these efforts has been siloed in a separate discipline. Bringing them together under one umbrella helps align their goals, formalize a common language, and foster interdisciplinary collaborations. “It will help us find each other,” Obradovich says.
Despite working in a distinct discipline, machine behaviorists should still work closely with AI researchers. As the behaviorists discover new ways AI systems behave and affect people, the researchers can bring those findings to bear on the systems’ designs. The more each discipline can take advantage of the other’s expertise, the more they will be able to ensure that artificial agents benefit humans rather than harm them.
“We need the expertise of scientists from across all behavioral and computational disciplines,” Obradovich says. “Figuring out how to live with machines is a problem too vast for any one discipline to solve alone.”
