A Blog by Jonathan Low

 

Jun 16, 2019

How AI-Driven Bots Collaborate To Help Each Other Win Games

Understanding that collaborating gives them a competitive advantage could be a key breakthrough in the drive to create general AI. JL

Daniela Hernandez reports in the Wall Street Journal:

AI architects can build machine intelligence that is more like that of humans. Their goal is developing artificial intelligence that can solve problems in diverse settings without additional training, much the same way humans leverage prior experience to navigate new situations or to improvise. Such technology, "developing fundamental algorithms which could lead to" more humanlike intelligence, would open up new applications like self-driving cars or collaborative robots that can work more seamlessly alongside humans or other robots in factories, warehouses, the home or on city streets.
Researchers at Google’s artificial-intelligence labs said they have developed virtual videogame players that learned to master a game by working with other digital gamers.
The findings, published Thursday in the journal Science, point to how AI architects can build machine intelligence that is more like that of humans, some researchers said.
In most cases, the virtual players played a capture-the-flag videogame better than professional, human game testers, according to the researchers, who work at the DeepMind artificial-intelligence labs at Google parent company Alphabet Inc.
“This is another one of those game domains where you think the humans have a special capability,” said Jonathan How, a Massachusetts Institute of Technology professor who works on multiagent systems and wasn’t involved in the research. “To have a technology come out and say that’s not true...it created quite a buzz.”
The work, which was first described in a company blog post last year, is a step toward “developing the fundamental algorithms which could in the future lead us to” more humanlike intelligence, said Max Jaderberg, a DeepMind researcher who was one of the authors of the paper.
Across Silicon Valley, researchers at firms like OpenAI Inc., Facebook Inc. and Microsoft Corp. have been trying to develop AI machines that can do more than perform a single task, like labeling images or translating language.
Their goal is developing artificial intelligence that can solve a variety of problems in diverse settings without additional training, much the same way humans leverage prior experience to navigate new situations or to improvise.
Such technology would open up new applications like self-driving cars or collaborative robots that can work more seamlessly alongside humans or other robots in factories, warehouses, the home or on city streets.
Most robots today are kept apart from humans for safety reasons, or follow narrowly prescribed rules that limit what they can do. They also often depend on humans to come to their rescue if they stall or fall.
Yet realizing what researchers refer to as general AI has proven elusive. The struggles have prompted some researchers to see gaming as a path forward.
In collaborative games, players don’t know what actions their teammates will take. Plus, one player’s actions may influence those of others, creating a more varied assortment of scenarios each player needs to master to be successful.
Yet some artificial-intelligence researchers question the role of games in building next-generation AI systems.
AIs that have bested humans at various videogames have been duped when small changes were made to the settings with which the bots were familiar, showing they are not as smart as they appear, according to the critics.
“I am less and less convinced that computer games are still in the critical path toward general AI. I don’t think we’ve exhausted them yet, but we’re pretty close,” said Mark Riedl, an AI researcher at the Georgia Institute of Technology.
As complex as games are, they are still self-contained worlds with finite rules, and to build better AIs, researchers will “need environments that are much more complicated than what computer games can offer,” he said.
DeepMind’s software learned to play a multiplayer, first-person game called “Quake III Arena.” The game has a variety of modes, including a capture-the-flag challenge during which multiplayer teams work together to obtain a flag in the opposing team’s territory.
For the study, the AI-powered players only learned the capture-the-flag component. The task requires developing a strategy and planning ahead, two signs of intelligence, according to AI researchers.
The DeepMind engineers trained a total of 30 virtual gamers. During training, the digital players learned the rules of the game themselves, based on their own observations and experiences, including whether certain actions—like having a flag or tagging an opponent—were correlated with winning.
The indoor and outdoor regions they had to explore were chosen at random to ensure the software could learn to play in a variety of settings.
Training lasted about three weeks, during which each gamer played 450,000 games, or the equivalent of four years of real-time, human play, the researchers said.
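For a sense of how such a setup fits together, here is a minimal, hypothetical Python sketch of population-based self-play training as the article describes it: a population of 30 agents plays randomized matches against one another, and winners nudge their parameters to explore nearby strategies. The ToyAgent class, its single "aggression" trait, and the match logic are invented stand-ins, not DeepMind's actual deep reinforcement-learning architecture.

```python
import random

# A minimal, hypothetical sketch of the population-based self-play setup
# the article describes: 30 agents play randomized matches against one
# another and learn from win/loss outcomes alone. The "aggression" trait
# and the match logic are invented stand-ins, not DeepMind's real networks.

POPULATION_SIZE = 30      # the article cites 30 virtual gamers
TRAINING_MATCHES = 5000   # stand-in for the reported 450,000 games each

class ToyAgent:
    """An agent whose whole 'policy' is one tunable trait."""
    def __init__(self, name):
        self.name = name
        self.aggression = random.random()  # hypothetical strategy parameter
        self.wins = 0

    def mutate(self):
        # Winners perturb their parameters to explore nearby strategies.
        nudged = self.aggression + random.uniform(-0.05, 0.05)
        self.aggression = min(1.0, max(0.0, nudged))

def play_match(team_a, team_b, seed):
    """Toy match outcome: team traits plus noise from a randomized arena."""
    rng = random.Random(seed)  # seed stands in for a random indoor/outdoor map
    score_a = sum(a.aggression for a in team_a) + rng.gauss(0, 0.5)
    score_b = sum(b.aggression for b in team_b) + rng.gauss(0, 0.5)
    return team_a if score_a >= score_b else team_b

population = [ToyAgent(f"agent-{i}") for i in range(POPULATION_SIZE)]

for match_id in range(TRAINING_MATCHES):
    random.shuffle(population)
    winners = play_match(population[:2], population[2:4], seed=match_id)  # 2v2
    for agent in winners:
        agent.wins += 1
        agent.mutate()

best = max(population, key=lambda a: a.wins)
print(f"{best.name}: {best.wins} wins, aggression={best.aggression:.2f}")
```

In the real system, deep neural networks trained by reinforcement learning take the place of the single trait, but the population of agents and the randomized arenas are the elements the article highlights.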
Then to test the virtual players, researchers entered them randomly into tournaments. In the matches, the gamers competed with and against each other, and against people.
The researchers evaluated the performance of teams consisting of two, three or four virtual players.
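As a rough illustration of that evaluation protocol, the hypothetical sketch below draws random teams of two, three or four players from pools of bots and human stand-ins and tallies how often the bot teams win. The skill values are invented placeholders, not figures from the study.

```python
import random

# Hypothetical evaluation sketch: random tournaments with team sizes of
# 2-4, bot teams against human stand-ins, tallying bot win rates.
# All skill values are invented for illustration.

rng = random.Random(42)
bots = [1.4] * 30     # 30 trained agents, assumed average skill
humans = [1.0] * 30   # human testers, assumed baseline skill

def match(team_a, team_b):
    """Higher summed skill plus per-match noise wins."""
    return sum(team_a) + rng.gauss(0, 1.0) > sum(team_b) + rng.gauss(0, 1.0)

bot_wins = total = 0
for _ in range(20_000):
    size = rng.choice([2, 3, 4])  # team sizes evaluated in the study
    if match(rng.sample(bots, size), rng.sample(humans, size)):
        bot_wins += 1
    total += 1

print(f"bot teams won {bot_wins / total:.0%} of matches")
```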
Roughly three quarters of the time, the machines outperformed humans, even when researchers tweaked the bots so they took about the same amount of time as humans to react to what was happening in the game.
Faster reaction times could have given AIs an unfair advantage and prevented the researchers from ascertaining whether their software was simply a sharper shooter or doing something more impressive, such as devising an intelligent strategy.
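One simple way to impose such a handicap, sketched hypothetically below, is to queue an agent's decisions so they take effect several game ticks later, approximating human reaction latency. The delay figure and the toy agents are assumptions for illustration, not details from the paper.

```python
import collections

# Hypothetical reaction-time handicap: the agent decides immediately, but
# its actions take effect several ticks later, mimicking human latency.
# DELAY_TICKS is an assumed figure, not one reported in the study.

DELAY_TICKS = 4  # e.g. roughly 267 ms at 15 game ticks per second (assumed)

class DelayedAgent:
    """Wraps any agent so its chosen actions are emitted with a fixed lag."""
    def __init__(self, agent, delay=DELAY_TICKS, noop="WAIT"):
        self.agent = agent
        # Pre-fill with no-ops so even the first decision arrives late.
        self.pending = collections.deque([noop] * delay)

    def act(self, observation):
        self.pending.append(self.agent.act(observation))  # decide now...
        return self.pending.popleft()                     # ...act later

class ChaseAgent:
    """Trivial stand-in policy used only to demonstrate the wrapper."""
    def act(self, observation):
        return f"MOVE_TOWARD({observation})"

bot = DelayedAgent(ChaseAgent())
for tick, seen in enumerate(["flag", "opponent", "home_base"]):
    print(tick, bot.act(seen))  # real moves only appear after the delay
```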
DeepMind researchers said they are currently working on scaling the technology to accommodate bigger teams, and also larger, more complicated environments.
In January, DeepMind said it had developed an AI capable of beating some of the best professional players at “StarCraft II,” considered among the most complicated strategy games.
In April, OpenAI pitted bots specializing in a multiplayer game called “Dota 2” against humans. The bots bested a champion esports team at the strategy game, which AI experts say is much more complex than “Quake III Arena.”
The “Dota 2”-playing machines improved at speeds that far exceed those of humans, said Jonathan Raiman, an OpenAI researcher developing game-playing AIs who wasn’t involved in the DeepMind study. Within a week, the program’s abilities went “far beyond anything we’d ever seen,” he added.
That characteristic and computing-power advances give researchers hope that they might be on the cusp of cracking general AI, said Mr. Raiman.
