A Blog by Jonathan Low

 

Apr 19, 2021

Robots Are More Like Animals Than Humans. Get Over It.

Man domesticated animals to help complete difficult or boring tasks. 

And that analogy may be far more apt - and productive - than wasting years attempting to make machines human. JL 

Kate Darling reports in WiredUK:

Technologists are still trying to figure out how humans learn and attempt to recreate it in machines. So we tend to compare artificial intelligence to human intelligence and robots to people. (But) comparing robots to animals helps us see that robots don’t necessarily replace jobs, but instead are helping us with specific tasks. Robots differ from animals in their abilities, scale and impact. But these differences only further illustrate the point that when we broaden our thinking to consider what skills might complement our abilities instead of replacing them, we can better envision what’s possible with this new breed.

In the early 2000s, a Russian man named Boris Zhurid struck a deal to sell the Iranians a large collection of weaponry. He chartered a transport aircraft to make the delivery from Sevastopol, the largest city on the Crimean Peninsula, in the Black Sea, to the Persian Gulf. An old sonar manufacturer brochure describes what Zhurid was peddling as “self-propelled marine vehicle[s], or platform[s]; with a built-in sonar sensor system suitable for detecting and classifying targets; and carrying an on-board computer… capable of being programmed for complex performance.” The cargo of Zhurid’s chartered plane? Twenty-seven animals, including dolphins, walruses, sea lions, seals and a white beluga whale.

Dolphins attacking enemy divers with strap-on harpoons sounds like something from a James Bond movie, but both the US and Soviet navies started secret marine mammal training programmes in the 1960s. Despite an unsuccessful British attempt during World War I (their trained sea lions turned out to be better at following fish than at tracking German submarines), militaries worldwide began experimenting with aquatic animals. The US Navy tested a wide range of sea creatures, from turtles to birds to sharks, eventually settling on bottlenose dolphins and California sea lions. The investment paid off: the animals had physical capabilities, senses and intelligence that were extremely handy for all sorts of operations. They also have a colourful history, both as pseudo-robots and in relation to real robots.

Dolphins, intelligent enough to understand things like human pointing and gaze, are easily trained. They also use a form of echolocation so precise that they can tell the difference between an air gun pellet and a kernel of corn from about 15 metres away. Sea lions, for their part, have exceptional hearing capabilities and can see objects and people in dark, murky waters. The dolphins and sea lions soon proved useful in detecting not just mines and lost equipment, but also enemy swimmers.

Then things started to get eerily close to the 1973 science fiction film The Day of the Dolphin, which is about dolphins that are trained to assassinate the president by attaching mines to the hull of the presidential yacht. Just like in the film, the Soviets also trained their dolphins to attach mines onto enemy submarines. A dolphin trainer even revealed to the BBC decades later that the dolphins could attack foreign divers with harpoons strapped to their heads.

By the early 1990s, the programmes started to languish. After the Soviet Union crumbled, the dolphin programme became part of the Ukrainian navy, but there wasn’t much interest in maintaining it. That’s when Boris Zhurid, the programme manager for the main Sevastopol base, decided to sell the lot to Iran. Zhurid had been the animals’ trainer for years. But money for the programme had run out, and the animals were starving. Concerned for their welfare, he negotiated a home for them in a brand-new oceanarium in Iran, and accompanied them to their new country. What Iran intended to use them for remained unclear. Some speculated military use, but at that point, many of the former Soviet dolphins had been repurposed as tourist attractions. Zhurid didn’t disclose what purpose he sold the animals for, saying: “I am prepared to go to Allah, or even to the devil, as long as my animals will be OK there.”

It’s easy to assume that the need for military sea mammals decreased as our sonar technology got better. While marine mammals can vastly outperform people at finding things on the ocean floor, how do they measure up against machines? In 2012, the US Navy announced that they were winding down part of their marine mammal programme, with the goal of phasing in robots by 2017. But despite over $90 million (£65 million) in investments, they still haven’t been able to replace the animals. According to the programme website, “Dolphins naturally possess the most sophisticated sonar known to science […] Someday it may be possible to complete these missions with underwater drones, but for now, technology is no match for the animals.”

Nor were the Russian marine mammal programmes over. In 2012, the Ukrainian navy reopened the programme, until it was seized by Russia in the 2014 annexation of Crimea, leading to some unsubstantiated rumours. A Ukrainian spokesperson claimed that the dolphins went on a “patriotic” hunger strike and perished after being separated from their trainers, while the Russians claimed there were no dolphins left in the programme in the first place. Then, the Russian government bought five new dolphins in 2016. A 2018 RT article touted the Russian navy’s use of sea lions in combat. In 2019, when Norwegian fishers discovered a beluga whale with a harness that said “Equipment of St. Petersburg,” rumours abounded that it was a spy whale from Russia. (One retired Russian colonel laughed off the rumours by pointing out “if we were using this animal for spying do you really think we’d attach a mobile phone number with the message ‘please call this number’?”)

As in Russia, the American military sea animal training programme is still going strong. The roughly 70 dolphins and 30 sea lions in the programme have located mines in the Persian Gulf during the Gulf Wars and the US invasion of Iraq, and are still trained to recover objects, guard against unauthorised intruders, and even help retrieve materials from aeroplane crashes. In many cases, they do the same jobs as, and even work alongside, our modern autonomous underwater vehicles (AUVs).

The services that animals have provided, and continue to provide, to humans are too many to count. You can rent goats to mow your lawn, or clean your aquaculture ponds with Asian carp (an invasive species, so depending on where you are, don’t try this at home). Dogs and horses have been helping us herd sheep for ages. An aspiring entrepreneur in 1906 trained raccoons to be chimney sweeps in Washington, DC. In the entry hall of my office building in the MIT Media Lab, some of our researchers collaborated with 6,500 silkworms to create a silk pavilion, continuing the long tradition of human use of silkworms that began in China around 3000 BCE. Dogs were used for medicinal purposes in ancient Egypt, and we still draw blood from patients using leeches.

Animals haven’t replaced people but instead have become powerful tools that enable us to work differently, whether by pulling our plows, going to space, or ensuring that our beer is delicious. In fact, animals have made such a difference in what we’re able to do that their integration into our processes has catalyzed fundamental changes to our cultures, economies and societies as a whole.

The domestication of livestock like sheep and goats meant a different type of civilization, as humans went from hunter-gatherer to farmer. The animals themselves evolved differently than their wild counterparts because they lived in protected spaces and were fed and cared for by people, but they had at least as much of an impact on us: because of what was required to manage and feed the animals, herds of sheep meant that people had to settle in one place. Investing in domesticated animals also meant establishing ownership. These animals weren’t freely available for anyone to hunt, and they became people’s property. This introduced new concepts of power. The wealth of those who had cattle versus those who didn’t created new disparities and favoured some cultures over others.

Introducing animal and farmland ownership was a profound change, which eventually led to societal concepts like inheritance and marriage, and to changes in how land itself was structured and cultivated. Land was now divided into parcels, and much of it, along with the structures on it, started to be designed and shaped to support our agricultural pursuits.

We’re poised to see more transformations as we integrate our new breed: robots. The animal world contains a wide variety of talents, many of which exceed human abilities. Yet when it comes to robots and AI, we’re hung up on a very specific type of intelligence and skill: our own. From the moment I was visibly pregnant, I’ve heard one phrase over and over again: “You must find it so interesting to watch your child’s brain develop, given your love for robots.” This phrase is a great conversation starter, and rather than tire of it, I find it very interesting that people make this well-intended inference repeatedly.

Of course, it’s fascinating to observe how babies learn about the world. But when we compare children to robots, we sometimes fall into incorrect assumptions about the likeness between artificial intelligence and human intelligence. While there may be similarities here and there, my child doesn’t sense, act, or learn the way a machine does.

Given our tendency to compare robots to ourselves, it’s no surprise that a Google image search for “artificial intelligence” in 2020 mostly returns pictures of human brains and human-shaped robots. We use our own brains as models when thinking about AI in part because historically, the goal of the very first AI developers was exactly that: to recreate human intelligence.

Today, some technologists are still chasing that original goal, trying to figure out how humans learn and to recreate it in machines, and we have decades of sci-fi and pop culture rooted in the idea that machines will think like us or try to outsmart us. So we tend to compare artificial intelligence to human intelligence and robots to people, not just in stock photo images and science fiction scenarios of robot revolutions, but more crucially in our conversations around robots and jobs.

Automation has, and will continue to have, huge impacts on labour markets – those in factories and farming are already feeling the aftershocks. There’s no question that we will continue to see industry disruptions as robotic technology develops, but in our mainstream narratives, we’re leaning too hard on the idea that robots are a one-to-one replacement for humans. Despite the AI pioneers’ original goal of recreating human intelligence, our current robots are fundamentally different. They’re not less-developed versions of us that will eventually catch up as we increase their computing power; like animals, they have a different type of intelligence entirely.

In 1993, science fiction author Vernor Vinge published an essay titled “The Coming Technological Singularity”, in which he stated that “the creation of greater than human intelligence will occur during the next 30 years.” And thus the concept of the Singularity – that crucial moment when artificial intelligence surpasses human intelligence (sometimes called superintelligence) – was born.

Since Vinge’s original prediction, the Singularity has transformed into an all-consuming conversation topic within futurist circles. But at the same time that Elon Musk warns that robots are getting too smart, we also hear (and smell) accounts of robot vacuum cleaners encountering some dog poo and cheerfully spreading it around the house while they “clean”. How are robots our greatest intellectual threats while simultaneously being derailed by the slightest obstacle?

A common answer is the exponential growth of computing power. In 1965, Gordon Moore predicted that the number of transistors on an integrated circuit chip would double every year, and for a decade he was right. His prediction became known as Moore’s law. Its modified version holds that the number of transistors on a chip will double every two years, exponentially increasing the efficiency and speed with which computers can execute tasks. Experts agree that there are physical limits to this law, but, so far, we haven’t hit them.
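To see why that word “exponentially” matters, here is a back-of-the-envelope sketch (my own toy calculation, not from the article) of what a two-year doubling period implies:

```python
def projected_transistors(start_count, start_year, year, doubling_years=2):
    """Project a transistor count forward under idealised
    Moore's-law-style doubling every `doubling_years` years."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# Intel's 4004 (1971) had roughly 2,300 transistors; fifty years
# of two-year doublings is 25 doublings, a factor of 2**25.
print(projected_transistors(2300, 1971, 2021))
```

Real chips track the law only loosely, but the arithmetic shows the scale involved: 25 doublings multiplies the starting count by about 33 million.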

I don’t question the principle behind Moore’s law, but I do think we should question whether intelligence is simply a matter of computing power, especially when the intelligence we’re currently building works so differently from our own. Recent major breakthroughs in artificial intelligence are due to progress in the brute computing force required to process huge amounts of data rather than to innovations in complex algorithms. For example: show a computer 100,000 pictures of a hotdog, and it can start to recognise and caption hotdogs in new pictures it’s never seen before. This works for computer vision, speech recognition and other pattern recognition tasks, and the effect is that machines are able to do things that they’ve never been able to do before, like sort farm cucumbers by size, shape, and colour. A definite leap for computing power, but not necessarily for superintelligence. Plus, weird stuff sometimes comes out of these systems.

AI researcher Janelle Shane collects some of these glitches in her blog, AI Weirdness. For example, she features a study from the University of Tübingen that looked at how a particular image classification system identified fish – specifically, when it was given a photo of a certain type of fish, what parts of that image were really key to the system deciding that fish’s species. The answer actually wasn’t fish related at all. To the researchers’ surprise, the system showed them the parts of the photos that contained human fingers. Most of the available photos (and thus what the system had been trained on) were of people holding the fish as a trophy, so the system learned that the most surefire way to identify the fish was by the human fingers around it. This fishy scenario shows that AI can be spoofed in ways that would never throw a person.
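The failure mode Shane describes is easy to reproduce in miniature. The sketch below is my own toy example (not the Tübingen system): a learner that picks whichever single feature best predicts the training labels. Because the spurious “human fingers” feature perfectly tracks the “bass” label in these trophy-style training “photos”, the learner latches onto it and misclassifies any bass photographed without a hand:

```python
# Each "photo" is a pair of binary features: (human_fingers, long_fin).
# In the training set, fingers perfectly track the "bass" label,
# because bass are photographed as trophies.
train = [
    ((1, 1), "bass"), ((1, 0), "bass"), ((1, 1), "bass"),
    ((0, 0), "minnow"), ((0, 1), "minnow"), ((0, 0), "minnow"),
]

def fit_one_feature(data):
    """Return (feature_index, rule) for the single feature whose
    values best predict the labels on the training data."""
    best = None
    for i in range(2):
        groups = {}
        for features, label in data:
            groups.setdefault(features[i], []).append(label)
        # majority label for each observed feature value
        rule = {v: max(set(ls), key=ls.count) for v, ls in groups.items()}
        acc = sum(rule[f[i]] == y for f, y in data) / len(data)
        if best is None or acc > best[0]:
            best = (acc, i, rule)
    return best[1], best[2]

feature, rule = fit_one_feature(train)
print(feature)       # 0 -- the fingers feature, nothing fish-related
print(rule.get(0))   # "minnow": a bass held by no one is misclassified
```

The learner is perfectly rational given its data; the problem is that the easiest-to-exploit regularity in the data was never about fish at all.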

That’s not to say that these machines aren’t smart. But it’s important to understand that they’re smart in a very different way than we are. The mistakes they make feel strange to us because they don’t perceive the world like a human does. They’re not meant to.

Human intelligence is incredibly generalisable and adaptive, unlike even the most sophisticated AI. According to Takeo Kanade, a leading computer vision expert at Carnegie Mellon University, “if you think it’s easy for you to do, most of the time it’s very difficult for robots to do.” Or as Janelle Shane says, “Humans have a sneaky habit of doing broad tasks without even realising it.”

We’re able to multitask, context-switch and handle unexpected situations with an ease that’s currently inconceivable for machines. And, as computer scientist Mark Lee points out: “Despite 60 years of research and theorising, general purpose AI has not made any progress worth mentioning, and this could even turn out to be an impossible problem.” Computing power doesn’t seem to help with that. We don’t even know how to define human intelligence, let alone structure it in a machine.

When it comes to robots, we’re not anywhere close to developing the kind of intelligence or skill that humans have. It’s possible that some unforeseen breakthrough will propel us past every single remaining hurdle toward recreating a machine version of our incredibly complicated brains and bodies. But given the trajectory we’re looking at right now, that’s far less likely than the alternative: that it will take many small steps and will not necessarily lead where we think.

To borrow from computer scientist Andrew Ng, worrying about artificial superintelligence taking over is akin to worrying about over-population on Mars. Plus, as tech entrepreneur Maciej Cegłowski notes, there’s also the question of motivation: why would something “smarter” – whatever that means – want to destroy us, as so many fear? “For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation. Or maybe it would become obsessed with the risk of hyperintelligence, and spend all its time blogging about that.”

Cegłowski also points out that intelligence is unpredictable and often doesn’t land on the targets we set. The smartest person in the world will still struggle with the basic task of getting a cat into a cat carrier if the cat doesn’t want to go inside. The Australian military underestimated animal intelligence in 1932 while fighting a series of battles against an unlikely foe: emus. In trying to cull the public nuisance of birds running amok, the Aussies found that their machine guns were no match for the animals’ clever “guerrilla” tactics. (The emus split into small groups and even posted sentries, ultimately causing the human soldiers to lose the “emu war” and give up.) Animals, despite being “intellectually inferior”, have skill sets that can outsmart our own. That’s because intelligence isn’t as simple as a linear graph of processing power.

Rodney Brooks, roboticist and co-founder of iRobot, Rethink Robotics and, most recently, Robust.AI, famously wrote, “It is unfair to claim that an elephant has no intelligence worth studying just because it does not play chess.” He also wrote an essay titled “What Is It Like to Be a Robot?”, which draws on animal intelligence to illustrate that there are types of intelligence – an octopus’s for example – that evolved entirely independently from mammal brains. Similarly, he says, robots have a different way of seeing and processing the world than we do. They can sense things that we can’t and be totally oblivious to things that are obvious to us. Rather than artificial intelligence being a step on the path to human intelligence, it can and will be something entirely its own, and this means that, just as we’ve done with animals in the past, we’re at our best when we team up. 

That’s not to say that human replacement is always a bad outcome. Under an unrelenting sun, the camels of Qatar are poised and ready to run. Camel racing – a tradition on the Arabian Peninsula for thousands of years – is one of Qatar’s most popular sports, with millions bet on races.

For decades, camel races were haunted by a dark shadow of exploitation as owners sought the smallest, lightest-weight jockeys they could find. Human rights organisations documented child trafficking rings that fed kids as young as age three into jockey camps where abuse, starvation and death were part of daily life. Despite multiple regulations that added minimum age and weight requirements for jockeys, violations persisted, until the robots took over.

In the mid-2000s, Qatar outlawed human riders and invested in developing robotic ones. Today, each robot jockey comes equipped with a remote-controlled whip operated by an owner or trainer who rides in a car alongside the race, as well as a hump-mounted speaker that allows for communication with the animal. The robot jockeys haven’t completely eliminated the pipeline of child jockeys, but by directly replacing human jockeys, they’ve put a sizable dent in that slave labour market, both in and outside of Qatar.

Machines have subbed in for all sorts of human activity for centuries. Much of it is the type of dirty, dull, and dangerous work that we don’t want people to do. We’re thrilled to send robots to explore nuclear waste sites, dispose of bombs and collect data on Mars. In India, robots are taking over the work of sewer cleaners, the impoverished workers who dive into sewage pipes to manually shovel excrement and trash, risking death and disease every single day in the process. But the salaries that people rely on to live don’t necessarily need to disappear. Even with sewer-cleaning bots, the former human “scavengers” are still employed; instead of shoveling waste, many are now paid to set up and remotely steer the robots through the sewers, working with the technology designed to usurp them.

Economists and labour market analysts are divided over the potential effects of new automation through robots and AI. Some believe that automation raises productivity, creates greater labour demand and generates more wealth; and some view robots and artificial intelligence as a new type of disruption, one that’s poised to replace humans in ways we’ve never seen before. What further complicates this conversation is the fact that robots don’t automate jobs; they automate tasks, which means that robots are much more disruptive in sectors where jobs are heavily task-oriented.

If, for example, your job, or the majority of your job, consists of planting seeds in rows, it’s likely that an automated planting system can do it (and probably is already doing it). Unless you can provide whatever tech support or supervision that robotic system requires, you’ll need a new job. But if your job consists of tasks that can’t be fully automated, your human skills might complement automation in new ways, and even spur economic growth. When banks introduced ATMs, the number of human tellers at individual locations went down, but the number of bank branches exploded, increasing teller hiring overall. The shift also fundamentally changed what it meant to be a bank teller. Instead of just handling cash, tellers started providing a range of new services.

The ideal is that delegating some of our routine tasks to robots will complement our comparative advantage, freeing people to focus on anything that requires adaptable intelligence and basic common sense. But the main thing I want to argue is that, contrary to our tech-deterministic beliefs, we actually have some control over how robots impact the labour market. Rather than pushing for broad task automation, we could invest in redesigning the ways people work in order to fully capture the strengths of both people and robots.

In fact, that’s already happening in some patent offices. One of the main problems with the patent system is that in order to decide if a proposed thingamabob is truly a new invention, patent examiners would ideally sift through monumental troves of data to pinpoint how said thingamabob is (or isn’t) novel. With those requirements, patents would never get issued, so instead, examiners do as much research as possible within a reasonable time frame and take their best guesses. This leads to patents that never should have been granted, which is a drag on the economy. But some patent offices are hoping to change that with AI. For example, both the Japanese and US patent offices are exploring new systems that could help dig through the world’s available information, and flag relevant documents that examiners would otherwise miss. This gives examiners more information to use in their analyses and frees them up to pursue answers to questions that are hard for AI systems to come up with.

There is a striking difference between these patent offices’ approach and what I’ve seen too frequently elsewhere. Instead of asking whether AI could replace those pesky human patent examiners, for example by training it on available data and seeing if it can achieve a slightly better hit rate, these offices have built their strategies around an entirely different question: “how can we invest in technology that helps our people do their jobs better?”

How we use new technologies isn’t set in stone. We make choices about how robots affect our lives and labour markets, and we can learn lessons from the different ways that cultures across the globe view the role of robots. For example, my roboticist colleagues in Japan don’t field nearly as many questions about their creations replacing humans, in part because robots are more often viewed as mechanical partners rather than adversaries. Yukie Nagai, head of the Cognitive Developmental Robotics Lab at the University of Tokyo, points out that a Google image search for human-robot interaction in English returns image after image of a robotic arm and a human arm across from each other, shaking hands. If you do the same search in Japanese, the images aren’t of robots and humans opposite each other, but rather standing or sitting beside each other, sharing a perspective. They are partners, not in the sense of shaking hands, but in the sense of holding hands.

While there are many socioeconomic factors that influence how individual countries and societies view robots, the narrative is fluid, and our western view of robots versus humans isn’t the only one. Some of our western views can be directly attributed to our love of dystopian sci-fi. How much automation disrupts and shifts the labour market is an incredibly complicated question, but it’s striking how much of our conversations mirror speculative fiction rather than what’s currently happening on the ground, especially when our language places agency on the robots themselves, with pithy headlines like “No Jobs? Blame the Robots” instead of the more accurate “No Jobs? Blame Company Decisions Driven by Unbridled Corporate Capitalism”.

Comparing robots to animals helps us see that robots don’t necessarily replace jobs, but instead are helping us with specific tasks, like plowing fields, delivering packages by ground or air, cleaning pipes, and guarding the homestead. Robots differ from animals in their abilities: our modern missile guidance systems far exceed notorious psychologist B. F. Skinner’s World War II pigeon-piloted missile system in both scale and impact, and marine mammals have enough advantages over robots that the navy has not yet phased them out. But these differences only further illustrate the point that when we broaden our thinking to consider what skills might complement our abilities instead of replacing them, we can better envision what’s possible with this new breed.

