A Blog by Jonathan Low

 

Nov 14, 2017

If An Artificially Intelligent Guard Dog Attacks a Mail Deliverer, Who Is Responsible?

Whoa! Establishing intent and liability is complicated. Who takes the blame: the machine itself, the company that created it, the researcher or coder who designed it, the purchaser who deployed it... everyone in the value chain?

This could give a whole new lease on life to the intellectual property bar now that patent trolling is becoming less lucrative. And forget 'slip and fall' suits: don't even get us started on the opportunity for plaintiff's attorneys. JL


Varun reports in Startup Grind:

Who is in control of the “intent”? If its moves are not pre-programmed but it can learn, is it not a life form? AI systems work like black boxes; if backwards traceability to the entire set of input data doesn’t exist, the owner escapes criminal liability, as intent would be impossible to establish. If the startup sold 100 AI guard dogs and 50% went rogue, how is liability determined? The manufacturer has pointed the finger at a particular AI researcher and a specific research document: is the researcher criminally liable?
At the height of the Cuban missile crisis in 1962, four Soviet submarines carrying nuclear torpedoes were deployed to the region. One of them was B59. On the other submarines, launching the nuclear torpedo required the approval of only the two senior officers on board; B59, on that deployment, carried a third officer whose approval was also needed.
As the events of the missile crisis unfolded, B59 found itself out of radio contact with its chain of command. Fearing, based on the data available to them at the time, that the war had already started, and determining it was better to do something than nothing, two of the three officers decided to launch a nuclear torpedo at US assets.
However, the third officer, Vasili Arkhipov, declined to give his consent. He deviated from the seemingly logical decision and made one that turned out to be crucial for humanity, preventing an all-out nuclear World War III from breaking out.
If, in future conflicts, a human such as Arkhipov is to be replaced by a fully autonomous, AI-driven system, it is imperative that the system also “cares” for humanity.
That caring needs to be present at every touch point an AI-driven system has with a human, and it needs to be pervasive by design and by regulation.

A simple scenario: an AI-driven guard dog bites the postman delivering the mail

An AI-driven guard dog misidentifies the postman, attacks, and injures him. Who is responsible? Who is in control of the “intent”?
First, let’s compare it to a regular dog, which has beneficial uses for its owner, such as providing love and affection. If it goes rogue and attacks someone, in our society today the dog is held responsible. The owner might face some civil action, but that’s about it.
However, if the owner directed the dog to go ahead and attack someone, then both the owner and the dog are held responsible, with the owner facing criminal charges, in addition to whatever civil action there might be.
Who directed the intent has been the key guiding question in criminal prosecutions of vicious dog attacks.
For an AI-driven guard dog that misidentifies a good person as bad and over-calibrates the response, who can be held responsible? How does the postman, whom we assume is human, get justice? Who formulated the intent?

Is the AI-driven dog liable? Is it an object or a new form of life?

Since the dog formulated intent on its own, outside the set of inputs and expectations it received at its “birth”, isn’t it a form of life?
Would it make sense to send it for re-training (the equivalent of jail time for humans) instead of outright decommissioning? Would the courts treat it as an independent life form, and would it pass the threshold for that?
If it can “feed” itself (self-charging solar battery panels on its back), if it can provide affection towards its owners, if its next moves are not pre-programmed but based on thought and intuition, if it can learn, is it not a life form? Why is it necessary for it to have tissue and biological function to be considered a life form?
If the courts rule such an AI object is not a life form, then where does the criminal liability lie? Someone has to be held responsible, right?

Is the owner criminally liable? The black-box defence.

There are two possibilities here: either the owner taught the dog only good things and it still went rogue, or the owner deliberately fed the AI-driven dog bad input data, hoping it would go rogue.
In the first case, the owner shouldn’t be held liable, whereas in the second case the owner clearly should be. But AI systems work like black boxes: if backwards traceability to the entire set of input data doesn’t exist, the owner, whether good or bad, escapes criminal liability here, as intent would be almost impossible to establish. (For our scenario, the key assumption is that the owner did not issue a specific command to the dog; it acted on its own.)
The owner can and should still be sued, though, to extract compensation for the victim.
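To make that traceability point concrete, here is a minimal sketch, in Python with entirely hypothetical names, of the kind of input-data provenance log that would have to exist for an investigator to separate what the owner fed the dog from what the manufacturer shipped, and so to establish intent at all:

```python
import hashlib
import json
import time


class ProvenanceLog:
    """Append-only record of every training example fed to the dog.

    Hypothetical sketch: if a log like this existed, an investigator could
    reconstruct exactly which inputs each party supplied, which is the
    "backwards traceability" the black-box defence relies on not having.
    """

    def __init__(self):
        self._entries = []

    def record(self, source, example, label):
        """Log one training example and return its content hash."""
        payload = json.dumps({"example": example, "label": label}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self._entries.append({
            "timestamp": time.time(),
            "source": source,        # e.g. "manufacturer" or "owner"
            "label": label,          # what the dog was told this example means
            "content_hash": digest,  # tamper-evident fingerprint of the data
        })
        return digest

    def examples_from(self, source):
        """Everything a given party fed the system, for later audit."""
        return [e for e in self._entries if e["source"] == source]


# Usage: the owner's contributions are separable from the manufacturer's.
log = ProvenanceLog()
log.record("manufacturer", {"image": "uniformed_courier.png"}, "friendly")
log.record("owner", {"image": "postman_at_gate.png"}, "intruder")  # bad data now leaves a trace
print(len(log.examples_from("owner")))  # prints 1
```

The particular data structure doesn’t matter; the point is that without some record like this, the black-box defence above is essentially unanswerable.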

Is the manufacturer/startup criminally liable? The component defence.

If the startup sold 100 such AI guard dogs and 50% went rogue despite passing internal tests, how is liability determined?
What if the startup points out: “Hey, we took this well-established model published by this researcher and simply used it as a component; it’s not with us but with the AI researcher who developed that model that the blame ultimately lies”?
The startup should face civil action, but perhaps the executives escape criminal action based on this “component defence”. If the component is faulty, the blame then shifts to the component manufacturer.

Is the original AI researcher or the model host criminally liable?

Suppose at this point the startup manufacturing these AI guard dogs has pointed the finger at a particular AI researcher, and at a specific research document this individual published, as being the source (sidebar: what if the startup used a hosted model from an AI marketplace?).
Is the researcher criminally liable? Since no bad intent was prescribed in the research and it was meant to be used for good things, surely not, right?
If we think of such AI systems as being life forms, then that research paper is its DNA, and the researcher is basically “God”.
Since the legal system doesn’t allow for criminally charging God for the actions of beings that evolved from a “simple” set of blueprints, the researcher is not liable.
But if the AI system is viewed as just another component, then culpability extends to the researcher as well. It might be very tough to trace, but if and as the stakes get higher, it could start coming into play.
However, a hosted AI model provider might have significant liability, as it is providing not just a blueprint but an implementation of it, which the startup plugged into its own product.
Also, wouldn’t we want deliberately nefarious research identified and regulated somehow?

But I built a hammer…

Most researchers do what they do for the benefit of humanity; when their work ends up being used in ways that hurt humanity, how do they reconcile that with their work?
Can they, given that they don’t know whether the hammer they have invented will be used to build a chair (they hope) or to hit someone (they hope not)?
But since the hammer here now has a mind of its own, a black box, the researcher doesn’t entirely escape the moral blame if it does something bad. This is not just a law-and-order problem, then. The AI community needs to proactively manage the issue: how can AI-driven systems be made safer towards humanity?
Shouldn’t there be a law requiring every human-impact AI system to act in the beneficial interest of humans unless a human (for instance, a law enforcement officer) has specifically provided an override?
Or, in the case of the AI-driven dog attacking the postman: that should never have happened without the express consent of the dog’s owner. If it did, the criminal liability lies solely with the dog’s owner, not with the manufacturer or with the researcher whose work was used to build that guard dog.
It becomes the manufacturer’s responsibility to ensure there is such a human-in-the-middle loop before the guard dog can harm another human.
In fact, the guard dog should sound an alarm, notify its owner and the authorities, and record the interaction with any intruder, rather than acting by itself to try to “harm” the intruder.
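To illustrate what such a human-in-the-middle loop could look like, here is a minimal, hypothetical sketch (continuing in Python, with invented names) in which every physical response is gated on explicit, per-incident consent from the owner, and the default behaviour is to alarm, notify, and record:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class IncidentLog:
    events: list = field(default_factory=list)

    def record(self, message):
        # Timestamped record of everything that happened with the intruder.
        self.events.append(f"{datetime.utcnow().isoformat()} {message}")


class GuardDogController:
    """Hypothetical human-in-the-middle gate in front of any harmful action."""

    def __init__(self, notify_owner, notify_authorities):
        self.notify_owner = notify_owner              # callable: alert the owner
        self.notify_authorities = notify_authorities  # callable: alert the authorities
        self.log = IncidentLog()
        self._owner_consent = False                   # no standing authorisation

    def grant_consent(self):
        """The owner explicitly authorises a physical response for this incident."""
        self._owner_consent = True
        self.log.record("owner granted consent for a physical response")

    def handle_intruder(self, description):
        # Default, non-harmful behaviour: alarm, notify, record.
        self.log.record(f"possible intruder detected: {description}")
        self.notify_owner(description)
        self.notify_authorities(description)
        if not self._owner_consent:
            return "alarm_only"                       # never harms without a human decision
        self.log.record("physical response authorised by owner")
        self._owner_consent = False                   # consent is per incident, not standing
        return "physical_response"


# Usage: without the owner's say-so, the postman only ever triggers an alarm.
dog = GuardDogController(notify_owner=print, notify_authorities=print)
print(dog.handle_intruder("person in uniform at the gate"))  # prints "alarm_only"
```

The design choice that matters is that consent is per incident and expires immediately; a standing “attack anyone” authorisation would put the intent, and the criminal liability, squarely back on the owner.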

Think Human

Now, what if the “postman” were itself a robot, and it gets attacked by the AI-driven dog? Is that OK? Surely not. For a real dog that would be natural behavior, but if the AI-driven dog had learnt something of human-level empathy, wouldn’t that be better?
A “human-impact empathy and state of mind” would prevent the dog from biting anyone in the first place. That is what we need to bake into AI systems across the board. That is what we need to legislate as well.

A Federal Beneficial AI Commission?

What if we had a non-profit entity which works towards:
  • Advising the research community on how to safeguard their work from being misused, and on whether they are using proper disclaimers and shielding their liability so as not to be exposed to over-enthusiastic prosecutors or wide-reaching civil lawsuits in the future. The same questions apply to hosted model marketplaces.
  • Evaluating worst-case use scenarios and determining ways to mitigate them. Bad-faith actors will figure out horrible uses of AI; we don’t want to live in a world where the good people are surprised by the bad and only then act. We need to pre-empt: imagine the abuses and develop mitigation methods beforehand.
  • Informing the consumer about the safety of day-to-day AI-driven systems in easy-to-understand ways.
  • Lobbying for legislation to make AI human-focused, backed by enforcement action against bad actors.
This is critically needed.
Why? Because AI systems are getting “exceptionally smarter” by the day, and while the use cases to date have been good (e.g., identifying lung cancer much sooner), nefarious use cases can’t be far off as the tools and the ability to use them become more mass-market.

Preparing for Artificial General Intelligence (AGI)

Beneficial AI would be wonderful for humanity, but like all technology before it, it has the potential to be misused. The stakes, though, are the highest they could ever be.
A life form smarter than us, if not empathetic towards us and our world, would eventually see no point in our existence unless it were to help it, just as we don’t care about the bacteria in our world.
We only get this one chance to build in, regulate, and legislate empathy, before we have artificial general intelligence that eventually becomes smarter than humans.
At that point it will be too late. We will not be able to control it. It will become the dominant life form on the planet, and with presumably many entities and nation-states launching their own AGIs, our collective future will be determined by the interactions between those AGIs.
We would have to be exceptionally stupid to dream up, make, and show Terminator movies to the world, and then go ahead and build such systems, which then annihilate us not by force but by systemic design over a long period of time.
We weren’t born with a lifetime pass to being the dominant species on the planet. I am sorry to be a “speciesist”, but heck, I want a world with humans as the dominant species, and with machines with humane values helping us, not replacing us altogether.
How can we teach AI systems to have a humane outlook, and how do we go about measuring, informing, and regulating that?
I also strongly recommend reading the book which inspired me and provided the example of the B59 submarine incident (I am also including below another book which I am planning to read). Any other reviews, opinions, or suggestions?
