A Blog by Jonathan Low

 

Jun 18, 2019

Iterative Liability And the Growing Demand For Assigning AI Responsibility

Where to begin? Or perhaps more importantly, where to end?

As AIs become more pervasive and their impact more substantial, there is a growing demand for accountability. The legal system is attempting to work out the chain of responsibility while acknowledging that things change, especially AIs.

The doctrine that appears to be emerging is that there is legal liability for the behavior or effects of AI, but that, like the AIs and their developers, the liability evolves as changes are made. This may not be satisfying to those who can demonstrate harm, but it may be a fairer, more logical way of apportioning blame as well as financial or operational obligations. JL


Eran Kahana reports in the Stanford Law School blog:

“Iterative liability” describes a legal liability standard that can be applied to AI designs capable of self-replication and iterative changes. The “parent” entity is not held liable for the actions of its progeny. Holding a human developer (and an AI) liable along the evolutionary chain of a learning-capable autonomous entity is innovation-inhibiting. This reflects the reality that the foreseeability of an AI’s actions erodes through its evolution. The clarity of the starting point blurs, until it fades away. The liability attached to a particular developer, be it human or AI, is in motion and moves away from the original developer as the iterations increase.
Considering the proliferation of AI over the last decade, it is unsurprising that the concept of “iterative liability” remains relevant, arguably even more so today. The same can also be said about the need for a normative framework by which our legal system can gauge and categorize the actions of AI entities. The foundations of this framework, in the form of an AI taxonomy, were described about seven years ago. This post revisits these two concepts.
With respect to iterative liability, it will be useful to begin with a recap of a couple of items. First, “iterative liability” describes a propagating legal liability standard, one that can be applied to cyber and cybernetic design. This standard is adaptable to virtually any AI design capable of self-replication and iterative changes (think machine learning, and more on that below). Under this liability framework, the “parent” entity is not (at least not by default) held liable for the actions of its “progeny,” at least not all of them.

Second, iterative liability is a macro-level framework in that it is not intended solely to benefit one type of AI development, but can be conducive to promoting the overall development of AI (which is, by most if not all measures, a desirable attribute). As such, iterative liability is both AI developer-friendly and pragmatic. It captures the operational reality that holding a human developer (and an AI replicator) liable along the entire evolutionary chain (iteration) of a learning-capable autonomous cyber entity is innovation-inhibiting. It also reflects the reality that the foreseeability of an AI entity’s actions erodes through its evolutionary process. The pristine clarity of the starting point gradually blurs, until it invariably fades away, much like the light-dot in a CRT television. Of course, this is not to say that liability vanishes; it does not. The liability attached to a particular developer, be it a human or an AI entity, is in motion and moves away from the original developer as the iterations increase.
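Neither the original framework nor this recap specifies how that shifting responsibility would be quantified. Purely as an illustrative toy model of the intuition that liability attenuates with distance from the original developer, the sketch below apportions shares across an evolutionary chain; the function, the decay parameter, and the apportionment scheme are hypothetical assumptions made for this example, not part of the iterative-liability framework itself.

```python
# Toy illustration only: a hypothetical attenuation model for "iterative
# liability." The decay factor and apportionment scheme are assumptions
# made for this sketch, not a rule proposed in the post.

def apportion_liability(chain, harmful_iteration, decay=0.5):
    """Assign a share of liability to each contributor in an evolutionary chain.

    chain             -- list of contributor names; index 0 is the original "parent"
    harmful_iteration -- index of the iteration whose behavior caused the harm
    decay             -- per-iteration attenuation (0 < decay < 1); smaller values
                         shift responsibility toward later iterations faster
    """
    # Weight each contributor by how close its iteration is to the harm:
    # the original developer's weight shrinks as intervening iterations
    # accumulate, mirroring the erosion of foreseeability.
    weights = [decay ** (harmful_iteration - i) for i in range(harmful_iteration + 1)]
    total = sum(weights)
    return {chain[i]: weights[i] / total for i in range(harmful_iteration + 1)}


if __name__ == "__main__":
    chain = ["human developer", "AI iteration 1", "AI iteration 2", "AI iteration 3"]
    for actor, share in apportion_liability(chain, harmful_iteration=3).items():
        print(f"{actor}: {share:.0%}")
    # With decay=0.5, the original human developer carries about 7% of the
    # liability, while the most recent iteration carries about 53%.
```

The point of the sketch is only the shape of the result: as iterations pile up between the original developer and the harmful behavior, that developer's share approaches zero without ever being formally erased, which is the "liability in motion" idea described above.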
As for the AI taxonomy, four levels of AI apps (or entities) were described, beginning with Level A and ending with Level D. (In a later post I introduced Level E, which is the nano-scale version of Level D. For the purposes of this post, I will stick with Level D, but its application to Level E should be readily apparent.) This AI taxonomy was founded on a computational capability continuum that can be synced with a legal framework able to deal effectively with the potentially unpredictable behavior of such apps. Of particular relevance here is the Level D app. To recap, the “Level D app manifests intelligence levels so sophisticated that it can identify and reprogram any portion of its behavior (in unpredictable ways); i.e., it has a self-awareness capacity and can create other apps without human involvement.” (Another example of this replication is discussed in the UNTAME post from 2010.)
The conclusion: Iterative liability and the AI taxonomy represent two critical concepts that together can help ensure our legal system deals effectively with harm and damage caused by AI apps. Absent the adoption of these types of AI-specific, mission-centric concepts, our legal system will be severely handicapped. It will be woefully ineffective at, if not entirely incapable of, yielding effective solutions.
***Postscript***
May 27, 2019: Level C and D/E AI algorithms exhibit fractal characteristics. The algorithm’s evolutionary process (“progression” in the fractal sense) is capable of manifesting an infinite number of iterations (inexhaustible complexity). I first discussed this principle nearly 9 years ago in the environmental-based learning post. Behaviorally, progression entails operational unpredictability, and from a liability perspective it underscores the importance of isolating the Level C and D/E developer from legal responsibility for harm caused by the app.
