A Blog by Jonathan Low

 

Aug 17, 2013

Why Your Next Judge (Probably) Won't Be a Robot

We remain besotted with the notion of algorithmic objectivity. Data will reveal all - and the more of it the better - leading us to the promised land of ideal decisions based on perfect information. As if.

As the Obama Administration announced this week that it would no longer seek mandatory minimum sentences in certain cases, justice as it is practiced in the US came under even greater scrutiny than usual. Issues of ideological belief, personality, judicial temperament and other human factors that influence trials, sentencing and related rulings have all been called into question.

Which raises the question of whether robots, or statistical models based on past judicial behavior, can or should be pursued with greater urgency. Allegations of bias are inherent in the current system. Race, religion, education, geographical origin, socio-economic status, political affiliation and even more personal characteristics are routinely cited by those challenging society's ability to render justice fairly.

The issue takes on greater urgency as global commerce and technological advances make the world's citizens better informed about standards and experiences beyond their own. It is useful to know that attempts are already being made to codify knowledge about the context in which judicial decisions are rendered; that may influence how polities determine who among them is fit to sit in judgment. The counter-argument is that humanity has done best where the perception prevails that the individual can get a fair hearing on all of the factors that brought the accused and accuser to this point. Justice works best when those whose behavior it governs believe in it. Robots and computers are fine for informing our decisions, but it is not yet apparent that they are the optimal vehicle for making decisions themselves. JL

Luke Dormehl reports in Fast Company:

On the surface, the idea that we should be able to build an automated judging system makes sense. Current efforts in artificial intelligence rely on rule-based systems, combined with specialized languages for formulating and communicating these rules. Law similarly consists of rules, complete with what appear to be binary yes/no divisions regarding whether those rules have been broken.
It’s no secret that the justice system can be less than objective on occasion. From prejudice and bias to accidental error, part of being human means making mistakes. Daniel Kahneman’s 2011 book Thinking, Fast and Slow features a potent (and alarming) example of how, without deliberate malice or corruption, trained judges can mess up.
The book describes a parole board of judges in Israel, which was monitored over the course of one day as its members approved or rejected parole applications. Parole approvals peaked at 65% following each of the judges’ three meal breaks, and steadily declined in the time after--eventually hitting zero immediately prior to the next meal.
Forget about past behavior or predictions of future dangerousness--in this situation the single most important factor in determining whether a person was able to leave prison before their maximum sentence was completed turned out to be nothing more scientific than the randomly assigned time of day of their parole hearing. Surely, we can strive to do better.
If this is the case, shouldn’t it then be possible to formalize legal rules using rule-based languages? Deciding legal cases in this way would be a matter of entering the facts of the case, applying the rules to the facts, and determining the correct answer. In fact, similar algorithms are already widely used by law enforcers to forecast criminal behavior. Could the justice system ever employ a version of the technology to decide criminal trials?
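On that rules-plus-facts picture, the mechanics would be trivial to automate. Below is a minimal sketch of the idea in Python--the statute, the facts, and the rule names are hypothetical, invented purely for illustration, not drawn from any real rule engine or from the article:

```python
# A minimal sketch of the rule-based picture described above: enter the
# facts, apply the rules, read off the answer. The "speeding" statute and
# the case facts are made up for illustration.

def exceeds_speed_limit(facts):
    # Rule: driving above the posted limit violates the statute.
    return facts["speed_mph"] > facts["posted_limit_mph"]

RULES = {
    "speeding": exceeds_speed_limit,
}

def decide(facts):
    # Apply every rule to the facts; return the violations found.
    return [name for name, rule in RULES.items() if rule(facts)]

case = {"speed_mph": 48, "posted_limit_mph": 35}
print(decide(case))  # ['speeding']
```

The sketch also hints at the objection Posner raises next: everything interesting lives in how the rules, and the weight given to each, get into the program in the first place.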
“In principle it would be possible, although it’s still a way away,” says Judge Richard Posner, the man identified by the Journal of Legal Studies as “the most cited legal scholar of the 20th century.” “The main thing that would be left out would be how the judges' views changed with new information. Any change that may affect the way judges think would somehow have to be entered into the computer program, and weighed in order to decide how he would decide a case.”

The Perils of Attitude

It turns out that the point of view Posner describes--the ideology that shapes a judge’s response to issues--is a far better predictor of how a judge will decide a case than just about anything else, including political party affiliation. Outside of a handful of studies, however, the ideological basis for a judge’s decision-making process, which legal scholars call “attitudinalism,” has not been subject to nearly enough study, particularly at the Supreme Court level.
Because so little is known about how worldviews impact legal decision-making, Judge Posner recommends that AI researchers interested in law focus on helping judges uncover their biases. His suggestion is a sort of recommendation engine that constructs a judge’s profile from his or her past decisions.
“This could be a useful tool to help judges be more self-aware when it comes to bias,” he observes. “A particular judge might be unaware that he or she is soft on criminals, for example. When they receive their profile they might become aware that they have certain unconscious biases that push them in certain directions.”
Elaborating on this idea in a previous paper, Judge Posner wrote:
“I look forward to a time when computers will create profiles of judges’ philosophies from their opinions and their public statements, and will update these profiles continuously as the judges issue additional opinions. [These] profiles will enable lawyers and judges to predict judicial behavior more accurately, and will assist judges in maintaining consistency with their previous decisions--when they want to.”
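As an illustration only--this is an assumption about what such a profile engine might compute, not Posner’s or anyone’s actual system--a profile could be as simple as each judge’s ruling tendency measured against the pool average, with large deviations flagged for the judge’s attention:

```python
# Hypothetical sketch of a judge-profiling engine. The decision records
# are invented; a real system would draw on published opinions and
# control for case mix. The idea: compare each judge's tendency to the
# pool average and flag outliers as possible unconscious bias.

from collections import defaultdict

decisions = [
    ("Judge A", "sentencing", "lenient"),
    ("Judge A", "sentencing", "lenient"),
    ("Judge A", "sentencing", "harsh"),
    ("Judge B", "sentencing", "harsh"),
    ("Judge B", "sentencing", "harsh"),
    ("Judge B", "sentencing", "lenient"),
]

def lenient_rate(records):
    # Fraction of decisions in these records that went the lenient way.
    return sum(1 for _, _, outcome in records if outcome == "lenient") / len(records)

by_judge = defaultdict(list)
for record in decisions:
    by_judge[record[0]].append(record)

pool_rate = lenient_rate(decisions)
for judge, records in by_judge.items():
    delta = lenient_rate(records) - pool_rate
    flag = " <- review for possible bias" if abs(delta) > 0.15 else ""
    print(f"{judge}: {delta:+.0%} vs. pool average{flag}")
```

A real profile would need far more data and controls; the point is only that the raw material--past decisions--is already on the public record, and continuously updating the profile as new opinions issue is exactly the kind of bookkeeping machines do well.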

Do Algorithms Dream Of Electric Laws?

This notion that algorithms might have a place in the legal system is not a new one. As Christopher Steiner points out in Automate This: How Algorithms Came to Rule Our World, the use of algorithms in matters of law dates back to the Babylonians, who applied a sophisticated math system to everything from trade to the courtroom.
In fact, the modern computational promise of “algorithmic neutrality,” described by writers such as Tarleton Gillespie and Evgeny Morozov, forms just part of a move toward scientific objectivism in law that has been playing out over the past two centuries.
In 1895, future Supreme Court Associate Justice Oliver Wendell Holmes, Jr. asserted that the “ideal system of law should draw its postulates and its legislative justification from science.” Declaring that “the man of the future is the man of statistics,” Holmes foresaw the world’s “ultimate dependence upon science [since] it is finally for science to determine, so far as it can, the relative worth of our different social ends.”
Holmes’ notion that law should be practiced as a branch of statistics became the basis for the so-called “Jurimetrics” movement, which some scholars still see as a potential utopian future for legal computation.

Law As A Natural Science

Despite the confidence of Justice Holmes, applying computational logic to law isn’t as straightforward as computing a standard deviation. Unlike statistics, legal reasoning is largely a theory construction process that takes place through dialogue between judges, lawyers, and scholars. A question such as whether an ambulance driving into a park to save someone’s life is a violation of a law that states that no vehicles may enter the park is more than a semantic problem to be solved through better machine learning and natural language processing tools.
While natural science is positivist, grounded in the idea that objective knowledge can be acquired through the discovery of natural laws, the laws humans construct are not as well suited to positivistic study. Unlike natural laws, knowledge in a legal setting may (and regularly does) consist of opposing theories, with no clear way to say which is better. Outcomes in such cases are decided through a dialogue-based process, in which legal representatives argue over which theory is the most appropriate in a particular situation.
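The park example above makes the limitation concrete. A literal encoding of the rule is easy to write down; what it cannot represent is the competing theory of the rule’s purpose that the ambulance invokes. A hypothetical sketch:

```python
# A sketch of why formalization falls short, using the "no vehicles in
# the park" example from above. The encoding is hypothetical. The naive
# rule is trivial to state; the exception an ambulance deserves is not
# another fact about vehicles but a rival theory of what the rule is for,
# which this representation has no slot for.

def violates_park_rule(entity):
    # Literal reading: any vehicle in the park breaks the rule.
    return entity["is_vehicle"] and entity["in_park"]

ambulance = {"is_vehicle": True, "in_park": True, "purpose": "saving a life"}
print(violates_park_rule(ambulance))  # True by the letter of the rule--
                                      # but arguably not by its point.
```

Whether the ambulance “counts” as a vehicle for the rule’s purposes is not another fact to feed the function; it is the outcome of exactly the dialogue-based argument the formalism leaves out.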
Algorithms can have plenty of useful applications in the legal world, and may even help keep judges on the straight and narrow, but it looks like robots won’t be handing down sentences anytime soon.
[Image: Flickr user Phil Roeder]
