A Blog by Jonathan Low

 

Mar 20, 2019

Explainable AI and the Rebirth of Rules

Gaining credibility in order to achieve acceptance of new technologies depends on transparency and on the ability to articulate the process by which they are applied.

And that is why embracing rules and adhering to them is becoming important. JL


Tom Davenport and Carla Odell report in Forbes:

Artificial intelligence is great at generating predictions. But if you want to use artificial intelligence in a regulated industry, you better explain how the machine predicted a fraud or criminal suspect, a bad credit risk, or a good candidate for drug trials. Historically, rules relied more on logic than data. They learn from human experts. The process of extracting domain expertise from experts (is) “knowledge engineering.” (But with) “smart” software, experts have to understand how the software is trained, learns and is used. Explainable AI may be the next generation because of transparency. The price is to think the way the machines do…in rules.
Artificial intelligence (AI) has been described as a set of “prediction machines.” In general, the technology is great at generating automated predictions. But if you want to use artificial intelligence in a regulated industry, you better be able to explain how the machine predicted a fraud or criminal suspect, a bad credit risk, or a good candidate for drug trials.
International law firm Taylor Wessing wanted to use AI as a triage tool to help advise its clients about their predicted exposure to regulations such as the Modern Slavery Act or the Foreign Corrupt Practices Act. Clients often have suppliers or acquisitions around the world, and they need systematic due diligence to determine where to investigate possible risk more deeply. Supply chains can be especially complicated, with hundreds of small suppliers; it would be prohibitively costly for lawyers or supply chain managers to investigate every one of them. Using AI software from Rainbird Technologies Ltd. (not to be confused with the sprinkler company), the firm worked with lawyers to train the software in the relevant legal domain and in what clues to look for to identify a client’s potential pockets of risk. If the AI system reveals a high level of risk that a regulation is not being followed correctly, the next step is to call an attorney—and of course, Taylor Wessing hopes it will receive such a call.
Rumors of Rule Engines’ Death Have Been Greatly Exaggerated
Every form of AI has both strengths and weaknesses. The current darling of many AI technologists is deep learning, but it has significant disadvantages relative to other AI approaches in terms of transparency and interpretability. Rainbird has a rule engine at its core, which some view as “yesterday’s news” in the AI field. It’s true that rules powered the last generation of AI, the “expert systems.” But rules are still surprisingly popular; 49% of executives from large U.S. firms, for example, said they were using rule-based AI in a 2017 Deloitte survey.
The strength of rule engines is their interpretability; a human with reasonable levels of expertise can look at the rules, see if they make sense, and modify them relatively easily. (This is handy in a courtroom situation.) They are well suited to small- to medium-complexity decisions; above a few hundred rules, they can develop interactions that are difficult for humans to understand, and maintaining them is challenging.
Historically, rules relied more on logic than large amounts of data. Rather than learning from data, they learn from human experts. The process of extracting domain expertise from experts has been called “knowledge engineering.” Constructing a rule set for a simple knowledge domain is easy, and many non-technical experts can do it. Rainbird’s rules are structured as relationships among entities; the entities and relationships form a “knowledge graph” for the particular knowledge domain. Modeling a complex knowledge taxonomy with many rules and a large set of entities can be difficult and requires a trained knowledge engineer working with experts. Rainbird says that it typically takes about 20 person days to construct a knowledge graph of medium complexity.
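The article does not show Rainbird's actual rule notation, so the following is only a rough sketch of the general idea it describes: entities and relationships form a knowledge graph, and rules are evaluated over it. The supplier names, relationship labels, and the rule itself are all hypothetical.

```python
# Illustrative sketch only: a knowledge graph as a set of
# (subject, relationship, object) triples, plus one rule over it.
facts = {
    ("AcmeSupplier", "operates_in", "HighRiskCountry"),
    ("AcmeSupplier", "supplies", "ClientCo"),
    ("BetaSupplier", "supplies", "ClientCo"),
}

def flag_for_review(facts):
    """Rule: a supplier operating in a high-risk country needs deeper due diligence."""
    flagged = set()
    for (subject, relationship, obj) in facts:
        if relationship == "supplies" and \
           (subject, "operates_in", "HighRiskCountry") in facts:
            flagged.add(subject)
    return flagged

print(flag_for_review(facts))  # {'AcmeSupplier'}
```

The point of the sketch is that a domain expert can read such a rule directly, which is the interpretability advantage the article attributes to rule engines.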
There’s a catch when relying on experts: In the world of “smart” software, experts have to understand and accept how the software is trained, learns and is used. The software dictates how experts share their decision rules and weight the factors they consider most important in routine decisions (the most automatable kinds of decisions at this point). Experts might balk at this routine or the time it takes.
Enter the “knowledge engineers.” Rainbird’s customers like Taylor Wessing find people who have a penchant for learning the domain and the technology. They don’t have to be experts at either. In the case of Taylor Wessing, these folks were paralegals with technology skills or technologists with a bent toward the law. Knowledge engineers are not just business analysts; they act as Sherpas for the experts. They extract knowledge from the experts and help them use the software to build a “knowledge map” of the relevant terrain. Knowledge engineers help solve the problem of scarce experts or experts’ time. They help pose the questions that experts respond to. They teach the organization how to add more data.
Rules Get Easier
Rainbird has made its rule engine easier to use than many were in the past. It has an editor, for example, that leads the user through the rule creation process and creates both a visual model and rule-based code; the user can work with either interface. Rainbird says that its customers can usually learn how to develop applications on their own with a small amount of training. Another advantage over the last generation of rule engines is that structured numerical data can be integrated into rules via APIs. Credit decisions could be made, for example, on a customer’s credit score or other type of data. And although rule engines are not usually probabilistic, Rainbird does allow knowledge engineers to enter subjective probabilities into rules.
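The article says Rainbird lets knowledge engineers attach subjective probabilities to rules but does not describe its certainty calculus. The toy sketch below is one common way such weights can be combined (treating each fired rule as independent evidence); the rules, thresholds, and weights are all assumptions for illustration.

```python
# Hypothetical rules for a credit-risk decision, each with a subjective
# certainty assigned by an expert. Combination treats fired rules as
# independent evidence: c_new = c + w * (1 - c).
rules = [
    (lambda f: f["credit_score"] < 600, 0.7),   # low score suggests risk
    (lambda f: f["missed_payments"] > 2, 0.5),  # payment history suggests risk
]

def risk_certainty(facts, rules):
    certainty = 0.0
    for condition, weight in rules:
        if condition(facts):
            certainty = certainty + weight * (1.0 - certainty)
    return certainty

applicant = {"credit_score": 580, "missed_payments": 3}
print(round(risk_certainty(applicant, rules), 2))  # 0.85
```

This also shows how structured numerical data (here, a credit score fed in as a plain number) can drive a rule, the kind of API-supplied input the article mentions.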
Rule-based AI overcomes the challenge of building models when there aren’t large structured or unstructured data sets to test and refine the software. Data-driven approaches work well in fields such as reading medical images, where there might be tens of millions of relevant MRIs or CAT scans. In many other fields, and in any new knowledge domain, there are just not enough large data sets available to train the software or maintain its accuracy. Rule-based approaches solve that problem.
This leads to the third challenge in using AI: trust, privacy and data protection. Rainbird’s technology provides an example of the explainability advantage of rule engines: it offers an “evidence tree” that describes how a particular decision was made. Regulators in industries like health care and financial services find that capability particularly useful, the vendor says.
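The article does not detail how Rainbird's "evidence tree" is built, but the underlying idea can be sketched minimally: each rule that fires records the reason it contributed, so the final decision carries its own audit trail. Every name and rule below is hypothetical.

```python
# Toy "evidence trail": the decision is returned together with the
# reasons (fired rules) that produced it, so it can be explained later.
def evaluate(facts):
    evidence = []
    if facts.get("supplier_in_high_risk_country"):
        evidence.append("supplier operates in a high-risk country")
    if facts.get("no_audit_on_file"):
        evidence.append("no supplier audit on file")
    decision = "escalate to attorney" if evidence else "no action"
    return decision, evidence

decision, evidence = evaluate(
    {"supplier_in_high_risk_country": True, "no_audit_on_file": True}
)
print(decision)            # escalate to attorney
for reason in evidence:
    print(" -", reason)
```

A regulator or court asking "why was this flagged?" gets the recorded reasons back, which is the explainability property the article highlights.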
Claims of bias in AI are much in the news. Transparency about how decisions are made and how targets are selected will help. Explainable AI was the last generation of AI, and it may also be the next generation because of its transparency. The price is that we have to think the way the machines do…in rules.
