A Blog by Jonathan Low

 

Aug 3, 2018

Why the Future of Artificial Intelligence Depends on Trust

Scaling any new technology requires a belief on the part of customers that its benefits outweigh its costs. This has become increasingly important as questions have grown about the impact of data misuse and the capture of personal information that consumers never realized was being gathered, let alone gave their permission for. And those concerns arose over technologies the public had happily and eagerly embraced.

By contrast, much of the early narrative about AI has been negative: that it will eliminate jobs, manipulate behavior and challenge human hegemony. To overcome those concerns, the enterprises that successfully deploy AI will need to be proactive in sharing more information than they might wish about how it works - and be willing to make changes as the interaction evolves. JL


Anand Rao and Euan Cameron report in Strategy+Business:

Opening the black box in which some complex AI models have previously functioned will require companies to ensure that for any AI system, the machine-learning model performs to the standards the business requires, and that company leaders can justify the outcomes. Those that do will help reduce risks and establish the trust required for AI to become a truly accepted means of spurring innovation and achieving business goals.
As more and more companies in a range of industries adopt machine learning and more advanced AI algorithms, such as deep neural networks, their ability to provide understandable explanations for all the different stakeholders becomes critical. Yet some machine-learning models that underlie AI applications qualify as black boxes, meaning we can’t always understand exactly how a given algorithm has decided what action to take. It is human nature to distrust what we don’t understand, and much about AI may not be completely clear. And since distrust goes hand in hand with lack of acceptance, it becomes imperative for companies to open the black box.
Deep neural networks are complicated algorithms modeled after the human brain, designed to recognize patterns by grouping raw data into discrete mathematical components known as vectors. In the case of medical diagnosis, this raw data could come from patient imaging. For a bank loan, the raw data would be made up of payment history, defaulted loans, credit score, perhaps some demographic information, other risk estimates, and so on. The system then learns by processing all this data, and each layer of the deep neural network learns to recognize progressively more complex features. With sufficient training, the AI may become highly accurate. But its decision processes are not always transparent.
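To make that layering concrete, here is a minimal sketch, not the authors’ system, of how a loan application encoded as a vector might flow through a small network. The feature names and weights are illustrative placeholders, not real model parameters.

```python
# A minimal sketch (assumed example, not the authors' system) of how a
# deep neural network turns loan-application data into a decision.
import numpy as np

rng = np.random.default_rng(0)

# Raw data encoded as a vector: [payment history score, prior defaults,
# scaled credit score, debt-to-income ratio] -- illustrative features.
applicant = np.array([0.9, 0.0, 0.72, 0.35])

# Each layer is a learned linear transform followed by a nonlinearity.
# In a trained network these weights come from training data; here they
# are random placeholders.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(4, 8)), np.zeros(4)
W3, b3 = rng.normal(size=(1, 4)), np.zeros(1)

def relu(x):
    return np.maximum(0.0, x)

h1 = relu(W1 @ applicant + b1)              # low-level feature combinations
h2 = relu(W2 @ h1 + b2)                     # progressively more abstract features
score = 1 / (1 + np.exp(-(W3 @ h2 + b3)))   # approval probability

print(f"approval probability: {score[0]:.2f}")
# The prediction is easy to compute but hard to explain: the intermediate
# values h1 and h2 do not map cleanly to human-readable reasons.
```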
To open up the AI black box and facilitate trust, companies must develop AI systems that perform reliably — that is, make correct decisions — time after time. The machine-learning models on which the systems are based must also be transparent, explainable, and able to achieve repeatable results. We call this combination of features an AI model’s interpretability.
It is important to note that there can be a trade-off between performance and interpretability. For example, a simpler model may be easier to understand, but it may not be able to capture complex data or relationships. Getting this trade-off right is primarily the domain of developers and analysts. But business leaders should have a basic understanding of what determines whether a model is interpretable, as this is a key factor in determining an AI system’s legitimacy in the eyes of the business’s employees and customers.
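As a rough illustration of that trade-off, and not something drawn from the article itself, the sketch below fits a linear model, whose per-feature coefficients can be read directly, and a small neural network on the same synthetic data. The dataset and settings are assumptions for demonstration only.

```python
# A hedged illustration of the performance-vs-interpretability trade-off
# on synthetic data; results are illustrative, not a benchmark.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
complex_model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                              random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", simple.score(X_test, y_test))
print("one readable coefficient per feature:", simple.coef_.round(2))

print("neural network accuracy:", complex_model.score(X_test, y_test))
# The MLP spreads its "reasoning" across thousands of weights in hidden
# layers rather than a single interpretable coefficient per input feature.
```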
Data integrity and the possibility of unintentional biases are also a concern when integrating AI. In a 2017 PwC CEO Pulse survey, 76 percent of respondents said the potential for biases and lack of transparency were impeding AI adoption in their enterprise. Seventy-three percent said the same about the need to ensure governance and rules to control AI. Consider the example of an AI-powered system that evaluates mortgage loan applications. What if it started denying applications from a certain demographic because of human or systemic biases in the data? Or imagine if an airport security system’s AI program singled out certain individuals for additional screening on the basis of their race or ethnicity.
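One simple way such a disparity might be surfaced, sketched here as an assumption rather than anything the authors prescribe, is to compare approval rates across groups and flag large gaps. The column names and the four-fifths threshold are illustrative.

```python
# A minimal sketch of a basic bias check: comparing a model's approval
# rates across demographic groups. Data and threshold are assumed.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   1,   0,   0,   1],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Flag a potential disparate impact if any group's approval rate falls
# below 80% of the highest group's rate (the "four-fifths" rule of thumb).
if (rates.min() / rates.max()) < 0.8:
    print("Potential disparate impact: review the model and training data.")
```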
Business leaders faced with ensuring interpretability, consistent performance, and data integrity will have to work closely with their organization’s developers and analysts. Developers are responsible for building the machine learning model, selecting the algorithms used for the AI application, and verifying that the AI was built correctly and continues to perform as expected. Analysts are responsible for validating the AI model created by the developers to be sure the model addresses the business need at hand. Finally, management is responsible for the decision to deploy the system, and must be prepared to take responsibility for the business impact.
For any organization that wants to get the best out of AI, it is important for people to clearly understand and adhere to these roles and responsibilities. Ultimately, the goal is to design a machine-learning model (or tune an existing one) for a given AI application so that the company can maximize performance while comprehensively addressing any operational or reputational concerns.
Leaders will also need to follow the evolving AI regulatory environment. Such regulatory requirements are not extensive now, but more are likely to emerge over time. In Europe, for example, the General Data Protection Regulation (GDPR) took effect on May 25, 2018, and will require companies — including U.S. companies that do business in Europe — to take measures to protect customers’ privacy and eventually ensure the transparency of algorithms that impact consumers.
Finally, executives should bear in mind that every AI application will differ in the degree to which there is a risk to human safety. If the risk is great and the role of the human operator significantly reduced, then the need for the AI model to be reliable, easily explained, and clearly understood is high. This would be the case, for example, with a self-driving car, a self-flying passenger jet, or a fully automated cancer diagnosis process.
Other AI applications won’t put people’s health or lives at risk — for example, AI that screens mortgage applications or that runs a marketing campaign. But because of the potential for biased data or results, a reasonable level of interpretability is still required. Ultimately, the company must be comfortable with, and be able to explain to customers, the reasons the system approved one application over another or targeted a specific group of consumers in a campaign.
Opening the black box in which some complex AI models have previously functioned will require companies to ensure that for any AI system, the machine-learning model performs to the standards the business requires, and that company leaders can justify the outcomes. Those that do will help reduce risks and establish the trust required for AI to become a truly accepted means of spurring innovation and achieving business goals — many of which have not yet even been imagined.
