A Blog by Jonathan Low

 

May 31, 2021

65 Percent of Companies Can't Explain How Their AI Models Make Decisions

Organizations are increasingly turning decision making over to AI, too often without adequate oversight or any way for those affected to review and appeal the outcomes.

And the latest research reveals that executives and board members often have no idea how those automated decisions are made. That suggests the deference to automation is driven largely by cost considerations, while those responsible abdicate accountability for the implications in what can be life-altering circumstances for the people affected. JL

Kyle Wiggers reports in VentureBeat:

65% of companies can’t explain how AI model decisions or predictions are made. “More and more businesses have been investing in AI tools, but have not elevated the importance of AI governance and responsible AI to the boardroom level. Organizations are increasingly leveraging AI to automate key processes that are making life-altering decisions for their customers and stakeholders. Senior leadership and boards must enforce auditable, immutable AI model governance and production model monitoring to ensure that the decisions are accountable, fair, transparent, and responsible.”
Despite increasing demand for and use of AI tools, 65% of companies can’t explain how AI model decisions or predictions are made. That’s according to the results of a new survey from global analytics firm FICO and Corinium, which surveyed 100 C-level analytic and data executives to understand how organizations are deploying AI and whether they’re ensuring AI is used ethically. 
“Over the past 15 months, more and more businesses have been investing in AI tools, but have not elevated the importance of AI governance and responsible AI to the boardroom level,” FICO chief analytics officer Scott Zoldi said in a press release. “Organizations are increasingly leveraging AI to automate key processes that — in some cases — are making life-altering decisions for their customers and stakeholders. Senior leadership and boards must understand and enforce auditable, immutable AI model governance and production model monitoring to ensure that the decisions are accountable, fair, transparent, and responsible.”
The study, which was commissioned by FICO and conducted by Corinium, found that 33% of executive teams have an incomplete understanding of AI ethics. While IT, analytics, and compliance staff have the highest awareness, understanding across organizations remains patchy. As a result, there are significant barriers to building support: 73% of stakeholders say they’ve struggled to get executive backing for responsible AI practices.
Implementing AI responsibly means different things to different companies. For some, “responsible” implies adopting AI in a manner that’s ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable — at least in theory. 
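To make the “explainable” part concrete: one common post-hoc approach is feature attribution, where each automated decision is traced back to the inputs that drove it. The sketch below is illustrative only, using the open-source SHAP library on a toy credit model; the features, data, and model are hypothetical stand-ins, not anything drawn from the FICO survey.

```python
# A minimal sketch, not FICO's method: post-hoc explanation of a toy
# credit model with the open-source SHAP library. All data, feature
# names, and the model itself are hypothetical placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical applicants: [income, debt_ratio, years_of_credit_history]
X = np.array([[55_000, 0.30, 12],
              [28_000, 0.65, 3],
              [72_000, 0.20, 20],
              [33_000, 0.55, 5]], dtype=float)
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features, turning
# "the model said no" into "debt_ratio pushed this decision toward decline"
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # array shape varies by SHAP version
print(shap_values)
```

Attribution tools like this are a starting point for answering “why was this decision made?”, not a substitute for the governance and monitoring the survey finds lacking.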
According to Corinium and FICO, while almost half (49%) of respondents to the survey report an increase in resources allocated to AI projects over the past year, only 39% say they’ve prioritized AI governance and just 28% say they’ve prioritized model monitoring and maintenance. Potentially contributing to the ethics gap is a lack of consensus among executives about what a company’s responsibilities should be when it comes to AI. The majority of companies (55%) agree that AI used for data ingestion must meet basic ethical standards and that systems used for back-office operations must also be explainable. But 43% say they have no responsibilities beyond meeting regulations to manage AI systems whose decisions might indirectly affect people’s livelihoods.
Turning the tide 
What can enterprises do to embrace responsible AI? Combating bias is an important step, but only 38% of companies say that they have bias mitigation steps built into their model development processes. In fact, only a fifth of respondents (20%) to the Corinium and FICO survey actively monitor their models in production for fairness and ethics, while just one in three (33%) have a model validation team to assess newly developed models. 
The findings agree with a recent Boston Consulting Group survey of 1,000 enterprises, which found that fewer than half of those that achieved AI at scale had fully mature, “responsible” AI implementations. The lagging adoption of responsible AI belies the value these practices can deliver. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them, and in turn will punish those that don't.
This being the case, businesses appear to understand the value of evaluating the fairness of model outcomes, with 59% of survey respondents saying they do this to detect model bias. Additionally, 55% say they isolate and assess latent model features for bias, and half (50%) say they have a codified mathematical definition for data bias and actively check for bias in unstructured data sources. 
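For readers wondering what a “codified mathematical definition” of bias might look like in practice: one simple, widely used codification is demographic parity, the gap in favorable-outcome rates between groups. The sketch below is a hedged illustration, not the definition any surveyed firm uses; the decisions, group labels, and tolerance are invented for the example.

```python
# A hedged sketch of one possible "codified mathematical definition" of
# bias: demographic parity difference. The decisions, group labels, and
# 0.10 tolerance below are invented for illustration, not survey data.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in favorable-outcome rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # 1 = favorable model decision
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # hypothetical protected attribute

gap = demographic_parity_difference(y_pred, group)
print(f"demographic parity gap: {gap:.2f}")  # 0.50 for this toy data
if gap > 0.10:  # a tolerance a governance team might agree on
    print("flag model for review")
```

Checking a metric like this on every production scoring run is one way the “model monitoring” the survey asks about can be made routine rather than ad hoc.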
Businesses also recognize that things need to change: the overwhelming majority (90%) agree that inefficient processes for model monitoring represent a barrier to AI adoption. Thankfully, almost two-thirds (63%) of respondents to the Corinium and FICO report believe that AI ethics and responsible AI will become a core element of their organization’s strategy within two years.
“The business community is committed to driving transformation through AI-powered automation. However, senior leaders and boards need to be aware of the risks associated with the technology and the best practices to proactively mitigate them,” Zoldi added. “AI has the power to transform the world, but as the popular saying goes — with great power comes great responsibility.”
