A Blog by Jonathan Low


Dec 20, 2019

AI's Black Box Problem Was About To Be Solved. But Then the Lawyers Got Involved

Intellectual property protection is more profitable than transparency. JL

Tristan Greene reports in The Next Web:

AI has a “black box” problem. We cram data in one side of a machine learning system and we get results out the other, but we’re often unsure what happens in the middle. Researchers nearly had the issue licked, with “explainable algorithms” and “transparent AI” trending over the past few years. Then came the lawyers. Companies are tightening access to their AI algorithms, invoking intellectual property protections to avoid sharing details about how their systems arrive at critical decisions. By contrast, lawyers operate under legal privilege, which gives information protected status, incentivizing clients to understand their risks rather than hide wrongdoings.
AI has a “black box” problem. We cram data in one side of a machine learning system and we get results out the other, but we’re often unsure what happens in the middle. Researchers and developers nearly had the issue licked, with “explainable algorithms” and “transparent AI” trending over the past few years. Then came the lawyers.

What’s a black box?

Black box AI isn’t as complex as some experts make it out to be. Imagine you have 1,000,000 different spices and 1,000,000 different herbs and you only have a couple of hours to crack Kentucky Fried Chicken’s secret recipe. You’re pretty sure you have all the ingredients but you’re not sure which eleven herbs and spices you should use. You don’t have time to guess, and it would take billions of years or more to manually try every combination. This problem can’t realistically be solved using brute force, at least not under normal kitchen paradigms.
But imagine if you had a magic chicken fryer that did all the work for you in seconds. You could pour all your ingredients into it and then give it a piece of KFC chicken to compare against. Since a chicken fryer can’t “taste” chicken, it relies on your taste buds to confirm whether it has managed to recreate the Colonel’s chicken. It spits out a drumstick, you take a bite, and you tell the fryer whether the piece you’re eating now tastes more or less like KFC’s than the last one you tried. The fryer goes back to work, trying more combinations, until you tell it to stop because it has the recipe right.
That’s basically how black box AI works. You have no idea how the magic fryer came up with the recipe – maybe it used 5 herbs and 6 spices, maybe it used 32 herbs and 0 spices – but it doesn’t matter. All we care about is using AI to do something humans could do, only much faster.
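The fryer analogy is essentially black-box optimization with a human in the feedback loop: the system tries combinations, an opaque "taste" signal says warmer or colder, and nobody inspects the reasoning in between. As a minimal sketch (the ingredient list, secret recipe, scoring oracle, and hill-climbing strategy here are all invented for illustration, not anything from the article), it might look like:

```python
import random

INGREDIENTS = ["paprika", "thyme", "sage", "pepper", "salt", "garlic",
               "oregano", "mustard", "ginger", "celery salt", "basil", "cayenne"]

# Hypothetical "secret recipe" the fryer is trying to match.
SECRET = {"paprika", "thyme", "pepper", "salt", "garlic", "mustard"}

def taste_score(recipe):
    """Opaque oracle: how close a candidate tastes to the target.
    In the analogy, this is you taking a bite -- we never ask *why*."""
    return len(recipe & SECRET) - len(recipe - SECRET)

def magic_fryer(rounds=2000, seed=0):
    """Random hill-climb: toggle one ingredient at a time and keep
    any candidate the oracle rates at least as tasty."""
    rng = random.Random(seed)
    best = set(rng.sample(INGREDIENTS, 4))
    best_score = taste_score(best)
    for _ in range(rounds):
        candidate = set(best)
        # Toggle one ingredient: the fryer "tries a new combination".
        candidate.symmetric_difference_update({rng.choice(INGREDIENTS)})
        score = taste_score(candidate)
        if score >= best_score:
            best, best_score = candidate, score
    return best

print(sorted(magic_fryer()))
```

The point of the sketch is that `magic_fryer` recovers the recipe without ever recording *why* any combination scored well: the search history is thrown away, which is exactly the explainability gap the article describes.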

The downside of transparency

This is fine when we’re using black box AI to determine whether something is a hotdog or not, or when Instagram uses it to determine whether you’re about to post something offensive. It’s not fine when we can’t explain why an AI sentenced a black man with no priors to more time than a white man with a criminal history for the same offense.
The answer is transparency. If there is no black box, then we can tell where things went wrong. If our AI sentences black people to longer prison terms than white people because it’s over-reliant on external sentencing guidance, we can point to that problem and fix it in the system.
But there’s a huge downside to transparency: If the world can figure out how your AI works, it can figure out how to make it work without you. The companies making money off of black box AI – especially those like Palantir, Facebook, Amazon, and Google who have managed to entrench biased AI within government systems – don’t want to open the black box any more than they want their competitors to have access to their research. Transparency is expensive and, often, exposes just how unethical some companies’ use of AI is.
As legal expert Andrew Burt recently wrote in Harvard Business Review:
To start, companies attempting to utilize artificial intelligence need to recognize that there are costs associated with transparency. This is not, of course, to suggest that transparency isn’t worth achieving, simply that it also poses downsides that need to be fully understood. These costs should be incorporated into a broader risk model that governs how to engage with explainable models and the extent to which information about the model is available to others.
The AI gold rush of the 2010s led to a Wild West situation where companies can package their AI any way they want, call it whatever they want, and sell it in the wild without regulation or oversight. Companies that have made millions or billions selling products and services related to biased, black box AI have managed to entrench themselves in the same position as the health insurance and fossil fuel industries. Their very existence is threatened by the idea that they may be regulated against doing harm to the greater good.

Can we regulate?

Simply put: No. The lawyers will make sure that, even if we develop fully transparent algorithms, we’ll know no more about why a commercial system is biased than if it had remained a black box. As Axios’ Kaveh Waddell recently wrote:
Companies are tightening access to their AI algorithms, invoking intellectual property protections to avoid sharing details about how their systems arrive at critical decisions.
The calculus for the AI industry is the same as the private healthcare industry in the US. Extricating biased black box AI from the world would probably put dozens of companies out of business and likely result in hundreds of billions of dollars lost. The US industrial law enforcement complex runs on black box AI – we’re unlikely to see the government end its deals with Microsoft, Palantir, and Amazon any time soon. So long as the lawmakers are content to profit from the use of biased, black box AI, it’ll remain embedded in society.
And we can’t rely on businesses themselves to end the practice, either. Our push to extricate black box systems simply means companies can’t “blame the algorithm” anymore, so they’ll hide their work entirely. With transparent AI, we’ll get opaque developers. Instead of choosing not to develop dual-use or potentially dangerous AI, they’ll simply lawyer up.
As Burt puts it in his Harvard Business Review article:
Indeed, this is exactly why lawyers operate under legal privilege, which gives the information they gather a protected status, incentivizing clients to fully understand their risks rather than to hide any potential wrongdoings. In cybersecurity, for example, lawyers have become so involved that it’s common for legal departments to manage risk assessments and even incident-response activities after a breach. The same approach should apply to AI.
When things go wrong and AI runs amok, the lawyers will be there to tell us the most company-friendly version of what happened. Most importantly, they’ll protect companies from having to share how their AI systems work.
We’re trading a technical black box for a legal one. Somehow, this seems even more unfair.
