A Blog by Jonathan Low

 

Jul 12, 2016

'Right To An Explanation' For Algorithmic Decisions Is Proposed By European Union


And just when the world thought the EU was a voice of reason and no one could be less sympathetic than the 'Little Britain' Brexit pro-leave campaigners...

But then maybe being uncompetitive is a goal, not an unintended consequence? JL

Mike Masnick reports in Techdirt:

Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (algorithms that make decisions based on user-level predictors) which "significantly affect" users. The law will also create a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them. It may limit machine learning and AI in Europe...
I saw a lot of excitement and happiness a week or so ago around reports that the EU's new General Data Protection Regulation (GDPR) might include a "right to an explanation" for algorithmic decisions. It's not clear that this is absolutely true, but it's based on a reading of the agreed-upon text of the GDPR, which is scheduled to go into effect in two years.
Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which "significantly affect" users. The law will also create a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them.
Lots of people on Twitter seemed to be cheering this on. And, indeed, at first glance it sounds like a decent idea. As we've discussed recently, there is a growing awareness of how much power and faith are placed in algorithms to make important decisions, and sometimes those algorithms are dangerously biased in ways that can have real consequences. Given that, it seems like a good idea to have a right to find out the details of why an algorithm decided the way it did.

But it also could get rather tricky and problematic. Part of how machine learning and artificial intelligence work these days is that we no longer fully understand why algorithms decide things the way they do. This applies to lots of different areas of AI and machine learning, but you could see it in the way AlphaGo beat Lee Sedol at Go earlier this year: it made moves that seemed to make no sense at all, but worked out in the end. The more machine learning "learns," the less able people are to directly understand why it's making the decisions it makes. And while that may be scary to some, it's also how the technology advances.
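To make that opacity concrete, here's a minimal sketch of my own (not from Masnick's article), using scikit-learn as a stand-in for any learning library: a small neural network learns a nonlinear decision boundary well, but the only "explanation" the model itself offers is its raw weight matrices.

```python
# Illustrative sketch: a small neural net predicts accurately, but its
# learned parameters carry no direct human-readable explanation.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

print("test accuracy:", net.score(X_test, y_test))  # typically well above 0.9

# All the model can show us about "why" is raw numbers:
for i, w in enumerate(net.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
# Nothing in those matrices says why any particular point was
# classified the way it was.
```

The point is not that such a model can't be probed at all, only that its decision process has no compact, human-readable form the way a hand-written rule does.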

So, yes, there are lots of concerns about algorithmic decision-making, especially when it can have a huge impact on people's lives. But a strict "right to an explanation" may actually create limits on machine learning and AI in Europe, potentially hamstringing projects by requiring that their models stay within the bounds of human understanding. The full paper on this more or less admits the possibility, but suggests that it's acceptable in the long run, because the transparency gains will matter more.
There is of course a tradeoff between the representational capacity of a model and its interpretability, ranging from linear models (which can only represent simple relationships but are easy to interpret) to nonparametric methods like support vector machines and Gaussian processes (which can represent a rich class of functions but are hard to interpret). Ensemble methods like random forests pose a particular challenge, as predictions result from an aggregation or averaging procedure. Neural networks, especially with the rise of deep learning, pose perhaps the biggest challenge—what hope is there of explaining the weights learned in a multilayer neural net with a complex architecture?
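To make the paper's tradeoff concrete, here's a small sketch of my own (the dataset and library choices are mine, not the paper's): a logistic regression yields one signed coefficient per feature, which can serve as a direct explanation of its decisions, while a random forest's prediction is an average over hundreds of trees with no comparably simple account.

```python
# Sketch of the interpretability tradeoff: linear models expose
# per-feature weights; ensembles only aggregate over many trees.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y = data.data, data.target

# Linear model: each feature gets one signed weight, a usable explanation.
linear = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
linear.fit(X, y)
coefs = linear.named_steps["logisticregression"].coef_[0]
top = sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))[:3]
for name, c in top:
    print(f"{name}: {c:+.2f}")  # the features pushing the decision hardest

# Ensemble: the prediction is an average over many trees. There is no
# single set of weights to point to, only coarse importance scores.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print("trees averaged per prediction:", len(forest.estimators_))
```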
In the end, though, the authors think these challenges can be overcome.
While the GDPR presents a number of problems for current applications in machine learning, they are, we believe, good problems to have. The challenges described in this paper emphasize the importance of work that ensures that algorithms are not merely efficient, but transparent and fair.
I do think greater transparency is good, but I worry about rules that might hold back useful innovations. Prescribing exactly how machine learning and AI need to work, this early in the technology's development, could be a problem too. I don't think there are easy answers here; this is a genuinely thorny problem, so it will be interesting to see how it plays out in practice once the GDPR goes into effect.
