A Blog by Jonathan Low

 

Feb 10, 2020

Court Disallows Use of AI Risk Scoring That Decides Social Payments Eligibility

The system violated European human rights law protecting citizens from purely automated decisions with significant legal and financial effects.

The question is whether this ruling will become a model for democracies, or whether they will follow the authoritarian practice of using access to such benefits to coerce behavior. JL

Natasha Lomas reports in TechCrunch:

An algorithmic risk scoring system deployed to try to predict the likelihood that social security claimants will commit benefits or tax fraud is a breach of human rights law, a court has ruled. GDPR includes the right for individuals not to be subject to solely automated decision-making where it produces significant legal effects concerning them. Legal experts suggest the decision sets clear limits on how the public sector can make use of AI tools — with the court objecting in particular to the lack of transparency about how the algorithmic risk scoring system functioned.
An algorithmic risk scoring system deployed by the Dutch state to try to predict the likelihood that social security claimants will commit benefits or tax fraud is a breach of human rights law, a court in the Netherlands has ruled.
The Dutch government’s System Risk Indication (SyRI) legislation uses a non-disclosed algorithmic risk model to profile citizens and has been exclusively targeted at neighborhoods with mostly low-income and minority residents. Human rights campaigners have dubbed it a “welfare surveillance state.”
A number of civil society organizations in the Netherlands and two citizens instigated the legal action against SyRI — seeking to block its use. The court has today ordered an immediate halt to the use of the system.
The ruling is being hailed as a landmark judgement by human rights campaigners, with the court basing its reasoning on European human rights law — specifically the right to a private life that’s set out by Article 8 of the European Convention on Human Rights (ECHR) — rather than a dedicated provision in the EU’s data protection framework (GDPR) which relates to automated processing.
GDPR’s Article 22 includes the right for individuals not to be subject to solely automated decision-making where it produces legal or similarly significant effects concerning them. But there can be some fuzziness around whether this applies if there’s a human somewhere in the loop, such as to review a decision on objection.
In this instance the court has sidestepped such questions by finding SyRI directly interferes with rights set out in the ECHR.
Specifically, the court found that the SyRI legislation fails a balancing test under Article 8 of the ECHR, which requires that any social interest be weighed against the interference with individuals’ private lives and that a fair and reasonable balance be struck. The automated risk assessment system failed this test in the court’s view.
Legal experts suggest the decision sets some clear limits on how the public sector in the UK can make use of AI tools — with the court objecting in particular to the lack of transparency about how the algorithmic risk scoring system functioned.
In a press release about the judgement (translated to English using Google Translate), the court writes that the use of SyRI is “insufficiently clear and controllable”. Per Human Rights Watch, the Dutch government refused during the hearing to disclose “meaningful information” about how SyRI uses personal data to draw inferences about possible fraud.
The court clearly took a dim view of the state trying to circumvent scrutiny of human rights risk by pointing to an algorithmic “blackbox” and shrugging.

Specifically, the Court places great emphasis on the lack of transparency about the risk models and risk factors that are being applied. The model and factors were kept secret.
The Court's reasoning doesn't imply there should be full disclosure, but it clearly expects much more robust information on the way (objective criteria) that the model and scores were developed and the way in which particular risks for individuals were addressed.

The UN special rapporteur on extreme poverty and human rights, Philip Alston — who intervened in the case by providing the court with a human rights analysis — welcomed the judgement, describing it as “a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights.”
“This decision sets a strong legal precedent for other courts to follow. This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds,” he added in a press statement.
Back in 2018, Alston warned that the UK government’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale risked having an immense impact on the human rights of the most vulnerable.
So the decision by the Dutch court could have some near-term implications for UK policy in this area.
The judgement does not shut the door on the use by states of automated profiling systems entirely, but it does make it clear that human rights law in Europe must be central to the design and implementation of rights-risking tools.
It also comes at a key time when EU policymakers are working on a framework to regulate artificial intelligence — with the Commission pledging to devise rules that ensure AI technologies are applied ethically and in a human-centric way.
It remains to be seen whether the Commission will push for pan-EU limits on specific public sector uses of AI, such as for social security assessments. A recent leaked draft of a white paper on AI regulation suggests it’s leaning towards risk assessments and a patchwork of risk-based rules.
