A Blog by Jonathan Low


Nov 8, 2020

The Reason People Targeted By Algorithms Should Decide How They Are Used

Use of these systems is fundamentally antidemocratic because there is little or no public oversight, no transparency and no right of appeal. JL

Meredith Whittaker comments in the Boston Globe:

These systems are largely produced by companies and sold to companies, governments, and institutions. They’re beholden to the incentives of those who create them, and they’re designed to increase the profits, efficiency, and growth of those who use them. These are tools of the powerful, applied by those who have power over those who have less. These systems are protected from scrutiny by claims of corporate secrecy. Those who live under these systems (want) the right to determine forces that shape their livelihoods, and to refuse to be governed by technology that serves interests at their expense.

Workers for Shipt, the grocery-delivery platform owned by Target, are protesting the firm’s recent implementation of a new algorithm dictating workers’ schedules and wages. How the algorithm makes these decisions isn’t clear: Shipt has provided few details to its more than 200,000 employees, and the company refuses to share anything with the public, claiming that the system is “proprietary.” But even without access to the inner workings of the algorithm, workers feel its impacts. Since the system went live in September, they claim that their wages have decreased and that scheduling is more complicated and unpredictable, throwing their lives into precarity and financial uncertainty. As Shipt worker Willy Solis put it: “This is my business, and I need to be able to make informed decisions about my time.”

Even as evidence of artificial intelligence’s unevenly distributed harms and benefits mounts, the question of when it is appropriate to allow algorithms to make important decisions persists. Often, the people asked this question are people like me: those who have expertise in technology, and are employed in privileged positions in industry and academia. Left unasked and unanswered is another fundamental query: Who should get to answer this question, and on what basis? People like Willy, whose lives and opportunities are shaped by these systems, are almost never included.

This is true across the board. Those who are subject to these systems are generally ignored in conversations about algorithmic governance and regulation. And when they are included, it’s often as a token or stereotype, and not as someone whose expertise is understood as central to AI decision-making.

In some sense, this shouldn’t be surprising. These systems are largely produced by private companies and sold to other companies, governments, and institutions. They’re beholden to the incentives of those who create them, and whatever else they might do, they’re ultimately designed to increase the profits, efficiency, and growth of those who use them. Put another way, these are the tools of the powerful, generally applied by those who have power over those who have less. Shipt’s own chief communications officer, Molly Snyder, said it herself: “We believe the model we rolled out is the right one for the company.” Here we see a tacit acknowledgment that the goals of the company are separate from those of its workers. And only one side has the power to choose how, and whether, to use the algorithm.

Shipt is but one example of algorithmic harms. Earlier this year, the British government deployed a new grading algorithm that failed spectacularly along predictable racial and class lines. Trying to compensate for COVID-related interruptions in testing, the algorithm guesstimated the scores it assumed students would have achieved under normal conditions, basing this assumption on things like teacher estimates and the historical performance of a given school. In doing so, it reduced the scores of poor, Black, and brown students, while giving higher marks to students from elite schools and wealthy areas. Cori Crider, a lawyer at the London law firm Foxglove, whose case won the reversal of the faulty grades, said, “There’s been a refusal to have an actual debate about how these systems work and whether we want them at all.”

The British grading algorithm fit a familiar pattern. Details of these algorithms are rarely made transparent to the public, or even to so-called experts. These systems are routinely protected from scrutiny by claims of corporate secrecy and by decisions of governments and institutions that limit access and transparency. What is known about them is often what’s written by marketing departments and public relations representatives, presented to the public without evidence or verification. In Britain, government officials said the grading algorithm “was meant to make the system more fair,” a line that makes good PR but gives us zero information about the suitability of the system for its task, or even the definition of “fairness” they might be relying on. This government and corporate protectionism and lack of access mean it’s extremely difficult for the public to make an informed decision about when and whether the use of AI is appropriate.

Those who do have the closely guarded information about the inner workings of these systems are also often poorly positioned to make these types of decisions. The tech industry is extraordinarily concentrated. In the United States, this means that a handful of firms in Silicon Valley are at the heart of AI development, including building algorithmic models they license to third parties and leasing infrastructure to AI startups that build their own. And those developing this technology are a homogenous group: predominantly white and male, hailing from elite universities, and possessing rare technical training.

They represent the powerful and the elite, and while they may understand how to train a convolutional neural network, they are far removed from the contexts and communities in which their technology will be applied. Indeed, they are often unaware of how and where the technology they build is ultimately used. Technical know-how, whether in government or in the technology industry, cannot substitute for contextual understanding and lived experience in determining whether it’s appropriate to apply AI systems in sensitive social domains, especially given that these systems replicate and amplify the harms of structural racism and historical discrimination, which fall predominantly on Black, brown, and poor communities.

So what does it look like when the people who bear the risks of algorithmic systems get to determine whether — and how — they’re used? It doesn’t look like a neat flowchart, or a set of AI governance principles, or a room full of experts and academics opining on hypothetical benevolent AI futures. It looks like those who will be subject to these systems getting the information they need to make informed choices. It looks like these communities sharing their experiences and doing the work to envision a world they want to live in, which may or may not include these technologies or the institutions that use them.

It looks like Tawana Petty’s work to ban facial recognition in Detroit; British students protesting the government’s grading algorithm; Shipt workers on strike; and parents and teachers at the Lockport school district in New York pushing back against the district’s procurement of these systems. It looks like social movements and civic engagements. And ultimately it looks like those who have to live under these systems reclaiming the right to determine the forces that shape their lives and livelihoods, and the right to refuse to be governed by technology that serves the interests of the powerful at their expense.

We won’t know the actual answer to when it is appropriate to use algorithms until the people who are most affected have the power to answer that very question. And experts like me can’t answer it for them.
