A Blog by Jonathan Low


Sep 24, 2018

Using Predictive Algorithms To Decide If Children Are Safe With Their Parents

Which raises questions about the capabilities and biases of those designing the models in use - as well as those of the people charged with interpreting the data they produce. JL

Elizabeth Brico reports in Undark:

“Sometimes when you start using data-driven methods, the flaws in the results highlight the flaws in the underlying system.” If those trends are analyzed in the name of gathering knowledge rather than predicting behavior, they can be used to correct systemic bias. Only one-fifth of investigated maltreatment allegations are ever substantiated by a court or child welfare agency. As these algorithms function now, they appear to simply perpetuate disparities.
In 1956, author Philip K. Dick published a short story called “The Minority Report.” Set in a near-future U.S., the piece describes a felony prevention system called “Precrime,” which harnesses the psychic powers of three mutated humans, dubbed “precogs.” The precogs predict egregious crimes like rape and murder before they ever take place. Then, Precrime detectives arrest the future assailants before they have the chance to complete the act. The targeting of people who haven’t committed an offense is justified by an inspired faith in the precogs’ psychic abilities. “The precogs are never wrong,” is a narrative refrain.
That justification begins to unravel when Precrime Commissioner John Anderton is faced with a devastating prediction: that he is destined to commit murder. Incredulous, he decides to investigate how the precogs could have gotten him wrong, and he ultimately discovers that the three precogs do not always see eye-to-eye. Sometimes one precog disagrees with the others over whether the accused will go through with the crime, a discovery that undermines the entire Precrime system. But Anderton decides the public benefit of Precrime is worth the risk of occasionally arresting innocent citizens, and to preserve the public’s faith in the system, he sacrifices himself — and his victim — by fulfilling the precogs’ prediction.

To my knowledge, no branch of the actual U.S. government has access to psychic mutants. But, increasingly, child welfare agencies are behaving as if they do. Agencies in several states now use predictive analytics to help decide whether a child is at risk in their home. By placing undue faith in algorithms, however, the agencies are performing a moral calculus akin to Anderton’s, except without the element of self-sacrifice.
It’s no secret that only about one-fifth of investigated maltreatment allegations are ever substantiated by a court or child welfare agency. According to the U.S. Department of Health and Human Services, for example, more than 100,000 kids were removed from their families in 2001 based on allegations that turned out to be unfounded. That’s to say nothing of the substantiated cases that are overturned on appeal, or the families that give in to the demands of a child welfare agency even if they don’t agree with court findings. Predictive analytics have been introduced into the child welfare industry with the promise of making the system more just. But in reality they make it easier to target already vulnerable populations — and make it more difficult to appeal decisions that were made in error.
Cathy O’Neil, a mathematician and the author of “Weapons of Math Destruction,” says that “data scientists are translators of moral choices.” Algorithms do not provide divine insight into people’s objective moral good. Rather, they infer patterns from the data they are fed — data that depend on which information is available and on which information the programmers deem relevant. This means that the people who design predictive child-welfare algorithms, however well-intentioned, insert personal, historical, and systemic biases into equations that, by definition, perpetuate those biases. Thousands of unwitting families are affected in the process. Families like mine.
One hundred and sixty days ago, my three- and four-year-old daughters were removed from my custody, on a temporary but indeterminate basis, because authorities in Broward County, Florida, deemed them unsafe in my care. It began when I left the girls with their grandparents for three days while I was away in Miami. My mother-in-law phoned an abuse hotline and accused me of going to Miami to use drugs, an accusation that was given immediate weight because I had been treating a diagnosed substance use disorder with pharmacotherapy and counseling for the past five years. I was subsequently charged with placing my daughters at imminent risk of serious harm.
My accuser’s claim was later countered by a series of negative drug panels, including urine and hair analyses, the latter of which can detect substance use dating back 90 days. The investigator made no attempt to contact me before deciding to bring me before a judge, and when the judge ruled to place my children with their paternal grandparents, she did not acknowledge my negative drug tests. Instead, she stated that my “skill with language” indicated that I wasn’t trustworthy, and that I was “clearly drug-seeking” because I admitted to almost buying marijuana, which I’d occasionally used legally in the past.
It didn’t matter that no evidence ever arose to substantiate the allegations of drug use made against me, nor that, by the investigator’s own admission, my daughters showed no signs of abuse or neglect. What mattered was that somewhere, somehow, I was deemed high risk.
It turns out Florida is one of the first states in the country to implement predictive analytics as part of its child welfare system. Although my investigator would not reveal her methodology to me when I asked, I believe predictive analytics were used to determine my risk status and may have influenced the investigator’s recommendation to take my children away.
The cost of such biased and haphazard family separations can’t be ignored. Even short-term separations have the potential to significantly impact the children involved. This is especially true of children under three. Claudia Gold, a pediatrician and the author of “The Developmental Science of Early Childhood,” describes what young children experience when separated from their parents as “annihilation of the self.” She goes on to say that this can cause problems later in life, such as impulsivity, difficulty regulating emotions, low self-esteem, and attention deficit issues.
My own daughters have already shown signs of anger and depression since we’ve been separated. Earlier this month, my four-year-old bit her grandmother after shouting that she was angry because her grandmother had taken her from me. Last weekend, when I showed my three-year-old a photo of her that I use as my phone’s lock-screen image, she squirmed and refused to look at it.

To better understand which families are bearing the brunt of these traumas, I talked with Emily Putnam-Hornstein, an associate professor of social work at the University of Southern California who helped create a child welfare algorithm that’s being used in Allegheny County, Pennsylvania. She tells me the system isn’t designed to rate the severity of a maltreatment report — that’s still up to the human staff. Rather, it predicts the likelihood that a family will require future interventions from child welfare authorities, information that’s supposed to help call screeners decide which allegations to pass through for an investigation.
Putnam-Hornstein describes her work in hopeful terms, as a tool for positive social change. To hear her speak is almost to believe it, but she also admits that more black and Hispanic families are being investigated than white families. Additionally, the 131 indicators that feed into the algorithm include records for enrollment in Medicaid and other federal assistance programs, as well as public health records regarding mental-health and substance-use treatments. Putnam-Hornstein stresses that engaging with these services is not an automatic recipe for a high score. But more information exists on those who use the services than on those who don’t. Families who don’t have enough information in the system are excluded from being scored.
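To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of weighted scoring such a screening tool might perform. Every indicator name, weight, and threshold below is invented for illustration; the actual Allegheny County model draws on 131 indicators and its own statistical weighting, which are not public in this form.

```python
# Hypothetical sketch only: the indicator names, weights, and threshold
# below are invented and do not represent the Allegheny County model.

# Administrative records known to the agency for each family.
# A missing key means no record exists for that indicator; an empty
# dict means the family has no public-system footprint at all.
families = {
    "family_a": {"medicaid_enrollment": 1, "prior_referrals": 2, "public_mh_treatment": 1},
    "family_b": {"prior_referrals": 1},
    "family_c": {},
}

# Invented stand-ins for learned model coefficients.
WEIGHTS = {"medicaid_enrollment": 0.8, "prior_referrals": 1.2, "public_mh_treatment": 0.9}
THRESHOLD = 2.0  # invented cutoff for flagging a referral for investigation

def screening_score(records):
    """Return a weighted risk score, or None when there are no records to score."""
    if not records:
        return None  # families absent from the databases cannot be scored
    return sum(WEIGHTS.get(indicator, 0.0) * value for indicator, value in records.items())

for name, records in families.items():
    score = screening_score(records)
    if score is None:
        print(f"{name}: no data, excluded from scoring")
    else:
        flag = "flag for investigation" if score > THRESHOLD else "no flag"
        print(f"{name}: score {score:.1f}, {flag}")
```

Even in a toy example the asymmetry is visible: families who rely on public services generate more records, and therefore more weighted signal, while families with no public-system footprint cannot be scored at all.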
In Florida, the Department of Children and Families contracted with private analytics giant SAS to try to predict which families are most likely to cause the death of a child through maltreatment. According to the SAS website, they used “criminal histories, behavioral health data, [and] drug and alcohol treatment data,” all of which, due to privacy laws, had to be sourced from public health databases. As a result, the behavioral-health and drug-treatment statistics don’t include privately insured patients. In other words, it’s a little bit like Philip K. Dick’s fictional precogs, if one or two of the precogs had visions only about poor people.
O’Neil thinks that algorithms like the one being used in Allegheny County have the potential to help correct the discrimination taking place in child welfare investigations, but they have to be used differently for that to happen. She says that “sometimes when you start using data-driven methods, the flaws in the results highlight the flaws in the underlying system.” She suggests that if those trends are analyzed in the name of gathering knowledge rather than predicting behavior, they can be used to correct systemic bias. As these algorithms function now, they appear to simply perpetuate disparities.
“The Minority Report” gives us terrifying insight into what happens when a justice system believes too fervently in its internal knowledge base. Ironically, child welfare agencies have decided to ignore that admonition in favor of new forms of prediction that target poor families, families of color, parents with physical and mental health disabilities, and parents who have undergone treatment for an addiction.
For me, the cost of that decision has been 160 days — 160 days and counting since I’ve been alone with my daughters, or held them without someone watching. 160 days since I’ve put them to bed. Since I’ve comforted them after a nightmare. 160 days since I’ve given them a bath or made them breakfast. Since I’ve fought with my four-year-old over shoes.
I’m forgetting the words to “Frozen.” I’m forgetting the smell of their hair. Sometimes now they call me grandma instead of mom. Sometimes they call her mom. 160 days.
