Algorithm disproportionately flags black families for child protection investigations

Algorithms rule everything around us, and now they have more and more say in deciding which families child protective services investigate for neglect, even though they are far from perfect and even though researchers have found that they show racial bias.

This is far from the first time that algorithms meant to help have ended up wreaking havoc. From the infamous echo chambers of the 2016 election to targeted Instagram ads to the keywords the media is allowed to monetize, algorithms play an important role in framing the information we consume, internalize and ultimately transform into our own set of biases.

The latest installment in the Associated Press series Tracked, which “investigates the power and consequences of algorithm-driven decisions on people’s everyday lives”, illustrates another powerful and inherently problematic aspect of machine learning algorithms: one that unfairly targets children and families of certain races.

According to new research from Carnegie Mellon University obtained exclusively by the AP, a predictive algorithm used by child protective services in Allegheny County, Pennsylvania, showed a tendency to flag a disproportionate number of black children for a “mandatory” neglect investigation, compared to their white counterparts.

The researchers also found that social workers who investigated these reported cases disagreed with the risk assessment produced by the algorithm, called the Allegheny Family Screening Tool (AFST), a whopping third of the time.

That is, if this algorithm were to receive a grade, it would be a 67% — a D+.

It is difficult to say exactly what in the algorithm is problematic. As Vox’s Rebecca Heilweil noted in “Why algorithms can be racist and sexist,” it’s nearly impossible to see what part of an algorithm’s initial coding made it susceptible to producing and rapidly replicating bias.

“Usually you only know the end result: how it affected you, if you even know that AI or an algorithm was used in the first place,” Heilweil notes.

It is the same with the Allegheny algorithm. There is no transparent way for the public to see which factors carry more weight than others in this algorithm designed to detect cases of child neglect. (The algorithm is not used in cases of physical or sexual abuse, which are investigated separately.)

The algorithm focuses on “everything from inadequate housing to poor hygiene,” nebulous terms that could, in theory, draw on data about everything from how often a child brushes their teeth to what time the child goes to bed.

It also uses an alarming amount of personal data collected from birth, such as Medicaid records, substance abuse history, and prison and probation records. This data is already primed for racial bias, given that it comes from institutions rooted in white supremacy, like the prison system.

The weight given to each of these factors is not decided by an objective, unbiased computer: it is decided by programmers, who are, in fact, very human and come with their own sets of inherent biases.
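The county has not published how the AFST weighs its inputs, so any specifics are guesswork, but the basic mechanics of a risk-scoring tool make the point. The sketch below is a deliberately simplified illustration in Python; the feature names, weights and numbers are invented for illustration and are not the AFST’s.

```python
# Toy illustration only: the real AFST's features and weights are not public.
# The point is that the numbers below are chosen by people, not by an
# "objective" computer, so any bias in those choices flows into every score.

# Hypothetical, hand-picked weights over the kinds of records the county
# reportedly draws on (Medicaid history, jail and probation records, etc.).
WEIGHTS = {
    "prior_jail_record": 3.0,      # heavily weighted by a human decision
    "substance_abuse_history": 2.0,
    "medicaid_history": 1.5,
    "housing_instability": 1.0,
}

def risk_score(family_record: dict) -> float:
    """Sum the weighted indicators present in a family's record."""
    return sum(
        weight * family_record.get(feature, 0)
        for feature, weight in WEIGHTS.items()
    )

# Two hypothetical families in identical circumstances except for a prior
# jail record, a data point that is itself shaped by biased policing.
family_a = {"medicaid_history": 1, "housing_instability": 1}
family_b = {"medicaid_history": 1, "housing_instability": 1, "prior_jail_record": 1}

print(risk_score(family_a))  # 2.5
print(risk_score(family_b))  # 5.5 -- crosses whatever cutoff the designers chose
```

The records feeding the score are not neutral to begin with, so even a “correct” calculation reproduces the bias already baked into the data.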

This worries tech-responsibility advocates because machine-learning algorithms can make many decisions very quickly, and can therefore not only reproduce but exacerbate economic, social and racial injustices. If algorithms like the AFST are used without human verification, many mistakes can be made quickly and irrevocably for the many people involved.
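With a human in the loop, disagreement at least gets surfaced instead of being executed silently at scale. Here is a minimal sketch of that idea, again hypothetical: the threshold and outcome labels are made up and do not describe Allegheny County’s actual process.

```python
# Hypothetical human-in-the-loop check; the cutoff and the outcomes below are
# invented for illustration, not Allegheny County's real screening workflow.

MANDATORY_SCREEN_IN_THRESHOLD = 5.0  # assumed cutoff, for illustration only

def decide(score: float, screener_agrees: bool) -> str:
    """Require a human screener to confirm before a family is flagged."""
    if score >= MANDATORY_SCREEN_IN_THRESHOLD and screener_agrees:
        return "open investigation"
    if score >= MANDATORY_SCREEN_IN_THRESHOLD:
        return "escalate for supervisor review"  # human disagrees with the model
    return "no investigation"

# The Carnegie Mellon finding that screeners disagreed with the tool about a
# third of the time is exactly the situation the middle branch represents.
print(decide(5.5, screener_agrees=False))  # escalate for supervisor review
```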

Algorithmic bias from these “black box” algorithms has very real impacts on people of color in almost every facet of their lives, according to Public Citizen, a nonprofit consumer advocacy organization. For example, communities of color pay 30% more for car insurance than white communities with similar accident costs, thanks to a predictive algorithm. Social media apps like TikTok and Instagram have been lambasted by black creators whose content is often wrongfully removed by the platforms’ algorithms.

These black box algorithms are like the sequence in Fantasia where Mickey serves as the sorcerer’s apprentice. To make his task go faster, Mickey programs a broom to do his job and carry buckets of water from the well, much like we do with machine learning algorithms.

The broom quickly learned the task, multiplied itself to become more efficient, and kept repeating the task with increasing speed. Left unchecked, it made a destructive mess.

Algorithms like the one used by child protective services in Allegheny County are also in use elsewhere across the country, and it’s very possible that they share the same flaws and tendencies. The results could do more harm than good.