A biased medical algorithm favored white people for health-care programs

A study has highlighted the risks inherent in using historical data to train machine-learning algorithms to make predictions.

The news: An algorithm that many US health providers use to predict which patients will most need extra medical care privileged white patients over black patients, according to researchers at UC Berkeley, in a study published in Science. Effectively, it bumped white patients up the queue for special treatments for complex conditions like kidney problems or diabetes.

The study: The researchers dug through almost 50,000 records from a large, undisclosed academic hospital and found that white patients were given higher risk scores, and were therefore more likely to be selected for extra care (such as more nursing or dedicated appointments), than black patients who were in fact equally sick. The researchers calculated that the bias cut the proportion of black patients who got extra help by more than half.

What software was this? The researchers didn’t say, but the Washington Post identified it as a product of Optum, which is owned by the insurer UnitedHealth. Optum says the product is used to “manage more than 70 million lives.” The same issue likely exists in other health-care decision-making software too, the researchers said.

How the bias crept in: Race wasn’t a factor in the algorithm’s decision-making (that would be illegal). Instead, the algorithm used patients’ medical histories to predict how much they were likely to cost the health-care system in the future. But cost is not a race-blind metric: for socioeconomic and other reasons, black patients have historically incurred lower health-care costs than white patients with the same conditions. As a result, the algorithm gave white patients the same risk scores as black patients who were significantly sicker.
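
To see how a cost label can smuggle in bias even when race is excluded from the model, here is a minimal simulation. It is not the actual Optum model: the group labels, distributions, and the 0.6 “access” factor below are illustrative assumptions. Patients in both groups are equally sick, but one group historically incurs lower costs for the same need, so a model trained to predict cost ranks that group’s sickest members lower.

```python
# A minimal sketch (not the actual Optum model) of proxy-label bias:
# race never enters the model, yet equally sick patients in group B
# are selected for extra care far less often. All numbers are
# illustrative assumptions, not figures from the study.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group_b = rng.random(n) < 0.5          # hypothetical group flag (NOT a model feature)
need = rng.gamma(2.0, 2.0, n)          # true health need, same distribution in both groups
access = np.where(group_b, 0.6, 1.0)   # group B historically incurs lower costs for equal need

prior_cost = need * access + rng.normal(0, 0.5, n)   # utilization-based feature
future_cost = need * access + rng.normal(0, 0.5, n)  # the cost proxy used as the training label

# Least-squares fit: predict future cost from prior cost.
X = np.column_stack([prior_cost, np.ones(n)])
w, *_ = np.linalg.lstsq(X, future_cost, rcond=None)
score = X @ w

# Flag the top 10% of risk scores for extra care, then check the truly sickest decile.
selected = score >= np.quantile(score, 0.90)
sickest = need >= np.quantile(need, 0.90)
for name, mask in [("group A", sickest & ~group_b), ("group B", sickest & group_b)]:
    print(f"{name}: {selected[mask].mean():.0%} of the sickest decile selected for extra care")
```

The point of the sketch is that the label, not the feature set, carries the bias: ranking patients by predicted cost simply reproduces the historical gap in spending.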

A small saving grace: The researchers worked with Optum to correct the issue. They reduced the disparity by more than 80% by creating a version that predicts both a patient’s future costs and the number of times a chronic condition might flare up over the coming year. So, algorithmic bias can be corrected, if—and sadly, it is a big if—you can catch it.
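
A rough sketch of why that style of fix works, under the same illustrative assumptions as above. The researchers’ actual index blends cost and health predictions; this toy version simply contrasts a cost label with a made-up flare-up count to show how relabeling narrows the selection gap between equally sick groups, without fully closing it.

```python
# A minimal sketch of relabeling: same features, but training against a
# health-based outcome (a hypothetical flare-up count) instead of cost alone.
# All names and numbers are illustrative assumptions, not the researchers' model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group_b = rng.random(n) < 0.5
need = rng.gamma(2.0, 2.0, n)
access = np.where(group_b, 0.6, 1.0)

# Two features: a utilization signal (depressed for group B) and a clinical signal.
prior_cost = need * access + rng.normal(0, 0.5, n)
biomarker = need + rng.normal(0, 1.0, n)
X = np.column_stack([prior_cost, biomarker, np.ones(n)])

future_cost = need * access + rng.normal(0, 0.5, n)  # old proxy label
flare_ups = rng.poisson(need)                        # health-based label

sickest = need >= np.quantile(need, 0.90)
for label_name, y in [("cost label", future_cost), ("flare-up label", flare_ups)]:
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    score = X @ w
    selected = score >= np.quantile(score, 0.90)
    a = selected[sickest & ~group_b].mean()
    b = selected[sickest & group_b].mean()
    print(f"{label_name}: sickest decile selected -- group A {a:.0%}, group B {b:.0%}")
```

With the cost label, the fit leans on the utilization feature and inherits its group skew; with the health-based label, the clinical signal does more of the work and the disparity shrinks, echoing the partial (not total) correction the researchers reported.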

Read next: our interactive explainer on how AI bias affects the criminal legal system and why it’s so hard to eliminate.