ML Failures: Learning to Identify Algorithmic Bias

 

A series of Python labs designed to train the next generation of students to identify, discuss, and address the risks posed by machine learning algorithms.

 

You may have heard about machine learning bias or fairness in machine learning. But, if you have an algorithm in front of you, how do you know if that algorithm is biased? How is it biased? What do those biases mean in practice?

Led by Nick Merrill, a postdoctoral fellow, CLTC has developed a series of hands-on Python notebooks designed to teach students how to detect, identify, discuss, and address bias in real-world machine learning algorithms. The “ML Failures” (machine learning failures) labs delve into how these algorithms are situated in larger social contexts, prompting students to discuss who designs these algorithms, who uses them, and who gets to decide what it means for an algorithm to be working properly.

Developed as part of CLTC’s Daylight Security Research Lab, the labs address a shortcoming in current computer science training, as many students may graduate and begin work as data scientists without having learned about bias or fairness in machine learning. The labs teach students how algorithms used for decision-making in fields such as health care, lending, and hiring may have built-in biases that are highly consequential for BIPOC communities.

The labs are currently being taught to 50 students in UC Berkeley’s flagship Applied Machine Learning course, and we are working to have this new teaching tool included in other classes at UC Berkeley and at other universities.

Lab 1: Algorithmic Bias in Health Care

Access the Lab

To effectively manage patients, health systems often need to estimate particular patients’ health risks. Using quantitative measures, or “risk scores,” healthcare providers can prioritize patients and allocate resources to the patients who need them most. In this lab, students examine an algorithm widely used in industry to establish quantitative risk scores for patients. This algorithm uses medical cost (i.e., the amount a patient spends on medical care) as a proxy for risk. Through analysis of this data, we will discover how this algorithm embeds a bias against Black patients, undervaluing their medical risk relative to White patients. Crucially, this bias is not immediately visible when comparing medical costs across White and Black patients.

Bias frequently slips into algorithmic systems unnoticed, particularly when sensitive characteristics (such as race) are omitted or backgrounded in the data science process. In this case, bias in algorithms affects people’s lives very concretely: the bias in the algorithm described here would make it more difficult for Black patients to receive the care they need.
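One way to surface this kind of hidden bias is to hold the risk score fixed and compare patients’ actual health across groups. The sketch below illustrates that audit in pandas; the file and column names (risk_scores.csv, risk_score, race, chronic_conditions) are hypothetical stand-ins, not the lab’s actual dataset.

import pandas as pd

# Hypothetical audit sketch. The file and column names below are
# illustrative stand-ins, not the lab's actual schema.
df = pd.read_csv("risk_scores.csv")

# Bucket patients by risk-score decile, then compare a measure of actual
# health (here, number of chronic conditions) across races in each bucket.
df["risk_decile"] = pd.qcut(df["risk_score"], 10, labels=False)
health_by_group = (
    df.groupby(["risk_decile", "race"])["chronic_conditions"]
      .mean()
      .unstack("race")
)
print(health_by_group)

# If Black patients carry more chronic conditions than White patients at
# the same risk decile, the score is understating their medical need.

A table like this can make the disparity visible even when raw medical costs look comparable across groups.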

Lab 2: Algorithmic Bias in Loan Approval

Access the Lab

Home Financing Inc. is in the business of home loans. They want to automate their home loan approval process, which they believe will be less biased than having humans make decisions about loan approval. They are concerned, for instance, that human reviewers will exhibit bias based on the gender of applicants and where they live, information they need to know in order to make a loan decision. The company is looking to build and deploy a binary classification model. The model would take applicant information as input and return a binary loan decision as output: either “approved” or “not approved.”

However, if there has been historical human bias at Home Financing Inc., and the classifier is trained on past loan decisions, the classifier will learn those human biases and carry them forward when it makes loan decisions in the future. In this lab, students will learn how to use causal inference to determine whether there is evidence of bias in the dataset of prior loan decisions.
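The lab walks students through its own causal-inference workflow; as a rough first pass on the same question, one can check whether past approval decisions depend on a sensitive attribute even after conditioning on credit-relevant features. The sketch below does that with a logistic regression; the file and column names (past_loans.csv, approved, gender, income, credit_history) are hypothetical.

import pandas as pd
import statsmodels.api as sm

# Hypothetical first-pass check for bias in historical decisions; the
# lab's causal-inference approach goes further. File and column names
# are illustrative stand-ins.
loans = pd.read_csv("past_loans.csv")

# Regress the approval decision on gender while conditioning on
# credit-relevant covariates. A large, significant gender coefficient
# suggests decisions depended on gender beyond what those covariates
# explain, a signal worth probing with more careful causal methods.
X = pd.get_dummies(
    loans[["gender", "income", "credit_history"]], drop_first=True
).astype(float)
X = sm.add_constant(X)
y = loans["approved"]

model = sm.Logit(y, X).fit()
print(model.summary())

A conditional association like this is not, on its own, proof of a causal effect, which is why the lab turns to causal inference to interrogate the historical decisions more carefully.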

Lab 3: Correcting for Bias

Access the Lab

The datasets we use to train machine learning models can often encode human biases. From a social and ethical standpoint, we want to remove or minimize this bias so that our models are not perpetuating harmful stereotypes or injustices. From a business and legal perspective, we want to produce effective models that adhere to industry standards of fairness.

There are several ways to tackle this problem, including pre-processing the data to remove bias before training, in-processing the model to change the way it learns from the data, and post-processing the results to correct for bias. In this lab, we introduce an in-processing method to train a logistic regression classifier that maximizes fairness while maintaining a certain level of accuracy.
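The notebook introduces its own in-processing method; to give a flavor of the general idea, the sketch below uses the open-source fairlearn library, whose reductions approach retrains a logistic regression under an explicit fairness constraint. The dataset, file name, and column names here are hypothetical, and the lab’s exact technique may differ.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Illustrative in-processing sketch using fairlearn; the lab's own method
# and dataset may differ. File and column names are hypothetical.
data = pd.read_csv("loans.csv")
y = data["approved"]   # binary label: 1 = approved, 0 = not approved
A = data["gender"]     # sensitive attribute
X = pd.get_dummies(data.drop(columns=["approved", "gender"]))

# The exponentiated-gradient reduction repeatedly reweights the training
# data and refits the base logistic regression, searching for a model
# that satisfies the fairness constraint (here, demographic parity)
# while giving up as little accuracy as possible.
mitigator = ExponentiatedGradient(
    LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=A)
fair_predictions = mitigator.predict(X)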

 

In addition to these labs, we’ve taken to Twitter to document machine learning failures in the real world as they come up. This record is meant to help us do better in the future. Follow us at @mlfailures.

 

Current Project Team

Inderpal Kaur

Research Assistant

Samuel Greenberg

Research Assistant