@mlfailures: The big-picture vision

You walk into a hospital. Unbeknownst to you, an algorithm assigns you a number: a "risk score." This score will determine how much medical care you receive, and how urgently you receive it.

This is not science fiction. Medical risk-scoring algorithms are widely deployed. And, according to recent research from Berkeley, these algorithms systematically underestimate your risk if you're Black.

Real-world bias often makes its way into the data we collect, which results in models that learn and perpetuate that bias. This is an example of machine learning bias.

We've spent the past year developing hands-on educational lab materials that demonstrate examples of machine learning bias in real-world settings, and that teach students how to address it.

That example I mentioned earlier about racial bias in health care? That's our first lab, based on work by Ziad Obermeyer here at Berkeley's School of Public Health. Our second lab examines whether a home lending company's past decisions show bias. Our third lab looks at gender bias in hiring decisions, and at how to train a classifier to counteract that bias.
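To give a flavor of the first lab, here is a minimal sketch of the kind of check it asks students to run, written against a hypothetical pandas DataFrame. The column names (risk_score, race, chronic_conditions) are illustrative, not the lab's actual dataset. The idea: if the score were unbiased, patients assigned the same score should show roughly the same level of health need regardless of race.

```python
# Illustrative sketch only: compares average health need across racial groups
# among patients who received similar algorithmic risk scores.
import pandas as pd

def need_at_equal_risk(df: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    """Average chronic conditions per group, within each risk-score decile."""
    df = df.copy()
    # Bin patients by risk-score decile so we compare like with like.
    df["risk_decile"] = pd.qcut(
        df["risk_score"], q=n_bins, labels=False, duplicates="drop"
    )
    # If the score is fair, each row of this table should look similar across groups.
    return (
        df.groupby(["risk_decile", "race"])["chronic_conditions"]
        .mean()
        .unstack("race")
    )
```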

Our labs are interactive notebooks containing Python code that students can run and edit for themselves. These Jupyter notebooks lay out the technical knowledge for identifying, and correcting for, bias. More importantly, the labs prompt students to think critically about bias in context: that includes examining how we define bias, and who gets to define it.
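As a rough illustration of what "identifying, and correcting for, bias" looks like in code, here is a self-contained sketch of two steps a notebook might walk through: auditing a classifier's false negative rates by group, and reweighing training examples so the label is statistically independent of group membership (a standard mitigation in the spirit of Kamiran and Calders). All variable and column names here are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def false_negative_rate_by_group(y_true, y_pred, group) -> pd.Series:
    """Among true positives, the share the model misses, computed per group."""
    df = pd.DataFrame({"y": y_true, "pred": y_pred, "group": group})
    positives = df[df["y"] == 1]
    return positives.groupby("group")["pred"].apply(lambda s: (s == 0).mean())

def reweighing_weights(y, group) -> pd.Series:
    """Weight each example by P(y) * P(group) / P(y, group).

    Up-weights (label, group) combinations that are under-represented in the
    training data, so the label is independent of group under the new weights.
    """
    df = pd.DataFrame({"y": y, "group": group})
    p_y = df["y"].value_counts(normalize=True)
    p_g = df["group"].value_counts(normalize=True)
    p_yg = df.value_counts(normalize=True)  # joint distribution over (y, group)
    return df.apply(
        lambda r: p_y[r["y"]] * p_g[r["group"]] / p_yg[(r["y"], r["group"])],
        axis=1,
    )

# Hypothetical usage, with X_train, y_train, group_train from a hiring dataset:
# weights = reweighing_weights(y_train, group_train)
# model = LogisticRegression(max_iter=1000).fit(X_train, y_train, sample_weight=weights)
```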

A lot of people are concerned with AI safety, both from a social and ethical perspective as well as a business and legal perspective. We know bias is pervasive in the everyday world, and that bias finds its way into algorithms.

But there's a difference between knowing that bias is a problem and knowing where to look for it, how to mitigate it, and, most importantly, how it actually impacts specific communities in real-world contexts. Our aim is to close the gap between awareness of the problem and the ability to do something about it. As far as we know, our labs are the first to teach students how to identify and ameliorate bias in machine learning.

The future

Our initial three labs help students think about bias in a rich and nuanced way, and give them the tools to analyze it in their own work.

In the future, we hope to get these labs, or labs like them, into every undergraduate curriculum in data science in the US.

With more funding, we can produce more labs that address new issues in machine learning bias as they arise—which they will.

With long-term, sustainable funding, we hope to bring our labs into more, and more diverse, educational contexts. Our labs can reach students from non-traditional backgrounds (for example, first-generation college students; students at two-year or community colleges; high school students of color). Because our labs are situated in real-life settings, they show these students that AI safety is something concrete in their own communities, and a pathway to get involved themselves.

Date: 2020-11-02 Mon 00:00
