Embedded EthiCS™ @ Harvard: Bringing ethical reasoning into the computer science curriculum

Machine Learning (CS 181) – Spring 2018


Module Topic: Machine learning and discrimination
Module Author: Kate Vredenburgh

Course Level: Upper-level undergraduate
AY: 2017-2018

Course Description: “This course provides a broad and rigorous introduction to machine learning, probabilistic reasoning and decision making in uncertain environments.”

Semesters Taught: Spring 2018, Spring 2019, Spring 2020, Spring 2021, Spring 2022, Spring 2023

Tags

  • discrimination (phil)
  • formalized fairness metrics (CS)
  • impossibility results (CS)
  • machine learning (CS)

Module Overview

In this module, we probe the ways that machine learning models can be discriminatory and examine different methods for preventing discriminatory outcomes. We begin by introducing two concepts of discrimination: disparate treatment and disparate impact. We then use those concepts to argue that there are at least four sets of important tools for reducing discrimination arising from the use of machine learning models in the social sphere: the removal of bias from the data, the definition of the optimization problem, the choice of features, and the use of statistical fairness criteria. Finally, we discuss an impossibility result regarding three statistical fairness criteria, and explain why this result is not surprising, given that the data is generated by biased institutions.
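
To make these criteria concrete, here is a minimal, hypothetical sketch (not drawn from the module materials) of the per-group quantities that statistical fairness criteria compare; the data, group labels, and function name are invented for illustration.

    # Per-group quantities compared by common statistical fairness criteria,
    # computed with only the Python standard library.

    def group_metrics(y_true, y_pred):
        """Return base rate, selection rate, FPR, FNR, and PPV for one group."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        n = tp + fp + fn + tn
        return {
            "base_rate": (tp + fn) / n,                  # prevalence of the positive label
            "selection_rate": (tp + fp) / n,             # equal across groups -> statistical parity
            "fpr": fp / (fp + tn) if fp + tn else 0.0,   # equal FPR and FNR -> error-rate balance
            "fnr": fn / (tp + fn) if tp + fn else 0.0,
            "ppv": tp / (tp + fp) if tp + fp else 0.0,   # equal across groups -> predictive parity
        }

    # Invented toy data: two groups with different base rates.
    group_a = ([1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 0, 1, 0, 0, 0, 0])
    group_b = ([1, 1, 1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 0, 1, 0, 0])
    print("Group A:", group_metrics(*group_a))
    print("Group B:", group_metrics(*group_b))

Because the two groups here have different base rates, the impossibility result discussed in the module implies that no classifier could equalize the false positive rate, false negative rate, and positive predictive value across them at the same time, short of perfect prediction.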

    Connection to Course Technical Material

This topic was chosen because it connects material in the course with current research in machine learning on discrimination and statistical fairness criteria. It also connects with an important contemporary social issue: discrimination resulting from the use of machine learning models to make important decisions about how individuals are treated.

This topic connects to course content about bias (in the technical sense of the term from the machine learning literature). As we discuss in the module, technical bias can give rise to discriminatory bias. The module topic also connects with course content about feature extraction from data and optimization.

© 2018 by Kate Vredenburgh, “Discrimination and Machine Learning” is made available under a Creative Commons Attribution 4.0 International license (CC BY 4.0).

For the purpose of attribution, cite as: Kate Vredenburgh, “Discrimination and Machine Learning” for CS 181: Machine Learning, Spring 2018, Embedded EthiCS @ Harvard, CC BY 4.0.

Goals

Module Goals

  1. Teach students two accounts of discrimination: disparate treatment and disparate impact.
  2. Explore the ramifications (including potential limitations) of using a disparate impact definition to identify discrimination.
  3. Introduce students to technical computer science work on discrimination (statistical fairness criteria) and discuss a relevant impossibility result.

    Key Philosophical Questions

Question (1) is the over-arching question of the module. The rest of the questions are raised to help students think through different aspects of the over-arching question.

  1. How can the use of machine learning models to make decisions lead to discrimination? How do we prevent or mitigate this discrimination?
  2. Can decisions be discriminatory even if they fail to satisfy the disparate impact definition of discrimination?
  3. Is the conflict among statistical fairness criteria surprising, given the causes of different base rates in the population? Where is the right place to intervene?

Materials

    Key Philosophical Concepts

Discrimination is an incredibly important concept for current work in computer science on machine learning and fairness. This module aims to show students that it is important to draw on domain experts such as lawyers to address ethical problems through design.

  1. Disparate treatment discrimination
  2. Disparate impact discrimination

    Assigned Readings

Barocas and Selbst discuss (1) how discrimination arises in algorithmic decision-making, and (2) whether that discrimination is wrongful, according to the disparate impact standard in the law. They identify two philosophical foundations for anti-discrimination law in the United States, and argue that these two foundations differ on when and why discrimination is wrongful.

Implementation

Class Agenda

  1. Overview.
  2. Key concepts: disparate impact and disparate treatment.
  3. Why reducing bias in the technical sense does not reduce bias in the normative sense.
  4. Why changing how the optimization task is defined is insufficient to prevent discrimination.
  5. Discussion activity on a hard case in which certain features that predict job success are also strongly correlated with protected attributes.
  6. Introduction to formal fairness criteria as a strategy for preventing discrimination in machine learning systems.
  7. The impossibility result.
  8. Implications of the impossibility result, and why it is not surprising (one standard formulation of the result is sketched just after this agenda).
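
The write-up does not name the specific impossibility result discussed in class; one widely cited version can be stated as an identity that holds for any binary classifier, relating a group's base rate p, false positive rate (FPR), false negative rate (FNR), and positive predictive value (PPV):

    \[
      \mathrm{FPR} \;=\; \frac{p}{1-p} \cdot \frac{1-\mathrm{PPV}}{\mathrm{PPV}} \cdot \bigl(1-\mathrm{FNR}\bigr)
    \]

Because the right-hand side depends on the base rate, two groups with different base rates cannot share all three of PPV, FPR, and FNR unless prediction is perfect or degenerate: fixing any two of the quantities to be equal across groups forces the third to differ.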

    Sample Class Activity

This class activity facilitates student understanding of disparate impact and disparate treatment accounts of discrimination by asking them to determine whether the Glap case is a case of discrimination according to either of those standards. The activity also encourages students to begin to identify potential limitations of the disparate impact standard: many students judge that the Glap case is a case of wrongful discrimination, but this judgment cannot be explained by appeal to standard disparate impact accounts of discrimination. Finally, the activity sets up discussion of the impossibility result considered later in class. Given that discriminatory behavior by individuals produces some of the data on which the system is trained, is it surprising that individuals from subordinated groups have a higher probability of being incorrectly classified unfavorably than those from privileged groups?

In small groups, discuss (1) whether the following case is a case of wrongful discrimination according to the disparate impact standard, and (2) whether you think it is a case of discrimination. If you answered yes to (2), explain why you think it is a case of discrimination. If you answered no, explain why you think it is not.

Hiring at Glap. Glap has hired a new computer science team to design an algorithm to predict the success of various job applicants for sales positions at Glap. As you go through the data and design the algorithm, you notice that African-American sales representatives have significantly fewer average sales than white sales representatives. The algorithm’s output recommends hiring far fewer African-American applicants than white applicants, even after adjusting for the percentage of applicants of each race.
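
For readers who want a concrete handle on the disparate impact standard, here is a small, hypothetical sketch of the selection-rate comparison (the "four-fifths rule") often used to screen for disparate impact; the applicant and hire counts are invented and are not part of the case.

    # Hypothetical screen for disparate impact using the four-fifths rule.
    # All counts are invented for illustration.

    def selection_rate(hired, applicants):
        return hired / applicants

    rate_white = selection_rate(hired=60, applicants=200)   # 0.30
    rate_black = selection_rate(hired=15, applicants=150)   # 0.10

    ratio = rate_black / rate_white
    print(f"Selection-rate ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Below four-fifths of the highest group's rate: a common flag for disparate impact.")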

Module Assignment

Recall the Glap class activity. In class, we thought about the problem statically: given historical data, such as data about sales performance, who should Glap hire right now?

In this follow-up assignment, I want you to think about consumer behavior and firm hiring practices dynamically. Looking at features of the labor market dynamically allows you more, or different, degrees of freedom in your model. For example, in class, you probably took consumers’ preferences about the race of their sales representative as given. What would happen if you allowed consumer preference to vary (say, on the basis of changing racial demographics in the sales force)?
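
To make the dynamic framing concrete, here is one hypothetical sketch of such a feedback loop, in which recorded sales partly reflect current consumer preference and preference drifts toward the composition of the sales force consumers actually encounter; all parameters are invented, and writing code is not part of the assignment.

    # Hypothetical feedback-loop sketch: consumer preference is not fixed but
    # drifts toward the current composition of the sales force.
    # All parameters are invented for illustration.

    def simulate(rounds, minority_hire_share, drift=0.3):
        minority_share = 0.10   # initial share of minority sales reps
        preference = 0.10       # initial consumer receptiveness to minority reps
        for t in range(rounds):
            # Recorded sales partly proxy consumer preference, not only ability.
            relative_sales = preference
            # The sales force moves toward the hiring policy; preference follows.
            minority_share += 0.2 * (minority_hire_share - minority_share)
            preference += drift * (minority_share - preference)
            print(f"round {t}: share={minority_share:.2f}, "
                  f"preference={preference:.2f}, relative_sales={relative_sales:.2f}")

    simulate(rounds=5, minority_hire_share=0.5)   # compare against minority_hire_share=0.1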

Here’s the new case:

The United States Secretary of Labor has heard about your team’s success with Glap and comes to you with a request. The Department of Labor wants to reduce disparate impact discrimination in hiring. They want you to come up with a model of fair hiring practices in the labor market that will reduce disparate impact while also producing good outcomes for companies.

Write two or three paragraphs that address the following:

  • What are the relevant socially good outcomes, for both workers and companies?
  • What are some properties of your algorithm that might produce those socially good results?
    • Think about constraints that you might build in, such as the fairness constraints that we discussed in class, or how you might specify the prediction task that we are asking the machine to optimize.
  • [Optional] Are there tradeoffs that your algorithm has to balance?
  • [Optional] Are there any features of data collection, algorithm implementation, or the social world that make you wary of using machine learning in this case?

We expect that:

  • You focus on one or two points of discussion for each question.
    • For example, for the second question, pick a single fairness criterion.
    • Depth over breadth here!
  • You provide reasons in support of your answers (i.e., explain why you chose your answer).
    • For example, for the first question, you might choose the socially good outcome of increased profit for companies, and give reasons why profit is the right social goal.
  • You are clear and concise – stick to plain, unadorned language.
  • You do not do any outside research.
  • You demonstrate a thoughtful engagement with the questions.

Lessons Learned

Student response to this module has been overwhelmingly positive. A few lessons stand out.

  • The topic of the module directly connects a social issue familiar to students from recent news coverage (discrimination and AI) with specific technical material that is part of current AI research (formal fairness definitions from fair machine learning and an impossibility result). As a result, students can see immediately how the moral issues raised in the module are relevant to concrete, socially important applications of machine learning, and how machine learning researchers are addressing discrimination in current work.
  • The module uses short active-learning exercises in small groups (2-5 students) and class brainstorming to stimulate engagement. We have found that such exercises help dramatically in keeping students engaged in such a large class.
