Embedded EthiCS™ @ Harvard
Bringing ethical reasoning into the computer science curriculum

Artificial Intelligence (CS 182) – Fall 2022


Module Topic: Designing Responsible AI
Module Author: Michael Pope

Course Level: Upper-level undergraduate
AY: 2022-2023

Course Description: “Artificial Intelligence (AI) is already making a powerful impact on modern technology, and is expected to be even more transformative in the near future. The course introduces the ideas and techniques underlying this exciting field, with the goal of teaching students to identify effective representations and approaches for a wide variety of computational tasks. Topics covered in this course are broadly divided into search and planning, optimization and games, and uncertainty and learning. Special attention is given to ethical considerations in AI and to applications that benefit society.” (Harvard course catalog; course site)

Semesters Taught: Fall 2021, Fall 2022

Tags

  • responsibility (phil)
  • stakeholders (phil)
  • respect (phil)
  • justice (phil)
  • harm (phil)
  • systems with AI (CS)
  • predictive accuracy (CS)
  • algorithmic design (CS)

Module Overview

This module introduces students to a framework for responsible system design. The framework helps students identify and evaluate ethical dimensions of a system by considering its benefits and harms, its respect for stakeholders, and justice. Students then bring these ethical lenses to bear on technical choices in the data, design, and deployment of an AI system. Each step of this approach is introduced through two real-world case studies involving the prediction of child maltreatment.

    Connection to Course Technical Material

Following recent lectures that explored the ethical questions that arise from technical decisions, this module provides students with a framework for thinking about responsible design. The framework is applied to two real-world cases in which ethical considerations arise through technical choices.

This course introduces students to AI systems. Prior to this module, students had learned a variety of technical tools for building AI systems. This module occurred at the end of the semester as part of a series of class meetings that focused on ethical dimensions of AI systems, including value alignment and statistical conceptions of fairness. To conclude that series of meetings, the module provided tools for recognizing and addressing ethical problems.

Goals

Module Goals

  1. Cultivate student awareness of ethical dimensions of AI system design and deployment
  2. Identify relevant stakeholders for two case studies involving child maltreatment
  3. Consider the impacts of a system on stakeholders through lenses of benefit/harm, respect, and justice
  4. Recognize the ways that technical choices interact with the ethical dimensions of design and deployment

    Key Philosophical Questions

Q1: This question frames the module and is addressed by answering the other two questions. It is designed to invite students to reflect on the myriad ways that their work can impact others.

Q2: Within the scope of the module, answering this question has two parts. First, students practice identifying stakeholders for particular AI systems. Second, by considering how designers must respond to a system’s impacts, students examine how these responsibilities can require altering the system over time.

Q3: Having considered some ethical impacts of each case study, this question asks students to consider how those impacts are related to choice points in the data, design, and deployment of both systems.

  1. What responsibilities do computer scientists have in designing AI systems?
  2. Who is impacted by an AI system and how are they impacted?
  3. How can computer scientists’ technical choices result in ethical impacts for stakeholders?

Materials

    Key Philosophical Concepts

Responsibility is framed for this module in terms of the other concepts listed. Drawing on the ACM’s ethics code, students begin by identifying relevant stakeholders for a given AI intervention. With stakeholders in mind, they then consider how technical choices can result in benefits or harms, show respect for autonomy and dignity, and result in just outcomes.

  • responsibility
  • stakeholders
  • benefits/harms
  • respect
  • justice

    Assigned Readings

This article provides students with background for the first case study, the Allegheny Family Screening Tool (AFST). The AFST is a machine-learning model that predicts the risk of childhood abuse and neglect, designed to assist human decision makers in determining whether an investigation is warranted. The reading also prepares students for the second case study, Hello Baby PRM, a risk prediction model from the designers of the AFST that aims to predict child maltreatment at the point of birth. Both algorithms are currently in use. (A minimal sketch of this kind of screening pipeline appears after the reading below.)

  • Virginia Eubanks, “A Child Abuse Prediction Model Fails Poor Families,” excerpt from Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. On Wired.com.
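
To make the connection between technical choices and ethical impacts concrete, here is a minimal, hypothetical sketch of the general shape of such a screening pipeline: features go in, a risk score comes out, and a threshold converts the score into a recommendation for a human screener. Everything in it is an invented assumption for illustration; the features, weights, and threshold are not drawn from the AFST or Hello Baby PRM.

    # Hypothetical sketch only -- not the AFST's actual model, features, or threshold.
    from dataclasses import dataclass

    @dataclass
    class Referral:
        """One screening referral with invented, illustrative features."""
        prior_referrals: int    # hypothetical feature: prior referrals on record
        benefits_history: bool  # hypothetical proxy feature of the kind Eubanks critiques

    def risk_score(r: Referral) -> float:
        """Toy hand-set score in [0, 1]; a deployed tool would use a trained model."""
        score = 0.1 * min(r.prior_referrals, 5)
        if r.benefits_history:
            # A data choice with ethical weight: public-benefits records can act
            # as a proxy for poverty rather than for risk of maltreatment.
            score += 0.3
        return min(score, 1.0)

    # A design choice with ethical weight: lowering the threshold flags more true
    # cases but subjects more families to unwarranted investigation; raising it
    # does the reverse.
    SCREEN_IN_THRESHOLD = 0.5

    def recommend(r: Referral) -> str:
        """Return a recommendation for the human screener, not a final decision."""
        return "screen in" if risk_score(r) >= SCREEN_IN_THRESHOLD else "screen out"

    if __name__ == "__main__":
        # Identical referral histories; only the proxy feature differs.
        print(recommend(Referral(prior_referrals=3, benefits_history=True)))   # screen in
        print(recommend(Referral(prior_referrals=3, benefits_history=False)))  # screen out

Even in this toy version, the module’s three choice points are visible: what data to include (the proxy feature), how to design the score (the weights), and how to deploy it (the threshold and the human screener’s role).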

Implementation

Class Agenda

  1. Introduction: brainstorming what responsible AI consists in, with discussion. Case Study 1: the Allegheny Family Screening Tool (AFST).
  2. Identify stakeholders and investigate ethical dimensions of the AFST by looking at potential benefits/harms, respect, and justice.
  3. Discuss how technical choices influence ethical impacts and how Version 2 of the AFST responsibly addresses concerns arising from data use, system design, and deployment.
  4. Case Study 2: Hello Baby PRM.
  5. Apply the ethical framework from Case Study 1 to Case Study 2.

    Sample Class Activity

Having applied the framework to the first case study, students use this activity to practice identifying ethical dimensions of real-life cases. The activity concludes with a large-group discussion of how ethical and technical responsibilities can be intertwined.

Students develop and apply the ethical framework through Case Study 1. For Case Study 2, students break into small groups and identify ethical impacts on relevant stakeholders. The class meeting then concludes by examining how technical choices interact with what the students discussed in small groups.

    Module Assignment

One possible assignment could ask students to apply the ethical framework discussed in class to another case study (e.g., self-driving cars).

There was no assignment for this module.

Lessons Learned

Students were engaged throughout and were able to bring different technical concepts from the course to bear on the case study.

  • This is a large undergraduate course in which students are unaccustomed to small-group discussion. To facilitate effective engagement, it is imperative to model the ethical analysis in the large-group discussion of the first case study.
  • Given two case studies, pacing is crucial. Even in this large-class setting, an instructor should not hesitate to set aside the second case study (perhaps deferring it to a follow-up assignment) if student engagement with the first is high.

Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution 4.0 International License.

Embedded EthiCS is a trademark of President and Fellows of Harvard College | Contact us