Embedded EthiCS™ @ Harvard: Bringing ethical reasoning into the computer science curriculum

Programming Languages (CS 152) – Spring 2019

Module Topic: Specification of ethical concerns
Module Author: Diana Acosta Navas

Course Level: Upper-level undergraduate and graduate
AY: 2018-2019

Course Description: “This course is an introduction to the theory, design, and implementation of programming languages. Topics covered in this course include: formal semantics of programming languages (operational, axiomatic, denotational, and translational), type systems, higher-order functions and lambda calculus, laziness, continuations, dynamic types, monads, objects, modules, concurrency, and communication.” (Course description)

Semesters Taught: Spring 2018, Spring 2019, Spring 2021, Spring 2022, Spring 2023

Tags

  • software verification and validation (CS)
  • machine ethics (both)
  • moral rights (phil)
  • systems with AI (CS)
  • programming languages (CS)

Module Overview

As we mention in “Lessons Learned” below, the focus on AI-based systems in this module is optional. In particular, the first strategy described here (using ethical design specifications) can be applied just as easily to systems that are not AI-based. When we teach this module again in the spring of 2019, we plan on developing a separate version of the module focusing on software systems that are not AI-based.

Software systems based on artificial intelligence often exhibit surprising emergent behavior that can have ethically problematic effects on the lives and interests of human beings. Machine ethics is a nascent interdisciplinary field devoted to ensuring that AI-based systems behave in ethically acceptable ways by modifying the way they make decisions to take ethical considerations explicitly into account.

In this module, we discuss two emerging strategies in machine ethics. The first makes use of ethical design specifications. Design specifications are concrete, formally verifiable desiderata that a software system is designed to satisfy. Design specifications are ordinarily technical or legal, but they can also be ethical. Ethical design specifications are intended to ensure that a system does not behave in specific ethically unacceptable ways in (relatively) specific contexts. The second makes use of machine moral reasoning. Machine moral reasoning uses advanced artificial intelligence techniques to simulate the ethical reasoning capacities of human agents, in an effort to prevent ethically unacceptable system behavior in situations that are not specifically foreseen. We consider a series of case studies in machine ethics in order to evaluate the promise and limitations of these two strategies for ensuring ethically acceptable system behavior.
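
To make the first strategy concrete, here is a minimal, hypothetical sketch of an ethical design specification rendered as a checkable property. The specification, the function names, and the blocklist are all illustrative assumptions, not drawn from any system discussed in the module; the point is only that an ethical requirement can be made concrete enough to check mechanically.

```python
# Hypothetical sketch: a narrow ethical design specification for a
# chatbot ("the system never emits a reply containing a blocked term"),
# expressed as a property that can be checked and enforced at runtime.
BLOCKED_TERMS = {"blocked_term_1", "blocked_term_2"}  # stand-ins for a vetted blocklist

def violates_spec(reply: str) -> bool:
    """Return True if the reply breaks the specification."""
    words = set(reply.lower().split())
    return bool(words & BLOCKED_TERMS)

def safe_reply(generate, user_message: str) -> str:
    """Wrap an arbitrary reply generator so the specification is enforced."""
    reply = generate(user_message)
    if violates_spec(reply):
        return "Sorry, I'd rather not discuss that."  # fail closed
    return reply
```

Note how narrow the specification is: it rules out one concrete class of unacceptable behavior in a specific context, which is exactly what makes it checkable.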

Connection to Course Material

In the lead-up to the module, the course covers automated techniques that can be used to verify that a software system will behave in accordance with its design specifications. In this module, we introduce the idea of ethical design specifications, and consider how these might be verified (either using techniques covered in the course, or other methods).
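
As a lightweight stand-in for the automated verification techniques the course covers, property-based testing can search for inputs that falsify a specification. The sketch below uses the hypothesis library against the hypothetical chatbot specification from the previous sketch (re-stated here so the example is self-contained); unlike formal verification, this checks many inputs but proves nothing.

```python
# Property-based testing as an informal approximation of verification:
# `hypothesis` generates many candidate inputs and reports any that
# falsify the property. All names and data are illustrative.
from hypothesis import given, strategies as st

BLOCKED_TERMS = {"blocked_term_1", "blocked_term_2"}

def violates_spec(reply: str) -> bool:
    return bool(set(reply.lower().split()) & BLOCKED_TERMS)

def safe_reply(generate, user_message: str) -> str:
    reply = generate(user_message)
    return "Sorry, I'd rather not discuss that." if violates_spec(reply) else reply

def echo_generator(message: str) -> str:
    # Adversarial stand-in for a learned model: it parrots the user,
    # the kind of behavior adversarial users can exploit.
    return message

@given(st.text())
def test_safe_reply_never_violates_spec(user_message):
    # hypothesis searches for a user message that falsifies the property.
    assert not violates_spec(safe_reply(echo_generator, user_message))
```

A formal proof that safe_reply satisfies the specification for all inputs would be stronger; the techniques covered in the course aim at that kind of guarantee.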

© 2018 by David Gray Grant, “Ethics in Software Verification and Validation” is made available under a Creative Commons Attribution 4.0 International license (CC BY 4.0).

For the purpose of attribution, cite as: David Gray Grant, “Ethics in Software Verification and Validation” for CS 152: Programming Languages, Spring 2018, Embedded EthiCS @ Harvard. CC BY 4.0.

Goals

Module Goals

  1. Introduce students to a simple framework useful for analyzing case studies from an ethical perspective (see “Sample Class Activity” below).
  2. Give students practice using this framework to analyze a series of case studies in which an AI-based software system behaved in unexpected and ethically problematic ways.
  3. Introduce students to two different strategies for ensuring AI-based systems behave in ethically acceptable ways: (1) using ethical design specifications and (2) using machine moral reasoning.
  4. Give students practice thinking through how these two strategies might be applied to address specific ethical problems in real-world case studies featuring AI-based software systems.

Key Philosophical Questions

  1. How, in practice, can the engineers who design AI-based systems prevent them from behaving in ways that have ethically unjustifiable effects on the rights and interests of others?
  2. How should ethical considerations be taken into account during the process of software verification and validation?
  3. To what extent can we prevent AI-based systems from behaving in ethically unjustifiable ways by carefully formulating their design specifications and verifying that those specifications are satisfied?
  4. What other strategies might be used to ensure that AI-based systems do not behave in ethically unjustifiable ways?

Materials

Key Philosophical Concepts

  • Moral obligation.
  • Moral rights.
  • Morally significant stakeholder interests.
  • Ethical design specifications.
  • Machine moral reasoning.

Assigned Readings

This short piece from Nature provides an overview of contemporary research in machine ethics, familiarizing students with various approaches to designing AI-based systems to respond appropriately to ethical considerations.

This article, written by a team of computer and information scientists, considers how cutting-edge technologies from the field of artificial intelligence might be used to augment autonomous software systems with the capacity to apply general ethical principles to novel situations. In the last part of the module, we consider the potential advantages and disadvantages of this approach (compared to technically simpler approaches). The article also considers how formal verification techniques might be used to provide additional assurances that a system will respect ethical principles, and so connects directly with technical material covered in the course.

This reading is intended to be paired with the alternative class activity described below.

Alternative Class Activity

This activity focuses on a case study in which an AI-based software system behaved in unexpected and ethically problematic ways following launch: Microsoft’s Tay Twitterbot (which was manipulated by Twitter users into posting discriminatory messages).

After briefly discussing the case study, students are asked to break into small groups and discuss (1) what features of Tay’s behavior are morally significant and (2) what programmers could have done to prevent this behavior by the system. Afterwards, we introduce a simple framework for anticipating potential ethical issues with a software system: first, identify as many potentially affected stakeholder groups as possible; second, consider how the behavior of the system might affect the rights and interests of the individuals in those groups. This framework is used to debrief the small-group discussion of question (1), followed by a discussion of how the ethical issues the students identify could have been addressed during the software development process.

Later in the class, we repeat the activity just described with a follow-up case study: Microsoft’s Zo Chatbot, which was launched after Tay’s failure and designed to avoid controversial or potentially offensive topics of conversation.

Alternative Assignment

In this follow-up assignment, students collaboratively analyze a more detailed case study concerning the design of semi-automated weapon systems. Students are presented with a fictional scenario in which their employer gives them the task of coding a function for semi-automated weapons. The task is framed by a series of legal, technical, and ethical requirements that the system’s behavior ought to respect.

In a follow-up in-class discussion, the Embedded EthiCS fellow and the course instructor guide students through a discussion of the case study. Students are prompted to discuss their answers to the assignment and consider how they weighed ethical requirements against technical and legal ones. Lastly, they are prompted to consider how ethical requirements may be addressed by incorporating technical specifications into the function.
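
To make the discussion concrete, here is a minimal, hypothetical sketch of the idea the assignment builds on: encoding ethical and legal requirements as explicit preconditions that gate a function's behavior. Everything in this sketch (the names, the fields, the particular requirements) is invented for illustration and does not come from the actual assignment or from any real system.

```python
# Hypothetical sketch: ethical and legal requirements expressed as
# explicit, checkable preconditions. The function fails closed: if any
# requirement is not met, it refuses to authorize.
from dataclasses import dataclass

@dataclass
class EngagementRequest:
    target_confirmed_hostile: bool   # technical requirement: positive identification
    human_operator_approved: bool    # ethical requirement: a human in the loop
    civilians_at_risk: bool          # legal/ethical requirement: avoid civilian harm

def authorize_engagement(req: EngagementRequest) -> bool:
    """Return True only if every requirement is satisfied."""
    if not req.target_confirmed_hostile:
        return False
    if not req.human_operator_approved:
        return False
    if req.civilians_at_risk:
        return False
    return True
```

Writing the requirements as separate, named checks keeps each one auditable and makes the connection between an ethical requirement and a technical specification explicit.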

Implementation

Class Agenda

  1. Case studies in machine ethics.
  2. Case study exercise: identifying stakeholder rights and interests that might be affected by a system’s behavior.
  3. Ethics in software verification and validation.
  4. Using ethical design specifications to ensure ethically acceptable system behavior.
  5. Using machine moral reasoning to ensure ethically acceptable system behavior.

Sample Class Activity

The simple framework described here is adapted from Will Kymlicka’s excellent 1993 article “Moral Philosophy and Public Policy: The Case of NRTs” (on new reproductive technologies). According to Kymlicka, non-experts are liable to make significant mistakes – and overlook important considerations – when they attempt to evaluate technologies using complex tools from moral theory. By contrast, he argues, non-experts tend to be more successful when they focus on anticipating concrete, obviously important ways in which a technology might affect the lives of particular groups of people. Whether or not Kymlicka is right about this, it seems clear that the ability to anticipate how the behavior of a software system might affect our rights and interests is an important skill for computer scientists to have. Activities like this one provide students with an opportunity to practice this essential skill.

The first half of the class session focuses on a series of short case studies in which an AI-based software system behaved in unexpected and ethically problematic ways following launch. These case studies include Microsoft’s Tay Twitterbot (which was manipulated by Twitter users into posting discriminatory messages), Knightscope’s K5 security robot (which disrupted the lives of residents of a camp of homeless individuals in San Francisco), and Google’s targeted advertising tools (which some companies have used in ways that arguably constitute illegal discrimination).

After briefly discussing the case studies, we introduce a simple framework for anticipating potential ethical issues with a software system: first, identify as many potentially affected stakeholder groups as possible; second, consider how the behavior of the system might affect the rights and interests of the individuals in those groups. (Here the Embedded EthiCS TA gives examples of putative rights, such as the right to privacy or the right not to be discriminated against.) Students then apply this framework to the module’s case studies in small groups of 5-6. Later in the class session, students consider how the potential problems they identify might be addressed at different phases of the software engineering process, including software verification and validation.
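
Since the framework is just a two-step mapping, it can be captured directly as data. Here is an illustrative sketch applying it to the Tay case; the stakeholder groups and the rights and interests listed are examples only, not a complete analysis.

```python
# The two-step framework as data: map each stakeholder group (step 1)
# to the rights and interests the system's behavior might affect
# (step 2). Entries are illustrative examples from the Tay case.
stakeholder_analysis = {
    "targets of the discriminatory posts": [
        "right not to be discriminated against",
        "interest in not being harassed",
    ],
    "other platform users": [
        "interest in a usable, non-toxic platform",
    ],
    "the company and its engineers": [
        "interest in the product's reputation and viability",
    ],
}

for group, stakes in stakeholder_analysis.items():
    print(f"{group}:")
    for stake in stakes:
        print(f"  - {stake}")
```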

Module Assignment

In the follow-up assignment, students collaboratively analyze a more detailed case study. The case study features an AI-based software agent being developed at the Center for Artificial Intelligence in Society at USC to assist with the planning of a public health social work intervention targeting homeless youth in Los Angeles. Students review the case study independently and make two posts to a graded discussion forum. In the first, they identify a group of stakeholders in the case study, and give an example of how a right or interest of that stakeholder might be affected by the behavior of the agent. In the second, they suggest a possible strategy for addressing a potential ethical problem identified by another student, or comment on another student’s proposed strategy.

Lessons Learned

Student response to the module was positive when it was taught in the spring of 2018. In follow-up surveys, 86% of students reported that they found the module interesting, and 78% said that participating helped them think more clearly about the ethical issues we discussed. A few things we learned from the experience:

  • Student responses to the assignment were, on the whole, excellent: students identified a wide range of potential ethical issues with the system described in the case study, and a wide variety of potential solutions. We suspect that the assignment was successful for at least two reasons. First, the graded discussion forum format worked even better than we expected at stimulating a robust discussion among students: students appeared to welcome the opportunity to engage with each other’s ideas. Second, the assignment required students to apply skills and concepts they had already practiced applying during the class session activities.
  • While the module was on the whole a success, we plan on making at least two modifications to the content before it is taught again in the spring of 2019. First, this version focuses almost exclusively on AI-based systems. The philosophical ideas covered in the module, however, are readily applicable to software systems that are not AI-based. While we stressed this to the students in class, we now think that it would be better to appeal to a broader range of examples and case studies. Second, in teaching the module in the spring of 2018, we found that there was not sufficient time to adequately discuss both strategies mentioned above (ethical design specifications and machine moral reasoning). When we re-teach the module, we plan on cutting the material on machine moral reasoning in order to deepen our discussion of the ethical design specifications strategy.

Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution 4.0 International License.

Embedded EthiCS is a trademark of President and Fellows of Harvard College.