
Embedded EthiCS @ Harvard: Bringing ethical reasoning into the computer science curriculum
Introduction to Computational Linguistics and Natural-language Processing (CS 187) – Fall 2023
Module Topic: Uncertainty, Moral Responsibility, and the Precautionary Principle
Module Author: Camila Hernandez Flowerman
Course Level: Upper-level undergraduate
AY: 2023-2024
Course Description: “Natural-language-processing applications are ubiquitous: Alexa can set a reminder, or play a particular song, or provide your local weather if you ask; Google Translate can make documents readable across languages; ChatGPT can be prompted to generate convincingly fluent text, which is often even correct. How do such systems work? This course provides an introduction to the field of computational linguistics, the study of human language using the tools and techniques of computer science, with applications to a variety of natural-language-processing problems such as these. You will work with ideas from linguistics, statistical modeling, machine learning, and neural networks, with emphasis on their application, limitations, and implications. The course is lab- and project-based, primarily in small teams, and culminates in the building and testing of a question-answering system.” (CS 187 Harvard Course Catalog)
Semesters Taught: Fall 2020, Fall 2021, Fall 2023
Tags
- large language models [CS]
- natural language processing [CS]
- uncertainty [phil]
- risk [phil]
- moral responsibility [phil]
- utilitarianism [phil]
- precautionary principle [phil]
Module Overview
In this module, students consider how to make decisions about acting under conditions of uncertainty. The module takes as its starting point OpenAI’s decision not to release GPT-2 to the public, and asks whether that decision (and the subsequent decision to release the model after all) was morally justified. After discussing why utilitarian principles or cost-benefit analyses may not resolve this question, students are introduced to the precautionary principle as a way of thinking about decision-making under conditions of uncertainty. They discuss several variations of the principle and how these variations might apply to other scenarios involving decisions about actions with potentially large consequences. The precautionary principle is explained in three key pieces: the damage condition, the knowledge condition, and the potential remedy. These pieces can be adjusted to make the principle more or less permissive with respect to what actions it allows and/or what remedies it prescribes.
Connection to Course Technical Material
This module was chosen to situate ethical questions about the development of natural language processing models within the larger context of research and development. The module compares the development of these models to other cases where technologies were developed in spite of concerns about their potential for harm. Because other courses cover some of the more technical ethical issues with natural language processing models, this was a useful alternative to material students may encounter in other courses and modules.
Although the module is fairly broad and does not tie directly into the technical content covered in the course, it connects straightforwardly to the course material more generally: the course is about natural language processing, and the module’s primary case study is GPT-2.
Goals
Module Goals
- Identify the ethically relevant similarities and distinctions between OpenAI’s development of GPT-2 and the development of other historically significant new technologies.
- Critically evaluate OpenAI’s decision to withhold GPT-2 from the public (and its subsequent reversal) in light of the precautionary principle.
- Participate in productive discussions with peers about what kind of moral responsibilities different stakeholders have when developing new, high-consequence technologies under conditions of uncertainty.
Key Philosophical Questions
These key questions could be shifted to create a similar module with a slightly different focus. For example, the module could be focused more on the individual vs. collective moral responsibility aspects of the case.
- How should we make decisions about potentially high-consequence technologies under conditions of uncertainty?
- Was OpenAI morally justified in its decision-making with respect to GPT-2?
- Is the precautionary principle sufficiently action-guiding under conditions of uncertainty?
Materials
Key Philosophical Concepts
The module uses utilitarianism and/or a type of cost-benefit analysis as a foil to introduce the precautionary principle. This family of decision-making guidelines requires that we know [a] what the potential outcomes of each action are and [b] the probability of each of those outcomes. But under conditions of uncertainty, for example when developing new technologies, we may not know [a] and/or [b]. This motivates the introduction of the precautionary principle as a better option in the face of this kind of uncertainty (see the sketch after the concept list below).
- Uncertainty
- Moral Responsibility
- Precautionary Principle
- Utilitarianism
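To make the foil concrete, here is a minimal sketch of the expected-utility calculation that utilitarian or cost-benefit reasoning presupposes. The actions, utilities, and probabilities are invented for illustration and are not from the module materials; the point is only that the calculation goes through when both [a] and [b] are available, and stalls when they are not:

```python
# Expected-utility calculation: needs every outcome AND its probability.
# All outcomes, utilities, and probabilities here are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    if any(p is None for p, _ in outcomes):
        raise ValueError("some outcome probabilities are unknown")
    return sum(p * u for p, u in outcomes)

# Toy "release the model" action with (invented) known probabilities:
release = [(0.5, 10.0),   # broadly beneficial uses
           (0.5, -8.0)]   # misuse, e.g. synthetic disinformation
print(expected_utility(release))  # 0.5*10 + 0.5*(-8) = 1.0 > 0: release

# Under genuine uncertainty the probabilities are simply unavailable,
# so the calculation never gets off the ground:
release_uncertain = [(None, 10.0), (None, -8.0)]
try:
    expected_utility(release_uncertain)
except ValueError as err:
    print("cannot compute:", err)
```

This is the gap the precautionary principle is introduced to fill: it offers guidance precisely where the probabilities (or even the outcomes themselves) are unknown.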
Assigned Readings
The OpenAI blog post explicitly states that they (OpenAI) recognize they cannot be aware of every potentially harmful outcome. This provides the groundwork for suggesting that utilitarianism or a straightforward cost-benefit analysis cannot be applied in the case of GPT-2’s release, since either would require knowing all the possible outcomes of both releasing and not releasing GPT-2. This in turn helps motivate the application of the precautionary principle in the OpenAI case (and the other case studies explored).
- OpenAI blog post on GPT-2 decision
Implementation
Class Agenda
- Introduce OpenAI GPT-2 case.
- Warmup with think/pair/share activity.
- Motivation for and introduction to the precautionary principle.
- Case studies in small lab groups.
- Larger class discussion.
Sample Class Activity
The worksheet explicitly asks students to consider whether the case meets each condition within each principle, and then what the remedy would be in each case. It also asks them to determine whether the principle was applied in real life: did the researchers making decisions at the time actually follow the precautionary principle and invoke the remedy?
After students learn about the precautionary principle, they are given a lab packet to work through in small groups. The lab packet contains:
- Short blurbs describing 2-3 examples of other new technologies developed under uncertainty. The examples included in this module were the CERN Large Hadron Collider, the development of recombinant DNA and the subsequent Asilomar conference, and heritable genome editing.
- A worksheet presenting two variations of the precautionary principle: one more permissive and one stricter. These variations are achieved by changing the damage condition, knowledge condition, and remedy within the principles.
Students discuss how each principle would apply to each example of a new technology being developed, and whether the principle gives what they think is the right or wrong answer in each case.
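For readers who want to see how the three parameters interact, the following minimal sketch models a precautionary principle as three adjustable settings, in the spirit of the worksheet. The thresholds, numeric scales, and case values are invented for illustration and do not come from the lab packet:

```python
from dataclasses import dataclass

@dataclass
class PrecautionaryPrinciple:
    """A precautionary principle as three adjustable parameters.
    All thresholds and scales here are illustrative inventions."""
    damage_threshold: float     # how severe must the potential harm be?
    knowledge_threshold: float  # how plausible must the harm be (0-1)?
    remedy: str                 # what to do if both conditions are met

    def applies(self, potential_damage, plausibility):
        # The principle triggers only if BOTH the damage condition
        # and the knowledge condition are satisfied.
        return (potential_damage >= self.damage_threshold
                and plausibility >= self.knowledge_threshold)

# A stricter principle triggers on smaller, less certain harms
# and prescribes a stronger remedy.
permissive = PrecautionaryPrinciple(damage_threshold=9.0,
                                    knowledge_threshold=0.5,
                                    remedy="delay and monitor")
strict = PrecautionaryPrinciple(damage_threshold=5.0,
                                knowledge_threshold=0.1,
                                remedy="halt development")

# Hypothetical case: a technology with large but speculative risk.
damage, plausibility = 8.0, 0.2
for name, principle in [("permissive", permissive), ("strict", strict)]:
    if principle.applies(damage, plausibility):
        print(f"{name}: {principle.remedy}")
    else:
        print(f"{name}: no remedy required")
# -> permissive: no remedy required
# -> strict: halt development
```

Turning the thresholds down or strengthening the remedy makes the principle stricter; this is exactly the dial the worksheet asks students to turn when comparing the two variations.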
Module Assignment
There are a few variations of this assignment that might make sense. For example, instead of focusing on the principle itself, the assignment could ask students to directly analyze OpenAI’s decision to withhold GPT-2 from the public (and then release it).
After the module, students were asked to write a short 500-word essay on the following prompt:
“Propose a version of the precautionary principle — that is, a setting of the three parameters — that you believe gives the “right answers” for the three case studies that we examined in class, and argue for its appropriateness; or provide an argument that no such version of the precautionary principle exists.”
Lessons Learned
Overall, the module went very well. Students were clearly engaged and wanted to continue discussing even after time had run out. The main problem, however, was time itself. Students had only about 10 minutes for small-group work on their lab packets, and many were unable to finish the worksheet; that left only about 5 minutes for the large class discussion that would have tied the module together. In future iterations of the module, time could be saved by omitting utilitarianism as a motivation for the precautionary principle and diving right into the precautionary principle itself.