Embedded EthiCS @ Harvard: Bringing ethical reasoning into the computer science curriculum

Advanced Computer Vision (CS 283) – Fall 2023

Module Topic: The Ethics of Emotion Recognition
Module Author: Dasha Pruss

Course Level: Graduate
AY: 2023-2024

Course Description: “Computer vision is about making systems that ‘see’ by turning measurements of light into useful information. This course provides a comprehensive foundation for understanding and creating such systems. Topics include: camera geometry; radiometry and light transport; elements of biological vision; and classical and neural-network methods for extracting information about 3D shape, materials, dynamics and semantics. The course balances breadth and depth, and it blends theory and practice.”

Semesters Taught: Fall 2021, Fall 2023

    Tags

  • emotion recognition [CS]
  • basic emotion theory [phil and CS]
  • privacy [phil and CS]
  • freedom of expression [phil]

Module Overview

This module invites students to critically evaluate the assumptions underlying emotion recognition technology and the ethical issues it raises. The first part of the module introduces the history of emotion recognition and asks students to reflect on two problematic assumptions inherited by many contemporary applications of emotion recognition technology: that inner emotional states can reliably be inferred from facial expressions, and that emotions and facial expressions are universal across cultures. In small groups, students practice identifying these assumptions in a real-world application of emotion recognition technology, Affectiva’s Automotive AI, which uses emotion recognition inside vehicles.
The second part of the module distinguishes the ethical issues that would be resolved by improving the performance of emotion recognition technology from those that would persist even if emotion recognition performed optimally. Students discuss a thought experiment designed to probe the latter set of issues, in which the AI in Affectiva’s Automotive AI is replaced by ‘natural intelligence’ (NI), that is, a trained human psychologist who rides along with drivers. A small-group discussion prompt invites students to consider issues including privacy, freedom of expression, consent, and data-sharing. The module concludes by inviting students to consider whether the responsible development of emotion AI is possible, which they explore further through a written assignment.

    Connection to Course Technical Material

Previous modules for this course centered on facial recognition systems; however, facial recognition is now largely considered a ‘solved’ technical problem, meaning that the ethical challenges it raises pertain largely to its implementation. Unlike facial recognition, the technical challenges of emotion recognition remain unresolved and are an active area of research.

Emotion and affect recognition is an increasingly common and controversial application of computer vision. The lecture and reading provide historical context on the development of emotion recognition systems. During the lecture, students are asked to list the emotion recognition applications they have encountered or worked on.

Goals

    Module Goals

The TA felt it was important to emphasize that emotion recognition systems inherit the dubious epistemological assumptions of Paul Ekman’s basic emotion theory, which underpins most contemporary systems.

  • Understand the origins and two assumptions behind contemporary applications of emotion recognition technology.
  • Identify these assumptions in a specific application of emotion recognition.
  • Understand that there is no way to build an emotion recognition system without relying on benchmarks made by human annotators, who (a) are culturally bound and (b) do not reliably differentiate affect from internal emotional state.
  • Identify the rights that are threatened by the use of emotion recognition, even if it is highly accurate.
  • Communicate a position on the responsible use and development of emotion recognition technology.

    Key Philosophical Questions

These questions follow from a distinction between the ethical challenges raised by contemporary, technically imperfect emotion recognition systems and the ethical challenges that persist even for hypothetical, sophisticated emotion recognition systems. The third question synthesizes the ethical challenges raised by both prongs.

  1. What assumptions and limitations does emotion recognition technology have?
  2. What are the tradeoffs between privacy/freedom of expression and the possible benefits of emotion recognition?
  3. In light of these issues, how, if at all, can emotion recognition technology be responsibly developed and used?

Materials

    Key Philosophical Concepts

Emotion recognition systems build on Paul Ekman’s basic emotion theory, which posits that we can learn someone’s true inner emotional state from their involuntary facial expressions and that there is a shared set of universal emotions across all cultures. The lecture explores problems with this theory and how those problems are inherited by technical systems built on it.
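
This inheritance is visible at the level of system design. As a minimal, illustrative sketch (the CNN backbone, the random batch, and the label names below are hypothetical placeholders, assuming a standard supervised-learning pipeline), a typical facial emotion classifier fixes Ekman’s categories as its label set and treats human annotators’ judgments as ground truth:

# Illustrative sketch: a typical supervised emotion classifier hard-codes
# Ekman's basic-emotion categories as its label space and learns from labels
# that human annotators assigned to face images. The backbone and the random
# batch below are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import models

# The label set itself encodes basic emotion theory: a fixed, supposedly
# universal inventory of emotions.
EKMAN_EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

class EmotionClassifier(nn.Module):
    def __init__(self, num_emotions: int = len(EKMAN_EMOTIONS)):
        super().__init__()
        # Standard CNN backbone over cropped face images.
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_emotions)

    def forward(self, face_crops: torch.Tensor) -> torch.Tensor:
        # One score per Ekman category; the model cannot output "none of
        # these" or a culturally specific emotion concept.
        return self.backbone(face_crops)

def training_step(model, faces, annotator_labels, optimizer):
    # `annotator_labels` are indices into EKMAN_EMOTIONS chosen by human
    # annotators from facial appearance alone, so the model learns to
    # reproduce annotators' judgments, not anyone's inner states.
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(faces), annotator_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = EmotionClassifier()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    faces = torch.randn(8, 3, 224, 224)  # stand-in face crops
    annotator_labels = torch.randint(len(EKMAN_EMOTIONS), (8,))
    print(training_step(model, faces, annotator_labels, optimizer))

However accurate such a model becomes, the fixed label inventory and the annotator-derived labels are built in before any training begins, which is the sense in which the epistemological assumptions are inherited.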

Epistemological challenges aside, the use of emotion recognition technology threatens rights to privacy and freedom of expression. Individuals subject to emotion recognition systems may have sensitive emotional information and other data collected without their consent, and may alter their behavior in response to feeling surveilled (this is known as the Hawthorne effect). Students practice identifying these issues through a thought experiment in which a trained psychologist rides along with individuals and records their emotions for a private company.

  • Basic emotion theory
  • Freedom of expression/Hawthorne effect
  • Privacy
  • Thought experiment

    Assigned Readings

This module draws heavily on chapter 5 of Kate Crawford’s Atlas of AI, which discusses the history and dubious epistemological underpinnings of contemporary emotion recognition systems.

  • Kate Crawford, “Affect” (Chapter 5), Atlas of AI, 2021.

Implementation

Class Agenda

  1. Reading response (before class)
  2. Introduction and tie-in to computer vision
  3. Warm up: what applications of emotion recognition are you familiar with?
  4. Background on the origins of emotion recognition and Paul Ekman’s basic emotion theory
  5. Small group activity: identify two assumptions in the Affectiva Automotive AI case study
  6. Small group discussion: HumaNI (human natural intelligence) thought experiment
  7. Closing thoughts: what would responsible development of emotion AI look like?
  8. Homework assignment: could an emotion recognition system for neurodivergent or blind people be responsibly developed in the short term?

Sample Class Activity

Students are introduced to the following scenario:

HumaNI is an innovative startup offering a service that puts Brian, a trained psychologist, in the back seat of your car to ride along with you everywhere you go. Brian has received extensive training in applying Ekman’s FACS (Facial Action Coding System) framework.

Brian discreetly and quietly observes your facial expressions anytime you’re in the car. Brian has a control module for the car and can make changes in real time depending on what he perceives to be your “emotion, cognitive states, and reactions to the driving experience.” For instance, if Brian perceives that you look frustrated, he might trigger the car’s audio system to play a message encouraging you to take deep breaths. If he perceives you to be drowsy or distracted, he might have the car’s display show an image encouraging you to stop for a coffee.

Brian has signed a non-disclosure agreement with HumaNI, so he isn’t allowed to share what he observes in your car with his friends or family, but the company has access to your data. Brian might also be legally required to share what he sees if there is a police inquiry, such as after an accident or if there is other evidence of misconduct during a traffic stop.

In small groups, students are asked to address the following prompt:

  • What arguments can you give for and against having Brian ride along in the car?
    Consider how Brian’s presence might affect the experience of the driver and the passengers, and what unintended consequences the collected data may have.

After discussing in small groups, the TA calls on groups to share what they talked about and collects a list of possible arguments for and against Brian’s riding along.

    Module Assignment

Students were given the assignment through Canvas, and the module TA graded the responses. Students had one week to complete the assignment.

“Blog post style” is meant to set students at ease; that is, it’s not a formal essay.

“In the style of a blog post (250-300 words), write an answer to the following prompt:

Emotion recognition technology has recently been proposed to help neurodivergent, autistic, or blind people read the emotions of people around them. Given the assumptions and ethical tradeoffs of emotion recognition that we discussed in class, do you think such a system can be responsibly developed in the near term? Why or why not? Be specific in your response.”

Lessons Learned

Students listened attentively to the lecture segment and were engaged during the discussion; they were familiar with emotion recognition and were able to provide many examples.

  • The poll at the beginning of class warmed students up to talking in a low-stakes setting, which can be valuable in a class that is normally lecture-based.
  • Using a humorous case study (HumaNI) increased students’ attentiveness and engagement with the module material. The absurdity of the imagined case study seemed to make it easier for students to notice the ethical issues raised by emotion recognition without being distracted by the technical novelty of emotion AI.
  • Students were fairly unified in their opposition to Brian’s riding along in the car and, particularly the second time the module was delivered, had a difficult time coming up with arguments for this service. Future iterations of the module could modify the case study to be slightly more favorable to the inclusion of Brian in the car (e.g., stipulating that his presence would lower insurance premiums for participating individuals).
  • Both times the module was delivered, students were divided roughly 50:50 in their responses to the homework assignment, which suggests that the question is provocative in the appropriate way.
  • The course head’s reception of the module material was very positive both times it was delivered; he listened attentively to the lecture and left comments on the reading through Perusall alongside the students.
