Embedded EthiCS™ @ Harvard
Bringing ethical reasoning into the computer science curriculum

Economics and Computation (CS 136) – Fall 2019


Module Topic: Recommender Systems
Module Author: Heather Spradley

Course Level: Upper-level undergraduate
AY: 2019-2020

Course Description: “This is a class about the digital economy, specifically the interplay between economic thinking and computational thinking as it relates to electronic commerce, incentives engineering, and networked systems. Topics covered vary each year, but include a subset of:

  • game theory (including peer-to-peer file sharing models and algorithmic game theory),
  • auctions (including internet advertising and combinatorial auctions),
  • incentive compatible mechanism design (including theoretical and algorithmic approaches),
  • human computation, crowd sourcing, and peer prediction,
  • matching algorithms,
  • trust and reputation,
  • electronic currencies (including Bitcoin),
  • networks (network formation, cascades, games on networks)
  • privacy (including differential privacy and privacy-protected advertising)
  • ethical considerations

Emphasis will be given to core methodologies, and the class involves the discussion of theoretical, algorithmic and empirical results. We hope to convince you that incentives matter in many computational settings, and that computation matters in many economic ones. (Course description)”

Semesters Taught: Fall 2019, Fall 2021

Tags

  • fake news (both)
  • recommender systems (CS)
  • filter bubbles (both)
  • autonomy (phil)
  • reasonable belief (phil)
  • attention hacking (CS)

Module Overview

In this module, we discuss whether or not social media sites should use recommender systems that optimize for engagement. We unpack the notions of autonomous belief formation, which refers to forming beliefs for ourselves, and reasonable belief formation, which refers to forming beliefs in accordance with the evidence. In order to understand the implications of recommender systems for both kinds of belief formation, we focus on news recommendations on YouTube as a case study. Recent research reveals that YouTube’s recommender system gradually recommends more and more extreme videos, regardless of what the user originally searched for. When recommender systems filter information based on the user’s preferences, they appear to: (1) provide the user with information that she herself is interested in; and (2) bias the information to which the user is exposed. We discuss what this means for both the autonomous and the reasonable belief formation of the user. As a comparison case, we consider the shipowner from Clifford’s “The Ethics of Belief”, who intentionally ignores relevant evidence and seeks out confirming evidence according to his preferences. We discuss the ethical considerations for creators of social media sites like YouTube (a role computer science students may occupy in the future), as well as for users and content creators (roles any student might currently occupy).
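To make “optimizing for engagement” concrete, here is a minimal illustrative sketch. The candidate videos, the numbers, and the function names are hypothetical assumptions for illustration only; they are not drawn from the course materials or from YouTube’s actual system. The ranker’s sole objective is predicted watch time:

```python
# Toy sketch of an engagement-only recommender (all data is made up).
CANDIDATES = [
    # (title, predicted watch minutes for this user)
    ("Local news roundup", 3.0),
    ("Opinionated commentary", 6.5),
    ("Conspiracy deep-dive", 9.0),
]

def recommend_by_engagement(videos, k=2):
    """Rank purely by predicted watch time; nothing else enters the objective."""
    return sorted(videos, key=lambda v: v[1], reverse=True)[:k]

if __name__ == "__main__":
    # With these made-up numbers, the most extreme video ranks first simply
    # because it is predicted to hold the user's attention longest.
    for title, _ in recommend_by_engagement(CANDIDATES):
        print(title)
```

The point of the toy example is that nothing in such an objective distinguishes attention that is well earned from attention captured by increasingly extreme content.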

Connection to Course Material

In the lecture just prior to the module, the CS professor presents technical material about recommender systems and leads the class in a discussion of the economic costs and benefits of various kinds of recommender systems. The module starts by asking what might be missed by a narrowly economic analysis. We look at ethical, social, and epistemic costs, both to users and to creators, challenging what seemed like a simple market exchange: convenience for profit.

Goals

Module Goals

By the end of the module, students will be able to:

  1. Understand the consequences for belief formation when social media sites optimize for engagement.
  2. Learn philosophical distinctions between types of belief formation in order to think through these consequences.
  3. Apply these tools to the question of whether social media sites should optimize for engagement.
  4. Consider alternative optimization and training strategies for recommender systems (see the illustrative sketch after this list).
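As one example of the kind of alternative mentioned in goal 4, here is a minimal illustrative sketch in which predicted engagement is traded off against a penalty on extreme content. The candidates, the “extremity” scores, and the penalty weight are all hypothetical assumptions, not a proposal from the course materials:

```python
# Toy sketch of one alternative objective (all data and weights are made up).
CANDIDATES = [
    # (title, predicted watch minutes, hypothetical "extremity" score in [0, 1])
    ("Local news roundup", 3.0, 0.1),
    ("Opinionated commentary", 6.5, 0.5),
    ("Conspiracy deep-dive", 9.0, 0.9),
]

PENALTY_WEIGHT = 8.0  # assumed trade-off weight; choosing it is itself a value judgment

def penalized_score(video):
    """Predicted engagement minus a weighted penalty for extremity."""
    _, watch_minutes, extremity = video
    return watch_minutes - PENALTY_WEIGHT * extremity

def recommend(videos, k=2):
    """Rank by the penalized score instead of raw engagement."""
    return sorted(videos, key=penalized_score, reverse=True)[:k]

if __name__ == "__main__":
    # With these made-up numbers, the extreme video no longer ranks first.
    for title, _, _ in recommend(CANDIDATES):
        print(title)
```

The only point of the sketch is that the objective function is a design choice rather than a given.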

Key Philosophical Questions

  1. What is autonomy in belief formation and why is it valuable?
  2. What is reasonable and responsible belief formation and why is it valuable?
  3. Is there a tension between autonomy and reasonability in belief formation, given the overload of information we face today?
  4. Does the curated news model, as an alternative, treat the consumer paternalistically?

Materials

Key Philosophical Concepts

  • Autonomy
  • Reasonable belief
  • Responsible belief
  • Paternalism

Assigned Readings

“YouTube, The Great Radicalizer”, by Zeynep Tufekci – This reading quickly introduces students to empirical research on the social and ethical consequences of YouTube’s current recommender system.

“What is Enlightenment?”, by Immanuel Kant – Reading selections of this essay helps students begin to understand the notion of autonomous belief formation, giving a jumping-off point for discussion of what appropriate autonomy in belief formation is.

“The Ethics of Belief”, by William Clifford – This reading forms a focal point for the class discussion. It helps introduce students to the notion of reasonable belief formation. It also provides a foil for the case study in the form of a thought experiment about a shipowner who ignores worries that his ship is unseaworthy. The essay explores what it is that the shipowner does wrong and why, which provides a clear comparison to various elements of what might be going wrong when we choose to get our news from recommender systems. We discuss whether getting news in this fashion makes us relevantly like the shipowner, as well as what would need to change if we want to be relevantly unlike the shipowner.

Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution 4.0 International License.

Embedded EthiCS is a trademark of President and Fellows of Harvard College | Contact us