Embedded EthiCS™ @ Harvard: Bringing ethical reasoning into the computer science curriculum

Networks (CS 134) – Spring 2017


Module Topic: Facebook, fake news, and the ethics of censorship
Module Author: David Gray Grant

Course Level: Undergraduate
AY: 2016-2017

Course Description: “Networks—of social relationships, economic interdependencies, and digital interactions—are critical in shaping our lives. This course introduces models and algorithms that help us understand networks. Fundamental concepts from applied mathematics, microeconomics, and computer science will be presented through the lens of network science, in order to equip students to usefully analyze the “big data” generated by online networks. Applications discussed include the viral spread of ideas, maximizing influence, and the contagion of economic downturns. Concepts and tools covered include game theory, graph theory, data mining, and machine learning.” (Course description)

Semesters Taught: Spring 2017, Fall 2017

Tags

  • networks (CS)
  • influence maximization (CS)
  • fake news (phil)
  • censorship (phil)
  • free speech (phil)

Module Overview

In this module, we discuss whether social media companies (such as Facebook) have a moral obligation to suppress the spread of fake news over their networks. Students are introduced to contemporary arguments for censoring fake news, as well as John Stuart Mill’s classic arguments against censorship of any kind from On Liberty. Much of the discussion, as well as the follow-up assignment, asks students to brainstorm specific strategies Facebook might use to suppress fake news on its platform, and then evaluate those strategies from a moral perspective.

    Connection to Course Material

We chose this topic because it allows us to connect material already being discussed in the course (the use of network modeling tools to predict the spread of content on social media platforms) with a contemporary social issue that is likely to resonate with students (the spread of political “fake news” on social media in the lead-up to the 2016 Presidential election). However, many other topics would be suitable for this course, in part because network modeling tools have practical applications in a wide range of domains.

One thing students learn about in the course is influence cascade models (e.g. independent cascade), which can be used to predict how information and influence will spread through a social network. In one course exercise, students are asked to use such a model to predict whether a hypothetical social media marketing campaign will “go viral.” This module follows up on this material, posing moral questions about how social media companies ought to use these tools in designing their software platforms.
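For readers unfamiliar with the model, here is a minimal sketch of an independent cascade simulation of the kind described above. The adjacency-list graph representation, the uniform per-edge activation probability `p`, and the function names are illustrative assumptions, not the course's actual exercise code; real applications typically estimate a separate probability for each edge.

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=random):
    """One run of the independent cascade model.

    graph: dict mapping each node to a list of its out-neighbors.
    seeds: initially activated nodes (e.g., accounts sharing a story).
    p:     activation probability per edge (uniform here for simplicity).
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbor in graph.get(node, []):
                # Each newly activated node gets exactly one chance to
                # activate each inactive neighbor, independently.
                if neighbor not in active and rng.random() < p:
                    active.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return active

def expected_spread(graph, seeds, p=0.1, trials=1000):
    """Monte Carlo estimate of expected cascade size -- a rough proxy
    for whether a piece of content will "go viral"."""
    return sum(len(independent_cascade(graph, seeds, p))
               for _ in range(trials)) / trials

# Example: a small network where node 0 seeds the cascade.
network = {0: [1, 2], 1: [2, 3], 2: [3, 4], 3: [4], 4: []}
print(expected_spread(network, seeds=[0], p=0.5))
```

Because a single run is random, averaging over many trials (as `expected_spread` does) is the standard way to estimate how far a story is likely to travel from a given set of seed accounts.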

© 2018 by David Gray Grant. “Facebook, Fake News, and the Ethics of Censorship” is made available under a Creative Commons Attribution 4.0 International license (CC BY 4.0).

For the purpose of attribution, cite as: David Gray Grant, “Facebook, Fake News, and the Ethics of Censorship” for CS 134: Networks, Spring 2017, Fall 2017, Embedded EthiCS @ Harvard, CC BY 4.0.

Goals

Module Goals

  1. Familiarize students with important philosophical ideas and arguments from long-standing debates about the ethics of censorship.
  2. Show students how those ideas and arguments bear on contemporary debates about the censorship of “fake news” by social media companies.
  3. Give students practice applying those ideas and arguments to a concrete software engineering problem.

    Key Philosophical Questions

Question (1) is the central question for the module, and asks students to think about a real-world moral problem of the kind they might face in their future work as computer scientists. The remaining questions are intended to help students think more clearly about how we should answer this central question.

  1. Does Facebook have a moral obligation to suppress the spread of fake news over their networks?
  2. What distinctions can we draw among different kinds of content that get called “fake news”? Are these distinctions morally significant?
  3. How might Facebook suppress the spread of fake news over its platform? Are some of these strategies better than others from a moral point of view?
  4. If Facebook should suppress fake news in some way, how should it decide which stories or sources to suppress?
  5. Is the suppression of fake news by social media companies a form of censorship?
  6. Is censorship ever morally permissible? If so, under what conditions?

Materials

    Key Philosophical Concepts

The concepts of moral obligation, prohibition, permission, and supererogation are basic concepts in ethics. However, participating students vary widely in their familiarity with ethics and may not have encountered these concepts before. We have found that discussing them explicitly helps students better understand what is at stake in the module’s central question about what Facebook is morally obligated to do, and how that question relates to other, related questions (such as whether the law should allow Facebook and other social media companies to regulate content posted on their platforms).

  • Moral obligation, permission, prohibition, and supererogation.
  • J.S. Mill’s harm principle.
  • Utilitarianism.
  • Censorship (hard and soft).
  • Democratic legitimacy.

    Assigned Readings

This reading is excerpted from chapter two of J.S. Mill’s On Liberty and provides the bulk of the philosophical content for the module. In this chapter, Mill argues that censorship is virtually always morally wrong, as a strict policy against censorship will produce more happiness in the long run than any other policy. In addition to being powerful and highly accessible, Mill’s arguments remain highly influential in contemporary discussions of censorship. Mill does not explicitly address cases of deliberate misinformation in the reading, creating an opportunity for students to interrogate Mill’s ideas as they apply to a contemporary (and, for them, novel) problem in computer ethics.

These two editorials were selected to represent both diverse political views and diverse positions on the key questions for the lecture.
The editorial from the editorial board of the New York Times appeared shortly after the 2016 Presidential election. In the piece, the editorial board excoriates Facebook for failing to take a more active role in suppressing political misinformation on its platform, arguing that it “owes its users, and democracy itself, far more.”

The editorial from the National Review argues that judgments by private organizations (such as media companies or fact-checking organizations) about what counts as “fake news” should not be trusted, as they are likely to be infected by political bias. Rather than relying on these organizations to judge the reliability of various news sources for us, the editorial suggests, private citizens should instead work together to assess the reliability of particular stories for themselves.

Implementation

Class Agenda

  1. Overview.
  2. Key concepts: moral obligation, varieties of “fake news,” censorship.
  3. Strategies Facebook could use to suppress fake news.
  4. An argument for censoring fake news (based on the NYT editorial).
  5. J.S. Mill’s arguments against censorship in On Liberty.
  6. Discussion.

    Sample Class Activity

This class activity, conducted early in the lecture, gets students thinking about how the moral issues the module focuses on bear on real-world design problems faced by working computer scientists. The activity accomplishes two goals. First, it makes the practical relevance of the material to the work of computer scientists obvious to students from the outset. Second, it gives students practice applying moral reasoning to a concrete software engineering problem.

Students are asked to briefly brainstorm (in pairs) possible strategies that Facebook might use to suppress the spread of stories designated “fake news” by a nonpartisan fact-checker. The Embedded EthiCS TA then asks students to volunteer strategies, which are listed on the board. After discussing these suggestions, students vote (via a Google poll) on whether Facebook should implement a few of the suggested strategies. The TA then briefly reviews the results of the poll and asks volunteers to justify their answers.
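To make the flavor of such strategies concrete, here is a purely hypothetical sketch contrasting “hard” suppression (removing flagged stories) with “soft” suppression (down-ranking them) in a feed-scoring function. The names `feed_score` and `demotion_factor`, the policy values, and the example data are invented for illustration and bear no relation to Facebook’s actual systems.

```python
def feed_score(base_score, flagged_by_fact_checker,
               policy="demote", demotion_factor=0.2):
    """Hypothetical ranking adjustment for stories flagged as fake news.

    policy="remove": hard censorship -- the story is dropped entirely.
    policy="demote": soft censorship -- the story still appears, but
                     is ranked well below unflagged content.
    """
    if not flagged_by_fact_checker:
        return base_score
    if policy == "remove":
        return float("-inf")  # never surfaces in the feed
    return base_score * demotion_factor

# A feed would then sort candidate stories by their adjusted scores.
stories = [("unflagged op-ed", 0.9, False), ("flagged hoax", 0.95, True)]
ranked = sorted(stories, key=lambda s: feed_score(s[1], s[2]), reverse=True)
print([title for title, _, _ in ranked])
```

Each choice baked into a sketch like this (who does the flagging, how severe the demotion is, whether users are told) is exactly the kind of design decision the module asks students to evaluate from a moral perspective.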

    Module Assignment

The sample class activity asks students to identify concrete strategies Facebook might use to suppress the spread of fake news. This assignment follows up on that activity, asking the students to evaluate particular strategies from a moral perspective (drawing on the philosophical ideas they learned in class). The assignment thus gives the students further practice applying moral reasoning to a realistic software engineering problem, as well as practice articulating and defending their own views about how that problem should be addressed.

The follow-up assignment asks students to write a short essay identifying a strategy for suppressing fake news and defending a position about whether Facebook morally ought to implement that strategy (drawing on material from the readings and in-class discussion).

Depending on the size of the class, this assignment could either be graded by the Embedded EthiCS TA or graded pass/fail by the CS TAs.

Lessons Learned

Student response to this module was overwhelmingly positive both times it was taught (Spring and Fall 2017). A few lessons stand out.

  • The topic of the module directly connects a social issue familiar to students from recent news coverage (the spread of fake news on social media platforms in the lead-up to the 2016 U.S. Presidential election) with specific technical material taught in the course (using network modeling tools to predict the spread of content on social media platforms). As a result, students are able to see immediately how the moral issues raised in the module are relevant to concrete, socially important applications of the course material. Tight connections between the moral and technical issues discussed in a course are not always easy to identify, but they go a long way toward ensuring student engagement with the module, and they provide natural opportunities for students to practice applying moral reasoning to realistic software engineering problems.
  • The module uses many short active-learning exercises in small groups (2-5 students) to stimulate engagement. We have found that such exercises help dramatically in keeping students engaged, and that they work well in classes of all sizes (this module has been taught both in a small class of under 20 students and in a large lecture class of over 60).
