Course: CS 134: Networks
Course Level: Upper-level undergraduate
Course Description: “Networks—of social relationships, economic interdependencies, and digital interactions—are critical in shaping our lives. This course introduces models and algorithms that help us understand networks. Fundamental concepts from applied mathematics, microeconomics, and computer science will be presented through the lens of network science, in order to equip students to usefully analyze the “big data” generated by online networks. Applications discussed include the viral spread of ideas, maximizing influence, and the contagion of economic downturns. Concepts and tools covered include game theory, graph theory, data mining, and machine learning.” (Course description)
Module Topic: Facebook, Fake News, and the Ethics of Censorship
Module Author: David Gray Grant
Semesters Taught: Spring 2017, Fall 2017
Module Overview: In this module, we discuss whether social media companies (such as Facebook) have a moral obligation to suppress the spread of fake news over their networks. Students are introduced to contemporary arguments for censoring fake news, as well as John Stuart Mill’s classic arguments against censorship of any kind from On Liberty. Much of the discussion, as well as the follow-up assignment, asks students to brainstorm specific strategies Facebook might use to suppress fake news on its platform, and then evaluate those strategies from a moral perspective.
Connection to Course Technical Material: One thing students learn about in the course is influence cascade models (e.g. independent cascade), which can be used to predict how information and influence will spread through a social network. In one course exercise, students are asked to use such a model to predict whether a hypothetical social media marketing campaign will "go viral." This module follows up on this material, posing moral questions about how social media companies ought to use these tools in designing their software platforms.
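The independent cascade model mentioned above can be illustrated with a short simulation. The sketch below is a generic, minimal version for illustration only (the graph, the activation probability `prob`, and the function name are hypothetical, not taken from the course's actual exercise materials): each newly activated node gets a single chance to activate each of its inactive neighbors, independently with a fixed probability.

```python
import random

def independent_cascade(graph, seeds, prob, rng=None):
    """Simulate one run of the independent cascade model.

    graph: dict mapping each node to a list of its neighbors
    seeds: iterable of initially activated nodes
    prob: probability that a newly active node activates an inactive neighbor
    Returns the set of all nodes active when the cascade stops.
    """
    rng = rng or random.Random()
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbor in graph.get(node, []):
                # Each newly active node gets exactly one chance to
                # activate each inactive neighbor, independently.
                if neighbor not in active and rng.random() < prob:
                    active.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return active

# Averaging the cascade size over many runs estimates the expected
# "reach" of a campaign seeded at particular nodes.
def expected_reach(graph, seeds, prob, trials=1000, seed=0):
    rng = random.Random(seed)
    total = sum(len(independent_cascade(graph, seeds, prob, rng))
                for _ in range(trials))
    return total / trials
```

A campaign "goes viral" in this model when the expected cascade size grows large relative to the seed set; questions about how platforms should use such predictions are exactly what the module takes up.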
We chose this topic because it allows us to connect material already being discussed in the course (the use of network modeling tools to predict the spread of content on social media platforms) with a contemporary social issue that is likely to resonate with students (the spread of political "fake news" on social media in the lead-up to the 2016 Presidential election). However, many other topics would be suitable for this course, in part because network modeling tools have practical applications in a wide range of domains.
© 2018 by David Gray Grant, "Facebook, Fake News, and the Ethics of Censorship" is made available under a Creative Commons Attribution 4.0 International license (CC BY 4.0).
Key Philosophical Questions:
Question (1) is the central question for the module, and asks students to think about a real-world moral problem of the kind they might face in their future work as computer scientists. The remaining questions raised by the module are intended to help students think more clearly about how we should answer this central question.
Key Philosophical Concepts:
The concepts of moral obligation, prohibition, permission, and supererogation are very basic concepts in ethics. However, participating students vary widely in their familiarity with ethics and may not know these concepts. We have found that discussing them explicitly helps students better understand what is at stake in how we answer the module’s central question about what Facebook is morally obligated to do, and how that question relates to neighboring questions (such as whether the law should allow Facebook and other social media companies to regulate content posted on their platforms).
The remaining concepts both (a) help students to understand important arguments for and against the suppression of fake news by social media companies and (b) give students tools for thinking through the moral issues raised by the module for themselves.
This reading is excerpted from chapter two of J.S. Mill’s On Liberty and provides the bulk of the philosophical content for the module. In this chapter, Mill argues that censorship is virtually always morally wrong, on the grounds that a strict policy against censorship will produce more happiness in the long run than any alternative. In addition to being powerful and highly accessible, Mill’s arguments remain highly influential in contemporary discussions of censorship. Mill does not explicitly address cases of deliberate misinformation in the reading, creating an opportunity for students to interrogate Mill’s ideas as they apply to a contemporary (and, for them, novel) problem in computer ethics.
These two editorials were selected to represent both diverse political views and diverse positions on the key questions for the lecture.
The editorial from the editorial board of the New York Times appeared shortly after the 2016 Presidential election. In the piece, the editorial board excoriated Facebook for failing to take a more active role in suppressing political misinformation on its platform, and argued that it "owes its users, and democracy itself, far more."
The editorial from the National Review argues that judgments by private organizations (such as media companies or fact-checking organizations) about what counts as "fake news" should not be trusted, as they are likely to be infected by political bias. Rather than relying on these organizations to judge the reliability of various news sources for us, the editorial suggests, private citizens should instead work together to assess the reliability of particular stories for themselves.
Sample Class Activity: Students are asked to briefly brainstorm (in pairs) possible strategies that Facebook might use to suppress the spread of stories designated “fake news” by a nonpartisan fact-checker. The Embedded EthiCS TA then asks students to volunteer strategies, which are listed on the board. After discussing these suggestions, students vote (via Google poll) on whether Facebook should implement a few of the suggested strategies. The TA then briefly reviews the results of the poll and asks volunteers to justify their answers.
This class activity, conducted early in the lecture, gets students thinking about how the moral issues the module focuses on bear on real-world design problems faced by working computer scientists. It accomplishes two goals. First, it makes the practical relevance of the material for the work of computer scientists obvious to students from the outset. Second, it gives students practice applying moral reasoning in the context of a concrete software engineering problem.
Module Assignment: The follow-up assignment asks students to write a short essay identifying a strategy for suppressing fake news and defending a position about whether Facebook morally ought to implement that strategy (drawing on material from the readings and in-class discussion).
Depending on the size of the class, this assignment could either be graded by the Embedded EthiCS TA or graded pass/fail by the CS TAs.
The sample class activity asks students to identify concrete strategies Facebook might use to suppress the spread of fake news. This assignment follows up on that activity, asking the students to evaluate particular strategies from a moral perspective (drawing on the philosophical ideas they learned in class). The assignment thus gives the students further practice applying moral reasoning to a realistic software engineering problem, as well as practice articulating and defending their own views about how that problem should be addressed.
Lessons Learned: Student response to this module was overwhelmingly positive both times it was taught (spring and fall 2017). A few lessons stand out.