Embedded EthiCS @ Harvard: bringing ethical reasoning into the computer science curriculum.

Repository of Open Source Course Modules


Course: CS 263: Systems Security

Course Level: Graduate

While this is a graduate-level course, most of the students who take it are undergraduates. The material in this module is pitched at the advanced undergraduate level.

Course Description: "This course explores practical attacks on modern computer systems, explaining how those attacks can be mitigated using careful system design and the judicious application of cryptography. The course discusses topics like buffer overflows, web security, information flow control, and anonymous communication mechanisms like Tor. The course includes several small projects which give students hands-on experience with various offensive and defensive techniques; the final, larger project is open-ended and driven by student interests." (Course description)


Module Topic: The Ethics of Hacking Back

Module Author: David Gray Grant

Semesters Taught: Fall 2018

Tags:

  • systems (CS)
  • cybersecurity (CS)
  • hacking back (CS)
  • active cyber defense (CS)
  • moral obligation (phil)
  • moral rights (phil)
  • morally significant stakeholder interests (phil)
  • privacy (phil)
  • self-defense (phil)

Module Overview: In this module, we consider whether it is ethically acceptable for cybersecurity professionals to “hack back” when the software systems they defend are targeted by cyberattacks. We begin by discussing a recent real-world case of hacking back – Facebook’s 2011 response to persistent cyberattacks conducted by the “Koobface Gang.” With this case study as a focal point, students consider how various ways in which a cybersecurity team might respond to an attack (including by hacking back) are likely to affect the rights and interests of all parties concerned. We then discuss a number of prominent arguments for and against the claim that hacking back is in general ethically acceptable. Each argument represents a different position about how cybersecurity professionals should balance competing rights and interests as they decide whether hacking back is an ethically acceptable option.


Connection to Course Technical Material: This course focuses on teaching students how to mitigate the effects of cyberattacks using passive defensive strategies that protect sensitive systems and data by making them more difficult to compromise in the first place. To develop a practical understanding of how modern cyberattacks work, students learn how to conduct such attacks themselves, employing the same offensive techniques as malicious attackers. While the course teaches students these techniques so that they can better “harden” systems against them, the techniques can also be used offensively, to “hack back” in response to a cyberattack. This module explores whether – and if so, when – these techniques should be used offensively by cybersecurity professionals. In so doing, the module helps students to better understand what is at stake, from an ethical point of view, in how cybersecurity professionals respond to cyberattacks, and puts them in a better position to ethically evaluate possible responses in real-world decision-making contexts.

The topic for this module was selected using a strategy that we often employ. First, in consultation with the professor for the course, we come up with a list of interesting real-world applications for the technical material covered in class: concrete problems that working computer scientists are currently drawing on that material to solve. Second, we search for recent articles and papers about those applications, trying to identify ethical debates that might make good topics for the module. Third, we try to identify concepts, theories, and arguments from the philosophical literature that (a) help shed light on those ethical debates and (b) are simple enough to cover adequately in the context of a single class session. The third step is often the trickiest, but in some cases is as easy as finding the right philosophical reading. A significant portion of the philosophical content for this module, for instance, came from an introductory-level piece by Patrick Lin (see "Assigned Material" below).


© 2018 by David Gray Grant. "The Ethics of Hacking Back" is made available under a Creative Commons Attribution 4.0 International license (CC BY 4.0).

For the purpose of attribution, cite as: David Gray Grant and James Mickens, "The Ethics of Hacking Back" for CS 263: Systems Security, Fall 2018, Embedded EthiCS @ Harvard. CC BY 4.0.


Module Goals:

  1. Familiarize students with contemporary debates about whether corporations should be allowed to “hack back” in response to cyberattacks.
  2. Teach students to ethically evaluate possible courses of action that cybersecurity professionals might take in response to a cyberattack (including hacking back) by considering how available responses might affect the rights and interests of different groups of stakeholders.
  3. Help students to understand that, in many cases, determining which possible response(s) to a cyberattack would be ethically acceptable requires determining how competing rights and interests ought to be balanced.
  4. Introduce students to philosophical arguments that take different stances on whether hacking back (in general) strikes an ethically acceptable balance among competing rights and interests.
  5. Help students critically evaluate these arguments by considering whether they yield plausible results when applied to realistic case studies.

Key Philosophical Questions:

  1. Under what conditions, if any, is it morally permissible (or, less technically, ethically acceptable) for cybersecurity professionals to hack back on behalf of the corporations they work for?
  2. How might the decision to hack back (or not) affect the moral rights and morally significant interests of different groups of stakeholders?
  3. In cases where stakeholder rights and interests conflict, how should cybersecurity professionals decide how to balance those competing rights and interests in responding to an attack?
  4. Why do many commentators believe that hacking back is not just morally wrong, but obviously morally wrong? (Are they right?)
  5. Why do some commentators reject this orthodox position, claiming that hacking back is not only morally permissible, but should in fact be encouraged?
  6. How should cybersecurity professionals respond in cases where there is uncertainty concerning who is responsible for an attack?
  7. Is hacking back a form of vigilantism, self-defense, or neither? How should our answer to this question affect our views about the moral permissibility of hacking back?
  8. Many people believe that it is morally permissible for the government to quarantine individuals infected with highly contagious, life-threatening diseases without their consent. Is it, similarly, morally permissible for companies to hack into “infected” third-party machines controlled by cybercriminals, provided that doing so is necessary to prevent those machines from being used in potentially devastating cyberattacks?

Question (1) is the central question for the module. Questions (2) and (3) are closely related philosophical questions that are essential to consider in answering (1). The remaining questions explore various perspectives, considerations, and arguments that are useful to consider in answering questions (1)-(3).


Key Philosophical Concepts:

  • Moral obligation and permission.
  • Moral rights and morally significant stakeholder interests.
  • Electronic privacy.
  • Vigilantism, self-defense, and “self-help.”
  • Patrick Lin’s “argument from self-defense” and “argument from public health” in favor of the moral permissibility of hacking back.


Assigned Material:

Patrick Lin's introductory-level piece sets out six arguments for and against the claim that it is morally permissible (ethically acceptable) for companies to hack back when they are targeted by cyberattacks. Reading the piece before class provides students with a good overview of the ethical issues discussed in the module, and sets us up to dig more deeply into some of Lin's arguments during the class session.


This longform article from the New Yorker provides a mix of relevant empirical background, illustrative case studies, and quotes from experts and advocates who take varying positions on whether it would be a good idea to relax current legal restrictions on hacking back. The article contains a great deal of material that can be referred to during the class session in order to illustrate key points and concepts.


Class Agenda:

  1. Overview.
  2. What is hacking back?
  3. Case study: Facebook and the Koobface gang.
  4. Case study analysis: identifying potential effects on stakeholder rights and interests.
  5. Arguments that appeal to the benefits of hacking back.
  6. Arguments that appeal to the costs of hacking back.
  7. Adjudicating conflicting rights and interests: Patrick Lin’s “Argument from Self-Defense” and “Argument from Public Health.”


Sample Class Activity: The central case study for this module is the Facebook security team's 2011 response to ongoing cyberattacks on its systems and users perpetrated by the so-called "Koobface Gang," a group of cybercriminals operating out of St. Petersburg, Russia. After explaining the nature of these cyberattacks and the offensive actions taken by Facebook's security team in response, the Embedded EthiCS TA leads students through a discussion of how the decision to hack back in similar cases might affect the rights and interests of various stakeholder groups. Groups discussed include the company being attacked (in this case, Facebook), the company's user base (Facebook users), third parties not involved in the attack who might be inadvertently affected by an offensive response against the alleged attackers, and other members of the public.

Group brainstorming activities like this one, which help students explore the ethical problems discussed in a module for themselves through the lens of real-world case studies, both promote student engagement in the class discussion and help make those ethical problems more concrete. This module was taught for the first time in the fall of 2018, and students were highly engaged and made many illuminating contributions to the discussion (both by their own lights, as reported in student evaluations, and in the opinion of the TA).


Module Assignment: The follow-up assignment for this module is incorporated into a problem set in which students attempt to use reverse engineering techniques learned in class to gain remote access to, and exfiltrate data from, a simple email server. (This is, of course, only a simulated exercise!) The final question of the problem set asks students to imagine that they are employees of a software company, WidgetCo, whose intellectual property has been stolen by cyberattackers. In this scenario, WidgetCo believes that the email server they have already compromised belongs to the hackers who stole its intellectual property, and wants the student to identify a further vulnerability that will allow them to erase all data stored on the server. WidgetCo claims that these actions are morally justified in light of the initial attack, and in light of the fact that the hackers appear to be planning a new attack on both WidgetCo and another company. Students are asked to explain how they would respond to their employer's request, and to provide a moral defense of their response that appeals to arguments discussed in class or the readings.

This assignment was developed by the professor in consultation with the Embedded EthiCS TA. Exercises like this one ask students to apply the philosophical material discussed in the module to consider how they ought to act in realistic employment scenarios featuring realistic computer science problems. Including such assignments in Embedded EthiCS modules is important from a pedagogical perspective, but they can be difficult for Embedded EthiCS TAs to construct without assistance from computer scientists. Actively involving the professor (and/or the CS TAs) in the construction of assignments is one effective way to address this challenge.


Lessons Learned: Student response to this module was overwhelmingly positive when it was taught in the fall of 2018. In follow-up evaluations, 92% of the class reported that the discussion was interesting, and 84% reported that the class helped them think more clearly about the moral issues we discussed. Further, more than half of the class stayed behind once the session was over to continue the discussion. Two lessons from our experience with this module are worth emphasizing:

  • Teaching this module has reinforced our view that Embedded EthiCS modules work best when they are built around detailed, real-world case studies. Case studies of this kind help students to immediately see the practical relevance of the moral issues and philosophical concepts we discuss in the modules, and provide an effective way for us to make those issues and concepts more concrete for students.
  • Approximately 25-30 students participated in the class session, making the class similar in size to a large discussion section. We have found that longer full-class discussion and brainstorming activities work well with groups of this size, and are an excellent way to promote student engagement and participatory learning. In larger classes, it is often better to use shorter activities in which students work together in small groups before reporting back to the full class.