Embedded EthiCS™ @ Harvard: Bringing ethical reasoning into the computer science curriculum

Systems Security (CS 263) – Fall 2019


Module Topic: The Ethics of Hacking Back
Module Author: David Gray Grant

Course Level: Graduate
AY: 2019-2020

Course Description

While this is a graduate-level course, most of the students that take it are undergraduates. The material in this module is pitched at the advanced undergraduate level.

“This course explores practical attacks on modern computer systems, explaining how those attacks can be mitigated using careful system design and the judicious application of cryptography. The course discusses topics like buffer overflows, web security, information flow control, and anonymous communication mechanisms like Tor. The course includes several small projects which give students hands-on experience with various offensive and defensive techniques; the final, larger project is open-ended and driven by student interests.” (Course description)

Semesters Taught: Fall 2018, Fall 2019, Fall 2020, Fall 2021

Tags

  • systems (CS)
  • cybersecurity (CS)
  • hacking back (CS)
  • active cyber defense (CS)
  • moral obligation (phil)
  • moral rights (phil)
  • morally significant stakeholder interests (phil)
  • privacy (phil)
  • self-defense (phil)

Module Overview

In this module, we consider whether it is ethically acceptable for cybersecurity professionals to “hack back” when the software systems they defend are targeted by cyberattacks. We begin by discussing a recent real-world case of hacking back – Facebook’s 2011 response to persistent cyberattacks conducted by the “Koobface Gang.” With this case study as a focal point, students consider how various ways in which a cybersecurity team might respond to an attack (including by hacking back) are likely to affect the rights and interests of all parties concerned. We then discuss a number of prominent arguments for and against the claim that hacking back is in general ethically acceptable. Each argument represents a different position about how cybersecurity professionals should balance competing rights and interests as they decide whether hacking back is an ethically acceptable option.

Connection to Course Technical Material

This course focuses on teaching students how to mitigate the effects of cyberattacks using passive defensive strategies that protect sensitive systems and data by making them more difficult to compromise in the first place. To develop a practical understanding of how modern cyberattacks work, students learn how to conduct such attacks themselves, employing the same offensive techniques as malicious attackers. While the course teaches students these techniques so that they can better “harden” systems against them, the techniques can also be used offensively, to “hack back” in response to a cyberattack. This module explores whether – and if so, when – these techniques should be used offensively by cybersecurity professionals. In so doing, the module helps students to better understand what is at stake, from an ethical point of view, in how cybersecurity professionals respond to cyberattacks, and puts them in a better position to ethically evaluate possible responses in real-world decision-making contexts.
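
To make the “harden versus hack back” contrast concrete for readers outside the course, the sketch below pairs a classic stack buffer overflow – one of the attack classes named in the course description – with a bounds-checked version of the same routine. It is a minimal, hypothetical illustration written for this write-up; the function names and buffer sizes are invented and are not taken from the course’s labs.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: the function names and the 16-byte buffer are
       hypothetical, not drawn from the course materials. */

    /* Vulnerable pattern: no bounds check, so any input longer than 15
       bytes overruns buf and corrupts adjacent stack memory. */
    void greet_unsafe(const char *input) {
        char buf[16];
        strcpy(buf, input);
        printf("hello, %s\n", buf);
    }

    /* "Hardened" version: same behavior for short inputs, but a long
       input is truncated instead of overflowing the buffer. */
    void greet_safe(const char *input) {
        char buf[16];
        snprintf(buf, sizeof(buf), "%s", input);
        printf("hello, %s\n", buf);
    }

    int main(void) {
        greet_safe("a-perfectly-reasonable-but-very-long-username");
        greet_unsafe("alice");   /* short input, so still well-defined here */
        return 0;
    }

The same low-level understanding that lets a defender spot and fix the first function is what an offensive response would turn against someone else’s systems – precisely the dual use the module asks students to evaluate.
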
© 2018 David Gray Grant. “The Ethics of Hacking Back” is made available under a Creative Commons Attribution 4.0 International license (CC BY 4.0).

For the purpose of attribution, cite as: David Gray Grant and James Mickens, “The Ethics of Hacking Back” for CS 263: Systems Security, Fall 2018, Embedded EthiCS @ Harvard, CC BY 4.0.

Goals

Module Goals

  1. Familiarize students with contemporary debates about whether corporations should be allowed to “hack back” in response to cyberattacks.
  2. Teach students to ethically evaluate possible courses of action that cybersecurity professionals might take in response to a cyberattack (including hacking back) by considering how available responses might affect the rights and interests of different groups of stakeholders.
  3. Help students to understand that, in many cases, determining which possible response(s) to a cyberattack would be ethically acceptable requires determining how competing rights and interests ought to be balanced.
  4. Introduce students to philosophical arguments that take different stances on whether hacking back (in general) strikes an ethically acceptable balance among competing rights and interests.
  5. Help students critically evaluate these arguments by considering whether they yield plausible results when applied to realistic case studies.

Key Philosophical Questions

Question (1) is the central question for the module. Questions (2) and (3) are closely related philosophical questions that are essential to consider in answering (1). The remaining questions explore various perspectives, considerations, and arguments that are useful to consider in answering questions (1)-(3).

  1. Under what conditions, if any, is it morally permissible (or, less technically, ethically acceptable) for cybersecurity professionals to hack back on behalf of the corporations they work for?
  2. How might the decision to hack back (or not) affect the moral rights and morally significant interests of different groups of stakeholders?
  3. In cases where stakeholder rights and interests conflict, how should cybersecurity professionals decide how to balance those competing rights and interests in responding to an attack?
  4. Why do many commentators believe that hacking back is not just morally wrong, but obviously morally wrong? (Are they right?)
  5. Why do some commentators reject this orthodox position, claiming that hacking back is not only morally permissible, but should in fact be encouraged?
  6. How should cybersecurity professionals respond in cases where there is uncertainty concerning who is responsible for an attack?
  7. Is hacking back a form of vigilantism, self-defense, or neither? How should our answer to this question affect our views about the moral permissibility of hacking back?
  8. Many people believe that it is morally permissible for the government to quarantine individuals infected with highly contagious, life-threatening diseases without their consent. Is it, similarly, morally permissible for companies to hack into “infected” third-party machines controlled by cybercriminals, provided that doing so is necessary to prevent those machines from being used in potentially devastating cyberattacks?

Materials

Key Philosophical Concepts

  • Moral obligation and permission.
  • Moral rights and morally significant stakeholder interests.
  • Electronic privacy.
  • Vigilantism, self-defense, and “self-help.”
  • Patrick Lin’s “argument from self-defense” and “argument from public health” in favor of the moral permissibility of hacking back.

Assigned Readings

This introductory-level piece by Patrick Lin sets out six arguments for and against the claim that it is morally permissible (ethically acceptable) for companies to hack back when they are targeted by cyberattacks. Reading the piece before class provides students with a good overview of the ethical issues discussed in the module, and sets us up to dig more deeply into some of the arguments Lin discusses during the class session.
This longform article from the New Yorker provides a mix of relevant empirical background, illustrative case studies, and quotes from experts and advocates who take varying positions on whether it would be a good idea to relax current legal restrictions on hacking back. The article contains a great deal of material that can be referred to during the class session in order to illustrate key points and concepts.

Implementation

Class Agenda

  1. Overview.
  2. What is hacking back?
  3. Case study: Facebook and the Koobface gang.
  4. Case study analysis: identifying potential effects on stakeholder rights and interests.
  5. Arguments that appeal to the benefits of hacking back.
  6. Arguments that appeal to the costs of hacking back.
  7. Adjudicating conflicting rights and interests: Patrick Lin’s “Argument from Self-Defense” and “Argument from Public Health.”

Sample Class Activity

Group brainstorming activities like this one, which focus on helping students to explore the ethical problems discussed in a module for themselves through the lens of real-world case studies, both promote student engagement in the class discussion and help to make those ethical problems more concrete. This module was taught for the first time in the fall of 2018, and students were highly engaged and made many illuminating contributions to the discussion (both by their own account in student evaluations and in the opinion of the Embedded EthiCS TA).

The central case study for this module is the Facebook security team’s 2011 response to ongoing cyberattacks on its systems and users perpetrated by the so-called “Koobface Gang,” a group of cybercriminals operating out of St. Petersburg, Russia. After explaining the nature of these cyberattacks and the offensive actions taken by Facebook’s security team in response, the Embedded EthiCS TA leads students through a discussion of how the decision to hack back in similar cases might affect the rights and interests of various stakeholder groups. Groups discussed include the company being attacked (in this case, Facebook), the company’s userbase (Facebook users), third parties not involved in the attack who might be inadvertently affected by an offensive response against the alleged attackers, and other members of the public.

Alternative Class Activity (contributed by Marion Boulicault): Students are asked to consider how commonplace metaphors and stereotypes might impact the perception and ethical evaluation of the module’s central case study: the Facebook security team’s 2011 response to ongoing cyberattacks perpetrated by the so-called “Koobface Gang.” The Embedded EthiCS TA begins by posing a set of questions to the class to identify the kinds of background metaphors that pervade ethical debates over hacking back. Examples include questions like “How would you describe what an average hacker looks like? What do you imagine their lifestyle to be like?” and “In the readings for today’s class, how was the act of hacking back described? What was it often compared to?” During the discussion, the Embedded EthiCS TA writes the answers on the board: hackers are almost always imagined to be male, and usually thought to be misfits or outcasts, sometimes described as successful and powerful and sometimes as living a solitary life in front of screens in a dark basement. The world of “hacking back” is often described as the “Wild West,” and talk of “vigilantes” evokes images of men with guns. After these metaphors, tropes, and stereotypes are identified, students are split into groups of three to discuss how these metaphors might impact our ethical intuitions about the Koobface case study. They are also asked to come up with alternative descriptions and consider whether we might view cases of hacking back differently if these alternative metaphors were in place. The overall aim of the activity is to help students understand how ethics is embedded not only in technologies and arguments, but also in the language we use to describe technologies and make arguments.

Module Assignment

This assignment was developed by the professor in consultation with the Embedded EthiCS TA. Exercises like this one ask students to apply the philosophical material discussed in the module to consider how they ought to act in realistic employment scenarios featuring realistic computer science problems. Including such assignments in Embedded EthiCS modules is important from a pedagogical perspective, but they can be difficult for Embedded EthiCS TAs to construct without assistance from computer scientists. Actively involving the professor (and/or the CS TAs) in the construction of assignments is one effective way to address this challenge.

The follow-up assignment for this module is incorporated into a problem set in which students attempt to use reverse engineering techniques learned in class to gain remote access to, and exfiltrate data from, a simple email server. (This is, of course, only a simulated exercise!) The final question of the problem set asks students to imagine that they are employees of a software company, WidgetCo, whose intellectual property has been stolen by cyberattackers. In this scenario, WidgetCo believes that the email server they have already compromised belongs to the hackers who have stolen its intellectual property, and wants the student to identify a further vulnerability that will allow the company to erase all data stored on the server. WidgetCo claims that these actions are morally justified in light of the initial attack, and in light of the fact that the hackers appear to be planning a new attack on both WidgetCo and another company. Students are asked to explain how they would respond to their employer’s request, and to provide a moral defense of their response that appeals to arguments discussed in class or the readings.

Lessons Learned

Student response to this module was overwhelmingly positive when it was taught in the fall of 2018. In follow-up evaluations, 92% of the class reported that the discussion was interesting, and 84% reported that the class helped them think more clearly about the moral issues we discussed. Further, more than half of the class stayed behind once the session was over to continue the discussion. Two lessons from our experience with this module are worth emphasizing:

  • Teaching this module has reinforced our view that Embedded EthiCS modules work best when they are built around detailed, real-world case studies. Case studies of this kind help students to immediately see the practical relevance of the moral issues and philosophical concepts we discuss in the modules, and provide an effective way for us to make those issues and concepts more concrete for students.
  • Approximately 25-30 students participated in the class session, making the class similar in size to a large discussion section. We have found that longer full-class discussion and brainstorming activities work well with groups of this size, and are an excellent way to promote student engagement and participatory learning. In larger classes, it is often better to use shorter activities in which students work together in small groups before reporting back to the full class.

Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution 4.0 International License.

Embedded EthiCS is a trademark of the President and Fellows of Harvard College.