CAN A COMPUTER INTERCEPT YOUR EMAIL? Bruce E. Boyden †

In recent years, it has become feasible for computers to rapidly scan the contents of large amounts of communications traffic to identify certain characteristics of those messages: that they are spam, contain malware, discuss various products or services, are written in a particular dialect, contain copyright-infringing files, or discuss symptoms of particular diseases. There are a wide variety of potential uses for this technology, such as research, filtering, or advertising. But the legal status of automated processing, if it is done without advance consent, is unclear. Where it results in the disclosure of the contents of a message to others, that clearly violates the federal law governing communications privacy, the Electronic Communications Privacy Act (ECPA). But what if no record of the contents of the communication is ever made? Does it violate communications privacy simply to have a computer scan emails? I argue that automated processing that leaves no record of the contents of a communication does not violate the ECPA, because it does not “intercept” that communication within the meaning of the Act. The history, purpose, and judicial interpretation of the ECPA all support this reading: interception requires at least the potential for human awareness of the contents. Furthermore, this is not simply an accident of drafting or an omission due to the limited foresight of legislators. Under most theories of privacy, automated processing does not harm privacy. Automated processing may in some cases lead to harm, but those harms are not, in fact, privacy harms, and should be analyzed instead under other legal regimes better adapted to dealing with such issues.

† Assistant Professor, Marquette University Law School. I would like to thank Kevin Bankston, Steven Bellovin, Ryan Calo, Marcia Hofmann, Orin Kerr, Tal Zarsky, and the participants at the 2011 Privacy Law Scholars Conference for their helpful comments on an earlier draft of this article, and Elizabeth Brown and the Marquette University Law Library for their diligent efforts in tracking down the legislative materials cited herein.


CARDOZO LAW REVIEW

[Vol. 34:669

TABLE OF CONTENTS

INTRODUCTION ........................................................ 670
I.   BACKGROUND: THE ECPA’S REGULATION OF ISPS ..................... 677
II.  THE WIRETAP ACT AND THE DEFINITION OF “INTERCEPT” ............. 681
     A. The Text and History of the Statute ........................ 682
     B. Judicial Interpretation of the Wiretap Act ................. 689
     C. The Electronic Communications Privacy Act .................. 698
     D. Application ................................................ 701
III. THE WIRETAP ACT AND THE PROTECTION OF PRIVACY ................. 703
     A. The Harms of Machine Processing ............................ 704
     B. Wiretap Act as Regulation .................................. 708
CONCLUSION ......................................................... 715

INTRODUCTION

Every day, vast quantities of information transit the Internet. Much of that traffic is unencrypted, meaning that any of the computers handling it could read the contents. But for most of the Internet’s history, scanning messages beyond just the header information was a practical impossibility. It was all the networking equipment could do to keep up with the flow and route those messages to their destination in a timely manner. The balance has shifted, however. Advances in processing technology have outpaced the growth in the amount of traffic. 1 As a result, the owners of the network equipment that handles those communications increasingly have the ability to scan the contents of those communications and designate appropriate responses: block, flag, edit, or categorize.

For the last ten years or so, this ability to scan the contents of messages has been an important tool in the war against spam and malware. But there have been other proposed uses as well. Internet service providers (ISPs) have sought to generate advertising revenue by attaching context-appropriate advertisements to email messages or web pages. 2 Copyright owners have pressed ISPs to filter out infringing songs and videos. 3 The government has sought to identify terrorist conspiracies by sifting through email traffic looking for certain words or phrases thought to be suggestive of a plan to commit violence. 4 Still other uses are imaginable. Automated scanning could be used in the future for public health purposes, such as generating messages to persons showing signs of contemplating suicide, 5 or conducting research on outbreaks of disease. 6 It might be employed to determine whether a resource is in use as part of an energy conservation program. 7 Researchers might scan content to identify regional variations in spoken language. 8 Or automated processing could be used to geographically tailor Internet content in various ways. 9 Some, all, or none of these might be worthwhile endeavors.

1 See Paul Ohm, The Rise and Fall of Invasive ISP Surveillance, 2009 U. ILL. L. REV. 1417, 1427–32; see also Alex Kozinski, The Dead Past, 64 STAN. L. REV. ONLINE 117, 118–19 (2012) (combing through telephone records to look for patterns would have been impossible twenty-five years ago).
2 Google, Ads in Gmail and Your Personal Data: How Gmail Ads Work, GMAIL HELP, https://support.google.com/mail/bin/answer.py?hl=en&answer=6603&from=1217362&rd=1 (last updated May 22, 2012); Saul Hansell, Charter Will Monitor Customers’ Web Surfing to Target Ads, N.Y. TIMES BITS BLOG (May 14, 2008, 8:40 AM), http://bits.blogs.nytimes.com/2008/05/14/charter-will-monitor-customers-web-surfing-to-target-ads/. Some of these plans have been abandoned. See, e.g., Eric Pfanner, BT Decides Not to Adopt Internet-Based Ad System, N.Y. TIMES, July 7, 2009, at B7.
3 Brad Stone, AT&T and Other I.S.P.’s May Be Getting Ready to Filter, N.Y. TIMES BITS BLOG (Jan. 8, 2008, 7:07 PM), http://bits.blogs.nytimes.com/2008/01/08/att-and-other-isps-may-be-getting-ready-to-filter/. Two recently tabled bills would have required ISPs to block the resolution of the domain names of “foreign infringing sites” designated by a court to be trafficking in or facilitating the trafficking in infringing material, as well as to take unspecified additional “technically feasible and reasonable measures designed to prevent access” to the site. Stop Online Piracy Act, H.R. 3261, 112th Cong. (2011); Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act of 2011, S. 968, 112th Cong. (2011).
4 See, e.g., In re Nat’l Sec. Agency Telecomms. Records Litig., 671 F.3d 881 (9th Cir. 2011). For more on these examples, see Ohm, supra note 1, at 1432–36.
5 See Facebook Ramps Up Efforts to Prevent Suicide, CBS NEWS TECHTALK (Dec. 14, 2011, 10:26 AM), http://www.cbsnews.com/8301-501465_162-57342888-501465/facebook-ramps-up-efforts-to-prevent-suicide/. Facebook’s initiative is not automated; it relies on a person’s friends to file a form provided by the site, which is then acted on by humans. But Facebook does automatically scan messages for signs of criminal activity. See Kashmir Hill, Yes, Facebook Scans Users’ Private Conversations Looking for Sexual Predators and Child Porn, FORBES THE NOT-SO PRIVATE PARTS BLOG (July 13, 2012, 10:13 AM), http://www.forbes.com/sites/kashmirhill/2012/07/13/yes-facebook-scans-peoples-private-conversations-looking-for-sexual-predators-and-child-porn/.
6 See Jeremy Ginsberg et al., Detecting Influenza Epidemics Using Search Engine Query Data, 457 NATURE 1012 (Feb. 19, 2009), available at http://www.nature.com/nature/journal/v457/n7232/pdf/nature07634.pdf.
7 See Smart Environments: Building with a Memory, ARIZ. ST. UNIV. SCH. OF ARTS MEDIA & ENG’G, http://ame.asu.edu/projects/smartenvironments/bwam.html (last visited Aug. 29, 2012). The “Building with a Memory” program monitors oral conversations as well as video. If the conversants have a reasonable expectation of privacy, which they might if they are alone in a conference room, the monitoring implicates the same federal law as monitoring of electronic communications. See 18 U.S.C. §§ 2510(2), (12), 2511(1)(a) (2006). My thanks to Scott Boone for bringing this example to my attention.
8 See Jennifer Schuessler, Regional English, Tweet by Tweet, N.Y. TIMES ARTS BEAT BLOG (Mar. 2, 2012, 8:00 AM), http://artsbeat.blogs.nytimes.com/2012/03/02/regional-english-tweet-by-tweet/. The studies Schuessler discusses were based on public Twitter status updates, but one can imagine other studies analyzing a much broader sample of traffic.
9 This could be done for salubrious or less salubrious reasons. The most obvious purpose that comes to mind is to use automated scanning to block certain materials from the jurisdiction, such as material subject to an injunction or other legal restriction, or for simple censorship. But other uses are imaginable. ISPs in small countries may be interested in translating foreign sites automatically into the local, but obscure, language. This might be more efficiently done upstream from the access provider.

“Geographical tailoring,” for example, might be a simple euphemism for censorship. Automated suicide prevention could result in an antidepressant version of “Clippy.” 10 And there may be any number of policy concerns with the other proposals, such as whether copyright filters might overprotect, or whether ISPs should maintain a position of “net neutrality” that would prohibit blocking or interfering with content outside of a few narrow exceptions. 11 Regardless of the concerns with any one project, it is becoming increasingly evident that there is great potential for computers to assist in mitigating one of the unfortunate side effects of the information age: the way in which it has deluged us in an undifferentiated, unsorted mass of data. That is, computers can now assist humans to an extent never before possible, not just in compiling and recording data, but in making judgments about that data, and taking the appropriate steps based on those judgments: categorizing messages, filtering out undesirable content, identifying patterns, determining relevance. 12 Some of this potential can be fulfilled through services offered directly to individual users, and thus with their prior consent; but in other cases, such as research or filtering occurring in the middle of the network, there may be no real opportunity for prior negotiations between the parties. Advance consent from every affected individual may be difficult or impossible to obtain.

Whether such remote automated processing would be socially valuable may never be known, however, if courts take the position that existing law already prohibits automated filtering or alteration of content. 13 The Wiretap Act, 14 now part of the Electronic Communications Privacy Act (ECPA), 15 broadly prohibits the intentional interception of the contents of communications by a private actor without the consent of a party. 16 The Wiretap Act and the ECPA were adopted precisely to prevent government and private intrusion into the privacy of electronic communications. If they apply to automated processing of the contents of communications, then all such processing without advance consent is illegal, no matter how useful it might be. 17

But that conclusion follows only if automated processing of communications counts as an “interception” of those communications under the Wiretap Act. I argue below that it does not, nor should it. Put differently, a computer is, at most, a tool that enables humans to spy on other humans; it cannot itself spy on people, and thus automated processing that is not contemporaneously reviewable by humans, and does not produce a record for later human review, is not an interception and should not be defined as one in future revisions of the Wiretap Act. 18

The question has been boiling beneath the surface for several years. Perhaps the first widespread controversy over purely automated scanning occurred in 2004, when Google launched its free Gmail service. 19 Google attempted to generate revenue the same way it did from its search engine service: through the placement of contextualized advertisements automatically inserted into the email viewing page based in part on the contents of the email itself.

10 See Erich Luening, Microsoft Tool “Clippy” Gets Pink Slip, CNET NEWS (Apr. 11, 2001, 11:30 AM), http://news.cnet.com/2100-1001-255671.html.
11 See Preserving the Open Internet, 76 Fed. Reg. 59,192, 59,200, 59,213 (Sept. 23, 2011) (to be codified at 47 C.F.R. pt. 0, 8).
12 The examples listed here, of computers finding patterns in communications, are just a subset of all of the ways in which computers now assist in pattern recognition. See, e.g., Erica Goode, Gunfire Is Heard and Pinpointed, but Technology Is Argued Over, N.Y. TIMES, May 29, 2012, at A13 (automated scanning of sounds for gunfire); William Saletan, Fully Digital Penetration, SLATE HUMAN NATURE BLOG (July 21, 2011, 7:50 AM), http://www.slate.com/articles/health_and_science/human_nature/2011/07/fully_digital_penetration.html (automated scanning of airport passengers for weapons).
13 Paul Ohm has argued that the Wiretap Act likely bars most ISP monitoring. See Ohm, supra note 1, at 1478–87. But as discussed below, Ohm appears to be considering only the collection and retention of user information by ISPs; he does not specifically address transitory use of information by machines. Id.
14 18 U.S.C. §§ 2510–2522 (2006). As Patricia Bellia has noted, the term “Wiretap Act” gives a misleading impression of the scope of the Act’s coverage, as it prohibits both the interception of wire and electronic communications (originally accomplished by tapping phone lines) and “electronic eavesdropping,” the use of a device to overhear a private, in-person conversation. See Patricia L. Bellia, Spyware and the Limits of Surveillance Law, 20 BERKELEY TECH. L.J. 1283, 1287 n.11 (2005).
15 Electronic Communications Privacy Act of 1986, Pub. L. No. 99-508, 100 Stat. 1848 (codified as amended in scattered sections of 18 U.S.C.). The Wiretap Act was originally adopted in 1968 as Title III of the Omnibus Crime Control and Safe Streets Act of 1968, Pub. L. No. 90-351, 82 Stat. 197, 211–23 (codified at 18 U.S.C. §§ 2510–2520 (2006)). Title I of the ECPA amended the Wiretap Act to extend its coverage to electronic communications. See S. REP. NO. 99-541, at 1 (1986), reprinted in 1986 U.S.C.C.A.N. 3555, 3555. Nevertheless, the Wiretap Act is still widely referred to as “Title III,” particularly in criminal law circles.
16 18 U.S.C. § 2511(1)(a) (2006).
17 There are exceptions, discussed below, for the routine handling of communications by ISPs, e.g., routing communications to their intended destinations. The question presented here, however, is whether automatic handling outside of those exceptions falls within the scope of the Act.
18 There is some debate among federal courts and others about whether Internet communications can be intercepted at all. See infra text accompanying notes 142–144, 156. For the reasons discussed below, the notion that Internet communications may be categorically immune from interception liability seems contrary to one of the primary purposes of the ECPA. In any event, given the likelihood that this very issue would be addressed in any revision of the ECPA, and given that not all courts have accepted the idea that interception of Internet traffic is a misnomer, there is at least some risk of interception liability attaching in the circumstances described above.
19 In 1996, in explaining the scope of a wiretap order placed on Harvard’s computer network, the U.S. Attorney appeared to define an “interception” as occurring only when a message was flagged as suspicious, not when it was merely scanned. See Press Release, U.S. Dep’t of Justice, Federal Cybersleuthers Armed with First Ever Computer Wiretap Order Net International Hacker Charged with Illegally Entering Harvard and U.S. Military Computers (Mar. 29, 1996), available at http://www.justice.gov/opa/pr/1996/March96/146.txt. But that incident attracted comparatively little controversy. But see Linnet Myers, Cybersleuthing vs. Civil Rights, CHI. TRIB., Mar. 30, 1996, at N1 (quoting Lawrence Lessig).

But the placement of ads

alongside emails raised a furor among some users, in a way that the placement of ads in search results had not. 20 To some, it seemed as though Google would be reading their emails to place ads in them. 21 The controversy quickly subsided, however. Google currently maintains that only computers, never humans, will scan its subscribers’ emails, 22 a promise that seems to matter to at least some of its users. 23

Which set of intuitions is correct? Does automated scanning violate user privacy, or not? The answer to that question matters for two reasons. First, it informs our interpretation of the Wiretap Act. The Wiretap Act and the ECPA were intended to protect the privacy of telephone calls and electronic communications. If computer scanning of communications compromises that privacy, then it falls at least within the purpose of the Act. Second, and perhaps more importantly, the Wiretap Act is not set in stone. Reform efforts are underway to amend the ECPA to bring it into the 21st century. 24 And the question is only becoming more pressing as the ability of computers to quickly scan vast quantities of information increases.

The issue is not easy to resolve, however. The proper scope of the Wiretap Act is difficult to determine, for several reasons. First, the Wiretap Act is famously resistant to interpretation. 25

20 See Katie Hafner, In Google We Trust? When the Subject Is E-Mail, Maybe Not, N.Y. TIMES, Apr. 8, 2004, at G1.
21 See SAMIR CHOPRA & LAURENCE F. WHITE, A LEGAL THEORY FOR AUTONOMOUS ARTIFICIAL AGENTS 111 (2011) (“Google is reasonably described as ‘reading’ . . . messages.”); Jason Isaac Miller, Note, “Don’t Be Evil”: Gmail’s Relevant Text Advertisements Violate Google’s Own Motto and Your E-Mail Privacy Rights, 33 HOFSTRA L. REV. 1607 (2005); Letter from Chris Jay Hoofnagle, Assoc. Dir., EPIC, et al., to Bill Lockyer, Cal. Att’y Gen. (May 3, 2004), available at http://epic.org/privacy/gmail/agltr5.3.04.html. Chopra and White also suggest, somewhat inconsistently, that Google’s current AdSense program should not be described as “reading” emails because it is not yet complex enough. See CHOPRA & WHITE, supra, at 98.
22 Google, Coming Soon: Better Ads in Gmail, GMAIL HELP, http://mail.google.com/support/bin/answer.py?answer=6603 (last updated Mar. 29, 2011).
23 See, e.g., Paul Boutin, Read My Mail, Please: The Silly Privacy Fears About Google’s E-Mail Service, SLATE (Apr. 15, 2004, 5:26 PM), http://www.slate.com/articles/technology/webhead/2004/04/read_my_mail_please.html; see also Richard A. Posner, Privacy, Surveillance, and Law, 75 U. CHI. L. REV. 245, 249 (2008).
24 The Electronic Communications Privacy Act Amendments Act of 2011, S. 1011, 112th Cong. (2011). The bill, sponsored by Senator Leahy, focuses entirely on Title II of the ECPA, the Stored Communications Act, but a broader overhaul of the ECPA has been discussed.
25 See, e.g., United States v. Smith, 155 F.3d 1051, 1055 (9th Cir. 1998) (stating that the Wiretap Act is “a complex, often convoluted, area of the law”); Steve Jackson Games, Inc. v. U.S. Secret Serv., 36 F.3d 457, 462 (5th Cir. 1994) (calling the ECPA “famous (if not infamous) for its lack of clarity”); Briggs v. Am. Air Filter Co., 630 F.2d 414, 415 (5th Cir. 1980) (expressing the wish that “we had planted a powerful electronic bug in a Congressional antechamber to garner every clue concerning Title III” to aid in “the troublesome task of an interstitial interpretation of an amorphous Congressional enactment”); Orin S. Kerr, Lifting the “Fog” of Internet Surveillance: How a Suppression Remedy Would Change Computer Crime Law, 54 HASTINGS L.J. 805, 820 (2003) (stating that the “law of electronic surveillance is famously complex, if not entirely impenetrable”); Daniel J. Solove, Reconstructing Electronic Surveillance Law, 72 GEO. WASH. L. REV. 1264, 1292 (2004) (“There are a myriad of different terms with complicated

It has an


interlocking set of prohibitions, definitions, and exceptions that defies facile comprehension. Subsequent amendments have only added to the complexity of the statute. Second, the foundation of the Wiretap Act was laid in 1968, when the primary threats to privacy that Congress had in mind were the live surveillance of telephone conversations and electronic eavesdropping. 26 As a result, the exact status of recordings—that is, mechanically reproduced copies of a communication—has never been entirely clear. Although the law was updated in 1986, that update was primarily accomplished by grafting “electronic communications” into the statute, and leaving the existing foundation untouched. 27 Finally, there is a paucity of authority on which to base careful distinctions. The Wiretap Act in its inception and subsequent reformulations has been primarily focused on the regulation of law enforcement investigation techniques. 28 Long passages are devoted to the hoops law enforcement officials must jump through to obtain and then use wiretap evidence. Comparatively short shrift is given to the obligations of private parties such as parents, employers, or telecommunications service providers. The scholarship on the ECPA, including the Wiretap Act, is likewise heavily focused on the ECPA’s role in regulating government surveillance. 29 Only relatively recently has scholarship emerged on the issue of whether machine handling of communications violates user privacy, or more specifically, the Wiretap Act. 30 That scholarship is split, with some suggesting that the potential privacy harms are legion, 31 while others argue that there is little cause for concern. 32

The answer to the question of whether the Wiretap Act prohibits automated handling of communications depends on what exactly is done with the communication.

definitions. The statute zigzags with dozens of cross-references. . . . [I]t contains at least seven different legal threshold requirements for government surveillance . . . .”).
26 Many states subsequently adopted their own wiretapping statutes, but for the most part, they are patterned on the federal Wiretap Act.
27 Electronic Communications Privacy Act of 1986, Pub. L. No. 99-508, 100 Stat. 1848 (1986). This structure was the result of a compromise with the Department of Justice and other opponents of a broad reform of electronic surveillance law. See Robert W. Kastenmeier et al., Communications Privacy: A Legislative Perspective, 1989 WIS. L. REV. 715, 735.
28 See S. REP. NO. 90-1097 (1968), reprinted in 1968 U.S.C.C.A.N. 2112, 2157 (“The major purpose of title III is to combat organized crime.”).
29 See, e.g., Bellia, supra note 14; Susan Freiwald, First Principles of Communications Privacy, 2007 STAN. TECH. L. REV. 3; Kerr, supra note 25; Deirdre K. Mulligan, Reasonable Expectations in Electronic Communications: A Critical Perspective on the Electronic Communications Privacy Act, 72 GEO. WASH. L. REV. 1557 (2004).
30 Early examinations of the issue include Catherine R. Gellis, Note, CopySense and Sensibility—How the Wiretap Act Forbids Universities from Using P2P Monitoring Tools, 12 B.U. J. SCI. & TECH. L. 340 (2006); and Miller, supra note 21.
31 See M. Ryan Calo, The Boundaries of Privacy Harm, 86 IND. L.J. 1131, 1146–52 (2011); Ohm, supra note 1, at 1444–47.
32 See, e.g., Posner, supra note 23; Matthew Tokson, Automation and the Fourth Amendment, 96 IOWA L. REV. 581 (2011).

If some sort of record is created that preserves

or conveys the “substance, purport, or meaning” 33 of the communication for human review, then the act of creating that record is an interception under the Act. Thus, a copyright filter that records the fact that a particular user attempted to download a particular infringing file, or ad-generating software that compiles a list of personal preferences based on the subject matter of emails received, likely intercepts user communications. But as I argue below, where automated blocking or alteration of content is accomplished without creating any record that reasonably permits later human review, the best reading of the Wiretap Act is that such actions are not “interceptions” subject to the Act.

In reaching this conclusion, at least two somewhat ethereal distinctions must be drawn. The first concerns what it means to “acquire” communication contents under the Wiretap Act. The structure and legislative history of both the Wiretap Act and the ECPA suggest that interceptions must, at some level, be accomplished by a person. Interception is defined under the statute as the “aural or other acquisition” of the contents of a communication “through the use of any electronic, mechanical, or other device.” 34 The phrase “aural acquisition,” which dates back to the original Wiretap Act, refers to apprehension through listening. Early in the history of the Wiretap Act, this focus on listening as the prohibited activity posed a problem for courts attempting to apply the Act to recordings. If only listening is an interception, then a person surreptitiously recording a conversation without listening to it could escape liability. Courts avoided this conclusion by holding recordings to be merely a means of assisting a person in aurally acquiring a conversation. One logical consequence of this view of interception, however, is that a device cannot intercept a communication by itself where there is no prospect of later human reception.

The second issue to be resolved is whether automated processing of personal information in some way violates individual privacy. Under most views of privacy, the real danger from the use of personal information is the harm to the subject’s reputation or autonomy that may result, or at least the actions a subject forgoes to minimize the risk of such a harm occurring. Such harms necessarily flow from the fact or prospect of ultimate human interception of the contents of communications. Although there are harms that could potentially flow from purely automated handling of communications, for the most part they are not harms that are best addressed through an anti-interception prohibition.

33 18 U.S.C. § 2510(8) (2006) (defining “contents”).
34 Id. § 2510(4).

This is significant not only in reading the Wiretap Act in light of its purposes, but also in guiding any debate over current efforts


to reform the ECPA.

In Part I below, I give a brief background, followed in Part II by the central argument of the Article: that automated handling of communications does not constitute acquisition. Part III then considers whether this conclusion comports with general notions of what a privacy-protective law like the Wiretap Act should prohibit. I argue that it does. Finally, the Article briefly concludes.

I. BACKGROUND: THE ECPA’S REGULATION OF ISPS

The primary federal law regulating the privacy of electronic communications is the Electronic Communications Privacy Act, first adopted in 1986. 35 The ECPA comprises three parts: the Wiretap Act, which was amended by Title I of the ECPA to cover electronic communications; the Stored Communications Act, Title II of the ECPA; and the Pen Register statute. Two of those parts are potentially implicated by ISP handling of electronic communications: the Wiretap Act and the Stored Communications Act.

The Wiretap Act, broadly speaking, prohibits the intentional interception of the contents of communications using a device. 36 Each of those terms is specially defined in the Act, and the general prohibition is subject to a number of exceptions. It is worth beginning with the exceptions. There has long been considerable uncertainty about what constitutes an interception; that issue is considered in detail in Part II below. But resolution of that issue is unnecessary if an exception to interception liability clearly applies to all or almost all automated processing. There are three important exceptions to consider here.

35 Other federal laws also protect the privacy of electronic communications, such as the Computer Fraud & Abuse Act (CFAA), 18 U.S.C. § 1030 (Supp. IV 2011); and the Cable Communications Policy Act (CCPA), 47 U.S.C. § 551 (2006). For the most part, those laws would not apply to the situations analyzed here. The CFAA requires “unauthorized access,” or at least “exceeding authorized access,” to a networked computer. Accessing a message present on one’s own computer, however, does not constitute “unauthorized access.” The CCPA requires cable operators to obtain affirmative consent from subscribers to “collect” and use their personally identifiable information, unless the use is “necessary to render a cable service or other service provided by the cable operator to the subscriber.” 47 U.S.C. § 551(b). Personally identifiable information does not include aggregate data that does not identify any individual. Id. § 551(a)(2)(A).
36 See 18 U.S.C. § 2511(1) (“[A]ny person who . . . intentionally intercepts, endeavors to intercept, or procures any other person to intercept or endeavor to intercept, any wire, oral, or electronic communication . . . shall be punished as provided in subsection (4) or shall be subject to suit as provided in subsection (5).”); id. § 2510(4) (defining “intercept” as “the aural or other acquisition of the contents of any wire, electronic, or oral communication through the use of any electronic, mechanical, or other device”).

First, the Wiretap Act provides for an exception where the person engaged in the interception has the consent of one of the parties to the

communication. 37 As a practical matter, this exception will be the simplest one to invoke for many ISPs. Any ISP in direct communication with the sender or recipients of a communication—such as Google with its Gmail service—can obtain whatever consent it needs to engage in scanning and handling of communications by requesting that consent as part of a user agreement. 38 But not all ISPs will be able to easily obtain consent from a sender or a recipient of a communication; for example, higher-level ISPs more in the middle of the network will not have direct relationships with individual subscribers. And even those ISPs that are in direct contact with users may fall outside of the exception: consent provisions are sometimes inadequately drafted or ineffectively distributed. 39 That is a particular danger for any ISP that does not include its privacy policy as part of a “leak-proof” clickwrap agreement.

Second, the Wiretap Act contains a pair of exceptions that together exempt much of what ISPs do in the ordinary course of business from coverage under the Wiretap Act. First, recall that all interceptions under the Wiretap Act require, by definition, the use of a “device.” 40 The Act, however, carves out some exceptions from what can qualify as the necessary “device.” In particular, no “telephone or telegraph instrument, equipment or facility, or any component thereof,” that is “being used by a provider of wire or electronic communication service in the ordinary course of its business” qualifies as a device. 41 In effect, an ISP conducting activities in the ordinary course of its business cannot intercept communications.

The difficulty for any ISP taking advantage of this exception, however, is determining what the “ordinary course” of an ISP’s business is. Certainly it cannot include everything that the ISP chooses to do as part of its business. 42 And just as certainly it includes activities that communications service providers have always engaged in, such as routing communications. It is difficult to determine, however, what criteria might be used to assess business practices that change.

37 Id. § 2511(2)(c), (d). Paragraph (d) has a further exception, which is not relevant here, for consent provided to a non-law-enforcement officer to intercept a communication. Such consent is invalid if the “communication is intercepted for the purpose of committing any criminal or tortious act.” Id. § 2511(2)(d).
38 In fact, Google has done this. See Advertising Privacy FAQ—Policies & Principles, GOOGLE, http://www.google.com/intl/en/policies/privacy/ads/ (last visited Aug. 31, 2012).
39 See, e.g., Rassoull v. Maximus, Inc., No. DKC 2002-0214, 2002 U.S. Dist. LEXIS 21866, at *9 (D. Md. Nov. 8, 2002) (finding that the policy did not clearly cover telephone calls); Ali v. Douglas Cable Commc’ns, 929 F. Supp. 1362, 1377 (D. Kan. 1996) (finding that the policy was not clearly distributed until after monitoring began).
40 18 U.S.C. § 2510(4).
41 Id. § 2510(5)(a).
42 See Watkins v. L.M. Berry & Co., 704 F.2d 577, 582 (11th Cir. 1983) (“The phrase ‘in the ordinary course of business’ cannot be expanded to mean anything that interests a company.”); cf. Smith v. Bob Smith Chevrolet, Inc., 275 F. Supp. 2d 808, 818–19 (W.D. Ky. 2003) (interpreting “legitimate business need” under the Fair Credit Reporting Act).

The first provider to offer a new service, whatever it might be, would not be
engaged in a traditional business practice. If “ordinary course of business” is limited to traditional business practices, there would seem to be no way to adopt new business practices without risk. Beyond that conclusion, however, it is unclear what the “ordinary course of business” exception permits, and in particular whether it covers the practices discussed here. And of course the provision would be of no help at all to someone who wished to establish automated processing of communications but was not a service provider.

There is a second exception aimed at protecting service providers from liability, set forth as a freestanding exception in § 2511(2)(a)(i). Whereas the device exception allows ISPs to conduct activities in the ordinary course of business, the exception in § 2511(2)(a)(i) permits the communications providers’ employees to conduct certain activities without risk of liability. Specifically, the exception permits the employees of an ISP to

intercept, disclose, or use [a] communication in the normal course of his employment while engaged in any activity which is a necessary incident to the rendition of his service or to the protection of the rights or property of the provider of that service, except that a provider of wire communication service to the public shall not utilize service observing or random monitoring except for mechanical or service quality control checks. 43

The reference at the end to wire communication service reveals this provision’s origins in the original Wiretap Act as an exception aimed at protecting telephone company employees in the normal course of their duties, as long as they were not snooping—that is, as long as their listening to a telephone conversation was “a necessary incident” of connecting calls, protecting against unauthorized use of the telephone system, or checking the quality of the lines. 44 But in adopting the ECPA, Congress specifically declined to extend the limitation on random monitoring to electronic communications, citing “an important technical distinction between electronic communications and traditional voice telephone service,” namely, that electronic communications service functions such as “monitor[ing] a stream of transmissions in order to properly route, terminate, and otherwise manage the individual messages . . . do not involve humans listening in

43 18 U.S.C. § 2511(2)(a)(i). “Service observing” and “random monitoring” were apparently intended to refer to the same thing: a specific procedure used by employees in the Bell Telephone system to check on the quality of telephone connections. The proviso was evidently inserted in order to preclude Bell companies from monitoring employee conversations. See Surveillance: Hearing Before the Subcomm. on Courts, Civil Liberties, and the Admin. of Justice of the H. Comm. on the Judiciary, 94th Cong. 224–25 (1975) (statement of H.W. William Caming, attorney for AT&T). 44 The text of the provision, in referring to the covered employees, still specifically refers to, among others, “an operator of a switchboard.” 18 U.S.C. § 2511(2)(a)(i).
on voice conversations. Accordingly, they are not prohibited.” 45 The legislative history therefore seems to directly suggest that automated handling of communications should not be subject to the same restrictions as monitoring performed by a human. But the exception itself excuses only service provider employees from liability under the Act; it does not appear to apply to service providers themselves. 46 And it protects those employees only so long as they are acting within the “normal course of . . . employment” and “engaged in any activity which is a necessary incident to the rendition of his service” or protecting the service provider’s “rights or property.” 47 It is unclear whether this exception allows an employee to perform any monitoring necessary to the rendition of service as defined by the service provider, or whether the monitoring has to be necessary in a more objective sense. In any event, as with the device exception, the exception offers no comfort to researchers or others using automated processing who are not employed by service providers.

Finally, although not strictly an exception, there is an argument that the Wiretap Act does not apply to electronic communications such as emails. In addition to amending the Wiretap Act, the ECPA added another body of law to protect the contents of communications: the Stored Communications Act. The Stored Communications Act contains at least three sorts of provisions. The first, set forth in 18 U.S.C. § 2701, prohibits unauthorized access to an ISP’s facilities in order to obtain, alter, or destroy a communication. The second, set forth in § 2702, prohibits ISPs from disclosing communications except under certain identified circumstances. And the third set of provisions, which represents most of the rest of the Stored Communications Act, provides the procedures by which the government can request communications from ISPs.
48 As discussed further below, 49 some courts have held that, in essence, the Wiretap Act does not apply to electronic communications such as emails, because such communications are obtained only out of temporary storage, which is the subject of the Stored Communications Act. And if the Stored Communications Act, not the Wiretap Act, 45 H.R. REP. NO. 99-647, at 47 (1986); see also S. REP. NO. 99-541, at 20 (1986), reprinted in 1986 U.S.C.C.A.N. 3555 (same). 46 Section 2511(2)(a)(i) applies only to “an operator of a switchboard, or an officer, employee, or agent of a provider of wire or electronic communication service.” 18 U.S.C. § 2511(2)(a)(i). It is true that a service provider can only act through its employees, but it is not true that only the employees can be liable for violations. See id. § 2510(6) (defining “persons” subject to the Act as including “any individual, partnership, association, joint stock company, trust, or corporation”). 47 The “rights or property” condition is intended to allow such things as tracking down someone who is using the service without paying for it. See United States v. DeLeeuw, 368 F. Supp. 426 (E.D. Wis. 1974). 48 See 18 U.S.C. §§ 2703–2706, 2709. 49 See infra text accompanying notes 142–144.
governs the sort of automated handling of contents discussed in this Article, then there is an easy solution to the problem: under the Stored Communications Act, ISPs can do whatever they want with communications, so long as they abide by their non-disclosure obligations. By its terms, § 2701 contains a clear exception for “conduct authorized . . . by the person or entity providing a wire or electronic communications service.” 50 But there is considerable controversy about the conclusion that emails in temporary storage during transit cannot be intercepted. 51 The entire purpose of Title I of the ECPA was to extend the protections of the Wiretap Act to emails. An interpretation of the Wiretap Act that makes that impossible frustrates one of the core purposes of the ECPA. Although courts are split on the matter, at least a few courts have agreed, 52 and given reform efforts currently under way, it is likely that some version of anti-interception legislation will clearly apply to electronic communications in the near future.

II. THE WIRETAP ACT AND THE DEFINITION OF “INTERCEPT”

The Wiretap Act prohibits “any person” from “intentionally intercept[ing] . . . any wire, oral, or electronic communication.” 53 The precise acts prohibited by this language have always been somewhat opaque, but the text of the statute and the legislative history suggest that the Wiretap Act prohibits only the interception of communications by humans, not interception by machines that does not enable later human review. And while there is a dearth of cases under the Wiretap Act considering the question of whether computer processing of communications alone is an “interception,” there is a substantial body of law concerning a somewhat analogous question that arose under the Wiretap Act as originally drafted: whether a recording of a conversation that was not contemporaneously overheard by a third party can be said to intercept that conversation.
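The kind of machine-only processing at issue can be made concrete with a short, hypothetical sketch (the function name and keyword list are invented for illustration and describe no actual filtering system): a program that classifies a message's contents but retains no record of them.

```python
# Hypothetical illustration (not from the Article): automated scanning that
# classifies a message but keeps no record of its contents. Only a one-bit
# verdict survives the call; no person could later review what was said.

SPAM_MARKERS = {"winner", "lottery", "prize"}  # invented example list

def classify_message(body: str) -> bool:
    """Return True if the message looks like spam.

    The body exists only in this function's local scope: nothing is
    logged, stored, or disclosed to a human.
    """
    words = {w.strip(".,!?").lower() for w in body.split()}
    return len(words & SPAM_MARKERS) >= 2
```

The caller, and any later human reviewer, can learn only the single fact the classifier emits about the message, never the message itself; that is the scenario the Article argues falls outside the Wiretap Act's "interception."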
Based on this evidence, the best interpretation of the statute is that where communications are accessed by, and accessible only to, a machine, not a person, no interception has occurred within the scope of the Wiretap Act. Arriving at that interpretation will require a detailed examination of how the language in the Wiretap Act evolved and was subsequently

50 18 U.S.C. § 2701(c)(1). 51 See, e.g., Bellia, supra note 14, at 1323; Samantha L. Martin, Note, Interpreting the Wiretap Act: Applying Ordinary Rules of “Transit” to the Internet Context, 28 CARDOZO L. REV. 441 (2006); Katherine A. Oyama, Note, E-Mail Privacy After United States v. Councilman: Legislative Options for Amending ECPA, 21 BERKELEY TECH. L.J. 499, 514 (2006). 52 Compare, e.g., Konop v. Hawaiian Airlines, Inc., 302 F.3d 868, 878 (9th Cir. 2002) (stating that electronic communications cannot be intercepted while in electronic storage), with, e.g., United States v. Szymuszkiewicz, 622 F.3d 701, 706 (7th Cir. 2010) (stating that copying of emails at point between sender and recipient is interception). 53 18 U.S.C. § 2511(1)(a).
interpreted. First, we will need to take an extended tour of the legislative history of the Act, focusing on how the definition of a violation evolved as a consensus on how to regulate electronic surveillance coalesced. Although the issue was only rarely addressed, there are nevertheless a number of indications that it was commonly understood that “interception” required the acts of a person, not a machine. Indeed, it was only at the last minute that the text of the Act was changed to eliminate an express reference to persons, and that was done in order to achieve other objectives. The case law initially interpreting the Wiretap Act agreed with this interpretation; to the extent that machines could “intercept” communications, it was only by acting on behalf of a person. Finally, I will examine the text and history of the Electronic Communications Privacy Act of 1986 to ascertain the effect of that statute on automated processing. If anything, that history also supports the argument being made here: automated processing that is not preserved for human review is beyond the scope of the Wiretap Act.

A. The Text and History of the Statute

Congress adopted the Wiretap Act after decades of debate concerning how best to regulate law enforcement surveillance of telephone and in-person conversations. In 1928, at the dawn of the modern telecommunications age, the Supreme Court held in Olmstead v. United States that telephone wiretaps did not violate the Fourth Amendment because they did not constitute a search of “persons, houses, papers, [or] effects.” 54 Chief Justice Taft’s majority opinion suggested, however, that “Congress may of course protect the secrecy of telephone messages by making them, when intercepted, inadmissible . . . in federal criminal trials, by direct legislation . . . .” 55 Congress followed Chief Justice Taft’s suggestion six years later when, as part of the Communications Act of 1934, it adopted section 605, which provided that “no person not being authorized by the sender shall intercept any communication and divulge or publish the existence, contents, substance, purport, effect, or meaning of such intercepted communication to any person.” 56 Almost immediately, section 605 proved unsatisfactory. For one thing, its precise scope was difficult to determine. Like the ECPA decades later, section 605 was drafted to regulate the privacy of a new technology—telephones—but did so in language more appropriate for an older and much better-understood technology—namely,

54 277 U.S. at 465; see U.S. CONST. amend. IV. 55 277 U.S. at 465. 56 Communications Act of 1934, Pub. L. No. 73-416, § 605, 48 Stat. 1064, 1103 (codified as amended at 47 U.S.C. § 605 (2006)).
telegraphs. 57 Thus, it somewhat mysteriously referred to the “sender” of a communication, which was a bit difficult to apply to telephone calls. 58 It also forbade “interception” of wire communications, but did not define the term. These interpretive difficulties with the statute surfaced only rarely, however, because despite its broad language, section 605 was given a very narrow application by courts; it was construed to prohibit only the act of both intercepting and divulging wire communications, not intercepting alone. 59 Essentially, section 605 barred only the introduction of wiretap evidence in federal courts. 60 It failed to regulate eavesdropping at all, as well as wiretapping outside of a federal court case; and it failed to provide for use of wiretap evidence in courts under any circumstances, even where it was possible to obtain a warrant or court order. For these reasons, Attorney General Nicholas Katzenbach in the 1960s called section 605 “the worst of all possible solutions.” 61 These problems with section 605 were recognized soon after it was adopted, but for several decades efforts to amend it went nowhere. Initially, most attempts at amendment would have simply grafted on a provision allowing the introduction of wiretap evidence in federal court 57 Several earlier statutes had protected the privacy of telegraphic communications. See, e.g., Act of Oct. 29, 1918, Pub. L. No. 65-230, 40 Stat. 1017 (prohibiting “tap[ping]” of telephone or telegraph lines maintained by Postal Service without authorization or consent of the users). This includes some of the early regulations of radio. Although radio was a newer technology than telephones, until the advent of commercial broadcasting in the 1920s, radio was widely used in a more traditional way: to send “wireless” telegraphic messages. 
Foreshadowing the Stored Communications Act, the Radio Act of 1912 prohibited radio operators from divulging messages to anyone other than the intended recipient or another station. Radio Act of 1912, Pub. L. No. 62-264, § 4(19), 37 Stat. 302, 307; cf. 18 U.S.C. § 2702 (2006 & Supp. IV 2011). The Radio Act of 1927 amended this language to add, for the first time, a prohibition on “interception” of communications:

[N]o person not being authorized by the sender shall intercept any message and divulge or publish the contents, substance, purport, effect, or meaning of such intercepted message to any person; . . . Provided, That this section shall not apply to the receiving, divulging, publishing, or utilizing the contents of any radio communication broadcasted or transmitted by amateurs or others for the use of the general public or relating to ships in distress. Radio Act of 1927, Pub. L. No. 69-632, § 27, 44 Stat. 1162, 1172. The context makes clear that “interception” as used in the Radio Act refers to the act of someone obtaining a radio-transmitted telegraph in between sending and receipt, either by listening in on the radio signal or by obtaining it from one of the intermediate stations. Id. This provision formed the basis of section 605 of the Communications Act seven years later, when it was extended to include all communications “by wire or radio.” Communications Act of 1934 § 605. 58 See United States v. Polakoff, 112 F.2d 888, 889 (2d Cir. 1940) (“The word, ‘sender,’ in § 605 is less apt for a telephone talk than for a telegram, as applied to which there can be no doubt of its meaning.”). 59 See, e.g., Massicot v. United States, 254 F.2d 58, 65 (5th Cir. 1958). 60 See 1 JAMES G. CARR & PATRICIA L. BELLIA, Title III Enactment, Constitutionality, in THE LAW OF ELECTRONIC SURVEILLANCE §§ 2.9–.15 (2006). 61 Criminal Laws and Procedures: Hearing Before the Subcomm. on Criminal Laws and Procedures of the S. Comm. on the Judiciary, 89th Cong. 38 (1966) (statement of Nicholas deB. Katzenbach, Att’y Gen. of the United States).
in certain cases where those wiretaps were specifically authorized by the Attorney General; for example, national security prosecutions or organized crime. 62 Most such bills did not define “interception,” and several confusingly seemed to distinguish “interception” from other forms of obtaining a communication, such as “listening in on,” “recording,” and “acquiring.” 63 Ultimately, by the mid-1950s, the definition of “intercept” began to take its modern form, as the character of proposed legislation changed from merely tinkering with section 605 to the addition of an entirely new chapter or chapters to the criminal code. 64 The shift occurred as the question of wiretapping and eavesdropping regulation began to capture sustained public attention. Numerous legislative hearings were conducted, 65 state bar committees studied the problem, 66 exposés on electronic surveillance were published, 67 symposia were held at law schools, 68 and the issue was debated on television. 69 By that point, the question was becoming more urgent, and several bills were introduced that would work a complete overhaul of electronic surveillance law. Emblematic of the structure of the proposals that eventually resulted in the Wiretap Act was the 1961 bill proposed by Connecticut Senator Thomas Dodd, the first to receive the support of the Department of Justice. The outlines of the regulation of wiretapping in 62 See, e.g., S. 3756, 75th Cong. § 2 (1938), reprinted in Wiretapping, Eavesdropping, and the Bill of Rights: Hearing Before the Subcomm. on Constitutional Rights of the S. Comm. on the Judiciary, 86th Cong. 955, 956 (1959) [hereinafter 1959 Hearings]. The level of sustained concern about organized crime, the challenges it posed to traditional law enforcement, and the changes wrought to criminal law as a result, seem to have made it the mid-twentieth-century equivalent of present concerns over domestic terrorism. 63 See, e.g., H.R. 3563, 81st Cong. 
(1949), reprinted in 1959 Hearings, supra note 62, at 987; H.R. 9929, 81st Cong. (1950), reprinted in 1959 Hearings, supra note 62, at 1001; H.R. 4228, 77th Cong. (1941), reprinted in 1959 Hearings, supra note 62, at 971. Either the bills were using synonyms in an attempt to eliminate ambiguity, or “interception” was viewed as something distinct from “listening in on,” “recording,” or “acquiring.” 64 See, e.g., H.R. 4513, 84th Cong. (1955), reprinted in 1959 Hearings, supra note 62, at 1017, 1019 (introduced by Rep. Emanuel Celler of N.Y.) (defining “intercept” as “the obtaining of the whole or any part of a telephone communication by means of any device, contrivance, or machine, of any kind”). The definition went on to include two exceptions, versions of which appear in the Wiretap Act: “[B]ut it shall not include eavesdropping on a party line or any act or practice done in the ordinary and usual course of business in the operation or use of common carrier communications system by regular employees thereof.” Id. 65 See, e.g., 1959 Hearings, supra note 62; N.Y. J. Legis. Comm. on Privacy of Commc’ns & Licensure of Private Investigators, Rep. No. 1958-9 (N.Y. 1958), reprinted in 1959 Hearings, supra note 62, at 1041. 66 ASS’N OF THE BAR OF THE CITY OF N.Y., REPORT ON PENDING WIRETAP BILLS (1954), reprinted in Wiretapping and Eavesdropping Legislation: Hearings Before the Subcomm. on Constitutional Rights of the S. Comm. on the Judiciary, 87th Cong. 143 (1961). 67 See, e.g., SAMUEL DASH ET AL., THE EAVESDROPPERS (1959). 68 See, e.g., Symposium, The Wiretapping-Eavesdropping Problem: Reflections on The Eavesdroppers, 44 MINN. L. REV. 808, 811 (1960). 69 See, e.g., All America Wants to Know: Are Wiretapping Laws Helping Criminals? (Reader’s Digest & Freedoms Foundation television broadcast Mar. 1962), available at http://blogs.princeton.edu/reelmudd/2010/12/are-wiretapping-laws-helping-criminals.html.
the Dodd bill were clear: all wiretaps conducted without consent, outside of the normal course of business of the telephone company, or without a court order, were prohibited. The Dodd bill was the first to define a prohibited “interception” as “acquisition,” namely, “the acquisition of the contents of any wire communication from a wire communication facility or component thereof, through the use of any intercepting device, by any person other than the sender or receiver of such communication or a person authorized by either.” 70 This definition of interception had three important features: first, it applied only to “acquisition” of the contents of a communication, rather than any other fact about the communication; second, it required the use of an “interception device”; third, and most importantly for the discussion here, the bill explicitly defined interception as the actions of a person, namely a third party interloper acting without the consent of either party to the communication. Thus, use of an interception device alone was not an interception under the Dodd bill definition; only use of a device by a person enabling that person to acquire the contents of a communication constituted an interception under the bill. Making a recording would likely have satisfied this definition, as it constituted a means by which a person, using the recorder, could acquire the contents of a conversation. This makes sense, as Congress was well aware that recordings were commonly used in the course of intercepting communications. 71 Simple machine processing without recording, however, to the extent it existed, 72 does not appear to fall within the Dodd bill, as it would not allow a person to acquire the contents of a communication. Despite its eventual adoption, and despite the increasing attention being given to electronic surveillance, 73 the Dodd bill languished for 70 S. 1495, 87th Cong. 
§ 2(4) (1961), reprinted in Wiretapping and Eavesdropping Legislation, supra note 66, at 4. Versions of the Dodd bill were repeatedly reintroduced in Congress throughout the 1960s, and continued to receive Administration support. See G. Robert Blakey & James A. Hancock, A Proposed Electronic Surveillance Control Act, 43 NOTRE DAME L. REV. 657, 661 n.5 (1968) (noting that S. 675 in the 90th Congress was the bill “long supported by the United States Department of Justice”). 71 See, e.g., H.R. REP. NO. 77-358, at 1 (1941), reprinted in 1959 Hearings, supra note 62, at 972, 972 (bill would authorize Attorney General to “intercept, listen in on, or record telephone, telegraph, or radio messages or communications”); Wiretapping and Eavesdropping Legislation, supra note 66, at 14 (S. 1221 applicable to all “unauthorized snooping by means of acoustical devices,” including wiretapping, “microphones, detectaphones, spike mikes, recorders, and similar devices not yet invented”); id. at 61–65 (transcript of recording of telephone scam played during subcommittee hearing). 72 Mechanical telephone switches had existed since the late 19th century, but automated electronic telephone switches capable of handling large amounts of traffic were not installed until the mid-1960s. Such equipment would have been excluded from the definition of “intercepting device” as a switchboard or other wire communication facility. See S. 1495 § 2(5), Wiretapping and Eavesdropping Legislation, supra note 66, at 4. 73 The pace of Congressional hearings alone noticeably increased. See Right of Privacy Act of 1967: Hearings Before the Subcomm. on Admin. Practice and Procedure of the S. Comm. on the Judiciary, 90th Cong. (1967); Criminal Laws and Procedures: Hearings Before a Subcomm. on Criminal Laws and Procedures of the S. Comm. on the Judiciary, 89th Cong. (1966); Wiretapping and Eavesdropping Legislation, supra note 66; Wiretapping—The Attorney General’s Program—1962: Hearings Before the S. Comm. on the Judiciary, 87th Cong. (1962).

several years. In 1967, it served as the basis for the wiretapping and electronic eavesdropping legislation proposed by a presidential commission, 74 the Commission on Law Enforcement and Administration of Justice, otherwise known as President Johnson’s Crime Commission. The Commission issued its report in February, 75 and the appendix containing the proposed legislation was published a few months later. 76 At that point, the Supreme Court forced Congress’s hand. In Berger v. New York, the Court upended the regulation of electronic surveillance by subjecting electronic eavesdropping to Fourth Amendment restrictions, invalidating New York’s law permitting government eavesdropping and wiretapping with a court order. 77 Two weeks later, a reworked version of the Commission’s bill was introduced in the Senate; 78 and after the Supreme Court’s decision in Katz v. United States 79 six months later, it was inserted as Title III in a package of anticrime proposals that had already passed the House, the Omnibus Crime Control and Safe Streets Act of 1968 (OCCSSA). 80 In the course of that final rush to adoption, two significant changes happened to the Wiretap Act that clouded the status of recordings, as well as automated processing. First, the bill was broadened to prohibit

74 See Blakey & Hancock, supra note 70, at 662–63 n.10 (noting that definition of “intercept” in Commission’s proposed bill was modeled on Senate Bill 675, S. 675, 90th Cong. § 10(5) (1967)). Senate Bill 675 was a version of Senator Dodd’s 1961 bill. See id. at 661 n.5 (“[S. 675] is the bill long supported by the United States Department of Justice.”). 75 COMM’N ON LAW ENFORCEMENT & ADMIN. OF JUSTICE, THE CHALLENGE OF CRIME IN A FREE SOCIETY (1967).
76 G. Robert Blakey, Aspects of the Evidence Gathering Process in Organized Crime Cases: A Preliminary Analysis, in COMMISSION ON LAW ENFORCEMENT AND ADMINISTRATION OF JUSTICE, TASK FORCE REPORT: ORGANIZED CRIME 80 (1967) (setting forth proposed statute). Prof. Blakey is generally credited as the “architect of Title III.” Heggy v. Heggy, 944 F.2d 1537, 1541 (10th Cir. 1991). 77 388 U.S. 41 (1967). 78 Berger was decided on June 12, 1967. Id. Senate Bill 2050, 90th Cong. (1967), which was a version of the Electronic Surveillance Control Act of 1967 proposed in Blakey, supra note 76, was introduced by Senator Hruska two weeks later, on June 29, 1967. A companion bill to Senate Bill 2050, House Bill 13275, 90th Cong. (1967), was later introduced in the House by Reps. Ford and McCulloch. See S. REP. NO. 90-1097 (1968), reprinted in 1968 U.S.C.C.A.N. 2112, 2284 (statement of Sen. Eastland). 79 389 U.S. 347 (1967). In Katz, the Supreme Court finally abandoned the theory, cited in Berger, that the Fourth Amendment applied only to “trespassory intrusions” on private conversations, famously holding that “the Fourth Amendment protects people, not places.” Id. at 351. Thus, any electronic eavesdropping and wiretapping would be subject to Fourth Amendment restrictions, no matter how conducted. Id. at 353. This represented a shift from practice under section 605, which had been held to regulate only wiretapping, not eavesdropping. See Goldman v. United States, 316 U.S. 129, 134 (1942). The surveillance in Katz constituted eavesdropping rather than wiretapping because only Katz’s side of the conversation was overheard. Katz, 389 U.S. at 348. 80 Pub. L. No. 90-351, 82 Stat. 197 (1968). Among other things, the law provided federal funding to state law enforcement agencies, expanded the responsibilities of the Department of Justice, made changes to habeas corpus procedures, and added federal firearms regulations.
not just wiretapping, which had been the sole focus of the Dodd bill, but also electronic eavesdropping. The definition of “intercept” was accordingly changed to cover any form of acquiring the contents of a private conversation, whether oral or transmitted by wire. But in the process, the bill’s drafters became concerned that such a broad prohibition might sweep in too much, such as surveillance methods that had nothing to do with a person listening to a conversation. The Crime Commission Task Force thus proposed limiting the definition of “intercept” to only aural acquisitions of the contents of communications; 81 that is, acquisitions accomplished by hearing the communication, 82 as opposed to other forms of surveillance, such as visual surveillance. 83 Interception, under the old Wiretap Act, therefore consisted of obtaining the contents of a communication by listening to it. This definition aligns well with the most straightforward view of the harms caused by wiretapping and eavesdropping. Under the Wiretap Act as originally adopted, the harm of interception is the harm that results when someone uninvited listens to a private conversation. But that definition of “intercept” presents a puzzle: what about recordings? Does a recording of a conversation “aurally acquire” it at the moment of recording, or is it only acquired when it is later listened to? Members of Congress clearly intended to address the use of recordings in surveillance. 84 Indeed, the statute they drafted requires recordings to be made, where possible, of communications intercepted by law enforcement personnel pursuant to a court order. 85 But not all interceptions need unfold like the typical law enforcement wiretap, 81 See Blakey, supra note 76, at 106 (proposed 18 U.S.C. 
§ 3806(8): “The term ‘intercept’ means the aural acquisition of the contents of any wire or oral communication through the use of any intercepting device by any person other than the sender or receiver of such communication or a person given prior authority to by either.”); see also S. 2050, 90th Cong. § 3(a) (1967) (providing a similar definition in proposed new section 18 U.S.C. § 2510(8)). 82 See, e.g., United States v. Rodriguez, 968 F.2d 130, 136 (2d Cir. 1992) (“[S]ince the definition of interception includes the ‘aural’ acquisition of the contents of the communication, the interception must also be considered to occur at the place where the redirected contents are first heard.”); United States v. Falcone, 505 F.2d 478, 482 (3d Cir. 1974) (“An ‘aural acquisition’ by definition engages the sense of hearing.”). 83 As used in the legislative history, references to “aural” surveillance were typically contrasted with “visual” surveillance. The difficulty in interpreting the term is in trying to determine what it excludes that was not already excluded from coverage under the Act. Professor Blakey explained the addition as intended to exclude such activities as shining a searchlight on a boat, see United States v. Lee, 274 U.S. 559 (1927); using a geiger counter (or “scintillator”) outside of an apartment, see Corngold v. United States, 367 F.2d 1 (9th Cir. 1966); or obtaining phone records from the telephone company, see United States v. Russo, 250 F. Supp. 55 (E.D. Pa. 1966). The Senate Judiciary Committee report contained nearly the same explanation. See S. REP. NO. 90-1097 (1968), reprinted in 1968 U.S.C.C.A.N. 2112, 2178. But none of those forms of surveillance were likely covered under the Act to begin with, as they would not have acquired “the contents of a communication.” 84 See S. REP. NO. 90-1097, reprinted in 1968 U.S.C.C.A.N. 2112, 2162 (noting that Katz’s words had been recorded); id. at 2224 (individual views of Sens. Long and Hart). 85 See 18 U.S.C. 
§ 2518(8)(a) (2006).

688

CARDOZO LAW REVIEW

[Vol. 34:669

where the person making the recording is also engaged in live surveillance. What members of Congress apparently failed to contemplate was that the act of recording might be performed at a different time than the act of listening, and by someone else. Such a situation requires a determination whether the act of recording, by itself, constitutes an interception. The second important change was made at almost the last minute, after the Wiretap Act had been added to the OCCSSA. Under the bill as originally proposed, the parties to a conversation could never be liable for intercepting it, because by definition, an interception required a “person other than the sender or receiver of such communication . . . .” 86 The earlier definitions thus made abundantly clear that interceptions were actions performed by persons, not devices. When the Wiretap Act was added as Title III, however, any mention of the parties to a communication or other persons was stripped from the definition of “intercept.” Instead, the definition of “intercept” in the final bill read: “‘intercept’ means the aural acquisition of the contents of any wire or oral communication through the use of any electronic, mechanical, or other device.” 87 Being a party to the communication, or being authorized by a party, instead became a defense to a claim of interception. The reason for the change was to make consent a defense available in some situations but not others. Specifically, private parties, but not government agents, who participate in conversations for the purpose of committing a crime or tort cannot take advantage of the consent defense. 88 The result is that merely answering a telephone call for the purpose of committing a crime or tort, such as blackmail, is an interception in violation of the Wiretap Act. 89 The effect of both amendments was to remove some of the clarity in the earlier bills concerning the roles of persons and machines in prohibited interceptions.
Defining interceptions as “aural acquisitions” 86 See, e.g., S. 2050 § 3(a). Senate Bill 675, 90th Cong. § 10(5) (1967), defined “intercept” in almost the same way: “The term ‘intercept’ means the acquisition of the contents of any wire communication from a wire communication facility or component thereof, through the use of any intercepting device, by any person other than the sender or receiver of such communication or a person authorized by either.” 87 Omnibus Crime Control and Safe Streets Act of 1968, Pub. L. No. 90-351, § 802, 82 Stat. 197, 212. 88 See 18 U.S.C. § 2511(2)(c), (d). 89 The exception was probably intended more to apply to recordings than merely answering a telephone call, but answering a residential telephone constitutes the use of a device to aurally acquire the contents of a communication, and thus literally falls within the definition of “intercept.” (There is some debate, not important for present purposes, about whether a residential telephone in ordinary use is a “device” as defined under the Act, which excludes telephone equipment used by the subscriber “in the ordinary course of its business.”) In any event, it is not clear why it was necessary to add the exception at all, since the underlying act is, by definition, already a crime or tort and thus subjects the actor to criminal or civil liability. It appears the amendment needlessly complicated what were relatively intuitive definitions of “intercept” and “consent.”


rather than merely acquisitions seemingly narrowed the scope of the Act, but since it was clear that machine capture of conversations—through recording devices—was intended to be among the prohibited behaviors, courts have consistently struggled to make sense of the definition of “interception” in a way that gives some meaning to “aural” while ensuring that privacy-invasive recordings are prohibited. 90 The second amendment similarly clouded the scope of the Act by removing the term “person” from the definition of “intercept,” which, on a cursory reading, may make it seem as though machine processing is subject to the limitations of the Act. Nevertheless, it is clear from the legislative history that neither amendment was intended to work a fundamental shift in the scope of the Act. The Wiretap Act as finally passed prohibits only invasions by persons of the privacy of communications, not by machines. The judicial interpretation of these provisions is consistent with this reading of the Act.

B. Judicial Interpretation of the Wiretap Act

Eventually cases began to emerge that raised the distinction between recording and live surveillance. The issue is difficult. If “aural acquisition” means listening, the act of making a recording would appear not to qualify as an interception. Rather, it would be the act of listening to the recording that would intercept the conversation. But however well that interpretation corresponds with the harms to be prevented, it has certain troubling consequences. It would mean that the person making a recording, arguably the most culpable party, would go free, while those merely listening to it would be punished. It would also mean that each recording of a conversation could give rise to multiple interceptions of the same conversation, days or even years after it occurred. And finally, it would create an odd dichotomy with mail. An opened letter in someone’s possession would be subject to search based on a search warrant, or even less. 91 A recorded conversation in someone’s possession, however, would require a court order, even though in both cases a record of a private communication would be obtained. Only the form of the record would be different. The Fifth Circuit was one of the first courts to confront these issues, in an influential opinion issued eight years after the Wiretap Act was adopted. In United States v. Turk, the defendant, Frederick Turk, had discussed a narcotics transaction over the phone with a fellow 90 After passage of the ECPA, the definition of “intercept” now refers to the “aural or other” acquisition of the contents of a communication. Still, the courts’ struggles to determine how a recording could be an aural acquisition informed the later debates about whether copying of emails in transit constituted the “aural or other acquisition” of those communications. 91 See Goldman v. United States, 316 U.S. 129, 134 (1942) (noting that a letter still in possession of the sender is not subject to statute protecting mail from search).


trafficker, Charles Kabbaby. 92 Unfortunately for Turk, Kabbaby recorded the call, and Kabbaby had the tape in his car when he was arrested. The police listened to the tape, and Turk was convicted of perjury for having earlier lied to a grand jury. The issue before the Fifth Circuit was whether, in listening to Kabbaby’s recording, the police “intercepted” the telephone call between Turk and Kabbaby. 93 This required identifying the precise act that intercepted their conversation. 94 If it was Kabbaby’s act of recording, then the tape was covered by the consent exception, and listening to the tape was not a violation. 95 But if it was the police’s act of listening, then they did so without the consent of either party, and a court order was required. The Turk court rejected the notion that a communication is “aurally acquired” each time a recording is replayed. Not only would that mean that “innumerable ‘interceptions,’ and thus violations of the Act, could follow from a single recording,” but, according to the court, it would also make the separate prohibition on disclosing the contents of an unlawfully intercepted communication superfluous, as each new disclosure would also be a new interception. 96 Rather, the court concluded that the aural acquisition occurs when a recording is made, with the recorder serving as “the agent of the ear.” 97 In other words, recording serves as a substitute for listening. As a result, despite the focus of the statute on the act of listening, the Turk court suggested that in some circumstances machine handling of communications could itself qualify as an acquisition. 98 92 526 F.2d 654 (5th Cir. 1976). 93 Id. at 657. 94 Id. at 657–58. 95 Further acquisitions of a lawfully acquired communication are also lawful. See id. at 658. For example, it is not a violation to record a lawfully monitored conversation. See United States v. Cheely, 814 F. Supp. 1430, 1441–42 (D. Alaska 1992), aff’d, 36 F.3d 1439 (9th Cir. 1994).
Similarly, recording a lawfully made recording is lawful. See Payne v. Norwest Corp., 911 F. Supp. 1299, 1303 (D. Mont. 1995), rev’d in part on other grounds, 113 F.3d 1079 (9th Cir. 1997). 96 Turk, 526 F.2d at 658. The latter conclusion is subject to some doubt. If listening to a recorded conversation were an interception, disclosure liability would not be eliminated. Rather, after the initial interception by listening, some subsequent disclosures would give rise to two violations instead of one. Reciting the contents of an overheard conversation to another would constitute disclosure, but not interception. The person receiving the contents of the intercepted conversation would in addition be liable for interception. 97 Id. at 658 n.2 (“In a forest devoid of living listeners, a tree falls. Is there a sound? The answer is yes, if an active tape recorder is present, and the sound might be thought of as ‘aurally acquired’ at (almost) the instant the action causing it occurred.”). The Turk court held out the possibility that aural acquisition occurs only when a recording is made and then listened to by the same person making it. Id. at 658. But that would appear to be too restrictive, as it would allow easy circumvention of the Wiretap Act: a recording made by one person and listened to only by others would not be subject to the Act at all. 98 As noted above, the Turk court was tentative on this point, expressing no view as to whether interception was complete when recorded, or only when recorded and listened to by the person making the recording—i.e., when the communication was delivered to the “principal,” in Turk’s analogy. The distinction did not matter in Turk because either way the police officers listening to Kabbaby’s tape would not have been committing a violation.


Turk’s reading of the definition of “intercept” has proven influential. Turk conforms the text of the statute to a common-sense understanding of what “interception” requires: in some sense getting a hold of a communication as it is transiting between two parties. In other words, interception of a communication should work something like an interception in football. 99 In a situation where a person records a conversation, it would seem that the person making the recording, as opposed to people later listening to it, is committing the most direct intrusion upon the privacy of the communication. 100 Turk thus encouraged a focus on the act of acquisition that was “contemporaneous” with the conversation, as opposed to acts of listening that occurred later in time, 101 a notion that quickly became the common view of the Wiretap Act. It is worth keeping in mind the basis of the Turk theory. Under the theory, it is not just any piece of equipment that can aurally acquire a conversation in the absence of a live listener. Rather, recorders acquire conversations because they serve the purpose of permitting human listeners to obtain the contents of the communication later. They are “agents” of a person’s ear, in the Turk court’s analogy. If a machine can intercept a communication at all, the implication is that it occurs only when the machine preserves a communication for the purpose of later review by a third person. Later courts have generally followed the Turk court’s holding that recordings can intercept a communication, and that noncontemporaneous acquisitions of the contents of a communication do not, even if they have not greatly expounded on the reasons why this is true. For example, most courts faced with the question have held that where a recording device and other equipment is used to ultimately listen to a conversation, the use of the recording device constitutes the relevant point of interception, not the act of listening to the recording. 
102 And most courts have held that even recorded conversations that are never actually listened to nevertheless aurally acquire those 99 See United States v. Szymuszkiewicz, 622 F.3d 701, 705 (7th Cir. 2010) (noting common sense understanding that interception in communication is akin to interception in football, but questioning its application to email). 100 “The words ‘acquisition . . . through the use of any . . . device’ suggest that the central concern is with the activity engaged in at the time of the oral communication which causes such communication to be overheard by uninvited listeners.” Turk, 526 F.2d at 658 (alterations in original) (quoting 18 U.S.C. § 2510(4)). 101 According to the Turk court, “interception” requires “participation by the one charged with an ‘interception’ in the contemporaneous acquisition of the communication through the use of the device.” Id. 102 See, e.g., Deal v. Spears, 980 F.2d 1153 (8th Cir. 1992); United States v. Rodriguez, 968 F.2d 130 (2d Cir. 1992); Pascale v. Carolina Freight Carriers Corp., 898 F. Supp. 276 (D.N.J. 1995); In re State Police Litig., 888 F. Supp. 1235 (D. Conn. 1995). But see Epps v. St. Mary’s Hosp. of Athens, Inc., 802 F.2d 412 (11th Cir. 1986) (holding that dispatch console to which recording device was attached, and not recording device, was relevant point of interception).


conversations. 103 This is consistent with one theory offered in Turk: it is the recording for the purpose of later listening that is the violation, rather than recording and later listening, which would be little different from listening alone. Several courts have held that while recording devices might perform interceptions under the Wiretap Act, other sorts of devices that handle communications do not. The issue typically arises in the context of the so-called “business extension exception,” under which telephone equipment being used in the ordinary course of business—such as a separate extension in an office setting—does not qualify as a requisite “device” under the Act. 104 The most difficult cases, however, are those that involve a device that diverts a signal to a recording device, and there is some reason why it matters whether the diversion itself intercepted the communication, or the making of the recording. Courts have struggled in such cases to determine exactly when and where the prohibited interception may be said to have occurred. The cases fall into a few different categories. Several cases occurred before the Wiretap Act was amended by the ECPA in 1986 to except the use of any telephone equipment in the ordinary course of business—not just equipment supplied by the phone company—from liability. Thus prior to 1986, only devices supplied by a common carrier fell within this exception, and it therefore mattered precisely which device in a given fact pattern actually was used to perform the interception. Courts have split on that issue, with some finding that the diverting device was the point of interception, and others that the recorder was. 105 In all of the cases, however, the purpose of the diversion was to make a recording or 103 See, e.g., United States v. Lewis, 406 F.3d 11, 18 n.5 (1st Cir. 2005); Sanders v. Robert Bosch Corp., 38 F.3d 736, 740 (4th Cir. 1994); Stockler v. Garratt, 893 F.2d 856, 859 (6th Cir. 1990); Jacobson v. 
Rose, 592 F.2d 515, 522 (9th Cir. 1978); Ali v. Douglas Cable Commc’ns, L.P., 929 F. Supp. 1362, 1380 (D. Kan. 1996); Pascale, 898 F. Supp. at 279; In re State Police, 888 F. Supp. at 1264; Amati v. City of Woodstock, 829 F. Supp. 998, 1008 (N.D. Ill. 1993); cf. Arias v. Mut. Cent. Alarm Serv., Inc., 202 F.3d 553, 558 (2d Cir. 2000) (noting issue may need congressional clarification). At least one court has held that recordings constitute interceptions only because of the passage of the ECPA, which expanded the definition of “intercept” to cover “aural or other acquisition” of the contents of a communication. See George v. Carusone, 849 F. Supp. 159, 163 (D. Conn. 1994). But cases following Turk pre-date the ECPA. A few courts have held that unheard recordings do not violate the Act. See, e.g., Greenfield v. Kootenai Cnty., 752 F.2d 1387, 1389 (9th Cir. 1985) (recording not “aural acquisition” outside of program of active surveillance); By-Prod Corp. v. Armen-Berry Co., 668 F.2d 956, 960 (7th Cir. 1982) (Congress could not have intended $1000 minimum damages for erased recording that had never been heard). 104 The Wiretap Act does not prevent all listening to conversations without consent, but only those that make use of an “electronic, mechanical, or other device.” 18 U.S.C. § 2510(4) (2006). Ordinary eavesdropping is beyond the statute. 105 Compare United States v. Murdock, 63 F.3d 1391, 1394–96 (6th Cir. 1995) (noting that a recording device, and not an extension phone, was the instrument that intercepted the call), and Deal, 980 F.2d at 1157–58 (same), with United States v. Harpel, 493 F.2d 346, 350 (10th Cir. 1974) (stating that interception occurred at an extension telephone, not at a recorder attached to the extension), and Epps, 802 F.2d at 414–15 (stating the point of interception was the point at which the signal was diverted, not the point at which it was recorded).


otherwise enable an interception, and the only question was who supplied the relevant equipment. In the few cases where diversion alone has been considered without any subsequent recording or listening, liability has been rejected. 106 United States v. Rodriguez, 107 although decided after the ECPA was enacted, is similar. In Rodriguez, the Second Circuit declared that an interception occurs “when the contents of a wire communication are captured or redirected in any way.” 108 But Rodriguez concerned not mere redirection, but redirection followed by recording and analysis by the DEA. 109 The government had obtained a wiretap authorization from the Southern District of New York, and had arranged for an additional line to be installed by the telephone company at a restaurant in New Jersey that diverted calls to the DEA’s offices in Manhattan for recording and later analysis on equipment there. 110 The issue was whether the court’s authorization was consistent with 18 U.S.C. § 2518(3), which permits a judge authorizing a wiretap to do so only “within the territorial jurisdiction of the court in which the judge is sitting.” 111 Thus, the question in Rodriguez was not whether the diversion alone constituted an interception, but rather, whether an interception that crosses jurisdictional lines occurs in the jurisdiction where the line is tapped, where the intercepting equipment is located, or both. The Second Circuit concluded, in the language quoted above, that it occurs at least in the jurisdiction where the tap occurs—with the court assuming that “[r]edirection presupposes interception.” 112 But the Second Circuit then went on to hold that interception also occurs in the jurisdiction where the communication is heard—that is, where it is actually acquired. 
113 Despite its language, Rodriguez therefore does not support the conclusion that a redirection of content that does not lead 106 See, e.g., Sanders, 38 F.3d at 740 n.8, 742 (stating that sounds transmitted to an unmonitored speaker are not intercepted); Pascale, 898 F. Supp. at 280 n.1 (stating that interception occurs when communications are “permanently memorialized, a feat impossible for a wire to perform”). 107 968 F.2d 130 (2d Cir. 1992). 108 Id. at 136. 109 Id. at 135–36. 110 Id. at 134. 111 18 U.S.C. § 2518(3) (2006). 112 Rodriguez, 968 F.2d at 136. It is important to note that the Rodriguez court did not say “Redirection constitutes interception.” The court’s epigram appears rather to mean that redirection is a necessary component of interception, for in every interception, “the contents of the conversation, whether bilateral as is usually the case, or multilateral as is the case with a conference call, are transmitted in one additional direction.” Id. Therefore, one place and time that interception occurs is the place and time it is redirected toward ultimate acquisition. The court concluded that a federal court at the location of the phone to be tapped would thus have the authority to issue a wiretap order under § 2518(3). This is ultimately not that different than regarding a recorder as “the agent of the ear.” United States v. Turk, 526 F.2d 654, 658 n.2 (5th Cir. 1976). 113 Rodriguez, 968 F.2d at 136.


to any human acquisition at all constitutes an interception. 114 There is at least one line of cases that would appear to support that conclusion, although they are not Wiretap Act cases. In People v. Bialostok, the police installed a pen register on the telephone line of a suspected gambling operation, 115 which at the time did not require a warrant. 116 A warrant was required only for an “eavesdropping” device, such as a wiretap. The device the police used to capture the dialed phone numbers, however, was more advanced than a simple pen register; if an audio cable was attached to an output on the device, it was capable of passing the audio signal from the telephone call to an attached recording device. In Bialostok, that function was not enabled until later, after the police had obtained a warrant permitting wiretapping based on the evidence they had collected from the pen register. 117 The New York Court of Appeals, New York’s highest court, held that the fact that the device had the capability to redirect the signal from a telephone call meant that the device “acquired the ‘contents of communications’ from the moment it was installed.” 118 The attachment of a separate recorder “merely made accessible what was already being acquired.” 119 That meant that installation and use of the device, even as just a pen register, nevertheless required a warrant under New York’s eavesdropping statute. 120 The court’s theory appears to have been that passage of an audio signal through a device, combined with the presence in a device of wiring leading to an audio output that is not being used, “acquires” that signal within the meaning of the New York eavesdropping statute and, presumably, the Wiretap Act. The Bialostok court based its interpretation of New York law in 114 The Rodriguez court’s statement that a wire communication is intercepted when its “contents . . . are captured or redirected in any way” was cited favorably by the Ninth Circuit in Noel v. 
Hall, 568 F.3d 743, 749 (9th Cir. 2009). But the issue in Noel had nothing to do with redirection without capture, but rather whether the replaying of a recorded conversation constitutes a new act of interception by the listener, the same issue raised in Turk. A similar explanation underlies the court’s statement in In re State Police Litigation, 888 F. Supp. 1235, 1264 (D. Conn. 1995), that “it is the act of diverting, and not the act of listening, that constitutes an ‘interception.’” The issue in In re State Police was not whether simple diversion constituted an interception, but whether it was the making of a recording or the act of listening to it that was the interception. Courts from Turk on forward have rejected the conclusion that listening to a recording is an interception; the status of redirection without recording was irrelevant in those cases. Indeed, as argued above, to the extent Turk relied on a theory that a recording intercepts a conversation precisely because it preserves the contents for later human review, Turk cuts against mere redirection being deemed an interception. 115 610 N.E.2d 374 (N.Y. 1993). My thanks to Steven Bellovin for calling this case to my attention. 116 New York subsequently amended its eavesdropping statute to require a court order for installation of pen registers based on a lesser showing, namely “reasonable suspicion.” See N.Y. CRIM. PROC. LAW § 705.10 (McKinney 2012); Bialostok, 610 N.E.2d at 376 n.3. 117 610 N.E.2d at 376. 118 Id. at 377. 119 Id. 120 New York, in passing a combined wiretapping and eavesdropping law, took the opposite tack from Congress and refers to its statute as an “eavesdropping” statute.


part on the grounds that the use of a pen register with interception capability raised Fourth Amendment and New York constitutional concerns, and that those concerns should shape its interpretation of the statute. 121 In particular, the Bialostok court believed that the device at issue exceeded the bounds of the Supreme Court’s decision in Smith v. Maryland, 122 which held that the use of pen registers is not a “search” requiring a warrant under the Fourth Amendment. Smith reached its conclusion on the basis of the so-called “third party” doctrine, namely that a person cannot have a reasonable expectation of privacy in information that is passed to a third party such as the phone company. 123 The Bialostok court believed that an important premise of the Smith decision was that the pen register at issue in Smith, as the Smith court noted, “differs significantly from the listening device employed in Katz, for pen registers do not acquire the contents of communications.” 124 But Bialostok appears to have misread the Smith decision. The discussion of the limited capabilities of the pen register at issue in Smith was simply the Court’s attempt to delineate the contours of Smith’s argument, not an attempt to place Fourth Amendment limits on the unexploited capabilities of pen register devices. “Given a pen register’s limited capabilities,” the Smith Court concluded, “petitioner’s argument that its installation and use constituted a ‘search’ necessarily rests upon a claim that he had a ‘legitimate expectation of privacy’ regarding the numbers he dialed on his phone.” 125 In other words, given that there was nothing else Smith could be claiming, he must have been claiming that the numbers he dialed were subject to Fourth Amendment protection. That is very far from a conclusion that only devices whose sole function is to record telephone numbers can qualify as a pen register. 
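The capability-versus-use distinction on which Bialostok turned can be made concrete with a short sketch. This is a purely hypothetical illustration (the class and field names are invented, not drawn from the article or any case record): a pen-register-style device necessarily handles the full signal in order to log dialed numbers, but retains nothing a person could ever hear, while the same device with its audio output in use preserves the contents for later human review.

```python
# Hypothetical sketch (invented names): contrasts transient machine handling
# of a communication with preservation of its contents for later human review.
from dataclasses import dataclass, field

@dataclass
class Call:
    dialed_number: str   # metadata (what a pen register logs)
    audio: bytes         # contents (what a wiretap would preserve)

@dataclass
class PenRegister:
    """Logs dialed numbers only; the audio passes through and is discarded."""
    log: list = field(default_factory=list)

    def handle(self, call: Call) -> None:
        self.log.append(call.dialed_number)   # metadata retained
        # call.audio is momentarily "in" the device -- the latent capability --
        # but nothing is kept, so no person can later recover the contents.

@dataclass
class TapRecorder(PenRegister):
    """The same device with its audio output attached: contents are preserved."""
    recordings: list = field(default_factory=list)

    def handle(self, call: Call) -> None:
        super().handle(call)                  # still logs the dialed number
        self.recordings.append(call.audio)    # contents kept for later listening

call = Call("555-0100", b"private conversation")
pen, tap = PenRegister(), TapRecorder()
pen.handle(call)
tap.handle(call)
# pen retains only the dialed number; tap also retains the contents.
```

On the Turk theory described above, only the second device plausibly "acquires" the conversation, because only it preserves the contents for later human perception; on the Bialostok theory, both would, since both have the latent capability.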
There is one other section of the Smith opinion, cited in Bialostok, 126 that appears at first glance to be arguing that automated processing of information should be treated the same, for purposes of privacy, as human acquisition of that information. Smith had argued that because his local telephone calls were automatically connected and were not recorded by the phone company for billing purposes, he still had a reasonable expectation of privacy in the phone numbers he had dialed and the third party doctrine should not apply. The Court, however, rejected this argument, concluding that it should not make any 121 Bialostok, 610 N.E.2d at 377 (“[O]ur interpretation of article 700 must be sensitive to the constitutional guarantees against search and seizure that the statute seeks to protect.” (quoting People v. Washington, 385 N.E.2d 593, 595 (N.Y. 1978))). 122 Smith v. Maryland, 442 U.S. 735 (1979). 123 Id. at 743–44; see also Bialostok, 610 N.E.2d at 377 (“Central to the Court’s analysis was the pen register’s limited capabilities . . . .”). 124 Smith, 442 U.S. at 741. 125 Id. at 742. 126 Bialostok, 610 N.E.2d at 377.


difference, for purposes of the reasonable expectation of privacy, what the third party actually did with the information it received. 127 “Regardless of the phone company’s election, petitioner voluntarily conveyed to it information that it had facilities for recording and that it was free to record. In these circumstances, petitioner assumed the risk that the information would be divulged to police.” 128 The third party doctrine determines whether the subject of a criminal investigation has taken an action with respect to certain information that waives any legitimate privacy interest in that information. 129 As the Smith court emphasized, the focus should be on the actions taken by the subject, not how the third party handles the information it receives, or the result would be “a crazy quilt” of Fourth Amendment protection, coming and going depending on the connection and billing practices of private corporations. 130 The Wiretap Act, by contrast, determines not whether privacy has been waived, but whether it has been intruded upon. Thus, the focus of the core provisions of the Act—those prohibiting “interception”—is not on the actions of the parties to the communication, but on the actions of third parties. In other words, the focus is exactly the opposite of what it is for the third-party doctrine. In the context of interpreting the Wiretap Act, therefore, it makes a significant difference whether a human or simply a machine has received the communications. This is true even if the distinction makes no difference for purposes of the third-party doctrine. Regardless of the correctness of its interpretation of Smith, there is a second reason the Bialostok decision should not be followed as a guide for interpreting the Wiretap Act. It is simply unworkable as a regulation of modern telecommunications devices. Almost all computer equipment has the capability to route communications to somewhere other than their intended destination.
If that capability were enough to make the equipment an interception device, then almost all Internet routers would violate the Wiretap Act. The exception for devices used by ISPs in the ordinary course of business would not save ISPs either, as the devices are certainly capable of routing communications in ways that fall outside of the ordinary course of business, and thus no computer equipment would qualify for the exception. Straightforwardly applying Bialostok to the Internet would be disastrous. The Bialostok decision makes more sense, however, as a decision 127 Smith, 442 U.S. at 744–45 (“We are not inclined to hold that a different constitutional result is required because the telephone company has decided to automate.”). 128 Id. at 745. 129 As Orin Kerr has recently argued, although the third-party doctrine is often stated as a consequence of the Fourth Amendment’s protection of only “reasonable expectations of privacy,” it is better thought of as a consent doctrine. See Orin S. Kerr, The Case for the Third-Party Doctrine, 107 MICH. L. REV. 561, 588–90 (2009). 130 Smith, 442 U.S. at 745.


concerning the regulation of law enforcement investigations, at least in the context where police use of pen registers is completely unsupervised by courts. 131 If the police were allowed to use, without a court order, pen registers that can easily be converted into wiretaps, then judicial supervision of wiretaps would become more difficult. 132 An illegitimate police wiretap would at least superficially appear to observers exactly the same as a legitimate pen register. Subjecting the use of such combination devices to the more stringent requirements for wiretaps may help to ensure at least an observable distinction between the two activities. 133 This argument concerning ease of monitoring does not apply to private party wiretaps, however. There is no provision for court supervision of private party interceptions; they are simply prohibited unless an exception applies. Illegal interceptions are therefore unlikely to be revealed at all, ever, let alone disguised as permissible automated processing. The result of any similarity between automated processing and illegal recording is more likely to be that the automated processing will look suspicious rather than the interception look innocent—and all a private party plaintiff would need is at least a plausible inference that interception has occurred to initiate a lawsuit. Detecting illegal interceptions is unlikely to be more difficult if automated processing is allowed. The history and structure of the original Wiretap Act, as well as the cases interpreting it, thus demonstrate that the Wiretap Act as originally adopted targeted invasions of the privacy of communications committed by persons, albeit persons using devices. Nevertheless, it is clear that Congress also intended to regulate some non-human acquisitions of the content of communications, in the form of recordings, and courts later gave effect to that intent. 
As the cases demonstrate, however, recordings fell within the scope of the Wiretap Act only on the premise that they enable later human perception of those communications.

131 Title III of the ECPA included the Pen Register Statute, which required state and federal law enforcement agencies to obtain a court order authorizing installation of a pen register or trap-and-trace device, based upon a certification that the information sought was relevant to an ongoing criminal investigation. See Electronic Communications Privacy Act (ECPA) of 1986, Pub. L. No. 99-508, § 301, 100 Stat. 1848, 1868–72 (codified as amended at 18 U.S.C. §§ 3121–27 (2006)). The ECPA, including the Pen Register statute, was adopted October 21, 1986, and became effective 90 days later. See id. § 302, 100 Stat. at 1872. The surveillance at issue in Bialostok occurred in November and December of 1986. See 610 N.E.2d at 376. New York subsequently amended its criminal procedure statutes to be compliant with the ECPA. See N.Y. CRIM. PROC. LAW § 705.10 (McKinney 2012) (requiring court order based on “reasonable suspicion” to install pen register); Bialostok, 610 N.E.2d at 376 n.3.
132 Government attorneys have argued for an expanded definition of what should count as a “pen register” in recent years. See Susan Freiwald, Uncertain Privacy: Communication Attributes After the Digital Telephony Act, 69 S. CAL. L. REV. 949, 985–87 (1996).
133 For the reasons stated above, however, it would be difficult to apply Bialostok beyond the realm of traditional wire communications. Even as applied to wire communications, the New York Court of Appeals has limited Bialostok in subsequent decisions. See People v. Kramer, 706 N.E.2d 731, 737 (N.Y. 1998) (noting that Bialostok should be applied on case-by-case basis based on ease of evading statutory requirements).

698

CARDOZO LAW REVIEW

[Vol. 34:669

C. The Electronic Communications Privacy Act

The Wiretap Act was significantly amended in 1986 to broaden its protection to cover electronic communications, including emails and other transmissions across the Internet. Those amendments, however, did little to alter the fundamental structure of the Wiretap Act described above. Acquisition of the contents of a communication still requires either receipt by a person or preservation for purposes of later human review.

The Electronic Communications Privacy Act was adopted in 1986 in order to bring the Wiretap Act into the modern information age. As originally passed in 1968, the Wiretap Act applied only to voice communications in person and over wired telephone networks; it left data communications completely unprotected. The advent of computer networks made this an increasingly glaring omission. The ECPA addressed the issue in two ways. First, the ECPA amended the Wiretap Act to add “electronic communications” to the sorts of communications protected by the statute. Second, an entirely new chapter was added to the criminal code, protecting the privacy of electronic communications being delivered or stored by an ISP. The alterations to the Wiretap Act for the most part left its wording intact. 134 The definition of “intercept” was expanded to include the “aural or other acquisition of the contents of any wire, electronic, or oral communication through the use of any electronic, mechanical, or other device.” 135 As the Second Circuit has held, this language does not alter the definition of traditional wiretaps; 136 it merely grafts interception of electronic communications on top. The definition still defines interception as requiring “use of [a] device,” an odd way of framing the definition if a device could intercept by itself. 137 Thus, to the extent the Wiretap Act prohibits only human interception or machine preservation of a communication that permits later human review, that restriction on the scope of the Act is unaltered by the ECPA.

Indeed, in adopting the ECPA, Congress in at least one instance specifically distinguished human observation of contents from automated processing, and declared the latter to be outside of the Act’s concerns. As discussed above, 138 18 U.S.C. § 2511(2)(a)(i) excludes from liability employees of a communications service provider who intercept, disclose, or use a communication in the normal course of their employment in certain circumstances. 139 The exception contains a proviso, however—it does not allow wire communications service providers to engage in “service observing or random monitoring except for mechanical or service quality control checks.” 140 Congress expanded the “normal course of employment” exception in § 2511(2)(a)(i) to include employees of electronic communications service providers, but left the proviso alone, such that it only applied to wire communications service providers. The Judiciary Committee reports explain that the reason is that electronic communications service monitoring is performed by computers, not persons:

In applying the second clause [of § 2511(2)(a)(i)] only to wire communications, this provision reflects an important technical distinction between electronic communications and traditional voice telephone service. The provider of electronic communications services may have to monitor a stream of transmissions in order properly to route, terminate, and otherwise manage the individual messages it contains. These monitoring functions, which may be necessary to the provision of an electronic communication service, do not involve humans listening in on voice conversations. Accordingly, they are not prohibited. In contrast, the traditional limits on service “observing” and random “monitoring” do refer to human aural interception and are retained with respect to voice (“wire”) communications. 141

The exception in § 2511(2)(a)(i) applies only to “employees,” and so it might seem natural to exclude from the proviso monitoring generally performed by computers. But since the Wiretap Act as a whole

134 See Kastenmeier et al., supra note 27, at 735.
135 18 U.S.C. § 2510(4) (2006) (emphasis added). The legislative history somewhat puzzlingly explains that the addition of “or other” was only “intended to make clear that it is illegal to intercept the non-voice portion of a wire communication such as the data or digitized portion of a voice communication.” H.R. REP. NO. 99-647, at 34 (1986). Meanwhile, electronic communications were said to be subject to a different definition: “The term intercept with respect to ‘electronic communications’ is defined to mean ‘the interception of the contents of that communication through the use of any electronic, mechanical or other device.” Id.; see also S. REP. NO. 99-541, at 13 (1986), reprinted in 1986 U.S.C.C.A.N. 3555, 3567. The bills discussed in the reports did not do anything like this, however. Instead, the reports appear to be confusing the 1986 bills that ultimately became the ECPA with earlier versions that would have defined “intercept” somewhat circularly to mean “the interception of the contents of any electronic or oral communication through the use of any electronic, mechanical, or other device.” See Electronic Communications Privacy Act of 1985, H.R. 3378, 99th Cong. § 101(a)(2) (1985); Electronic Communications Privacy Act of 1985, S. 1667, 99th Cong. § 101(a)(2) (1985). Those earlier versions of the bills also eliminated the term “wire communication,” substituting “electronic communication” as a catch-all term for what are now referred to separately as wire communications and electronic communications. See H.R. 3378, § 101(c)(1); S. 1667, § 101(c)(1).
136 United States v. Rodriguez, 968 F.2d 130, 135–36 (2d Cir. 1992).
137 18 U.S.C. § 2510(4).
138 See supra text accompanying notes 43–45.
139 18 U.S.C. § 2511(2)(a)(i).
140 Id. In other words, employee quality control checks are prohibited. See supra note 43.
141 H.R. REP. NO. 99-647, at 47 (1986); see also S. REP. NO. 99-541, at 20 (1986), reprinted in 1986 U.S.C.C.A.N. 3555, 3574 (nearly identical).


prohibits only “persons” from intercepting communications, if automated handling is to fall within the Act, it must be under a theory that such handling constitutes acquisition by the person that established the procedures for handling the communications in question. If that is so, then it would follow that employees setting up a computer to conduct service observing or random monitoring for purposes other than “mechanical or service quality control checks” would equally be acquiring the contents of those communications. Congress evidently disagreed, however, and did not think it necessary to include automated monitoring in the proviso. Section 2511(2)(a)(i) has generated little controversy. Much of the litigation subsequent to the ECPA has instead been over the Wiretap Act’s uncertain application to emails and other electronic communications. Again, the issue is the way that the ECPA handles copies of communications such as recordings. Congress in 1986 was apparently misled by the way in which many emails and computer communications were sent during that era: by an initial telephone communication using a dial-up modem. That portion of the communication looked a lot like the wire communications that the Wiretap Act already governed (if uncertainly when it came to recordings), substituting modulated data for modulated voices. But once the communication reached a packet-switched network, it behaved very differently. Communications sent via packet-switched networks pass in and out of temporary storage at each router until finally being collected and held together at the recipient’s ISP. The original Wiretap Act, with the textual focus on the act of listening, did not apply clearly to recordings of communications; the ECPA applied even less certainly to copies made of temporary copies of communications. Were such copies of copies interceptions, subject to the Wiretap Act? 
The question was made more difficult by the fact that, in addition to amending the Wiretap Act, the ECPA also adopted the Stored Communications Act, which by its terms governs communications in “electronic storage,” meaning temporary storage of a message in transit or stored by an ISP for backup protection. 142 The definition would appear to include all of the points at which it is feasible to obtain an email in transit—both at the fleeting intermissions between hops between routers, and while the reassembled message is being held for the recipient. Given that almost all acquisitions of Internet communications will occur at one of those two times, there would appear to be a substantial amount of overlap between the two statutes. And indeed, several courts have found, in effect, that emails simply

142 See 18 U.S.C. § 2510(17) (defining “electronic storage” as “any temporary, intermediate storage of a wire or electronic communication incidental to the electronic transmission thereof” and “any storage of such communication by an electronic communication service for purposes of backup protection of such communication”).


cannot be “intercepted” within the scope of the Wiretap Act. 143 Other courts have disagreed. 144 The outcome of that debate is not important here. 145 Nothing in the case law on the proper scope of the modern Wiretap Act has altered the conclusion above that machine processing of the contents of a communication, where those contents are not preserved for later human review, is not an acquisition. And if several courts of appeal are correct that the Wiretap Act does not apply at all to electronic communications copied while in temporary or backup storage, then no amount of automatic scanning by ISPs would create liability under the ECPA. That is because the Stored Communications Act’s only regulation of access, § 2701, provides that it “does not apply with respect to conduct authorized . . . by the person or entity providing a wire or electronic communications service.” 146 Disclosure of the contents of those communications to third parties would still be prohibited under § 2702, but short of disclosure, an ISP or any person authorized by an ISP could use the contents of a communication in whatever way it wanted.

D. Application

Before proceeding, it is worth reviewing the limitations on the argument being made in this Article. Automated machine handling of the contents of a communication is not itself an “aural or other acquisition” of those contents, for the reasons discussed above; and it would not qualify as a proxy for “aural or other acquisition” so long as, unlike the case of recorded wiretaps, the automated processing and response to the contents of a communication did not make those contents available for human review. But that condition is somewhat more difficult to achieve than it sounds. Acquiring the contents of a communication does not require making a verbatim copy; rather, the

143 See Fraser v. Nationwide Mut. Ins. Co., 352 F.3d 107, 114 (3d Cir. 2003) (as amended 2004); United States v. Steiger, 318 F.3d 1039, 1050 (11th Cir. 2003) (“[U]nder the narrow reading of the Wiretap Act we adopt from the Fifth and Ninth Circuits, very few seizures of electronic communications from computers will constitute ‘interceptions.’”); Konop v. Hawaiian Airlines, Inc., 302 F.3d 868, 878 (9th Cir. 2002); Steve Jackson Games, Inc. v. U.S. Secret Serv., 36 F.3d 457, 461–62 (5th Cir. 1994).
144 See United States v. Szymuszkiewicz, 622 F.3d 701, 706 (7th Cir. 2010) (copying of emails in transit occurred contemporaneously with communication); United States v. Councilman, 418 F.3d 67, 69–70 (1st Cir. 2005). In an earlier version of the Szymuszkiewicz opinion, the Seventh Circuit declared that the prospect of overlap between the Wiretap Act and the Stored Communications Act for copying of emails in temporary storage was not a problem, but that section was deleted in the amended opinion. United States v. Szymuszkiewicz, No. 10-1347, 2010 U.S. App. LEXIS 18815, at *10 (7th Cir. Sept. 9, 2010).
145 For more, see Bruce E. Boyden, Privacy of Electronic Communications, in PROSKAUER ON PRIVACY: A GUIDE TO PRIVACY AND DATA SECURITY LAW IN THE INFORMATION AGE 6-1, 6-30 to 6-40 (Kristen J. Mathews ed., 2011).
146 18 U.S.C. § 2701(c)(1).


Wiretap Act defines “contents” broadly as the “substance, purport, or meaning” of a communication. 147 Thus, any record made of the machine’s action that captured the “substance, purport, or meaning” of the communication would constitute acquisition. For example, if a router logged that it blocked a file from a particular user because it contained a copy of a particular copyrighted film, that would likely qualify. Furthermore, there is the issue of what qualifies as “preserving a record.” It is likely that as part of any process that involves more than simply delivering a communication to its destination, some temporary, additional copy of the communication would be made in order to, for example, compare the contents against a list of keywords or hash files. Is it even feasible to conduct automated processing in a way that would not preserve a record that could potentially be reviewed by a human? Internet law is full of disputes of this nature. In general, the law in several areas is still struggling to take account of the fact that computers copy, and networked computers make lots of copies. Some of the copies are more equal than others, however. For example, a recent decision of the Second Circuit held that a progressive two-second buffer copy of a television program, although it could in theory be used to stitch together a permanent copy of the entire program, was not in fact a copy that existed for “more than a transitory duration” under copyright law. 148 In other words, a machine copy that as a practical matter is not a human-usable copy does not count. 149 The same sort of analysis applies here. Mechanical copies of communications qualify as acquisitions, under the theory outlined above, only because they permit later human listening. In the examples of recordings of wiretaps, there was no practical difference between a recording that permitted later human review and one that did not.
Only in unusual circumstances can there be a delivery made of a voice communication that does not reasonably permit review. 150 But this is not the case with computer copies. Evanescent copies of communications that are made for purposes of scanning and then discarded, without affording a reasonable opportunity that a human could obtain and review them, should not be treated as acquiring the contents. 151

147 18 U.S.C. § 2510(8).
148 Cartoon Network LP v. CSC Holdings, Inc., 536 F.3d 121, 127 (2d Cir. 2008).
149 See Joseph P. Liu, Owning Digital Copies: Copyright Law and the Incidents of Copy Ownership, 42 WM. & MARY L. REV. 1245, 1258–60 (2001) (citing articles); Aaron Perzanowski, Fixing RAM Copies, 104 NW. U. L. REV. 1067, 1075–80 (2010); R. Anthony Reese, The Public Display Right: The Copyright Act’s Neglected Solution to the Controversy over RAM “Copies,” 2001 U. ILL. L. REV. 83, 139–40 (citing articles).
150 See, e.g., Sanders v. Robert Bosch Corp., 38 F.3d 736, 742 (4th Cir. 1994) (stating that a tapped line that unknowingly led to a low-output speaker was not an acquisition).
151 See Cartoon Network, 536 F.3d at 129–30 (stating that two-second buffer copies are not sufficiently permanent for copyright law). What constitutes a “reasonable opportunity” will

Given these restrictions, what value would automated processing have? Even without a record being made, automated functions can still accomplish a significant amount now, and perhaps far more in the future. Automated scanning is already used to filter spam and malware; it could also be used to block other sorts of files, such as copyright-infringing material. 152 And such scanning is already being used to generate keyword-based advertisements to support free services such as Gmail. More sophisticated parsing programs will in the future no doubt be able to replace simple keyword searches with more complex algorithms that can place advertisements based on entire phrases or sentences. 153 While these uses may not be the subject of heavy consumer demand, that does not mean that they violate the Wiretap Act.

III. THE WIRETAP ACT AND THE PROTECTION OF PRIVACY

One possible objection to the statutory analysis set forth above is that it may undermine the purpose of the Wiretap Act: to protect the privacy of electronic communications. 154 This poses the fairly challenging problem of what it means to protect the “privacy” of communications, and the Wiretap Act’s proper role in protecting that privacy. The question is difficult because the concept of privacy is itself not very well understood, 155 let alone the proper sphere of a subset of privacy, communications privacy. Nevertheless, the issue is even more pressing now in light of reform efforts currently underway directed at revising the ECPA. 156 Even if the Wiretap Act is currently limited, its scope could be expanded to clearly cover all machine handling of

vary depending on the technology available.
152 The policy objections one might have to this as a matter of copyright law should not affect whether it is consistent with the Wiretap Act.
153 See CHOPRA & WHITE, supra note 21, at 99. Of course, this may not sound like a benefit. No one likes advertisements. But even Alfred Hitchcock realized advertisements can be an efficient way to pay the bills.
154 S. REP. NO. 99-541, at 1 (1986), reprinted in 1986 U.S.C.C.A.N. 3555, 3555 (stating that ECPA amends existing law “to update and clarify Federal privacy protections and standards in light of dramatic changes in new computer and telecommunications technologies”); H.R. REP. NO. 99-647, at 18–19 (1986) (stating that ECPA necessary to protect privacy and “ensure the continued vitality of the Fourth Amendment” in the face of technological advances); S. REP. NO. 90-1097 (1968), reprinted in 1968 U.S.C.C.A.N. 2112, 2154 (noting that “widespread use and abuse of electronic surveillance techniques” can jeopardize “privacy of communication”).
155 See, e.g., HELEN NISSENBAUM, PRIVACY IN CONTEXT: TECHNOLOGY, POLICY, AND THE INTEGRITY OF SOCIAL LIFE 2 (2010) (describing theoretical disarray); Daniel J. Solove, Conceptualizing Privacy, 90 CALIF. L. REV. 1087, 1088–90 (2002) [hereinafter Solove, Conceptualizing Privacy] (same). The problem is not of recent vintage. See Spiros Simitis, Reviewing Privacy in an Information Society, 135 U. PA. L. REV. 707, 708 (1987) (quoting assessments of privacy research as in “hopeless disarray” and “ultimately, futile”).
156 See Electronic Communications Privacy Act Amendments Act of 2011, S. 1011, 112th Cong. (2011). The current reform efforts, as with past reform efforts, are mostly focused on the regulation of law enforcement investigations.


communications not essential for delivery. Should it be? For the reasons explored below, the harms that might be associated with machine handling of communications, where the content of those communications is not preserved for later human review, do not appear to be appropriate subjects for an anti-interception statute like the Wiretap Act. First, the harms do not appear to intrude on what most definitions of privacy seek to protect. Second, to the extent that regulation of machine handling of communications is desirable for other policy reasons, the Wiretap Act is a poor vehicle to achieve those goals. The Wiretap Act is a blunt instrument: it defines certain methods of obtaining information to be felonies, 157 and others to be entirely permissible. Such a scheme is a suboptimal candidate for regulation of behavior that may depend on context or degrees, such as the harms that might plausibly arise out of machine handling of communications.

A. The Harms of Machine Processing

Privacy has long been a field in disarray. There is little agreement on what constitutes “privacy,” or what privacy laws should seek to protect. Nevertheless, most conceptions of privacy do share a common theme: they define a violation of privacy as information conveyed to other people, either actually or potentially. 158 That is, a privacy invasion is either the actual disclosure of information to others, or an action that creates the potential for such disclosure. Neither of those situations is presented by the machine processing of information, where that processing is not monitored by humans and does not result in any record for later human review.

Borrowing from Daniel Solove’s excellent categorization of concepts of privacy, 159 there can be said to be at least six theories of what causes privacy harm. First, that harm might be said to derive from reputation or status-based injury from the dissemination of information beyond a certain social network. 160 This appears to be the basis for Warren and Brandeis’s characterization of privacy in their seminal article as “the right to be let alone.” 161 The “right to be let alone” is vague, at best, but would appear to involve a right to request state intervention to protect against other individuals. 162 Second, privacy might be described as the control of access to the self. 163 This, too, rests on concerns about the access that other people have, according to Ruth Gavison: “the extent to which we are known to others, the extent to which others have physical access to us, and the extent to which we are the subject of others’ attention.” 164 Likewise for the theory of privacy as the concealment of secrets about oneself. As Solove describes the theory, “[u]nder this view, privacy is violated by the public disclosure of previously concealed information.” 165 Theories of privacy that focus on individual control over information similarly concern themselves with information exchanged with other people. Alan Westin has described privacy as “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.” 166 Charles Fried has similarly written that “[p]rivacy . . . is control over knowledge about oneself.” 167 The theory that privacy is primarily concerned with protecting intimacy is similarly focused. As Solove describes the theory, “privacy is not just essential to individual self-creation, but also to human relationships.” 168 Human relationships require by definition an exchange of information between two or more people.

Solove’s own theory of privacy views privacy as a set of social practices that can be disrupted by certain behaviors. The two examples Solove gives are “disclosure of concealed information” and “surveillance,” that is, “being watched.” 169 Both of these actions require human observers or recipients of information. Surveillance is particularly relevant here, as protecting the right to be free from intrusive surveillance was one of the core purposes of the Wiretap

157 See 18 U.S.C. § 2511(4) (2006). Prosecutions for violations of the Wiretap Act are rare. However, violators are also potentially subject to statutory damages in civil suits of $10,000 per violation. See § 2520(2).
158 As Matthew Tokson notes, this is not entirely surprising, given that it is only recently that the prospect of fully automated processing of personal information has become a practical reality. See Tokson, supra note 32, at 611. Nevertheless, as noted below, some scholars seem to have anticipated it.
159 Solove, Conceptualizing Privacy, supra note 155.
160 For a theory of the privacy torts based on recent social science research on social networks, see Lior Jacob Strahilevitz, A Social Networks Theory of Privacy, 72 U. CHI. L. REV. 919 (2005).
161 Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 HARV. L. REV. 193, 195 (1890).
162 See Ruth Gavison, Privacy and the Limits of Law, 89 YALE L.J. 421, 438 (1980) (describing typical privacy claim). Although such claims are typically viewed favorably in the privacy literature, others have argued that privacy protections can go too far precisely because they sometimes illegitimately limit the freedom of others to exchange information. See RICHARD A. POSNER, THE ECONOMICS OF JUSTICE 271 (1981) (“[W]hen people today decry lack of privacy, . . . they want more power to conceal information about themselves that others might use to their disadvantage.”).
163 Solove, Conceptualizing Privacy, supra note 155, at 1102–05.
164 Gavison, supra note 162, at 423. Solove similarly describes the core of the theory as the protection of “the individual’s desire for concealment and for being apart from others.” Solove, Conceptualizing Privacy, supra note 155, at 1102 (emphasis added).
165 Solove, Conceptualizing Privacy, supra note 155, at 1105 (emphasis added).
166 ALAN F. WESTIN, PRIVACY AND FREEDOM 7 (1967).
167 Charles Fried, Privacy, 77 YALE L.J. 475, 483 (1968). The context makes it clear that Fried is concerned with knowledge obtained by other people. See id. at 482 (“The person who enjoys privacy is able to grant or deny access to others.”); see also Tokson, supra note 32, at 614.
168 Solove, Conceptualizing Privacy, supra note 155, at 1121.
169 Id. at 1130.


Act. 170 In a later article, Solove has expounded upon the harms caused by surveillance. 171 Overt surveillance, as described by philosopher Stanley Benn, harms individuals through their realization of and reaction to “[f]inding oneself an object of scrutiny, as the focus of another’s attention.” 172 Covert surveillance also gives rise to privacy harms, however, of a different sort; it “enable[s] the watchers to gather a substantial degree of information about people,” such that they can “record[ ] behavior, social interaction, and potentially everything that a person says and does.” 173 In other words, the harm from surveillance is the actual or threatened harm of what another person, beyond one’s intended recipients, might do with that information. 174

Although not included in Solove’s list, Helen Nissenbaum’s recent theory of privacy as contextual integrity 175 bears some similarities to Solove’s own theory. However, Nissenbaum, unlike most other privacy scholars, disclaims any attempt to capture the notoriously elusive concept of privacy. 176 Like Solove, Nissenbaum treats privacy as a cultural artifact, a set of social expectations about how information will flow in various social contexts. But instead of seeking to define privacy from the inside out, so to speak, by describing its essence, 177 Nissenbaum’s theory provides a method for determining when changes to information practices are “likely to arouse protest, indignation, or resistance,” and identifying “the sources of objection.” 178 Such a reaction is likely to occur if there has been a perceived violation of “contextual integrity,” the societal rules and norms that govern information flows. 179

Nissenbaum’s emphasis on information flows means that her theory focuses on transmissions of information between persons. Violations of “contextual integrity,” as Nissenbaum defines it, are instances in which some sort of social, economic, or technological change results in information flowing in unexpected ways within or outside of a social context—to different people, in other words, or to the same people, but under different conditions. For example, in laying out a “decision heuristic” for determining if a violation of contextual integrity has occurred, Nissenbaum advises looking at “changes in who receives information (recipient), whom the information is about (subject), or who transmits the information (sender).” 180 Alternatively, there may be changes in “the types of information transmitted from senders to recipients.” 181 Or, there may be shifts in whether a person is required to disclose information or not, or what constraints on redistribution the recipient is subject to. 182 Violations of contextual integrity arising from any of these sorts of changes result in different persons receiving different information than had previously been the case. 183

The harms that Solove and Nissenbaum describe in their separate accounts of privacy are consistent with many of the descriptions in the literature of the potential harms from automated processing of the contents of communications. That is, the harms described tend to be actual or potential disclosure harms. For example, all of the harms posited by Paul Ohm in his recent call to arms against ISP monitoring appear to fall into this category. 184 Ohm argues that, if allowed to monitor the content of user communications, ISPs “will . . . be able to compile a detailed record of thoughts and behavior”; “track your

170 S. REP. NO. 99-541, at 2 (1986), reprinted in 1986 U.S.C.C.A.N. 3555, 3556 (“Title III is the primary law protecting the security and privacy of business and personal communications in the United States today.”); S. REP. NO. 90-1097 (1968), reprinted in 1968 U.S.C.C.A.N. 2112, 2153 (“Title III has as its dual purpose (1) protecting the privacy of wire and oral communications, and (2) delineating on a uniform basis the circumstances and conditions under which the interception of wire and oral communications may be authorized.”). Another stated purpose of the 1968 Wiretap Act, as mentioned above, was actually to permit the use of wiretaps, to aid in investigation of organized crime. See S. REP. NO. 90-1097, reprinted in 1968 U.S.C.C.A.N. 2112, 2157 (“The major purpose of title III is to combat organized crime.”).
171 Daniel J. Solove, A Taxonomy of Privacy, 154 U. PA. L. REV. 477 (2006) [hereinafter Solove, A Taxonomy of Privacy].
172 Stanley I. Benn, Privacy, Freedom, and Respect for Persons, in NOMOS XIII: PRIVACY 1, 2 (J. Roland Pennock & John W. Chapman eds., 1971), quoted in Solove, A Taxonomy of Privacy, supra note 171, at 494.
173 Solove, A Taxonomy of Privacy, supra note 171, at 495.
174 Solove gives examples of discrediting or blackmailing someone. Id. at 495–96.
175 See NISSENBAUM, supra note 155.
176 Id. at 3.
177 Most theories of privacy seek to provide what philosophers of language might call an “intensional” definition of privacy—a set of necessary and sufficient criteria that distinguish all cases of “private” from all cases of “not private.” Nissenbaum’s theory, by contrast, is more like an “extensional” definition: one that defines a concept by listing examples of it. See William J. Rapaport, Intensionality vs. Intentionality, DEP’T OF COMPUTER SCI. & ENG’G, http://www.cse.buffalo.edu/~rapaport/intensional.html (last updated Mar. 28, 2012). Nissenbaum does not actually list all instances of uses of the term “privacy,” which would be impossible, but instead provides a “decision heuristic” that one can use for any given situation to determine if the participants in that situation are likely to describe an action as a violation of privacy. See NISSENBAUM, supra note 155, at 148.
178 Nissenbaum’s theory is therefore at least initially a descriptive one. But Nissenbaum believes there is a normative payoff: all else being equal, Nissenbaum argues, changes to established information practices should be regarded as prima facie violations of privacy, unless “special circumstances are so compelling as to override this prescription.” NISSENBAUM, supra note 155, at 191. Nissenbaum recognizes that such a theory carries the risk of entrenching suboptimal practices, however, so she also outlines a method for determining whether a changed information practice represents an improvement: if it betters not only “general welfare” but also “the attainment of context-relative ends, goals, and values.” Id. at 205.
179 Id. at 150.
180 Id. at 149.
181 Id.
182 Id. at 149–50. Nissenbaum calls such constraints “transmission principles.” Id. Transmission principles are “constraint[s] on the flow (distribution, dissemination, transmission) of information from party to party in a context”—that is, the “terms and conditions under which such transfers ought (and ought not) to occur.” Id. at 145.
183 The examples of violations of contextual integrity discussed by Nissenbaum are illustrative: surreptitious recording of personal telephone calls, disclosures of information provided to financial institutions, cross-border transfers of medical prescriptions, resale of retail purchase data, advertiser use of social networking communications. See id. at 152–57, 205–16, 221–30.
184 Ohm, supra note 1, at 1440–47.

708

CARDOZO LAW REVIEW

[Vol. 34:669

ailments, emotions, and the state of your relationships”; “learn your travel plans, big dates, and trips across town to do mundane chores”; and “know what you read, watch, buy, and borrow.” 185 Compile, track, learn, and know—the potential harms Ohm describes are the harms from the long-term assembly and storage of a “digital dossier,” a record that is necessarily in existence long enough to permit review by humans. And as Ohm makes clear, the potential harm from such a dossier does not “materialize from the storage of information alone;” rather, it is the potential for disclosure to others, whether intentional or accidental, that creates the risk of harm. 186 Such long-term storage, giving rise to the potential for disclosure, would certainly be sufficient to qualify as an acquisition, as I am defining it here. B.

Wiretap Act as Regulation

Despite the privacy literature’s strong focus on disclosure and access, there are statements in that literature that indicate support for the idea that automated processing by itself, with no realistic prospect of further disclosure, can constitute a privacy harm. To some extent such statements are a function of the fact that much of the writing on privacy harms is in the passive voice, and therefore it is sometimes difficult to determine who commits privacy violations or what precisely they consist of. 187 But it seems clear that at least some scholars intend to suggest that the mere handling, without further use, of personal information by a commercial entity may threaten privacy. 188 This strand of thought is most evident in the one category of Daniel Solove’s six categories of privacy theories not discussed above: privacy as protecting personhood or autonomy. According to this theory, privacy’s function is to protect a zone in which a person can develop their selves and their beliefs free of the overbearing influence of others. 189 As a result, scholars writing in this vein appear to be the most concerned with the effects of surveillance. For example, Spiros Simitis has raised concerns about the ability of “information processing” to develop “into an essential element of long-term strategies of manipulation intended to mold and adjust individual conduct.” 190 While at times Simitis appears the most concerned with the construction of dossiers, 191 he also describes changes in behavior that may result merely from “automated processing of personal data” by “both government and private enterprises.” 192

Julie Cohen similarly describes “autonomy” as connoting “an essential independence of critical faculty and an imperviousness to influence.” 193 But because “information shapes behavior, autonomy is radically contingent upon environment and circumstance. The only tenable resolution—if ‘autonomy’ is not to degenerate into the simple, stimulus-response behavior sought by direct marketers—is to underdetermine environment.” 194 Cohen’s and Simitis’s views appear to be that the responses that automated processing generates, even if they do not result in the disclosure of any personal information, can undermine the autonomy that privacy protects by shaping behavior in a deleterious way. Neither Cohen nor Simitis, however, identifies the precise harm to autonomy that might result from processing alone, without disclosure or storage for later use.

185 Id. at 1445.
186 Id. at 1445–46.
187 See, e.g., U.S. Dep’t of Justice v. Reporters Comm. for Freedom of the Press, 489 U.S. 749, 763 (1989) (noting that individual privacy is “control of information concerning his or her person”); PRIVACY WORKING GROUP, INFORMATION INFRASTRUCTURE TASK FORCE, PRIVACY AND THE NATIONAL INFORMATION INFRASTRUCTURE: PRINCIPLES FOR PROVIDING AND USING PERSONAL INFORMATION 5 (1995) (defining privacy as “an individual’s claim to control the terms under which personal information—information identifiable to the individual—is acquired, disclosed, and used”).
188 Somewhat relatedly, Chopra and White attempt to collapse the distinction being drawn in this Article between machine handling of communications and persons acquiring knowledge of those communications, by simply defining machine processing as knowledge. See CHOPRA & WHITE, supra note 21, at 76, 108–09, 112. But this appears to resolve the issue by fiat. There is no reason to think that attaching the label “knowledge” to the activities of a nonsentient machine invests those machines with the same capabilities feared by people who worry about privacy invasions. What people who worry about privacy are trying to prevent is changed beliefs about themselves, changed behavior by other people, or changed attributions of social status resulting from a disclosure of private information—in other words, changed mental states. Chopra and White point to evidence that users in fact ascribe mental states to web servers, see, e.g., id. at 14–15, 78, but not only is the evidence sparse that users attribute mental states directly to web servers, as opposed to their owners, it is in any event unclear why false anthropomorphization should have legal consequences.

Admittedly, the automated actions taken by a computer (such as filtering or appending a message to a communication) might influence behavior; indeed, the mere knowledge that such actions might be taken by a computer could conceivably influence behavior (for example, by discouraging attempts to download a file that is likely to be blocked). But the autonomy theory of privacy cannot be that individuals must make choices in a zone free of any external influences at all, for that is impossible. Humans are social animals; they influence each other all the time, and are influenced by their environment as well, whatever it may be. Rather, the theory must at least implicitly distinguish between influences that are legitimate, and thus do not undermine autonomy, and influences that are illegitimate. And in fact, Cohen and Simitis appear to be most concerned about certain sorts of influences: the influence exerted on individuals by the collection, storage, and use of data by commercial and government entities. 195 That is, their primary concern is the construction of the “digital dossiers” that Solove, Ohm, and others have more recently warned about. The influence from the creation of such databases is, according to Cohen and Simitis, illegitimate, and therefore autonomy-destroying; but the moral status of the influence resulting from other actions that companies or individuals might take is uncertain. Presumably, at least some of it is permissible.

Under the autonomy theory of privacy, therefore, some actions that influence individuals are acceptable, while others are not. Distinguishing between these two categories will require a context-specific determination of whether a given influence crosses the threshold, however defined, from beneficent to baleful. Whatever that has going for it as a theory of privacy, it would be difficult to use it as a basis for the legal framework of the Wiretap Act. The Wiretap Act is by its nature an exceedingly blunt instrument. As Patricia Bellia has noted, “our surveillance law statutes, as written, simply are not general data privacy statutes. In other words, the statutes do not broadly identify a particular category of personal data that should be subject to protection or restrict the acquisition, use, or transfer of such data.” 196 The Wiretap Act defines liability based solely on whether the privacy of communications has been breached, and not, for the most part, on what information was obtained, who obtained it, or how it was used. The Act is completely binary: permitted actions are not subject to any regulation whatsoever; 197 prohibited actions are a potential felony and subject to statutory damages of $10,000 per violation. 198

Nor is this lack of gradation a defect of the Wiretap Act that should be corrected as part of any reform effort. Rather, it is concomitant in having a communications privacy statute. The purpose of such a statute is to keep channels for communications secure and private, so that individuals and businesses will feel comfortable sharing information with each other. 199 A communications privacy law therefore does not protect the information based on its content or sensitivity; that is a task for a privacy law of some kind, such as the tort of public disclosure of private facts, or the Health Insurance Portability and Accountability Act. 200 The objective behind any given intrusion, or the consequence of it, is therefore relatively unimportant. 201 The purpose of a communications privacy law is to protect the security of the entire system, by providing remedies, or at least supervision, for intrusions. The harms that a law like the Wiretap Act should prevent are therefore harms to individual conversants resulting from the unexpected exposure of their communications.

At this point, a series of objections might be raised that, in fact, there are situations in which harm can occur as the result of an interception without any human actually obtaining the information, or, as in the case of recordings, even potentially obtaining the information. For example, imagine an automated system set up by the government during a Republican presidential administration that automatically scans emails and, upon discovering emails containing certain keywords identifying the sender as a likely Democrat, inserts at the top of the email: “***WARNING: The Department of Justice has identified this email as containing seditious libel. The sender and any recipients may be subject to prosecution under 18 U.S.C. § 794.” 202 Certainly this would constitute a harm, and it would be a harm stemming from the scanning and tampering with emails, even if no human third party ever has access to the email in question.

189 See Julie E. Cohen, Examined Lives: Informational Privacy and the Subject as Object, 52 STAN. L. REV. 1373, 1424 (2000) (“Autonomy in a contingent world requires a zone of relative insulation from outside scrutiny and interference—a field of operation within which to engage in the conscious construction of self.”).
190 Simitis, supra note 155, at 710.
191 See id. at 729 (expressing concern that “the subscriber’s everyday life is painstakingly recorded,” and concluding “[i]t is little wonder that security agencies, advertisers, and direct-mail marketers have repeatedly underlined their interest in getting access to individual ‘home profiles’”).
192 Id. at 733.
193 Cohen, supra note 189, at 1424.
194 Id.
195 See id. at 1431 (noting focus of article on “the privacy problems created by large commercial databases” but suggesting that “government collection and cross-referencing of personal data” also poses privacy threat); Simitis, supra note 155, at 729 (describing threat to privacy as “the subscriber’s everyday life [being] painstakingly recorded,” resulting in creation of “individual ‘home profiles’”).
196 Bellia, supra note 14, at 1322.
197 Obviously the requirements and procedures for law enforcement investigators to obtain permission to engage in wiretapping or electronic eavesdropping are fairly complex, and impose some constraints. But obtaining a court order is itself a formal process that leads to a binary outcome: either the police get one, or they don’t. This is in contrast to many other privacy regulations that govern based on social context, the judgments of reasonable people, detailed administrative regulations subject to discretion in enforcement, or more finely graded scales of potential damages.
198 See 18 U.S.C. §§ 2511(4)(a), 2520(c)(2) (2006).

While the hypothetical has a superficial appeal, we need to be careful to identify precisely the source of the harm under consideration. The Wiretap Act, as discussed above, is a communications privacy statute, meaning that it protects the privacy of particular forms of communication. The Act attaches potentially stiff penalties to a discontinuous gradation of behaviors—those behaviors that pose a sufficient threat to the privacy of users of the communications channel can result in large damage awards or prison time even in the absence of a showing of actual harm, while those that fall short are subject to no penalty at all. This feature of the Wiretap Act reflects the fact that a communications privacy statute protects against one particular form of harm, one that is relatively easy to identify but difficult to measure: the loss that everyone using a particular mode of communication will experience if using that mode results in a loss of privacy.

The difficulty is that “privacy” is a capacious concept. Without clear boundaries, there is a danger that a “privacy harm” justifying the invocation of a communications privacy statute could be defined as simply any negative consequence that results from the use of private, personal information in transit, regardless of its effect on status, reputation, control, or autonomy. 203 Such a definition would have the advantage of making the rule of liability less dependent on context. But it would be far too broad, as it would sweep in much activity that is not a privacy violation, but rather a frustration of some other goal. There are other legal regimes to provide redress for those other sorts of harms, but they may be more difficult to invoke—they may require proof of actual harm, 204 or objective unreasonableness, or emotional distress, or state action. It may therefore be tempting to take advantage of the nebulousness of the concept of “privacy” by classifying harms resulting from the handling of communications as privacy harms, giving rise to a claim under the present or a future amended Wiretap Act. But that would be a mistake. Using the unilinear penalties of the Wiretap Act to address highly contextualized harms would be like using a sledgehammer to repair a filigree.

Consider a colorful example to illustrate the point: suppose someone sends an email with an attachment and the ISP scans the content for malware. In the process and as a result of the scan, the ISP’s email server explodes and all data stored on it is lost, including many of the sender’s emails. The loss of those emails is certainly detrimental to the sender, and it resulted from a use of the content of his or her communication. The proper rule for analyzing liability in such a situation, however, is negligence, breach of contract, or product liability. That is, the loss of the emails might fairly be said to be a harm resulting from failure to take proper precautions, or failure to live up to a promise, or manufacturing a defective product. But it is not a privacy harm.

The exploding mail server example may not seem like a privacy harm because the damage there is tangible, rather than intangible. But the same reasoning applies to intangible harms resulting from other, less dramatic scenarios involving automated processing of communications. Consider the Department of Justice warning mentioned above. By hypothesis, no record of the contents of the email is kept; even recording whether a given email met the criteria for insertion of the warning would arguably acquire the “purport, meaning, or substance” of the email—namely, that it had something to do with Democratic party politics—and therefore constitute an interception. The harm that is suffered in the case of the DOJ warning, therefore, is the harm to speech resulting from the perceived threat that the government is intercepting email—that is, preserving the contents of emails as a basis for later prosecution. The harm is a First Amendment harm, and must be justified or subjected to sanction under existing Bivens 205 and First Amendment doctrine.

Ryan Calo has recently considered a number of similar scenarios. 206 For example, there is the well-ruminated case of Gmail. Suppose that Gmail scans an incoming email message from a sender offering to sell a bicycle, and triggered by the word “bicycle” in the message, displays an advertisement for cheaper bicycles alongside the email in the recipient’s inbox. 207 One can understand why the sender of the email might not prefer that situation. But the harm the sender is experiencing is a harm to their competitive interests, and it seems a little odd to describe the harm of a lost sale due to competitive advertising as a “privacy harm.” Indeed, if the information in the advertisement is accurate, the display of the ad might actually result in a net social welfare benefit.

Calo also postulates a series of clever hypothetical situations in which a computer is set up to scan private communications and, without any human input or review, take negative actions based on that review. Imagine a computer that, on the basis of an automated scan of a person’s communications, makes binding determinations about that person’s eligibility for welfare benefits, or purges someone from the voting rolls, or determines that a parent owes additional child support.

199 See RICHARD A. POSNER, ECONOMIC ANALYSIS OF LAW 953 (8th ed. 2011).
200 Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104-191, 110 Stat. 1936.
201 Naturally the information obtained or the consequences to the victim might be important in calculating damages. But they are not important in determining if a violation occurred, unlike some other privacy laws, which protect information only if it reaches a certain sensitivity threshold.
202 My thanks to Orin Kerr for suggesting this hypothetical.
203 See supra note 188 (discussing the ascription of “knowledge” in CHOPRA & WHITE, supra note 21, at 76, to computer processing of reliably obtained data).
204 Nearly all tort and contract claims require proof of damages as an element of the claim. The Wiretap Act does not. See 18 U.S.C. § 2520(c)(2) (2006) (providing statutory damages for violations).

Alternatively, imagine that the police set up a computer to scan a psychologist’s patient session notes, such that every time the computer encounters evidence of anything illegal, it will issue a citation to the person who is the subject of the notes. None of the content of the notes will be disclosed to the police, however. 208 These examples are certainly outrageous. But the harm in all of them is harm to the subject’s due process rights. 209 That is, by hypothesis, these situations all involve a computer making a binding determination as to someone’s legal rights, but leaving no record of the basis for that determination whatsoever. It would thus be a complete mystery in these scenarios why the computer made the determination that it did—indeed, that is by design, as the entire purpose of having the computer make the determination is to insulate the information serving as the basis for the decision from human review. But judicial review is a form of human review. If the records are not preserved, then there is effectively no way the subjects of these decisions can challenge them. And if the records are preserved, then the privacy violation is manifest.

An objection might be raised that, while other harms may exist in the above examples, that does not rule out the presence of privacy harms resulting from automated processing as well. In particular, it might be argued that the mere perception of being monitored, whether or not any information is preserved for later human review, has an effect on people—it increases their inhibitions, “dampen[s] creativity, skew[s] [their] thoughts and actions toward the mainstream, and hinder[s] self-development in much the same way as actual ubiquitous surveillance.” 210 The mere presence of a non-operational security camera, for example, has been demonstrated to influence behavior. 211 If the effect is indistinguishable from a privacy harm, then it is a privacy harm.

There are two responses to this argument. First, a sense of being monitored is not a reliable indicator of whether one knows that one’s actions are being monitored. As Calo has observed, browsing the Internet tends to produce very little subjective sense of being watched, even if the user is aware of the collection and use of information by others; whereas individual behavior can be affected by something as trivial as drawing a pair of eyes on a tip jar. 212 Surely the eyes on the tip jar do not invade privacy in any way, even if they have some effect on behavior. 213 Second, and relatedly, more is required to have an invasion of privacy than a mere sense of a loss of privacy. Take the person who genuinely but unreasonably believes that their actions are being monitored by Martians. That person might exhibit all the symptoms of constant, obtrusive surveillance. And yet there is no such surveillance; the person’s actions are being monitored by no one. While the person may lack a sense of privacy, his or her unobserved behavior is in fact private. 214 The converse is true as well: someone who is completely unaware of surveillance will maintain a sense of privacy even though he or she has none.

Again, to the extent there are other goals impeded by some automated processing of the contents of communications, other legal regulatory schemes are better disposed to achieve those goals. Competitive harms are governed by trademark law, unfair competition law, antitrust law, and advertising law. The Due Process Clauses of the Fifth and Fourteenth Amendments have a large body of doctrine associated with them to adjudicate what constitutes fair procedures. The Wiretap Act’s core competencies lie elsewhere. The Act protects the privacy of communications—the penalties attach to interception, with only limited categorical exceptions, not measured according to the use or potential harm that results. The intrusion itself is the harm the Act prevents. The few instances in which the Wiretap Act requires an examination of the context of an interception—the consent exception, or the ordinary course of business exception, for example—are among the most problematic and most administratively difficult provisions in the Act to apply. Importing contextual determinations into a communications privacy statute reduces the effectiveness of the statute. Using the Wiretap Act as a more general privacy regulation is problematic because the nature of privacy is too amorphous to serve as the clear trigger for liability a communications privacy statute requires.

205 Bivens v. Six Unknown Named Agents, 403 U.S. 388 (1971). The Supreme Court has never definitively ruled on the issue, but lower courts have held that First Amendment violations can be the subject of a Bivens action. See, e.g., Haynesworth v. Miller, 820 F.2d 1245, 1255 (D.C. Cir. 1987), overruled in part on other grounds by Hartman v. Moore, 547 U.S. 250, 256–57 (2006).
206 See Calo, supra note 31.
207 Id. at 1152. A similar hypothetical is posed in Miller, supra note 21, at 1607.
208 Calo, supra note 31, at 1152. Even the subject of the violation might be enough to disclose the “meaning, purport, or substance” of the communication, however; so in order to avoid qualifying as an interception, we would have to assume that neither the police nor the courts are informed what violation the ticket is for.
209 See generally Danielle Keats Citron, Technological Due Process, 85 WASH. U. L. REV. 1249 (2008). The psychotherapy notes hypothetical also poses a harm to an important public policy, namely not dissuading people from seeking treatment for mental health problems. That is the same public policy that underlies the psychotherapist-patient privilege in evidence law. See Jaffee v. Redmond, 518 U.S. 1 (1996). But the privilege is protective of privacy because it achieves its purpose by restricting the forced disclosure of psychotherapy discussions in a court proceeding. The situation imagined here by contrast involves, by definition, no disclosure. Rather, the computer in effect imposes a financial penalty on some patients without disclosing why. The harm is the same as if the state imposed a highly burdensome tax on any psychotherapist visits by convicted criminals.
210 M. Ryan Calo, People Can Be So Fake: A New Dimension to Privacy and Technology Scholarship, 114 PENN ST. L. REV. 809, 844–45 (2010).
211 Id. at 841 (citing Thomas J.L. van Rompay et al., The Eye of the Camera: Effects of Security Cameras on Prosocial Behavior, 41 ENV’T & BEHAV. 60, 61 (2009)).
212 See id. at 841, 847. The enhanced advertising bots envisioned by Chopra and White that appear to initiate a conversation with the user are another example of this phenomenon. See CHOPRA & WHITE, supra note 21, at 112–13.
213 One could, of course, simply define “invasion of privacy” as anything that influences a person’s behavior to be more mainstream or prosocial. But that has the disadvantage of sapping the concept of privacy of any real clarifying power, in the same way the broad definition of autonomy did above. See supra text accompanying note 195.
214 In response to the objection that what defines a privacy harm is a reasonable sense of the loss of privacy, it is unclear why the word “reasonable” should make any difference. The only difference between objectively reasonable false beliefs and objectively unreasonable false beliefs is how many people hold them. But if the false subjective sense of one person that they are being monitored is not an invasion of that person’s privacy, it is not clear why the false subjective sense of 1000 people would be.

CONCLUSION

We are now only at the advent of the use of computers to assist with tasks that previously were the sole province of human judgment. This development is one that holds considerable promise for assisting humans in coping with some of the consequences of the digital age,


namely the flood of information that has resulted from the increased capacity to collect, store, copy, and transmit data. Automatic processing can help by categorizing, filtering, routing, or identifying patterns in that data and taking appropriate actions, without the need for human input.

Such automated processing does not by itself pose any threat to privacy. Although there is a tendency to anthropomorphize computers, just like we anthropomorphize cars and toasters, a computer scanning an email is the functional equivalent of a thermostat turning on the heat. A thermostat is not a surveillance device; it does not monitor a house and make a decision about what temperature the house should be. It mechanically triggers a switch according to its programming. Automated processing of communications is similar. There is therefore no need to erect a legal barrier to such processing to protect privacy, and the current Wiretap Act does not impose one. The Act has always required at least the prospect of human review, and not only because it was initially drafted in 1968. Rather, it is because, as the drafters of the ECPA in 1986 understood, computer monitoring is qualitatively different from human monitoring. It is the threat of human use of personal information that reduces privacy, and not simply that one’s information may be used in some way.

It is too soon to tell exactly how much value there will be in automatically scanning and processing communications in situations that do not fall within an exception to the Wiretap Act—where prior consent cannot be obtained, and where the purpose is something other than operating or maintaining a computer network. But it appears likely that at least some useful applications would be impeded. For example, environmental controls based on detecting whether there is conversation or other sounds within a given room would require obtrusive notices to be placed around the room to ensure implied consent, perhaps detracting from the room’s aesthetics and perhaps leading to some uncertainty as to whether all users of the room will see them. There is no need to bear those costs, however, in the name of privacy.
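For the technically inclined, the thermostat analogy can be made concrete. The kind of record-free automated scan this Article describes amounts to a few lines of code: the program examines a message transiently, triggers an action, and retains nothing a human could later review. The keyword list, function name, and messages below are invented for illustration; no actual provider's filter is depicted.

```python
# Hypothetical sketch of "automated processing that leaves no record."
# The message body is examined only in memory; nothing about its
# contents is stored, logged, or disclosed to any person.

SPAM_KEYWORDS = {"lottery", "winner", "inheritance"}  # invented example list

def route_message(body: str) -> str:
    """Return the folder a message should be routed to.

    Like a thermostat tripping a switch, this function mechanically
    applies a condition to its input and discards the input afterward.
    """
    # Normalize each word: trim trailing punctuation, lowercase.
    words = {w.strip(".,!?").lower() for w in body.split()}
    if words & SPAM_KEYWORDS:  # any keyword present?
        return "spam"
    return "inbox"

print(route_message("You are the lucky WINNER of our lottery!"))  # spam
print(route_message("Lunch tomorrow?"))                           # inbox
```

Once `route_message` returns, no trace of the message's "purport, meaning, or substance" survives anywhere in the system—which is precisely the condition under which, on the argument above, no interception has occurred.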
