FROM THE EDITORS

Spider-Man, Hubris, and the Future of Security and Privacy

Shari Lawrence Pfleeger, Editor in Chief

How do we realistically assess the past, present, and future of our security and privacy technology? Like Spider-Man, we find that with great computing power comes great responsibility—to use our power wisely, to design high-quality security and privacy applications, and to quickly correct errors and improve results. The difficulty of predicting the future of security and privacy is often accompanied by arrogance about our abilities to understand issues and solve problems effectively. But this scientific hubris can also be an opportunity to look back, applaud our successes, evaluate our mistakes, and take corrective action.

This article is my last as editor in chief of IEEE Security & Privacy. It's also my last professional article; I'm refocusing my efforts on helping mend America's fraying social safety net. So it seems appropriate to look closely at security and privacy in a societal context: how we got here, where we are now, and what we need to ensure that we use our great power responsibly.

How We Got Here

In 1951, Lord Bowden reported this conversation with Professor Douglas Hartree, the builder of England's first differential analyzer:1

He told me that, in his opinion, all the calculations that would ever be needed in this country could be done on the three digital computers which were then being built—one in Cambridge, one in Teddington [at the National Physical Laboratory] and one in Manchester. No one else, he said, would ever need machines of their own, or would be able to afford to buy them. He added that machines were exceedingly difficult to use, and could not be trusted to anyone who was not a professional mathematician, and he advised [UK electronics firm] Ferranti to get out of the business and abandon the idea of selling any more.

I. Bernard Cohen assures us that Howard Aiken, a designer of IBM's Mark I computer, made a similar statement in 1952:2 "Originally one thought that if there were a half dozen large computers in [the US], hidden away in research laboratories, this would take care of all requirements we had throughout the country."

Clearly, we computer scientists (like many other professionals) are not very good at peering into the future. In fact, Lord Bowden was quite blunt: "It is amazing how completely wrong a great man can be."1

"The Rugged Manifesto," described in the sidebar, points out that "code will be used in ways I [the developer] cannot anticipate, in ways it was not designed, and for longer than it was ever intended." This statement encapsulates how we got from a small network designed for collaboration to a global technology infrastructure on which much of the world's functioning depends. Let's look more carefully at this evolution, because its course hints at what we can do to harness and apply our power.

Many of you know the history of the Internet and its genesis from small US defense project to mammoth, interconnected infrastructure. Just as Hartree and Aiken thought too small, so did the Internet's builders. When the first "worm" infected the Internet in 1988, many people were surprised; they saw the Internet as a mechanism for education and collaboration, not a vector for mischief or malice. As Virginia Tech historian Janet Abbate notes, "They thought they were building a classroom, and it turned into a bank."3

What were technologists and policymakers thinking during the US government's initial foray into building an electronic infrastructure? Kirk Lougheed, one of the inventors of the Border Gateway Protocol (BGP), notes that, "In the early days of the Internet, getting stuff to work was the primary goal. There was no concept that people would use this to do malicious things. … Security was


not a big issue." That function-first mentality continues today: functionality über alles, implemented quickly. "Nash," a member of L0pht, an early hacker collective, describes this philosophy: "It's get it up, get it running as fast as we can. Let's make some money. … There's this tremendous push to get code out the door, and we'll fix it later."4 But David Clark, MIT computer scientist, dispels the notion that Internet designers never thought about security. He says, "It's not that we didn't think about security. … We knew that there were untrustworthy people out there, and we thought we could exclude them."3

The next step toward insecurity occurred when packet-switched communications replaced telephone networks: technologists traded a stable design for one that was inherently riskier. To see why, examine where intelligence resides in each type of network. The public switched telephone networks' design reflected the phone companies' investment in their own capabilities—a system that is smart in the center and dumb at the edges. That is, the phone companies' switches ran everything, with little intelligence in business or personal handsets. The Internet's design is the reverse: the network simply carries data, but the edges are smart. "The center is a computationally powerful but fundamentally dumb collection of routers and transmission channels."5 This design makes it easy for new users to join the network. But the lack of smarts at the center has a security price: it's difficult to provide centralized security and privacy. In the Internet's early days, when the users knew and trusted one another, this price didn't much matter. As more people joined the network, the equation changed. The resulting design is more like the Wild West, perhaps because of an underlying principle of caveat emptor—let the


buyer beware. "From its unlikely roots in a Pentagon research agency, the Internet developed into a global communications network with no checkpoints, no tariffs, no police, no army, no regulators and no passports or any other reliable way to check a fellow user's identity. Governments would eventually insinuate themselves into cyberspace—to enforce their laws, impose security measures and attack one another—but belatedly and incompletely."3 In other words, the Internet architecture leaves Internet users to protect themselves; few inherent design constructs assure security and privacy in Internet traversals and transactions. The trust model relies largely on the parties involved to be who they say they are, affiliated with the organizations they say they are with, taking the actions they say they are taking, and leading to the results they claim to be realizing. Clearly, this model has huge weaknesses; among other sites, the Risks Forum (http://catless.ncl.ac.uk/Risks) illustrates that the Internet is rife with weaknesses that have long been accidentally and intentionally exploited.
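To see concretely what "few inherent design constructs" means, here is a minimal sketch, in Python, of the difference between trusting a self-asserted sender and verifying one with a message authentication code. The shared key, sender names, and message fields are invented for illustration; nothing like this check is supplied by basic Internet packet delivery itself.

```python
import hashlib
import hmac

SHARED_KEY = b"example-key-known-only-to-alice-and-bob"  # hypothetical pre-shared key

def sign(sender: str, body: str) -> str:
    """Attach a MAC so the receiver can check who really sent the message."""
    msg = f"{sender}|{body}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def verify(sender: str, body: str, tag: str) -> bool:
    """Recompute the MAC and compare in constant time."""
    return hmac.compare_digest(sign(sender, body), tag)

# The bare-Internet trust model: believe whatever the message claims.
claimed = {"sender": "alice", "body": "pay mallory $100"}
print("trusted blindly:", claimed["sender"])  # nothing stops a forged sender field

# A verified exchange: a forged sender or an altered body fails the check.
tag = sign("alice", "pay bob $100")
print(verify("alice", "pay bob $100", tag))    # True
print(verify("mallory", "pay bob $100", tag))  # False: the identity claim is rejected
```

Verification of this kind is bolted on by higher layers (TLS, DNSSEC, application logins) after the fact, which is exactly the retrofit pattern the rest of this article describes.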

With Greater Power Comes Greater Responsibility

But that was then and this is now. In the interim, several players have tried to be more responsible, by warning others, protecting their own sites and users, or providing services and applications that offer protection. For example, as early as 1973, Ethernet coinventor and 3Com founder Bob Metcalfe warned about significant risks, telling the Arpa Network Working Group that it was very easy for outsiders to log in to the network. He wrote, “All of this would be quite humorous and cause for raucous eye winking and elbow nudging, if it weren’t for the fact that in recent weeks at least two major serving hosts were crashed under suspicious circumstances

by people who knew what they were risking; on yet a third system, the system wheel password was compromised—by two high school students in Los Angeles no less. We suspect that the number of dangerous security violations is larger than any of us know [and] is growing."6

Even when we know about risks, we often ignore them. For instance, the 1988 Morris Worm exploited buffer overflows, a problem identified two decades earlier. Today's news media continue to report buffer overflow–based attacks and accidents. Similarly, BGP (sometimes called the Napkin Protocol because several computer scientists devised it on table napkins while brainstorming the solution to a routing problem) was full of security holes. "Warnings about the risks inherent in BGP are almost as old as the protocol itself. 'I knew that routing security was a problem,' Columbia University computer scientist Steven M. Bellovin said. 'Seeing this conceptually is fairly easy and straightforward. Sorting it out in terms of the engineering is fiendishly difficult.'"7 Boston University professor Sharon Goldberg recently asked why BGPSEC, a solution to the two-decade-old problem, still isn't deployed.8 The answer rests with the Internet's size and complexity and the need for substantial numbers of users to change their ways. "BGP is a global protocol, running across organizational and national borders. As such, it lacks a single centralized authority that can mandate the deployment of a security solution; instead, every organization can autonomously decide which routing security solutions it will deploy in its own network. Thus, the deployment becomes a coordination game among thousands of independently operated networks. This is further complicated by the

fact that many security solutions do not work well unless a large number of networks deploy them."8

In fact, Internet-related actors often work toward different ends. "The common purpose that launched and nurtured it no longer prevails. There are, and have been for some time, important and powerful players that make up the Internet milieu with interests directly at odds with each other. The Internet is not a single happy family of people dedicated to universal packet carriage. There is contention among the players. … The technical architecture must accommodate the tussles of society while continuing to achieve its traditional goals of scalability, reliability, and evolvability. This expansion of the Internet's architectural goals is a difficult, but central technical problem."9

Indeed, vendors sometimes try to build security and privacy in, just as security experts suggest. But even then, users push back, based on arguments about cost, complexity, or convenience. For instance, Guardian technology reporter Alex Hern analyzed the privacy implications of Microsoft's recently issued Windows 10 operating system. He notes that, "The European digital rights organisation (EDRi) sums up the company's 45 pages of terms and conditions by saying: 'Microsoft basically grants itself very broad rights to collect everything you do, say and write with and on your devices in order to sell more targeted advertising or to sell your data to third parties.'"10

But some argue that Microsoft is acting responsibly, giving users flexibility in some cases and forcing good security and privacy practice in others. "Users have attacked Windows 10 for only offering two settings when it comes to Windows Update: either install and restart immediately, or install and ask permission to restart. The option to not install updates does not appear to be present on the base version of the OS. But that decision chimes with the advice of security experts, who say that the number one thing for staying safe online is to install every security update immediately."10

In many cases, a security practice is effective only when everyone implements it; BGPSEC is an example of how a few holdouts can have a huge, adverse impact. Unfortunately, a community perspective that encourages good practice to develop "herd immunity" is sometimes viewed by individuals as annoying or unnecessary. And sometimes the objection is simply that no one wants to pay for security. "Industry skepticism [about BGPSEC adoption] was rooted in the idea that security was a bad bet for business. Nobody liked to get hacked, but companies were not legally liable for the damages. Protective measures, meanwhile, carried costs that few wanted to pay, such as limited features, slowed performance or higher sticker prices for gear and software. … For now—after years of warnings by [Radia] Perlman, Bellovin, [Steve] Kent, [Richard] Clarke and many others—perhaps the most telling statistic is the percentage of Internet traffic currently secured by [BGPSEC], the new system of cryptographic network keys: zero."7

Can Other Interventions Work?

As with other technologies, governments and regulating bodies can intervene to impose structure, guidance, standards, or penalties on security practice. But well-intentioned interventions that sound perfect in theory don't always work well in practice. For instance, consider the effects of Europe's imposition of a right to be forgotten. This right is often expressed as an argument about fairness: in erasing erroneous online material or evidence of our youthful mistakes, we eliminate material that could unfairly keep us from future opportunities. But Harvard's Jonathan Zittrain points out that "the practice of shaping what stays and what goes from the database is hopelessly individualistic. By allowing the delisting of information that is incorrect, outdated or harmful for individuals, who knows what else will follow. It sets us on a path … where the internet becomes the lowest common denominator result of what all the world's countries and courts are prepared to leave behind."11

There are many other examples of inherent unfairness in the way the Internet works. For instance, Latanya Sweeney, chief technology officer of the US Federal Trade Commission, demonstrated that the targeted advertisements Google's search engine shows its users differ dramatically based on perceived racial differences derived from a user's name.12 It's not clear whether the bias comes from the way the search engine provides advertising or whether Google is merely reflecting society's bias. What is clear is that the bias is strongly reflected in advertisement choices.

What about using our technology to create algorithms that implement fairness and balance? Perhaps we can design programs that are race neutral, gender neutral, or otherwise unbiased. Then voluntary agreements or even regulation


could impose fairness rules, as happens in advertising or banking.13 Unfortunately, this is easier said than done. In an interview, Microsoft Research’s Cynthia Dwork identifies significant biases in the algorithms we use to make decisions and predictions. Indeed, algorithms attempting to be fair can sometimes bias decisions even more. “Design choices in algorithms embody value judgments and therefore bias the way systems operate. … These things are subtle: For example, designing an algorithm for targeted advertising that is gender-neutral is more complicated than simply ensuring that gender is ignored. … [C]lassification rules obtained by machine learning are not immune from bias, especially when historical data incorporates bias.”14 Similarly, facial recognition software relies on algorithms for identification and authentication: good security and privacy goals but with a flawed implementation. Use of facial recognition software is increasing, even though its error rate can be as high as 20 percent. It is “being eagerly adopted by dozens of police departments around the country to pursue drug dealers, prostitutes and other conventional criminal suspects. But because it is being used with few guidelines and with little oversight or public disclosure, it is raising questions of privacy and concerns about potential misuse.”15
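Dwork's caution that ignoring a protected attribute isn't enough can be made concrete with a toy sketch. In the hypothetical data below, the protected attribute is never shown to the decision rule, but a correlated proxy (an invented "zip code" feature) plus biased historical labels reproduce, and even sharpen, the original disparity; all numbers are made up for illustration.

```python
import random

random.seed(0)

# Toy historical records: `group` is the protected attribute; `zip_code` is a
# correlated proxy feature; past approvals were biased against group B.
records = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    if group == "A":
        zip_code = 1 if random.random() < 0.9 else 2
    else:
        zip_code = 2 if random.random() < 0.9 else 1
    approved = random.random() < (0.7 if group == "A" else 0.3)  # biased history
    records.append((group, zip_code, approved))

# A "fair" rule that never sees the protected attribute: learn the historical
# approval rate for each zip code and approve where that rate is high enough.
rate_by_zip = {}
for z in (1, 2):
    in_zip = [r for r in records if r[1] == z]
    rate_by_zip[z] = sum(r[2] for r in in_zip) / len(in_zip)

def predict(zip_code: int) -> bool:
    return rate_by_zip[zip_code] >= 0.5  # arbitrary illustrative threshold

# The rule ignores group entirely, yet its decisions split sharply along it.
for g in ("A", "B"):
    members = [r for r in records if r[0] == g]
    share = sum(predict(r[1]) for r in members) / len(members)
    print(f"group {g}: predicted approval rate {share:.2f}")
```

The same mechanism, redundant encoding of the protected attribute in other features, is why fairness has to be designed and measured explicitly rather than achieved by deleting a column.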

A Needed Course Correction

Today, information technology is embraced by or embedded in almost everything, from applications remotely controlling your home devices to collision-avoidance software for automobiles. Much of it holds great promise to save energy, money, and lives. But the public trusts technology more than is deserved. Technology is often fielded when it's good enough, even if it hasn't been fully tested. "It's closer to


an assemblage of kludges … surviving only because of an industry-wide penchant for patching over problems rather than replacing the rot."7 When we add the Internet to the mix, vulnerabilities in Internet design and implementation become pathways that enable malicious actions.

Exacerbating these problems is the nature of the supply chain: we build systems (hardware and software) from parts we purchase or commission from other suppliers. This decentralization makes it difficult for manufacturers to ensure that individual parts have the latest updates and that these updates work well together. Feature interaction—the unanticipated interference of one function by another—has been a known, unsolved problem in the telecommunications industry for decades,16 and complex supply chains amplify its adverse effects.

Abandoning the Internet is certainly not an option, and embracing new business models is difficult and can take decades. After all, most technologies have had long adoption periods with gentle slopes, even in the US. Nevertheless, we need a course correction soon, because the clamor for change is growing—especially as cyber is considered to be an option in warfare17 and businesses rush to move their products to the Internet of Things. Indeed, many of the individuals and enterprises building and selling these connected products fail to consider the risks of embedding software in items and then connecting them in networks. "The selling point for these well-connected objects is added convenience and better safety. In reality, it is a fast-motion train wreck in privacy and security."18

The press, in its zeal to notify the public, often overstates the scale of Internet-related risks. But the press is stepping in where businesses are reluctant to highlight problems. As a result, the growing number and

kinds of problems, many of which are difficult or impossible to resolve after they occur, are increasingly on the public’s radar. Major newspapers such as the New York Times and the Guardian frequently write editorials about Internet issues, and even law textbooks note that it’s increasingly easy for criminals to perpetrate crimes and hide on the Internet.19 Business shares the blame, but so do we, as security and privacy practitioners, researchers, and policymakers. “In designing the network, [Internet Hall of Famer Steve] Crocker said, ‘We could have done more, and most of what we did was in response to issues as opposed to in anticipation of issues.’”3

Step by Step

Crocker’s comment is, in some sense, the basis for our next steps. We need to review how we might have anticipated issues, rather than waiting for problems and then patching them. What concrete steps can we take right now to put us on the path to a more secure and privacy-protected technology infrastructure? To begin, we can encourage business to abandon the Wild West mentality, with “its purely libertarian ethos and Pollyannaishness about technology.”20 I propose we do this through better education, context, and coordination.

Better Education

The need for cybersecurity professionals has been described as great and immediate. So, in many countries, government and business are pushing people into cybersecurity careers. Courses and programs abound to teach the uninitiated a handful of security techniques. For example, a Symantec security specialist posted his list of requirements for entering a security career: understanding TCP/IP and a bit about an operating system, learning

about intrusion detection systems, setting up firewalls and routers, being able to read code, "and if necessary modify[ing] or debug[ging] programs and scripts, … knowing how to debug code … [and familiarizing] yourself with the concept of penetration testing."21 Short courses are especially appealing; in fact, almost anyone can complete a three-month course and become a developer, without any dependability, security, or privacy training.

We need saner policies about who designs and builds our information infrastructure: whom to educate, how, and for how long. It's unacceptable to give "security expert" status to anyone who can breathe and run a virus checker. As with any other essential profession, we must differentiate novices from masters and experts and recognize that expertise and experience in one security aspect doesn't enable someone with narrow programming skills to perform solid security analysis and design or effective penetration testing. Although many organizations and governments are creating certificate and advanced degree curricula, the need is outstripping supply. It's time to stop putting unskilled and inexperienced workers into cybersecurity positions.

Better Context

Our technology is not conceived, created, or used in isolation; it's always embedded in a social context that interacts with the technology and its users. "Technology in isolation, not embedded in any network of human and nonhuman actors, has nothing to stabilize. It is the whole actor network (as distinct from the Internet as a network of technology) that becomes stable, as all the human and nonhuman actors align and harmonize themselves to common (socio-technical) interfaces."9

So we must consider current and proposed innovations in context. At the extreme, we don't want to overreact and dismiss promising or proven technologies because of single categories of use. For example, several high-visibility technologists have warned of artificial intelligence's (AI's) dangers. "In October [2014], Elon Musk called [AI] 'our greatest existential threat,' and equated making machines that think with 'summoning the demon.' In December, Stephen Hawking said 'full [AI] could spell the end of the human race.' And this year, Bill Gates said he was 'concerned about super intelligence,' which he appeared to think was just a few decades away. But the real worry, specialists in the field say, is a computer program rapidly overdoing a single task, with no context. A machine that makes paper clips proceeds unfettered, one example goes, and becomes so proficient that overnight we are drowning in paper clips."22

A more focused example, set in context, is an open letter that a large number of technologists signed and presented at the 2015 International Joint Conferences on Artificial Intelligence. It suggests banning the use of [AI] to make autonomous weapons.23 Evaluating this proposal in this context is far more reasonable than banning AI outright.

And the context should involve aspects of the society in which the technology will be used. Take University of Texas professor Michael Webber's analysis of America's surging interest in renewable energy.24 Only 40 years ago, the US worried about its dependence on oil; now it's in the midst of creating a robust, diverse energy infrastructure that many said couldn't be done. Why? The convergence of three things: "highly functioning markets; stable, forward-thinking policies; and disruptive technologies. … [M]arkets, policies, and technologies are all pointing in the same direction—which is up—and the result is powerful. It goes to show that a public–private–technological partnership is transformative."24 We can use the same triumvirate to provide context, support analysis, and implement more responsible technology.

Better Coordination

However, these triumvirate members haven't traditionally worked well together. To provide sensible context, these and other parts of society must better coordinate. For example, several highly visible automobile hacking incidents were recently reported in the press. In such cases, when a vulnerability is discovered, should researchers report it to business or government before publishing their research papers? Once notified, should business and government notify consumers? There have been many shameful cover-ups to save face or sales or both, sometimes significantly harming technology users. A coordinated, contextual solution involves all parties with a stake in the outcome. "Both researchers and companies are going to need to meet somewhere in the middle … if the goal is to make products safer for consumers as soon as possible. Perhaps the most important step a company can take is coming up with a coordinated disclosure policy, a set of public guidelines for how it will respond when researchers come forward with problems."25


A Rugged Manifesto

In September 2012, "The Rugged Manifesto" appeared online (www.ruggedsoftware.org), challenging developers to acknowledge their software's shortcomings:

I am rugged and, more importantly, my code is rugged.
I recognize that software has become a foundation of our modern world.
I recognize the awesome responsibility that comes with this foundational role.
I recognize that my code will be used in ways I cannot anticipate, in ways it was not designed, and for longer than it was ever intended.
I recognize that my code will be attacked by talented and persistent adversaries who threaten our physical, economic and national security.
I recognize these things—and I choose to be rugged.
I am rugged because I refuse to be a source of vulnerability or weakness.
I am rugged because I assure my code will support its mission.
I am rugged because my code can face these challenges and persist in spite of them.
I am rugged, not because it is easy, but because it is necessary and I am up for the challenge.

But being “rugged” and writing “rugged code” are easier said than done.

One of the first questions that should be asked at the design stage is: Does this product or service need this technology? Years ago, one of my clients added software to its devices, even though the analog versions of its products were more reliable than the digital ones. The reason? The company thought it needed software to compete in the market. A 2015 Guardian editorial elaborates on similar decisions by automobile manufacturers:26

You may ask why a car needs an internet address in the first place. The bad answer is that lots of other cars have them now, so any new model will want one. The good answer is that computer networks are transforming the world, and in many ways to our benefit. Networks, by their nature, have a tendency to grow—and to grow more valuable as they do. But they have to be secure; and since it is impossible to make them entirely secure, they must also be designed to fail gracefully and as safely as possible. This requires changes in engineering culture, but also in the wider corporate culture: companies that make


things need to learn the hard lessons about openness that have been forced onto software companies in the past 20 years. It took the mainstream software industry years to understand that rewarding reports of security holes with bounties, rather than letting them be sold on the dark market, is sensible and necessary. We do not have that time now that software is so ubiquitous that it is invisible. We are moving towards a world where almost everything will have a computer inside it, and those computers will want to talk to the outside world, either to receive instructions or to report on what is happening around them. This is what is meant by the internet of things. How can we stop it becoming an internet of things that can kill us, or spy on us?

The answer to successful security and privacy implementation must be multidisciplinary and contextual. Its elements involve an understanding of ethics and

behavioral science. We can evaluate a technology's potential effects—both costs and benefits—by simulating likely usage and outcomes. And we can probe its unanticipated (and possibly adverse) effects by using sophisticated testing and adversarial thinking. This approach is much harder than writing code—it requires a team of experts who know a lot more than how to use the latest intrusion detection system. In particular, the team must constantly imagine what could go wrong and what the consequences might be.

Because we're human, we can't expect market forces and good wishes alone to lead to better software, systems, and societies. Regulation and oversight play a role, and many in the press, including the New York Times editorial board, are calling for regulation of automobile, medical device, and other software types.27 If well coordinated, this triumvirate of technology, markets, and policies should push us to implement and sustain better security and privacy. "Companies and governments, too, must realise that if a world of interconnected, highly capable and almost intelligent widgets is to be safe, they must be constantly expecting that it will be unsafe, and actively collecting knowledge about the ways in which it can—and will—be dangerous."27
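As one small illustration of "simulating likely usage and outcomes," the sketch below models a protection whose benefit, like BGPSEC's, depends on how many parties deploy it. The assumption that an attack is stopped only when both networks on a path have deployed the protection, and all the numbers, are invented for the purpose of the example; the point is the shape of the analysis, not the values.

```python
import random

random.seed(1)

def attacks_stopped(adoption: float, trials: int = 20_000) -> float:
    """Fraction of simulated attacks stopped at a given deployment level.

    Illustrative assumption only: an attack is stopped when both the victim's
    network and the transit network it relies on have deployed the protection.
    """
    stopped = 0
    for _ in range(trials):
        victim_deploys = random.random() < adoption
        transit_deploys = random.random() < adoption
        if victim_deploys and transit_deploys:
            stopped += 1
    return stopped / trials

for adoption in (0.10, 0.25, 0.50, 0.75, 0.95):
    print(f"adoption {adoption:4.0%} -> attacks stopped {attacks_stopped(adoption):5.1%}")
```

Even a toy model like this makes the coordination problem visible: at low adoption almost no attacks are stopped, so no individual network sees much private benefit from going first.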

We are developing technology that's increasingly powerful, not only in manipulating hardware and data but also in creating opportunities and changing people's lives. Building and sustaining more pervasive and powerful technology require us to take more responsibility for our ideas and actions. So pin a Spider-Man picture to your office wall to remind you of that responsibility. We aren't the first discipline to learn to view our technology as only one aspect of a large, evolving world

that’s built by and functioning within its societal context. Author Joseph Conrad spent significant time on sailing vessels, in part to inform his writing. “He recognized that technological progress, for all its much-heralded benefits, comes with social and ethical costs. To operate a sailing ship was to master a ‘craft.’ You had to observe and interpret nature, adapt and react to fast-changing conditions, obey without question, decide without doubt, toil without pause. The craft connotes more than a clutch of skills; it is a code for how to live. It turns a sailing ship into a ‘fellowship,’ a community forged by shared values.”28 We too must think of ourselves as active members working in a diverse society. We are more than a clutch of skills—we can use our technological insights and abilities to understand differences, articulate common goals, and enable the community forged by shared values.

References
1. Lord Bowden, "The Language of Computers," 1st Richard Goodman Memorial Lecture, Brighton College of Technology, 2 May 1969; www.chilton-computing.org.uk/acl/literature/reports/p014.htm.
2. I.B. Cohen, "Howard Aiken on the Number of Computers Needed for the Nation," IEEE Annals of the History of Computing, vol. 20, no. 3, 1998, pp. 27–32.
3. C. Timberg, "Net of Insecurity: A Flaw in the Design," Washington Post, 30 May 2015; www.washingtonpost.com/sf/business/2015/05/30/net-of-insecurity-part-1.
4. C. Timberg, "Net of Insecurity: A Disaster Foretold and Ignored," Washington Post, 22 June 2015; www.washingtonpost.com/sf/business/2015/06/22/net-of-insecurity-part-3.
5. S. Bellovin et al., "Security Implications of Applying the Communications Assistance to Law Enforcement Act to Voice Over IP," Int'l Trial Attorneys Association, 13 June 2006; http://privacyink.org/pdf/CALEAVOIPreport.pdf.
6. B. Metcalfe, "The Stockings Were Hung by the Chimney with Care," Arpa Network Working Group, Dec. 1973; http://tools.ietf.org/html/rfc602.
7. C. Timberg, "Net of Insecurity: The Long Life of a Quick Fix," Washington Post, 31 May 2015; www.washingtonpost.com/sf/business/2015/05/31/net-of-insecurity-part-2.
8. S. Goldberg, "Why Is It Taking So Long to Secure Internet Routing?," ACM Queue, vol. 12, no. 8, 11 Sept. 2014; http://queue.acm.org/detail.cfm?id=2668966.
9. D.D. Clark et al., "Tussles in Cyberspace: Defining Tomorrow's Internet," IEEE/ACM Trans. Networking, vol. 13, no. 3, 2005, pp. 462–475; http://groups.csail.mit.edu/ana/Publications/PubPDFs/Tussle%20in%20Cyberspace%20Defining%20Tomorrows%20Internet%202005%27s%20Internet.pdf.
10. A. Hern, "Windows 10: Microsoft under Attack over Privacy," Guardian, 1 Aug. 2015; www.theguardian.com/technology/2015/jul/31/windows-10-microsoft-faces-criticism-over-privacy-default-settings.
11. J. Powles, "Right to Be Forgotten: Swiss Cheese, Internet or Database of Ruin?," Guardian, 1 Aug. 2015; www.theguardian.com/technology/2015/aug/01/right-to-be-forgotten-google-swiss-cheese-internet-database-of-ruin.
12. L. Sweeney, "Discrimination in Online Ad Delivery," ACM Queue, vol. 11, no. 3, 2013; http://queue.acm.org/detail.cfm?id=2460278.



13. C. Dwork et al., "Fairness through Awareness," 29 Nov. 2011; http://arxiv.org/pdf/1104.3913v2.pdf.
14. C. Cain Miller, "Algorithms and Bias: Q. and A. with Cynthia Dwork," New York Times, 11 Aug. 2015; www.nytimes.com/2015/08/11/upshot/algorithms-and-bias-q-and-a-with-cynthia-dwork.html.
15. T. Williams, "Facial Recognition Software Moves from Overseas Wars to Local Police," New York Times, 12 Aug. 2015; www.nytimes.com/2015/08/13/us/facial-recognition-software-moves-from-overseas-wars-to-local-police.html.
16. P. Zave, "Feature Interactions and Formal Specifications in Telecommunications," Computer, vol. 26, no. 8, 1993, pp. 20–28.
17. E. Heginbotham et al., The U.S.–China Military Scorecard: Forces, Geography, and the Evolving Balance of Power, 1996–2017, tech. report RR392-AF, RAND, 2015; www.rand.org/pubs/research_reports/RR392.html#download.
18. Z. Tufekci, "'Smart' Objects, Dumb Risks," New York Times, 11 Aug. 2015; www.nytimes.com/2015/08/11/opinion/zeynep-tufekci-why-smart-objects-may-be-a-dumb-idea.html.
19. J. Israel et al., White Collar Crime: Law and Practice, 4th ed., West Academic Publishing, 2015.
20. D. Peck, "Silicon Valley Is Not a Force for Good," Atlantic, vol. 316, no. 1, 2015, p. 88.


21. D. Parker, "TCP/IP Skills Required for Security Analysts," Symantec, 2 Nov. 2010; www.symantec.com/connect/articles/tcpip-skills-required-security-analysts.
22. Q. Hardy, "The Real Threat Posed by Powerful Computers," New York Times, 11 July 2015; http://bits.blogs.nytimes.com/2015/07/11/the-more-real-threat-posed-by-powerful-computers.
23. "Autonomous Weapons: An Open Letter from AI and Robotics Researchers," Int'l Joint Confs. Artificial Intelligence, 28 July 2015; http://futureoflife.org/AI/open_letter_autonomous_weapons.
24. M.E. Webber, "Lessons from the Shale Revolution," Mechanical Engineering, Oct. 2013, p. 16; www.webberenergygroup.com/wpnew/wp-content/uploads/webber-asme-me-magazine-shale-gas-lessons-october-2013.pdf.
25. A. Peterson, "Consumers May Be the Big Losers When Companies Hide Cybersecurity Problems," Washington Post, 17 Aug. 2015; www.washingtonpost.com/news/the-switch/wp/2015/08/17/consumers-may-be-the-big-losers-when-companies-hide-cybersecurity-problems.
26. "The Guardian View on Car Computer Hacking: Act Now," editorial, Guardian, 26 July 2015; www.theguardian.com/commentisfree/2015/jul/26/the-guardian-view-on-car-computer-hacking-act-now.
27. "Regulators Should Develop Rules to Protect Cars from Hackers," New York Times Sunday Review, 8 Aug. 2015; www.nytimes.com/2015/08/09/opinion/sunday/regulators-should-develop-rules-to-protect-cars-from-hackers.html.
28. M. Jasanoff, "At Sea with Joseph Conrad," New York Times, 9 Aug. 2015; www.nytimes.com/2015/08/09/opinion/sunday/at-sea-with-joseph-conrad.html.