© Springer Science+Business Media Dordrecht 2015
Serge Gutwirth, Ronald Leenes and Paul de Hert (eds.), Reforming European Data Protection Law, Law, Governance and Technology Series 20, DOI 10.1007/978-94-017-9385-8_10


10. Privacy Versus Security: Problems and Possibilities for the Trade-Off Model



Govert Valkenburg
Faculty of Arts and Social Sciences, Maastricht University, Maastricht, The Netherlands
Abstract

Considerable criticism has been levelled against thinking of privacy and security as standing in a trade-off relation. Accepting this criticism, this paper explores to what use the trade-off model can still be put. In specific situations, it makes sense to think of privacy and security as simple concepts related in the form of a trade-off, even though it has been widely argued that such a simple structure misrepresents concepts that are far too complex for it. As a first step, the sociotechnical analysis in this paper further highlights the complexities of the practice of body scanners installed at airports for security purposes. These complexities contribute further to rendering a simple privacy/security trade-off untenable. As a second step, however, the same analysis is thought through again so as to highlight opportunities to use the deliberately simple structure of the trade-off model to overcome part of its own shortcomings. On closer inspection, the empirical inaccuracy of the trade-off model only becomes problematic if the model is used to justify security measures that encroach on privacy: “this small piece of privacy must be sacrificed, as this additional security is indispensable”. The trade-off model nonetheless retains some right to exist. It is therefore suggested that the model be used, on the one hand, as a heuristic device to trace potential difficulties in the application of a security technology and, on the other hand, as a framing that by its simplicity and appeal lends impetus to a particular discourse.


Keywords
Privacy · Security · Trade-off · Airport security



10.1 The Trade-Off Model Between Privacy and Security


Privacy and security are often discussed as if they were simply opposing concepts; as if a trade-off existed between them. This trade-off between privacy and security has long been criticized as untenable, chiefly because the complexity and multiplicity of either value are incompatible with such a simple relation. At the same time, the trade-off vocabulary is remarkably persistent in various discourses, notably in policy. This persistence suggests that there is something attractive in the model, even though it is from some perspectives plainly incorrect. This paper explores one possible function the trade-off model might yet fulfil despite its empirical inaccuracy: its use as a heuristic model to highlight particular interests in debates.

In its general form, the trade-off model treats privacy and security as two simple concepts, related in such a way that promoting one of them leads to deteriorating the other. The idea is attractive for its simplicity. President Obama used the motif literally in defence of (parts of) the NSA activities revealed by Edward Snowden.1 European policy making, too, is pervasively troubled by thinking in terms of a trade-off.2 Apart from the simplicity ingrained in this and similar trade-offs, the rhetoric is also powerful in the particular case of privacy versus security: the latter easily trumps the former, and who would not give up some of their privacy if it helps prevent terrorist attacks?3 This works even if the security risks are poorly specified,4 and it certainly works against the background of an increasingly pervasive identification of threats.5

A range of criticisms has been levelled against the model, which can be roughly divided into two clusters. On the one hand, there are internal criticisms that address the validity of the model. Their bottom line is that the overly simplistic representation offered by the trade-off model can never accurately capture the complexities and intricacies of how privacy and security are implemented in practice. On the other hand, there are criticisms that I call external, which concern how the model is used. In general, they hold that the model is typically used to impose fallacious choices on a public.6

Regarding the internal validity of the model, it is often argued that the concepts of privacy and security are not simple but complex. Given their heterogeneous constituents, it is hard if not impossible to articulate how a zero-sum relation between them would be produced.7 The alleged simplicity is already belied by the fact that security and privacy are enshrined in radically different ways in EU and US law.8 Similarly, the model is argued to neglect the many examples of interventions that are good both for privacy and for security, and even interventions that promote security through the promotion of privacy or the other way round.9 For example, it is conceivable that citizens are secured against abuse of state power by installing particular privacy-promoting sociotechnical configurations. The model also neglects examples where privacy or security is compromised without a clear benefit for the other side. Finally, even if the trade-off model appears valid in some particular situation, it will certainly merit further qualification as to its limits.10 Even then, a complete sacrifice of one value in favour of the other is still very likely to be unacceptable.

Regarding the use of the model, a different kind of criticism has been voiced. Schneier,11 for example, argues that the trade-off model is typically mobilized as a false choice: citizens are asked to give up some of their basic liberties in return for security. What makes things worse is that this security and the underlying risks are typically poorly specified and not self-evident.12 Additionally, Chandler13 argues that the model is intrinsically biased: when posed in opposition, security easily trumps privacy. After all, a lack of security is potentially life-threatening, whereas a lack of privacy is not. It has also been observed that public perceptions are more intricate and elaborate than a simple trade-off.14 In a similar vein, it has been observed that it is impossible to predict how any balance between privacy and security would be struck by the general public, if only because the public’s trust in authorities importantly influences the outcome.15 The resulting policy is also rather diverse across states.16 In a way, many uses to which the trade-off model is put render public perception, public opinion making and policy making a bit of a caricature.

This paper develops a double perspective on the trade-off model, elaborating both its empirical problems and its practical usability. Importantly, the analysis is not intended to provide a conclusive account of all that matters in privacy and security studies, but to highlight just one exemplary pair of values, privacy and security, and how they might be related in terms of a trade-off model. The generalization and extension of this ideal-typical way of thinking is left to further scholarship.

First, the internal line of criticism levelled against the trade-off model is furthered by empirical analysis. A sociotechnical analysis is presented of active millimetre-wave body scanners used in airport security. This analysis adds yet another reason why the trade-off model is in some respects too simple. As will become clear, sociotechnical practices are not clear-cut implementations of generic design values such as security and privacy. Rather, their development is full of contingencies, particularities, and connections to context in every conceivable sense. These connections render the versions of privacy and security that are eventually found on the work floor of airport security highly particular, contingent and heterogeneous. The analysis thus adds further complexities in the face of which the trade-off model indeed appears hopelessly oversimplified.

Second, rather than stopping at yet another blow to the trade-off model, the paper derives from the same sociotechnical perspective arguments that support a particular use of the model. Indeed, the trade-off model would be problematically oversimplified if it were used as a representational model, a model that structures our understanding of reality. If such a misrepresentation is used discursively and in pursuit of particular justifications, the external critiques just mentioned carry full force. However, the uses of models are much more diverse than representation alone. Particularly in policy contexts, models serve rather to organize intervention. Even if they do not offer the most accurate empirical representation, they may still serve to explain and inform decisions and to provide legitimacy to intervention. I refer to models used this way as interventional models. Heeding the aforementioned external critiques, the challenge is to find a way to use models in this interventional way without incurring the critique of playing inconsiderate discursive games. It must be borne in mind that simplification is often key to arriving at an intervention in the first place. The trade-off model might offer just the simplification needed there, and that is what this paper will investigate. (In the following, the trade-off model will be approached in a rather monolithic way, without much further internal differentiation; this is legitimate at the level of analysis at which the broader argument of this paper is situated.)

This paper is organized as follows. In the second section, the sociotechnical practice of airport scanners is dissected into some of its underlying configurations, so as to articulate how particular versions of privacy and security emerge in the end. In the third section, it is explored how models can be made productive in policy contexts, especially against the background of the complexity articulated in the sociotechnical analysis. In the fourth section, the sociotechnical analysis of airport scanners is connected back to the idea of the interventional use of models, and it is explored how the trade-off model in particular can function as one such model.


10.2 Inside Airport Security Scanners


In order to seek fertile ground on which the trade-off model can flourish, technological developments in airport security offer an interesting research site. This paper focuses on one particular type of body scanner, which has been introduced at airports over the past few years. This type works by means of millimetre waves, by which it detects objects hidden under clothing. Upon detection, the scanner informs the security officer by means of a generic mannequin, on which only those body parts are highlighted where a suspect object has been found. (In due course, the intricacies of how this mannequin representation is created will be discussed, both in terms of its technical implementation and in terms of its privacy implications.)

This active millimetre-wave variety of body scanners has been researched through five expert interviews with developers, policy makers and security operators. Additional background information was sought, mainly from academic and internet sources. The intent has not been to provide a comprehensive account of body scanning technologies, but rather an ideal-typical analysis of how privacy and security appear once a cross-section is made through development and application. This cross-section allows the articulation of connections between privacy and security, such that, again ideal-typically, legitimate and fruitful uses of the trade-off model can be identified.

At face value, the scanner setup described above has some important advantages. First, privacy seems to be respected because no actual picture of the body is made, nor is such a picture displayed. Second, manual body searches are expected to decrease in number, and those that remain will in general be less burdensome as they can be directed at specific body parts rather than the whole body. Third, the automatic assessment is argued to make the whole airport security process quicker, which renders the process more efficient and less costly, but also offers a better customer experience for the traveller. It must be noted that the idea of privacy underlying the present analysis is not some fixed concept such as the famous original notion framed by Warren and Brandeis17 or more recent dissections of the general idea.18 Rather, the focus is on what goes under the heading of privacy in practice: how privacy is ‘performed’.

While this seems a highly privacy-respecting implementation of security, the implementation of security (or of any other design goal, for that matter) is never the straightforward application of a universal idea. It is always a particular idea of security, geared towards a particular practice. Implementing such a particular form of security into security technologies is always an implementation against the backdrop of a particular technological state of the art – not just anything is possible – of legal and regulatory frameworks – not just anything is allowed – and of many other stakes and interests, such as procedural efficiency, customer satisfaction, and all other elements of the context upon which the design and operation of the security technology are contingent. Only if all these contingencies are taken into account can it be made intelligible why and how particular versions of privacy and security become ‘enacted’ in practice.19

To begin at the end: through this particular sociotechnical configuration, security becomes enacted as the detection of materials other than skin and clothing. As one interviewee explains, the scanner detects ‘anomalies’, or literally ‘things that cannot be classified’. This seems legitimate, as skin and clothing are typically the things we would happily allow on board airplanes. On second thought, however, it is not at all clear that this accurately defines the fault line between safe and dangerous items. While the set of anomalies or ‘suspect items’ is indeed likely to include most of the things we do not want inside airplanes, it is also very likely to include many things that should not be particularly worrisome. Indeed, in practice, as another interviewee explains, something as innocent as a business card in a chest pocket triggers an alarm. Thus, the body scanner does not straightforwardly outperform conventional walk-through metal detectors in a quantitative sense when it comes to false alarms. Rather, it produces qualitatively different false alarms (which may, ultimately, still make a quantitative difference).
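The logic just described can be made tangible in a minimal sketch. The following Python fragment is purely illustrative: all names, reference signatures and numerical values are hypothetical, and the actual detection algorithms are proprietary and far more sophisticated. What the sketch does capture is the structural point made above: the decision rule does not separate safe from dangerous, but classifiable from unclassifiable.

```python
# Illustrative sketch only: hypothetical names and values, not the actual
# proprietary detection algorithm. It mirrors the logic described above:
# anything that cannot be classified as skin or clothing raises an alarm.

# Hypothetical reference signatures: assumed millimetre-wave reflectance
# ranges for 'normal' skin and clothing (arbitrary units).
REFERENCE_SIGNATURES = {
    "skin": (0.60, 0.90),
    "clothing": (0.10, 0.35),
}

def classify_region(reflectance: float) -> str:
    """Classify one body region by its measured reflectance."""
    for material, (low, high) in REFERENCE_SIGNATURES.items():
        if low <= reflectance <= high:
            return material
    # Note the fault line: not 'dangerous', merely 'unclassifiable'.
    return "anomaly"

def scan(body_regions: dict[str, float]) -> list[str]:
    """Return the regions to be highlighted on the mannequin."""
    return [region for region, reflectance in body_regions.items()
            if classify_region(reflectance) == "anomaly"]

# A business card in a chest pocket reflects unlike skin or clothing,
# so it is flagged just as a weapon would be.
passenger = {"chest": 0.45, "left_leg": 0.25, "head": 0.75}
print(scan(passenger))  # ['chest']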

Similarly, privacy is enacted in the end as the elimination of bodily detail – recall that the mannequin does not resemble the actual body. However, beneath this apparently neutral and impersonal look, numerous normative choices hide. For the technology to be able to assess whether something about a passenger is suspect, it needs to be inscribed with extensive assumptions about what a ‘normal’ body is: what normal body shapes and sizes are, and what a skin’s normal reflection pattern is under illumination by millimetre waves. In the machine, these assumptions are translated primarily into assumptions about what a ‘normal’ millimetre-wave reflection pattern is. Typically, technologies perform such assumptions rather rigidly and indiscriminately. In this case, the technology renders abnormal those bodies that do not fit. As there is no such thing as a universal, normal body, false alarms are likely to be triggered. Such false alarms de facto render some people abnormal.
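The rigidity of these inscribed assumptions can be illustrated in the same hypothetical terms as the sketch above. Whatever band of reflectance is taken to be ‘normal’, any body measured outside it is flagged, and narrowing the band mechanically enlarges the class of bodies rendered abnormal. The simulation below uses invented numbers, purely for illustration of that structural effect.

```python
import random

# Purely illustrative: hypothetical reflectance values, not measured data.
# A fixed band encodes what the designers assumed a 'normal' body to be.
random.seed(1)

def alarm_rate(band: tuple[float, float], bodies: list[float]) -> float:
    """Fraction of bodies flagged as 'abnormal' under a given band."""
    low, high = band
    flagged = [b for b in bodies if not (low <= b <= high)]
    return len(flagged) / len(bodies)

# Simulated population: most reflectances cluster around an assumed mean,
# but real bodies vary (sweat, stomas, pacemakers, body shapes, ...).
population = [random.gauss(0.75, 0.08) for _ in range(10_000)]

for band in [(0.55, 0.95), (0.60, 0.90), (0.65, 0.85)]:
    print(band, round(alarm_rate(band, population), 3))
# A narrower notion of 'normality' renders more bodies abnormal, i.e.
# more false alarms exposing the very people the design meant to spare.
```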

Even though the scanner setup was intended to be privacy-respecting and to leave people’s abnormalities undisclosed, much like people conceal such abnormalities in other spheres of life, it actually makes those abnormalities more visible. True enough, it does not do so by putting a nude picture on a screen. Instead, it does so by raising a false alarm, drawing attention to a person, and effectively forcing that person to expose and explain their abnormality. In most cases, this abnormality has nothing to do with the bomb-belt terrorists who initially served as the justification for installing the scanner. Rather, it reflects what was assumed to be a normal body and a normal reflection pattern in the course of developing the device.

Multiple interviewees reported that people carrying medical devices such as stomas and pacemakers trigger alarms. With stoma patients, embarrassing situations have been reported: alarms were raised and explanation was demanded on the spot. Even after moving to a secluded inspection room, some difficulties remained. In particular, running water is not always available in such rooms, even though it is needed in case the medical devices become dislocated. To prevent discomfort, the Dutch stoma patients’ association has agreed with security officials that stoma patients may identify themselves to security personnel beforehand, upon which they are treated in a more prudent way. Even though the ‘problem’ of stoma patients has thus been settled in a way accepted by both the patients’ association and the security operators, it remains a peculiar translation of privacy: privacy for stoma patients ironically consists of announcing ‘I am a stoma patient’. All in all, part of the privacy challenge is not really solved, but ‘displaced’ onto a particular burden put on stoma patients.

This problem of bodies being classified as abnormal is more endemic than affecting only the group of stoma patients: one frequent flyer reported that he consistently triggers an alarm for which no reason could be identified on the spot – other than, apparently, some unspecified abnormality of his body. (Some causes, such as sweat, are known in principle, but this is not to say that such explanations are practically available when an alarm is triggered, nor that they would suffice, against the complex background of airport security, to dismiss the alarm.) What appears at first as a mere unfortunate technical difficulty in assessing particular bodies is in fact the reflection of a strong normalization of the body inscribed in airport security technologies.

The negotiations that go into the configuration of the security scanner and its surrounding practice can best be understood as a chain of translations.20 In each step of implementation, a negotiation takes place in which stakes and interests are balanced, and goals are redefined accordingly. These translations entail that in the seemingly simple technological implementation of ideas such as privacy and security, considerable redefinitions of those concepts eventually become visible: rearrangements and redefinitions of political interests, technical options, and airport operations were needed for this particular scanner to become feasible.
