It sufficed for a time to define privacy as “the right to be let alone” [Warren and Brandeis, 1890]. The definition endured for as long as it seemed that any individual might maintain some detachment from society’s gaze, an isolation constructed and construed from spatial and physical concepts by which one’s aloneness might be adjudged. My home. My room. My books. My letters. My car. My body and my personal space.
Then new media added new dimensions for information flow, and the ‘space’ was no longer so readily perceivable. This systemic change has catalysed deep and wide interest in defining privacy, if only so that we might articulate how it is altered by new technologies and applications, how it might or should be degraded, protected, or enhanced, and how we might characterise and substantiate any change as being for better or worse.
Daniel Solove resists any attempt to pin a definition down succinctly, preferring to discuss a family of individually different but related concepts that together make up our conception of privacy, collated and viewed ‘bottom-up’, deliberately adaptable to the different attitudes found in different cultures, and focused on privacy problems rather than on the concept of privacy itself. Despite the individual right enshrined in the European Convention on Human Rights (1950), he concludes that privacy must be framed societally rather than as an individual right, specifically in terms of a society’s treatment of information collection, information processing, information dissemination, and invasion.
Helen Nissenbaum also recognises the complexity of the concept of privacy and asserts that there is no need to construct a theory encompassing all the contexts in which privacy matters. Rather, she introduces the thesis of contextual integrity, that is:
that in any given situation, a complaint that privacy has been violated is sound in the event that one or the other types of the informational norms has been transgressed.
Such norms vary from one context to another and from one society to another; privacy is therefore a social construct rather than a fundamental right. Nissenbaum’s thesis is pragmatic; compare Judith Jarvis Thomson’s reductionist argument that the right to privacy is derivative of other rights [Thomson, 1975]:
If a man has a right that we shall not do such and such to him, then he has a right that we shall not do it to him in order to get personal information from him. And his right that we shall not do it to him in order to get personal information from him is included in both his right that we shall not do it to him, and (if doing it to him for this reason is violating his right to privacy) his right to privacy.
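To make the idea of context-relative informational norms concrete, the following sketch offers one possible operational reading. It is illustrative only, not a formalism from the literature: it collapses Nissenbaum’s two types of norm (appropriateness and distribution) into a single parameter match, and every name in it – Flow, NORMS, violates_contextual_integrity – is invented for the purpose.

```python
# A minimal, illustrative encoding of contextual integrity. All names
# here are hypothetical; this sketches the idea, not Nissenbaum's own
# formalism, and collapses her two norm types into one parameter match.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    context: str     # e.g. "healthcare", "friendship"
    sender: str      # role of the party transmitting the information
    recipient: str   # role of the party receiving it
    subject: str     # role of the person the information is about
    info_type: str   # e.g. "diagnosis", "location"
    principle: str   # transmission principle, e.g. "in confidence"

# Entrenched informational norms, keyed by context. A flow is sanctioned
# only if some norm of its context matches it in every parameter.
NORMS = {
    "healthcare": [
        Flow("healthcare", "patient", "physician", "patient",
             "diagnosis", "in confidence"),
    ],
}

def violates_contextual_integrity(flow: Flow) -> bool:
    """A complaint is sound when no entrenched norm sanctions the flow."""
    return flow not in NORMS.get(flow.context, [])

# A physician passing a diagnosis to a marketer transgresses the norms of
# the healthcare context, whatever consent governed the original capture.
leak = Flow("healthcare", "physician", "marketer", "patient",
            "diagnosis", "for profit")
assert violates_contextual_integrity(leak)
```

On this reading the same datum may flow unobjectionably in one context and objectionably in another; the violation lies in the mismatch between the flow and the context’s norms, not in the datum itself.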
Clearly, privacy is a tricky thing to pin down. By comparison, the profusion of so-called privacy settings in software applications and web services seems odd.
Such facilities cannot be mapped in any meaningful way to the concepts reviewed briefly above; rather, they are broad-brush and typically binary settings that permit or disallow various kinds of monitoring of, and action upon, the data created in the very use of the application or service. There is broad recognition – amongst law-makers, policy-makers and the information technology industry – that this gap must be closed, although some parties who consider themselves beneficiaries of the status quo disagree and continue to pull in the other direction, not least Internet Service Providers in the United States [Sam Gustin, “Senate Republicans Vote to Allow ISPs to Sell Your Private Data”, Vice, 2017].
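The shape such settings typically take underlines the mismatch. In the sketch below – whose field names are invented for illustration – a handful of global booleans must stand in for every context, sender, recipient, and transmission principle at once, so a diagnosis shared with a physician and the same diagnosis sold to a marketer are indistinguishable at this level of description.

```python
# A caricature of a typical "privacy settings" panel: global booleans,
# blind to context, roles, and transmission principles. Field names are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    share_with_third_parties: bool = True   # one switch for all contexts
    personalised_ads: bool = True
    location_history: bool = True

def may_disclose(settings: PrivacySettings) -> bool:
    # There is no flow here to inspect: permission is granted or withheld
    # wholesale, regardless of who receives what, about whom, and why.
    return settings.share_with_third_parties
```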
The need for new approaches that shift the locus of agency and control back towards the consumer/citizen has been described as a grand challenge for contemporary computing in general, and for human-computer interaction (HCI) in particular [Crabtree and Mortier, “Personal Data, Privacy and the Internet of Things: The Shifting Locus of Agency and Control”, 2016].