In my experience, privacy is very much a privilege. Privacy is there until it is not – whether by chance, by choice, or by coincidence. Whether the gaze that sees us is that of the media (social or otherwise), the state, or even just circumstance, privacy can be abruptly withdrawn.
For me this realisation came after I was physically assaulted and the details of the event were publicly reported in the newspaper. While my name as the victim was not included, and there were some mistakes in what was reported, the reporting was sufficient that anyone who knew me at my workplace could make the connection. Given that the event was also clearly tied to my sexuality, the reporting meant I was outed to a much wider audience than I might have preferred. At the time, it felt like a further violation, on top of what already felt like a fairly significant physical and existential transgression. Why did anyone need to know about this? I, rather naively, thought that my personal trauma did not really constitute ‘news’.
With hindsight, I began to suspect that this sense of violation of my privacy was more accurately to do with a temporary and unasked-for withdrawal of a privilege that I had not previously had to question. Because I had never had to question it, I had presumed it was a right. Anyone who has dealt significantly with the medical system, the legal system, the welfare system, or the police, or who has been caught up in events that have drawn the attention of the media, will be aware that privacy is, to some extent, fairly flimsy, or at least highly contextual. It is very far from being a universal right with clearly established boundaries or settled notions of what counts as an invasion of privacy and what does not.

And as the information age evolves, I think there are some fundamental questions about privacy, and how (or even whether) it can be protected or maintained in any meaningful sense. In particular, as technology evolves, we should be asking: is privacy really viable? And even if it is, or can somehow be made so, what are the costs of keeping it so?
In line with my argument that privacy has been a privilege rather than a right, I suggest that efforts to maintain privacy are as likely to be counter-productive as they are to be effective. I posit that endeavours to pursue privacy are likely to result in increasingly elaborate laws, systems and technologies that run counter to a fundamental shift enabled by new technologies, and are thus destined to either fail or be mismatched to the world they operate in, and have perverse or damaging effects. If privacy is a privilege then it will inherently provide benefit more to those who already have power, rather than potentially servicing a collective benefit. If privacy cannot be maintained for all, then it is questionable whether it can or should be defended for anyone.
Privacy and emerging technology – an uncomfortable pairing
In an age of social media platforms, with their accompanying mentality that treats personal data as valuable political or commercial intelligence, and of ubiquitous devices that produce constant streams of data and data exhaust, the notion of privacy has already been challenged. There has also been some pushback. Some states and enterprises have begun to adjust practices and requirements as people and systems have learned more about what is possible and what might be desirable, in an effort to find some balance between competing desires and needs.
But as new technologies emerge, it is worth asking to what degree privacy is likely to remain feasible. While there is always hype, and sometimes initial overestimation of what is achievable with technology, there are also signals that technological developments will, at least in theory, reduce the scope for what we have traditionally known as privacy.
For instance, the physical self is perhaps our least private self – it is, after all, always visible and can reveal much to others: our age, elements of our past, some diseases or conditions, or sometimes elements of our identity. However, in large populations the physical self has still afforded some degree of privacy, through relative anonymity and, with modern healthcare, a degree of discretion about our respective health and experiences. In a world of over 7 billion people, we might naturally assume that we have never been more anonymous. Yet technological advances mean that we might soon be better known than ever before.
In China, police identified one person out of a crowd of 60,000 through facial recognition. Singapore, as part of its Smart Nation initiative, is looking to connect its 110,000 or so lampposts into wireless sensor networks, and will test cameras with facial recognition capabilities as part of that network. Even where it is not run by state authorities, such technology will likely become increasingly ubiquitous as the financial cost of introducing it continues to fall. And there may be additional capabilities to come, offering even greater reach, including in low-light, foggy, or dark conditions – or perhaps even the ability to identify people without seeing their faces. Physical anonymity may thus be harder to come by, increasing the potential for actors to know who has done what, when, and with whom.
Technology is also likely to blur the lines between the outward physical self and other selves – for instance, our health. While some conditions, ailments or diseases have always been apparent on sight, and thus part of our outward appearance, this has not generally been regarded as an invasion of privacy (e.g. seeing that someone is pregnant), or has been seen as irrelevant to those not affected (e.g. someone having rosacea). However, new technologies potentially offer detailed insights into the health of others, regardless of whether they consent or are comfortable sharing. For instance, an algorithm might be capable of identifying depression through photos, or even identifying someone at risk of suicide. Scans of fingernails might reveal underlying diseases or conditions. Diagnostic mirrors may provide more than our reflection, and be able to tell us (and others) important details about our health. Again, where these technologies are sufficiently successful, they are likely to become marginal-cost activities, and thus ubiquitous. Such developments blur the line between what we have been comfortable with (or at least had to accept) in terms of others' knowledge of our physical selves, and promise to expand what sits in the public domain, whether we want it there or not.
Beyond even these potentially intimate details, some of which can already tell us a lot about a person, their experience and their (inner) life, technology may enable even greater insight into people's identities. For instance, while more speculative, and far from proven, there seems to be a non-trivial possibility of identifying whether someone is gay through photos (while the research is controversial, the idea of algorithmically inferring personal characteristics from physical characteristics should not be dismissed). For a range of aspects of ourselves that ultimately have biological bases or determinants, it is surely conceivable that algorithms could have a better-than-random degree of success in identifying some associated physical manifestation. Artificial intelligence may be able to reveal much about ourselves, even before we are consciously aware of it.
Of course, any one of these (and many more) technologies may fail in its application. The relevant question, however, is: can we realistically expect all of them to fail? If only some are effective (and the path of previous technology should caution against excessive scepticism), that may still be enough to ensure that technology reveals much more about us, whether we want it to or not. If privacy is to be maintained in some form, then, it will very likely take significant intervention, probably at the level of the state.
Ubiquitous technology makes legislative responses difficult
The idea of responding to all of these potential issues through law is problematic. If the development of these or similar technologies follows that of the recent past, we can expect exponential gains, so that they become ubiquitous and low (or close to marginal) cost. This will make a legislative or regulatory response highly challenging.
As a general principle, technology is much easier to regulate when it is in the hands of a few actors. When world-changing capabilities were only available to state or state-associated actors, international institutions and agreements had greater efficacy in managing the issues – e.g. the regulation of weaponry or of nuclear materials.
When technology reaches a much wider range of actors – say through industry – it understandably becomes harder to regulate. With a greater range of actors comes a greater range of motivations, levels of understanding and sophistication, and aptitudes. Effective regulation in this context requires greater education, standards, enforcement and reporting.
These challenges escalate again when it comes to population-wide capabilities. Government action here will often rely on a mix of price signals (taxation), social norms and signalling, collective effort, or labour-intensive policing. Sometimes outright prohibition will work, but evidence (e.g. the drug trade) suggests that it is not guaranteed. In many areas this has not mattered, as individuals have rarely had the opportunity, capability or desire to enact global-scale changes against the status quo, with the notable exceptions of terrorism and some (mostly financial) crimes.
In short, I would suggest that the more ubiquitous a technology becomes, the more difficult it will be to regulate – with the caveat that this does not necessarily hold if the state uses technology to the fullest. If a government, for instance, were to enact a societal-level system for shaping behaviour, or were to engage in widespread monitoring and pre-emptive enforcement, then regulation might be more effective. But this would still likely come with significant costs, both social and financial.
With regards to privacy, governments are likely then to be faced with some combination of the following three basic options:
- Regulating in more and more complex ways in response to technologies that will dramatically expand the range of the possible. Previous experience suggests that this option is unlikely to keep pace with change, as technology generally outstrips the ability of parliamentary and other systems to understand it and regulate for it effectively.
- Investing in the technologies themselves to enforce desired behaviours (e.g. ubiquitous monitoring), or to create a new paradigm/system that preferences and rewards the right behaviours. In the case of privacy, this is essentially anathema to the desired state of affairs (the defence of a space that is private and thus not surveilled).
- Seeking to control the technology at source, involving a high degree of corporatism and intervention. This option is likely to be relatively ineffective in an interconnected world of many states with differing interests.
Let us take each of these cases in turn, to explore those dynamics and the associated assumptions.
Regulating in more and more complex ways
The challenge with regulating for privacy in the face of numerous technologies that might change or weaken it is that those technologies are generally likely to increase options, and therefore the range of circumstances that must be catered for. If outright prohibition is likely to be ineffective (as with ubiquitous technology, where machine learning and sensors will be ever-present and available), then legislation has to account for a dizzying array of possibilities.
To take an example of one legislative response to our new digital reality, let us consider the “right to be forgotten”, which has sometimes required search engines to remove particular links from search results. While this is not strictly about privacy, as it is dealing with information that has already been disclosed, it is still indicative of the potential challenges that will emerge.
As the world becomes more and more documented through personal devices (e.g. smartphones), and the technology promises to make it even easier (through the Internet of Things), the tensions between such laws, the ability to enforce them, and other social goals becomes more stark.
For instance, what if an individual decides to record more of their life (or potentially even all of it), and the ability to do so comes at close to zero financial cost? What if great numbers of people choose to do so, creating continuous records across different contexts? And what if human augmentation, in the form of greater integration with digital systems, means that digital memories are effectively part of the self? How will the law navigate between what one person (or many) decides to share and the right of individuals to have things forgotten?
Or to take a more tangible example, the General Data Protection Regulation (GDPR) introduced in Europe is feared by some to be in tension with the ability to identify online scammers. While such tensions can, and often will, be resolved, these adjustments take time and rarely reduce the legislative complexity involved. There will always be a trade-off between one set of values (e.g. privacy) and another (e.g. convenience, social goals, ensuring accountability and responsibility of actors with malicious intent).
Governments can certainly attempt to regulate for privacy in the digital world – but I would suggest that their ability to do so effectively will be hampered in an area of significant and continuing change, where the potential issues will continue to expand rather than contract. Where privacy could once be seen as a relatively straightforward issue, new technology will raise many new questions about what it actually looks like, how it can be enforced, and the costs of doing so.
Government using technology to enforce its aims
In this option, governments can use the technology at hand to ensure greater enforcement of laws, making them more effective. This might be about better tracking of behaviour (e.g. real time reporting of industrial emissions), of setting parameters (e.g. real-time adjusting price signals that ensure certain activities stay within pre-determined/agreed limits, such as traffic congestion), or monitoring (e.g. monitoring of terrorism-related pre-activity). Governments (and industry) may well be able to use machine learning, the Internet of Things, and drones for a whole range of social purposes, including environmental, health and social order concerns.
The one area where a government's use of such technology is likely to be counter-productive, however, is privacy: the tools themselves are fundamentally in tension with the desired aim.
(Non-state actors of course may introduce other technologies to attempt to slow or reverse the whittling away of what is private. I suspect that these will have difficulties however as they will contrast individual actions against larger societal trends and patterns, and thus are likely to be at a disadvantage.)
Controlling technology at its source
In the third option, governments may seek to control the allowable technology and shape its form. After all, governments have had significant success working closely with industry to shape and control technology systems in certain domains – e.g. defence. A corporatist approach can be very effective in allowing strong state involvement in technology, ensuring a strong concern with national aims and objectives.
However, in an interconnected, multi-polar world, where much of the digital infrastructure is not only shared but interconnected through supply chains and systems, this is unlikely to be very effective. In such a system, it takes only one state (or even sub-state) actor to go against the grain and apply different technical options. Take, for example, governments working with IT firms to ensure a backdoor to encryption. This is problematic in two ways. One – other actors may use the built-in insecurity to their advantage. Two – other actors may introduce their own new systems and make them available, circumventing state control in a particular country.
A challenge for governments, a challenge for others
Privacy, then, even if feasible, may not be an easy thing for governments to regulate for or enforce. Yet without government intervention, few individuals will be able to do much to protect themselves. Ubiquitous platforms, devices and systems will likely sit at the core of our economic, social and political systems, making individual opting out difficult or even damaging – effectively requiring the abdication of many social and economic bonds. Such a step will often be as marginalising as forfeiting any notion of privacy.
A potential new tangle to the question of privacy
AI will inevitably be able to create video and audio that is indistinguishable from reality. Existing research suggests that this will not be especially difficult and may not be far away. It will be easy for footage to be created of anyone saying or doing nearly anything. Our ability to rely on what we see or hear as being somewhat indicative of reality is likely to be challenged. Even where it is known or quickly proven that something is fake (yet looks and sounds real), our brains may still be influenced by what we see. Recent political debates and tactics suggest that such tools will be used, and may have an effect even if they are understood by many to be fakes.
What does privacy mean in a world where your image and sound might be used by other people to say and do other things? If privacy is about our right to control or manage the release of information about ourselves, what is it when elements of our self, even if not attached to us, can be manipulated by others? Even more than the other technologies, this may be fundamentally challenging to any sense of control over how our self is perceived by others (which is what I would argue that privacy effectively is). It may also highlight that our only protection against such a misrepresentation will be by being more open, by being more transparent, and sharing even more about ourselves. In some ways, some of what we currently value about privacy, may paradoxically only be maintained by giving away more of our privacy.
Privacy as privilege
Has privacy ever really been a right? Or at least an inviolable right that is not dependent on context?
While I am far from an expert in this field, I would suggest there are a number of examples where privacy has always been a privilege. For instance, a quick look at the history of law enforcement in many countries will demonstrate that enforcement in many discretionary areas (e.g. drug enforcement) generally affects minorities and the under-privileged more than those from more privileged backgrounds. Policing can often affect one group more, even where behaviours do not significantly differ between groups. To take another example, the welfare system provides a means for extensive information gathering in a way that taxation systems rarely match. One group has to share much more about themselves than the other.
The right to privacy has also often seemed to benefit institutions, and those associated with them, far more than the victims of those institutions (e.g. the all too many child abuse scandals). The #metoo movement has been an example of the questioning of this privilege, yet it has also demonstrated that privacy is an unequally held privilege – one that can often still be used to shield someone from scrutiny rather than to protect those who need it.
None of this is to say that privacy has no benefit. Nor is it to say that contemplating a privacy-free world is necessarily comfortable. However, our world, and the technology it uses, appears to be heading in one direction: greater openness of information, revealing more about ourselves even where we do not consent, participate or contribute. If technology is going to make privacy more challenging, and governments are unlikely to be able to protect it without violating it, then it is hard to call privacy a right. Rather, it is a privilege, and thus something available in varying degrees to different people.
Privacy as an indefensible privilege?
If privacy is not available to everyone, then there is a case to be made that it should not be available to anyone. If privacy is defended as a right – likely unsuccessfully – then those best placed to take advantage of it as a privilege will be those with power. Those who are powerless are unlikely to be able to defend or exercise their ‘right’ to privacy, whereas those in power will inevitably use their ‘right’ to privacy to entrench their (positional, political, financial) power. While the powerful have always faced some trade-offs around privacy, often paying a price in exposure, speculation, and outright gossip, they have also been better placed to protect other elements of their privacy, or have been compensated for the loss.
As technology becomes more sophisticated, resulting in the truncation of existing notions of privacy, fewer and fewer people will likely be able to meaningfully hold on to it. More and more information will be available about more and more people, providing great insight into the interests, motivations, beliefs, behaviours and activities of large segments of the population. Yet our ability to scrutinise the powerful might be weaker than ever before, because they will have every interest in holding onto their privacy, and will be able to bring more resources to bear on defending it.
Holding on to privacy, at least in its current form, is unlikely to slow or derail this ongoing trend to any sufficient degree. Rather, a defence of privacy is likely to bolster the efforts of those in power to protect themselves from scrutiny; anyone who has access to a privilege is unlikely to sacrifice it voluntarily. Holding onto privacy is therefore likely to be more damaging than either changing our notions of what privacy is and why it matters, or exploring how greater transparency might be freeing rather than restricting.