Snoopers’ Charter Consultation

The draft Communications Data Bill – the ‘Snoopers’ Charter’ – is currently up for consultation before a specially convened Joint Parliamentary Committee. The consultation period has been relatively short – it ends on 23rd August – and falls at a time when many people are away on holiday and many others have been enjoying (and being somewhat distracted by) the Olympic Games.

Even so, it’s very important – not just because what is being proposed is potentially highly damaging, but because it’s a field in which the government has been, in my opinion, very poorly advised and significantly misled. There is a great deal of expertise around – particularly on the internet – but in general, as in so many areas of policy, the government seems to be very unwilling to listen to the right people. I’ve blogged on the general area a number of times before – most directly on ‘Why does the government always get it wrong?’.

All this means that it would be great if people made submissions – for details see here.

Here is the main part of my submission, reformatted for this blog.

————————————————-

Submission to the Joint Committee on the draft Communications Data Bill

The draft Communications Data Bill raises significant issues – issues connected with human rights, with privacy, with security and with the nature of the society in which we wish to live. These issues are raised not by the detail of the bill but by its fundamental approach. Addressing them would, in my opinion, require such a significant re-drafting of the bill that the better approach would be to withdraw the bill in its entirety and rethink the way that security and surveillance on the Internet is addressed.

As noted, there are many issues brought up by the draft bill: this submission does not intend to deal with all of them. It focusses primarily on three key issues:

1) The nature of internet surveillance. In particular, that internet surveillance means much more than ‘communications’, partly because of the nature of the technology involved and partly because of the many different ways in which the internet is used. Internet surveillance means surveilling not just correspondence but social life, personal life, finances, health and much more. Gathering ‘basic’ data can make the most intimate, personal and private information available and vulnerable.

2) The vulnerability of both data and systems. It is a fallacy to assume that data or systems can ever be made truly ‘secure’. The evidence of the past few years suggests precisely the opposite: those who should be most able and trusted with the security of data have proved vulnerable. The approach of the draft Communications Data Bill – essentially a ‘gather all then look later’ approach – is one that not only fails to take proper account of that vulnerability, but actually sets up new and more significant vulnerabilities, effectively creating targets for hackers and others who might wish to take advantage of or misuse data.

3) The risks of ‘function creep’. The kind of systems and approach envisaged by the draft Bill makes function creep a real and significant risk. Data, once gathered, is a ‘resource’ that is almost inevitably tempting to use for purposes other than those for which its gathering was envisaged. These risks seem to be insufficiently considered both in the overall conception and in the detail of the Bill.

I am making this submission in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research in internet law and specialise in internet privacy from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy, and set out a possible rights-based approach to internet privacy. The draft Communications Data Bill therefore lies precisely within my academic field. I would be happy to provide more detailed evidence, either written or oral, if that would be of assistance to the committee.

1 The nature of internet surveillance

As set out in Part 1 of the draft bill, the approach adopted is that all communications data should be captured and made available to the police and other relevant public authorities. The regulatory regime set out in Part 2 concerns accessing the data, not gathering it: gathering is intended to be automatic and universal. Communications data is defined in Part 3 Clause 28 very broadly, via the categories of ‘traffic data’, ‘use data’ and ‘subscriber data’, each of which is defined in such a way as to attempt to ensure that all internet and other communications activity is covered, with the sole exception of the ‘content’ of a communication.

The all-encompassing nature of these definitions is necessary if the broad aims of the bill are to be supported: if the definitions did not cover any particular form of internet activity (whether existent or under development), the assumption would be that those whom the bill intends to ‘catch’ would simply use that form. That the ‘content’ of communications is not captured is important in relation to more conventional forms of communication such as telephone calls, letters and even emails, but it is of far less significance in relation to internet activity, as shall be set out below.

1.1 ‘Communications Data’ and the separation of ‘content’

As noted above, the definition of ‘communications data’ in the bill is deliberately broad. On the surface, it might appear that ‘communications data’ relates primarily to ‘correspondence’ – bringing in the ECHR Article 8 right to respect for privacy of correspondence – and indeed communications such as telephone calls, emails, text messages and tweets do fit into this category – but internet browsing data has a much broader impact. A person’s browsing can reveal far more intimate, important and personal information about them than might be immediately obvious. It shows which websites are visited, which links are followed, which files are downloaded – and also when, how long sites are perused, and so forth. This kind of data can reveal habits, preferences and tastes, and can uncover, to a reasonable probability, religious persuasion, sexual preferences, political leanings and so on, even without what might reasonably be called the ‘content’ of any communication being examined – though what constitutes ‘content’ is itself contentious.

Consider a Google search, for example: if RIPA’s requirements are followed, the search term would be considered ‘content’ – but would links followed as a result of a search count as content or as communications data? Who is the ‘recipient’ of a clicked link? If the data is to be of any use, it needs to reveal something of the nature of the site visited – and that makes it possible to ‘reverse engineer’ back to something close enough to the search term used to recover the ‘content’, as the short sketch below illustrates. The content of a visited site may be determined just by following a link – without any further ‘invasion’ of privacy. When slightly more complex forms of communication on the internet are considered – e.g. messaging or chatting on social networking sites – the separation between content and communications data becomes even less clear. In practice, as systems have developed, the separation is for many intents and purposes a false one. The issue of whether or not ‘content’ data is gathered is of far less significance: focussing on it is an old-fashioned argument, based on a world of pen and paper that is to a great extent one of the past.
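To make the point concrete, here is a minimal sketch in Python. The URL is invented, but it follows the familiar pattern of search-engine result pages, where the query string itself carries the search term: possession of nothing more than the ‘traffic data’ – the URL visited – is enough to recover what would, on any sensible reading, be ‘content’.

```python
from urllib.parse import urlparse, parse_qs

# A single hypothetical 'communications data' record: the URL of a page visited.
# No 'content' has been intercepted, yet the URL itself discloses the search term.
visited_url = "https://www.google.co.uk/search?q=symptoms+of+depression&hl=en"

parsed = urlparse(visited_url)
query_params = parse_qs(parsed.query)

search_term = query_params.get("q", [""])[0]
print("Site visited:", parsed.netloc)
print("Search term recoverable from 'traffic data':", search_term)
```

The same logic applies wherever the address of a page encodes what the user was looking for – which, on the modern web, is very often.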

What is more, analytical methods through which more personal and private data can be derived from browsing habits have already been developed, and are continuing to be refined and extended, most directly by those involved in the behavioural advertising industry. Significant amounts of money and effort are being spent in this direction by those in the internet industry: it is a key part of the business models of Google, Facebook and others. It is already advanced but we can expect the profiling and predictive capabilities to develop further.
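As a crude illustration of the kind of inference involved – the browsing log and the domain-to-category mapping below are entirely hypothetical, and real behavioural-advertising systems are vastly more sophisticated – even a bare list of visited domains, with no ‘content’ at all, supports revealing inferences:

```python
from collections import Counter

# Hypothetical mapping from visited domains to broad categories. Real profiling
# systems use far richer data and models; this is only a toy illustration.
DOMAIN_CATEGORIES = {
    "diabetes-support.example.org": "health",
    "prayer-times.example.com": "religion",
    "unionvoice.example.net": "politics",
}

# A bare browsing log: domains only, no page 'content' at all.
browsing_log = [
    "diabetes-support.example.org",
    "prayer-times.example.com",
    "prayer-times.example.com",
    "unionvoice.example.net",
]

profile = Counter(DOMAIN_CATEGORIES.get(domain, "other") for domain in browsing_log)
print(profile.most_common())  # e.g. [('religion', 2), ('health', 1), ('politics', 1)]
```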

What this means is that by gathering, automatically and for all people, ‘communications data’, we would be gathering the most personal and intimate information about everyone. When considering this Bill, that must be clearly understood. This is not about gathering a small amount of technical data that might help in combating terrorism or other crime – it is about universal surveillance and profiling.

1.2 The broad impact of internet surveillance

The kind of profiling discussed above has a very broad effect, one with a huge impact on much more than just an individual’s correspondence. It is possible to determine (to a reasonable probability) individuals’ religions and philosophies, the languages they use and even their ethnic origins, and then to use that information to monitor them both online and offline. When communications (and in particular the internet) are used to organise meetings, to communicate as groups, and to assemble both offline and online, this becomes significant. Meetings can be monitored or even prevented from occurring, groups can be targeted, and so forth. Oppressive regimes throughout the world have recognised and indeed used this ability – recently, for example, the former regime in Tunisia hacked into both Facebook and Twitter to attempt to monitor the activities of potential rebels.

It is of course this kind of profiling that can make internet monitoring potentially useful in counter-terrorism – but making it universal rather than targeted will impact directly on the rights of the innocent, rights that, according to the principles of human rights, deserve protection. In the terms set out in the European Convention on Human Rights, there is a potential impact on Article 8 (right to respect for private and family life, home and correspondence), Article 9 (freedom of thought, conscience and religion), Article 10 (freedom of expression) and Article 11 (freedom of assembly and association). Internet surveillance can enable discrimination (contrary to ECHR Article 14, the prohibition of discrimination) and even potentially automate it – a website could automatically reject visitors whose profiles do not match key factors, or change the services available or the prices charged based on those profiles.

2 The vulnerability of data

The essential approach taken by the bill is to gather all data, then to put ‘controls’ over access to that data. That approach is fundamentally flawed – and appears to be based upon false assumptions. Most importantly, it is a fallacy to assume that data can ever be truly securely held. There are many ways in which data can be vulnerable, both in theory and in practice. Technological weaknesses – vulnerability to ‘hackers’ and so on – may be the most ‘newsworthy’ at a time when hacker groups like ‘Anonymous’ have been attracting publicity, but they are far from the most significant. Human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to ‘monetise’ data) may be more significant – and the ways in which all these vulnerabilities can combine make the risk greater still.

In practice, those groups, companies and individuals that might most be expected to be able to look after personal data have been subject to significant data losses. The HMRC loss of child benefit data discs, the MOD losses of armed forces personnel and pension data, and the numerous and seemingly regular data losses in the NHS highlight problems within those parts of the public sector which hold the most sensitive personal data. Swiss banks’ losses of account data to hacks and data theft demonstrate that even those with the highest reputation and greatest need for secrecy – as well as the greatest financial resources – are vulnerable to human intervention. The high-profile hacks of Sony’s online gaming systems show that even those with access to the highest level of technological expertise can have their security breached. These are just a few examples, and whilst in each case different issues lay behind the breach, the underlying issue is the same: where data exists, it is vulnerable.

Designing and building systems to implement legislation like this Bill exacerbates the problem. The bill is not prescriptive as to the methods that would be used to gather and store the data, but whatever method is used would present a ‘target’ for potential hackers and others: where there are data stores, they can be hacked; where there are ‘black boxes’ to feed real-time data to the authorities, those black boxes can be compromised and the feeds intercepted. Concentrating data in this way increases vulnerability – and creating what are colloquially known as ‘back doors’ for trusted public authorities to use can also allow those who are not trusted – of whatever kind – to find a route of access.

Once others have access to data – or to data monitoring – the rights of those being monitored are even further compromised, particularly given the nature of the internet. Information, once released, can and does spread without control.

3 Function Creep

Perhaps even more important than the vulnerabilities discussed above is the risk of ‘function creep’ – that when a system is built for one purpose, that purpose will shift and grow, beyond the original intention of the designers and commissioners of the system. It is a familiar pattern, particularly in relation to legislation and technology intended to deal with serious crime, terrorism and so forth. CCTV cameras that are built to prevent crime are then used to deal with dog fouling or to check whether children live in the catchment area for a particular school. Legislation designed to counter terrorism has been used to deal with people such as anti-arms trade protestors – and even to stop train-spotters photographing trains.

In relation to the Communications Data Bill this is a very significant risk – if a universal surveillance infrastructure is put into place, the ways that it could be inappropriately used are vast and multi-faceted. What is built to deal with terrorism, child pornography and organised crime might creep towards less serious crimes, then anti-social behaviour, then the organisation of protests and so forth. Further to that, there are many commercial lobbies that might push for access to this surveillance data – those attempting to combat breaches of copyright, for example, would like to monitor for suspected examples of ‘piracy’. In each individual case, the use might seem reasonable – but the function of the original surveillance, the justification for its initial imposition, and the balance between benefits and risks, can be lost. An invasion of privacy deemed proportionate for the prevention of terrorism might well be wholly disproportionate for the prevention of copyright infringement, for example.

The risks associated with function creep in relation to the surveillance systems envisaged in the Bill have a number of different dimensions. There can be creep in terms of the types of data gathered: as noted above, the split between ‘communications data’ and ‘content’ is already contentious, and as time passes and usage develops it is likely to become more so, with the category of protected ‘content’ likely to shrink. There can be creep in terms of the uses to which the data can be put: from the prevention of terrorism downwards. There can be creep in terms of the authorities able to access and use the data: from those engaged in the prevention of the most serious crime to local authorities and others. All these different dimensions represent important risks: all have happened in the recent past, to legislation (e.g. RIPA) and to systems (e.g. the London Congestion Charge CCTV system).

Prevention of function creep through legislation is inherently difficult. Though it is important to be appropriately prescriptive and definitive in terms of the functions of the legislation (and any systems put in place to bring the legislation into action), function creep can and does occur through the development of different interpretations of legislation, amendments to legislation and so forth. The only real way to guard against function creep is not to build the systems in the first place: a key reason to reject this proposed legislation in its entirety rather than to look for ways to refine or restrict it.

4 Conclusions

The premise of the Communications Data Bill is fundamentally flawed. By its very design, innocent people’s data will be gathered (and hence become vulnerable) and their activities will be monitored. Universal data gathering or monitoring is almost certain to be disproportionate at best, highly counterproductive at worst.

This Bill is not just a modernisation of existing powers, nor a way for the police to ‘catch up’. It is something on a wholly different scale. We as citizens are being asked to place huge trust in the authorities not to misuse the kind of powers made possible by this Bill. Trust is of course important – but what characterises a liberal democracy is not trust of authorities but their accountability, the existence of checks and balances, and the limitation of their powers to interfere with individuals’ lives. This bill, as currently envisaged, does not provide that accountability and does not sufficiently limit those powers: precisely the reverse.

Even without considering the issues discussed above, there is a potentially even bigger flaw in the bill: it appears very unlikely to be effective. The people it most needs to catch are the least likely to be caught – those expert in the technology will be able to find ways around the surveillance, or ways to ‘piggy-back’ on other people’s connections, drawing more innocent people into the net. As David Davis MP put it, only the incompetent and the innocent will get caught.

The entire project needs a thorough rethink. Warrants (or similar processes) should be put in place before the gathering of the data or the monitoring of the activity, not before the accessing of data that has already been gathered, or the ‘viewing’ of a feed that is already in place. A more intelligent, targeted rather than universal approach should be developed. No evidence has been made public to support the suggestion that a universal approach like this would be effective – it should not be sufficient to just suggest that it is ‘needed’ without that evidence, nor to provide ‘private’ evidence that cannot at least qualitatively be revealed to the public.

That brings a bigger question into the spotlight, one that the Committee might think is the most important of all: what kind of a society do we want to build – one where everyone’s most intimate activities are monitored at all times just in case they might be doing something wrong? That, ultimately, is what the draft Communications Data Bill would build. The proposals run counter to some of the basic principles of a liberal, democratic society – a society where there should be a presumption of innocence rather than of suspicion, and where privacy is the norm rather than the exception. Is that what the Committee would really like to support?

Dr Paul Bernal

Lecturer in Information Technology, Intellectual Property and Media Law, UEA Law School

In praise of the ephemeral!

Like many people who spend a lot of time (perhaps far too much time) using Twitter, I was made distinctly uneasy by the recent revelation that Twitter is ‘partnering’ with the data-mining company DataSift to ‘unlock’ its tweet archive. The idea was presented as something essentially beneficial – unlocking an archive sounds like a ‘good’ thing, deriving benefits from what is ‘public’ information (Twitter’s terms and conditions say quite clearly that the default position for a tweet is that it is ‘public’).

Why, then, do I feel nervous about it? Privacy campaigners reacted badly to the idea. Privacy International said: “Twitter has turned a social network that was meant to promote real-time global conversation into a vast market-research enterprise with unwilling, unpaid participants,” while the Electronic Frontier Foundation described the idea as ‘creepy’.

To my mind, both are right. Yes, the information is public, but for me the nature of Twitter – the joy of Twitter – is that it is spontaneous, instinctive, current and instantaneous. When I tweet, I tweet in the moment – and almost all the best tweeters work mostly like that. Pre-prepared marketing and political tweets are generally as dull as dishwater – which is why such excellent hashtags as #tweetlikeanMP are so effective, showing up the lack of honesty, spontaneity and creativity in the tweeting of most of our politicians.

I may be unusual – after all, I don’t follow the likes of Lady Gaga or Justin Bieber as many millions do, and only follow a handful of MPs – but I don’t think I’m that unusual. I like the ephemeral nature of Twitter, the fact that something I tweet one day will be all but forgotten the next – indeed, something I tweet one hour will be mostly forgotten an hour later. Setting up a Twitter ‘archive’ puts that spontaneity at risk.

Anyone who works in the privacy field will be familiar with the idea of the Panopticon. Bentham’s concept was of a prison, set out in circular form, in which at any moment the occupant of any cell could be observed. The key point was that the possibility of being observed was intended to alter the behaviour of the prisoners: if they knew they might be seen at any time, they would control their own behaviour – they would be naturally constrained, and would not behave badly. The logic of the Panopticon lies behind many of the most privacy-invasive policies both online and in the ‘real’ world – ever-present CCTV cameras, constant monitoring of web traffic and so forth. It makes sense, however, only when you want to restrict people’s behaviour. It curtails freedom, stifles creativity, crushes spontaneity. That might be necessary to control potentially violent and dangerous prisoners – but in a ‘free society’ it is disastrous.

For real freedom of action, for real freedom of expression, you need the reverse of the Panopticon. You need people to feel free to speak, to write, to express themselves without the feeling that anything and everything they might say or do could be written down, quoted back at them (often out of context), manipulated and misused. You need to know that making mistakes won’t be fatal – that you can correct yourself and clarify your comments without being treated as some kind of hypocrite.

Right now, on the internet, Twitter is one of the few places where that kind of freedom feels possible. Digital memory is all too eternal – Viktor Mayer-Schönberger’s excellent ‘Delete’ talks eloquently of the benefits of forgetting in the digital era. Mayer-Schönberger’s concept of data with expiry dates may be difficult to bring into reality – but Twitter has, to date, been one of the places where in a practical sense it almost happens. That is something worth celebrating, something worth preserving. The Twitter/DataSift deal, and others like it, put it at risk. For me, it puts the whole benefit of Twitter at risk.

If I want something to be archived, to be used as a reference, I’ll put it in a blog like this one – there are plenty of places where the eternal nature of internet data storage is possible. There are very few where the benefits of the opposite – the joys of the ephemeral – shine through. Twitter is one. I hope Twitter itself realises this – and changes direction.

Logout should mean logout! UPDATED

Hidden (or at least untrumpeted) amongst all the new features in the latest Facebook upgrade is one deeply concerning issue: when you ‘logout’ of Facebook, Facebook will continue to track you. This fact has made it onto a few blogs (for example Nik Cubrilovic’s blog here) and is doing the rounds on Twitter – but for those of us concerned with privacy, there should be a lot more noise about it, because it has huge implications. It flies in the face of what users expect and understand – and that should really matter.

The reality is that very, very few users ever check their terms and conditions – almost all of us scroll straight through the pages and pages of legalese (even those of us who work in the law!) and then click ‘OK’ at the bottom. Why? Because we want to use the service, and because we know we don’t have any real choice about what’s in those terms and conditions – and because we have a reasonable expectation that what is in those terms and conditions is at least in most ways ‘reasonable’, and will conform to what we expect and understand terms and conditions to be.

So the question of what we would expect to happen when we ‘logout’ of Facebook is one that matters. Most people, I suspect, would expect ‘logout’ to cut their connection with Facebook until they log back in. It should be like putting the phone down when you’ve finished a conversation – you don’t expect the person on the other end of the line to be able to hear what you say after you’ve hung up, let alone to keep a microphone open in your living room and record every conversation you have with anyone in that room. In fact, if you thought something like that was happening, you’d be outraged, and rightly so – as well as having all kinds of opportunities to take legal action against the people who were, in effect, bugging you.

Of course what Facebook is doing isn’t quite the same – but in some ways it could be considered even more invasive of your privacy, because the opportunities to analyse and exploit the data gathered through their tracking are in some ways greater than with a simple phone tap. The data they can gather can be aggregated and analysed – its digital nature, together with the vast volume of other such data that they gather, gives them unprecedented scope for such aggregation and analysis.
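To illustrate why that aggregation matters, here is a toy sketch in Python. The cookie IDs, sites and request records are all invented, and this is emphatically not a description of Facebook’s actual systems – it simply shows how requests carrying the same persistent identifier, made from pages on entirely unrelated sites, can be stitched together into a single browsing trail:

```python
from collections import defaultdict

# Invented request records: each represents a hit on a widget or beacon embedded
# in an unrelated third-party page, carrying the same persistent cookie ID.
requests = [
    {"cookie_id": "abc123", "page": "https://news.example.com/article-1"},
    {"cookie_id": "abc123", "page": "https://healthforum.example.org/thread/42"},
    {"cookie_id": "xyz789", "page": "https://shop.example.net/basket"},
    {"cookie_id": "abc123", "page": "https://jobsearch.example.com/vacancies"},
]

# Grouping by cookie ID stitches scattered requests into per-person browsing trails.
trails = defaultdict(list)
for request in requests:
    trails[request["cookie_id"]].append(request["page"])

for cookie_id, pages in trails.items():
    print(cookie_id, "->", pages)
```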

This is hardly the first time that Facebook has tried to move the goalposts on privacy, and to set new norms. This attempted resetting of norms, so that tracking is normal, whether you’re signed in or not, and that it should (and will) happen all the time, is one that should be resisted very strongly. The opposite should be the case – we should be able to assume that tracking DOESN’T take place unless we explicitly allow it, and are reminded that it is happening. We should have a right to know when we’re being tracked, and a right to turn that tracking off, and people like Facebook should be required to offer their services without that tracking, at the very least when we’re not signed in to their service.

Like it or not, the use of Facebook has become effectively the norm. I have a new batch of undergraduate students arriving today, and if the experience of the last few years is anything to go by, it will be a rare student indeed who doesn’t have a Facebook account. That in itself should place demands on Facebook, requirements that they must meet. That should mean that they should, in general, understand and meet the expectations of their users – and, in this case, that should mean that logout should mean logout. Tracking should be turned off the moment we log out of Facebook. And we, the users, should demand that it happens.

UPDATE (with gratitude to Emil Protalinski at ZDNet for his blog): Facebook are denying that this is what is happening – they say “…the logged out cookies are used for safety and protection including: identifying spammers and phishers, detecting when somebody unauthorized is trying to access your account, helping you get back into your account if you get hacked, disabling registration for under-age users who try to re-register with a different birthdate, powering account security features such as 2nd factor login approvals and notification, and identifying shared computers to discourage the use of ‘keep me logged in’.”

We’ll have to see what comes of this – and whether the privacy implications are as significant as they seem. However, regardless of the technical details, the underlying point needs stressing: when we log out, we need to know that we’re no longer being monitored or tracked, even for some of Facebook’s stated purposes. Stated purposes don’t always match real uses… and function creep is hardly unknown in this context! For me, this underlines the need for clarity of rights and practices in this area. Facebook need to be told in no uncertain terms that tracking is not acceptable in these circumstances.