Privacy-friendly judges?

Yesterday’s ruling by the Supreme Court of the United States, requiring the police to get a warrant before accessing a suspect’s mobile phone data, was remarkable in many ways. It demonstrated two things in particular that fit within a recent pattern around the world, one which may have quite a lot to do with the revelations of Edward Snowden. The first is that the judiciary has the willingness and strength to support privacy rights in the face of powerful forces; the second is an increasing understanding that privacy, in these technologically dominated days, is not the simple thing that it was in the past.

The stand-out phrase in the ruling is remarkable in its clarity:

13-132 Riley v. California (06/25/2014)

“Modern cell phones are not just another technological convenience. With all they contain and all they may reveal, they hold for many Americans “the privacies of life,” Boyd, supra, at 630. The fact that technology now allows an individual to carry such information in his hand does not make the information any less worthy of the protection for which the Founders fought. Our answer to the question of what police must do before searching a cell phone seized incident to an arrest is accordingly simple— get a warrant.”

Privacy advocates around the world have been justifiably excited by this – not only is the judgment a clearly privacy-friendly one, but it effectively validates some of the critical ideas that many of us have been trying to get the authorities to understand for a long time. Most importantly, that the way that we communicate these days, the way that we use the internet and other forms of communication, plays a far more important part in our lives than it did in the past. The emphasis on the phrase ‘the privacies of life’ is a particularly good one. This isn’t just about communication – it’s about the whole of our lives.

The argument about cell phones can be extended to all of our communications on the internet – and the implications are significant. As I’ve argued before, the debate needs to be reframed to take into account the new ways that we use communications – privacy these days isn’t as easily dismissed as it was before. It’s not about tapping a few phone calls or noting the addresses on a few letters that you send – communications, and the internet in particular, pervade every aspect of our lives. The authorities in the UK still don’t seem to get this – but the Supreme Court of the US does seem to be getting there, and it’s not alone. The last few months have seen a series of quite remarkable cases, each of which demonstrates that judges are starting to get a real grip on the issues, and are willing to take on the powerful groups with a vested interest in downplaying the importance of privacy:

  • The ECJ ruling invalidating the Data Retention Directive on 8th April 2014
  • The ECJ Google Spain ruling on the ‘Right to be Forgotten’ on 13th May 2014
  • The Irish High Court referring Max Schrems’ case against Facebook to the ECJ, on 19th June 2014

These three cases all show similar patterns. They all involve individuals taking on very powerful groups – in the data retention case, taking on pretty much all the security services in Europe; in the other two, the internet giants Google and Facebook respectively. In all three cases – as in the Supreme Court of the US yesterday – the rulings are fundamentally about the role that privacy plays, and the priority that privacy is given. The most controversial statement in the Google Spain case makes it explicit:

“As the data subject may, in the light of his fundamental rights under Articles 7 and 8 of the Charter, request that the information in question no longer be made available to the general public on account of its inclusion in such a list of results, those rights override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information upon a search relating to the data subject’s name” (emphasis added)

That has been, of course, highly controversial in relation to freedom of information and freedom of expression, but the first part, that privacy overrides the economic interest of the operator of the search engine, is far less so – and the fact that it is far less controversial does at least show that there is a movement in the privacy-friendly direction.

The invalidation of the Data Retention Directive may be even more significant – and again, it is based on the idea that privacy rights are more important than security advocates in particular have been trying to suggest. The authorities in the UK are still trying to avoid implementing this invalidation – they’re effectively trying to pretend that the ruling does not apply – but the ruling itself is direct and unequivocal.

As for the decision in the Irish High Court to refer the ‘Europe vs Facebook’ case to the ECJ, the significance of that has yet to be seen, but Facebook may very well be deeply concerned – because, as the two previous cases have shown, the ECJ has been bold and unfazed by the size and strength of those it might be challenging, and willing to make rulings that have dramatic consequences. The Irish High Court is the only one of the three courts to make explicit mention of the revelations of Edward Snowden, but I do not think that it is too great a leap to suggest that Snowden has had an influence on all the others. Not a direct one – but a raising of awareness, even at the judicial level, of the issues surrounding privacy, why they matter, and how many different things are at stake. A willingness to really examine the technology, to face up to the ways in which the ‘new’ world is different from the old – and a willingness to take on the big players.

I may well be being overly optimistic, and I don’t think too much should be read into this, but it could be critical. The law is only one small factor in the overall story – but it is a critical one, and if people are to begin to take back their privacy, they need to have the law at least partly on their side, and to have judges who are able and willing to enforce that law. With this latest ruling, and the ones that have come over the last few months, the signs are more positive than they have been for some time.

 

Addendum: As David Anderson has pointed out, the UK Supreme Court showed related tendencies in last week’s ruling over the disclosure of past criminal records in job applications, in R (T) v SSHD [2014] UKSC 35 on 18th June. See the UKSC Blog post here.

Data Retention: taking privacy seriously

The repercussions of yesterday’s landmark ruling of the Court of Justice of the European Union – that the Data Retention Directive is invalid, and has been so since its inception – are likely to be complex and wide-ranging. Lawyers, academics, politicians and activists have been reading, writing, thinking and speculating about what might happen. With the directive declared invalid, what will happen to the various national implementations of that directive? In the UK, for example, we have The Data Retention (EC Directive) Regulations 2009. Will it need to be repealed? Will it need to be challenged – and if so how, and by whom? What will the various communications service providers – the ISPs, the telecommunications companies and so forth – do in reaction to the declaration? What will happen to other legislation that at least in part relies on retained data – the Regulation of Investigatory Powers Act 2000 (RIPA), for example? Will the police and intelligence services change what they do in any way, shape or form? Will the various governments attempt some kind of replacement for the Data Retention Directive? If so, what form will it take?

These are just some of the open questions – and the answers to them are only just starting to emerge. Some will be clear – but a great many will be very messy, and will take a lot of time, energy and heartache to sort out. The question that should immediately spring to mind is how all this mess, and the resultant waste of time, energy, expertise and heartache, could have been avoided. Actually, the answer is simple. It could have been avoided if privacy had been taken seriously to start with.

Underestimating privacy

For a long time, privacy hasn’t been taken nearly seriously enough. It hasn’t been taken seriously by the big operators on the internet – Facebook, Google, Apple, Microsoft, Yahoo! and so forth. Their policies and practices have treated privacy as a minor irritant, dealt with by obscure and unfathomable policies that people at best scroll through without reading before clicking ‘OK’ at the bottom. Their products have treated privacy as an afterthought, almost an irrelevance – a few boxes to tick to satisfy the lawyers, that’s all. Privacy hasn’t been taken seriously by the intelligence agencies or the police forces either – to them it has been just the province of a few geeks and agitators, the tinfoil hat brigade. It hasn’t been taken seriously by some of the open data people – the furore over care.data is just one example.

Privacy, however, does matter. It matters to ordinary people in their ordinary lives – not just to geeks and nerds, not just to ‘evil-doers’, not just to paranoid conspiracy theorists. And when people care enough about things, they can often find ways to make sure that those things are treated with respect. They fight. They act. They work together – and often, more often than might immediately seem apparent, they find a way to win. That was how the Communications Data Bill – the ‘Snoopers’ Charter’ – was defeated. That is why Edward Snowden’s revelations are still reverberating around the world. That’s why behavioural advertising has the bad name that it does – and why the Do Not Track initiative started, and why the EU brought in the ‘Cookies Directive’, with all its flaws.

All these conflicts – and the disaster that is the Data Retention Directive – could have been avoided, or at least ameliorated, if the people behind these various initiatives, laws, processes and products had taken privacy seriously to start with. This is one of the contentions of my new book, Internet Privacy Rights – people believe they have rights, and when those rights are infringed, they care about it, and increasingly they’re finding ways to act upon it. Governments, businesses and others need to start to understand this a bit better if they’re not going to get into more messes like the one that surrounds the Data Retention Directive. It’s not as though they haven’t had warnings. From the very start, privacy advocates have been complaining about the Directive – indeed, even before its enactment the Article 29 Working Party had been strongly critical of the whole concept of mass data retention. That criticism continued over the years, largely ignored by those in favour of mass surveillance. In 2011, Peter Hustinx, the European Data Protection Supervisor, called the Data Retention Directive “the most privacy-invasive instrument ever” – and that was before the revelations of Edward Snowden.

They should have listened. They should be listening now. Privacy needs to be taken seriously.

 

Paul Bernal, April 2014

Internet Privacy Rights – Rights to Protect Autonomy is available from Cambridge University Press here. Quote code ‘InternetPrivacyRights2014’ for a 20% discount from the CUP online shop.

Data retention: fighting for privacy!

This morning’s news that the Court of Justice of the European Union has declared the Data Retention Directive to be invalid has been greeted with joy amongst privacy advocates. It’s a big win for privacy – though far from a knockout blow to the supporters of mass surveillance – and one that should be taken very seriously indeed. As Glyn Moody put it in his excellent analysis:

“…this is a massively important ruling. It not only says that the EU’s Data Retention Directive is illegal, but that it always was from the moment it was passed. It criticises it on multiple grounds that will make it much harder to frame a replacement. That probably won’t be impossible, but it will be circumscribed in all sorts of good ways that will help to remove some of its worst elements.”

I’m not going to attempt a detailed legal analysis here – others far more expert than me have already begun the process. These are some of the best that I have seen so far:

Fiona de Londras: http://humanrights.ie/civil-liberties/cjeu-strikes-down-data-retention-directive/

Daithí Mac Síthigh: http://www.lexferenda.com/08042014/2285/

Simon McGarr: http://www.mcgarrsolicitors.ie/2014/04/08/digital-rights-ireland-ecj-judgement-on-data-retention/

The full impact of the ruling won’t become clear for some time, I suspect – and already some within the European Commission seem to be somewhat in panic mode, looking around for ways to underplay the ruling and limit the damage to their plans for more and more surveillance and data retention. Things are likely to remain in flux for some time – but there are some key things to take from this already.

The most important of these is that privacy is worth fighting for – and that when we fight for privacy, we can win, despite what may seem overwhelming odds and supremely powerful and well-resourced opponents. This particular fight exemplifies the problems faced – but also the way that they can and are being overcome. It was brought by an alliance of digital rights activists – most notably Digital Rights Ireland – and has taken a huge amount of time and energy. It is, as reported in the Irish Times by the excellent Karlin Lillington, a ‘true David versus Goliath victory’. It is a victory for the small people, the ordinary people – for all of us – and one from which we should take great heart.

Privacy often seems as though it is dead, or at the very least dying. Each revelation from Edward Snowden seems to demonstrate that every one of our movements is being watched at all times. Each new technological development seems to have privacy implications, and the developers of the technology often seem blissfully unaware of those implications until it’s almost too late. Each new government seems to embrace surveillance and see it as a solution to all kinds of problems, from fighting terrorism to rooting out paedophiles, from combatting the ‘evil’ of music and movie piracy to protecting children from cyberbullies or online pornography, regardless of the evidence that it really doesn’t work very well in those terms, if at all. Seeing it that way, however, misses the other side of the equation – that more and more people are coming to understand that privacy matters, and are willing to take up the fight for privacy. Sometimes those fights are doomed to failure – but sometimes, as with today’s ruling over data retention, they can succeed. We need to keep fighting.

A progressive digital policy?

Yesterday I read a call for submissions to Labour Left’s ‘Red Book II’, by Dr Éoin Clarke – to develop a way forward for the Labour Party. It started me thinking about what would really constitute a progressive digital policy – because for me, any progressive party should be looking at how to deal with the digital world. It is becoming increasingly important – and governments seem wholly unable to deal with, or even understand, the digital world.

It must be said from the outset that I am not a Labour Party member, though I was for many years. I left in 1999, partly because I was leaving the country and partly because I was already becoming disillusioned with the direction that Labour was taking – a disillusionment that the invasion of Iraq only confirmed. I have not rejoined the party since, though I have been tempted at times. One of the reasons I have not been able to bring myself to join has been the incoherence and oppressiveness of Labour’s digital policies, which are not those of a progressive, positive and modern party – one that represents the ordinary people, and in particular the young people, of Britain today.

That seems to me to be very wrong. Labour should be a progressive party. It should be one that both represents and learns from young people. It should be one that looks forward rather than back – and one that is brave enough to be radical. Right now it isn’t: and the last government presided over some appalling, oppressive and regressive digital policies.

I’ve written in the past about why governments always get digital policy wrong – but it’s much easier to snipe from the sidelines than it is to try to build real policy. Here, therefore, is my first attempt at putting together a coherent, progressive policy for digital government. It is of course very much a skeleton – just the barest of bones – and very much a first attempt. There is probably a lot missing, and it needs a lot more thought. It would take a lot of work to put flesh on the bones – but for me, the debate needs to be had.

The starting point for such a policy would be a series of nine commitments.

  1. A commitment to the right to access the net – and to supporting human rights online as well as in the real world. This is the easiest part of the policy, and one where Labour, at least theoretically, has not been bad. Gordon Brown spoke of such a right. However, supporting such a right has implications, implications which the Labour Party seems to have neither understood nor followed. The most important such implication is that it should not be possible to arbitrarily prevent people from accessing the net – and that the barrier for removal of that right should be very high. Any policy which relies on the idea of blocking access should be vigorously resisted – the Digital Economy Act is the most obvious example. Cutting people’s access on what is essentially suspicion is wholly inconsistent with a commitment to the right to access the internet.
  2. A commitment against internet surveillance – internet surveillance is very much in the news right now, with the Coalition pushing the Communications Data Bill, accurately labelled the ‘snoopers’ charter’, about which I have written a number of times. Labour should very much oppose this kind of surveillance, but doesn’t. Indeed, rather the opposite – the current bill is in many ways a successor to Labour’s ‘Interception Modernisation Programme’. Surveillance of this kind goes very much against what should be Labour values: it can be and has been used to monitor those organising protests and similar, going directly against the kinds of civil rights that should be central to the programme of any progressive, left wing party: the rights to assembly and association. Labour should not only say, right now, that it opposes the Snoopers’ Charter, but that it would not seek to bring in other similar regulation. Indeed, it should go further, and suggest that it would work within the European Union to repeal the Data Retention Directive (which was pushed through by Tony Blair) and to reform RIPA – restricting the powers that it grants rather than increasing them.
  3. A commitment to privacy and data protection – rather than just paying lip service to them. I have written many times before about the problems with the Information Commissioner’s Office. First of all it needs focus: it (or any replacement body) should be primarily in charge of protecting privacy. Secondly, it needs more real teeth – but also more willingness to use them, and against more appropriate targets. There has been far too little enforcement against corporate bodies, and too much against public authorities. If companies are to treat individuals’ private information better, they need the incentive to do so – at the moment, even if they are detected, the enforcement tends to be feeble: a slap on the wrist at best. The current law punishes each group inappropriately: public authorities with big fines, which ultimately punish the public; corporates barely at all. Financial penalties would provide an incentive for businesses, while more direct personal punishments for those in charge of public authorities would work better as an incentive for them, as well as not punishing the public!
  4. A commitment to oppose the excessive enforcement of copyright – and instead to encourage the content industry to work for more positive ways forward. This would include the repeal of the Digital Economy Act, one of the worst pieces of legislation in the digital field, and one of which the Labour Party should be thoroughly ashamed. Labour needs to think more radically and positively – and understand that the old ways don’t work, and merely manage to alienate (and even criminalise) a generation of young people. Labour has a real opportunity to do something very important here – and to understand the tide that is sweeping across the world, at least in the minds of the people. In the US, SOPA and PIPA have been roundly beaten. ACTA suffered a humiliating defeat in the European Parliament and is probably effectively dead. In France, the new government is looking to abolish HADOPI – the body that enforces their equivalent of the Digital Economy Act. A truly progressive, radical party would not resist this movement – it would seek to lead it. Let the creative minds of the creative industries be put to finding a creative, constructive and positive way forward. Carrots rather than just big sticks.
  5. A commitment to free speech on the internet. This has a number of strands. First of all, to develop positive and modern rules governing defamation on the internet. Reform of defamation is a big programme – and I am not convinced that the current reform package does what it really should, focussing too much on reforming what happens in the ‘old media’ (where I suspect there is less wrong than some might suggest) without dealing properly with the ‘new media’ (which has been dealt with fairly crudely in the current reforms). There needs to be clarity about protection for intermediaries, for example.
  6. A commitment against censorship – this is the second part of the free speech strand. In the current climate, there are regular calls to deal with such things as pornography and ‘trolling’ on the internet – but most of what is actually suggested amounts to little more than censorship. We need to be very careful about this indeed – the risks of censorship are highly significant. Rather than strengthening our powers to censor and control, via web-blocking and so forth, we need to make them more transparent and accountable. A key starting point would be the reform of the Internet Watch Foundation, which plays a key role in dealing with child abuse images and related websites, but falls down badly in terms of transparency and accountability. It needs much more transparency about how it works – a proper appeals procedure, better governance structures and so forth. The Labour Party must not be seduced by the populism of anti-pornography campaigners into believing in web-blocking as a simple, positive tool. There are huge downsides to that kind of approach, downsides that often greatly outweigh the benefits.
  7. A radical new approach to social media – the third strand of the free speech agenda. We need to rethink the laws and their enforcement that have led to tragic absurdities like the Twitter Joke Trial, and the imprisonment of people for Facebook posts about rioting. The use of social media is now a fundamental part of many people’s lives – pretty much all young people’s lives – and at present it often looks as though politicians and the courts have barely a clue how it works. Labour should be taking the lead on this – and it isn’t. The touch needs to be lighter, more intelligent and more sensitive – and led by people who understand and use social media. There are plenty of them about – why aren’t they listened to?
  8. A commitment to transparency – including a full commitment to eGovernment, continuing the good aspects of what the current government is doing in relation to Open Data. Transparency, however, should mean much more – starting with full and unequivocal support for Freedom of Information. There has been too much said over recent months to denigrate the idea of freedom of information, and to suggest that it has ‘gone too far’. The opposite is much more likely to be the case: and a new approach needs to be formulated. If it takes too much time, money and effort to comply with FOI requests, that indicates that the information hasn’t been properly organised or classified, not that the requests should be curbed. The positive, progressive approach would be to start to build systems that make it easier to provide the information, not complain about the requests.
  9. A commitment to talk to the experts – and a willingness to really engage with and listen to them. We have some of the best – from people like Tim Berners-Lee to Professor Ross Anderson at the Cambridge University Computer Lab, Andrew Murray at the LSE, the Oxford Internet Institute and various other university departments, civil society groups and so forth – and yet the government consistently fails to listen to what they say, and prefers instead to listen to industry lobby groups and Whitehall insiders. That is foolish, short-sighted and inappropriate – as well as being supremely ineffective. It is one of the reasons that the policies formulated are not just misguided in their aims but also generally fail to achieve those aims. There is real expertise out there – it should be used!

Much more is needed of course – this just sets out a direction. I’ve probably missed out some crucial aspects. Some of this may seem more about reversing and cancelling existing policies rather than formulating new ones – but that is both natural and appropriate, as the internet, much more than most fields, generally needs a light touch. The internet is not ‘ungovernable’, but most attempts to govern it have been clumsy and counter-productive.

A forward-looking, radical and positive digital policy would mark the Labour Party out as no longer being in the hands of the lobbyists, but instead being willing to fight for the rights of real, ordinary people. It would mark out the Labour Party as being a party that understands young people better – and supports them rather than demonises and criminalises them. Of course I do not expect the Labour Party to take this kind of agenda on. It would take a level of political courage that has not been demonstrated often by any political party, let alone the current Labour Party, to admit that they have got things so wrong in the past. Admission of past faults is something that seems close to political blasphemy these days – for me, that is one of the biggest problems in politics.

As I said at the start, this is very much a first stab at an approach for the future – I would welcome comments, thoughts and even criticism(!). We need debate on this – and not just for the Labour Party. Currently, though my history has been with the Labour Party, I find myself without anyone that I think can represent me. If any party were to take on an agenda for the digital world that would make more sense, I would be ready to listen.

Snoopers’ Charter Consultation

The draft Communications Data Bill – the ‘Snoopers’ Charter’ – is currently up for consultation before a specially put together Joint Parliamentary Committee. The consultation period has been relatively short – it ends on 23rd August – and comes at a time when many people are away on holiday and many others have been enjoying (and being somewhat distracted by) the Olympic Games.

Even so, it’s very important – not just because what is being proposed is potentially highly damaging, but because it’s a field in which the government has been, in my opinion, very poorly advised and significantly misled. There is a great deal of expertise around – particularly on the internet – but in general, as in so many areas of policy, the government seems to be very unwilling to listen to the right people. I’ve blogged on the general area a number of times before – most directly on ‘Why does the government always get it wrong?’.

All this means that it would be great if people made submissions – for details see here.

Here is the main part of my submission, reformatted for this blog.

————————————————-

Submission to the Joint Committee on the draft Communications Data Bill

The draft Communications Data Bill raises significant issues – issues connected with human rights, with privacy, with security and with the nature of the society in which we wish to live. These issues are raised not by the detail of the bill but by its fundamental approach. Addressing them would, in my opinion, require such a significant re-drafting of the bill that the better approach would be to withdraw the bill in its entirety and rethink the way that security and surveillance on the Internet is addressed.

As noted, there are many issues brought up by the draft bill: this submission does not intend to deal with all of them. It focusses primarily on three key issues:

1) The nature of internet surveillance. In particular, that internet surveillance means much more than ‘communications’, partly because of the nature of the technology involved and partly because of the many different ways in which the internet is used. Internet surveillance means surveilling not just correspondence but social life, personal life, finances, health and much more. Gathering ‘basic’ data can make the most intimate, personal and private information available and vulnerable.

2) The vulnerability of both data and systems. It is a fallacy to assume that data or systems can ever be made truly ‘secure’. The evidence of the past few years suggests precisely the opposite: those who should be most able and trusted with the security of data have proved vulnerable. The approach of the draft Communications Data Bill – essentially a ‘gather all then look later’ approach – is one that not only fails to take proper account of that vulnerability, but actually sets up new and more significant vulnerabilities, effectively creating targets for hackers and others who might wish to take advantage of or misuse data.

3) The risks of ‘function creep’. The kind of systems and approach envisaged by the draft Bill makes function creep a real and significant risk. Data, once gathered, is a ‘resource’ that is almost inevitably tempting to use for purposes other than those for which its gathering was envisaged. These risks seem to be insufficiently considered both in the overall conception and in the detail of the Bill.

I am making this submission in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research in internet law and specialise in internet privacy from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy, and set out a possible rights-based approach to internet privacy. The Draft Communications Data Bill therefore lies precisely within my academic field. I would be happy to provide more detailed evidence, either written or oral, if that would be of assistance to the committee.

1 The nature of internet surveillance

As set out in Part 1 of the draft bill, the approach adopted is that all communications data should be captured and made available to the police and other relevant public authorities. The regulatory regime set out in Part 2 concerns accessing the data, not gathering it: gathering is intended to be automatic and universal. Communications data is defined in Part 3 Clause 28 very broadly, via the categories of ‘traffic data’, ‘use data’ and ‘subscriber data’, each of which is defined in such a way as to attempt to ensure that all internet and other communications activity is covered, with the sole exception of the ‘content’ of a communication.

The all-encompassing nature of these definitions is necessary if the broad aims of the bill are to be supported: if the definitions do not cover any particular form of internet activity (whether existent or under development), then the assumption would be that those who the bill would intend to ‘catch’ would use that form. That the ‘content’ of communications is not captured (though it is important in relation to more conventional forms of communication such as telephone calls, letters and even emails) is of far less significance in relation to internet activity, as shall be set out below.

1.1 ‘Communications Data’ and the separation of ‘content’

As noted above, the definition of ‘communications data’ is deliberately broad in the bill. On the surface, it might appear that ‘communications data’ relates primarily to ‘correspondence’ – bringing in the ECHR Article 8 right to respect for privacy of correspondence – and indeed communications like telephone calls, emails, text messages, tweets and so forth do fit into this category – but internet browsing data has a much broader impact. A person’s browsing can reveal far more intimate, important and personal information about them than might be immediately obvious. It would tell which websites are visited, which links are followed, which files are downloaded – and also when and for how long sites are perused, and so forth. This kind of data can reveal habits, preferences and tastes, and can uncover, to a reasonable probability, religious persuasion, sexual preferences, political leanings etc, even without what might reasonably be called the ‘content’ of any communications being examined – though what constitutes ‘content’ is contentious.

Consider a Google search, for example: if RIPA’s requirements are to be followed, the search term would be considered ‘content’ – but would links followed as a result of a search count as content or communications data? Who is the ‘recipient’ of a clicked link? If the data is to be of any use, it would need to reveal something of the nature of the site visited – and that would make it possible to ‘reverse engineer’ back to something close enough to the search term used to be able to get back to the ‘content’. The content of a visited site may be determined just by following a link – without any further ‘invasion’ of privacy. When slightly more complex forms of communication on the internet are considered – e.g. messaging or chatting on social networking sites – the separation between content and communications data becomes even less clear. In practice, as systems have developed, the separation is for many intents and purposes a false one. The issue of whether or not ‘content’ data is gathered is of far less significance: focussing on it is an old-fashioned argument, based on a world of pen and paper that is to a great extent one of the past.
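To make the point concrete, here is a minimal sketch – the logged URLs are invented examples, and only the Python standard library is used – of how much ‘content’ can be read straight out of bare URLs of the kind that would be retained as communications data:

```python
# A minimal sketch: recovering 'content' from bare URLs of the kind that
# would be retained as 'communications data'. The URLs are invented examples.
from urllib.parse import urlparse, parse_qs

logged_urls = [
    "https://www.nhs.uk/conditions/clinical-depression/symptoms/",
    "https://en.wikipedia.org/wiki/Same-sex_marriage",
    "https://search.example.com/results?q=divorce+solicitor+norwich",
]

for url in logged_urls:
    parts = urlparse(url)
    # The path alone usually names the topic of the page visited...
    topic = [segment for segment in parts.path.split("/") if segment]
    # ...and any query string can carry search terms verbatim.
    query = parse_qs(parts.query)
    print(parts.hostname, topic, query or "(no query string)")
```

No ‘content’ has been intercepted here, yet the subject matter of each visit – health, sexuality, legal troubles – is plain from the ‘communications data’ alone.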

What is more, analytical methods through which more personal and private data can be derived from browsing habits have already been developed, and are continuing to be refined and extended, most directly by those involved in the behavioural advertising industry. Significant amounts of money and effort are being spent in this direction by those in the internet industry: it is a key part of the business models of Google, Facebook and others. It is already advanced but we can expect the profiling and predictive capabilities to develop further.
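As a crude illustration of the kind of inference involved – the site categories and the counting rule below are invented for the example; real behavioural-advertising models are far more sophisticated – a profile can be built from nothing more than a list of hosts visited:

```python
# A crude sketch of attribute inference from browsing history. The mapping
# from hosts to attributes is hypothetical, purely for illustration.
from collections import Counter

SITE_SIGNALS = {
    "prayertimes.example": "religious persuasion",
    "samaritans.example": "mental health",
    "unionorganiser.example": "political leaning",
}

def infer_profile(visited_hosts):
    """Tally the signals in a browsing history: repeat visits
    strengthen each inference, and no 'content' is needed."""
    profile = Counter()
    for host in visited_hosts:
        if host in SITE_SIGNALS:
            profile[SITE_SIGNALS[host]] += 1
    return profile

history = ["prayertimes.example", "bbc.example", "prayertimes.example"]
print(infer_profile(history))  # Counter({'religious persuasion': 2})
```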

What this means is that by gathering, automatically and for all people, ‘communications data’, we would be gathering the most personal and intimate information about everyone. When considering this Bill, that must be clearly understood. This is not about gathering a small amount of technical data that might help in combating terrorism or other crime – it is about universal surveillance and profiling.

1.2 The broad impact of internet surveillance

The kind of profiling discussed above has a very broad effect, one with a huge impact on much more than just an individual’s correspondence. It is possible to determine (to a reasonable probability) individuals’ religions and philosophies, their languages used and even their ethnic origins, and then use that information to monitor them both online and offline. When communications (and in particular the internet) are used to organise meetings, to communicate as groups, to assemble both offline and online, this can become significant. Meetings can be monitored or even prevented from occurring, groups can be targeted and so forth. Oppressive regimes throughout the world have recognised and indeed used this ability – recently, for example, the former regime in Tunisia hacked into both Facebook and Twitter to attempt to monitor the activities of potential rebels.

It is of course this kind of profiling that can make internet monitoring potentially useful in counterterrorism – but making it universal rather than targeted will impact directly on the rights of the innocent, rights that, according to the principles of human rights, deserve protection. In the terms set out in the European Convention on Human Rights, there is a potential impact on Article 8 (right to private and family life, home and correspondence), Article 9 (Freedom of thought, conscience and religion), Article 10 (Freedom of expression) and Article 11 (Freedom of assembly and association). Internet surveillance can enable discrimination – contrary to ECHR Article 14 (prohibition of discrimination) – and even potentially automate it: a website could automatically reject visitors whose profile doesn’t match key factors, or change the services available or prices based on those profiles.
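A deliberately simple sketch makes that last point – the profile fields, values and rules here are all hypothetical – because once a profile exists, discriminating on it is a one-line condition:

```python
# A deliberately simple sketch of automated, profile-driven discrimination.
# The profile fields, values and rules are all hypothetical.
from dataclasses import dataclass

@dataclass
class InferredProfile:
    politics: str
    income_band: str

def serve_request(profile: InferredProfile) -> str:
    # Automated rejection based on one inferred attribute...
    if profile.politics == "activist":
        return "403 Forbidden"
    # ...or differential pricing based on another.
    price = 120 if profile.income_band == "high" else 90
    return f"200 OK (price shown: {price})"

print(serve_request(InferredProfile("activist", "low")))   # 403 Forbidden
print(serve_request(InferredProfile("moderate", "high")))  # 200 OK (price shown: 120)
```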

2 The vulnerability of data

The essential approach taken by the bill is to gather all data, then to put ‘controls’ over access to that data. That approach is fundamentally flawed – and appears to be based upon false assumptions. Most importantly, it is a fallacy to assume that data can ever be truly securely held. There are many ways in which data can be vulnerable, both from a theoretical perspective and in practice. Technological weaknesses – vulnerability to ‘hackers’ etc – may be the most ‘newsworthy’ at a time when hacker groups like ‘Anonymous’ have been gathering publicity, but they are far from the most significant. Human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to ‘monetise’ data) may be more significant – and the ways that all these vulnerabilities can combine make the risk even greater.

In practice, those groups, companies and individuals that might be most expected to be able to look after personal data have been subject to significant data losses. The HMRC loss of child benefit data discs, the MOD losses of armed forces personnel and pension data, and the numerous and seemingly regular data losses in the NHS highlight problems within those parts of the public sector which hold the most sensitive personal data. Swiss banks’ losses of account data to hacks and data theft demonstrate that even those with the highest reputation and need for secrecy – as well as the greatest financial resources – are vulnerable to human intervention. The high-profile hacks of Sony’s online gaming systems show that even those with access to the highest level of technological expertise can have their security breached. These are just a few examples, and whilst in each case different issues lay behind the breach, the underlying issue is the same: where data exists, it is vulnerable.

Designing and building systems to implement legislation like the Bill exacerbates the problem. The bill is not prescriptive as to the methods that would be used to gather and store the data, but whatever method is used would present a ‘target’ for potential hackers and others: where there are data stores, they can be hacked, where there are ‘black boxes’ to feed real-time data to the authorities, those black boxes can be compromised and the feeds intercepted. Concentrating data in this way increases vulnerability – and creating what are colloquially known as ‘back doors’ for trusted public authorities to use can also allow those who are not trusted – of whatever kind – to find a route of access.

Once others have access to data – or to data monitoring – the rights of those being monitored are even further compromised, particularly given the nature of the internet. Information, once released, can and does spread without control.

3 Function creep

Perhaps even more important than the vulnerabilities discussed above is the risk of ‘function creep’ – that when a system is built for one purpose, that purpose will shift and grow, beyond the original intention of the designers and commissioners of the system. It is a familiar pattern, particularly in relation to legislation and technology intended to deal with serious crime, terrorism and so forth. CCTV cameras that are built to prevent crime are then used to deal with dog fouling or to check whether children live in the catchment area for a particular school. Legislation designed to counter terrorism has been used to deal with people such as anti-arms trade protestors – and even to stop train-spotters photographing trains.

In relation to the Communications Data Bill this is a very significant risk – if a universal surveillance infrastructure is put into place, the ways that it could be inappropriately used are vast and multi-faceted. What is built to deal with terrorism, child pornography and organised crime might creep towards less serious crimes, then anti-social behaviour, then the organisation of protests and so forth. Further to that, there are many commercial lobbies that might push for access to this surveillance data – those attempting to combat breaches of copyright, for example, would like to monitor for suspected examples of ‘piracy’. In each individual case, the use might seem reasonable – but the function of the original surveillance, the justification for its initial imposition, and the balance between benefits and risks, can be lost. An invasion of privacy deemed proportionate for the prevention of terrorism might well be wholly disproportionate for the prevention of copyright infringement, for example.

The risks associated with function creep in relation to the surveillance systems envisaged in the Bill have a number of different dimensions. There can be creep in terms of the types of data gathered: as noted above, the split between ‘communications data’ and ‘content’ is already contentious, and as time and usage develop it is likely to become more so, making the category of protected ‘content’ likely to shrink. There can be creep in terms of the uses to which the data can be put: from the prevention of terrorism downwards. There can be creep in terms of the authorities able to access and use the data: from those engaged in the prevention of the most serious crime to local authorities and others. All these different dimensions represent important risks: all have happened in the recent past, both to legislation (e.g. RIPA) and to systems (e.g. the London Congestion Charge CCTV system).

Prevention of function creep through legislation is inherently difficult. Though it is important to be appropriately prescriptive and definitive in terms of the functions of the legislation (and any systems put in place to bring the legislation into action), function creep can and does occur through the development of different interpretations of legislation, amendments to legislation and so forth. The only real way to guard against function creep is not to build the systems in the first place: a key reason to reject this proposed legislation in its entirety rather than to look for ways to refine or restrict it.

4 Conclusions

The premise of the Communications Data Bill is fundamentally flawed. By its very design, innocent people’s data will be gathered (and hence become vulnerable) and their activities will be monitored. Universal data gathering or monitoring is almost certain to be disproportionate at best, highly counterproductive at worst.

This Bill is not just a modernisation of existing powers, nor a way for the police to ‘catch up’. It is something on a wholly different scale. We as citizens are being asked to put a huge trust in the authorities not to misuse the kind of powers made possible by this Bill. Trust is of course important – but what characterises a liberal democracy is not trust of authorities but their accountability, the existence of checks and balances, and the limitation of their powers to interfere with individuals’ lives. This bill, as currently envisaged, does not provide that accountability and does not sufficiently limit those powers: precisely the reverse.

Even without considering the issues discussed above, there is a potentially even bigger flaw with the bill: it appears very unlikely to be effective. The people that it might wish to catch are the least likely to be caught – those expert with the technology will be able to find ways around the surveillance, or ways to ‘piggy back’ on other people’s connections and draw more innocent people into the net. As David Davis MP put it, only the incompetent and the innocent will get caught.

The entire project needs a thorough rethink. Warrants (or similar processes) should be put in place before the gathering of the data or the monitoring of the activity, not before the accessing of data that has already been gathered, or the ‘viewing’ of a feed that is already in place. A more intelligent, targeted rather than universal approach should be developed. No evidence has been made public to support the suggestion that a universal approach like this would be effective – it should not be sufficient to just suggest that it is ‘needed’ without that evidence, nor to provide ‘private’ evidence that cannot at least qualitatively be revealed to the public.

That brings a bigger question into the spotlight, one that the Committee might think is the most important of all: what kind of a society do we want to build – one where everyone’s most intimate activities are monitored at all times just in case they might be doing something wrong? That, ultimately, is what the draft Communications Data Bill would build. The proposals run counter to some of the basic principles of a liberal, democratic society – a society where there should be a presumption of innocence rather than of suspicion, and where privacy is the norm rather than the exception. Is that what the Committee would really like to support?

Dr Paul Bernal

Lecturer in Information Technology, Intellectual Property and Media Law, UEA Law School

The myth of technological ‘solutions’

A story on the BBC webpages caught my eye this morning: ‘the parcel conundrum’. It described a scenario that must be familiar to almost everyone in the UK: you order something on the internet, and then the delivery people mess up the delivery, and all you end up with is a little note on the floor saying they tried to deliver it. Frustration, anger and disappointment ensue…

…so what is the ‘solution’? Well, if you read the article, we’re going to solve the problems with technology! The new, whizz-bang solutions are going to not just track the parcels, but track us, so they can find us and deliver the parcel direct to us, not to our unoccupied homes. They’re going to use information from social networking sites to discover where we are, and when they find us they’re going to use facial recognition software to ensure they deliver to the right person. Hurrah! No more problems! All our deliveries will be made on time, with no problems at all. All we have to do is let delivery companies know exactly where we are at all times, and give them our facial biometrics so they can be certain we are who we are.

Errr… could privacy be an issue here?

I was glad to see that the BBC did at least mention privacy in passing in their piece – even if they did gloss over it pretty quickly – but there are just one or two privacy problems here. I’ve blogged before about the issues relating to geo-location (here) – but remember, delivery companies often give 12-hour ‘windows’ for a delivery, so you’d have to let yourself be tracked for a long time to get the delivery. And your facial biometrics – will they really hold the information securely? Delete it when you’re found? Delivery companies aren’t likely to be the most secure or even skilled of operators (!), and their employees won’t always be exactly au fait with data protection etc – let alone have been CRB checked. It would be bad enough to allow the police or other authorities to track us – but effectively unregulated businesses? It doesn’t seem very sensible, to say the least…

…and of course under the terms of the Communications Data Bill (of which more below) putting all of this on the Internet will automatically mean it is gathered and retained for the use of the authorities, creating another dimension of vulnerability…

Technological solutions…

There is, however, a deeper problem here: a tendency to believe that a technological solution is available to a non-technological problem. In this case, the problem is that some delivery companies are just not very good – it may be commercial pressures, it may be bad management policies, it may be that they don’t train their employees well enough, it may be that they simply haven’t thought through the problems from the perspective of those of us waiting for deliveries. They can, however, ‘solve’ these problems just by doing their jobs better. A good delivery person is creative and intelligent, they know their ‘patch’ and find solutions when people aren’t in. They are organised enough to be able to predict their delivery times better. And so on. All the tracking technology and facial recognition software in the world won’t make up for poor organisation and incompetent management…

…and yet it’s far too easy just to say ‘here’s some great technology, all your problems will be solved’.

We do it again and again. We think the best new digital cameras will turn us into fantastic photographers without us even reading the manuals or learning to use our cameras (thanks to the excellent @legaltwo for the hint on that one!). We think ‘porn filters’ will sort out our parenting issues. We think web-blocking of the Pirate Bay will stop people downloading music and movies illegally. We think technology provides a shortcut without dealing with the underlying issue – and without thinking of the side effects or negative consequences. It’s not true. Technology very, very rarely ‘solves’ these kinds of problems – and the suggestion that it does is the worst kind of myth.

The Snoopers’ Charter

The Draft Communications Data Bill – the Snoopers’ Charter – perpetuates this myth in the worst kind of way. ‘If only we can track everyone’s communications data, we’ll be able to stop terrorism, catch all the paedos, root out organised crime’… It’s just not true – and the consequences to everyone’s privacy, just a little side issue to those pushing the bill, would be huge, potentially catastrophic. I’ve written about it many times before – see my submission to the Joint Committee on Human Rights for the latest example – and will probably end up writing a lot more.

The big point, though, is that the very idea of the bill is based on a myth – and that myth needs to be exposed.

That’s not to say, of course, that technology can’t help – as someone who loves technology, enjoys gadgets and spends a huge amount of his time online, it would be silly for me to claim that. Technology, however, is an adjunct to, not a substitute for, intelligent ‘real world’ solutions, and should be clever, targeted and appropriate. It should be a rapier rather than a bludgeon.

The snoopers’ charter

I have just made a ‘short submission’ to the Joint Committee on Human Rights (JCHR) regarding the Draft Communications Data Bill – I’ve reproduced the contents below. I have reformatted it in order to make it more readable here, but other than the formatting this is what I sent to the committee.

The JCHR will not be the only committee looking at the bill – at the very least there will be a special committee for the bill itself. The JCHR is important, however, because, as I set out in my short submission, internet surveillance should be viewed very much as a human rights issue. In the submission I refer to a number of the Articles of the European Convention on Human Rights (available online here). For reference, the Articles I refer to are the following: Article 8 (Right to Respect for Private and Family Life), Article 9 (Freedom of Thought, Conscience and Religion), Article 10 (Freedom of Expression), Article 11 (Freedom of Assembly and Association) and Article 14 (Prohibition of Discrimination).

Here is the submission in full

——————————————————–

Submission to the Joint Committee on Human Rights

Re: Draft Communications Data Bill

The Draft Communications Data Bill raises significant human rights issues – most directly in relation to Article 8 of the Convention, but also potentially in relation to Articles 9, 10, 11 and 14. These issues are raised not by the detail of the bill but by its fundamental approach. Addressing them would, in my opinion, require such a significant re-drafting of the bill that the better approach would be to withdraw the bill in its entirety and rethink the way that security and surveillance on the Internet is addressed.

I am making this submission in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research in internet law and specialise in internet privacy from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy, and set out a possible rights-based approach to internet privacy. The Draft Communications Data Bill therefore lies precisely within my academic field. I would be happy to provide more detailed evidence, either written or oral, if that would be of assistance to the committee.

1 The fundamental approach of the bill

As set out in Part 1 of the draft bill, the approach adopted is that all communications data should be captured and made available to the police and other relevant public authorities. The regulatory regime set out in Part 2 concerns accessing the data, not gathering it: gathering is intended to be automatic and universal. Communications data is defined in Part 3 Clause 28 very broadly, via the categories of ‘traffic data’, ‘use data’ and ‘subscriber data’, each of which is defined in such a way as to attempt to ensure that all internet and other communications activity is covered, with the sole exception of the ‘content’ of a communication.

The all-encompassing nature of these definitions is necessary if the broad aims of the bill are to be supported: if the definitions do not cover any particular form of internet activity (whether existent or under development), then the assumption would be that those who the bill would intend to ‘catch’ would use that form. That the ‘content’ of communications is not captured (though it is important in relation to more conventional forms of communication such as telephone calls, letters and even emails) is of far less significance in relation to internet activity, as shall be set out below.

2 The nature of ‘Communications Data’

As noted above, the definition of ‘communications data’ is deliberately broad in the bill. This submission will focus on one particular form of data – internet browsing data – to demonstrate some of the crucial issues that arise. Article 8 of the Convention states that:

“Everyone has the right to respect for his private and family life, his home and his correspondence”

On the surface, it might appear that ‘communications data’ relates to the ‘correspondence’ part of this clause – and indeed communications like telephone calls, emails, text messages, tweets and so forth do fit into this category – but internet browsing data has a much broader impact upon the ‘private life’ part of the clause. A person’s browsing can reveal far more intimate, important and personal information about them than might be immediately obvious. It would tell which websites are visited, which search terms are used, which links are followed, which files are downloaded – and also when and for how long sites are perused, and so forth. This kind of data can reveal habits, preferences and tastes – and can uncover, to a reasonable probability, religious persuasion, sexual preferences, political leanings etc.

What is more, analytical methods through which more personal and private data can be derived from browsing habits have already been developed, and are continuing to be refined and extended, most directly by those involved in the behavioural advertising industry. Significant amounts of money and effort are being spent in this direction by those in the internet industry – it is a key part of the business models of Google, Facebook and others. It is already advanced – but we can expect the profiling and predictive capabilities to develop further.

What this means is that by gathering ‘communications data’ automatically and for all people, we would be gathering the most personal and intimate information about everyone. When considering this bill, that must be clearly understood. This is not about gathering a small amount of technical data that might help in combating terrorism or other crime – it is about universal surveillance and, ultimately, profiling. That ‘content’ data is not gathered is of far less significance than it might appear: focussing on content is an old-fashioned argument, based on a world of pen and paper that is to a great extent one of the past.

3            Articles 9, 10, 11 and 14

The kind of profiling discussed above is what brings Articles 9, 10, 11 and 14 into play: it is possible to determine (to a reasonable probability) individuals’ religions and philosophies, the languages they use and even their ethnic origins, and then to use that information to monitor them both online and offline. When communications (and in particular the internet) are used to organise meetings, to communicate as groups, and to assemble both offline and online, this becomes significant. Meetings can be monitored or even prevented from occurring, groups can be targeted and so forth. Such profiling can enable discrimination – and even, potentially, automate it. Oppressive regimes throughout the world have recognised and indeed used this ability: recently, for example, the former regime in Tunisia hacked into both Facebook and Twitter in an attempt to monitor the activities of potential rebels.

It is, of course, this kind of profiling that can make internet monitoring potentially useful in counterterrorism – but making the monitoring universal rather than targeted will impact directly on the rights of the innocent, rights that, according to Articles 8, 9, 10, 11 and 14, should be respected.

4            The vulnerability of data

The approach taken by the bill is to gather all data, then to put ‘controls’ over access to that data. That approach is flawed for a number of reasons.

Firstly, it is a fallacy to assume that data can ever be truly securely held. There are many ways in which data can be vulnerable, both in theory and in practice. Technological weaknesses – vulnerability to ‘hackers’ and the like – may be the most ‘newsworthy’ at a time when hacker groups like Anonymous have been attracting publicity, but they are far from the most significant. Human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to ‘monetise’ data) may matter more – and the ways in which all these vulnerabilities can combine make the risk greater still.

In practice, those groups, companies and individuals that might most be expected to be able to look after personal data have suffered significant data losses. The HMRC loss of child benefit data discs, the MOD losses of armed forces personnel and pension data, and the numerous and seemingly regular data losses in the NHS highlight problems within those parts of the public sector which hold the most sensitive personal data. Swiss banks’ losses of account data to hacks and data theft demonstrate that even those with the highest reputation and need for secrecy – as well as the greatest financial resources – are vulnerable to human intervention. The high-profile hacks of Sony’s online gaming systems show that even those with access to the highest level of technological expertise can have their security breached. These are just a few examples, and whilst in each case different issues lay behind the breach, the underlying point is the same: where data exists, it is vulnerable.

What is more, designing and building systems to implement legislation like the Communications Data Bill exacerbates the problem. The bill is not prescriptive as to the methods that would be used to gather and store the data, but whatever method is used would present a ‘target’ for potential hackers and others: where there are data stores, they can be hacked; where there are ‘black boxes’ to feed real-time data to the authorities, those black boxes can be compromised and the feeds intercepted. Concentrating data in this way increases vulnerability – and creating what are colloquially known as ‘back doors’ for trusted public authorities to use can also allow those who are not trusted, of whatever kind, to find a route of access, as the sketch below illustrates.
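
The ‘back door’ problem can be shown with a toy example (Python, with invented names throughout): an access route built for trusted authorities cannot itself distinguish a legitimate holder of a credential from an attacker who has obtained the same credential.

AUTHORISED_KEYS = {"warrant-team-key-001"}  # hypothetical credential store

def read_feed(presented_key, feed):
    """Release the feed to anyone presenting a recognised key."""
    if presented_key in AUTHORISED_KEYS:
        return feed  # the system cannot know *who* actually holds the key
    raise PermissionError("access denied")

surveillance_feed = ["record-1", "record-2"]

# A legitimate investigator and an attacker with a leaked copy of the key
# are indistinguishable to the system; both calls succeed identically.
print(read_feed("warrant-team-key-001", surveillance_feed))
print(read_feed("warrant-team-key-001", surveillance_feed))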

Once others have access to data – or to data monitoring – the rights of those being monitored are even further compromised, particularly given the nature of the internet. Information, once released, can spread without control.

5            Function Creep

As important as the vulnerabilities discussed above is the risk of ‘function creep’ – that when a system is built for one purpose, that purpose will shift and grow, beyond the original intention of the designers and commissioners of the system. It is a familiar pattern, particularly in relation to legislation and technology intended to deal with serious crime, terrorism and so forth. CCTV cameras that are built to prevent crime are then used to deal with dog fouling or to check whether children live in the catchment area for a particular school. Legislation designed to counter terrorism has been used to deal with people such as anti-arms trade protestors – and even to stop train-spotters photographing trains.

In relation to the Communications Data Bill this is a very significant risk – if a universal surveillance infrastructure is put into place, the ways that it could be inappropriately used are vast and multi-faceted. What is built to deal with terrorism, child pornography and organised crime might creep towards less serious crimes, then anti-social behaviour, then the organisation of protests and so forth. Further to that, there are many commercial lobbies that might push for access to this surveillance data – those attempting to combat breaches of copyright, for example, would like to monitor for suspected examples of ‘piracy’. In each individual case, the use might seem reasonable – but the function of the original surveillance, and the justification for its initial imposition, can be lost.

Prevention of function creep through legislation is inherently difficult. Though it is important to be appropriately prescriptive and definitive about the functions for which the legislation, and any systems put in place to implement it, may be used, function creep can and does occur through the development of different interpretations of legislation, amendments to legislation and so forth. The only real way to guard against function creep is not to build the systems in the first place: a key reason to reject this proposed legislation in its entirety rather than to look for ways to refine or restrict it.

6            Conclusions

The premise of the Communications Data Bill is fundamentally flawed. By its very design, innocent people’s data will be gathered (and hence become vulnerable) and their activities will be monitored. Universal data gathering or monitoring is almost certain to be disproportionate at best, and highly counterproductive at worst.

Even without considering the issues discussed above, there is a potentially even bigger flaw in the bill: on the surface, it appears very unlikely to be effective. The people it might wish to catch are those least likely to be caught – those who are expert with the technology will be able to find ways around the surveillance, or ways to ‘piggy-back’ on other people’s connections, drawing more innocent people into the net. As David Davis put it, only the incompetent and the innocent will get caught.

The entire project needs a thorough rethink. Warrants (or similar processes) should be required before the gathering of the data or the monitoring of the activity, not merely before the accessing of data that has already been gathered or the ‘viewing’ of a feed that is already in place. A more intelligent approach – targeted rather than universal – should be developed. No evidence has been made public to support the suggestion that a universal approach like this would be effective: it should not be sufficient simply to assert that it is ‘needed’ without that evidence.

That brings a bigger question into the spotlight, one that the Joint Committee on Human Rights might consider the most important of all. What kind of society do we want to build – one where everyone’s most intimate activities are monitored at all times, just in case they might be doing something wrong? That, ultimately, is what the Draft Communications Data Bill would build. The proposals run counter to some of the basic principles of a liberal, democratic society – a society where there should be a presumption of innocence rather than of suspicion, and where privacy is the norm rather than the exception.

Dr Paul Bernal
Lecturer in Information Technology, Intellectual Property and Media Law
UEA Law School
University of East Anglia
Norwich NR4 7TJ

——————————————