Guest post: Data Retention: I can’t believe it’s not lawful, can you? A response to Anthony Speaight QC

Guest post by Matthew White

Introduction:

Ladies and gentlemen, Bagginses and Boffins. Tooks and Brandybucks. Grubbs! Chubbs! Hornblowers! Bolgers! Bracegirdles! Proudfoots. Put your butter away for I am about to respond, rebut, rebuke and more to a recent blog post for Judicial Power Project, by Anthony Speaight QC on data retention.

Blanket data retention is unlawful, please deal with it:

Speaight starts off by referring to the recent Court of Appeal (CoA) judgment in Tom Watson and Others v Secretary of State for the Home Department [2018] EWCA Civ 70 and how the Court of Justice of the European Union (CJEU) has created problems and uncertainties with regard to data retention. As David Allen Green would say, ‘Well…’ Well, just to be clear, the position of the CJEU on blanket indiscriminate data retention is crystal clear. It. Is. Unlawful. It just happens that the CoA took the position of sticking their fingers in their ears and pretending that the CJEU’s ruling doesn’t apply to UK law, because it’s somehow (it’s not) different.

Just billing data is retained? Oh really?

Next, Speaight recaps the data retention saga so far, in that telecommunications companies have always recorded who uses their services, when and where, often for billing purposes. A long time ago, in a galaxy far, far away (a few years ago, and anywhere with an internet connection) this position was a robust one. But the European Commission (Commission) in 2011 highlighted that:

[T]rends in business models and service offerings, such as the growth in flat rate tariffs, pre-paid and free electronic communications services, meant that operators gradually stopped storing traffic and location data for billing purposes thus reducing the availability of such data for criminal justice and law enforcement purposes.

So, in a nutshell, data for billing purposes are on the decrease. This would explain why the Data Retention Directive (DRD) (discussed more below) affected:

[P]roviders of electronic communication services by requiring such providers to retain large amounts of traffic and location data, instead of retaining only data necessary for billing purposes; this shift in priority results in an increase in costs to retain and secure the data.

So, it’s simply untrue to refer to just billing data when talking about data retention, because this isn’t the only data that is or has ever been sought.

It’s the Islamists’ fault that we have data retention:

Speaight next points out that it was the advent of Islamist international terrorism that made it advantageous to place data retention obligations on companies. Oh really? Are we going down this route? Well….. demands for data retention can be traced back to the ‘International Law Enforcement and Telecommunications Seminars’ (ILETS) (6), and in its 1999 report it was realised that Directive 97/66/EC (the old ePrivacy Directive), which made retention of communications data possible only for billing purposes, was a problem. The report sought to ‘consider options for improving the retention of data by Communication Service Providers.’ Improve? Ha. Notice how 1999 was before 9/11? Funny that.

It doesn’t stop there, though. A year later (still before 9/11), the UK’s National Criminal Intelligence Service (NCIS) made a submission (on behalf of MI5, MI6, GCHQ etc.) to the Home Office on data retention laws. They ironically argued that a targeted approach would be a greater infringement on personal privacy (para 3.1.5). Of course, they didn’t say how or why this was the case, because, reasons. Charles Clarke, the then junior Home Office Minister, and Patricia Hewitt, an ‘E-Minister’, both claimed that such proposals would never happen (Judith Rauhofer, ‘Just Because You’re Paranoid, Doesn’t Mean They’re Not After You: Legislative Developments in Relation to the Retention of Communications Data’ (2006) SCRIPTed 3, 228; Patricia Hewitt and Charles Clarke, Joint letter to Independent on Sunday, 28 Jan 2000) and should not be implemented (Trade and Industry Committee, UK Online Reviewed: the First Annual Report of the E-Minister and E-Envoy Report (HC 66 1999-2000), Q93).

Guess what? A year later, Part 11 of the Anti-terrorism, Crime and Security Act 2001 (ATCSA 2001) came into force, three months after 9/11 (Judith Rauhofer, 331). The Earl of Northesk, however, pointed out that ‘there is no evidence whatever that a lack of data retained has proved an impediment to the investigation of the atrocities’ on 9/11 (HL Deb 4 Dec 2001 vol 629 col 808-9). What this demonstrates is that data retention was always on the cards, even when its utility wasn’t proven; even the then Prime Minister, Tony Blair, noted that ‘all the surveillance in the world’ could not have prevented the 7/7 bombings. It’s just that, as Roger Clarke succinctly puts it:

“[M]ost critical driver of change, however, has been the dominance of national security extremism since the 2001 terrorist attacks in the USA, and the preparedness of parliaments in many countries to grant law enforcement agencies any request that they can somehow link to the idea of counter-terrorism.” (Roger Clarke, ‘Data retention as mass surveillance: the need for an evaluative framework’ (2015) International Data Privacy Law 5:2 121, 122).

Islamist terrorism was just fresh justification (7,9), for ‘the EU governments always intended to introduce an EC law to bind all member states to adopt data retention.’ Mandatory data retention was championed by the UK during its Presidency of the Council of the European Union (Council) (9) (and yes, that includes the ‘no data retention from us’ Charles Clarke (who was accused of threatening the European Parliament to agree to data retention (9))), and was described as a master class in diplomacy and political manoeuvring (Judith Rauhofer, 341) (and they say it’s the EU that tells us what to do!!). Politicians goin’ politicate. Yes, the DRD makes reference to the Madrid bombings, but the DRD was not limited to combating terrorism (6), just as the reasons for accessing communications data in UK law under s.22 of the Regulation of Investigatory Powers Act 2000 (RIPA 2000) were not solely based on fighting terrorism. There is nothing wrong with saying that data retention (yeah, but not blanket, of course) and access to said data can be important in the fight against Islamist terrorism, but would you please stop pretending that this was the basis on which data retention was sought?

Data retention was smooth like rocks:

Next, Speaight points to the ‘smooth operation’ of the data retention system. Smooth how and in what ways? Harder to answer that is, yess! Well….. in 2010, the Article 29 Working Party (WP29) pointed out that ‘the lack of available sensible statistics hinders the assessment of whether the [data retention] directive has achieved its objectives.’ The WP29 went further, pointing out that there was a lack of harmonisation in national implementation of the DRD (2). This was the purpose of the DRD (harmonising data retention across the EU), and it didn’t even achieve what it set out to do.

What about its true purpose? You know, spying on every EU citizen? Well, the European Data Protection Supervisor (EDPS) responded to the Commission’s evaluation of the DRD. WARNING: the EDPS pulls no punches. First, the EDPS reiterated that the DRD was based upon the assumption of necessity (para 38). Secondly, the EDPS criticised the Commission’s assertion that most Member States considered data retention a necessary tool, when its conclusions were based on just over a third (that’s less than half, right?) of them (para 40). Thirdly, these conclusions were, in fact, only statements (para 41). Fourthly, the EDPS highlighted that there should be sufficient quantitative and qualitative information to assess whether the DRD was actually working and whether less privacy-intrusive measures could achieve the same result; such information should show the relationship between use and result (para 43).

Surprise, surprise, the EDPS didn’t find sufficient evidence to demonstrate the necessity of the DRD and said that further investigations into alternatives should commence (para 44). Fifthly, the EDPS pretty much savaged the quantitative and qualitative information available (paras 45-52). A few years later, the CJEU asked for proof of the necessity of the DRD. There was a lack of statistical evidence from EU Member States, the Commission, the Council and the European Parliament, and despite that, they had the cheek to ask the CJEU to reject the complaints made by Digital Rights Ireland and others anyway (ibid). Only the Austrian government were able to provide statistical evidence on the use (not retention) of communications data, which didn’t involve any cases of terrorism (ibid). The UK’s representatives admitted (come again? The UK admits something?) there was no ‘scientific data’ to underpin the need for data retention (ibid), so the question must be asked: wtaf had the DRD been based upon? Was it the assumption of necessity the EDPS referred to? Draw your own conclusions. The moral of the story is that the DRD did not operate smoothly.

Ruling against data retention was a surprise?

Speaight then moves on to the judgment that started it all, Joined Cases C‑293/12 and C‑594/12, Digital Rights Ireland, in which the CJEU invalidated the DRD across the EU. According to Speaight, this came as a ‘surprise.’

I felt a great disturbance in the Law, as if thousands of spies, police, other public authorities, politicians and lawyers suddenly cried out in terror, as the State were suddenly unable to spy anymore. I fear something terrible has happened.

So, who was surprised? Was it the European Parliament, which had initially opposed this form of data retention, urging that its use must be entirely exceptional, based on specific comprehensible law, authorised by judicial or other competent authorities for individual cases, and consistent with the European Convention on Human Rights (ECHR)? Was it a surprise to them when they also noted that ‘a general data retention principle must be forbidden’ and that ‘any general obligation concerning data retention’ is contrary to the proportionality principle (Abu Bakar Munir and Siti Hajar Mohd Yasin, ‘Retention of communications data: A bumpy road ahead’ (2004) The John Marshall Journal of Computer & Information Law 22:4 731, 734; Clive Walker and Yaman Akdeniz, ‘Anti-Terrorism Laws and Data Retention: War is over?’ (2003) Northern Ireland Legal Quarterly 54:2 159, 167)?

Was it a surprise to Patrick Breyer, who argued that data retention was incompatible with Articles 8 and 10 of the ECHR back in 2005 (372, 374, 375)? Was it a surprise to Mariuca Morariu, who argued that the DRD had failed to demonstrate its necessity (Mariuca Morariu, ‘How Secure is to Remain Private? On the Controversies of the European Data Retention Directive’ Amsterdam Social Science 1:2 46, 54-9)? Was it a surprise to Privacy International (PI), the European Digital Rights Initiative (EDRi), the 90 NGOs and 80 telecommunications service providers (9) who were against the DRD? Was it a surprise to the 40 civil liberties organisations who urged the European Parliament to vote against the retention of communications data?

Was it a surprise to the WP29, the European Data Protection Commissioners, the International Chamber of Commerce (ICC), the European Internet Services Providers Association (EuroISPA), the US Internet Service Provider Association (USISPA), the All Party Internet Group (APIG) (Abu Bakar Munir and Siti Hajar Mohd Yasin, 746-749) and those at the G8 Tokyo Conference? Hell, even our own assistant Information Commissioner, Jonathan Bamford, back in 2001 wouldn’t have been surprised, because he said ‘Part 11 isn’t necessary, and if it is necessary it should be made clear why’ (HL Deb 27 Nov 2001 vol 629 cc183-290, 252). Was it a surprise when prior to Digital Rights Ireland:

Bulgaria’s Supreme Administrative Court, the Romanian, German Federal, Czech Republic Constitutional Courts and the Supreme Court of Cyprus all [declared] national implementation of the DRD either invalid or unconstitutional (in some or all regards) and incompatible with Article 8 ECHR?

Was Jules Winnfield surprised?

The point I’m trying to hammer home is (you’ve guessed it) that the CJEU’s ruling in Digital Rights Ireland should have come as no surprise. Still on the issue of surprise: for Speaight, the ruling was surprising because it departed from decisions of the European Court of Human Rights (ECtHR) and of the CJEU itself. Ok, let’s look at these ECtHR cases Speaight refers to. The first is Weber and Saravia v Germany, a case on ‘strategic monitoring.’ This is a whole different kettle of fish when compared to the DRD, as it concerned the surveillance of 10% (I’m not saying this is cool either, btw) [30, 110] of German telecommunications, not the surveillance of ‘practically the entire European population’ [56]. Ok, that may have been an exaggeration by the CJEU, as there are only 28 (we’re not so sure about one, though) EU Member States, but the point is, the powers in question are not comparable. The DRD was confined to serious crime, without even defining it [61], whereas German law in Weber concerned six defined purposes for strategic monitoring [27], which could only be triggered through catchwords [32]. In Digital Rights Ireland, authorisation for access to communications data under the DRD was not dependent upon ‘prior review carried out by a court or by an independent administrative body’ [62], whereas in Weber it was [21, 25]. Apples and oranges.

The second ECtHR case was Kennedy v UK, and it’s funny that this case is brought up. The ECtHR in this case referred to a previous case, Liberty v UK, in which the virtually unfettered power of capturing external communications [64] violated Article 8 of the ECHR [70]. The ECtHR in Kennedy referred to this as an indiscriminate power [160, 162] (bit like data retention, huh?), and the UK only succeeded in Kennedy because the ECtHR was acting upon the assumption that interception warrants only related to one person [160, 162]. Of course, the ECtHR didn’t know that ‘person’ for the purposes of RIPA 2000 meant ‘any organisation and any association or combination of persons,’ so, you know, not literally one person.

And this was, of course, prior to Edward Snowden’s bombshell of surveillance revelations, which triggered further proceedings by Big Brother Watch. A couple of years ago, in Roman Zakharov v Russia, the ECtHR’s Grand Chamber (GC) ruled that surveillance measures that are ‘ordered haphazardly, irregularly or without due and proper consideration’ [267] violate Article 8 [305]. That is because the automatic storage of clearly irrelevant data would contravene Article 8 [255]. This coincides with Advocate General (AG) Saugmandsgaard Øe’s opinion that the ‘disadvantages of general data retention obligations arise from the fact that the vast majority of the data retained will relate to persons who will never be connected in any way with serious crime’ [252]. That’s a lot of irrelevant data, if you ask me. Judge Pinto de Albuquerque, in his concurring opinion in Szabo and Vissy v Hungary, regards Zakharov as a rebuke of the ‘widespread, non-(reasonable) suspicion-based, “strategic surveillance” for the purposes of national security’ [35]. So, I’d say that even Weber and Saravia is put into doubt. And so, even if the CJEU rules that data retention in the national security context is outside its competence, there is enough ECtHR case law to bite the UK on its arse.

Probably the most important ECtHR case not mentioned by Speaight (why is that?) is S and Marper v UK: this is the data retention case. Although it concerned DNA data retention, the ECtHR’s concerns ‘have clear applications to the detailed information revealed about individuals’ private lives by communications data.’ What did the GC rule in S and Marper? Oh, was it that blanket indiscriminate data retention ‘even on a specific group of individuals (suspects and convicts) violated Article 8’? Yes, it was, and it was S and Marper to which the CJEU referred on three separate occasions in Digital Rights Ireland [47, 54-5]. Tele2 and Watson (where the CJEU reconfirmed that blanket indiscriminate data retention is prohibited under EU law) is just the next logical step with regards to communications data. So, far from being surprising, the CJEU in Digital Rights Ireland and Tele2 and Watson was acting in a manner consistent with the case law of the ECtHR.

The CJEU case law that Speaight refers to is Ireland v Parliament and Council, which was a challenge to the DRD’s legal basis, not to its compatibility with the Charter of Fundamental Rights, so I’m not entirely sure what Speaight is trying to get at. All in all, Speaight hasn’t shown anything to demonstrate that Digital Rights Ireland departed from ECtHR or CJEU case law.

You forgot to say the UK extended data retention laws:

Speaight then rightly acknowledges how the UK government replaced the UK law implementing the DRD with the Data Retention and Investigatory Powers Act 2014 (DRIPA 2014) in lightspeed fashion. What Speaight omits, however, is that DRIPA 2014 extended retention obligations from telephone companies and Internet Service Providers (ISPs) to Over-The-Top (OTT) services such as Skype, Twitter, Google, Facebook etc. James Brokenshire MP attested that DRIPA 2014 was introduced to clarify what was always covered by the definition of telecommunications services (HC Deb 14 July 2014, vol 584, 786). This, of course, was total bullshit (5), but like I said, politicians goin’ politicate.

Claimants don’t ask questions, courts do:

Speaight moves on to the challenges to DRIPA 2014; we know the story already: the High Court (HC) said it was inconsistent with Digital Rights Ireland, whereas the CoA disagreed, blah, blah. Speaight points out that the claimants had no issue with data retention in principle, which is true, but so what? Speaight also points out that the CJEU went further than what the claimants asked by ruling that blanket indiscriminate data retention was not permissible under EU law. Wait, what the fark? It’s not the bloody claimants that ask the CJEU a question on the interpretation of EU law; I’m pretty sure it was the Swedish referring court (via Article 267 of the Treaty on the Functioning of the EU, you know, a preliminary reference) that asked the CJEU:

Is a general obligation to retain traffic data covering all persons, all means of electronic communication and all traffic data without any distinctions, limitations or exceptions for the purpose of combating crime (as described [below under points 1-6]) compatible with Article 15(1) of Directive 2002/58/EC, taking account of Articles 7, 8 and 15(1) of the Charter?

And the CJEU said no. End of discussion.

The ends don’t always justify the means and for clarity, the CJEU didn’t reject shit:

Speaight also says that the CJEU in Tele2 and Watson rejected AG Saugmandsgaard Øe’s advice that the French government found access to communications data useful in its investigations into the terrorist attacks of 2015. Such a position, however, invites several questions: under what circumstances was the data sought? Was it accessed as a consequence of the legal obligation to retain? Or was it already retained for business purposes? What were the results of the use of that data? Could the same results have been achieved using less intrusive means? Saying it is useful tells us nothing, as the ECtHR has plainly said that necessity (in a democratic society) is not as flexible as expressions such as ‘useful’ [48], and as the CJEU rightly noted, a measure in and of itself, even in the general interest, cannot justify general indiscriminate data retention [103]. This demonstrates that the CJEU didn’t reject anything; it didn’t even refer to the French government’s evidence. It just said that, however fundamental fighting serious crime and the measures employed may be, they cannot by themselves justify such a fundamental departure from the protection of human rights. Just because you can, doesn’t mean you should. A certain ECtHR said something similar in Klass v Germany, in that States ‘may not, in the name of the struggle against espionage and terrorism, adopt whatever measures they deem appropriate’ [49].

The CJEU doesn’t have to answer what it wasn’t asked:

Speaight then whines about the CJEU not addressing the issue of national security. Well, they weren’t asked about national security in Tele2 and Watson, were they? Like I said, even if the CJEU doesn’t have competence to rule on national-security-based data retention, Roman Zakharov is watching you from Strasbourg (he’s not actually in Strasbourg, I don’t think, but you dig).

What’s your problem with notification?

Speaight also bemoans the obligation to notify, saying this requirement could damage investigations and surveillance and went beyond what the claimants had asked. Well, again, the claimants weren’t asking the questions, ffs, and the CJEU made this point by referring to previous case law, notably Schrems [95]. The CJEU made very clear that notification should be done ‘as soon as that notification is no longer liable to jeopardise the investigations being undertaken by those authorities’ [121]. This is consistent with the ECtHR’s stance. Both courts are aware that notification can defeat the purpose of an investigation, and that sometimes, even after it has concluded, notification may still not be appropriate. But Speaight seems to omit this crucial detail.

Lawyers getting mad:

Speaight notes that criticism of Tele2 is not confined to Eurosceptics. Sure, but you don’t have to be a Europhile to defend it either. He also notes that it was roundly condemned by all the participants at a meeting of the Society of Conservative Lawyers. Well, no shit, Sherlock: the name kinda gave it away. He also notes that the former Independent Reviewer of Terrorism Legislation, David Anderson QC, said it was the worst judgment he knew of. Wait till Anderson reads the ECtHR’s case law on this matter, which, if anything, on a proper reading goes further than Tele2. Speaight also points out that Dominic Grieve QC MP was pissed, and that a distinguished member of the French Bar, Francois-Henri Briard, basically said we need more conservative judges to trample on fundamental rights. If a judgment that protects the fundamental rights of all EU citizens pisses off a few lawyers, so be it.

Conclusions:

I’ve spent way too much time on Speaight’s post, and the really sad thing is, I’ve enjoyed it. It’s hard to have a conversation about data retention when you first have to sift through a load of bollocks (and there was plenty of bollocks) just to make your point. And by the time you’ve cleared through all the falsities and misleading or exaggerated points, you run close to 4k words without actually saying what your own position is. So, my position for this blog post is: we should always shoot down rubbish when it shows its ugly face, or else it festers. Actually, the point is, I can believe that blanket indiscriminate data retention is unlawful.

Privacy-friendly judges?

Yesterday’s ruling by the Supreme Court of the United States, requiring the police to get a warrant before accessing a suspect’s mobile phone data, was remarkable in many ways. It demonstrated two things in particular that fit within a recent pattern around the world, one which may have quite a lot to do with the revelations of Edward Snowden. The first is that the judiciary is showing a willingness and strength to support privacy rights in the face of powerful forces; the second is an increasing understanding of the way that privacy, in these technologically dominated days, is not the simple thing that it was in the past.

The stand-out phrase in the ruling is remarkable in its clarity:

13-132 Riley v. California (06/25/2014)

“Modern cell phones are not just another technological convenience. With all they contain and all they may reveal, they hold for many Americans “the privacies of life,” Boyd, supra, at 630. The fact that technology now allows an individual to carry such information in his hand does not make the information any less worthy of the protection for which the Founders fought. Our answer to the question of what police must do before searching a cell phone seized incident to an arrest is accordingly simple— get a warrant.”

Privacy advocates around the world have been justifiably excited by this – not only is the judgment a clearly privacy-friendly one, but it effectively validates some of the critical ideas that many of us have been trying to get the authorities to understand for a long time. Most importantly, that the way that we communicate these days, the way that we use the internet and other forms of communication, plays a far more important part in our lives than it did in the past. The emphasis on the phrase ‘the privacies of life’ is a particularly good one. This isn’t just about communication – it’s about the whole of our lives.

The argument about cell-phones can be extended to all of our communications on the internet – and the implications are significant. As I’ve argued before, the debate needs to be reframed, to take into account the new ways that we use communications – privacy these days isn’t as easily dismissed as it was before. It’s not about tapping a few phone calls or noting the addresses on a few letters that you send – communications, and the internet in particular, pervade every aspect of our lives. The authorities in the UK still don’t seem to get this – but the Supreme Court of the US does seem to be getting there, and it’s not alone. The last few months have seen a series of quite remarkable cases, each of which demonstrates that judges are starting to get a real grip on the issues, and are willing to take on the powerful groups with a vested interest in downplaying the importance of privacy:

  • The ECJ ruling invalidating the Data Retention Directive on 8th April 2014
  • The ECJ Google Spain ruling on the ‘Right to be Forgotten’  on 13th May 2014
  • The Irish High Court referring Max Schrems’ case against Facebook to the ECJ, on 19th June 2014

These three cases all show similar patterns. They all involve individuals taking on very powerful groups – in the data retention case, pretty much all the security services in Europe; in the other two, the internet giants Google and Facebook respectively. In all three cases – as in the Supreme Court of the US yesterday – the rulings are fundamentally about the place that privacy holds, and the priority that privacy is given. The most controversial statement in the Google Spain case makes it explicit:

“As the data subject may, in the light of his fundamental rights under Articles 7 and 8 of the Charter, request that the information in question no longer be made available to the general public on account of its inclusion in such a list of results, those rights override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information upon a search relating to the data subject’s name” (emphasis added)

That has been, of course, highly controversial in relation to freedom of information and freedom of expression, but the first part, that privacy overrides the economic interest of the operator of the search engine, is far less so – and the fact that it is far less controversial does at least show that there is a movement in the privacy-friendly direction.

The invalidation of the Data Retention Directive may be even more significant – and again, it is based on the idea that privacy rights are more important than security advocates in particular have been trying to suggest. The authorities in the UK are still trying to avoid implementing this invalidation – they’re effectively trying to pretend that the ruling does not apply – but the ruling itself is direct and unequivocal.

As for the decision in the Irish High Court to refer the ‘Europe vs Facebook’ case to the ECJ, the significance of that has yet to be seen, but Facebook may very well be deeply concerned – because, as the two previous cases have shown, the ECJ has been bold and unfazed by the size and strength of those it might be challenging, and willing to make rulings that have dramatic consequences. The Irish High Court is the only one of the three courts to make explicit mention of the revelations of Edward Snowden, but I do not think that it is too great a leap to suggest that Snowden has had an influence on all the others. Not a direct one – but a raising of awareness, even at the judicial level, of the issues surrounding privacy, why they matter, and how many different things are at stake. A willingness to really examine the technology, to face up to the ways in which the ‘new’ world is different from the old – and a willingness to take on the big players.

I may well be being overly optimistic, and I don’t think too much should be read into this, but it could be critical. The law is only one small factor in the overall story – but it is a critical one, and if people are to begin to take back their privacy, they need to have the law at least partly on their side, and to have judges who are able and willing to enforce that law. With this latest ruling, and the ones that have come over the last few months, the signs are more positive than they have been for some time.

 

Addendum: As David Anderson has pointed out, the UK Supreme Court showed related tendencies in last week’s ruling over the disclosure of past criminal records in job applications, in R (T) v SSHD [2014] UKSC 35 on 18th June. See the UKSC Blog post here.

Data Retention: taking privacy seriously

The repercussions of yesterday’s landmark ruling of the Court of Justice of the European Union that the Data Retention Directive is invalid, and has been so since its inception, are likely to be complex and wide-ranging. Lawyers, academics, politicians and activists have been reading, writing, thinking and speculating about what might happen. With the directive declared invalid, what will happen to the various national implementations of that directive? In the UK, for example, we have the Data Retention (EC Directive) Regulations 2009. Will they need to be repealed? Will they need to be challenged – and if so how, and by whom? What will the various communications service providers – the ISPs, the telecommunications companies and so forth – do in reaction to the declaration? What will happen to other legislation that at least in part relies on retained data – the Regulation of Investigatory Powers Act 2000 (RIPA), for example? Will the police and intelligence services change what they do in any way, shape or form? Will the various governments attempt some kind of replacement for the Data Retention Directive? If so, what form will it take?

These are just some of the open questions – and the answers to them are only just starting to emerge. Some will be clear – but a great many will be very messy, and will take a lot of time, energy and heartache to sort out. The question that should immediately spring to mind is how all this mess, and the resultant waste of time, energy, expertise and heartache, could have been avoided. Actually, the answer is simple. It could have been avoided if privacy had been taken seriously to start with.

Underestimating privacy

For a long time, privacy hasn’t been taken nearly seriously enough. It hasn’t been taken seriously by the big operators on the internet – Facebook, Google, Apple, Microsoft, Yahoo! and so forth. Their policies and practices have treated privacy as a minor irritant, dealt with by obscure and unfathomable policies that people will at best scroll through and click OK at the bottom of without reading. Their products have treated privacy as an afterthought, almost an irrelevance – a few boxes to tick to satisfy the lawyers, that’s all. Privacy hasn’t been taken seriously by the intelligence agencies or the police forces either – just the province of a few geeks and agitators, the tinfoil hat brigade. It hasn’t been taken seriously by some of the open data people – the furore over care.data is just one example.

Privacy, however, does matter. It matters to ordinary people in their ordinary lives – not just to geeks and nerds, not just to ‘evil-doers’, not just to paranoid conspiracy theorists. And when people care enough about things, they can often find ways to make sure that those things are treated with respect. They fight. They act. They work together – and often, more often than might immediately seem apparent, they find a way to win. That was how the Communications Data Bill – the ‘Snoopers’ Charter’ – was defeated. That is why Edward Snowden’s revelations are still reverberating around the world. That’s why behavioural advertising has the bad name that it does – and why the Do Not Track initiative started, and why the EU brought in the ‘Cookies Directive’, with all its flaws.

All these conflicts – and the disaster that is the Data Retention Directive – could have been avoided or at least ameliorated if the people behind these various initiatives, laws, processes and products had taken privacy seriously to start with. This is one of the contentions of my new book, Internet Privacy Rights – people believe they have rights, and when those rights are infringed, they care about it, and increasingly they’re finding ways to act upon it. Governments, businesses and others need to start to understand this a bit better if they’re not going to get into more messes like the one that surrounds the Data Retention Directive. It’s not as though they haven’t had warnings. From the very start, privacy advocates have been complaining about the Directive – indeed, even before its enactment the Article 29 Working Party had been strongly critical of the whole concept of mass data retention. That criticism continued over the years, largely ignored by those in favour of mass surveillance. In 2011, Peter Hustinx, the European Data Protection Supervisor, called the Data Retention Directive “the most privacy-invasive instrument ever” – and that was before the revelations of Edward Snowden.

They should have listened. They should be listening now. Privacy needs to be taken seriously.

 

Paul Bernal, April 2014

Internet Privacy Rights – Rights to Protect Autonomy is available from Cambridge University Press here. Quote code ‘InternetPrivacyRights2014’ for a 20% discount from the CUP online shop.

Data retention: fighting for privacy!

This morning’s news that the Court of Justice of the European Union has declared the Data Retention Directive to be invalid has been greeted with joy amongst privacy advocates. It’s a big win for privacy – though far from a knockout blow to the supporters of mass surveillance – and one that should be taken very seriously indeed. As Glyn Moody put it in his excellent analysis:

“…this is a massively important ruling. It not only says that the EU’s Data Retention Directive is illegal, but that it always was from the moment it was passed. It criticises it on multiple grounds that will make it much harder to frame a replacement. That probably won’t be impossible, but it will be circumscribed in all sorts of good ways that will help to remove some of its worst elements.”

I’m not going to attempt a detailed legal analysis here – others far more expert than me have already begun the process. These are some of the best that I have seen so far:

Fiona de Londras: http://humanrights.ie/civil-liberties/cjeu-strikes-down-data-retention-directive/

Daithí Mac Síthigh: http://www.lexferenda.com/08042014/2285/

Simon McGarr: http://www.mcgarrsolicitors.ie/2014/04/08/digital-rights-ireland-ecj-judgement-on-data-retention/

The full impact of the ruling won’t become clear for some time, I suspect – and already some within the European Commission seem to be somewhat in panic mode, looking around for ways to underplay the ruling and limit the damage to their plans for more and more surveillance and data retention. Things are likely to remain in flux for some time – but there are some key things to take from this already.

The most important of these is that privacy is worth fighting for – and that when we fight for privacy, we can win, despite what may seem overwhelming odds and supremely powerful and well-resourced opponents. This particular fight exemplifies the problems faced – but also the way that they can be, and are being, overcome. It was brought by an alliance of digital rights activists – most notably Digital Rights Ireland – and has taken a huge amount of time and energy. It is, as reported in the Irish Times by the excellent Karlin Lillington, a ‘true David versus Goliath victory‘. It is a victory for the small people, the ordinary people – for all of us – and one from which we should take great heart.

Privacy often seems as though it is dead, or at the very least dying. Each revelation from Edward Snowden seems to demonstrate that every one of our movements is being watched at all times. Each new technological development seems to have privacy implications, and the developers of the technology often seem blissfully unaware of those implications until it’s almost too late. Each new government seems to embrace surveillance and to see it as a solution to all kinds of problems, from fighting terrorism to rooting out paedophiles, from combatting the ‘evil’ of music and movie piracy to protecting children from cyberbullies or online pornography, regardless of the evidence that it really doesn’t work very well in those terms, if at all. Seeing it in that way, however, misses the other side of the equation: more and more people are coming to understand that privacy matters, and are willing to take up the fight for privacy. Sometimes those fights are doomed to failure – but sometimes, as with today’s ruling over data retention, they can succeed. We need to keep fighting.

A progressive digital policy?

Yesterday I read a call for submissions to Labour Left’s ‘Red Book II’, by Dr Éoin Clarke – to develop a way forward for the Labour Party. It started me thinking about what would really constitute a progressive digital policy – because for me, any progressive party should be looking at how to deal with the digital world. It is becoming increasingly important – and government policies seem wholly unable to deal with, or even understand, the digital world.

It must be said from the outset that I am not a Labour Party member, but that I was for many years. I left in 1999, partly because I was leaving the country and partly because I was already becoming disillusioned as to the direction that Labour was taking – a stance that the invasion of Iraq only confirmed. I have not rejoined the party since, though I have been tempted at times. One of the reasons I have not been able to bring myself to join has been the incoherence and oppressiveness of Labour’s digital policies, which are not those of a progressive, positive and modern party, of one that represents the ordinary people, and in particular the young people, of Britain today.

That seems to me to be very wrong. Labour should be a progressive party. It should be one that both represents and learns from young people. It should be one that looks forward rather than back – and one that is brave enough to be radical. Right now it isn’t: and the last government presided over some appalling, oppressive and regressive digital policies.

I’ve written in the past about why governments always get digital policy wrong – but it’s much easier to snipe from the sidelines than it is to try to build real policy. Here, therefore, is my first attempt at putting together a coherent, progressive policy for digital government. It is of course very much a skeleton – just the barest of bones – and very much a first attempt. There is probably a lot missing, and it needs a lot more thought. It would take a lot of work to put flesh on the bones – but for me, the debate needs to be had.

The starting point for such a policy would be a series of nine commitments.

  1. A commitment to the right to access the net – and to supporting human rights online as well as in the real world. This is the easiest part of the policy, and one where Labour, at least theoretically, has not been bad. Gordon Brown spoke of such a right. However, supporting such a right has implications, implications which the Labour Party seems to have neither understood nor followed. The most important such implication is that it should not be possible to arbitrarily prevent people accessing the net – and that the barrier for removal of that right should be very high. Any policy which relies on the idea of blocking access should be vigorously resisted – the Digital Economy Act is the most obvious example. Cutting people’s access on what is essentially suspicion is wholly inconsistent with a commitment to the right to access the internet.
  2. A commitment against internet surveillance – internet surveillance is very much in the news right now, with the Coalition pushing the Communications Data Bill, accurately labelled the ‘snoopers’ charter’, about which I have written a number of times. Labour should very much oppose this kind of surveillance, but doesn’t. Indeed, rather the opposite – the current bill is in many ways a successor to Labour’s ‘Interception Modernisation Programme’. Surveillance of this kind goes very much against what should be Labour values: it can be and has been used to monitor those organising protests and similar, going directly against the kinds of civil rights that should be central to the programme of any progressive, left wing party: the rights to assembly and association. Labour should not only say, right now, that it opposes the Snoopers’ Charter, but that it would not seek to bring in other similar regulation. Indeed, it should go further, and suggest that it would work within the European Union to repeal the Data Retention Directive (which was pushed through by Tony Blair) and to reform RIPA – restricting the powers that it grants rather than increasing them.
  3. A commitment to privacy and data protection – rather than just paying lip service to them. I have written many times before about the problems with the Information Commissioner’s Office. First of all, it needs focus: it (or any replacement body) should be primarily in charge of protecting privacy. Secondly, it needs more real teeth – but also more willingness to use them, and against more appropriate targets. There has been far too little enforcement on corporate bodies, and too much on public authorities. If companies are to treat individuals’ private information better, they need the incentive to do so – at the moment, even if they are detected, the enforcement tends to be feeble: a slap on the wrist at best. The current law punishes each group inappropriately: public authorities with big fines, which ultimately punish the public; corporates barely at all. Financial penalties would provide an incentive for businesses, while more direct personal punishments for those in charge of public authorities would work better as an incentive for them, as well as not punishing the public!
  4. A commitment to oppose the excessive enforcement of copyright – and instead to encourage the content industry to work for more positive ways forward. This would include the repeal of the Digital Economy Act, one of the worst pieces of legislation in the digital field, and one about which the Labour Party should be thoroughly ashamed. Labour needs to think more radically and positively – and understand that the old ways don’t work, and merely manage to alienate (and even criminalise) a generation of young people. Labour has a real opportunity to do something very important here – and to understand the tide that is sweeping across the world, at least in the minds of the people. In the US, SOPA and PIPA have been roundly beaten. ACTA suffered a humiliating defeat in the European Parliament and is probably effectively dead. In France, the new government is looking to abolish HADOPI – the body that enforces their equivalent of the Digital Economy Act. A truly progressive, radical party would not resist this movement – it would seek to lead it. Let the creative minds of the creative industries be put to finding a creative, constructive and positive way forward. Carrots rather than just big sticks.
  5. A commitment to free speech on the internet. This has a number of strands. First of all, to develop positive and modern rules governing defamation on the internet. Reform of defamation is a big programme – and I am not convinced that the current reform package does what it really should, focussing too much on reforming what happens in the ‘old media’ (where I suspect there is less wrong than some might suggest) without dealing properly with the ‘new media’ (which has been dealt with fairly crudely in the current reforms). There needs to be clarity about protection for intermediaries, for example.
  6. A commitment against censorship – this is the second part of the free speech strand. In the current climate, there are regular calls to deal with such things as pornography and ‘trolling’ on the internet – but most of what is actually suggested amounts to little more than censorship. We need to be very careful about this indeed – the risks of censorship are highly significant. Rather than strengthening our powers to censor and control, via web-blocking and so forth, we need to make them more transparent and accountable. A key starting point would be the reform of the Internet Watch Foundation, which plays a key role in dealing with child abuse images and related websites, but falls down badly in terms of transparency and accountability. It needs much more transparency about how it works – a proper appeals procedure, better governance structures and so forth. The Labour Party must not be seduced by the populism of anti-pornography campaigners into believing in web-blocking as a simple, positive tool. There are huge downsides to that kind of approach, downsides that often greatly outweigh the benefits.
  7. A radical new approach to social media – the third strand of the free speech agenda. We need to rethink the laws and their enforcement that have led to tragic absurdities like the Twitter Joke Trial, and the imprisonment of people for Facebook posts about rioting. The use of social media is now a fundamental part of many people’s lives – pretty much all young people’s lives – and at present it often looks as though politicians and the courts have barely a clue how it works. Labour should be taking the lead on this – and it isn’t. The touch needs to be lighter, more intelligent and more sensitive – and led by people who understand and use social media. There are plenty of them about – why aren’t they listened to?
  8. A commitment to transparency – including a full commitment to eGovernment, continuing the good aspects of what the current government is doing in relation to Open Data. Transparency, however, should mean much more – starting with full and unequivocal support for Freedom of Information. There has been too much said over recent months to denigrate the idea of freedom of information, and to suggest that it has ‘gone too far’. The opposite is much more likely to be the case: and a new approach needs to be formulated. If it takes too much time, money and effort to comply with FOI requests, that indicates that the information hasn’t been properly organised or classified, not that the requests should be curbed. The positive, progressive approach would be to start to build systems that make it easier to provide the information, not complain about the requests.
  9. A commitment to talk to the experts – and a willingness to really engage with and listen to them. We have some of the best – from people like Tim Berners-Lee to Professor Ross Anderson at the Cambridge University Computer Lab, Andrew Murray at the LSE, the Oxford Internet Institute and various other university departments, civil society groups and so forth – and yet the government consistently fails to listen to what they say, and prefers instead to listen to industry lobby groups and Whitehall insiders. That is foolish, short-sighted and inappropriate – as well as being supremely ineffective. It is one of the reasons that policies formulated are not just misguided in their aims but also generally fail to achieve those aims. There is real expertise out there – it should be used!

Much more is needed of course – this just sets out a direction. I’ve probably missed out some crucial aspects. Some of this may seem more about reversing and cancelling existing policies than formulating new ones – but that is both natural and appropriate, as the internet, much more than most fields, generally needs a light touch. The internet is not ‘ungovernable’, but most attempts to govern it have been clumsy and counter-productive.

A forward-looking, radical and positive digital policy would mark the Labour Party out as no longer being in the hands of the lobbyists, but instead being willing to fight for the rights of real, ordinary people. It would mark out the Labour Party as being a party that understands young people better – and supports them rather than demonises and criminalises them. Of course I do not expect the Labour Party to take this kind of agenda on. It would take a level of political courage that has not been demonstrated often by any political party, let alone the current Labour Party, to admit that they have got things so wrong in the past. Admission of past faults is something that seems close to political blasphemy these days – for me, that is one of the biggest problems in politics.

As I said at the start, this is very much a first stab at an approach for the future – I would welcome comments, thoughts and even criticism(!). We need debate on this – and not just for the Labour Party. Currently, though my history has been with the Labour Party, I find myself without anyone that I think can represent me. If any party were to take on an agenda for the digital world that would make more sense, I would be ready to listen.

Snoopers’ Charter Consultation

The draft Communications Data Bill – the ‘Snoopers’ Charter’ – is currently up for consultation before a specially put together Joint Parliamentary Committee. The consultation period has been relatively short – it ends on 23rd August – and at a time when many people are away on holiday and while many others have been enjoying (and being somewhat distracted by) the Olympic Games.

Even so, it’s very important – not just because what is being proposed is potentially highly damaging, but because it’s a field in which the government has been, in my opinion, very poorly advised and significantly misled. There is a great deal of expertise around – particularly on the internet – but in general, as in so many areas of policy, the government seems to be very unwilling to listen to the right people. I’ve blogged on the general area a number of times before – most directly on ‘Why does the government always get it wrong?’.

All this means that it would be great if people made submissions – for details see here.

Here is the main part of my submission, reformatted for this blog.

————————————————-

Submission to the Joint Committee on the draft Communications Data Bill

The draft Communications Data Bill raises significant issues – issues connected with human rights, with privacy, with security and with the nature of the society in which we wish to live. These issues are raised not by the detail of the bill but by its fundamental approach. Addressing them would, in my opinion, require such a significant re-drafting of the bill that the better approach would be to withdraw the bill in its entirety and rethink the way that security and surveillance on the Internet is addressed.

As noted, there are many issues brought up by the draft bill: this submission does not intend to deal with all of them. It focusses primarily on three key issues:

1) The nature of internet surveillance. In particular, that internet surveillance means much more than ‘communications’, partly because of the nature of the technology involved and partly because of the many different ways in which the internet is used. Internet surveillance means surveilling not just correspondence but social life, personal life, finances, health and much more. Gathering ‘basic’ data can make the most intimate, personal and private information available and vulnerable.

2) The vulnerability of both data and systems. It is a fallacy to assume that data or systems can ever be made truly ‘secure’. The evidence of the past few years suggests precisely the opposite: those who should be most able and trusted with the security of data have proved vulnerable. The approach of the draft Communications Data Bill – essentially a ‘gather all then look later’ approach – is one that not only fails to take proper account of that vulnerability, but actually sets up new and more significant vulnerabilities, effectively creating targets for hackers and others who might wish to take advantage of or misuse data.

3) The risks of ‘function creep’. The kind of systems and approach envisaged by the draft Bill makes function creep a real and significant risk. Data, once gathered, is a ‘resource’ that is almost inevitably tempting to use for purposes other than those for which its gathering was envisaged. These risks seem to be insufficiently considered both in the overall conception and in the detail of the Bill.

I am making this submission in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research in internet law and specialise in internet privacy from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy, and set out a possible rights-based approach to internet privacy. The Draft Communications Data Bill therefore lies precisely within my academic field. I would be happy to provide more detailed evidence, either written or oral, if that would be of assistance to the committee.

1 The nature of internet surveillance

As set out in Part 1 of the draft bill, the approach adopted is that all communications data should be captured and made available to the police and other relevant public authorities. The regulatory regime set out in Part 2 concerns accessing the data, not gathering it: gathering is intended to be automatic and universal. Communications data is defined in Part 3 Clause 28 very broadly, via the categories of ‘traffic data’, ‘use data’ and ‘subscriber data’, each of which is defined in such a way as to attempt to ensure that all internet and other communications activity is covered, with the sole exception of the ‘content’ of a communication.

The all-encompassing nature of these definitions is necessary if the broad aims of the bill are to be supported: if the definitions do not cover any particular form of internet activity (whether existent or under development), then the assumption would be that those whom the bill intends to ‘catch’ would use that form. That the ‘content’ of communications is not captured (though it is important in relation to more conventional forms of communication such as telephone calls, letters and even emails) is of far less significance in relation to internet activity, as shall be set out below.

1.1 ‘Communications Data’ and the separation of ‘content’

As noted above, the definition of ‘communications data’ is deliberately broad in the bill. On the surface, it might appear that ‘communications data’ relates primarily to ‘correspondence’ – bringing in the ECHR Article 8 right to respect for privacy of correspondence – and indeed communications like telephone calls, emails, text messages, tweets and so forth do fit into this category – but internet browsing data has a much broader impact. A person’s browsing can reveal far more intimate, important and personal information about them than might be immediately obvious. It would tell which websites are visited, which links are followed, which files are downloaded – and also when, and how long sites are perused and so forth. This kind of data can reveal habits, preferences and tastes, and can uncover, to a reasonable probability, religious persuasion, sexual preferences, political leanings etc., even without what might reasonably be called the ‘content’ of any communications being examined – though what constitutes ‘content’ is contentious.

Considering a Google search, for example: if RIPA’s requirements are to be followed, the search term would be considered ‘content’ – but would links followed as a result of a search count as content or communications data? Who is the ‘recipient’ of a clicked link? If the data is to be of any use, it would need to reveal something of the nature of the site visited – and that would make it possible to ‘reverse engineer’ back to something close enough to the search term to recover, in effect, the ‘content’. The content of a visited site may be determined just by following a link – without any further ‘invasion’ of privacy. When slightly more complex forms of communication on the internet are considered – e.g. messaging or chatting on social networking sites – the separation between content and communications data becomes even less clear. In practice, as systems have developed, the separation is for many intents and purposes a false one. The question of whether or not ‘content’ data is gathered is of far less significance: focussing on it is an old-fashioned argument, based on a world of pen and paper that is to a great extent one of the past.
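To make this concrete, here is a minimal sketch – using entirely hypothetical URLs – of how the links followed after a search, nominally mere ‘communications data’, give the ‘content’ away:

```python
from urllib.parse import urlparse

# A minimal sketch with hypothetical URLs: the search term itself may be
# stripped out as 'content', but the links followed afterwards remain in
# the communications data -- and their hosts and paths reveal what was
# searched for and read.
visited_after_search = [
    "https://en.wikipedia.org/wiki/Multiple_sclerosis",
    "https://www.nhs.uk/conditions/multiple-sclerosis/symptoms/",
    "https://patient-forum.example.org/ms/newly-diagnosed",
]

for url in visited_after_search:
    parts = urlparse(url)
    # No page 'content' is inspected: the host and path alone are enough
    # to infer the topic, and something very close to the original search
    # term ('multiple sclerosis') falls straight out.
    print(f"{parts.netloc:30} {parts.path}")
```

Nothing in that sketch touches the ‘content’ of any page, yet the subject of the search – a sensitive medical condition – is plain to see.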

What is more, analytical methods through which more personal and private data can be derived from browsing habits have already been developed, and are continuing to be refined and extended, most directly by those involved in the behavioural advertising industry. Significant amounts of money and effort are being spent in this direction by those in the internet industry: it is a key part of the business models of Google, Facebook and others. It is already advanced but we can expect the profiling and predictive capabilities to develop further.
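As a crude illustration of the kind of inference involved – real behavioural profiling is vastly more sophisticated – a toy sketch might look like the following, where the domain-to-category mapping is entirely invented:

```python
from collections import Counter

# A toy sketch of attribute inference from bare browsing records. The
# domain-to-category mapping below is invented purely for illustration.
DOMAIN_SIGNALS = {
    "prayer-times.example.com":  "religion",
    "halal-recipes.example.org": "religion",
    "union-news.example.net":    "political leaning",
    "ms-support.example.org":    "health condition",
}

def profile(visited_domains):
    """Tally sensitive-category signals from a list of visited domains."""
    signals = Counter(
        DOMAIN_SIGNALS[d] for d in visited_domains if d in DOMAIN_SIGNALS
    )
    # Repeated visits strengthen each inference -- no 'content' is examined.
    return signals.most_common()

history = [
    "prayer-times.example.com",
    "prayer-times.example.com",
    "halal-recipes.example.org",
    "bbc.co.uk",
]
print(profile(history))   # [('religion', 3)]
```

Even this naive tallying shows how repeated visits turn ‘mere’ connection records into claims about a person’s religion or health.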

What this means is that by gathering, automatically and for all people, ‘communications data’, we would be gathering the most personal and intimate information about everyone. When considering this Bill, that must be clearly understood. This is not about gathering a small amount of technical data that might help in combating terrorism or other crime – it is about universal surveillance and profiling.

1.2 The broad impact of internet surveillance

The kind of profiling discussed above has a very broad effect, one with a huge impact on much more than just an individual’s correspondence. It is possible to determine (to a reasonable probability) individuals’ religions and philosophies, their languages used and even their ethnic origins, and then use that information to monitor them both online and offline. When communications (and in particular the internet) are used to organise meetings, to communicate as groups, to assemble both offline and online, this can become significant. Meetings can be monitored or even prevented from occurring, groups can be targeted and so forth. Oppressive regimes throughout the world have recognised and indeed used this ability – recently, for example, the former regime in Tunisia hacked into both Facebook and Twitter to attempt to monitor the activities of potential rebels.

It is of course this kind of profiling that can make internet monitoring potentially useful in counterterrorism – but making it universal rather than targeted will impact directly on the rights of the innocent, rights that, according to the principles of human rights, deserve protection. In the terms set out in the European Convention on Human Rights, there is a potential impact on Article 8 (right to private and family life, home and correspondence), Article 9 (freedom of thought, conscience and religion), Article 10 (freedom of expression) and Article 11 (freedom of assembly and association). Internet surveillance can enable discrimination – contrary to ECHR Article 14 (prohibition of discrimination) – and even potentially automate it: a website could automatically reject visitors whose profiles don’t match key factors, or change the services available or the prices charged based on those profiles.
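How such automation might work is easy to sketch. The following is purely hypothetical – the attribute names, thresholds and behaviour are invented – but it shows how an inferred profile could silently drive rejection or differential pricing:

```python
# A purely hypothetical sketch of automated discrimination: a site varying
# its response according to an inferred profile. No real service is implied.
def serve_visitor(profile: dict) -> str:
    # 'profile' is the kind of record derivable from communications data:
    # inferred attributes, each only 'reasonably probable', never verified.
    if profile.get("political_leaning") == "activist":
        return "403 Forbidden"           # silently reject 'undesirable' visitors
    if profile.get("inferred_income") == "high":
        return "200 OK - price: £120"    # quote a higher price to richer profiles
    return "200 OK - price: £90"

print(serve_visitor({"political_leaning": "activist"}))   # 403 Forbidden
print(serve_visitor({"inferred_income": "high"}))         # 200 OK - price: £120
print(serve_visitor({"inferred_income": "low"}))          # 200 OK - price: £90
```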

2 The vulnerability of data

The essential approach taken by the bill is to gather all data, then to put ‘controls’ over access to that data. That approach is fundamentally flawed – and appears to be based upon false assumptions. Most importantly, it is a fallacy to assume that data can ever be truly securely held. There are many ways in which data can be vulnerable, both from a theoretical perspective and in practice. Technological weaknesses – vulnerability to ‘hackers’ etc – may be the most ‘newsworthy’ at a time when hacker groups like ‘Anonymous’ have been gathering publicity, but they are far from the most significant. Human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to ‘monetise’ data) may matter more – and the ways in which all these vulnerabilities can combine make the risk greater still.

In practice, those groups, companies and individuals that might be most expected to be able to look after personal data have been subject to significant data losses. The HMRC loss of child benefit data discs, the MOD losses of armed forces personnel and pension data and the numerous and seemingly regular data losses in the NHS highlight problems within those parts of the public sector which hold the most sensitive personal data. Swiss banks’ losses of account data to hacks and data theft demonstrate that even those with the highest reputation and need for secrecy – as well as the greatest financial resources – are vulnerable to human intervention. The high-profile hacks of Sony’s online gaming systems show that even those with access to the highest level of technological expertise can have their security breached. These are just a few examples, and whilst in each case different issues lay behind the breach, the underlying issue is the same: where data exists, it is vulnerable.

Designing and building systems to implement legislation like the Bill exacerbates the problem. The bill is not prescriptive as to the methods that would be used to gather and store the data, but whatever method is used would present a ‘target’ for potential hackers and others: where there are data stores, they can be hacked; where there are ‘black boxes’ to feed real-time data to the authorities, those black boxes can be compromised and the feeds intercepted. Concentrating data in this way increases vulnerability – and creating what are colloquially known as ‘back doors’ for trusted public authorities to use can also allow those who are not trusted – of whatever kind – to find a route of access.

Once others have access to data – or to data monitoring – the rights of those being monitored are even further compromised, particularly given the nature of the internet. Information, once released, can and does spread without control.

3 Function Creep

Perhaps even more important than the vulnerabilities discussed above is the risk of ‘function creep’ – that when a system is built for one purpose, that purpose will shift and grow, beyond the original intention of the designers and commissioners of the system. It is a familiar pattern, particularly in relation to legislation and technology intended to deal with serious crime, terrorism and so forth. CCTV cameras that are built to prevent crime are then used to deal with dog fouling or to check whether children live in the catchment area for a particular school. Legislation designed to counter terrorism has been used to deal with people such as anti-arms trade protestors – and even to stop train-spotters photographing trains.

In relation to the Communications Data Bill this is a very significant risk – if a universal surveillance infrastructure is put into place, the ways that it could be inappropriately used are vast and multi-faceted. What is built to deal with terrorism, child pornography and organised crime might creep towards less serious crimes, then anti-social behaviour, then the organisation of protests and so forth. Further to that, there are many commercial lobbies that might push for access to this surveillance data – those attempting to combat breaches of copyright, for example, would like to monitor for suspected examples of ‘piracy’. In each individual case, the use might seem reasonable – but the function of the original surveillance, the justification for its initial imposition, and the balance between benefits and risks, can be lost. An invasion of privacy deemed proportionate for the prevention of terrorism might well be wholly disproportionate for the prevention of copyright infringement, for example.

The risks associated with function creep in relation to the surveillance systems envisaged in the Bill have a number of different dimensions. There can be creep in terms of the types of data gathered: as noted above, the split between ‘communications data’ and ‘content’ is already contentious, and as time and usage develop it is likely to become more so, with the category of what counts as ‘content’ likely to shrink. There can be creep in terms of the uses to which the data can be put: from the prevention of terrorism downwards. There can be creep in terms of the authorities able to access and use the data: from those engaged in the prevention of the most serious crime to local authorities and others. All these dimensions represent important risks: all have happened in the recent past, both to legislation (e.g. RIPA) and to systems (e.g. the London Congestion Charge CCTV system).

Prevention of function creep through legislation is inherently difficult. Though it is important to be appropriately prescriptive and definitive in terms of the functions of the legislation (and any systems put in place to bring the legislation into action), function creep can and does occur through the development of different interpretations of legislation, amendments to legislation and so forth. The only real way to guard against function creep is not to build the systems in the first place: a key reason to reject this proposed legislation in its entirety rather than to look for ways to refine or restrict it.

4 Conclusions

The premise of the Communications Data Bill is fundamentally flawed. By its very design, innocent people’s data will be gathered (and hence become vulnerable) and their activities will be monitored. Universal data gathering or monitoring is almost certain to be disproportionate at best, highly counterproductive at worst.

This Bill is not just a modernisation of existing powers, nor a way for the police to ‘catch up’. It is something on a wholly different scale. We as citizens are being asked to put a huge trust in the authorities not to misuse the kind of powers made possible by this Bill. Trust is of course important – but what characterises a liberal democracy is not trust of authorities but their accountability, the existence of checks and balances, and the limitation of their powers to interfere with individuals’ lives. This bill, as currently envisaged, does not provide that accountability and does not sufficiently limit those powers: precisely the reverse.

Even without considering the issues discussed above, there is a potentially even bigger flaw with the bill: it appears very unlikely to be effective. The people that it might wish to catch are the least likely to be caught – those expert with the technology will be able to find ways around the surveillance, or ways to ‘piggy back’ on other people’s connections and draw more innocent people into the net. As David Davis MP put it, only the incompetent and the innocent will get caught.

The entire project needs a thorough rethink. Warrants (or similar processes) should be put in place before the gathering of the data or the monitoring of the activity, not before the accessing of data that has already been gathered, or the ‘viewing’ of a feed that is already in place. A more intelligent, targeted rather than universal approach should be developed. No evidence has been made public to support the suggestion that a universal approach like this would be effective – it should not be sufficient to just suggest that it is ‘needed’ without that evidence, nor to provide ‘private’ evidence that cannot at least qualitatively be revealed to the public.

That brings a bigger question into the spotlight, one that the Committee might think is the most important of all: what kind of a society do we want to build – one where everyone’s most intimate activities are monitored at all times just in case they might be doing something wrong? That, ultimately, is what the draft Communications Data Bill would build. The proposals run counter to some of the basic principles of a liberal, democratic society – a society where there should be a presumption of innocence rather than of suspicion, and where privacy is the norm rather than the exception. Is that what the Committee would really like to support?

Dr Paul Bernal

Lecturer in Information Technology, Intellectual Property and Media Law, UEA Law School

The myth of technological ‘solutions’

A story on the BBC webpages caught my eye this morning: ‘the parcel conundrum’. It described a scenario that must be familiar to almost everyone in the UK: you order something on the internet and then the delivery people mess up the delivery and all you end up with is a little note on the floor saying they tried to deliver it. Frustration, anger and disappointment ensue…

…so what is the ‘solution’? Well, if you read the article, we’re going to solve the problems with technology! The new, whizz-bang solutions are going to not just track the parcels, but track us, so they can find us and deliver the parcel direct to us, not to our unoccupied homes. They’re going to use information from social networking sites to discover where we are, and when they find us they’re going to use facial recognition software to ensure they deliver to the right person. Hurrah! No more problems! All our deliveries will be made on time, with no problems at all. All we have to do is let delivery companies know exactly where we are at all times, and give them our facial biometrics so they can be certain we are who we are.

Errr… could privacy be an issue here?

I was glad to see that the BBC did at least mention privacy in passing in their piece – even if they did gloss over it pretty quickly – but there are just one or two privacy problems here. I’ve blogged before about the issues relating to geo-location (here), but remember that delivery companies often give 12-hour ‘windows’ for a delivery – so you’d have to let yourself be tracked for a long time to get the delivery. And your facial biometrics – will they really hold the information securely? Delete it when you’re found? Delivery companies aren’t likely to be the most secure or even skilled of operators (!) and their employees won’t always be exactly au fait with data protection etc – let alone have been CRB checked. It would be bad enough to allow the police or other authorities to track us – but effectively unregulated businesses to do so? It doesn’t seem very sensible, to say the least…

…and of course under the terms of the Communications Data Bill (of which more below) putting all of this on the Internet will automatically mean it is gathered and retained for the use of the authorities, creating another dimension of vulnerability…

Technological solutions…

There is, however, a deeper problem here: a tendency to believe that a technological solution is available to a non-technological problem. In this case, the problem is that some delivery companies are just not very good – it may be commercial pressures, it may be bad management policies, it may be that they don’t train their employees well enough, it may be that they simply haven’t thought through the problems from the perspective of those of us waiting for deliveries. They can, however, ‘solve’ these problems just by doing their jobs better. A good delivery person is creative and intelligent: they know their ‘patch’ and find solutions when people aren’t in. They are organised enough to be able to predict their delivery times better. And so on. All the tracking technology and facial recognition software in the world won’t make up for poor organisation and incompetent management…

…and yet it’s far too easy just to say ‘here’s some great technology, all your problems will be solved’.

We do it again and again. We think the best new digital cameras will turn us into fantastic photographers without us even reading the manuals or learning to use our cameras (thanks to the excellent @legaltwo for the hint on that one!). We think ‘porn filters’ will sort out our parenting issues. We think web-blocking of the Pirate Bay will stop people downloading music and movies illegally. We think technology provides a shortcut without dealing with the underlying issue – and without thinking of the side effects or negative consequences. It’s not true. Technology very, very rarely ‘solves’ these kinds of problems – and the suggestion that it does is the worst kind of myth.

The Snoopers’ Charter

The Draft Communications Data Bill – the Snoopers’ Charter – perpetuates this myth in the worst kind of way. ‘If only we can track everyone’s communications data, we’ll be able to stop terrorism, catch all the paedos, root out organised crime’… It’s just not true – and the consequences to everyone’s privacy, just a little side issue to those pushing the bill, would be huge, potentially catastrophic. I’ve written about it many times before – see my submission to the Joint Committee on Human Rights for the latest example – and will probably end up writing a lot more.

The big point, though, is that the very idea of the bill is based on a myth – and that myth needs to be exposed.

That’s not to say, of course, that technology can’t help – as someone who loves technology, enjoys gadgets and spends a huge amount of his time online, I’d be silly to suggest that. Technology, however, is an adjunct to, not a substitute for, intelligent ‘real world’ solutions, and should be clever, targeted and appropriate. It should be a rapier rather than a bludgeon.

The snoopers’ charter

I have just made a ‘short submission’ to the Joint Committee on Human Rights (JCHR) regarding the Draft Communications Data Bill – I’ve reproduced the contents below. I have reformatted it in order to make it more readable here, but other than the formatting this is what I sent to the committee.

The JCHR will not be the only committee looking at the bill – at the very least there will be a special committee for the bill itself. The JCHR is important, however, because, as I set out in my short submission, internet surveillance should be viewed very much as a human rights issue. In the submission I refer to a number of the Articles of the European Convention on Human Rights (available online here). For reference, the Articles I refer to are the following: Article 8 (Right to Respect for Private and Family Life), Article 9 (Freedom of Thought, Conscience and Religion), Article 10 (Freedom of Expression), Article 11 (Freedom of Assembly and Association) and Article 14 (Prohibition of Discrimination).

Here is the submission in full:

——————————————————–

Submission to the Joint Committee on Human Rights

Re: Draft Communications Data Bill

The Draft Communications Data Bill raises significant human rights issues – most directly in relation to Article 8 of the Convention, but also potentially in relation to Articles 9, 10, 11 and 14. These issues are raised not by the detail of the bill but by its fundamental approach. Addressing them would, in my opinion, require such a significant re-drafting of the bill that the better approach would be to withdraw the bill in its entirety and rethink the way that security and surveillance on the Internet is addressed.

I am making this submission in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research in internet law and specialise in internet privacy from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy, and set out a possible rights-based approach to internet privacy. The Draft Communications Data Bill therefore lies precisely within my academic field. I would be happy to provide more detailed evidence, either written or oral, if that would be of assistance to the committee.

1 The fundamental approach of the bill

As set out in Part 1 of the draft bill, the approach adopted is that all communications data should be captured and made available to the police and other relevant public authorities. The regulatory regime set out in Part 2 concerns accessing the data, not gathering it: gathering is intended to be automatic and universal. Communications data is defined in Part 3 Clause 28 very broadly, via the categories of ‘traffic data’, ‘use data’ and ‘subscriber data’, each of which is defined in such a way as to attempt to ensure that all internet and other communications activity is covered, with the sole exception of the ‘content’ of a communication.

The all-encompassing nature of these definitions is necessary if the broad aims of the bill are to be supported: if the definitions did not cover any particular form of internet activity (whether existent or under development), the assumption would be that those whom the bill intends to ‘catch’ would simply use that form. That the ‘content’ of communications is not captured (though it is important in relation to more conventional forms of communication such as telephone calls, letters and even emails) is of far less significance in relation to internet activity, as shall be set out below.

2 The nature of ‘Communications Data’

As noted above, the definition of ‘communications data’ is deliberately broad in the bill. This submission will focus on one particular form of data – internet browsing data – to demonstrate some of the crucial issues that arise. Article 8 of the Convention states that:

‘Everyone has the right to respect for his private and family life, his home and his correspondence.’

On the surface, it might appear that ‘communications data’ relates to the ‘correspondence’ part of this clause – and indeed communications like telephone calls, emails, text messages, tweets and so forth do fit into this category – but internet browsing data has a much broader impact upon the ‘private life’ part of the clause. A person’s browsing can reveal far more intimate, important and personal information about them than might be immediately obvious. It would tell which websites are visited, which search terms are used, which links are followed, which files are downloaded – and also when, and how long sites are perused, and so forth. This kind of data can reveal habits, preferences and tastes – and can uncover, to a reasonable probability, religious persuasion, sexual preferences, political leanings etc.

What is more, analytical methods through which more personal and private data can be derived from browsing habits have already been developed, and are continuing to be refined and extended, most directly by those involved in the behavioural advertising industry. Significant amounts of money and effort are being spent in this direction by those in the internet industry – it is a key part of the business models of Google, Facebook and others. It is already advanced – but we can expect the profiling and predictive capabilities to develop further.

What this means is that by gathering, automatically and for all people, ‘communications data’, we would be gathering the most personal and intimate information about everyone. When considering this bill, that must be clearly understood. This is not about gathering a small amount of technical data that might help in combating terrorism or other crime – it is about universal surveillance and ultimately profiling. That ‘content’ data is not gathered is of far less significance – and focussing on it is an old-fashioned argument, based on a world of pen and paper that is to a great extent one of the past.

3 Articles 9, 10, 11 and 14

The kind of profiling discussed above is what brings Articles 9, 10, 11 and 14 into play: it is possible to determine (to a reasonable probability) individuals’ religions and philosophies, their languages used and even their ethnic origins, and then use that information to monitor them both online and offline. When communications (and in particular the internet) are used to organise meetings, to communicate as groups, to assemble both offline and online, this can become significant. Meetings can be monitored or even prevented from occurring, groups can be targeted and so forth. It can enable discrimination – and even potentially automate it. Oppressive regimes throughout the world have recognised and indeed used this ability – recently, for example, the former regime in Tunisia hacked into both Facebook and Twitter to attempt to monitor the activities of potential rebels.

It is of course this kind of profiling that can make internet monitoring potentially useful in counterterrorism – but making it universal rather than targeted will impact directly on the rights of the innocent, rights that, according to Articles 8, 9, 10, 11 and 14, should be respected.

4 The vulnerability of data

The approach taken by the bill is to gather all data, then to put ‘controls’ over access to that data. That approach is flawed for a number of reasons.

Firstly, it is a fallacy to assume that data can ever be truly securely held. There are many ways in which data can be vulnerable, both from a theoretical perspective and in practice. Technological weaknesses – vulnerability to ‘hackers’ etc – may be the most ‘newsworthy’ at a time when hacker groups like ‘Anonymous’ have been gathering publicity, but they are far from the most significant. Human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to ‘monetise’ data) may matter more – and the ways in which all these vulnerabilities can combine make the risk greater still.

In practice, those groups, companies and individuals that might be most expected to be able to look after personal data have been subject to significant data losses. The HMRC loss of child benefit data discs, the MOD losses of armed forces personnel and pension data and the numerous and seemingly regular data losses in the NHS highlight problems within those parts of the public sector which hold the most sensitive personal data. Swiss banks’ losses of account data to hacks and data theft demonstrate that even those with the highest reputation and need for secrecy – as well as the greatest financial resources – are vulnerable to human intervention. The high-profile hacks of Sony’s online gaming systems show that even those with access to the highest level of technological expertise can have their security breached. These are just a few examples, and whilst in each case different issues lay behind the breach, the underlying issue is the same: where data exists, it is vulnerable.

What is more, designing and building systems to implement legislation like the Communications Data Bill exacerbates the problem. The bill is not prescriptive as to the methods that would be used to gather and store the data, but whatever method is used would present a ‘target’ for potential hackers and others: where there are data stores, they can be hacked; where there are ‘black boxes’ to feed real-time data to the authorities, those black boxes can be compromised and the feeds intercepted. Concentrating data in this way increases vulnerability – and creating what are colloquially known as ‘back doors’ for trusted public authorities to use can also allow those who are not trusted – of whatever kind – to find a route of access.

Once others have access to data – or to data monitoring – the rights of those being monitored are even further compromised, particularly given the nature of the internet. Information, once released, can spread without control.

5 Function Creep

As important as the vulnerabilities discussed above is the risk of ‘function creep’ – that when a system is built for one purpose, that purpose will shift and grow, beyond the original intention of the designers and commissioners of the system. It is a familiar pattern, particularly in relation to legislation and technology intended to deal with serious crime, terrorism and so forth. CCTV cameras that are built to prevent crime are then used to deal with dog fouling or to check whether children live in the catchment area for a particular school. Legislation designed to counter terrorism has been used to deal with people such as anti-arms trade protestors – and even to stop train-spotters photographing trains.

In relation to the Communications Data Bill this is a very significant risk – if a universal surveillance infrastructure is put into place, the ways that it could be inappropriately used are vast and multi-faceted. What is built to deal with terrorism, child pornography and organised crime might creep towards less serious crimes, then anti-social behaviour, then the organisation of protests and so forth. Further to that, there are many commercial lobbies that might push for access to this surveillance data – those attempting to combat breaches of copyright, for example, would like to monitor for suspected examples of ‘piracy’. In each individual case, the use might seem reasonable – but the function of the original surveillance, and the justification for its initial imposition, can be lost.

Prevention of function creep through legislation is inherently difficult. Though it is important to be appropriately prescriptive and definitive in terms of the functions of the legislation (and of any systems put in place to bring the legislation into action), function creep can and does occur through the development of different interpretations of legislation, amendments to legislation and so forth. The only real way to guard against function creep is not to build the systems in the first place: a key reason to reject this proposed legislation in its entirety rather than to look for ways to refine or restrict it.

6 Conclusions

The premise of the Communications Data Bill is fundamentally flawed. By its very design, innocent people’s data will be gathered (and hence become vulnerable) and their activities will be monitored. Universal data gathering or monitoring is almost certain to be disproportionate at best, highly counterproductive at worst.

Even without considering the issues discussed above, there is a potentially even bigger flaw with the bill: on the surface, it appears very unlikely to be effective. The people that it might wish to catch are the least likely to be caught – those who are expert with the technology will be able to find ways around the surveillance, or ways to ‘piggy back’ on other people’s connections and draw more innocent people into the net. As David Davis put it, only the incompetent and the innocent will get caught.

The entire project needs a thorough rethink. Warrants (or similar processes) should be put in place before the gathering of the data or the monitoring of the activity, not before the accessing of data that has already been gathered, or the ‘viewing’ of a feed that is already in place. A more intelligent, targeted rather than universal approach should be developed. No evidence has been made public to support the suggestion that a universal approach like this would be effective – it should not be sufficient to just suggest that it is ‘needed’ without that evidence.

That brings a bigger question into the spotlight, one that the Joint Committee on Human Rights might think is the most important of all. What kind of a society do we want to build – one where everyone’s most intimate activities are monitored at all times just in case they might be doing something wrong? That, ultimately, is what the Draft Communications Data Bill would build. The proposals run counter to some of the basic principles of a liberal, democratic society – a society where there should be a presumption of innocence rather than of suspicion, and where privacy is the norm rather than the exception.

Dr Paul Bernal
Lecturer in Information Technology, Intellectual Property and Media Law
UEA Law School
University of East Anglia
Norwich NR4 7TJ

——————————————

Labour and the ‘Snoopers’ Charter’…

The draft Communications Data Bill – dubbed, pretty accurately in my view, the ‘Snoopers’ Charter’ – has already been the subject of a great deal of scrutiny. I’ve blogged about it a number of times, as have many others far more expert than me. My own MP, Lib Dem Julian Huppert, will be on one of the parliamentary committees scrutinising the bill, and has spoken out about aspects of it with some vehemence. David Davis MP, Tory backbencher and former minister, has been one of the most vocal and eloquent opponents of the whole idea of the bill. His speech at the Scrambling for Safety conference a few months ago (which I blogged about here) was hugely impressive. I’m sure he will keep up the pressure – and I’m equally sure that there are a significant number of Tories and Lib Dems who will have at the very least sympathies for the respective positions of Davis and Huppert.

But what about Labour? No Labour MPs even appeared at the Scrambling for Safety conference – and very few have said anything much about it even since the draft bill was released. Tom Watson MP, one of very few MPs who really ‘gets’ the internet, and one who really understands privacy, has of course had one or two other things on his mind… but what about the rest of them? All we’ve heard are cautious and even supportive noises from Yvette Cooper, and little else. That, for me, is deeply disturbing. It’s disturbing for two reasons:

  1. If we’re going to defeat this bill – and we need to defeat this bill – then we’re going to need to get the Labour Party on board, and not just because they’re the ‘opposition’.
  2. More importantly, because the Labour Party SHOULD oppose the kind of measures put forward in this bill, if they’re really the party of the ordinary person, if they’re in any sense a ‘progressive’ party, and if they’re any sort of a ‘positive’ party.

This second point is particularly important. I’ve blogged before about the problems that all our political parties have over the whole issue of privacy, but the issues for the Labour Party are particularly acute – and the challenge is particularly difficult. In order to take a positive and progressive stance on the Snoopers’ Charter, they need to make a break from the past. They need to recognise that all the anti-terror rhetoric that surrounded the invasion of Iraq and its repercussions was misguided at best – and deeply counter-productive at worst. They need to somehow acknowledge that mistakes were made both in approach and in detail. Can they do this?

It’s always hard for a politician to make a real break from the past – accusations of U-turns, of ‘flip-flopping’, of being indecisive and so forth abound, and politicians are often deeply scared of appearing ‘weak’. Moreover, the Labour Party, as I discussed in my earlier blog, can be very afraid of appearing not to understand the ‘harsh realities’ of the world. They want to appear tough, to be able to make the ‘tough decisions’ – and not to let the Tories be the ‘party of law and order’, and of ‘security’. The scars of the unilateral nuclear disarmament policies of the 80s are still not really healed.

…and yet, I think there might be a chance. Even now, with the infighting over the ‘Progress’ organisation, the soul of the Labour Party is in some ways being reforged. That could open up opportunities and not just old wounds – an opportunity for the Labour Party to assess what it actually stands for. If it makes the decisions that I hope it does – that it’s a party for ‘little people’, a party for ‘freedom’, a party that looks forward rather than back, and a party that understands the modern world and understands young people – then it could be willing to take a positive stance over the Snoopers’ Charter.

The Snoopers’ Charter is an inherently repressive and retrograde piece of legislation, both in approach and in detail. It sets out a relationship between state and citizen that is not the kind of relationship a progressive, liberal and positive political party should accept – and it works on the basis of an old kind of thinking, an old kind of fear. That should be the bottom line. I hope we can get the Labour Party to understand that.

A police state?

Yesterday saw the release of the details of the Draft Communications Data Bill – and, despite the significant competing interest in David Cameron’s appearance at the Leveson Inquiry, its arrival was greeted with a lot of attention and reaction, both positive and negative. Theresa May characterised those of us who oppose the bill as ‘conspiracy theorists’, something that got even the Daily Mail into a bit of a state. Could she, however, have a point? Are we over-egging the pudding by suggesting that the kind of thing in the Bill moves us in the direction of a police state? I’ve been challenged myself over my fairly regular use of that famous quote by the excellent Bruce Schneier:

“It’s bad civic hygiene to build an infrastructure that can be used to facilitate a police state.” (see his blog here)

One of the things I was questioned on was what we actually mean by a ‘police state’ – and it started me thinking. I’ve looked at definitions (e.g. the ever-reliable(!) Wikipedia entry on ‘police state’ here) – it’s not a simple definition, and no single thing can be seen as precisely characterising what constitutes a police state. I’m no political scientist – and this is not a political science blog – but we all need to think about these things in the current climate. The primary point for me, as triggered by the Schneier quote, is that the difference between a ‘police state’ and a ‘liberal’ state is about assumptions and defaults (something I find myself writing about a lot!).

Police states

In a police state, the assumption is one of suspicion and distrust. People are assumed to be untrustworthy, and as a consequence generalised and universal surveillance makes perfect sense – and the legal, technical and bureaucratic systems are built with that universal surveillance in mind. The two police states I have most direct experience of, Burma and pre-revolutionary Romania, both worked very much in that way – the question of definitions of ‘police state’ is of course a difficult one, but when you’ve seen or experienced the real thing, it does change things.

When I visited Burma back in 1991, I know that every local who even spoke to me in the street was picked up and ‘questioned’ after the event – I don’t know quite how ‘severely’ they were questioned, but when I first heard about it after I returned to the UK it shook me. It said a great deal – firstly, that I was being watched at all times, and secondly that even talking to me was considered suspicious, and in need of investigation. The assumption was of suspicion. The default was guilt.

My wife is Romanian, and was brought up in the Ceaușescu regime – and she generally laughs when people talk about trusting their government and believing government assurances about how they can be trusted. From all the stories she’s told me, Ceaușescu would have loved the kind of surveillance facilities and access to information that we in the UK seem to be willing to grant our government. So would Honecker in East Germany. So do all the despotic regimes currently holding power around the world – monitoring social networks etc comes naturally to them, as it does to anyone wanting to control through information. Everyone is a suspect; everyone might be a terrorist, a subversive, a paedophile, a criminal.

‘Liberal’ states

In a liberal state the reverse should be true – people are (or should be) generally assumed to be trustworthy and worthy of respect, with the ‘criminals’, ‘subversives’ and ‘terrorists’ very much the exception. The idea of ‘innocent until proven guilty’ is a reflection (though not a precise one) of this. That is both an ideal and something that, in general, I believe works in practice. What that means, relating back to the Schneier quote, is that we should avoid putting into place anything that is generalised rather than targeted, anything that assumes suspicion (or even guilt), anything that doesn’t have appropriately powerful checks and balances (rather than bland assurances) in place. It means that you should think very, very carefully about the advantages of things before putting them in place ‘just in case’.

At the recent ‘Scrambling for Safety’ meeting about the new UK surveillance plans (which I’ve blogged about before) two of the police representatives confirmed in no uncertain terms that the idea of universal rather than targeted surveillance was something that they neither supported nor believed was effective. They prefer the ‘intelligent’ and targeted approach – and not to put in place the kind of infrastructural surveillance systems and related legal mechanisms that people like me would call ‘bad civic hygiene’.

In a liberal rather than a police state, policing should be by consent – the police ‘represent’ the people, enforcing rules and laws that the people generally believe in and support. The police aren’t enemies of the people – and the people aren’t enemies of the police. The police generally know that – on Twitter, in particular, I have a lot of contact with some excellent police people, and I’m sure they don’t want to be put in that kind of position.

The Communications Bill

So where does the new bill come into this? As well as the detailed issues with it (which I will be looking into over the next few weeks and months) there’s a big question of assumptions going on. It’s not just the details that matter, it’s the thinking that lies behind it. The idea that universal surveillance is a good thing – and make no mistake, that’s what’s being envisaged here – should itself be challenged, not just the details. That’s the idea that lies behind a police state.