Guest post: Data Retention: I can’t believe it’s not lawful, can you? A response to Anthony Speaight QC

Guest post by Matthew White

Introduction:

Ladies and gentlemen, Bagginses and Boffins. Tooks and Brandybucks. Grubbs! Chubbs! Hornblowers! Bolgers! Bracegirdles! Proudfoots! Put your butter away, for I am about to respond to, rebut, rebuke and more, a recent blog post for the Judicial Power Project by Anthony Speaight QC on data retention.

Blanket data retention is unlawful, please deal with it:

Speaight starts off by referring to the recent Court of Appeal (CoA) judgment in Tom Watson and Others v Secretary of State for the Home Department [2018] EWCA Civ 70 and how the Court of Justice of the European Union (CJEU) has created problems and uncertainties with regards to data retention. As David Allen Green would say, ‘Well…’ Well, just to be clear, the position of the CJEU on blanket indiscriminate data retention is crystal clear. It. Is. Unlawful. It just happens that the CoA took the position of sticking their fingers in their ears and pretending that the CJEU’s ruling doesn’t apply to UK law, because it’s somehow (it’s not) different.

Just billing data is retained? Oh really?

Next, Speaight recaps the data retention saga so far, in that telecommunications companies have always recorded who uses their services, when and where, often for billing purposes. A long time ago, in a galaxy far, far away (a few years ago, and anywhere with an internet connection) this position was a robust one. But the European Commission (Commission) in 2011 highlighted that:

[T]rends in business models and service offerings, such as the growth in flat rate tariffs, pre-paid and free electronic communications services, meant that operators gradually stopped storing traffic and location data for billing purposes thus reducing the availability of such data for criminal justice and law enforcement purposes.

So, in a nutshell, data retained for billing purposes are on the decrease. This would explain why the Data Retention Directive (DRD) (discussed further below) affected:

[P]roviders of electronic communication services by requiring such providers to retain large amounts of traffic and location data, instead of retaining only data necessary for billing purposes; this shift in priority results in an increase in costs to retain and secure the data.

So, it’s simply untrue to refer to just billing data when talking about data retention, because this isn’t the only data that is or has ever been sought.

It’s the Islamists’ fault that we have data retention:

Speaight next points out that it was the advent of Islamist international terrorism that made it advantageous to place data retention obligations on companies. Oh really? Are we going down this route? Well….. demands for data retention can be traced back to the ‘International Law Enforcement and Telecommunications Seminars’ (ILETS) (6), and in its 1999 report it was realised that Directive 97/66/EC (the old ePrivacy Directive), which permitted the retention of communications data only for billing purposes, was a problem. The report sought to ‘consider options for improving the retention of data by Communication Service Providers.’ Improve? Ha. Notice how 1999 was before 9/11? Funny that.

It doesn’t stop there though. A year later (still before 9/11), the UK’s National Crime and Intelligence Service (NCIS) made a submission (on behalf of MI5, MI6, GCHQ etc) to the Home Office on data retention laws. They ironically argued that a targeted approach would be a greater infringement of personal privacy (para 3.1.5). Of course, they didn’t say how or why this was the case, because, reasons. Charles Clarke, the then junior Home Office Minister, and Patricia Hewitt, an ‘E-Minister’, both claimed such proposals would never happen (Judith Rauhofer, ‘Just Because You’re Paranoid, Doesn’t Mean They’re Not After You: Legislative Developments in Relation to the Retention of Communications Data’ (2006) SCRIPTed 3, 228; Patricia Hewitt and Charles Clarke, Joint letter to Independent on Sunday, 28 Jan 2000) and should not be implemented (Trade and Industry Committee, UK Online Reviewed: the First Annual Report of the E-Minister and E-Envoy Report (HC 66 1999-2000), Q93).

Guess what? A year later, Part 11 of the Anti-terrorism, Crime and Security Act 2001 (ATCSA 2001) came into force, three months after 9/11 (Judith Rauhofer, 331). The Earl of Northesk, however, pointed out that ‘there is no evidence whatever that a lack of data retained has proved an impediment to the investigation of the atrocities’ on 9/11 (HL Deb 4 Dec vol 629 col. 808-9). What this demonstrates is that data retention was always on the cards, even when its utility wasn’t proven; even the then Prime Minister, Tony Blair, noted that ‘all the surveillance in the world’ could not have prevented the 7/7 bombings. It’s just that, as Roger Clarke succinctly puts it:

“[M]ost critical driver of change, however, has been the dominance of national security extremism since the 2001 terrorist attacks in the USA, and the preparedness of parliaments in many countries to grant law enforcement agencies any request that they can somehow link to the idea of counter-terrorism.” (Roger Clarke, ‘Data retention as mass surveillance: the need for an evaluative framework’ (2015) International Data Privacy Law 5:2 121, 122).

Islamic terrorism was just fresh justification (7,9) for something that ‘the EU governments always intended to introduce an EC law to bind all member states to adopt data retention.’ Mandatory data retention was championed by the UK during its Presidency of the Council of the European Union (Council) (9) (and yes, that includes the ‘no data retention from us’ Charles Clarke (who was accused of threatening the European Parliament into agreeing to data retention (9))), and it was described as a master class in diplomacy and political manoeuvring (Judith Rauhofer, 341) (and they say it’s the EU that tells us what to do!!). Politicians goin’ politicate. Yes, the DRD makes reference to the Madrid bombings, but the DRD was not limited to combating terrorism (6), just as the reasons for accessing communications data in UK law under s.22 of the Regulation of Investigatory Powers Act 2000 (RIPA 2000) were not solely based on fighting terrorism. There is nothing wrong with saying that data retention (though not blanket retention, of course) and access to the retained data can be important in the fight against Islamist terrorism, but would you please stop pretending that this was the basis on which data retention was sought?

Data retention was smooth like rocks:

Next, Speaight points to the ‘smooth operation’ of the data retention system. Smooth how and in what ways? Harder to answer that is, yess! Well….. in 2010, the Article 29 Working Party (WP29) pointed out that ‘the lack of available sensible statistics hinders the assessment of whether the [data retention] directive has achieved its objectives.’ The WP29 went further, pointing out that there was a lack of harmonisation in national implementation of the DRD (2). This was the very purpose of the DRD (harmonising data retention across the EU), and it didn’t even achieve what it set out to do.

What about its true purpose? You know, spying on every EU citizen? Well, the European Data Protection Supervisor (EDPS) responded to the Commission’s evaluation of the DRD. WARNING: EDPS pulls no punches. First, the EDPS reiterated that the DRD was based upon the assumption of necessity (para 38). Secondly, the EDPS criticised the Commission’s assertion that most Member States considered data retention a necessary tool, when its conclusions were based on just over a third (that’s less than half, right?) of them (para 40). Thirdly, these conclusions were, in fact, only statements (para 41). Fourthly, the EDPS highlighted that there should be sufficient quantitative and qualitative information to assess whether the DRD was actually working and whether less privacy-intrusive measures could achieve the same result; such information should show the relationship between use and result (para 43).

Surprise, surprise, the EDPS didn’t find sufficient evidence to demonstrate the necessity of the DRD, and said that further investigations into alternatives should commence (para 44). Fifthly, the EDPS pretty much savaged the quantitative and qualitative information available (paras 45-52). A few years later, the CJEU asked for proof of the necessity of the DRD. There was a lack of statistical evidence from EU Member States, the Commission, the Council and the European Parliament, and despite that, they had the cheek to ask the CJEU to reject the complaints made by Digital Rights Ireland and others anyway (ibid). Only the Austrian government were able to provide statistical evidence on the use (not retention) of communications data, and that didn’t involve any cases of terrorism (ibid). The UK’s representatives admitted (come again? The UK admits something?) that there was no ‘scientific data’ to underpin the need for data retention (ibid) – so one has to ask: wtaf had the DRD been based upon? Was it the assumption of necessity the EDPS referred to? Draw your own conclusions. The moral of the story is that the DRD did not operate smoothly.

Ruling against data retention was a surprise?

Speaight then moves onto the judgment that started it all, Joined Cases C‑293/12 and C‑594/12, Digital Rights Ireland in which the CJEU invalidated the DRD across the EU. According to Speaight, this came as a ‘surprise.’

I felt a great disturbance in the Law, as if thousands of spies, police, other public authorities, politicians and lawyers suddenly cried out in terror, as the State was suddenly unable to spy any more. I fear something terrible has happened.

So, who was surprised? Was it the European Parliament, who had initially opposed this form of data retention, urging that its use must be entirely exceptional, based on specific comprehensible law, authorised by judicial or other competent authorities for individual cases, and consistent with the European Convention on Human Rights (ECHR)? Was it a surprise to them when they also noted that ‘a general data retention principle must be forbidden’ and that ‘any general obligation concerning data retention’ is contrary to the proportionality principle (Abu Bakar Munir and Siti Hajar Mohd Yasin, ‘Retention of communications data: A bumpy road ahead’ (2004) The John Marshall Journal of Computer & Information Law 22:4 731, 734; Clive Walker and Yaman Akdeniz, ‘Anti-Terrorism Laws and Data Retention: War is over?’ (2003) Northern Ireland Legal Quarterly 54:2 159, 167)?

Was it a surprise to Patrick Breyer who argued that data retention was incompatible with Articles 8 and 10 of the ECHR back in 2005 (372, 374, 375)? Was it a surprise to Mariuca Morariu who argued that the DRD had failed to demonstrate its necessity (Mariuca Morariu, ‘How Secure is to Remain Private? On the Controversies of the European Data Retention Directive’ Amsterdam Social Science 1:2 46, 54-9)? Was it a surprise to Privacy International (PI), the European Digital Rights Initiative (EDRi), 90 NGOs and 80 telecommunications service providers (9) who were against the DRD? Was it a surprise to the 40 civil liberties organisations who urged the European Parliament to vote against the retention of communications data?

Was it a surprise to the WP29, the European Data Protection Commissioners, the International Chamber of Commerce (ICC), European Internet Services Providers Association (EuroISPA), the US Internet Service Provider Association (USISPA), the All Party Internet Group (APIG) (Abu Bakar Munir and Siti Hajar Mohd Yasin, 746-749) and those at the G8 Tokyo Conference? Hell, even our own assistant Information Commissioner, Jonathan Bamford, back in 2001 wouldn’t be surprised because he said ‘Part 11 isn’t necessary, and if it is necessary it should be made clear why’ (HL Deb 27 Nov 2001 vol 629 cc183-290, 252). Was it a surprise when prior to Digital Rights Ireland:

Bulgaria’s Supreme Administrative Court, the Romanian, German Federal, Czech Republic Constitutional Courts and the Supreme Court of Cyprus all [declared] national implementation of the DRD either invalid or unconstitutional (in some or all regards) and incompatible with Article 8 ECHR?

Was Jules Winnfield surprised?

The point I’m trying to hammer home is that (you’ve guessed it) the CJEU’s ruling in Digital Rights Ireland should come as no surprise. Still on the issue of surprise: for Speaight, the surprise was that the ruling departed from decisions of the European Court of Human Rights (ECtHR) and of the CJEU itself. Ok, let’s look at these ECtHR cases Speaight refers to. The first is Weber and Saravia v Germany, a case on ‘strategic monitoring.’ This is a whole different kettle of fish when compared to the DRD, as it concerned the surveillance of 10% (I’m not saying this is cool either, btw) [30, 110] of German telecommunications, not the surveillance of ‘practically the entire European population’ [56]. Ok, that may have been an exaggeration by the CJEU as there are only 28 (we’re not so sure about one though) EU Member States, but the point is, the powers in question are not comparable. The DRD was confined to serious crime without even defining it [61], whereas the German law in Weber concerned six defined purposes for strategic monitoring [27], which could only be triggered through catchwords [32]. In Digital Rights Ireland, authorisation for access to communications data under the DRD was not dependent upon ‘prior review carried out by a court or by an independent administrative body’ [62], whereas in Weber it was [21, 25]. Apples and oranges.

The second ECtHR case was Kennedy v UK, and it’s funny that this case is brought up. The ECtHR in this case referred to a previous case, Liberty v UK, in which the virtually unfettered power of capturing external communications [64] violated Article 8 of the ECHR [70]. The ECtHR in Kennedy described this as an indiscriminate power [160, 162] (bit like data retention, huh?), and the UK only succeeded in Kennedy because the ECtHR was acting upon the assumption that interception warrants only related to one person [160, 162]. Of course, the ECtHR didn’t know that ‘person’ for the purposes of RIPA 2000 meant ‘any organisation and any association or combination of persons’ – so, you know, not literally one person.

And this was, of course, prior to Edward Snowden’s bombshell surveillance revelations, which triggered further proceedings by Big Brother Watch. A couple of years ago, in Roman Zakharov v Russia, the ECtHR’s Grand Chamber (GC) ruled that surveillance measures that are ‘ordered haphazardly, irregularly or without due and proper consideration’ [267] violate Article 8 [305]. That is because the automatic storage of clearly irrelevant data would contravene Article 8 [255]. This coincides with Advocate General (AG) Saugmandsgaard Øe’s opinion that the ‘disadvantages of general data retention obligations arise from the fact that the vast majority of the data retained will relate to persons who will never be connected in any way with serious crime’ [252]. That’s a lot of irrelevant data, if you ask me. Judge Pinto de Albuquerque, in his concurring opinion in Szabo and Vissy v Hungary, regards Zakharov as a rebuke of the ‘widespread, non-(reasonable) suspicion-based, “strategic surveillance” for the purposes of national security’ [35]. So I’d say that even Weber and Saravia is put into doubt. And so, even if the CJEU rules that data retention in the national security context is outside its competence, there is enough ECtHR case law to bite the UK on its arse.

Probably the most important ECtHR case not mentioned by Speaight (why is that?) is S and Marper v UK – this is the data retention case. Although this concerned DNA data retention, the ECtHR’s concerns ‘have clear applications to the detailed information revealed about individuals’ private lives by communications data.’ What did the GC rule in S and Marper? Oh, was it that blanket indiscriminate data retention, even of a specific group of individuals (suspects and convicts), violated Article 8? Yes, it was – and it was S and Marper to which the CJEU referred on three separate occasions in Digital Rights Ireland [47, 54-5]. Tele2 and Watson (where the CJEU reconfirmed that blanket indiscriminate data retention is prohibited under EU law) is just the next logical step with regards to communications data. And so, far from being surprising, the CJEU in Digital Rights Ireland and Tele2 and Watson is acting in a manner consistent with the case law of the ECtHR.

The CJEU case law that Speaight refers to is Ireland v Parliament and Council which was a challenge to the DRD’s legal basis, not whether it was compatible with the Charter of Fundamental Rights, so I’m not entirely sure what Speaight is trying to get at. All in all, Speaight hasn’t shown anything to demonstrate that Digital Rights Ireland has departed from ECtHR or CJEU case law.

You forgot to say the UK extended data retention laws:

Speaight then rightly acknowledges how the UK government replaced UK law implementing the DRD with the Data Retention and Investigatory Powers Act 2014 (DRIPA 2014) in lightspeed fashion. What Speaight omits, however, is that DRIPA 2014 extended retention obligations from telephone companies and Internet Service Providers (ISPs) to Over-The-Top (OTT) services such as Skype, Twitter, Google, Facebook etc. James Brokenshire MP attested that DRIPA 2014 was introduced to clarify what was always covered by the definition of telecommunications services (HC Deb 14 July, vol 584, 786). This, of course, was total bullshit (5), but like I said, politicians goin’ politicate.

Claimants don’t ask questions, courts do:

Speaight moves onto the challenges to DRIPA 2014 – we know the story already: the High Court (HC) said it was inconsistent with Digital Rights Ireland, whereas the CoA disagreed, blah, blah. Speaight points out that the claimants had no issue with data retention in principle, which is true, but so what? Speaight also points out that the CJEU went further than what the claimants asked by ruling that blanket indiscriminate data retention was not permissible under EU law. Wait, what the fark? It’s not the bloody claimants that ask the CJEU questions on the interpretation of EU law; I’m pretty sure it was the Swedish referring court (via Article 267 of the Treaty on the Functioning of the EU – you know, a preliminary reference) that asked the CJEU:

Is a general obligation to retain traffic data covering all persons, all means of electronic communication and all traffic data without any distinctions, limitations or exceptions for the purpose of combating crime (as described [below under points 1-6]) compatible with Article 15(1) of Directive 2002/58/EC, taking account of Articles 7, 8 and 15(1) of the Charter?

And the CJEU said no. End of discussion.

The ends don’t always justify the means and for clarity, the CJEU didn’t reject shit:

Speaight also says that the CJEU in Tele2 and Watson rejected AG Saugmandsgaard Øe’s advice that the French government found access to communications data useful in its investigations into the terrorist attacks in 2015. Such a position, however, invites several questions: under what circumstances was the data sought? Was it accessed as a consequence of the legal obligation to retain? Or was it already retained for business purposes? What were the results of the use of that data? Could the same results have been achieved using less intrusive means? Saying it is useful tells us nothing, as the ECtHR has plainly said that necessity (in a democratic society) is not as flexible as expressions such as ‘useful’ [48], and as the CJEU rightly noted, an objective in the general interest cannot, in and of itself, justify general indiscriminate data retention [103]. This demonstrates that the CJEU didn’t reject anything – they didn’t even refer to the French government’s evidence – they just said that, as fundamental as fighting serious crime may be, the objective and the measures employed cannot by themselves justify such a fundamental departure from the protection of human rights. Just because you can, doesn’t mean you should. The ECtHR said something similar in Klass v Germany: States ‘may not, in the name of the struggle against espionage and terrorism, adopt whatever measures they deem appropriate’ [49].

The CJEU doesn’t have to answer what it wasn’t asked:

Speaight then whines about the CJEU not addressing the issue of national security. Well, they weren’t asked about national security in Tele2 and Watson, were they? Like I said, even if the CJEU doesn’t have competence to rule on national security based data retention, Roman Zakharov is watching you from Strasbourg (he’s not actually in Strasbourg, I don’t think, but you dig).

What’s your problem with notification?

Speaight also bemoans the obligation to notify, saying this requirement could damage investigations and surveillance and went beyond what the claimants had asked. Well, again, the claimants weren’t asking the questions, ffs, and the CJEU made this point by referring to previous case law, notably Schrems [95]. The CJEU made very clear that notification should be done ‘as soon as that notification is no longer liable to jeopardise the investigations being undertaken by those authorities’ [121]. This is consistent with the ECtHR’s stance. Both courts are aware that notification can defeat the purpose of an investigation, and that sometimes, even after it has concluded, notification may still not be appropriate. But Speaight seems to omit this crucial detail.

Lawyers getting mad:

Speaight notes that criticism of Tele2 is not confined to Eurosceptics. Sure, but you don’t have to be a Europhile to defend it either. He also notes that it was roundly condemned by all the participants at a meeting of the Society of Conservative Lawyers. Well, no shit, Sherlock – the name kinda gave it away. He also notes that the former Independent Reviewer of Terrorism Legislation, David Anderson QC, said it was the worst judgment he knew of. Wait till Anderson reads the ECtHR’s case law on this matter then, which if anything, on a proper reading, goes further than Tele2. Speaight also points out that Dominic Grieve QC MP was pissed, and that a distinguished member of the French Bar, François-Henri Briard, basically said we need more conservative judges to trample on fundamental rights. If a judgment that protects the fundamental rights of all EU citizens pisses off a few lawyers, so be it.

Conclusions:

I’ve spent way too much time on Speaight’s post, and the really sad thing is, I’ve enjoyed it. It’s hard to have a conversation about data retention when you first have to sift through a load of bollocks just to make your point – and there was plenty of bollocks. And by the time you’ve cleared through all the falsities and misleading or exaggerated points, you’ve run close to 4k words without actually saying what your position is. So, my position for this blog post is: we should always shoot down rubbish when it shows its ugly face, or else it festers. Actually, the point is, I can believe that blanket indiscriminate data retention is unlawful.

Privacy and Security together…

I just spent a very interesting day at ‘Project Breach’ – an initiative of Norfolk and Suffolk police, trying to encourage businesses and others to understand and protect themselves from cybercrime. It was informative in many ways, and primarily (as far as I could tell) intended to be both a pragmatic workshop, giving real advice, and to ‘change the narrative’ over cybercrime. In both ways, I think it worked – the advice, in particular, seemed eminently sensible.

What was particularly interesting, however, was how that advice was in most ways in direct tension with the government’s approach to surveillance, as manifested most directly in the Investigatory Powers Act 2016 – often labelled the ‘Snooper’s Charter’.

The speaker – Paul Maskall – spent much of the first session outlining the risks associated with your ‘digital footprint’: how your search history could reveal things about you, how your metadata could say more about you than the content of your postings, how your browsing history could put you at risk of all kinds of scams and so forth. And yet all of this is made more vulnerable by the Investigatory Powers Act. Service providers can be forced to retain search histories and metadata. ‘Internet Connection Records’ could be used to create a record of your browsing – and all of this could then be vulnerable to the many forms of hacking that Maskall went on to detail. The Investigatory Powers Act makes you more vulnerable to scams and other crimes.
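To make that concrete, here is a minimal sketch in Python (the records and the category table are entirely invented, not any real provider’s format) of the kind of inference that even a bare log of sites visited – roughly what an Internet Connection Record would capture – makes possible:

```python
from collections import Counter

# Hypothetical ICR-style log: timestamps and domains only, no content.
# Every entry here is invented for illustration.
icr_log = [
    ("2017-04-02 01:14", "gamblinghelp.example.org"),
    ("2017-04-02 01:20", "payday-loans.example.com"),
    ("2017-04-03 23:55", "gamblinghelp.example.org"),
    ("2017-04-04 09:02", "depression-support.example.net"),
    ("2017-04-05 00:41", "payday-loans.example.com"),
]

# Toy mapping from domains to sensitive categories (an assumption of this
# sketch; real profilers use far richer classifications).
categories = {
    "gamblinghelp.example.org": "gambling problem?",
    "payday-loans.example.com": "financial distress?",
    "depression-support.example.net": "mental health?",
}

profile = Counter(categories[domain] for _, domain in icr_log)
print(profile.most_common())
# [('gambling problem?', 2), ('financial distress?', 2), ('mental health?', 1)]
```

Metadata alone – who connected to what, and when – already sketches a picture of someone’s health, finances and habits without reading a single message; and anything retained like this can, of course, be stolen.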

The keys to the next two sessions were how to protect yourself – and two central pillars were encryption and VPNs. Maskall emphasised again and again the importance of encryption – and yet this is what Amber Rudd railed against only a few weeks ago, trying to link it to the Westminster attack, though subsequent evidence proved yet again that this was a red herring at best. The Investigatory Powers Act adds to the old Regulation of Investigatory Powers Act (RIPA) in the way it could allow encryption to be undermined… which again puts us all at risk.

When I raised this issue, first on Twitter and then in the room, Maskall agreed with me – encryption is critical to all of us, and attempts to undermine it put us all at risk – but I was challenged, privately, by another delegate in the room after the session was over. Amber Rudd, this delegate told me, wasn’t talking about undermining encryption for us, but only for ISIS and Al Qaeda. I was very wrong, he told me, to put the speaker on the spot about this subject.

All that showed me was how sadly effective the narrative presented by Amber Rudd, and Theresa May before her, as well as others in what might loosely be called the ‘security lobby’, has been. You can’t undermine encryption for ISIS without undermining it for all of us. You can’t allow backdoors for the security services without providing backdoors for criminals, enemy states and terrorists.
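To illustrate why – a toy sketch only, using the third-party Python `cryptography` package, with an invented scenario and variable names – an escrowed ‘backdoor’ key is just a key, and it decrypts equally well for whoever holds it:

```python
from cryptography.fernet import Fernet

# A user encrypts a private message with their key.
user_key = Fernet.generate_key()
message = Fernet(user_key).encrypt(b"my bank login details")

# A 'backdoor' scheme escrows a copy of that key for the authorities.
escrowed_copy = user_key  # the backdoor IS the key - nothing cleverer

# The holder of the escrowed copy can decrypt - whether that holder is
# the police, a criminal who stole the escrow, or a hostile state.
print(Fernet(escrowed_copy).decrypt(message))  # b'my bank login details'
```

The mathematics cannot check the morality of the key-holder: the same copy works for everyone who obtains it.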

VPNs were the other key tool mentioned by the speaker – and quite rightly. Though the Investigatory Powers Act does not act directly against them, they do (or might) frustrate the main new concept introduced by the Act, the Internet Connection Record. Further, VPN operators might also be subjected to the attention of the authorities and asked to provide browsing histories themselves – though the good ones don’t even retain those histories, which will cause a conflict in itself. Quite how the authorities will deal with the extensive use of VPNs has yet to be seen – but if VPNs frustrate the intentions of the Act, we can expect something to be done. The overall point, however, remains: for good security – and privacy – we need to go against the intentions of the Act.
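A similarly hedged sketch (hypothetical records again) of why VPNs and Internet Connection Records sit so uneasily together – with a VPN, the record visible to the ISP collapses to a single endpoint:

```python
# Hypothetical connection records as seen by an ISP, without a VPN.
without_vpn = [
    "alice -> bank.example.com",
    "alice -> clinic.example.org",
    "alice -> news.example.net",
]

# With a VPN, every connection terminates at the VPN provider; the real
# destinations travel inside an encrypted tunnel the ISP cannot read.
with_vpn = ["alice -> vpn.example-provider.com"] * len(without_vpn)

print(without_vpn)  # a browsing profile
print(with_vpn)     # the same activity, reduced to one uninformative entry
```

Only the VPN operator sees the real destinations – which is exactly why the authorities’ attention may turn to the operators next.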

The other way to put that is that the Act goes directly against good practice in security and privacy. It undermines, rather than supports, security. This is something that many within the field understand – including, judging from his comments to me after the event, the speaker at Project Breach. It is sad that this should be the case. A robust, secure and privacy-friendly internet helps us all. Even though it might go against their instincts, governments really should recognise that.

The internet, privacy and terrorism…

As is sadly all too common after an act of terrorism, freedom on the internet is also under attack – and almost entirely for spurious reasons. This is not, of course, anything new. As the late and much-lamented Douglas Adams, who died back in 2001, put it:

“I don’t think anybody would argue now that the Internet isn’t becoming a major factor in our lives. However, it’s very new to us. Newsreaders still feel it is worth a special and rather worrying mention if, for instance, a crime was planned by people ‘over the Internet’.”

The headlines in the aftermath of the Westminster attack were therefore far from unpredictable – though a little more extreme than most. The Daily Mail had:

“Google, the terrorists’ friend”


…and the Times noted that:

“Police search secret texts of terrorist”


…while the Telegraph suggested that:

“Google threatened with web terror law”


The implications are direct: the net is a tool for terrorists, and we need to bring in tough laws to get it under control.

And yet this all misses the key point – the implication of Douglas Adams’ quote. Terrorists use the internet to communicate and to plan because we all use the internet to communicate and plan. Terrorists use the internet to access information because we all use the internet to access information. The internet is a communicative tool, so of course they’ll use it – and as it develops and becomes better at all these things, we’ll all be able to use it in this way. And this applies to all the tools on the net. Yes, terrorists will use Google. Yes, they’ll use Facebook too. And Twitter. And WhatsApp. Why? Because they’re useful tools, systems, platforms, whatever you want to call them – and because they’re what we all use. Just as we use hire cars and kitchen knives.

Useful tools…

That’s the real point. The internet is something we all use – and it’s immensely useful. Yes, Google is a really good way to find out information – that’s why we all use it. The Mail seems shocked by this – yet it’s not particularly difficult to find out how a car might be used to drive somewhere and to crash into people. Google is not specifically ‘the terrorists’ friend’, but a useful tool for all of us.


The same is true about WhatsApp – and indeed other forms of communication. Yes, they can be used by ‘bad guys’, and in ways that are bad – but they are also excellent tools for the rest of us. If you do something to ban ‘secret texts’ (effectively by undermining encryption), then actually you’re banning private and confidential communications – both of which are crucial for pretty much all of us.

The same is true of privacy itself. We all need it. Undermining it – for example by building in backdoors to services like WhatsApp – undermines us all. Further, calls for mass surveillance damage us all – and attacks like that at Westminster absolutely do not help build the case for more of it. Precisely the opposite. To the surprise of no-one who works in privacy, it turns out that the attacker was already known to the authorities – so did not need to be found by mass surveillance. The same has been true of the perpetrators of all the major terrorist attacks in the West in recent years. The murderers of Lee Rigby. The Boston Bombers. The Charlie Hebdo shooters. The Sydney siege perpetrators. The Bataclan killers. None of these attackers needed identifying through mass surveillance. At a time when resources are short, to spend time, money, effort and expertise on mass surveillance rather than improving targeted intelligence, putting more human intelligence into place – more police, more investigators rather than more millions into the hands of IT contractors – is hard to defend.

More responsible journalism…

What is also hard to defend is the kind of journalism that produces headlines like those in the Mail, or indeed in the Times. Journalists should know better. They should know all too well the importance of privacy and confidentiality – they know when they need to protect their own sources, and get rightfully up in arms when the police monitor their communications and endanger their sources. They should know that ‘blocking terror websites’ is a short step away from political censorship, and potentially highly damaging to freedom of expression – and freedom of the press in particular.

They should know that they’re scaremongering or distracting with their stories, their headlines and their ‘angles’. At a time when good, responsible journalism is needed more than ever – to counter the ‘fake news’ phenomenon amongst other things, and to keep people informed at a time of political turmoil all over the world – this kind of an approach is deeply disappointing.

The new IP Bill…. first thoughts…

This morning, in advance of the new draft of the Investigatory Powers Bill being released, I asked six questions:

[Screenshot of the six questions]

At first glance, they seem to have got about 2 out of 6, which is perhaps better than I suspected, but not as good as I hoped.

  1. On encryption, I fear they’ve failed again – or if anything made things worse. The government claims to have clarified things in S217 and indeed in the Codes of Practice – but on a first reading this seems unconvincing. The Communications Data Draft Code of Practice section on ‘Maintenance of a Technical Capability’ relies on the idea of ‘reasonability’, which is itself distinctly vague. No real clarification here – and the possibility of ordering back-doors via a ‘Technical Capability Notice’ still looms very large. (0 out of 1)
  2. Bulk Equipment Interference remains in the Bill – large-scale hacking ‘legitimised’, despite the recommendation from the usually ‘authority-friendly’ Intelligence and Security Committee that it be dropped. (0 out of 2)
  3. A review clause has been added to the Bill – but it is so anaemic as to be scarcely worth its place. S222 of the new draft says that the Secretary of State must prepare a report by the end of the sixth year after the Bill is passed, publish it and lay it before Parliament. This is not a sunset clause, and the report is not required to be independent or undertaken by a review body – just by the Secretary of State. It’s a review clause without any claws, so worth only 1/4 of a point. (1/4 out of 3)
  4. At first read-through, the ‘double-lock’ does not appear to have been notably changed, though the ‘urgent’ clause has seemingly been tightened a little, from 5 days to 3 – but even that isn’t entirely clear. I’d give this 1/4 of a point (so that’s 1/2 out of 4)
  5. The Codes of Practice were indeed published with the bill (and are accessible here) which is something for which the Home Office should be applauded (so that’s 1 and 1/2 out of 5)
  6. As for giving full time for scrutiny of the Bill, the jury is still out – the rumour is second reading today, which still looks like undue haste, so the best I can give them is 1/2 a point – making it a total of 2 out of 6 on my immediate questions.

That’s not quite as bad as I feared – but it’s not as good as it might have been and should have been. Overall, it looks as though the substance of the bill is largely unchanged – which is very disappointing given the depth and breadth of the criticism levelled at it by the three parliamentary committees that examined it. The Home Office may be claiming to have made ‘most’ of the changes asked for – but the changes they have made seem to have been the small, ‘easy’ changes rather than the more important substantial ones.

Those still remain. The critical issue of encryption has been further obfuscated, the most intrusive powers – the Bulk Powers and the ICRs – remain effectively untouched, as do the most controversial ‘equipment interference’ powers. The devil may well be in the detail, though, and that takes time and careful study – there are people far more able and expert than me poring over the various documents as I type, and a great deal more will come out of that study. Time will tell – if we are given that time.


The Saga Of the Privacy Shield…


(With apologies to all poets everywhere)

Listen to the tale I tell
Of Princes bold and monsters fell
A tale of dangers well conceal’d
And of a bright and magic shield

There was a land, across the bay
A fair land called the USA
A land of freedom: true and just
A land that all the world might trust

Or so, at least, its people cheered
Though others thought this far from clear
From Europe all the Old Folk scowled
And in the darkness something howled

For a monster grew across the bay
A beast they called the NSA,
It lived for one thing: information
And for this it scoured that nation

It watched where people went and came
It listened and looked with naught of shame
The beast, howe’er, was very sly
And hid itself from prying eyes

It watched while folk from all around
Grew wealthy, strong and seeming’ sound
And Merchant Princes soon emerged
Their wealth it grew surge after surge

They gathered data, all they could
And used it well, for their own good
They gave the people things they sought
While keeping more than p’rhaps they ought

And then they looked across the bay
Saw Old Folk there, across the way
And knew that they could farm those nations
And take from them their information

But those Old Folk were not the same
They did not play the Princes’ game
They cared about their hope and glory
Their laws protected all their stories

‘You cannot have our information
Unless we have negotiations
Unless our data’s safe and sound
We’ll not let you plough our ground’

The Princes thought, and then procured
A harbour safe and quite secure
Or so they thought, and so they said
And those Old Folk gave them their trade

And so that trade just grew and grew
The Old Folks loved these ideas new
They trusted in that harbour’s role
They thought it would achieve its goal

But while the Princes’ realms just grew
The beast was learning all they knew
Its tentacles reached every nook
Its talons gripped each face, each book

It sucked up each and ev’ry drop:
None knew enough to make it stop
Indeed, they knew not what it did
‘Til one brave man, he raised his head

And told us all, around the world
‘There is a beast, you must be told’
He told us of this ‘NSA’
And how it watched us day by day

He told us of each blood-drenched claw
He named each tentacle – and more
And with each word, he made us fear
That this beast’s evil held us near

In Europe one man stood up tall
“Your harbour is not safe at all!
You can’t protect us from that beast
That’s not enough, not in the least!”

He went unto Bourg of Luxem
The judges listened care’fly to him
‘A beast ‘cross the bay sees ev’rywhere
Don’t send our secrets over there!’

The judges liked not what they saw
‘That’s no safe harbour,’ they all swore
“No more stories over there!
Sort it out! We do all care!”

The Princes knew not what to do
They could not see a good way through
The beast still lurked in shadows dark
The Princes’ choices seemed quite stark

Their friends and fellows ‘cross the bay
Tried to help them find a way
They whispered, plotted, thought and plann’d
And then the Princes raised their hands

“Don’t worry now, the beast is beaten
It’s promised us you won’t be eaten
It’s changed its ways; it’s kindly now
And on this change you have our vow

Behold, here is our mighty shield
And in its face, the mighty yield
Its magic, and its trusty steel
Is strong enough for all to feel

Be brave, be bold, you know you should
You know we only want what’s good”
But those old folk, they still were wary
That beast, they knew, was mighty scary

“That beast of yours, is it well chained?
Its appetites, are they contained?
Does it still sniff at every door?
Its tentacles, on every floor?”

The Princes stood up tall and proud
“We need no chains”, they cried aloud
“Our beast obeys us, and our laws
You need not fear its blunted claws.”

“Besides,” they said, “you are contrary
You have your own beasts, just as scary”
The Old Folk looked a mite ashamed
‘Twas true their own beasts were not tamed

“‘Tis true our beasts remain a blight
But two wrongs never make a right
It’s your beast now that we all fear
Tell us now, and make it clear!”

“Look here” the Princes cried aloud
“Of this fair shield we all are proud,
Its face is strong, its colours bright
There’s no more need for any fright.”

The Old Folk took that shield in hand
‘Twas shiny, coloured, bright and grand
But as they held it came a worry
Why were things in such a hurry?

Was this shield just made of paper?
Were their words just naught but vapour?
Would that beast still suck them dry?
And their privacy fade and die?

Did they trust the shield was magic?
The consequences could be tragic
The monster lurked and sucked its claws
It knew its might meant more than laws

Whatever happened, it would win
Despite the tales the Princes spin
It knew that well, and so did they
In that fair land across the bay.

Global letter on Encryption – why it matters.

I am one of the signatories on an open letter to the governments of the world that has been released today. The letter has been organised by Access Now and there are 195 signatories – companies, organisations and individuals from around the world.

The letter itself can be found here. The key demands are the following:

[Screenshot of the letter’s key demands]

It’s an important letter, and one that should be shared as widely as possible. Encryption matters, and not just for technical reasons and not just for ‘technical’ people. Even more than that, the arguments over encryption are a manifestation of a bigger argument – and, I would argue, of a massive misunderstanding that needs to be addressed: the idea that privacy and security are somehow ‘alternatives’, or at the very least that privacy is something that needs to be ‘sacrificed’ for security. The opposite is the case: privacy and security are not alternatives; they’re critical partners. Privacy needs security and security needs privacy.

The famous (and much misused) saying often attributed (probably erroneously) to Benjamin Franklin, “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety” is not, in this context at least, strong enough. In relation to the internet, those who would give up essential privacy to purchase a little temporary security will get neither. It isn’t a question of what they ‘deserve’ – we all deserve both security and privacy – but that by weakening privacy on the internet we weaken security.

The conflict over encryption exemplifies this. Build in backdoors, weaken encryption, prevent or limit the ways in which people can use it, and you reduce both their privacy and their security. The backdoors, the weaknesses, the vulnerabilities that are provided for the ‘good guys’ can and will be used by the ‘bad guys’. Ordinary people will be more vulnerable to criminals and scammers; oppressive regimes will be able to use them against dissidents, overreaching authorities against whistleblowers, abusive spouses against their targets, and so forth. People may think they have ‘nothing to hide’ from the police and intelligence agencies – but that is to fundamentally miss the point. Apart from everything else, it is never just the police and the intelligence agencies that our information needs protection from.

What is just as important is that there is no reason (nor evidence) to suggest that building backdoors or undermining encryption helps even in the terms suggested by those advocating it. No examples have been provided – and whenever they are suggested (as in the aftermath of the Paris terrorist attacks) they quickly dissolve when examined. From a practical perspective this makes sense: ‘tech-savvy’ terrorists will find their own way around these approaches – DIY encryption at their own end, for example – while non-tech-savvy terrorists (the Paris attackers seem to have used unencrypted SMSs) can be caught in different ways, if we use different ways and a more intelligent approach. Undermining or ‘back-dooring’ encryption puts us all at risk without even helping. The superficial attractiveness of the idea is just that: superficial.

The best protection for us all is a strong, secure, robust and ‘privacy-friendly’ infrastructure, and those who see the bigger picture understand this. This is why companies such as Apple, Google, Microsoft, Yahoo, Facebook and Twitter have all submitted evidence to the UK Parliament’s Committee investigating the draft Investigatory Powers Bill – which includes provisions concerning encryption that are ambiguous at best. It is not because they’re allies of terrorists or because they make money from paedophiles, nor because they’re putty in the hands of the ‘privacy lobby’. Very much the opposite. It is because they know how critical encryption is to the way that the internet works.

That matters to all of us. The internet is fundamental to the way that we live our lives these days. Almost every element of our lives has an online aspect. We need the internet for our work, for our finances, for our personal and social lives, for our dealings with governments, corporations and more. It isn’t a luxury any more – and neither is our privacy. Privacy isn’t an indulgence – and neither is security. Encryption supports both. We should support it, and tell our governments so.

Read the letter here – and please pass it on.

Investigatory Powers Bill – my written submission

As well as providing oral evidence to the Draft Investigatory Powers Bill Joint Committee (which I have written about here; the session can be watched here, and a transcript can be found here), I submitted written evidence on the 15th December 2015.


The contents of the written submission are set out below. It is a lot more detailed than the oral evidence, and a long read (around 7,000 words), but even so, given the timescale involved, it is not as comprehensive as I would have liked – nor did I have as much time to proofread it as I wanted. There are a number of areas I would have liked to cover that I did not, but I hope it helps.

As it is published, the written evidence is becoming available on the IP Bill Committee website here – my own evidence is part of what has been published so far.



Submission to the Joint Committee on the draft Investigatory Powers Bill by Dr Paul Bernal

I am making this submission in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research in internet law and specialise in internet privacy from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy, and set out a possible rights-based approach to internet privacy. My book, Internet Privacy Rights – Rights to Protect Autonomy, was published by Cambridge University Press in 2014. I am a member of the National Police Chiefs’ Council’s Independent Digital Ethics Panel. The draft Investigatory Powers Bill therefore lies precisely within my academic field.

I gave oral evidence to the Committee on 7th December 2015: this written evidence is intended to expand on and explain some of the evidence that I gave on that date. If any further explanation is required, I would be happy to provide it.



One page summary of the submission

The submission looks specifically at the nature of internet surveillance, as set out in the Bill, at its impact on broad areas of our lives – not just what is conventionally called ‘communications’ – and on a broad range of human rights – not just privacy but freedom of expression, of association and assembly, and of protection from discrimination. It looks very specifically at the idea of ‘Internet Connection Records’, briefly at data definitions and at encryption, as well as looking at how the Bill might be ‘future-proofed’ more effectively.

The submission will suggest that in its current form, in terms of the overarching/thematic questions set out in the Committee’s Call for Written Evidence, it is hard to conclude that all of the powers sought are necessary, uncertain that they are legal, likely that many of them are neither workable nor carefully defined, and unclear whether they are sufficiently supervised. In some particular areas – Internet Connection Records is the example that I focus on in this submission – the supervision envisaged does not seem sufficient or appropriate. Moreover, there are critical issues – for example the vulnerability of gathered data – that are not addressed at all. These problems potentially leave the Bill open to successful legal challenge and rather than ‘future-proofing’ the Bill, they provide what might be described as hostages to fortune.

Many of the problems, in my opinion, could be avoided by taking a number of key steps. Firstly, rethinking (and possibly abandoning) the Internet Connection Records plans. Secondly, being more precise and open about the Bulk Powers, including a proper setting out of examples so that the Committee can make an appropriate judgment as to their proportionality and to reduce the likelihood of their being subject to legal challenge. Thirdly, taking a new look at encryption and being clear about the approach to end-to-end encryption. Fourthly, strengthening and broadening the scope of oversight. Fifthly, through the use of some form of renewal or sunset clauses to ensure that the powers are subject to full review and reflection on a regular basis.


1          Introductory remarks

1.1       Before dealing with the substance of the Bill, there is an overriding question that needs to be answered: why is the Committee being asked to follow such a tight timetable? This is a critically important piece of legislation – laws concerning surveillance and interception are not put forward often, and they are long and complex and deal with highly technical issues. That makes detailed and careful scrutiny absolutely crucial. Andrew Parker of MI5 called for ‘mature debate’ on surveillance immediately prior to the introduction of the Bill: the timescale set out for the scrutiny of the Bill does not appear to give an adequate opportunity for that mature debate.

1.2       Moreover, it is equally important that the debate be an accurate one, and engaged upon with understanding and clarity. In the few weeks since the Bill was introduced the public debate has been far from this. As shall be discussed below, for example, the analogies chosen for some of the powers envisaged in the Bill have been very misleading. In particular, to suggest that the proposed ‘Internet Connection Records’ (‘ICRs’) are like an ‘itemised phone bill’, as the Home Secretary described it, is wholly inappropriate. As I set out below (in section 5) the reality is very different. There are two possible interpretations for the use of such inappropriate analogies: either the people using them don’t understand the implications of the powers, which means more discussion is needed to disabuse them of their illusions, or they are intentionally oversimplifying and misleading, which raises even more concerns.

1.3       For this reason, the first and most important point that I believe the Committee should be making in relation to the scrutiny of the Bill is that more time is needed. As I set out in 8.4 below, the case for the urgency of the Bill, particularly in the light of the recent attacks in Paris, has not been made: in many ways the attacks in Paris should make Parliament pause and reflect more carefully on the best approach to investigatory powers in relation to terrorism.

1.4       In its current form, in terms of the overarching/thematic questions set out in the Committee’s Call for Written Evidence, it is hard to conclude that all of the powers sought are necessary, uncertain that they are legal, likely that many of them are neither workable nor carefully defined, and unclear whether they are sufficiently supervised. In some particular areas – Internet Connection Records is the example that I focus on in this submission – the supervision envisaged does not seem sufficient or appropriate. Moreover, there are critical issues – for example the vulnerability of gathered data – that are not addressed at all. These problems potentially leave the Bill open to successful legal challenge and rather than ‘future-proofing’ the Bill, they provide what might be described as hostages to fortune.

1.5       Many of the problems, in my opinion, could be avoided by taking a number of key steps. Firstly, rethinking (and possibly abandoning) the Internet Connection Records plans. Secondly, being more precise and open about the Bulk Powers, including a proper setting out of examples so that the Committee can make an appropriate judgment as to their proportionality and to reduce the likelihood of their being subject to legal challenge. Thirdly, taking a new look at encryption and being clear about the approach to end-to-end encryption. Fourthly, strengthening and broadening the scope of oversight. Fifthly, through the use of some form of renewal or sunset clauses to ensure that the powers are subject to full review and reflection on a regular basis.

2          The scope and nature of this submission

2.1       This submission deals specifically with the gathering, use and retention of communications data, and of Internet Connection Records in particular. It deals more closely with the internet than with other forms of communication – this is my particular area of expertise, and the internet is becoming more and more important as a form of communication. The submission does not address areas such as Equipment Interference, and deals only briefly with other issues such as interception and oversight. Many of the issues identified with the gathering, use and retention of communications data, however, have a broader application to the approach adopted by the Bill.

2.2       It should be noted, in particular, that this submission does not suggest that it is unnecessary for either the security and intelligence services or law enforcement to have investigatory powers such as those contained in the draft Bill. Many of the powers in the draft Bill are clearly critical for both the security and intelligence services and law enforcement to do their jobs. Rather, this submission suggests that, as currently drafted, the Bill includes some powers that are poorly defined and poorly suited to their stated function, that have more serious repercussions than seem to have been understood, and that could represent a distraction, a waste of resources and an unnecessary set of additional risks in an already risky environment for the very people that the security and intelligence services and law enforcement are charged with protecting.

3          The Internet, Internet Surveillance and Communications Data

3.1       The internet has changed the way that people communicate in many radical ways. More than that, however, it has changed the way people live their lives. This is perhaps the single most important thing to understand about the internet: we do not just use it for what we have traditionally thought of as ‘communications’, but in almost every aspect of our lives. We don’t just talk to our friends online, or just do our professional work online – we do almost everything online. We bank online. We shop online. We research online. We find relationships online. We listen to music and watch TV and movies online. We plan our holidays online. We try to find out about our health problems online. We look at our finances online. For most people in our modern society, it is hard to find a single aspect of our lives that does not have a significant online element.

3.2       This means that internet interception and surveillance have a far bigger potential impact than traditional communications interception and surveillance might have had. Intercepting internet communications is not the equivalent of tapping a telephone line or examining the outside of letters sent and received, primarily because we use the internet for far more than we ever used telephones or letters. This point cannot be overemphasised: the uses of the internet are growing all the time and show no signs of slowing down. Indeed, new dimensions of internet use are emerging constantly: the so-called ‘internet of things’, which integrates ‘real world’ items (from cars and fridges to Barbie dolls[1]) into the internet, is just one example.

3.3       This is also one of the reasons that likening Internet Connection Records to an itemised phone bill is particularly misleading. Another equally important reason to challenge that metaphor is the nature and potential uses of the data itself. What is labelled Communications Data (and in particular ‘relevant communications data’, as set out in clause 71(9) of the draft Bill) is, by the nature of its digital form, ideal for analysis and profiling. Indeed, using this kind of data for profiling is at the heart of the business models of Google, Facebook and the entire internet advertising industry.

3.4       The inferences that can be – and are – drawn from this kind of data through automated, algorithmic analysis, rather than through informed human scrutiny, are enormous, and are central to the kind of ‘behavioural targeting’ that is the current mode of choice for internet advertisers. Academic studies have shown that very detailed inferences can be drawn: analysis of Facebook ‘Likes’, for example, has been used to indicate the most personal of data, including sexuality, intelligence and so forth. A recent study at Cambridge University concluded that ‘by mining Facebook Likes, the computer model was able to predict a person’s personality more accurately than most of their friends and family.’[2]
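To make the mechanics concrete, the following is a minimal, purely illustrative Python sketch of this kind of automated inference. The features, weights and threshold are invented for the example; they are not drawn from the Cambridge study or from any real advertising system.

```python
# Toy linear model of the kind used in 'behavioural targeting'.
# All features and weights are invented for illustration only.

# Hypothetical binary features: 1 = the user has 'Liked' that page.
profile = {"page_a": 1, "page_b": 0, "page_c": 1}

# Hypothetical learned weights linking each 'Like' to a trait score.
weights = {"page_a": 0.8, "page_b": -0.3, "page_c": 0.5}
bias = -0.6

# A real system would learn the weights from millions of profiles;
# the inference step itself is just a weighted sum and a threshold.
score = bias + sum(weights[page] * liked for page, liked in profile.items())
trait_inferred = score > 0

print(f"score={score:.2f}, trait inferred: {trait_inferred}")
```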

3.5       This means that the kind of ‘communications’ data discussed in the Bill is vastly more significant than what is traditionally considered to be communications. It also means that, from a human rights perspective, more rights are engaged by its gathering, holding and use. Internet ‘communications’ data does not just engage Article 8 in its ‘correspondence’ aspect, but in its ‘private and family life’ aspect. It engages Article 10 – the impact of internet surveillance on freedom of speech has become an increasingly significant issue in recent years, as noted in depth by the UN Special Rapporteur on Freedom of Expression, most recently in his report on encryption and anonymity.[3]

3.6       Article 11, which governs Freedom of Association and Assembly, is also critically engaged: not only do people now associate and assemble online, but they use online tools to organise and coordinate ‘real world’ association and assembly. Indeed, using surveillance to chill association and assembly has become one of the key tools by which the more authoritarian governments stifle dissent. Monitoring and even shutting off access to social media systems, for example, was used by many of the repressive regimes during the Arab Spring. Even in the UK, the government communications plan for 2013/14 included the monitoring of social media in order to ‘head off badger cull protests’, as the BBC reported.[4] This kind of monitoring does not necessarily engage Article 8, as Tweets (the most obvious example to monitor) are public, but it would engage both aspects of Article 11, and indeed of Article 10.

3.7       Article 14, the prohibition of discrimination, is also engaged: the kind of profiling discussed in paragraph 3.4 above can be used to attempt to determine a person’s race, gender, possible disability, religion, political views, even direct information like membership of a trade union. It should be noted, as is the case for all these profiling systems, that accuracy is far from guaranteed, giving rise to a bigger range of risks. Where derived or profiling data is accurate, it can involve invasions of privacy, chilling of speech and discrimination: where it is inaccurate it can generate injustice, inappropriate decisions and further chills and discrimination.

3.8       This broad range of human rights engaged means that the ‘proportionality bar’ for any gathering of this data, interception and so forth is higher than it would be if only the correspondence aspect of Article 8 were engaged. It is important to understand that the underlying reason for this is that privacy is not an individual, ‘selfish’, right, but one that underpins the way that our communities function. We need privacy to communicate, to express ourselves, to associate with those we choose, to assemble when and where we wish – indeed to do all those things that humans, as social creatures, need to do. Privacy is a collective right that needs to be considered in those terms.

3.9       It is also critical to note that communications data is not ‘less’ intrusive than content: it is ‘differently’ intrusive. In some ways, as has been historically evident, it is less intrusive – which is why it has historically been granted lower levels of protection – but increasingly the intrusion possible through the gathering of communications data is in other ways greater than that possible through examination of content. There are a number of connected reasons for this. Firstly, it is more suitable for aggregation and analysis – communications data is in a structured form, and the volumes gathered make it possible to use ‘big data’ analysis, as noted above. Secondly, content can be disguised more easily – either by technical encryption or by using ‘coded’ language. Thirdly, there are many kinds of subjects that are often deliberately avoided when writing content – things like sexuality, health and religion – that can be determined by analysis of communications data. That means that the intrusive nature of communications data can often be greater than that of content. Moreover, as the level and nature of the data gathered grows, the possible intrusions grow with it. This means that the idea that communications data needs a lower level of control, and less scrutiny, than content is not really appropriate – and in the future will become even less appropriate.
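As a minimal illustration of the first reason – the structured form – the following Python sketch aggregates a handful of invented connection records in a few lines. Nothing here reflects any real dataset or system; the point is only that structured metadata aggregates trivially, in a way that free-text content does not.

```python
from collections import Counter

# Invented, purely illustrative connection records: (user, host, hour).
records = [
    ("user1", "news-site.example", 9),
    ("user1", "health-charity.example", 2),
    ("user1", "health-charity.example", 3),
    ("user2", "news-site.example", 20),
]

# Because the data is already structured, aggregation is trivial:
# count how often each user connected to each host.
visits = Counter((user, host) for user, host, _ in records)

for (user, host), n in visits.items():
    print(f"{user} -> {host}: {n} connection(s)")
```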

4          When rights are engaged

4.1       A key issue in relation to the gathering and retention of communications data is when the relevant rights are engaged: is it when data is gathered and retained, when it is subject to algorithmic analysis or automated filtering, or when it is subject to human examination? When looked at from what might be viewed as an ‘old fashioned’ communications perspective, it is only when humans examine the data that ‘surveillance’ occurs and privacy is engaged. In relation to internet communications data this is to fundamentally miss the nature of the data and the nature of the risks. In practice, many of the most important risks occur at the gathering stage, and more at what might loosely be described as the ‘automated analysis’ stage.

4.2       It is fundamental to the nature of data that when it is gathered it becomes vulnerable. This vulnerability has a number of angles. There is vulnerability to loss – from human error to human malice, from insiders and whistle-blowers to hackers of various forms. The recent hacks of TalkTalk and Ashley Madison in particular should have focussed the minds of anyone envisaging asking communications providers to hold more and more sensitive data. There is vulnerability to what is variously called ‘function creep’ or ‘mission creep’: data gathered for one reason may end up being used for another. Indeed, where the business models of companies such as Facebook and Google are concerned this is one of the key features: they gather data with the knowledge that this data is useful and that the uses will develop and grow with time.

4.3       It is also at the gathering stage that the chilling effects come in. The Panopticon, devised by Bentham and further theorised by Foucault, was intended to work by encouraging ‘good’ behaviour in prisoners through the possibility of their being observed, not through actual observation. Similarly, it is the knowledge that data is being gathered that chills freedom of expression, freedom of association and assembly and so forth, not the specific human examination of that data. This is not only a theoretical analysis but one borne out in practice, which is one of the reasons that the UN Special Rapporteur on Freedom of Expression and many others have made the link between privacy and freedom of expression.[5]

4.4       Further vulnerabilities arise at the automated analysis stage: decisions are made by the algorithms, particularly in regard to filtering based on automated profiling. In the business context, services are tailored to individuals automatically on the basis of this kind of filtering – Google, for example, has been providing automatically and personally tailored search results to all individuals since 2009, without the involvement of humans at any stage. Whether the security and intelligence services or law enforcement use this kind of method is not clear, but it would be rational for them to do so: this does mean, however, that more risks are involved and that more controls and oversight are needed at this level as well as at the point that human examination takes place.

4.5       Different kinds of risks arise at each stage. It is not necessarily true that the risks are greater at the final, human examination stage. They are qualitatively different, engage different rights and involve different issues. If anything, however, it is likely that as technology advances the risks at the earlier stages – the gathering and then the automated analysis stages – will become more important than those at the human examination stage. It is critical, therefore, that the Bill ensures that appropriate oversight and controls are put in place at these earlier stages. At present, this does not appear to be the case. Indeed, the essence of the data retention provisions appears to be that no real risk is considered to arise from the ‘mere’ retention of data. That is to fundamentally misunderstand the impact of the gathering of internet communications data.

5          Internet Connection Records

5.1       Internet Connection Records (‘ICRs’) have been described as the only really new power in the Bill, and yet they are deeply problematic in a number of ways. The first is the question of definition. The ‘Context’ section of the Guide to Powers and Safeguards (the Guide) in the introduction to the Bill says that:

“The draft Bill will make provision for the retention of internet connection records (ICRs) in order for law enforcement to identify the communications service to which a device has connected. This will restore capabilities that have been lost as a result of changes in the way people communicate.” (paragraph 3)

This is further explained in paragraphs 44 and 45 of the Guide as follows:

“44. A kind of communications data, an ICR is a record of the internet services a specific device has connected to, such as a website or instant messaging application. It is captured by the company providing access to the internet. Where available, this data may be acquired from CSPs by law enforcement and the security and intelligence agencies.

45. An ICR is not a person’s full internet browsing history. It is a record of the services that they have connected to, which can provide vital investigative leads. It would not reveal every web page that they visit or anything that they do on that web page.”

Various briefings to the press have suggested that, in the context of web browsing, this would mean that the URL up to the first slash would be gathered (e.g. www.bbc.co.uk, and not any further detail such as http://www.bbc.co.uk/sport/live/football/34706510). On this basis it seems reasonable to assume that, in relation to app-based access to the internet via smartphones or tablets, the ICR would include the activation of the app, but nothing further.
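On that reading – and to be clear, this rests on press briefings rather than on any definition in the Bill itself – an ICR-level record of a web visit would be roughly equivalent to keeping only the host part of a URL and discarding the path. A short Python sketch of that assumption:

```python
from urllib.parse import urlparse

# The full URL identifies a specific page...
full_url = "http://www.bbc.co.uk/sport/live/football/34706510"

# ...whereas an ICR, on the briefed 'up to the first slash' reading,
# would keep only the host. (This is an assumption drawn from press
# reports, not a definition taken from the Bill.)
icr_record = urlparse(full_url).netloc

print(icr_record)  # www.bbc.co.uk
```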

5.2       The ‘definition’ of ICRs in the Bill is set out in 47(6) as follows:

“In this section “internet connection record” means data which—

(a) may be used to identify a telecommunications service to which a communication is transmitted through a telecommunication system for the purpose of obtaining access to, or running, a computer file or computer program, and

(b) is generated or processed by a telecommunications operator in the process of supplying the telecommunications service to the sender of the communication (whether or not a person).”

This definition is vague, and press briefings have suggested that the details would be in some ways negotiated directly with the communications services. This does not seem satisfactory at all, particularly for something considered to be such a major part of the Bill: indeed, the only really new power according to the Guide. More precision should be provided within the Bill itself – and specific examples spelled out in Codes of Practice that accompany the Bill, covering the major categories of communications envisaged. Initial versions of these Codes of Practice should be available to Parliament at the same time as the Bill makes its passage through the Houses.

5.3       The Bill describes the functions to which ICRs may be put. In 47(4) it is set out that ICRs (and data obtained through the processing of ICRs) can only be used to identify:

“(a) which person or apparatus is using an internet service where—

(i) the service and time of use are already known, but

(ii) the identity of the person or apparatus using the service is not known,

(b) which internet communications service is being used, and when and how it is being used, by a person or apparatus whose identity is already known, or

(c) where or when a person or apparatus whose identity is already known is obtaining access to, or running, a computer file or computer program which wholly or mainly involves making available, or acquiring, material whose possession is a crime.”

The problem is that, in all three cases, ICRs as currently defined are very poorly suited to performing any of these functions – and better methods either already exist or could be devised. ICRs provide at the same time much more information (and more intrusion) than is necessary and less information than is adequate to perform the function. In part this is because of the way that the internet is used, and in part because of the way that ICRs are set out. Examples in the following paragraphs illustrate some (but not all) of the problems.

5.4       The intrusion issue arises from the nature of internet use, as described in Section 3 of this submission. ICRs cannot be accurately likened to ‘itemised telephone bills’. They do not record the details of who a person is communicating with (as an itemised telephone bill would), but they do include vastly more information, and more sensitive and personal information, than an itemised telephone bill could possibly contain. A record of websites visited, even at the basic level, can reveal some of the most intimate information about an individual – and not in terms of what might traditionally be called ‘communications’. This intrusion could be direct – such as accessing a website such as www.samaritans.org at 3am, or accessing information services about HIV – or could come from profiling possibilities. Commercial profilers, using what is often described as ‘big data’ analysis (explained briefly in section 3 above), are able to draw inferences from very few pieces of information. Tastes, politics, sexuality and so forth can be inferred from this data, with a relatively good chance of success.
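To make the 3am example concrete, here is a short, purely hypothetical Python sketch: even a bare host-and-timestamp record of the kind an ICR would contain supports this sort of direct inference, with no ‘content’ examined at all. The watch-list and the record are invented for illustration.

```python
from datetime import datetime

# A single invented ICR-style record: host plus timestamp, no content.
record = ("www.samaritans.org", datetime(2015, 11, 12, 3, 4))

# An invented list of hosts whose mere visit is itself revealing.
sensitive_hosts = {"www.samaritans.org", "hiv-information.example"}

host, when = record
if host in sensitive_hosts and when.hour < 6:
    # No message has been read, yet a deeply personal inference follows.
    print(f"Late-night access to {host} at {when:%H:%M}")
```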

5.5       This makes ICRs ideal for profiling and potentially subject to function creep or mission creep. It also makes them ideally suited to crimes such as identity theft and personalised scamming, and the databases of ICRs created by communications service providers a perfect target for hackers and malicious insiders. Gathering ICRs creates a new range of vulnerabilities. Data, however it is held and whoever it is held by, is vulnerable in a wide range of ways.[6] Recent events have highlighted this very directly: the hacking of TalkTalk, precisely the sort of provider that would be expected to gather and store ICRs, should be taken very seriously. Currently it appears that this hack was carried out not by the kind of ‘cyber-terrorists’ originally suggested, but by disparate teenagers around the UK. Databases of ICRs would seem highly likely to attract the interest of hackers of many different kinds. In practice, too, precisely those organisations that should have the greatest expertise in and the greatest motivation for keeping data secure – from the MoD, HMRC and the US DoD to Swiss banks and technology companies including Sony and Apple – have all proved vulnerable to hacking or other forms of data loss in recent years. Hacking is the most dramatic, but human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to ‘monetise’ data) may be more significant – and the ways that all these vulnerabilities can combine makes the risk even greater.

5.6       ICRs are also unlikely to provide the information that law enforcement and the intelligence and security services need in order to perform the three functions noted above. The first example of this is Facebook. Facebook messages and more open communications would seem, on the surface, to be exactly the kind of information that law enforcement might need to locate missing children – the kind of example referred to in the introduction and guide to the Bill. ICRs, however, would give almost no relevant information in respect of Facebook. In practice, Facebook is used in many different ways by many different people – but the general approach is to remain connected to Facebook all the time. Often this will literally be 24 hours a day, as devices are rarely turned off at night – the ‘connection’ event has little relationship to the use of the service. If Facebook is accessed by smartphone or tablet, it will generally be via an app that runs in the background at all times – this is crucial for the user to be able to receive notifications of events, of messages, of all kinds of things. If Facebook is accessed by PC, it may be via an app (with the same issues) or through the web – but if via the web, this will often be using ‘tabbed browsing’, with one tab on the browser keeping the connection to Facebook available without the need to reconnect.
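A minimal sketch, using invented events, of what an ICR-style log would capture for an always-connected app: a single connection event, however much activity follows it.

```python
# Invented illustration: a day of app activity versus the single
# ICR-style 'connection' event it would generate.
activity = [
    ("08:01", "app connects to service"),  # the only connection event
    ("08:02", "message received"),
    ("12:30", "message sent"),
    ("19:45", "message sent"),
    ("23:10", "message received"),
]

# An ICR records connections, not use: everything after the first
# event is invisible to it.
icr_log = [event for event in activity if "connects" in event[1]]

print(icr_log)  # [('08:01', 'app connects to service')]
```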

5.7       Facebook and others encourage and support this kind of long-term and even permanent connection to their services – it supports their business model and in a legal sense gives them some kind of consent to the kind of tracking and information gathering about their users that is the key to their success. ICRs would not help in relation to Facebook except in very, very rare circumstances. Further, most information remains available on Facebook in other ways. Much of it is public and searchable anyway. Facebook does not delete information except in extraordinary circumstances – the requirement for communications providers to maintain ICRs would add nothing to what Facebook retains.

5.8       The story is similar in relation to Twitter and similar services. A 24/7 connection is possible and indeed encouraged. Tweets are ‘public’ and available at all times, as well as being searchable and subject to possible data mining. Again, ICRs would add nothing to the ways that law enforcement and the intelligence and security services could use Twitter data. Almost all the current and developing communications services – from WhatsApp and SnapChat to Pinterest and more – have similar approaches and ICRs would be similarly unhelpful.

5.9       Further, the information gathered through ICRs would fail to capture a significant amount of the ‘communications’ that can and do happen on the internet – because the interactive nature of the internet now means that almost any form of website can be used for communication without communication being the primary purpose of the website. Detailed conversations, for example, can and do happen in the comments sections of newspaper websites: if an analysis of ICRs showed access to www.telegraph.co.uk, would the immediate thought be that communications are going on? Similarly, coded (rather than encrypted) messages can be put in product reviews on www.amazon.co.uk. I have had detailed political conversations on the message-boards of the Internet Movie Database (www.imdb.com), but an ICR would neither reveal nor suggest the possibility of this.

5.10     This means that ICRs can neither find the innocent missing child via Facebook or its equivalents, nor locate or track the even slightly careful criminal or terrorist. Not enough information is revealed to find either – whilst extra information is gathered that adds to intrusion and vulnerability. The third function stated for ICRs refers to people whose identity is already known; for these people, ICRs provide insufficient information to help. This is one of the areas where more targeted powers would help – and such powers are already envisaged elsewhere in the Bill.

5.11     The conclusion from all of this is that ICRs are not likely to be a useful tool for the functions presented. The closest equivalent form of surveillance used anywhere in the world has been in Denmark, with very poor results. In their evaluation of five years’ experience, the Danish Justice Ministry concluded that ‘session logging’, their equivalent of Internet Connection Records, had been of almost no use to the police.[7] It should be noted that when the Danish ‘session logging’ suggestion was first made, the Danish ISPs repeatedly warned that the system would not work and that the data would be of little use. Their warnings were not heeded. Similar warnings from ISPs in the UK have already begun to emerge. The argument has been made that the Danish failure was a result of the specific technical implementation – I would urge the Committee to examine it in depth to come to a conclusion. However, the fundamental issues noted above are only likely to grow as the technology becomes more complex, the data more dense and interlinked, and the use of it more nuanced. All these trends are likely only to accelerate.

5.12     The gathering and holding of ICRs is also likely to add vulnerabilities for all those about whom they are collected, as well as requiring massive amounts of data storage at considerable cost. At a time when resources are naturally very tight, for money, expertise and focus to be devoted to something like this appears inappropriate.


6          Other brief observations about communications data, definitions and encryption

6.1       There is still confusion between ‘content’ and ‘communications’ data. The references to ‘meaning’ in 82(4), 82(8), 106(8) and 136(4), emphasised in 193(6), seem to add to rather than reduce the confusion – particularly when considered in relation to the kinds of profiling possible from the analysis of basic communications data. It is possible to derive ‘meaning’ from almost any data – this is one of the fundamental problems with the idea that content and communications can be simply and meaningfully separated. In practice, this is far from the case.[8] Further, Internet Connection Records are just one of many examples of ‘communications’ data that can be used to derive deeply personal information – and sometimes more directly (through analysis) than often confusing and coded (rather than encrypted) content.

6.2       There are other issues with the definitions of data – experts have been attempting to analyse them in detail in the short time since the Bill was published, and the fact that these experts have been unable to agree on, or at times even ascertain, the meaning of some of the definitions is something that should be taken seriously. Again, it emphasises the importance of having sufficient time to scrutinise the Bill. Graham Smith of Bird & Bird, in his submission to the Commons Science and Technology Committee,[9] notes that the terms ‘internet service’ and ‘internet communications service’ used in 47(4) are neither defined nor differentiated, as well as a number of other areas in which there appears to be significant doubt as to what does and does not count as ‘relevant communications data’ for retention purposes. One definition in the Bill particularly stands out: 195(1) states that ‘”data” includes any information which is not data’. Quite what is intended by this definition remains unclear.

6.3       In his report, ‘A Question of Trust’, David Anderson QC called for a law that would be ‘comprehensive and comprehensible’: the problems surrounding definitions and the lack of clarity about the separation of content and communications data mean that the Bill, as drafted, does not yet meet either of these targets. There are other issues that make this failure even more apparent. The lack of clarity over encryption – effectively leaving the coverage of encryption to RIPA rather than drafting new terms – has already caused a significant reaction in the internet industry. Whether or not the law would allow end-to-end encryption services such as Apple’s iMessage to continue in their current form, where Apple itself is unable to decrypt messages, needs to be spelled out clearly, directly and comprehensibly. In the current draft of the Bill it is not.

6.4       This could be solved relatively simply by the modification of 189 ‘Maintenance of technical capability’, and in particular 189(4)(c), to make it clear that the Secretary of State cannot impose an obligation to remove electronic protection that is a basic part of the service operated, and that the Bill does not require telecommunications services to be designed in such a way as to allow for the removal of electronic protection.

7          Future Proofing the Bill

7.1       One of the most important things for the Committee to consider is how well shaped the Bill is for future developments, and how the Bill might be protected from potential legal challenges. At present, there are a number of barriers to this, but there are ways forward that could provide this kind of protection.

7.2       The first of these relates to ICRs, as noted in section 5 above. The idea behind the gathering of ICRs appears on the face of it to be based upon an already out-dated understanding of both the technology of the internet and the way that people use it. In its current form, the idea of requiring communications providers to retain ICRs is also a hostage to fortune. The kind of data required is likely to become more complex, of vastly greater volume and increasingly difficult to use. What is already an unconvincing case will become even less convincing as time passes. The best approach would seem to be to abandon the idea of requiring the collection of ICRs entirely, and to look for a different way forward.

7.3       Further, ICRs represent one of the two main ways in which the Bill appears to be vulnerable to legal challenge. It is important to understand that in recent cases at both the CJEU (in particular the Digital Rights Ireland case[10] and the Schrems case[11]) and the European Court of Human Rights (in particular the Zakharov case[12]), it is not just the examination of data that is considered to bring Article 8 privacy rights into play, but the gathering and holding of data. This is not a perverse trend, but rather a demonstration that the European courts are recognising some of the issues discussed above about the potential intrusion of gathering and holding data. It is a trend that is likely to continue. Holding data of innocent people on an indiscriminate basis is likely to be considered disproportionate. That means that the idea of ICRs – where exactly this kind of data would be required to be held – is very likely to be challenged in one of these courts, and indeed is likely to be overturned at some point.

7.4       The same is likely to be true of the ‘Bulk’ powers, unless those bulk powers are more tightly and clearly defined, including the giving of examples. At the moment quite what these bulk powers consist of – and how ‘bulky’ they are – is largely a matter of speculation, and while that speculation continues, so does legal uncertainty. If the powers involve the gathering and holding of the data of innocent people on a significant scale, a legal challenge either now or in the future seems to be highly likely.

7.5       It is hard to predict future developments either in communications technology or in the way that people use it. This, too, is something that seems certain to continue – and it means that being prepared for those changes needs to be built into the Bill. At present, this is done at least in part by having relatively broad definitions in a number of places, to try to ensure that future technological changes can be ‘covered’ by the law. This approach has a number of weaknesses – most notably that it gives less certainty than is helpful, and that it makes ‘function creep’ or ‘mission creep’ more of a possibility. Nonetheless, it is probably inevitable to a degree. It can, however, be ameliorated in a number of ways.

7.6       The first of these ways is to have a regular review process built in. This could take the form of a ‘sunset clause’, or perhaps a ‘renewal clause’ that requires a new, full, debate by Parliament on a regular basis. The precise form of this could be determined by the drafters of the Bill, but the intention should be clear: to avoid the situation that we find ourselves in today with the complex and almost incomprehensible regime so actively criticised by David Anderson QC, RUSI and to an extent the ISC in their reviews.

7.7       Accompanying this, it is important to consider not only the changes in technology, but the changes in people’s behaviour. One way to do this would be to charge those responsible for the oversight of communications with a specific remit to review how the powers are being used in relation to the current and developing uses of the internet. They should report on this aspect specifically.

8          Overall conclusions

8.1       I have outlined above a number of ways in which the Bill, in its current form, does not seem to be workable, proportionate, future-proofed or protected from potential legal challenges. I have made five specific recommendations:

8.1.1    I do not believe the case has been made for retaining ICRs. They appear unlikely to be of any real use to law enforcement in performing the functions that are set out, they add a significant range of risks and vulnerabilities, and they are likely to end up being extremely expensive. This expense is likely to fall either upon the government – in which case it would be a waste of resources that could be put to more productive use to achieve the aims of the Bill – or upon ordinary internet users through increased connection costs.

8.1.2    The Bill needs to be more precise and open about the Bulk Powers, including a proper setting out of examples so that the Committee can make an appropriate judgment as to their proportionality and to reduce the likelihood of their being subject to legal challenge.

8.1.3    The Bill needs to be more precise about encryption and to be clear about the approach to end-to-end encryption. This is critical to building trust in the industry, and in particular with overseas companies such as those in Silicon Valley. It is also a way to future-proof the Bill: though some within the security and intelligence services may not like it, strong encryption is fundamental to the internet now and will become even more significant in the future. This should be embraced rather than fought against.

8.1.4    Oversight needs strengthening and broadening – including oversight of how the powers have been used in relation to changes in behaviour as well as changes in technology.

8.1.5    The use of some form of renewal or sunset clause should be considered, to ensure that the powers are subject to full review and reflection by Parliament on a regular basis.

8.2       The question of resource allocation is a critical one. For example, have alternatives to the idea of retaining ICRs been properly considered for both effectiveness and cost? The level of intrusion of internet surveillance (as discussed in section 3 above) adds to the imperative to consider other options. Where a practice is so intrusive, and impacts upon such a wide range of human rights (Articles 8, 10, 11 and 14 of the ECHR – and possibly Article 6), a very high bar has to be set to make it acceptable. It is not at all clear either that the height of that bar has been appropriately set, or that the benefits of the Bill mean that it has been met. In particular, the likely ineffectiveness of ICRs means that it is very hard to argue that this part of the Bill would meet even a far lower requirement. The risks and vulnerabilities that the retention of ICRs adds will in all probability exceed the possible benefits, even without considering the intrusiveness of their collection, retention and use.

8.3       The most important overall conclusion at this stage, however, is that more debate and analysis is needed. The time made available for analysis is too short for any kind of certainty, and that means that the debate is being held without sufficient information or understanding. Time is also needed to enable MPs and Lords to gain a better understanding of how the internet works, how people use it in practice, and how this law and the surveillance envisaged under its auspices could impact upon that use. This is not a criticism of MPs or Lords so much as a recognition that people in general do not have that much understanding of how the internet works – one of the best things about the internet is that we can use it quickly and easily without having to understand much of what is actually happening ‘underneath the bonnet’ as it were. In passing laws with significant effects – and the Investigatory Powers Bill is a very significant Bill – much more understanding is needed.

8.4       It is important for the Committee not to be persuaded that an event like the recent one in Paris should be considered a reason to ‘fast-track’ the Bill, or to extend the powers provided by the Bill. In Paris, as in all the notable terrorism cases in recent years, from the murder of Lee Rigby and the Boston Bombings to the Sydney Café Siege and the Charlie Hebdo shootings, the perpetrators (or at the very least a significant number of the perpetrators) were already known to the authorities. The problem was not a lack of data or a lack of intelligence, but the use of that data and that intelligence. The issue of resources noted above applies very directly here: if more resources had been applied to ‘conventional’ intelligence it seems, on the surface at least, as though there would have been more chance of the events being avoided. Indeed, examples like Paris, if anything, argue against extending large-scale surveillance powers. If the data being gathered is already too great for it to be properly followed up, why would gathering more data help?

8.5       As a consequence of this, in my opinion the Committee should look not just at the detailed powers outlined in the Bill and their justification, but also more directly at the alternatives to the overall approach of the Bill. There are significant costs and consequences, and the benefits of the approach as opposed to a different, more human-led approach, have not, at least in public, been proven. The question should be asked – and sufficient evidence provided to convince not just the Committee but the public and the critics in academia and elsewhere. David Anderson QC made ‘A Question of Trust’ the title of his review for a reason: gaining the trust of the public is a critical element here.


Dr Paul Bernal

Lecturer in Information Technology, Intellectual Property and Media Law

UEA Law School

University of East Anglia

Norwich NR4 7TJ

Email: paul.bernal@uea.ac.uk


 

[1] The new ‘Hello Barbie’ doll, through which a Barbie doll can converse and communicate with a child, has caused some controversy recently (see for example http://www.theguardian.com/technology/2015/nov/26/hackers-can-hijack-wi-fi-hello-barbie-to-spy-on-your-children), but is only one example of a growing trend.

[2] See http://www.cam.ac.uk/research/news/computers-using-digital-footprints-are-better-judges-of-personality-than-friends-and-family

[3] Available online at http://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/CallForSubmission.aspx

[4] http://www.bbc.co.uk/news/uk-politics-22984367

[5] See for example the 2015 report of the UN Special Rapporteur on Freedom of Expression, where amongst other things he makes particular reference to encryption and anonymity. http://daccess-dds-ny.un.org/doc/UNDOC/GEN/G15/095/85/PDF/G1509585.pdf?OpenElement

[6] Some of the potential vulnerabilities are discussed in Chapter 6 of my book Internet Privacy Rights – Rights to Protect Autonomy, Cambridge University Press, 2014.

[7] See http://www.ft.dk/samling/20121/almdel/reu/bilag/125/1200765.pdf – in Danish

[8] This has been a major discussion point amongst legal academics for a long time. See for example the work of Daniel Solove, e.g. Reconstructing Electronic Surveillance Law, Geo. Wash. L. Review, vol 72, 2003-2004

[9] Published on the Committee website at http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee/investigatory-powers-bill-technology-issues/written/25119.pdf

[10] Joined Cases C‑293/12 and C‑594/12, Digital Rights Ireland and Seitlinger and Others, April 2014, which resulted in the invalidation of the Data Retention Directive.

[11] Case C-362/14, Maximillian Schrems v Data Protection Commissioner, October 2015, which resulted in the declaration of invalidity of the Safe Harbour agreement.

[12] Roman Zakharov v. Russia (application no. 47143/06), ECtHR, December 2015