Guest post: Data Retention: I can’t believe it’s not lawful, can you? A response to Anthony Speaight QC

Guest post by Matthew White

Introduction:

Ladies and gentlemen, Bagginses and Boffins. Tooks and Brandybucks. Grubbs! Chubbs! Hornblowers! Bolgers! Bracegirdles! Proudfoots. Put your butter away for I am about to respond, rebut, rebuke and more to a recent blog post for Judicial Power Project, by Anthony Speaight QC on data retention.

Blanket data retention is unlawful, please deal with it:

Speaight starts off by referring to the recent Court of Appeal (CoA) judgment in Tom Watson and Others v Secretary of State for the Home Department [2018] EWCA Civ 70 and how the Court of Justice of the European Union (CJEU) has created problems and uncertainties with regards to data retention. As David Allen Green would say, ‘Well…’ Well, just to be clear, the position of the CJEU on blanket indiscriminate data retention is crystal clear. It. Is. Unlawful. It just happens that the CoA took the position of sticking their fingers in their ears and pretending that the CJEU’s ruling doesn’t apply to UK law, because it’s somehow (it’s not) different.

Just billing data is retained? Oh really?

Next, Speaight recaps the data retention saga so far, in that telecommunications companies have always recorded who uses their services, when and where, often for billing purposes. A long time ago, in a galaxy far, far away (a few years ago, and anywhere with an internet connection), this position was a robust one. But the European Commission (Commission) in 2011 highlighted that:

[T]rends in business models and service offerings, such as the growth in flat rate tariffs, pre-paid and free electronic communications services, meant that operators gradually stopped storing traffic and location data for billing purposes thus reducing the availability of such data for criminal justice and law enforcement purposes.

So, in a nutshell, data for billing purposes are on the decrease. This would explain why the Data Retention Directive (DRD) (discussed more below) affected:

[P]roviders of electronic communication services by requiring such providers to retain large amounts of traffic and location data, instead of retaining only data necessary for billing purposes; this shift in priority results in an increase in costs to retain and secure the data.

So, it’s simply untrue to refer to just billing data when talking about data retention, because this isn’t the only data that is or has ever been sought.

It’s the Islamists’ fault that we have data retention:

Speaight next points out that it was the advent of Islamist international terrorism that made it advantageous to place data retention obligations on companies. Oh really? Are we going down this route? Well….. demands for data retention can be traced back to the ‘International Law Enforcement and Telecommunications Seminars’ (ILETS) (6), and in its 1999 report, it was realised that Directive 97/66/EC (the old ePrivacy Directive), which made retention of communications data possible only for billing purposes, was a problem. The report sought to ‘consider options for improving the retention of data by Communication Service Providers.’ Improve? Ha. Notice how 1999 was before 9/11? Funny that.

It doesn’t stop there though. A year later (still before 9/11), the UK’s National Crime and Intelligence Service (NCIS) made a submission (on behalf of MI5, MI6, GCHQ etc) to the Home Office on data retention laws. They ironically argued that a targeted approach would be a greater infringement on personal privacy (para 3.1.5). Of course, they didn’t say how or why this was the case, because, reasons. Charles Clarke, the then junior Home Office Minister, and Patricia Hewitt, an ‘E-Minister’, both made the claim that such proposals would never happen (Judith Rauhofer, ‘Just Because You’re Paranoid, Doesn’t Mean They’re Not After You: Legislative Developments in Relation to the Retention of Communications Data’ (2006) SCRIPTed 3, 228; Patricia Hewitt and Charles Clarke, Joint letter to Independent on Sunday, 28 Jan 2000) and should not be implemented (Trade and Industry Committee, UK Online Reviewed: the First Annual Report of the E-Minister and E-Envoy Report (HC 66 1999-2000), Q93).

Guess what? A year later Part 11 of the Anti-terrorism, Crime and Security Act 2001 (ATCSA 2001) came into force three months after 9/11 (Judith Rauhofer, 331). The Earl of Northesk, however, pointed out that ‘there is no evidence whatever that a lack of data retained has proved an impediment to the investigation of the atrocities’ on 9/11 (HL Deb 4 Dec vol 629 col. 808-9). What this demonstrates is that data retention was always on the cards, even when its utility wasn’t proven. Even the then Prime Minister Tony Blair noted that ‘all the surveillance in the world’ could not have prevented the 7/7 bombings. It’s just that, as Roger Clarke succinctly puts it:

“[M]ost critical driver of change, however, has been the dominance of national security extremism since the 2001 terrorist attacks in the USA, and the preparedness of parliaments in many countries to grant law enforcement agencies any request that they can somehow link to the idea of counter-terrorism.” (Roger Clarke, ‘Data retention as mass surveillance: the need for an evaluative framework’ (2015) International Data Privacy Law 5:2 121, 122).

Islamist terrorism was just fresh justification (7, 9), as ‘the EU governments always intended to introduce an EC law to bind all member states to adopt data retention.’ Mandatory data retention was championed by the UK during its Presidency of the European Council (Council) (9) (and yes, that includes the ‘no data retention from us’ Charles Clarke (who was accused of threatening the European Parliament to agree to data retention (9))) and described as a master class in diplomacy and political manoeuvring (Judith Rauhofer, 341) (and they say it’s the EU that tells us what to do!!). Politicians goin’ politicate. Yes, the DRD makes reference to the Madrid bombings, but the DRD was not limited to combating terrorism (6), just as the reasons for accessing communications data in UK law under s.22 of the Regulation of Investigatory Powers Act 2000 (RIPA 2000) were not solely based on fighting terrorism. There is nothing wrong with saying that data retention (yeah, but not blanket, of course) and access to said data can be important in the fight against Islamist terrorism, but would you please stop pretending that was the basis on which data retention was sought?

Data retention was smooth like rocks:

Next, Speaight points to the ‘smooth operation’ of the data retention system. Smooth how and in what ways? Harder to answer that is, yess! Well….. in 2010, the Article 29 Working Party (WP29) pointed out that ‘the lack of available sensible statistics hinders the assessment of whether the [data retention] directive has achieved its objectives.’ The WP29 went further, pointing out that there was a lack of harmonisation in national implementation of the DRD (2). This was the purpose of the DRD (harmonising data retention across the EU), and it didn’t even achieve what it set out to do.

What about its true purpose? You know, spying on every EU citizen? Well, the European Data Protection Supervisor (EDPS) responded to the Commission’s evaluation of the DRD. WARNING: EDPS pulls no punches. First, the EDPS reiterated that the DRD was based upon the assumption of necessity (para 38). Secondly, the EDPS criticised the Commission’s assertion that most Member States considered data retention a necessary tool when conclusions were based on just over a third (that’s less than half, right?) of them (para 40). Thirdly, these conclusions were, in fact, only statements (para 41). Fourthly, the EDPS highlighted that there should be sufficient quantitative and qualitative information to assess whether the DRD is actually working and whether less privacy-intrusive measures could achieve the same result; such information should show the relationship between use and result (para 43).

Surprise, surprise, the EDPS didn’t find sufficient evidence to demonstrate the necessity of the DRD and said that further investigations into alternatives should commence (para 44). Fifthly, the EDPS pretty much savaged the quantitative and qualitative information available (paras 45-52). A few years later, the CJEU asked for proof of the necessity of the DRD. There was a lack of statistical evidence from EU Member States, the Commission, the Council and the European Parliament, and despite that, they had the cheek to ask the CJEU to reject the complaints made by Digital Rights Ireland and others anyway (ibid). Only the Austrian government were able to provide statistical evidence on the use (not retention) of communications data, which didn’t involve any cases of terrorism (ibid). The UK’s representatives admitted (come again? The UK admits something?) there was no ‘scientific data’ to underpin the need for data retention (ibid), which begs the question: wtaf had the DRD been based upon? Was it the assumption of necessity the EDPS referred to? Draw your own conclusions. The moral of the story is that the DRD did not operate smoothly.

Ruling against data retention was a surprise?

Speaight then moves onto the judgment that started it all, Joined Cases C‑293/12 and C‑594/12, Digital Rights Ireland in which the CJEU invalidated the DRD across the EU. According to Speaight, this came as a ‘surprise.’

I felt a great disturbance in the Law, as if thousands of spies, police, other public authorities, politicians and lawyers suddenly cried out in terror, as the State were suddenly unable to spy anymore. I fear something terrible has happened.

So, who was surprised? Was it the European Parliament, who had initially opposed this form of data retention as they urged its use must be entirely exceptional, based on specific comprehensible law, authorised by judicial or other competent authorities for individual cases and be consistent with the European Convention on Human Rights (ECHR)? Was it a surprise to them when they also noted that ‘a general data retention principle must be forbidden’ and that ‘any general obligation concerning data retention’ is contrary to the proportionality principle (Abu Bakar Munir and Siti Hajar Mohd Yasin, ‘Retention of communications data: A bumpy road ahead’ (2004) The John Marshall Journal of Computer & Information Law 22:4 731, 734; Clive Walker and Yaman Akdeniz, ‘Anti-Terrorism Laws and Data Retention: War is over?’ (2003) Northern Ireland Legal Quarterly 54:2 159, 167)?

Was it a surprise to Patrick Breyer who argued that data retention was incompatible with Articles 8 and 10 of the ECHR back in 2005 (372, 374, 375)? Was it a surprise to Mariuca Morariu who argued that the DRD had failed to demonstrate its necessity (Mariuca Morariu, ‘How Secure is to Remain Private? On the Controversies of the European Data Retention Directive’ Amsterdam Social Science 1:2 46, 54-9)? Was it a surprise to Privacy International (PI), the European Digital Rights Initiative (EDRi), 90 NGOs and 80 telecommunications service providers (9) who were against the DRD? Was it a surprise to the 40 civil liberties organisations who urged the European Parliament to vote against the retention of communications data?

Was it a surprise to the WP29, the European Data Protection Commissioners, the International Chamber of Commerce (ICC), European Internet Services Providers Association (EuroISPA), the US Internet Service Provider Association (USISPA), the All Party Internet Group (APIG) (Abu Bakar Munir and Siti Hajar Mohd Yasin, 746-749) and those at the G8 Tokyo Conference? Hell, even our own assistant Information Commissioner, Jonathan Bamford, back in 2001 wouldn’t be surprised because he said ‘Part 11 isn’t necessary, and if it is necessary it should be made clear why’ (HL Deb 27 Nov 2001 vol 629 cc183-290, 252). Was it a surprise when prior to Digital Rights Ireland:

Bulgaria’s Supreme Administrative Court, the Romanian, German Federal, Czech Republic Constitutional Courts and the Supreme Court of Cyprus all [declared] national implementation of the DRD either invalid or unconstitutional (in some or all regards) and incompatible with Article 8 ECHR?

Was Jules Winnfield surprised?

The point I’m trying to hammer home is that (you’ve guessed it) the CJEU’s ruling in Digital Rights Ireland should come as no surprise. Still on the issue of surprise, for Speaight it arose because the ruling departed from decisions of the European Court of Human Rights (ECtHR) and the CJEU itself. Ok, let’s look at these ECtHR cases Speaight refers to. The first is Weber and Saravia v Germany, a case on ‘strategic monitoring.’ This is a whole different kettle of fish when compared to the DRD, as this concerned the surveillance of 10% (I’m not saying this is cool either btw) [30, 110] of German telecommunications, not the surveillance of ‘practically the entire European population’ [56]. Ok, that may have been an exaggeration by the CJEU as there are only 28 (we’re not so sure about one though) EU Member States, but the point is, the powers in question are not comparable. The DRD was confined to serious crime without even defining it [61], whereas German law in Weber concerned six defined purposes for strategic monitoring [27] and could only be triggered through catchwords [32]. In Digital Rights Ireland, authorisation for access to communications data in the DRD was not dependent upon ‘prior review carried out by a court or by an independent administrative body’ [62], whereas in Weber this was the case [21, 25]. Apples and oranges.

The second ECtHR case was Kennedy v UK, and it’s funny that this case is brought up. The ECtHR in this case referred to a previous case, Liberty v UK in which the virtually unfettered power of capturing external communications [64] violated Article 8 of the ECHR [70]. The ECtHR in Kennedy referred to this as an indiscriminate power [160, 162] (bit like data retention huh?), and the UK only succeeded in Kennedy because the ECtHR were acting upon the assumption that interception warrants only related to one person [160, 162]. Of course, the ECtHR didn’t know that ‘person’ for the purposes of RIPA 2000 meant ‘any organisation and any association or combination of persons,’ so you know, not one person literally.

And this was, of course, prior to Edward Snowden’s bombshell of surveillance revelations, which triggered further proceedings by Big Brother Watch. A couple of years ago, in Roman Zakharov v Russia, the ECtHR’s Grand Chamber (GC) ruled that surveillance measures that are ‘ordered haphazardly, irregularly or without due and proper consideration’ [267] violate Article 8 [305]. That is because the automatic storage of clearly irrelevant data would contravene Article 8 [255]. This coincides with Advocate General (AG) Saugmandsgaard Øe’s opinion that the ‘disadvantages of general data retention obligations arise from the fact that the vast majority of the data retained will relate to persons who will never be connected in any way with serious crime’ [252]. That’s a lot of irrelevant data if you ask me. Judge Pinto de Albuquerque, in his concurring opinion in Szabo and Vissy v Hungary, regards Zakharov as a rebuke of the ‘widespread, non-(reasonable) suspicion-based, “strategic surveillance” for the purposes of national security’ [35]. So, I’d say that even Weber and Saravia is put into doubt. And so, even if the CJEU rules that data retention in the national security context is outside its competence, there is enough ECtHR case law to bite the UK on its arse.

Probably the most important ECtHR case not mentioned by Speaight (why is that?) is S and Marper v UK: this is the data retention case. Although this concerned DNA data retention, the ECtHR’s concerns ‘have clear applications to the detailed information revealed about individuals’ private lives by communications data.’ What did the GC rule in S and Marper? Oh, was it that blanket indiscriminate data retention ‘even on a specific group of individuals (suspects and convicts) violated Article 8’? Yes, it was, and it was S and Marper to which the CJEU referred on three separate occasions in Digital Rights Ireland [47, 54-5]. Tele2 and Watson (where the CJEU reconfirmed that blanket indiscriminate data retention is prohibited under EU law) is just the next logical step with regards to communications data. And so, far from being surprising, the CJEU in Digital Rights Ireland and Tele2 and Watson are acting in a manner that is consistent with the case law of the ECtHR.

The CJEU case law that Speaight refers to is Ireland v Parliament and Council which was a challenge to the DRD’s legal basis, not whether it was compatible with the Charter of Fundamental Rights, so I’m not entirely sure what Speaight is trying to get at. All in all, Speaight hasn’t shown anything to demonstrate that Digital Rights Ireland has departed from ECtHR or CJEU case law.

You forgot to say the UK extended data retention laws:

Speaight then rightly acknowledges how the UK government replaced UK law implementing the DRD with the Data Retention and Investigatory Powers Act 2014 (DRIPA 2014) in lightspeed fashion. What Speaight omits, however, is that DRIPA 2014 extended retention obligations from telephone companies and Internet Service Providers (ISPs) to Over-The-Top (OTT) services such as Skype, Twitter, Google, Facebook etc. James Brokenshire MP attested that DRIPA 2014 was introduced to clarify what was always covered by the definition of telecommunications services (HC Deb 14 July, vol 584, 786). This, of course, was total bullshit (5), but like I said, politicians goin’ politicate.

Claimants don’t ask questions, courts do:

Speaight moves on to the challenges to DRIPA 2014; we know the story already: the High Court (HC) said it was inconsistent with Digital Rights Ireland, whereas the CoA disagreed, blah, blah. Speaight points out that the claimants had no issue with data retention in principle, which is true, but so what? Speaight also points out that the CJEU went further than what the claimants asked by ruling that blanket indiscriminate data retention was not permissible under EU law. Wait, what the fark? It’s not the bloody claimants that ask the CJEU a question on the interpretation of EU law; I’m pretty sure it was the Swedish referring court (via Article 267 of the Treaty on the Functioning of the EU, you know, a preliminary reference) that asked the CJEU:

Is a general obligation to retain traffic data covering all persons, all means of electronic communication and all traffic data without any distinctions, limitations or exceptions for the purpose of combating crime (as described [below under points 1-6]) compatible with Article 15(1) of Directive 2002/58/EC, taking account of Articles 7, 8 and 15(1) of the Charter?

And the CJEU said no. End of discussion.

The ends don’t always justify the means and for clarity, the CJEU didn’t reject shit:

Speaight also says that the CJEU in Tele2 and Watson rejected AG Saugmandsgaard Øe’s advice that the French government found access to communications data useful in its investigations into terrorist attacks in 2015. Such a position, however, falls victim to several questions, such as: under what circumstances was the data sought? Was it accessed as a consequence of the legal obligation to retain? Or was it already retained for business purposes? What were the results of the use of that data? Could the same results have been achieved using less intrusive means? Saying it is useful tells us nothing, as the ECtHR has plainly said necessity (in a democratic society) is not as flexible as expressions such as ‘useful’ [48], and as the CJEU rightly noted, an objective of general interest, however fundamental, cannot in itself justify general indiscriminate data retention [103]. This demonstrates that the CJEU didn’t reject anything; they didn’t even refer to the French government’s evidence. They just said that, as fundamental as fighting serious crime may be, it and the measures employed cannot by themselves justify such a fundamental departure from the protection of human rights. Just because you can, doesn’t mean you should. The ECtHR said something similar in Klass v Germany, in that States ‘may not, in the name of the struggle against espionage and terrorism, adopt whatever measures they deem appropriate’ [49].

The CJEU doesn’t have to answer what it wasn’t asked:

Speaight then whines about the CJEU not addressing the issue of national security. Well, they weren’t asked about national security in Tele2 and Watson, were they? Like I said, even if the CJEU doesn’t have competence to rule on national security based data retention, Roman Zakharov is watching you from Strasbourg (he’s not actually in Strasbourg, I don’t think, but you dig).

What’s your problem with notification?

Speaight also bemoans the obligation to notify saying this requirement could damage investigations and surveillance and went beyond what the claimants had asked. Well, again, the claimants weren’t asking the questions, ffs, and the CJEU made this point by referring to previous case law, notably, Schrems [95]. The CJEU made very clear that notification should be done ‘as soon as that notification is no longer liable to jeopardise the investigations being undertaken by those authorities’ [121]. This is consistent with the ECtHR’s stance. Both courts are aware that notification can defeat the purpose of the investigation, and sometimes even after it has concluded, notification may still not be appropriate. But Speaight seems to omit this crucial detail.

Lawyers getting mad:

Speaight notes that criticism of Tele2 is not confined to Eurosceptics. Sure, but you don’t have to be a Europhile to defend it either. He also noted that it was roundly condemned by all the participants at a meeting of the Society of Conservative Lawyers. Well, no shit to my Sherlock, the name kinda gave it away. He also notes that the former Independent Reviewer of Terrorism Legislation, David Anderson QC, said it was the worst judgment he knew of. Wait till Anderson reads the ECtHR’s case law on this matter then, which if anything, on a proper reading, goes further than Tele2. Speaight also points out that Dominic Grieve QC MP was pissed, and that a distinguished member of the French Bar, Francois-Henri Briard, basically said we need more conservative judges to trample on fundamental rights. If a judgment that protects the fundamental rights of all EU citizens pisses off a few lawyers, so be it.

Conclusions:

I’ve spent way too much time on Speaight’s post, and the really sad thing is, I’ve enjoyed it. It’s hard to have a conversation about data retention when, just to make your point, you first have to sift through a load of bollocks, and there was plenty of bollocks. And by the time you’ve cleared through all the falsities and misleading or exaggerated points, you run close to 4k words without actually saying what your position is. So, my position for this blog post is, we should always shoot down rubbish when it shows its ugly face or else it festers. Actually, the point is, I can believe that blanket indiscriminate data retention is unlawful.

Twitter/DataSift – an early ICO response

I’ve just received a response from the ICO to my initial question about whether or not they were investigating the Twitter/DataSift issue (about which I’ve just blogged here).

This is the full response (set down here with the permission of Dr Simon Rice of the ICO)

————————————————

Paul,

David Smith passed on your email regarding Twitter/DataSift.

The ICO is aware of an arrangement between Twitter and some third-parties which permits access to a greater volume of Tweets than would normally be accessible through the website or API. Insofar as they are required to comply with UK law both Twitter and these third-parties would need to ensure that they remain compliant with the DPA and PECR for the processing undertaken with such data.

The report linked to from your blog suggests that the data is used for purposes of thematic analysis and not for direct marketing or otherwise attempting to identify the users of the Twitter accounts. This is important because clearly a third party learning that I might be interested in their products and marketing me on that basis still needs to comply with the rules on marketing and still needs to justify why they are holding personal data relating to me; on the other hand, a third party which analyses the mass of tweets to infer that their efforts are best focussed on a particular demographic or geographical area might not face the same compliance problems. Then, of course, there are the mass of third parties whose activities lie somewhere in the middle.

The privacy policy at http://twitter.com/privacy does state that the sharing of non-personal data may take place and we would expect Twitter to comply with this. However, if you are aware of evidence that is contrary to this understanding then of course please do not hesitate to let us know.

If you have any further questions please feel free to get in contact.

Regards,

Simon Rice

Dr Simon Rice Principal Policy Adviser (Technology)

Information Commissioner’s Office, Wycliffe House, Water Lane, Wilmslow, Cheshire, SK9 5AF.

————————————————

I would welcome any responses – but it seems to me that we would need to see the details of the agreement between Twitter and DataSift (and any other subsequent agreements) to see whether they meet the requirements of the ICO as set out in the letter. There’s more to investigate here – I will be interested to see how DataSift might be able to guarantee that they will only be using the data for thematic analysis rather than direct marketing, and have written to DataSift to ask that question.

Dr Rice has asked that anyone contacting the ICO directly should use the usual ICO website or helpline (see https://www.ico.gov.uk/Global/contact_us.aspx)

10 things I hate about the ICO

With apologies to William Shakespeare, Elizabeth Barrett Browning, Heath Ledger, Julia Stiles and many more…

10 things I hate about the ICO

I hate the way you ask for teeth but seem afraid to bite
I hate the way you think the press are far too big to fight
I hate the way you always think that business matters most
Leaving all our online rights, our privacy, as toast

I hate the way you keep your fines for councils and their kind
While leaving business all alone, in case the poor dears mind
I hate the way you take the rules that Europe writes quite well
And turn them into nothing much, as far as we can tell

I hate the way that your advice on cookies was so vague
Could it possibly have been, you were a touch afraid?
I hate the way you talked so tough to old ACS Law
But when it came to action, it didn’t hurt for sure

I hate the way it always seems that others take the fore
While you sit back and wait until the interest is no more
I hate that your investigations all stop far too soon
As PlusNet, Google and BT have all found to their boon

I hate the way you tried your best to hide your own report
‘Bury it on a busy day’; a desperate resort!
You should be open, clear and fair, not secretive and poor
We’ll hold you up for all to see – we expect so much more!

I hated how when Google’s cars were taking all our stuff
You hardly seemed to care at all – that wasn’t near’ enough
Even when you knew the truth, you knew not what to do
It took the likes of good PI to show you where to go…

I hated how my bugbears Phorm, didn’t get condemned
Even when their every deed could not help but offend
You let them off with gentle words, ‘must try harder’ you just said
Some of us, who cared a lot, almost wished you dead

You tease us, tempt us, give us hope – then let us down so flat
We think you’re on our side – you’re not – and maybe that is that!
Will all these bad things ever change? We can but hope and dream
That matters at the ICO aren’t quite as they might seem.

We need you, dearest ICO, far more than we should
We’d love you if you only tried to do the job you could
We’d love you if you stood up tall, and faced our common foes
Until you do, sad though it is, then hatred’s how it goes.

P.S. I don’t really hate the ICO at all really…. this is ‘poetic’ licence!

12 wishes for online privacy….

It’s that time of year for lists, predictions and so forth. I don’t want to make predictions myself – I know all too well how hard it is to predict anything in this world, and even more so in the online world. I do, however, have wishes. Many of these are pipe dreams, I’m afraid, but some of them do have some small hope of coming true. So here they are, my twelve wishes for online privacy…

  1. That I don’t hear the ‘if you’ve got nothing to hide…’ argument against privacy ever again…
  2. That governments worldwide begin to listen more to individuals and to advocacy groups and less to the industry lobby groups, particularly those of the copyright and security industries
  3. That privacy problems continue to grab the headlines – so that privacy starts to be something of a selling point, and companies compete to become the most ‘privacy-friendly’ rather than just paying lip service to privacy
  4. That the small signs I’ve been seeing that Google might be ‘getting’ privacy do not turn out to be illusions. Go on, Google, go on!
  5. That my ‘gut feeling’ that 2012 could be the peak year for Facebook turns out to be true. Not because I particularly dislike Facebook – I can see the benefits and strengths of its system – but because the kind of domination and centralisation it represents can’t be good for privacy in the end, and I don’t believe that the man who said that privacy was no longer a ‘social norm’ has really changed his spots
  6. That the ICO grows some cojones, and starts understanding that it’s supposed to represent us, not just find ways for businesses to get around data protection regulations…
  7. That the media (and yes, I’m talking to YOU, BBC), whenever they get told about a new technical innovation, don’t just talk about how wonderful and exciting it is, but think a little more critically, and particularly about privacy
  8. That the revision to the Data Protection Directive (or perhaps Regulation) turns into something that is both helpful and workable – and not by compromising privacy to the wishes of business interests.
  9. That neither SOPA nor PIPA get passed in the US…
  10. That the right to be forgotten, something I’ve written about a number of times before, is discussed for what it is, not what people assume it must be based solely on the misleading name. It’s not about censorship or rewriting history. It really isn’t! It’s about people having rights over their own data! Whose data? Our data!
  11. That the Labour Party begins to put together a progressive digital policy, and says sorry for ever having listened to the copyright lobby in introducing the Digital Economy Act! 
  12. That we start thinking more about the ordinary privacy of ordinary people, not just that of celebrities and politicians… 
These are of course just a sample of the things I could say – but if even a few of them start to become true, it would be a really good start. Here’s wishing….

Whose data? Our data!!!

There’s a slogan echoing around the streets of major cities around the globe at the moment: ‘Whose streets – our streets!’ It’s the mantra of the ‘occupy’ movement, expressing the frustration and injustice – particularly economic injustice – and the sense that all kinds of things that should be ‘ours’ have been taken out of ‘our’ control.

The same could – and should – be said about personal data. The mantra of the occupy movement has a very direct parallel in the world of data, which is why I think we should be saying, loud and proud, ‘Whose data – our data!’

Just as for the occupy movement (which I’ve written about before), the chances of getting everything that we want in relation to data are slim – but the chances of changing the agenda in relation to data are not, and the chances of bringing about some real changes in the medium and long term even less so. The occupy movement, particularly in the US, have brought some ideas that previously were hardly talked about in the media, like wage and wealth inequality, close to the top of the agenda. They may even have moved it high enough that politicians feel the need to do something about it – I certainly hope so.

The personal data agenda.

Can we do the same for personal data? One of the current points of discussion is the idea of a ‘right to be forgotten’ – something that relates directly to the question of whether personal data is ‘ours’ in any meaningful way. I’ve spoken and written about it a lot before – my academic article on my take on it, ‘a right to delete?’ can be found online here, while I’ve also blogged on the subject on the INFORRM blog. It’s currently under discussion as part of the forthcoming revision to the Data Protection Directive, to great resistance from the UK. The latest manifestation of this resistance has come from the ICO, suggesting that the right to be forgotten should not be included as it would be unenforceable, and that the inclusion would give people unrealistic expectations, as well as potentially interfering with free speech. Effectively, they seem to be suggesting that including it would send out the wrong message. This pronouncement echoes previous statements by Ken Clarke in May, and Ed Vaizey a couple of weeks ago – it looks like part of a campaign to rein in the attempts by Europe to give more weight to privacy and user rights in the balancing exercise with business use of personal data.

Are the ICO right?

I believe that the ICO are wrong about this in a number of ways. First of all, I think they’re wrong about the unenforceability issue – at least to a great extent. At the Mexico City conference on data protection earlier this month, even Google admitted that they could do their part, but that it would be expensive. That’s very different from saying that it is unenforceable. What’s more, it doesn’t have to be perfectly implemented in order to benefit people – if, for example, the right to be forgotten would allow people to easily, simply and quickly delete their Facebook profiles, or the data held on them by Tesco, that could be significant. It could also, as I’ve argued in my article, help persuade businesses to develop business models less dependent on the gathering and holding of massive amounts of personal data – if they know that such data might be ‘deletable’.

Secondly, I believe they’re quite wrong about the free speech issue – again, as I outline in my paper, if proper exceptions are put in place to allow archives to be kept, then free speech isn’t affected at all. The idea is not to be able to delete a record of what school you went to – but to be able to delete records of what breakfast cereal you bought, or profiles created based on surveillance of your internet activity.

Thirdly, and perhaps most importantly, I think they’re wrong about the message being sent out – profoundly wrong. The message that the ICO is sending out is that business matters more than people’s rights – and it’s a message that has echoes throughout the world at the moment, echoes of the attitude that has provoked the anger in so many people that lies behind the ‘occupy’ movement. It’s the same logic as that which supports bankers’ bonuses over benefits for the disabled, and looks for tax cuts for the rich whilst enforcing austerity measures that cut public services to the bone and beyond. Even more importantly, it suggests that the ICO does not see its role as protecting individual rights over data – but as supporting the government’s business agenda.

Whose data – our data!

The actions and messages of the ICO are essentially saying that this is too difficult to do, so we shouldn’t even try. It reminds me very much of the arguments against the idea of having smoke-free restaurants and pubs – a lot of people said it would be impossible, and would drive the restaurants and pubs out of business. Further back, there have been similar arguments throughout history – most dramatically, they were made against the abolition of slavery. We shouldn’t let this kind of logic stop us from doing what is right – we should find a way. And we can find a way, if only we can find the will. The ICO needs to be stronger, to understand that it has to serve us, not just business or the government. Privacy International asked in February whether the ICO was fit for purpose – and the answer increasingly seems to be clearly not. We need to remind them what their purpose should be – and that, more than anything else, is to represent us, the people. We need to remind them whose data they’re supposed to be protecting. Whose data? Our data!

The ICO: between a rock and a hard place? Not really…

In the last week I’ve been to two events at which representatives of the Information Commissioner’s Office have spoken. First came the 16th March meeting of the Society for Computers and Law entitled ‘Privacy by Design: Grand Design or Pipe Dream?’, at which Steve Wood, the ICO’s ‘Head of Policy Design’, spoke to a mixed group of lawyers, some representing companies in the computing business. Then, on 22nd March, the Information Commissioner himself, Christopher Graham, spoke to the Westminster Media Forum, which was discussing ‘Social media, online privacy and the ‘right to be forgotten’.

On both occasions, the representatives of the ICO had a pretty rough ride, one way or another. At the first meeting, Steve Wood was given a hard time by people working in or for the online services industry over the way that the ICO has dealt with the ‘EU Cookie Directive’, about which the ICO has recently issued a warning, suggesting that ‘UK businesses must ‘wake up’ to new EU law on cookies’. To put it at its most basic, the ICO was being castigated for being too tough on the industry. Steve Wood’s primary defence seemed to be ‘don’t shoot the messenger’ – that all they were doing was following orders from the EU – though how well that defence went down with the audience seemed a little unclear.

At the Westminster Media Forum, the Commissioner himself had an equally rough ride – and I have to admit that I was one of those who asked him a question that was perhaps a little negative in angle, wondering why so little attention was paid to data minimisation by the ICO, despite it embodying some of the most fundamental principles of data protection. The reply I got was somewhat terse – but my question was one of the gentlest that the Commissioner had to answer. In effect, he was being challenged by privacy advocates, consumer groups and others (including Microsoft’s Caspar Bowden) for not being tough enough on the industry.

So what are the ICO to do? One week they’re attacked for being too tough on the industry, the next they’re attacked for not being tough enough. Is either form of attack fair or justified? Is there anything that the ICO can do to meet the expectations of both sides? Is the problem just an intractable one that can’t be resolved?

As someone who’s on the privacy advocacy side of the debate, I have a lot of sympathy for the ICO. They do a lot of good things, provide a lot of good guidance, and generally say the right things. They try to tread a delicate path between the industry and the people – and do their best to tread that path with care and without causing too many fights – and have asked for (and now received) some more ‘teeth’ to punish those who transgress and deter those who might be tempted to.

Still, however, I find myself wanting to criticise them quite a lot of the time, and find myself in general agreement with NGOs like Privacy International, who wondered in February whether the ICO was fit for purpose. Why? Mostly, because the role I think they should be playing does not seem to be the role that they think they’re playing. They shouldn’t be playing a kind of conciliation service, working out compromises between the industry and the people – they should be on the side of the people first and foremost, and supporting those people’s rights. We haven’t got anyone else on our side – while the industry has huge amounts of lobbying power, together with the support of great ministries of government for whom trade and finance are the be-all and end-all. They also have the tacit support of large parts of the security lobby, who’d like as much surveillance and data retention as possible, as many back-doors into websites and social networks as possible, and would be happy for the industry to do the building, gathering and retaining of data for them.

So does the ICO need to be so careful not to upset them? I don’t think so – they should be braver, willing to speak out and upset companies when those companies need to be upset, and to challenge them when they need to be challenged. They shouldn’t be ashamed of this – Steve Wood seemed highly apologetic at the SCL meeting, as if he was ashamed to acknowledge that, deeply flawed though the Cookie Directive may be, it was introduced to address a real issue, and a real issue that the industry had failed to address itself. If the ICO ends up caving in on this issue too, it really will be showing that it’s not fit for purpose…

How personal is personal?

The Register is reporting that the ICO wants a clearer definition of what constitutes ‘personal data’ – and it is indeed a crucial question, particularly under the current data protection regime. The issue has come up in the ICO’s response to the Government consultation on the review of the Data Protection Directive – and one of the key points is that there is a difference between how personal data is defined in the directive and how it is defined in the UK Data Protection Act. That difference gives scope for lots of legal argument – and is one of many factors that help to turn the data protection regime from something that should be about rights and personal protection into something often hideously technical and legalistic. The ICO, fortunately, seems to recognise this. As quoted in The Register, ICO Deputy Director David Smith says:

“We need to ensure that people have real protection for their personal information, not just protection on paper and that we are not distracted by arguments over interpretations of the Data Protection Act,”

That’s the crux of it – right now, people don’t really have as much real protection as they should. Will any new version of the directive (and then the DPA) be any better? It would be excellent if it were, but right now it’s hard to imagine that it will be, unless there is a fundamental shift in attitudes.

There’s another area, however, that just makes it into the end of the Register’s article, that may be even more important – the question of what constitutes ‘sensitive personal data’.  Here, again, the ICO is on the ball – this is again from the Register:

“The current distinction between sensitive and non-sensitive categories of personal data does not work well in practice,” said the submission. “The Directive’s special categories of data may not match what individuals themselves consider to be ‘sensitive’ – for example their financial status or geo-location data about them.”

The ICO go on to suggest not a broadening of the definition of sensitive personal data, but a more ‘flexible and contextual approach’ to it – and they’re right. Data can be sensitive in one context, not sensitive in another. However, I would suggest that they’re not going nearly far enough. The problem is that the idea of the ‘context’ of any particular data is so broad as to be unmanageable. What matters isn’t just who has got the data and what they might do with it, but a whole lot of other things concerning the data subject, the data holder, any other potential data user and so on.

For instance, consider data about someone’s membership of the Barbra Streisand fan club. Sensitive data? In most situations, people might consider it not to be sensitive at all – who cares what kind of music someone listens to? However, liking Barbra Streisand might mean a very different thing for a 22-year-old man than it does for a 56-year-old woman. Extra inferences might be drawn if the data gatherer has also learned that the data subject has been searching for holidays only in San Francisco and Sydney, or spends a lot of time looking at hairdressing websites. Add to that the real ‘geo-tag’ kind of information about where people actually go, and you can build up quite detailed profiles without ever touching what others might consider sensitive. When you have all that information, even supposedly trivial information like favourite colours or favourite items in your Tesco online shopping could end up being sensitive – as an extra item in a profile that ‘confirms’ or ‘denies’ (according to the kinds of probabilistic analyses that are used for behavioural profiling) that a person fits into a particular category.
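The way this kind of probabilistic ‘confirmation’ works can be sketched in a few lines of Python. To be clear, this is a toy illustration, not any real profiler: the signals, the weights and the prior are all invented for the example. The point it demonstrates is simply that signals which barely move an estimate on their own can, in combination, push a probability score over a threshold.

```python
# Toy sketch of behavioural profiling: each individually trivial signal
# nudges a log-odds score, and the combination can be decisive.
# All signals, weights and the prior here are invented for illustration.
import math

# Hypothetical log-odds contributions towards some marketing category.
SIGNAL_WEIGHTS = {
    "streisand_fan_club": 0.9,
    "searches_sf_sydney_holidays": 1.1,
    "visits_hairdressing_sites": 0.7,
    "favourite_colour_on_file": 0.2,
}

def category_probability(signals, prior=0.05):
    """Combine observed signals into a probability via log-odds."""
    log_odds = math.log(prior / (1 - prior))  # start from the base rate
    for s in signals:
        log_odds += SIGNAL_WEIGHTS.get(s, 0.0)
    return 1 / (1 + math.exp(-log_odds))  # back to a probability

# A single signal barely moves the estimate above the 5% base rate...
p_one = category_probability(["streisand_fan_club"])
# ...but several 'harmless' signals together multiply the odds.
p_all = category_probability(list(SIGNAL_WEIGHTS))
```

With the invented numbers above, one signal leaves the estimate close to the base rate, while the full set of ‘trivial’ signals roughly decuples the odds – which is exactly why data that looks harmless in isolation can be sensitive in aggregate.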

What does all this mean? Essentially that ANY data that can be linked to a person can become sensitive – and that judging the context reliably is close to impossible. Ultimately, if we believe that sensitive data needs particular protection, then we should apply that kind of protection to ALL personal data, regardless of how apparently sensitive it is….