Wikipedia and the Right to be Forgotten…

…or why Jimmy Wales might want to support a right to delete.


One of the more strident critics of the Google Spain ruling by the ECJ, bringing into action at least a form of the much derided ‘right to be forgotten’, has been Jimmy Wales, co-founder of Wikipedia. He has spoken and written about it in highly critical terms, calling it ‘one of the most wide-sweeping internet censorship rulings that I’ve ever seen’ and, since Wikipedia itself started receiving notifications, ‘completely insane’. His statements, amplified by the obliging British press, were followed by his appointment to Google’s advisory committee on implementation of the court’s ruling. He has so far stood firmly by Google’s side, and against the ECJ – and yet, if looked at from the perspective of ‘openness’, there are arguments that he should shift his position. The so-called right to be forgotten – in some forms at least – is far from incompatible with the principles of openness that underpin Wikipedia. Indeed, it can be argued to be supportive of those principles – or even necessary to produce more openness in the way the internet operates for most ordinary people.

Wikipedia and ‘openness’

Wikipedia is viewed by many as the epitome of the new(ish) idea of ‘openness’ (e.g. http://www.theguardian.com/technology/2014/aug/10/wikipedia-isnt-perfect-model-channel-4-government) . Crowdsourcing information, allowing edits by anyone. Words like ‘participatory’, ‘collaborative’, even ‘democratic’ are used to describe it – indeed, it’s often used as an example of what those terms mean. These are words that are almost always used positively: participation, collaboration and democracy are seen as fundamentally ‘good’ things. Specifically, they’re seen as good things in relation to the internet: the fight for an ‘open’ and ‘free’ internet, a fight which the Wikimedia Foundation often seems to see itself at the forefront of, is a fight for the sort of internet built around participation, collaboration and openness.

But what does that mean in practice? Take a look at Wikipedia. As a teacher of university students, I often discuss the use of Wikipedia for research. In the old days, universities frowned on the use of Wikipedia – and we generally still disapprove of its use as a primary source (a citation of a Wikipedia page will raise both eyebrows and hackles in any university teacher) – but now it is usually seen as something very useful. You can get a broad brush view of a subject from reading the Wikipedia page – and you can find links to better, more reliable information about the subject. You don’t cite the Wikipedia page, but you can find sources that you can cite by looking at the Wikipedia page.

This all comes from both the strength and the weakness of Wikipedia. It is generally reliable – because crowdsourcing works, and because people with an intimate knowledge of all the various subjects contribute to it – but it is also, and just as importantly, ever-changing. It changes as events develop – new information appears, new views come in and, crucially, errors are corrected, biases revealed and changes made. Inaccurate and out of date information – and irrelevant information – is corrected or deleted from Wikipedia pages.

Deletion of information…

Let me repeat that.

Inaccurate, out of date and irrelevant information is corrected or deleted from Wikipedia pages.

That’s the strength of Wikipedia. Indeed, it is a key virtue of digital publishing – it is dynamic, not static. When errors creep in – whether by accident, by biased editing or by malice (and cases of falsification of Wikipedia pages are well known, as are the strong and consistent critiques of both Wales and Wikipedia) – the openness of the Wikipedia platform means that those errors, those biases, and so forth are open to being corrected. Information is deleted. That’s what makes Wikipedia great – and also what shapes the way we use it. We know Wikipedia isn’t set in stone, and that at any particular moment it may include errors or misunderstandings. We know that, so we don’t treat it with undue reverence. We check what we see against other sources. We look for alternative views and compare them to what we see on Wikipedia. We sometimes even help to edit Wikipedia. We treat Wikipedia as ‘organic’, growing and changing all the time.

Treating the internet as ‘organic’

Isn’t it appropriate – and desirable – to treat the whole internet in the same, open way? As organic, growing and changing all the time? Why should other material on the free-floating internet be treated as inviolable, privileged by virtue of its form, if we are happy to see it otherwise with Wikipedia? In many ways we know that this is how the internet really is anyway – we know that when we look at a page we need to consider who created it, what sort of people they are, what biases they might have and so on. We know that new material is appearing all the time – every blog post, every newspaper article, every uploaded photo – and we should also understand that other material is being deleted or edited every day. Old, irrelevant or inaccurate information disappears every day. That’s part of the process – life and death are part of the same cycle.

What the internet isn’t, is a perfect archive of truth, set in stone as a record of perfect accuracy. To suggest otherwise, as Wales and the Wikimedia Foundation have done, is simply false. It isn’t Asimov’s vision (deliberately misleading) of an Encyclopaedia Galactica in his seminal ‘Foundation’ books, designed to preserve and maintain humanity’s store of knowledge against barbarians and the decline of civilisation. It’s much closer to the reality of Wikipedia. Somewhere where things are being deleted all the time. Somewhere where routes to things are being corrected all the time. Somewhere that should be treated with respect but not reverence.

The right to delete – or the right to be forgotten

That’s where a right to delete – and yes, sometimes, a right to be forgotten – fits in. It’s not such a big deal, really – things get deleted and forgotten all the time on the internet. Eric Schmidt’s and Jimmy Wales’ things, too. The right to be forgotten is just one of many mechanisms through which such deletions might take place. Almost completely overlooked in the media coverage, with its runaway notion that this is a ‘right for the rich and famous’, is the fact that people with resources and knowledge already use ‘reputation management’ services to hunt down and remove uncomplimentary things about them. Already ‘rights holders’ use copyright law to have things that breach their rights removed from the net – and routes to them removed, obscured or deleted. Already companies choose to cleanse old websites, to rebrand themselves and so forth. The right to be forgotten – both in its ‘Google Spain’ form and in a purer deletion-of-data form – would be just one of many tools through which the internet changes form. That constant changing should be understood and celebrated – and refined, not fought and feared. It’s part of what makes the internet so great.

That doesn’t, of course, mean that it shouldn’t be treated critically. It should, very much so. It doesn’t mean that the Google Spain ruling is without fault – it isn’t, and the way that Google has implemented it to date has highlighted many of those faults. And yes, it’s a tool that could well be misused – most tools are, but we don’t outlaw kitchen knives because they could be used to stab people. For ordinary people, in extraordinary circumstances, it could be a real boon. Ordinary people need to be given a chance to contribute, to participate, to be part of that great community that so many of us hope the internet can become. Of course we need to find a way to make it work better. We need to set out more appropriate rules and good, solid guidelines as to how it should be operated – and to reduce the possibility of its misuse. We need all of this, both to help Google and to keep the internet open….

…because that’s the bottom line. Having ways to delete information isn’t the enemy of the internet of the people, so much as an enemy of the big players of the internet. For ordinary people, it’s very much the internet’s friend. Wikipedia demonstrates the need to have deletion and correction, as well as addition, as part of the toolkit. Jimmy Wales knows this, I suspect, though I’m not sure he’s applied this knowledge to the internet as a whole. He may not like the way that this particular tool has been developed – for judges and courts are often seen as the enemies of openness, and from an American perspective, European judges and courts may be the worst of all. Nobody wants to be told what to do – and often they’re quite right to resist what they’re told to do.

However, an excessive faith in the ‘record’ of the internet, and an excessive reverence for the way that the internet (and Google in particular) currently works are also enemies of real openness. We need to be open to changes – and yes, even changes in all of these.

—————-

This blog post was inspired in part by reading Nathaniel Tkacz’s work on Openness.

Dave Eggers’ The Circle: a book for our times…

I was introduced to Dave Eggers’ novel, The Circle, by Professor Andrew Murray – one of the pre-eminent scholars in IT Law in the UK, and also one of my PhD supervisors. I know I’m very late to this game – the book came out in 2013, and all the cool people will already have read it or reviewed it – but in this case I think it’s worth it. And the fact that someone like Andrew Murray would recommend it should give pause for thought: this isn’t just an entertaining piece of science fiction, it’s a book that really makes you think. It’s not just a dystopian vision of the future, it’s one that is far, far closer to reality than almost any I’ve read – and dystopian novels and films are pretty much my favourite genre.

It’s a book that reminded me why, unlike most of my schoolmates, I always preferred Brave New World to 1984 – and why, of the various privacy stories of the last few months, I suspect the Facebook Experiment and the ruling over the Right to be Forgotten will ultimately matter more than the passing of the deeply depressing DRIP. In the end, as The Circle demonstrates graphically, we have more to fear from corporate domination of the Internet than we do from all the spooks and law enforcement agencies.

The Circle from which the novel gets its name is a technology company that combines a great deal of Google and Facebook with a little dash of Apple and a touch of Twitter. It dominates search and social media, but also makes cool and functional hardware. Eggers’ triumph in The Circle is that he really gets not just the tech but the culture that surrounds it – little details like sending frowns to paramilitaries in Guatemala echo campaigns like #BringBackOurGirls in their futility, superficiality and ultimate inanity. The lives portrayed in the Circle should send shivers down the spines of any of us who spend much time on Twitter or Facebook: the fact that I read the book whilst on holiday without much Internet access made the point to me most graphically.

Privacy is theft

Eggers echoes both 1984 and Brave New World in using slogans to encapsulate concepts – exaggerating to make the point. For the Circle, these are:

Secrets are lies
Sharing is caring
Privacy is theft

All three are linked together – and connected to the idea that there’s something almost mystical about data. We don’t just have no right to privacy, we have a duty to disclose, a duty to be transparent. A failure to disclose means we’re depriving others of the benefits of our information: by claiming privacy, we’re stealing opportunities and advantages that others have the right to. If we care about others, we should share with them. This is Facebook, this is Google Flu Trends – and it’s the philosophy that implies that those of us who oppose the care.data scheme, through which all our health data will be shared with researchers, pharmaceutical companies and many others, are selfish Luddites likely to be responsible for the deaths of thousands.

It is also the philosophy behind a lot of the opposition to the right to be forgotten. That opposition is based on the myth – one that Eggers exposes excellently – that the records on the Internet represent ‘the truth’ and that tampering with them, let alone deleting anything from them, is tantamount to criminality. Without spoiling the plot too much, one of the characters is psychologically and almost physically destroyed by the consequences of that. Eggers neatly leaves it unclear whether the key ‘facts’ that do the damage are actually real – he knows that this, ultimately, isn’t the point. Even if it all were true, the idea that maintaining it and exposing it would be a general good, something to be encouraged and fought for, is misguided at best.

It’s about power – and how it’s wielded

In the novel, the Circle has the power – and it wields it in many ways. Emotional manipulation – keeping people happy and at the same time keeping them within the Circle – is the key point, and the echoes of the Facebook Experiment, about which much has been written (though much of it has missed the deeper points), are chilling here. One of the real functions of the experiment was for Facebook to find ways to keep people using Facebook…

Another of the key ways that the Circle wields power is through its influence over lawmakers – and the same is sadly evident of Google and Facebook, in the UK as much as in the US. In the UK in particular, the influence over things like opposition to data protection reform – and to the right to be forgotten – is all too clear. It would be great if this could change, but, as in the novel, the powers and common interests are far too strong for there to be much chance of that. More’s the pity.

As a novel, The Circle is not without fault. I guessed the main plot twist less than half-way through the book. There’s a good deal of hyperbole – but this is dystopian fiction, after all – and the tech itself is not exactly described convincingly. What’s more, the prose is far from beautiful, the characters are mostly rather two-dimensional, and often they’re used primarily to allow Eggers to make his points, often through what amount to set speeches – but Huxley was guilty of that from time to time too. Those speeches, however, are often worth reading. Here, one of the dissidents explains his objections:

“It’s the usual utopian vision. This time they were saying it’ll reduce waste. If stores know what their customers want, then they don’t overproduce, don’t overship, don’t have to throw stuff away when it’s not bought. I mean, like everything else you guys are pushing, it sounds perfect, sounds progressive, but it carries with it more control, more central tracking of everything we do.”

“Mercer, the Circle is a group of people like me. Are you saying that somehow we’re all in a room somewhere, watching you, planning world domination?”

“No. First of all, I know it’s all people like you. Individually you don’t know what you’re doing collectively. But secondly, don’t presume the benevolence of your leaders.”

In that brief exchange Eggers shows how well he gets the point. A little later he nails why we should care much more about this but don’t, focussing instead on the spooks of the NSA and GCHQ.

“Here, though, there are no oppressors. No one’s forcing you to do this. You willingly tie yourself to these leashes.”

That’s the problem. We don’t seem to see the risk – indeed, just as in the novel, we willingly seem to embrace the very things that damage us. Lawmakers, too, seem not to see the problem – and, as noted, all too often allow themselves to be lobbied into compliance. The success of Google’s lobbyists over the right to be forgotten is testimony to this. Even now, people who really should know better are being persuaded to support the Circle’s – sorry, I mean Google’s – business model rather than address a real, important privacy issue.

Coming to a society near you…

We’re taking more and more steps in the direction of the Circle. Not just the Facebook experiment and the reaction to the ‘right to be forgotten’ ruling – but even in the last week or two a House of Lords committee has recommended an end to online anonymity, effectively asking service providers to require real names from users before they can receive services. This is one of the central planks of the way the Circle takes control over people’s lives, and one which our lawmakers seem to be very happy to give them. There are also stories going around about government plans to integrate various databases, from health and the DVLA to criminal records… another key tenet of the Circle‘s plans… The ‘detailed’ reasons for doing so sound compelling – but the ultimate consequences could be disastrous…

Anyway, that’s enough from me. Read the book. I’ll be recommending it to my Internet Law and Privacy students, but I hope it’s read much more widely than that. It deserves to be.


Facebook, Google and the little people….

This last week has emphasised the sheer power and influence of the internet giants – Facebook and Google in particular.

The Facebook Experiment

First we had the furore over the so-called ‘Facebook Experiment’ – the revelation that Facebook had undertaken an exercise in ‘emotional contagion’, effectively trying to manipulate the emotions of nearly 700,000 of its users without their consent, knowledge or understanding. There were many issues surrounding it (some of which I’ve written about here), starting with the ethics of the study itself, but the most important thing to understand is that the experiment succeeded, albeit not very dramatically. That is, by manipulating people’s news feeds, Facebook found that they were able to manipulate people’s emotions. However you look at the ethics of this, that’s a significant amount of power.

Google and the Right to be Forgotten

Then we’ve had the excitement over Google’s ‘clumsy’ implementation of the ECJ ruling in the Google Spain case. I’ve speculated before about Google’s motivations in implementing the ruling so messily, but regardless of their motivations the story should have reminded us of the immense power that Google have over how we use the internet. This power is demonstrated in a number of ways. Firstly, in the importance we place on whether a story can be found through Google – those who talk about the Google Spain ruling being tantamount to censorship are implicitly recognising the critical role that Google plays and hence the immense power that they wield. Secondly, in the fact that, ultimately, how Google decides to interpret and implement the ruling of the court is what determines whether we can or cannot find a story. Thirdly, the way that Google seems to be able to drive the media agenda has been apparent: it sometimes seems as though people in the media are dancing to Google’s tune.

Further, though the early figures for takedown requests under the right to be forgotten sound large – 240,000 since the Google Spain ruling – the number of requests Google deals with based on copyright is far higher: 42,324,954 since the decision. Right to be forgotten requests amount to only around 0.6% of the copyright figure. Google deals with those copyright requests without the fanfare of the right to be forgotten – and apart from a few internet freedom advocates, very few people seem to even notice. Google has that much control, and their decisions have a huge impact upon us.
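
For transparency, the rough arithmetic behind that percentage, using only the two figures quoted above:

240,000 ÷ 42,324,954 ≈ 0.0057 – roughly 0.6%, or a little over half of one per cent.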

Giants vs. Little People

Though the two issues seem to have very little in common, they both reflect the huge power that the internet giants have over ordinary people. It is very hard for ordinary people to fight for their rights – for little people to be able to face up to giants. Little people, therefore, have to do two things: use every tool they can in the fight for their rights, and support each other when that support is needed. When the little people work together, they can punch above their weight. One of the best ways for this to happen is through civil society organisations. All around the world, civil society organisations make a real difference – from the Open Rights Group and Privacy International in the UK to EDRi in Europe and the EFF in the US. One of the very best of these groups – and one that punches most above its weight – has been Digital Rights Ireland. They played a critical role in one of the most important legal ‘wins’ for privacy in recent years: the effective defeat of the Data Retention Directive, one of the legal justifications for mass surveillance. They’re a small organisation, but one with expertise and a willingness to take on the giants. Given that so many of those giants – including Facebook – are officially based in Ireland, Digital Rights Ireland are especially important.

Europe vs. Facebook

There is one particular conflict between the little people and the giants that is currently in flux: the ongoing legal fight between campaigner Max Schrems and Facebook. Schrems, who is behind the ‘Europe vs. Facebook’ campaign,  has done brilliantly so far, but his case appears to be at risk. After what looked like an excellent result – the referral by the Irish High Court to the ECJ of his case against Facebook (which relates to the vulnerability of Facebook data to US surveillance via the PRISM program) – Schrems is reported as considering abandoning his case, as the possible costs might bankrupt him if things go badly.

This would be a real disaster – and not just for Schrems. This case really matters in a lot of ways. The internet giants need to know that we little people can take them on: if costs can put us off, the giants will be able to use their huge financial muscle to win every time. It’s a pivotal case – for all of us. For Europeans, it matters in protecting our data from US surveillance. For non Europeans it matters, because it challenges the US giants at a critical point – we all need them to fight against US surveillance, and they’ll only really do that wholeheartedly if it matters to their bottom line. This case could seriously hit Facebook’s bottom line – so if they lost, they’d have to do something to protect their data from US surveillance. They wouldn’t just do that for European Facebook users, they’d do it for all.

Referral to the ECJ is critical, not just because it might give a chance to win, but because (as I’ve blogged before) recently the ECJ has shown more engagement with technological issues and more willingness to rule in favour of privacy – as in the aforementioned invalidation of the Data Retention Directive and in the contentious ruling in Google Spain. We little people need to take advantage of those times when the momentum is on our side – and right now, at least in some ways, the momentum seems to be with us in the eyes of the ECJ.

So what can be done to help Schrems? Well, the first thing I would suggest to Max is to involve Digital Rights Ireland. They could really help him – and I understand that they’ve been seeking to file an amicus brief in the case. They’re good at this kind of thing, and they and other organisations in Europe have experience in raising the funds for this type of case. Max has done brilliant work, but where ‘little people’ have to face up to giants, they’re much better off not fighting alone.

A week not to be forgotten….

…for those of us interested in the right to be forgotten. I’ve found myself writing and talking to people about it unlike any time before. Privacy is becoming bigger and bigger news – and I have a strong feeling that the Snowden revelations influenced the thinking of the ECJ in last week’s ruling, subconsciously if nothing else. That should not be viewed as a bad thing – quite the opposite. What we have learned through Edward Snowden’s information should have been a wake-up call for everyone. Privacy matters – and the links between the commercial gathering and holding of data and the kind of surveillance done by the authorities are complex and manifold. If we care about privacy in relation to anyone – the authorities, businesses, other individuals, advertisers, employers, criminals etc – then we need to build a more privacy-friendly infrastructure that protects us from all of these. That means thinking more deeply, and considering more radical options – and yes, that even means the right to be forgotten, for all its flaws, risks and complications. More thought is needed, and more action – and we must understand the sources of information here, the nature of those contributing to the debate and so forth.

Anyway, this isn’t a ‘real’ blog post about the subject – I’ve done enough of them in the last week. What I want to do here is provide links to what I’ve written and said in the last week, as well as to my academic contributions to the subject, both past and present, and then to link to Julia Powles’ excellent curation of the academic blogs and articles written by many people in the aftermath of the judgment.

Here’s what I’ve written:

For CNN, a summary of the judgment and its implications, written the same day as the judgment.

For the Justice Gap, a day later, looking at the judgment in context and asking whether it was a ‘good’ or a ‘bad’ thing for internet freedom.

My interview for CBC (Canada)’s Day 6 programme – talking about the implications, and examining the right for a non-European audience.

For my own blog, looking at Google’s options for the future and suggesting that the judgment isn’t the end of the world

Also for my own blog, a day later, trying to put the judgment into context – it’s not about paedophiles and politicians, and it won’t be either a triumph or a disaster.

This last piece may in some ways be the most important – because already there’s a huge amount of hype being built up, and scare stories are being leaked to the media at a suspiciously fast rate. There are huge lobbies at play here, particularly from the ‘big players’ on the internet like Google, who will face significant disruption and significant costs as a result of the ruling, and seem to want to make sure that people view the conflict as one of principle, rather than one of business. People will rally behind a call to defend freedom of expression much more easily than they will behind a call to defend Google’s right to make money, particularly given Google’s taxation policies.

Then here are my academic pieces on the subject.

‘A right to delete?’ from 2011, for the European Journal of Law and Technology. This is an open access piece, suggesting a different approach.

‘The EU, the US and the Right to be Forgotten’, published in early 2014, a chapter in a Springer Book on data protection reform, arising from the CPDP conference in Brussels 2013. This, unfortunately, is not open access, but a chapter in an expensive book. This does, however, deal directly with some of the lobbying issues.

The right to be forgotten – and my particular take on it, the right to delete, is also discussed at length in my recently released book, Internet Privacy Rights. There’s a whole chapter on the subject, and it’s part of the general theme.

Finally, here’s a link to Julia Powles’ curation of the topic. This is really helpful – a list of what’s been written by academics over the last week or so, with a brief summary of each piece and a link to it. Some of the academics contributing are from the very top of the field,  including Viktor Mayer-Schönberger, Daniel Solove and Jonathan Zittrain. All the pieces are worth a read.

This subject is far from clear cut, and the debate will continue on, in a pretty heated form I suspect, for quite some time. Probably the best thing that could come out of it, in my opinion, is some more impetus for the completion of the data protection reform in the EU. This reform has been struggling on for some years, stymied amongst other things by intense lobbying  by Google and others. That lobbying will have to change tack pretty quickly: it’s no longer in Google’s interests for the reform to be delayed. If they want to have a more ‘practical’ version of the right to be forgotten in action, the best way is to be helpful rather than obstructive in the reform of the data protection regime. A new regime, with a well balanced version of the right incorporated, would be in almost everyone’s best interests.

The Right to be Forgotten: Neither Triumph Nor Disaster?

“If you can meet with triumph and disaster
And treat those two imposters just the same”

Those are my two favourite lines from Kipling’s unforgettable poem, ‘If’. They have innumerable applications – and I think they have another one right now. The Right to be Forgotten, about which I’ve written a number of times recently, is being viewed by some as a total disaster and by others as a triumph. I don’t think either is right: it’s a bit of a mess, it may well end up costing Google a lot of time, money and effort, and it may be a huge inconvenience to Data Protection Authorities all over Europe, but in the terms that people have mostly been talking about – privacy and freedom of expression – it seems to me that it’s unlikely to have nearly as big an impact as some have suggested.

Paedophiles and politicians – and erasure of the past

Within a day or two of the ruling, the stories were already coming out about paedophiles and politicians wanting to use the right to be forgotten to erase their past – precisely the sort of rewriting of history that the term ‘right to be forgotten’ evokes, but that this ruling does not provide for. We do need to be clear about a few things that the right will NOT do. Where there’s a public interest, and where an individual is involved in public life, the right does not apply. The stories going around right now are exactly the kind of thing that Google can and should refuse to erase links to. If Google don’t, then they’re just being bloody minded – and can give up any claims to be in favour of freedom of speech.

Similarly, we need to be clear that this ruling only applies to individuals – not to companies, government bodies, political parties, religious bodies or anything else of that kind. We’re talking human rights here – and that means humans. And, because of the exception noted above, that only means humans not involved in public life. It also only means ‘old’, ‘irrelevant’ information – though what defines ‘old’ and ‘irrelevant’ remains to be seen and argued about. There are possible slippery slope arguments here, but it doesn’t, at least on the face of it, seem to be a particularly slippery kind of slippery slope – and there’s also not that much time for it to get more slippery, or for us to slip down it, because as soon as the new data protection regime is in place, we’ll almost certainly have to start again.

We still can’t hide

Conversely, this ruling won’t really allow even us ‘little people’ to be forgotten very successfully. The ruling only allows for the erasure of links on searches (through Google or another search engine) that are based on our names. The information itself is not erased, and other forms of search can still find the same stories – that is, ‘searches’ using something other than a search engine, and even uses of search engines with different terms. You might not be able to find stories about me by searching for ‘Paul Bernal’ but still be able to find them by searching under other terms – and creative use of terms could even be automated.

There already are many ways to find things other than through search engines – whether it be crowdsourcing via Twitter or another form of search engine, employing people to look for you, or even creating your own piece of software to trawl the web. This latter idea has probably occurred to some hackers, programmers or entrepreneurs already – if the information is out there, and it still will be, there will be a way to find it. Stalkers will still be able to stalk. Employers will still be able to investigate potential employees. Credit rating agencies will still be able to find out about your ancient insolvency.
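
To make the point concrete, here is a deliberately minimal sketch – hypothetical from top to bottom, with placeholder URLs and a placeholder name rather than anything referred to above – of the kind of trawling that anyone with a little programming knowledge could put together. It ignores search engines entirely: it simply fetches pages directly and checks whether a name appears in them, which is precisely the route the Google Spain ruling leaves untouched.

import urllib.request

# Hypothetical starting points -- a real trawler might follow links from
# seed pages like these, or work through archived copies of old news sites.
SEED_PAGES = [
    "http://example.com/news/archive/2010/",
    "http://example.com/local-paper/old-stories/",
]

NAME = "Jo Bloggs"  # a placeholder name, not anyone mentioned in this post


def page_mentions(url, name):
    """Fetch a page and report whether it mentions the given name."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            text = response.read().decode("utf-8", errors="ignore")
    except OSError:
        return False  # unreachable pages are simply skipped
    return name.lower() in text.lower()


if __name__ == "__main__":
    for url in SEED_PAGES:
        if page_mentions(url, NAME):
            print("Mention found at:", url)

Removing a name-based search result touches none of this – the underlying information is still there for anyone prepared to go and look for it.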

…but ‘they’ will still be able to hide

Some people seem to think that this right to be forgotten is the first attempt to manipulate search results or to rewrite history – but it really isn’t. There’s already a thriving ‘reputation management’ industry out there which, for a fee, will tidy up your ‘digital footprint’, seeking out and destroying (or at least relegating to the obscurity of the later pages of your search results) disreputable stories, and building up those that show you in a good light. The old industry of SEO – search engine optimisation – did and does exactly that, from a slightly different perspective. That isn’t going to go away – if anything it’s likely to increase. People with the power and knowledge to be able to manage their reputations will still be able to.

On a slightly different tack, criminals and scammers have always been able to cover their tracks – and will still be able to. The old cat-and-mouse game between people wanting to hide their identity and people wanting to uncover those hiding them will still go on. The ‘right to be forgotten’ won’t do anything to change that.

But it’s still a mess?

It is, but not, I suspect, in the terms that people are thinking about. It will be a big mess for Google to comply, though stories are already going round that they’re building systems to allow people to apply online for links to be removed, so they might well already have had contingency plans in place. It will be a mess for data protection agencies (DPAs), as it seems that if Google refuse to comply with your request to erase a link, you can ask the DPAs to adjudicate. DPAs are already vastly overstretched and underfunded – and lacking in people and expertise. This could make their situation even messier. It might, however, also be a way for them to demand more funding from their governments – something that would surely be welcome.

It’s also a huge mess for lawyers and academics, as they struggle to get their heads around the implications and the details – but that’s all grist to the mill, when it comes down to it. It’s certainly meant that I’ve had a lot to write about and think about this week….

 

When is a ‘libertarian’ not a libertarian?

…when it’s a Kipper?

A couple of days ago, blogger Michael Abberton  got a visit from the police. As reported in the Guardian:

“He was told he had not committed any crimes and no action was taken against him, but he was asked to delete some of his tweets, particularly a tongue-in-cheek one on 10 reasons to vote for Ukip, such as scrapping paid maternity leave and raising income tax for the poorest 88% of Britons.”

This is the poster Michael tweeted:

[Screenshot: the parody UKIP poster]

Michael described his experience in his own blog here. As he put it:

“…they said this was in relation to a complaint that had been made by a certain political party in relation to tweets I had published about them and one tweet in particular which talked about ten reasons to vote for them. The PC wanted to know if I had made that poster.”

The police were polite and concluded that there was no charge to answer and that it was not a police matter – but they still asked him to delete the relevant tweets, and suggested that he not tweet about their visit. I, for one, am glad that he did. There are a number of questions for the police – why they couldn’t work out what was going on just by reading the tweets and blogs, for example, and why they couldn’t see that a visit from the police would look very bad. Do the police not realise that people don’t like having a knock on their door from them? And if they do realise it, why not find another way to deal with something like this – a phone call, for example? If the police were a bit more ‘savvy’ they could have worked out what was going on pretty quickly and simply – and come to the conclusion that they finally did, that this was not a police matter at all. Michael is a scrupulous and intelligent blogger – what he was actually doing was fact-checking a parody UKIP poster that had been doing the rounds for a while.

The police have a lot to learn about this – but I think they are beginning to learn. What is more interesting to me is the role of UKIP. As confirmed by the police, it was a UKIP councillor that made the original complaint. Some UKIP supporters have suggested that the poster was a breach of the somewhat notorious S.127(1) of the Communications Act 2003, the section under which Paul Chambers was prosecuted in the farcical ‘Twitter Joke Trial’. Here’s Marty Caine, for example:

[Screenshot: Marty Caine’s tweet]

Now S.127(1) of the Communications Act 2003 is notoriously broad, but even if it could be stretched to cover Michael Abberton’s tweet (which the police concluded it couldn’t), why would UKIP, a party that fairly often puts itself forward as ‘libertarian’, try to use it? One of the basic tenets of libertarianism is a strong belief in freedom of speech. To a ‘real’ libertarian, the law should be used as little as possible. Freedom matters – and freedom of speech in particular. When someone says something bad about you, you should argue with them. Win the battle of wits. Compete in the marketplace of ideas – not try to find a way to silence your opponents, using the law – and the police – to try to stop them arguing against you.

Personally I detest UKIP – as my various blog posts on the subject over the last few months should make pretty clear – but I wouldn’t use the law to try to shut them up. I argue against them, tease them, parody them, try to persuade them – and yes, sometimes even shout at them – but I don’t try to silence them. Am I more of a libertarian than UKIP? It seems so – but then again, no party with pretensions of libertarianism would have as their central policy the control of immigration.

These kinds of tactics should be taken seriously. Visits from the police are disturbing to anyone – and interference in the political debate, particularly this close to an election, should be taken very seriously indeed. Michael Abberton’s blog was very much part of the debate, looking precisely at the policies of UKIP. As Michael put it in his blog:

“Why would a political party, so close to an election, seek to stop people finding out what their policies are or their past voting record? And is it not a matter for concern that a political party would seek to silence dissent and debate in such a manner?”

Yes, it absolutely is.

 

UPDATE:

It turns out that the UKIP councillor that reported Michael’s tweet was Peter Reeve – and that the reason for the complaint seems to have been that as Michael was a Green Party supporter, his tweet should have been labelled as official Green Party electoral material. To say that this is unconvincing is putting it mildly – and Michael’s Twitter avi has a Green Party twibbon, just to make it clear even on a tweet. What’s more, this doesn’t in any way alter the overall freedom of speech argument – trying to silence political opponents by bringing in the police should be anathema….

Communications Surveillance – a miscast debate

I have just made a submission to the Intelligence and Security Committee’s call for evidence on their Privacy and Security Inquiry. The substance of the submission is set out below – the key point is that I believe that the debate, and indeed the questions asked by the Intelligence and Security Committee, miscast the issue in such a way as to significantly understate the impact of internet surveillance, and hence make the case for that surveillance stronger than it really is. I am sure there will be many other excellent submissions to the inquiry – this is my small contribution.

——————————

Submission to the Intelligence and Security Committee by Dr Paul Bernal

I am making this submission in response to the Privacy and Security Call for Evidence made by the Intelligence and Security Committee on 11th December 2013, in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research internet law and specialise in internet privacy from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy. I have a book dealing with the subject, Internet Privacy Rights, which will be published by Cambridge University Press in March 2014. The subject of internet privacy, therefore, lies precisely within my academic field. I would be happy to provide more detailed evidence, either written or oral, if that would be of assistance to the committee.

Executive summary

There are a great many issues that are brought up by the subject of communications surveillance. This submission does not intend to deal with all of them. It focuses primarily on three key issues:

  1. The debate – and indeed the initial question asked by the ISC – which talks of a balance between ‘individual privacy’ and ‘collective security’ is a miscast one. Communications surveillance impacts upon much more than privacy. It has an impact on all the classical ‘civil liberties’: freedom of expression, freedom of assembly and association and so forth. Privacy is not a merely ‘individual’ issue. It, and the connected rights, are community rights, collective rights, and to undermine them does more than undermine individuals: it hits at the very nature of a free, democratic society.
  2. The invasion of privacy – the impact on the other rights mentioned above – occurs at the point when data is gathered, not when data is accessed. The mass surveillance approach that appears to have been adopted – a ‘gather all, put controls on at the access stage’ approach – is misconceived. The very gathering of the data has an impact on privacy, leaves data open for misuse and vulnerable to hacking, loss or misappropriation, and has a direct chilling effect.
  3. In terms of mass surveillance, meta-data can in practice be more useful – and have more of an impact on individual rights and freedoms – than content data. It can reveal an enormous amount of information about the individuals involved, and because of its nature it is more easily and automatically analysed and manipulated.

The implications of these three issues are significant: the current debate, as presented to the public and to politicians, is misleading and incomplete. That in turn means that experts remain sceptical about the motivations of those involved in the debate in favour of surveillance – and that it is very hard for there to be real trust between the intelligence services and the public.

It also means that the bar should be placed much higher in terms of evidence that this kind of surveillance is successful in achieving the aims of the intelligence services. Those aims need to be made clear, and the successfulness of the surveillance demonstrated, if the surveillance is to be appropriate in a democratic society. Given the impact in terms of a wide spectrum of human rights – not just individual rights to privacy – the onus is on the security services to demonstrate that success, or move away from mass surveillance as a tactic.

1      A new kind of surveillance

The kind of surveillance currently undertaken – and envisaged in legislation such as the Communications Data Bill in 2012 – is qualitatively different from that hitherto imagined. It is not like ‘old-fashioned’ wiretapping or even email interception. What also makes it new is the way that we use the internet – and in particular the way that the internet is, for most people in what might loosely be described as developed societies, used for almost every aspect of our lives. Surveillance of our internet activities therefore allows a level of scrutiny of our private lives vastly greater than any form of surveillance could have achieved in the past.

In particular, the growth of social networking sites and the development of profiling and behavioural tracking systems and their equivalents change the scope of the information available. In parallel with this, technological developments have changed the nature of the data that can be obtained by surveillance – most directly, the increased use of mobile phones, and in particular smartphones, provides new dimensions of data such as geo-location data and allows further levels of aggregation and analysis. Other technologies such as facial recognition, in combination with the vast growth in the use of digital, online photography – ‘selfie’ was the Oxford Dictionaries Word of the Year for 2013 – take this to a higher level.

This combination of factors means that the ‘new’ surveillance is both qualitatively and quantitatively different from what might be labelled ‘traditional’ surveillance or interception of communications. This means that the old debates, the old balances, need to be recast. Where traditional ‘communications’ was in some ways a subset of traditional privacy rights – as reflected, for example, in its place within Article 8 of the ECHR – the new form of communications has a much broader relevance, a wider scope, and brings into play a much broader array of human rights.

2      Individual right to privacy vs. collective right to security?

2.1      Privacy is not just an individual right

Privacy is often misconstrued as a purely individual right – indeed, it is sometimes characterised as an ‘anti-community’ right, a right to hide yourself away from society. Society, in this view, would be better if none of us had any privacy – a ‘transparent society’. In practice, nothing could be further from the truth: privacy is something that has collective benefit, supporting coherent societies. Privacy isn’t so much about ‘hiding’ things as being able to have some sort of control over your life. The more control people have, the more freely and positively they are likely to behave. Most of us realise this when we consider our own lives. We talk more freely with our friends and relations knowing (or assuming) that what we talk about won’t be plastered all over noticeboards, told to all our colleagues, to the police and so forth. Privacy has a crucial social function – it’s not about individuals vs. society. The opposite: societies cannot function without citizens having a reasonable expectation of privacy.

2.2      Surveillance doesn’t just impact upon privacy

The idea that surveillance impacts only upon privacy is equally misconceived. Surveillance impacts upon many different aspects of our lives – and how we function in this ‘democratic’ society. In human rights terms, it impacts upon a wide range of those rights that we consider crucial: in particular, it impacts upon freedom of expression, freedom of association and freedom of assembly, and others.

2.2.1      Freedom of expression

The issue of freedom of expression is particularly pertinent. Privacy is often misconstrued as somehow an ‘enemy’ of freedom of expression – blogger Paul Staines (a.k.a. Guido Fawkes), for example, suggested that ‘privacy is a euphemism for censorship’. He had a point in one particularly narrow context – the way that privacy law has been used by certain celebrities and politicians to attempt to prevent certain stories from being published – but his slogan misses the much wider meaning and importance of privacy.

Without privacy, speech can be chilled. The Nightjack saga, of which the committee may be aware, is one case in point. The Nightjack blogger was a police insider, providing an excellent insight into the real lives of police officers. His blog won the 2009 Orwell Award – but as a result of email hacking by a journalist working for the Times, he was unable to keep his name private, and ultimately he was forced to close his blog. His freedom of expression was stifled – because his privacy was not protected. In Mexico, at least four bloggers writing about the drugs cartels have not just been prevented from blogging – they’ve been sought out, located, and brutally murdered. There are many others for whom privacy is crucial – from dissenters in oppressive regimes to whistle-blowers to victims of spousal abuse. The internet has given them hitherto unparalleled opportunities to have their voices heard – internet surveillance can take that away. Even the possibility of being located or identified can be enough to silence them.

Internet surveillance not only impacts upon the ability to speak, it impacts upon the ability to receive information – the crucial second part of freedom of speech, as set out in both the European Convention on Human Rights and the Universal Declaration of Human Rights. If people know that the websites they visit will be tracked and observed, they’re much more likely to avoid seeking out information that the authorities or others might deem ‘inappropriate’ or ‘untrustworthy’. That, potentially, is a huge chilling effect. The UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, in his report of 2013, made it clear that the link between privacy and freedom of expression is direct and crucial:

“States cannot ensure that individuals are able to freely seek and receive information or express themselves without respecting, protecting and promoting their right to privacy. Privacy and freedom of expression are interlinked and mutually dependent; and infringement upon one can be both the cause and consequence of an infringement upon the other.”

2.2.2      Freedom of association and of assembly

Freedom of association and assembly is equally at risk from surveillance. The internet offers unparalleled opportunities for groups to gather and work together – not just working online, but organising and coordinating assembly and association offline. The role the net played in the Arab Spring has probably been exaggerated – but it did play a part, and it continues to be crucial for many activists, protestors and so forth. The authorities realise this, and also that through surveillance they can counter it. A headline from a few months ago in the UK, “Whitehall chiefs scan Twitter to head off badger protests”, should have rung alarm bells – is ‘heading off’ a protest an appropriate use of surveillance? It is certainly a practical one – and with the addition of things like geo-location data, the opportunity for surveillance to block association and assembly, both offline and online, is one that needs serious consideration. The authorities in Ukraine recently demonstrated this by using surveillance of mobile phone geolocation data to identify people who might be protesting – and then sending threatening text messages warning those in the location that they were now on a list: a clear attempt to chill their protests. Once more, this is very much not about individual privacy – it is about collective and community rights.

3      Controls are required at the gathering stage

The essential approach in the current form of internet surveillance, as currently practiced and as set out in the Communications Data Bill in 2012, is to gather all data, then to put ‘controls’ over access to that data. That approach is fundamentally flawed – and appears to be based upon false assumptions.

3.1      Data vulnerability

Most importantly, it is a fallacy to assume that data can ever be truly securely held. There are many ways in which data can be vulnerable, both from a theoretical perspective and in practice. Technological weaknesses – vulnerability to ‘hackers’ and so forth – may be the most ‘newsworthy’ at a time when hacker groups like ‘Anonymous’ have been gathering publicity, but they are far from the most significant. Human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to ‘monetise’ data) may be more significant – and the ways that all these vulnerabilities can combine makes the risk even greater.

In practice, those groups, companies and individuals that might be most expected to be able to look after personal data have been subject to significant data losses. The HMRC loss of child benefit data discs, the MOD losses of armed forces personnel and pension data on laptops, and the numerous and seemingly regular data losses in the NHS highlight problems within those parts of the public sector which hold the most sensitive personal data. Swiss banks’ losses of account data to hacks and data theft demonstrate that even those with the highest reputation and need for secrecy – as well as the greatest financial resources – are vulnerable. The high-profile hacks of Apple, Facebook, Twitter, Sony and others show that even those with access to the highest level of technological expertise can have their security breached. These are just a few examples, and whilst in each case different issues lay behind the breach, the underlying issue is the same: where data exists, it is vulnerable.

3.2      Function Creep

Perhaps even more important than the vulnerabilities discussed above is the risk of ‘function creep’ – that when a system is built for one purpose, that purpose will shift and grow, beyond the original intention of the designers and commissioners of the system. It is a familiar pattern, particularly in relation to legislation and technology intended to deal with serious crime, terrorism and so forth. CCTV cameras that are built to prevent crime are then used to deal with dog fouling or to check whether children live in the catchment area for a particular school. Legislation designed to counter terrorism has been used to deal with people such as anti-arms trade protestors – and even to stop train-spotters photographing trains.

In relation to internet surveillance this is a very significant risk: the ways that it could be inappropriately used are vast and multi-faceted. What is built to deal with terrorism, child pornography and organised crime can creep towards less serious crimes, then anti-social behaviour, then the organisation of protests and so forth – there is evidence that this has already taken place. Further to that, there are many commercial lobbies that might push for access to this surveillance data – those attempting to combat breaches of copyright, for example, would like to monitor for suspected examples of ‘piracy’. In each individual case, the use might seem reasonable – but the function of the original surveillance, the justification for its initial imposition, and the balance between benefits and risks, can be lost. An invasion of privacy deemed proportionate for the prevention of terrorism might well be wholly disproportionate for the prevention of copyright infringement, for example.

There can be creep in terms of the types of data gathered. The split between ‘meta data’ and ‘content’ is already contentious, and as time and usage develop it is likely to become more so, with the scope of what is protected as ‘content’ likely to shrink. There can be creep in terms of the uses to which the data can be put: from the prevention of terrorism downwards. There can be creep in terms of the authorities able to access and use the data: from those engaged in the prevention of the most serious crime to local authorities and others. All these different dimensions represent important risks: all have happened in the recent past with legislation (e.g. RIPA) and systems (e.g. the London Congestion Charge CCTV system).

Prevention of function creep is inherently difficult. As with data vulnerability, the only way to guard against it is not to gather the data in the first place. That means that controls need to be placed at the data gathering stage, not at the data access stage.

4      The role of metadata

Rather than being less important, or less intrusive, than ‘content’, the gathering of meta data in the new kinds of surveillance of the internet may well be more intrusive and more significant. Meta data is the primary form of data used in profiling of people as performed by commercial operators for functions such as behavioural advertising. It is easier to analyse and aggregate, easier for patterns to be determined, and much richer in its implications than content. It is also harder to ‘fake’: content can be concealed by the use of code words and so forth – meta data by its nature is more likely to be ‘true’.

In relation to trust, it is important that those who are engaged in surveillance acknowledge this, and that those who scrutinise the intelligence services understand it. It was notable in the open session of the Intelligence and Security Committee at the end of 2013 that none of those questioning the heads of MI5, MI6 and GCHQ made the point, or questioned the use of statements to the effect that they were not reading our emails or listening to our phone calls. Those statements may be true, but they are beside the point: it is the gathering of metadata that matters more. It can reveal, automatically and without the need for expert human intervention, a great deal of detail. As Professor Ed Felten put it in his testimony to the Senate Judiciary Committee hearing on the Continued Oversight of the Foreign Intelligence Surveillance Act:

“Metadata can expose an extraordinary amount about our habits and activities. Calling patterns can reveal when we are awake and asleep; our religion, if a person regularly makes no calls on the Sabbath, or makes a large number of calls on Christmas Day; our work habits and our social attitudes; the number of friends we have; and even our civil and political affiliations.”

Professor Felten was talking about telephony metadata – metadata from internet browsing, emails, social network activity and so forth can be even more revealing.
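
To illustrate quite how automatic that kind of inference can be, here is a minimal, purely illustrative sketch using made-up call timestamps – it reflects no real dataset and no system actually used by any agency – showing how a handful of lines of code can begin to guess at someone’s waking hours and a possible weekly day of rest from call metadata alone.

from collections import Counter
from datetime import datetime

# Hypothetical call metadata: nothing but timestamps, no content at all.
call_times = [
    datetime(2014, 3, 3, 9, 15), datetime(2014, 3, 3, 13, 40),
    datetime(2014, 3, 4, 10, 5), datetime(2014, 3, 4, 21, 30),
    datetime(2014, 3, 6, 8, 55), datetime(2014, 3, 7, 11, 20),
    datetime(2014, 3, 9, 12, 0), datetime(2014, 3, 10, 9, 45),
]

# When is this person awake? Count calls by hour of the day.
calls_by_hour = Counter(t.hour for t in call_times)
print("Hours with call activity:", sorted(calls_by_hour))

# Is there a weekly day with no calls at all -- a possible day of rest?
weekdays = ["Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"]
calls_by_weekday = Counter(t.strftime("%A") for t in call_times)
print("Days with no calls:", [d for d in weekdays if d not in calls_by_weekday])

Scale that up to months of records, and add location and browsing metadata, and the picture becomes extraordinarily detailed – all without a single email being read or a single call being listened to.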

5      Conclusion

The subject of internet surveillance is of critical importance. Debate is crucial if public support for the programmes of the intelligence service is to be found – and that debate must be informed, appropriate and on the right terms.

It isn’t a question of individual privacy – a kind of luxury in today’s dangerous world – being balanced against the deadly serious issue of security. If expressed in those misleading terms it is easy to see which way the balance will go. Privacy matters far more than that – and it matters not just to individuals but to society as a whole. It underpins many of our most fundamental and hard-won freedoms – the civil rights of which we, as members of liberal and democratic societies, have been most proud.

Similarly, the question of where the controls are built needs to be opened up for debate – at present the assumption seems to be made that gathering is acceptable even without controls. As noted above, that opens up a wide range of risks, risks that should be acknowledged and assessed in relation to the appropriateness of surveillance.

Finally, those involved in the debate should be more open and honest about the role of meta-data: the bland reassurances that ‘we are not reading your emails or listening to your phone calls’ should always be qualified with the acknowledgment that this does not really offer much protection to privacy at all.

Dr Paul Bernal
Lecturer in Information Technology, Intellectual Property and Media Law
UEA Law School
University of East Anglia
Norwich
NR4 7TJ
Email: paul.bernal@uea.ac.uk