Are Google intentionally overreacting to the Right to be Forgotten?

In one of my original reactions to the Google Spain ruling on the Right to be Forgotten, which I wrote for The Justice Gap, I said this about Google’s response to the ruling:

“How they respond to the ruling will be interesting – for the moment they’re saying very little. They have creative minds working for them – if they can rise to the challenge and find a way to comply with the ruling that enables ordinary people to take back a little control, that could be a very good thing. If, instead, they retrench and withdraw – or go over the top in allowing censorship too easily, it could be very bad.”

From what I’ve seen so far, it looks as though they’ve taken the ‘over the top’ approach, and are allowing censorship too easily. Two particular stories have come out today, one from the Guardian (here), the other from the BBC (here). In both cases, the journalists concerned are high profile, influential and expert – James Ball at the Guardian and Robert Peston at the BBC – and the stories, to be frank, do not seem to fall within the categories that the CJEU ruling in the Google Spain case suggested might be suitable for the right to apply. James Ball’s stories were mostly pretty recent – from 2010 and 2011 – and it is fairly easy to argue that they remain ‘relevant’ in terms of public interest. Robert Peston’s stories are not so recent, but even more clearly relevant and in the public interest.

So why have they been caught by Google’s net as appropriate for the ‘right to be forgotten’? It looks very much as though this is the intentional overreaction that I was concerned about in my original posting for the Justice Gap. They’re trying to say, I think, ‘you know, we were right! This ruling means censorship! This is dangerous!’ They’re also trying to get journalists like James Ball and Robert Peston to be on their side, not on the side of the CJEU – and in Ball’s case, at least, they seem to be succeeding to an extent. Peston is more critical, saying that Google’s implementation of the ruling ‘looks odd, perhaps clumsy.’

Clumsy or intentional?

I’m not convinced that it’s clumsy at all: it looks intentional. I hope I’m wrong, and that, as Google themselves have said, they will be refining the method and sorting out the details. If they’re really trying to fight this, to prove that the ruling is unworkable, we’re in for some serious trouble, because the ruling will not be at all easy to reverse. Rather the opposite – and the wheels of the European legal system grind very slowly, so the fight and the mess could be protracted.

What’s more, what this should really highlight for people is not just the problem with the Google Spain ruling, but the huge power that Google already wields – because, ultimately, it is Google that is doing this ‘censorship’, not the court ruling. And Google does similar things already, though without such a fanfare, in relation to copyright protection, links to obscene content and so forth. Google are already acting like censors, if you see it that way, and without the drama of the right to be forgotten.

What can we do now?

In the meantime, people will develop coping mechanisms – or find ways to bypass Google’s European search systems, either going straight to google.com or using alternatives like DuckDuckGo, or even not using search at all, because there are other ways to find information, such as crowdsourcing via Twitter. The more people use these, the more they’ll like them, and the more they’ll move away from Google. I hope that Google see this, and find a more productive way forward than this excessive, clumsy implementation of the ruling. What’s more, I hope they engage positively and actively with the reform process for the data protection regime – because a well executed reform, with a better written and more appropriate version of the right to be forgotten (or even better, the right to erasure) is the ultimate solution here. If that can be brought in soon – rather than delayed or undermined – then we can all move on from the Google Spain ruling, both legally and practically. I think everyone might benefit from that.

A week not to be forgotten….

…for those of us interested in the right to be forgotten. I’ve found myself writing and talking about it more than at any time before. Privacy is becoming bigger and bigger news – and I have a strong feeling that the Snowden revelations influenced the thinking of the CJEU in last week’s ruling, subconsciously if nothing else. That should not be viewed as a bad thing – quite the opposite. What we have learned through Edward Snowden’s information should have been a wake-up call for everyone. Privacy matters – and the links between the commercial gathering and holding of data and the kind of surveillance done by the authorities are complex and manifold. If we care about privacy in relation to anyone – the authorities, businesses, other individuals, advertisers, employers, criminals and so on – then we need to build a more privacy-friendly infrastructure that protects us from all of these. That means thinking more deeply, and considering more radical options – and yes, that even means the right to be forgotten, for all its flaws, risks and complications. More thought is needed, and more action – and we must understand the sources of information here, the nature of those contributing to the debate and so forth.

Anyway, this isn’t a ‘real’ blog post about the subject – I’ve done enough of them in the last week. What I want to do here is provide links to what I’ve written and said in the last week, as well as to my academic contributions to the subject, both past and present, and then to link to Julia Powles’ excellent curation of the academic blogs and articles written by many people in the aftermath of the judgment.

Here’s what I’ve written:

For CNN, a summary of the judgment and its implications, written the same day as the judgment.

For the Justice Gap, a day later, looking at the judgment in context and asking whether it was a ‘good’ or a ‘bad’ thing for internet freedom.

My interview for CBC (Canada)’s Day 6 programme – talking about the implications, and examining the right for a non-European audience.

For my own blog, looking at Google’s options for the future and suggesting that the judgment isn’t the end of the world.

Also for my own blog, a day later, trying to put the judgment into context – it’s not about paedophiles and politicians, and it won’t be either a triumph or a disaster.

This last piece may in some ways be the most important – because already there’s a huge amount of hype being built up, and scare stories are being leaked to the media at a suspiciously fast rate. There are huge lobbies at play here, particularly from the ‘big players’ on the internet like Google, who will face significant disruption and significant costs as a result of the ruling, and seem to want to make sure that people view the conflict as one of principle, rather than one of business. People will rally behind a call to defend freedom of expression much more easily than they will behind a call to defend Google’s right to make money, particularly given Google’s taxation policies.

Then here are my academic pieces on the subject.

‘A right to delete?’ from 2011, for the European Journal of Law and Technology. This is an open access piece, suggesting a different approach.

‘The EU, the US and the Right to be Forgotten’, published in early 2014, a chapter in a Springer book on data protection reform arising from the CPDP conference in Brussels in 2013. This, unfortunately, is not open access, but a chapter in an expensive book. It does, however, deal directly with some of the lobbying issues.

The right to be forgotten – and my particular take on it, the right to delete, is also discussed at length in my recently released book, Internet Privacy Rights. There’s a whole chapter on the subject, and it’s part of the general theme.

Finally, here’s a link to Julia Powles’ curation of the topic. This is really helpful – a list of what’s been written by academics over the last week or so, with a brief summary of each piece and a link to it. Some of the academics contributing are from the very top of the field,  including Viktor Mayer-Schönberger, Daniel Solove and Jonathan Zittrain. All the pieces are worth a read.

This subject is far from clear cut, and the debate will continue on, in a pretty heated form I suspect, for quite some time. Probably the best thing that could come out of it, in my opinion, is some more impetus for the completion of the data protection reform in the EU. This reform has been struggling on for some years, stymied amongst other things by intense lobbying  by Google and others. That lobbying will have to change tack pretty quickly: it’s no longer in Google’s interests for the reform to be delayed. If they want to have a more ‘practical’ version of the right to be forgotten in action, the best way is to be helpful rather than obstructive in the reform of the data protection regime. A new regime, with a well balanced version of the right incorporated, would be in almost everyone’s best interests.

The Right to be Forgotten: Neither Triumph Nor Disaster?

“If you can meet with triumph and disaster
And treat those two imposters just the same”

Those are my two favourite lines from Kipling’s unforgettable poem, ‘If’. They have innumerable applications – and I think another one right now. The Right to be Forgotten, about which I’ve written a number of times recently, is being viewed by some as a total disaster, by others as a triumph. I don’t think either is right: it’s a bit of a mess, it may well end up costing Google a lot of time, money and effort, and it may be a huge inconvenience to Data Protection Authorities all over Europe, but in the terms that people have mostly been talking about – privacy and freedom of expression – it seems to me that it’s unlikely to have nearly as big an impact as some have suggested.

Paedophiles and politicians – and erasure of the past

Within a day or two of the ruling, the stories were already coming out about paedophiles and politicians wanting to use the right to be forgotten to erase their past – precisely the sort of rewriting of history that the term ‘right to be forgotten’ evokes, but that this ruling does not provide for. We do need to be clear about a few things that the right will NOT do. Where there’s a public interest, and where an individual is involved in public life, the right does not apply. The stories going around right now are exactly the kind of thing that Google can and should refuse to erase links to. If Google don’t, then they’re just being bloody-minded – and can give up any claims to be in favour of freedom of speech.

Similarly, we need to be clear that this ruling only applies to individuals – not to companies, government bodies, political parties, religious bodies or anything else of that kind. We’re talking human rights here – and that means humans. And, because of the exception noted above, that only means humans not involved in public life. It also only means ‘old’, ‘irrelevant’ information – though what defines ‘old’ and ‘irrelevant’ remains to be seen and argued about. There are possible slippery slope arguments here, but it doesn’t, at least on the face of it, seem to be a particularly slippery kind of slippery slope – and there’s also not that much time for it to get more slippery, or for us to slip down it, because as soon as the new data protection regime is in place, we’ll almost certainly have to start again.

We still can’t hide

Conversely, this ruling won’t really allow even us ‘little people’ to be forgotten very successfully. The ruling only allows for the erasure of links on searches (through Google or another search engine) that are based on our names. The information itself is not erased, and other forms of search can still find the same stories – that is, ‘searches’ using something other than a search engine, and even uses of search engines with different terms. You might not be able to find stories about me by searching for ‘Paul Bernal’ but still be able to find them by searching under other terms – and creative use of terms could even be automated.

There already are many ways to find things other than through search engines – whether it be crowdsourcing via Twitter or another form of search engine, employing people to look for you, or even creating your own piece of software to trawl the web. This latter idea has probably occurred to some hackers, programmers or entrepreneurs already – if the information is out there, and it still will be, there will be a way to find it. Stalkers will still be able to stalk. Employers will still be able to investigate potential employees. Credit rating agencies will still be able to find out about your ancient insolvency.
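Just to underline how low the bar for that kind of automation is, here’s a rough, purely illustrative sketch in Python – the name and the extra terms are invented, and I’ve deliberately left out the actual search call, since any real search endpoint would be an assumption on my part:

```python
# Purely illustrative: generating name-based query variants that a determined
# searcher could feed into any search tool. The name and terms are made up.
from itertools import product

def query_variants(full_name, context_terms):
    """Build simple permutations of a name plus contextual keywords."""
    parts = full_name.split()
    name_forms = [
        full_name,                      # "Jane Example"
        " ".join(reversed(parts)),      # "Example Jane"
        f"{parts[0][0]}. {parts[-1]}",  # "J. Example"
    ]
    # Pair every form of the name with every contextual keyword
    return [f'"{name}" {term}' for name, term in product(name_forms, context_terms)]

if __name__ == "__main__":
    for query in query_variants("Jane Example", ["insolvency", "court", "2010"]):
        # Feeding these into a search engine or crawler is the trivial part,
        # so that step is deliberately omitted here.
        print(query)
```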

…but ‘they’ will still be able to hide

Some people seem to think that this right to be forgotten is the first attempt to manipulate search results or to rewrite history – but it really isn’t. There’s already a thriving ‘reputation management’ industry out there, which, for a fee, will tidy up your ‘digital footprint’, seeking out and destroying (or at least relegating to the obscurity of the later pages of your search results) disreputable stories, and building up those that show you in a good light. The old industry of SEO – search engine optimisation – did and does exactly that, from a slightly different perspective. That isn’t going to go away – if anything it’s likely to increase. People with the power and knowledge to be able to manage their reputations will still be able to do so.

On a slightly different tack, criminals and scammers have always been able to cover their tracks – and will still be able to. The old cat-and-mouse game between those wanting to hide their identities and those wanting to uncover them will still go on. The ‘right to be forgotten’ won’t do anything to change that.

But it’s still a mess?

It is, but not, I suspect, in the terms that people are thinking about. It will be a big mess for Google to comply with the ruling, though stories are already going round that they’re building systems to allow people to apply online for links to be removed, so they might well already have had contingency plans in place. It will be a mess for data protection authorities (DPAs), as it seems that if Google refuse to comply with your request to erase a link, you can ask the DPAs to adjudicate. DPAs are already vastly overstretched and underfunded – and lacking in people and expertise. This could make their situation even messier. It might, however, also be a way for them to demand more funding from their governments – something that would surely be welcome.

It’s also a huge mess for lawyers and academics, as they struggle to get their heads around the implications and the details – but that’s all grist to the mill, when it comes down to it. It’s certainly meant that I’ve had a lot to write about and think about this week….

 

It’s not the end of the world as we know it….


Over the weekend, I was asked by CNN if I would be able to write something about the ruling that was due on the right to be forgotten – it was expected on Tuesday, they told me. I said yes, partly because I’m a bit of a sucker for a media gig, and partly because I thought it would be easy. After all, we all knew what the CJEU was going to say – the Advocate-General’s opinion in June last year had been clear and, frankly, rather dull, absolving Google of responsibility for the data on third party websites and denying the existence of the right to be forgotten.

On Monday, which was a relatively free day for me, I drafted something up on the assumption that the ruling would follow the AG’s opinion, as they generally do. On Tuesday morning, however, when the ruling came out, all hell broke loose. When I saw the press release I was doing a little shopping – and I actually ran back from the shops straight home to try to digest what the ruling meant. I certainly hadn’t expected this – and I don’t know anyone in the field who had. The ruling was strong and unequivocally against Google – and it said, clearly and simply, that we do have a right to be forgotten.

I rewrote the piece for CNN – it’s here – and the main feeling I had was that this would really shake things up. I still think that – but I also think this isn’t the end of the world as we know it, despite some pretty apocalyptic suggestions going around the internet.

On the positive side, the ruling effectively says that individuals (and only individuals, not corporations, government bodies or other institutions) can ask Google to remove links (and not the stories themselves) that come up as a result of searches for their names. It’s a victory for the individual over the corporate – in one way. The most obvious negative side is that it could reduce our ability to find information about other individuals – but there are other risks attached too. Most of those concern what Google does next – and that’s something which, for the moment, Google seem to be keeping very close to their chest.

On the surface, Google’s legal options seem very limited – there’s no obvious route of appeal, as the CJEU is the highest court. If they don’t comply, they could find themselves losing case after case after case – and there could be thousands of cases. There are already more than 200 in Spain alone, and this ruling effectively applies throughout Europe. If they do choose to comply, how will they do so? Will they create a mechanism to allow individuals to ask for things to be unlinked automatically? Will they ‘over-censor’ by taking things down at a simple request? They already do something rather like that when YouTube videos are accused of breaching copyright.

My suspicion is that one thing they will do is tweak their algorithm to reduce the number of possible cases – they will look at the kinds of search results that are likely to trigger requests, and try to reduce those automatically. That could mean, for example, setting their systems so that older stories have even less priority than before – producing an effect similar to Viktor Mayer-Schönberger’s ‘expiry dates’ for data, something that in my opinion might well be beneficial in the main. It could also mean, however, placing less priority on things like insolvency actions (the specific case that the ruling arose from was about debts) or other financial events, which would not have such a beneficial effect. Indeed, it could well be seen as detrimental.
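To be clear, this is pure speculation on my part about what such a tweak might look like – nothing here reflects Google’s actual ranking – but a crude ‘expiry date’ effect is easy enough to sketch; the half-life figure is entirely invented:

```python
# Hypothetical sketch of 'expiry date'-style ranking decay: older results are
# progressively downweighted. The half-life is an invented illustration.
from datetime import datetime

def decayed_score(relevance, published, now, half_life_days=3 * 365):
    """Halve a result's effective score every `half_life_days` after publication."""
    age_days = max((now - published).days, 0)
    return relevance * 0.5 ** (age_days / half_life_days)

# Two equally 'relevant' stories, one recent, one from years ago
now = datetime(2014, 7, 1)
print(round(decayed_score(1.0, datetime(2014, 1, 1), now), 2))  # ~0.89 - stays near the top
print(round(decayed_score(1.0, datetime(2007, 1, 1), now), 2))  # ~0.18 - sinks down the results
```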

The bigger risk, however, is to Google’s business model. Complying with this ruling could end up very costly – it effectively asks Google to make a kind of judgment call of privacy vs public interest, and making those kinds of calls is very difficult algorithmically. It might mean employing people – and people are expensive and slow… and reduce profits. Threatening Google’s business model doesn’t just threaten Google’s shareholders – it threatens the whole ‘free services for data’ approach to the net, and that’s something we all (in general) benefit from. I don’t currently think this threat is that big – but we’re all still digesting the possibilities.

One other possible result – in the longer term – which I would hope to see (though I’m not holding my breath) is less of a reliance on search, and on Google in particular. There are other ways to find information on the internet, ways that this ruling would not have an impact on. One of the most direct is crowdsourcing via something like Twitter – these days I get more of my information through Twitter than I do through Google. If you have a body of informed, intelligent and helpful people out there who are scouring the internet for information in their own particular way, they can supply you with information in a very different way from Google. They can bypass the filters that Google already put in place, and the biases that Google has (but pretends not to have) – with your own connections there are of course other biases, but they’re more obvious and out in the open.

Indeed, I would also hope that this ruling is the start of our having a more objective view of what Google is – though the reactions of some that this ruling is the end of the world suggest rather the opposite. Further, we should start to think more about the kind of internet we want to have – and how to get it. I would hope that those bemoaning the censorship that this ruling might bring are equally angry about the censorship that our government in the UK, and many others around the world, have already brought in inside the Trojan Horse of ‘porn filters’. That kind of censorship, in my opinion, offers far more of a threat to freedom of expression than the idea of a right to be forgotten. If we’re really keen on freedom of expression, we should be up in arms about that – but we mostly seem to be acquiescing to it with barely a murmur.

What this ruling actually results in is yet to be seen – but if we’re positive and creative it can be something positive rather than something negative. It should be seen as a start, and not an end.

Dear Larry and Mark….

Larry Page, Google

Mark Zuckerberg, Facebook

8th June, 2013

Dear Larry and Mark

The PRISM project

I know that you’ve been as deeply distressed as I have by the revelations and accusations released to the world about the PRISM project – and I am delighted by the vehemence and clarity with which you have denied the substance of the reports insofar as they relate to your services. The zeal with which you wish to protect your users’ privacy is highly commendable – and I’m looking forward to seeing how that zeal produces results in the future. To find that the two of you, the leaders of two of the biggest providers of services on the internet, are so clearly in favour of individual privacy on the internet is a wonderful thing for privacy advocates such as myself. There are, however, a few ways that you could make a slightly more direct contribution to that individual privacy – and seeing the depth of feeling in your proclamations over PRISM, I feel sure that you will be happy to act on them.

Do Not Track

As I’m sure you’re aware, people are concerned not just about governments tracking their activities on the net, but others tracking them too – not least since it appears clear from the PRISM project that if commercial organisations track people, governments might try to get access to that tracking, and perhaps even succeed. As you know, the Do Not Track initiative was designed with commercial tracking in mind – but it has become a little bogged down since it began, and looks as though it might be far less effective than it could be. You could change that – put your considerable power into making it strong and robust, very clearly ‘do not track’ rather than ‘do not target’, and most importantly ensure that do not track is on by default. As you clearly care about the surveillance of your users, I know that you’ll want them not to be tracked unless they actively choose to let advertisers track them. That’s the privacy-friendly way – and as supporters of privacy, I’m sure you’ll want to support that. Larry, in particular, I know this is something you’ll want to do: as perhaps the world leader in advertising – and now also in privacy – your support will be both welcome and immensely valuable.
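(For what it’s worth, honouring the signal is hardly an engineering challenge for teams of your calibre. Here’s a tiny, purely illustrative sketch – the DNT: 1 request header is the real mechanism; the decision function wrapped around it is entirely hypothetical, not anyone’s actual ad-serving logic:)

```python
# Illustrative only: respecting the Do Not Track request header ('DNT: 1' is
# what a browser actually sends; the logic around it is hypothetical).
def should_track(request_headers):
    """Track only if the user has NOT asked not to be tracked.

    A genuinely privacy-friendly default would require an explicit opt-in
    before any tracking at all, rather than treating silence as consent.
    """
    return request_headers.get("DNT") != "1"

print(should_track({"DNT": "1"}))  # False - the user has opted out
print(should_track({}))            # True  - no preference sent (opt-out model)
```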

Anonymity – no more ‘real names’ policies

As the UN Special Rapporteur on Freedom of Expression and Opinion, Frank La Rue, recently reported, privacy – and in particular anonymity – is a crucial underpinning of freedom of expression on the internet. I’m sure you will have read his report – and will have realised that your insistence on people using real names when they use your services is a mistake. I imagine, indeed, that you’re already preparing to reverse those policies, and to come out strongly for people’s right to use pseudonyms – particularly you, Mark, as Facebook is so noted for its ‘real names’ policy. As supporters of privacy, there can’t be any other way – and now that you’re both so clearly in the privacy-supporting camp, I feel confident that you’ll make that choice. I’m looking forward to the press releases already.

Data Protection Reform

As supporters of privacy, I know you’ll be aware of the current reform programme going on with the European data protection regime – data protection law is strongly supportive of individual privacy, and may indeed be the most important legal protection for privacy in the world. You might be shocked to discover that there are people from both of your companies lobbying to weaken and undermine that reform – so I’m sure you’ll tell them at once to stop that lobbying, and instead to get solidly behind those looking for better protection for individual privacy and stronger rights for people to protect themselves from tracking and misuse of their data. As you are now the champions of individual privacy, I’m sure you’ll be delighted to do so – and I suspect memos have already been issued from your desks to those lobbying teams, ordering them to change tack and support rather than undermine individuals’ rights over their data. I know that those pushing for this reform will be delighted by your new-found support.

That support, I’m sure, will build on Eric Schmidt’s recent revelation that he thinks the internet needs a ‘delete’ button – so you’ll be backing Viviane Reding’s ‘right to be forgotten’ and doing everything you can to build in easy ways for people to delete their accounts with you, to remove all traces of their profiling and related data and so on.

Geo-location, Facial Recognition and Google Glass

Your new-found zeal for privacy will doubtless also be reflected in the way that you deal with geo-location and facial recognition – and, in Larry’s case, with Google Glass. Of course you’ve probably had privacy very much in the forefront of your thoughts in all of these areas, but just haven’t yet chosen to talk about it. Moving away from products that gather location data by default, and cutting back on facial recognition except where people really need it and have given clear and properly informed consent, will doubtless be built into your new programs – and, Larry, I’m sure you’ll find some radical way to cut down on the vast array of privacy issues associated with Google Glass. I can’t quite see how you can at the moment, but I’m sure you’ll find a way, and that you’re devoting huge resources to doing so.

Supporting privacy

We in the privacy advocacy field are delighted to have you on our side now – and look forward greatly to seeing that support reflected in your actions, and not just in relation to government surveillance. I’ve outlined some of the ways that this might be manifested in reality – I am waiting with bated breath to see it all come to fruition.

Kind regards

Paul Bernal

P.S. Tongue very firmly in cheek

What Muad’Dib can teach us about personal data…

With all the current debate about the so-called ‘right to be forgotten’, I thought I’d post one of my earlier, somewhat less than serious takes on the matter. A geeky take. A science fiction take…

I’ve written about it before in more serious ways – both in blogs (such as the two part one on the INFORRM blog, part 1 here and part 2 here) and in an academic paper (here, in the European Journal of Law and Technology) – and I’ve ranted about it on this blog too (‘Crazy Europeans!?!’).

This, however, is a very different take – one I presented at the GiKii conference in Gothenburg last summer. In it I look back at that classic of science fiction, Dune. There’s a key point in the book, a key issue, that has direct relevance to the question of personal data. As the protagonist, Paul Muad’Dib, puts it:

“The power to destroy a thing is the absolute control over it.”

In the book, Muad’Dib has the power to destroy the supply of the spice ‘Melange’, the most valuable commodity in the Dune universe. In a similar manner, if a way can be found for individuals to claim the right to delete personal data, control over that data can begin to shift from businesses and governments back to the individuals.

Here’s an animated version of the presentation I gave at GiKii…

This is what it’s supposed to suggest…

Melange in Dune

In Frank Herbert’s Dune series, the most essential and valuable commodity in the universe is melange, a geriatric drug that gives the user a longer life span, greater vitality, and heightened awareness; it can also unlock prescience in some humans, depending upon the dosage and the consumer’s physiology. This prescience-enhancing property makes safe and accurate interstellar travel possible. Melange comes with a steep price, however: it is addictive, and withdrawal is fatal.

Personal data in the online world

In our modern online world, personal data plays a similar role to the spice melange. It is the most essential and valuable commodity in the online world. It can give those who gather and control it heightened awareness, and can unlock prescience (through predictive profiling). This prescience-enhancing property makes all kinds of things possible. It too comes with a steep price, however: it is addictive, and withdrawal can be fatal – businesses and governments are increasingly dependent on their gathering, processing and holding of personal data.

What we can learn from Muad’Dib

For Muad’Dib to achieve ascendancy, he had to assert control over the spice – we as individuals need to assert the same control over personal data. We need to assert our rights over the data – both over its ‘production’ and over its existence afterwards. The most important of these rights, the absolute control over it, is the right to destroy it – the right to delete personal data. That’s what the right to be forgotten is about – and what, in my opinion, it should be called. If we have the right to delete data – and the mechanisms to make that right a reality – then businesses and governments need to take what we say and want into account before they gather, hold or use our data. If they ride roughshod over our views, we’ll have a tool to hold them to account…

The final solution, as on Arrakis – the proper name for the planet known as ‘Dune’ – should be a balance. Production of personal data should still proceed, just as production of spice on Arrakis could still proceed, but on our own terms, and to mutual benefit. Most people don’t want a Jihad, just as Paul Atreides didn’t want a Jihad – though some may seek confrontation with the authorities and businesses rather than cooperation with them. In Dune, Paul Muad’Dib was not strong enough to prevent that Jihad – and though there has certainly been a ramping up of activism and antagonism over the last year or two, it should be possible to prevent one here. If that is to happen, an assertion of rights, and in particular rights of control over personal data, could be a key step.

A question of control – not of censorship

Looked at from this direction, the right to be forgotten (which I still believe is better understood as a right to delete) is not, as some suggest, about censorship, or about restricting free expression. Instead, it should be seen as a salvo in a conflict over control – a move towards giving netizens more power over the behemoths who currently hold sway.

If people are too concerned about the potential censorship issues – and personally I don’t think they should be, but I understand why they are – then perhaps they can suggest other ways to give people more control over what’s happening. Right now, as things like the Facebook ‘deleted’ photos issue I blogged about last week suggest, those who are in control don’t seem to be doing much to address our genuine concerns….

Otherwise, they might have to deal with the growing power of the internet community…

Facebook, Photos and the Right to be Forgotten

Another day, another story about the right to be forgotten. This time it’s another revelation about how hard it is to delete stuff from Facebook. In this case it’s photos – with Ars Technica giving an update on their original story from 2009 about how ‘deleted’ photos weren’t really deleted. Now, according to their new story, three years later, the photos they tried to remove back then are STILL there.

The Ars Technica story gives a lot more detail – and does suggest that Facebook are at least trying to do something about the problem, though without much real impact at this stage. As Ars Technica puts it:

“….with the process not expected to be finished until a couple months from now—and unfortunately, with a company history of stretching the truth when asked about this topic—we’ll have to see it before we believe it.”

I’m not going to try to analyse why Facebook has been so slow at dealing with this – there are lots of potential reasons, from the technical to the political and economic – but from the perspective of someone who’s been watching developments over the years, one thing is very important to understand: this slowness and apparent unwillingness (or even lack of interest) has had implications. Indeed, it can be seen as one of the main drivers behind the push by the European Union to bring in a ‘right to be forgotten’.

I’ve written (and most recently ranted in my blog ‘Crazy Europeans’) about the subject many times before, but I think it bears repeating. This kind of legislative approach, which seems to make some people in the field very unhappy, doesn’t arise from nothing, just materialising at the whim of a few out-of-touch privacy advocates or power-hungry bureaucrats. It emerges from a real concern, from the real worries of real people. As the Ars Technica article puts it:

“That’s when the reader stories started pouring in: we were told horror stories about online harassment using photos that were allegedly deleted years ago, and users who were asked to take down photos of friends that they had put online. There were plenty of stories in between as well, and panicked Facebook users continue to e-mail me, asking if we have heard of any new way to ensure that their deleted photos are, well, deleted.”


When people’s real concerns aren’t being addressed – and when people feel that their real concerns aren’t being addressed – then things start to happen. Privacy advocates bleat – and those in charge of regulation think about changing that regulation. In Europe we seem to be more willing to regulate than in the US, but with Facebook facing regular privacy audits from the FTC in the US, they’re going to have to start to face up to the problem, to take it more seriously.

There’s something in it for Facebook too. It’s in Facebook’s interest that people are confident that their needs will be met. What’s more, if they want to encourage sharing, particularly immediate, instinctive, impulsive sharing, they need to understand that when people do that kind of thing they can and do make mistakes – and would like the opportunity to rectify those mistakes. Awareness of the risks appears to be growing among users of these kinds of systems – and privacy is now starting to become a real selling point on the net. Google and Microsoft’s recent advertising campaigns on privacy are testament to that – and Google’s attempts to portray its new privacy policy as something positive are quite intense.

That in itself is a good sign, and with Facebook trying to milk as much as they can from the upcoming IPO, they might start to take privacy with the seriousness that their users want and need. Taking down photos when people want them taken down – and not keeping them for years after the event – would be a good start. If it doesn’t happen soon, and isn’t done well, then Facebook can expect an even stronger push behind regulation like the Right to be Forgotten. If they don’t want this kind of thing, then they need to pre-empt it by implementing better privacy, better user rights, themselves.

Crazy Europeans!?!

As anyone who pays attention to the world of data – and data privacy in particular – cannot help but be aware, those crazy Europeans are pushing some more of their mad data protection laws (a good summary of which can be found here), including the clearly completely insane ‘right to be forgotten’. Reactions have been pretty varied in Europe, but in the US they seem to have been pretty consistent, and can largely be boiled down to two points:

1) These Europeans are crazy!
2) This will all be a huge imposition on business – No fair!!!

There have been a fair few similar reactions in the UK too, and there will probably be more once the more rabidly anti-European parts of the popular press actually notice what’s going on. As I’ve blogged before, the likes of Ken Clarke have spoken up against this kind of thing before.

So I think we need to ask ourselves one question: why ARE these crazy Europeans doing all this mad stuff?

Well, to be frank, the Internet ‘industry’ has only got itself to blame. This is an industry that has developed the surreptitious gathering of people’s personal data into an art form, yet an industry that can’t keep its data safe from hackers and won’t keep it safe from government agencies. This is an industry that tracks our every move on the web and gets stroppy if we want to know when it’s happening and why. This is an industry that makes privacy policies ridiculously hard to read whilst at the same time working brilliantly on making other aspects of their services more and more user-friendly. Why not do the same to the privacy settings? This is an industry that makes account deletion close to impossible (yes, I’m talking to you, Facebook) and pulls out all the stops to keep us ‘logged in’ at all times. This is an industry that tells us that WE should be completely transparent while remaining as obscure and opaque as possible themselves. This is an industry that often seems to regard privacy as just a little problem that needs to be sidestepped – or something that is ‘no longer a social norm’ (and yes, I’m talking to you, Facebook again)…..

So…. If the internet ‘industry’, particularly in the US, doesn’t want this kind of regulation, this kind of ‘interference’ with its business models, the answer’s actually really simple: build better business models, models that respect people’s privacy! Stop riding roughshod over what we, particularly in Europe, but certainly in the US too, care deeply about. Use your brilliance in both business and technology to find a better way, rather than just moaning that we’re interfering with what you want to do. When fighting against SOPA and PIPA (and I hope ACTA too in the near future), most of the industry championed the people admirably – perhaps because the people’s interests coincided with their own. In privacy, the same is actually true, however much it may seem the other way around. In the end, the internet industry will be better off if it takes privacy seriously.

Regulation doesn’t happen just because a bunch of faceless Belgian bureaucrats have too much power and too little to do – it happens when there’s a real problem to solve. Oh, they may well go over the top, they may well use crude regulatory sledgehammers where delicate rapiers would do the job better, but they do at least try, which is more than much of the industry seems to do…

So don’t blame the crazy Europeans. Take a closer look in the mirror…

Players and Pawns in the Game of Privacy

Privacy is pretty constantly in the news at the moment. People like me can hardly take their eyes off it for a moment. This morning I was trying to do three things at once: follow David Allen Green’s evidence at the Leveson inquiry (where amongst other things he was talking about the NightJack story, which has significant privacy implications), listen to Viviane Reding talking about the new reforms to the data protection regime in Europe, and discover what was going on in the emerging story of O2’s apparent sending of people’s mobile numbers to websites visited via their mobile phones…

Big issues… and lots of media coverage… and lots of opportunities for academics, advocates of one position or other, technical experts and so forth to write/talk/tweet/blog etc on the subject. And many of us are taking the opportunity to say our bit, as we like to do. A good thing? Yes, in general – because perhaps the biggest change I’ve seen over the years I’ve been researching into the field is that the debate is wider, bringing in more people and more subjects, and getting more public attention – which must, overall, be a good thing. The more the issues are debated and thought about, the more chance there is that we can get better understanding, some sort of consensus, and find better solutions. And yet there are dangers attached to the process – because as well as the people who have valuable things to say and good, strong ethical positions to support their case, there are others with much more questionable agendas, often hidden, who would like to use others for their own purposes. Advocates, academics and experts need to guard against being used by others with very different motives.

There are particular examples happening right now. One subject that particularly interests me, about which I’ve blogged and written many times before, is the right to be forgotten. Viviane Reding has talked about it in the last few days – and there have been reactions in both directions. Both sides, it seems to me, need to be wary of being used in ways that they don’t intend:

i) Those who oppose a ‘right to be forgotten’/’right to delete’ need to be careful that they’re not being used as ‘cover’ for those whose business models depend on the holding and using of personal data. The right to delete is a threat to their business models, and they can (and probably will) use all the tools at their disposal to oppose it, including using ‘experts’ and academics. The valid concerns about censorship/free expression aren’t what those people care about – they want to be able to continue to use people’s personal data to make money. Advocates for free expression etc need to be careful that they’re not being used in that kind of way.

ii) Conversely, those who (like me) advocate for a ‘right to be forgotten’/’right to delete’ need to be careful that they’re not being used by those who wish to censor and control – because there IS a danger that a poorly written and executed right to be forgotten could be set up in that kind of way. I don’t believe that’s what’s intended by the current version, nor do I believe that this is how it would or could be used, but it’s certainly possible, and people on ‘my’ side of the argument need to be vigilant that it doesn’t go that way.

Similar arguments can be used in other fields – for example about the question of the right to anonymity. Those who (like me) espouse a right to anonymity need to be careful about not providing unfettered opportunities for those who wish to bully, to defame etc., while those who support the reverse – an internet with real name/identification systems throughout, to control access to age-sensitive sites, to deal with copyright infringement etc – need to be very careful not to be used as an excuse for setting up systems which allow control and ultimately oppression.

So what does this all mean? Should academics and other ‘experts’ simply keep out of the blogosphere and the media, and leave their musings for academic journals and unreadable books? Certainly not – but we do need to be a little more thoughtful about the agendas of those who might use us, who might misquote us, who might take us out of context and so forth. I suspect that this might have been what happened to Vint Cerf when he wrote a short while ago suggesting that internet access was not a human right. Others might well have been trying to use him… as they might well try to use any of those who write in this kind of a field. However clever we might think we are, we’re very often pawns in the game, not players.