Privacy-friendly judges?

Yesterday’s ruling by the Supreme Court of the United States, requiring the police to get a warrant before accessing a suspect’s mobile phone data, was remarkable in many ways. It demonstrated two things in particular that fit within a recent pattern around the world, one which may have quite a lot to do with the revelations of Edward Snowden. The first is that the judiciary has the willingness and strength to support privacy rights in the face of powerful forces; the second is an increasing understanding that privacy, in these technologically dominated days, is not the simple thing that it was in the past.

The stand-out phrase in the ruling is remarkable in its clarity:

13-132 Riley v. California (06/25/2014)

“Modern cell phones are not just another technological convenience. With all they contain and all they may reveal, they hold for many Americans “the privacies of life,” Boyd, supra, at 630. The fact that technology now allows an individual to carry such information in his hand does not make the information any less worthy of the protection for which the Founders fought. Our answer to the question of what police must do before searching a cell phone seized incident to an arrest is accordingly simple— get a warrant.”

Privacy advocates around the world have been justifiably excited by this – not only is the judgment a clearly privacy-friendly one, but it effectively validates some of the critical ideas that many of us have been trying to get the authorities to understand for a long time. Most importantly, that the way that we communicate these days, the way that we use the internet and other forms of communication, plays a far more important part in our lives than it did in the past. The emphasis on the phrase ‘the privacies of life’ is a particularly good one. This isn’t just about communication – it’s about the whole of our lives.

The argument about cell-phones can be extended to all of our communications on the internet – and the implications are significant. As I’ve argued before, the debate needs to be reframed to take into account the new ways that we use communications – privacy these days isn’t as easily dismissed as it was before. It’s not about tapping a few phone calls or noting the addresses on a few letters that you send – communications, and the internet in particular, pervade every aspect of our lives. The authorities in the UK still don’t seem to get this – but the Supreme Court of the US does seem to be getting there, and it’s not alone. The last few months have seen a series of quite remarkable cases, each of which demonstrates that judges are starting to get a real grip on the issues, and are willing to take on the powerful groups with a vested interest in downplaying the importance of privacy:

  • The ECJ ruling invalidating the Data Retention Directive on 8th April 2014
  • The ECJ Google Spain ruling on the ‘Right to be Forgotten’ on 13th May 2014
  • The Irish High Court referring Max Schrems’ case against Facebook to the ECJ, on 19th June 2014

These three cases all show similar patterns. They all involve individuals taking on very powerful groups – in the data retention case, taking on pretty much all the security services in Europe, in the other two the internet giants Google and Facebook respectively. In all three cases – as in the Supreme Court of the US yesterday – the rulings are fundamentally about the place that privacy plays, and the priority that privacy is given. The most controversial statement in the Google Spain case makes it explicit:

“As the data subject may, in the light of his fundamental rights under Articles 7 and 8 of the Charter, request that the information in question no longer be made available to the general public on account of its inclusion in such a list of results, those rights override, as a rule, not only the economic interest of the operator of the search engine but also the interest of the general public in having access to that information upon a search relating to the data subject’s name” (emphasis added)

That has been, of course, highly controversial in relation to freedom of information and freedom of expression, but the first part, that privacy overrides the economic interest of the operator of the search engine, is far less so – and the fact that it is far less controversial does at least show that there is a movement in the privacy-friendly direction.

The invalidation of the Data Retention Directive may be even more significant – and again, it is based on the idea that privacy rights are more important than security advocates in particular have been trying to suggest. The authorities in the UK are still trying to avoid implementing this invalidation – they’re effectively trying to pretend that the ruling does not apply – but the ruling itself is direct and unequivocal.

As for the decision in the Irish High Court to refer the ‘Europe vs Facebook’ case to the ECJ, the significance of that has yet to be seen, but Facebook may very well be deeply concerned – because, as the two previous cases have shown, the ECJ has been bold and unfazed by the size and strength of those it might be challenging, and willing to make rulings that have dramatic consequences. The Irish High Court is the only one of the three courts to make explicit mention of the revelations of Edward Snowden, but I do not think that it is too great a leap to suggest that Snowden has had an influence on all the others. Not a direct one – but a raising of awareness, even at the judicial level, of the issues surrounding privacy, why they matter, and how many different things are at stake. A willingness to really examine the technology, to face up to the ways in which the ‘new’ world is different from the old – and a willingness to take on the big players.

I may well be being overly optimistic, and I don’t think too much should be read into this, but it could be critical. The law is only one small factor in the overall story – but it is a critical one, and if people are to begin to take back their privacy, they need to have the law at least partly on their side, and to have judges who are able and willing to enforce that law. With this latest ruling, and the ones that have come over the last few months, the signs are more positive than they have been for some time.

 

Addendum: As David Anderson has pointed out, the UK Supreme Court showed related tendencies in last week’s ruling over the disclosure of past criminal records in job applications, in R (T) v SSHD [2014] UKSC 35 on 18th June. See the UKSC Blog post here.

The Right to be Forgotten: Neither Triumph Nor Disaster?

“If you can meet with triumph and disaster
And treat those two imposters just the same”

Those are my two favourite lines from Kipling’s unforgettable poem, ‘If’. They have innumerable applications – and I think they have another one right now. The Right to be Forgotten, about which I’ve written a number of times recently, is being viewed by some as a total disaster, by others as a triumph. I don’t think either is right: it’s a bit of a mess, it may well end up costing Google a lot of time, money and effort, and it may be a huge inconvenience to Data Protection Authorities all over Europe, but in the terms that people have mostly been talking about it, privacy and freedom of expression, it seems to me that it’s unlikely to have nearly as big an impact as some have suggested.

Paedophiles and politicians – and erasure of the past

Within a day or two of the ruling, the stories were already coming out about paedophiles and politicians wanting to use the right to be forgotten to erase their past – precisely the sort of rewriting of history that the term ‘right to be forgotten’ evokes, but that this ruling does not provide for. We do need to be clear about a few things that the right will NOT do. Where there’s a public interest, and where an individual is involved in public life, the right does not apply. The stories going around right now are exactly the kind of thing that Google can and should refuse to erase links to. If Google don’t, then they’re just being bloody minded – and can give up any claims to be in favour of freedom of speech.

Similarly, we need to be clear that this ruling only applies to individuals – not to companies, government bodies, political parties, religious bodies or anything else of that kind. We’re talking human rights here – and that means humans. And, because of the exception noted above, that only means humans not involved in public life. It also only means ‘old’, ‘irrelevant’ information – though what defines ‘old’ and ‘irrelevant’ remains to be seen and argued about. There are possible slippery slope arguments here, but it doesn’t, at least on the face of it, seem to be a particularly slippery kind of slippery slope – and there’s also not that much time for it to get more slippery, or for us to slip down it, because as soon as the new data protection regime is in place, we’ll almost certainly have to start again.

We still can’t hide

Conversely, this ruling won’t really allow even us ‘little people’ to be forgotten very successfully. The ruling only allows for the erasure of links on searches (through Google or another search engine) that are based on our names. The information itself is not erased, and other forms of search can still find the same stories – that is, ‘searches’ using something other than a search engine, and even uses of search engines with different terms. You might not be able to find stories about me by searching for ‘Paul Bernal’ but still be able to find them by searching under other terms – and creative use of terms could even be automated.

There already are many ways to find things other than through search engines – whether it be crowdsourcing via Twitter or another form of search engine, employing people to look for you, or even creating your own piece of software to trawl the web. This latter idea has probably occurred to some hackers, programmers or entrepreneurs already – if the information is out there, and it still will be, there will be a way to find it. Stalkers will still be able to stalk. Employers will still be able to investigate potential employees. Credit rating agencies will still be able to find out about your ancient insolvency.

…but ‘they’ will still be able to hide

Some people seem to think that this right to be forgotten is the first attempt to manipulate search results or to rewrite history – but it really isn’t. There’s already a thriving ‘reputation management’ industry out there, who for a fee will tidy up your ‘digital footprint’, seeking out and destroying (or at least relegating to the obscurity of the later pages on your search results) disreputable stories, and building up those that show you in a good light. The old industry of SEO – search engine optimisation – did and does exactly that, from a slightly different perspective. That isn’t going to go away – if anything it’s likely to increase. People with the power and knowledge to be able to manage their reputations will still be able to.

On a slightly different tack, criminals and scammers have always been able to cover their tracks – and will still be able to. The old cat-and-mouse game between people wanting to hide their identity and people wanting to uncover those hiding them will still go on. The ‘right to be forgotten’ won’t do anything to change that.

But it’s still a mess?

It is, but not, I suspect, in the terms that people are thinking about. It will be a big mess for Google to comply, though stories are already going round that they’re building systems to allow people to apply online for links to be removed, so they might well already have had contingency plans in place. It will be a mess for data protection agencies (DPAs), as it seems that if Google refuse to comply with your request to erase a link, you can ask the DPAs to adjudicate. DPAs are already vastly overstretched and underfunded – and lacking in people and expertise. This could make their situation even messier. It might, however, also be a way for them to demand more funding from their governments – something that would surely be welcome.

It’s also a huge mess for lawyers and academics, as they struggle to get their heads around the implications and the details – but that’s all grist to the mill, when it comes down to it. It’s certainly meant that I’ve had a lot to write about and think about this week….

 

It’s not the end of the world as we know it….


Over the weekend, I was asked by CNN if I would be able to write something about the ruling that was due on the right to be forgotten – it was expected on Tuesday, they told me. I said yes, partly because I’m a bit of a sucker for a media gig, and partly because I thought it would be easy. After all, we all knew what the CJEU was going to say – the Advocate-General’s opinion in June last year had been clear and, frankly, rather dull, absolving Google of responsibility for the data on third party websites and denying the existence of the right to be forgotten.

On Monday, which was a relatively free day for me, I drafted something up on the assumption that the ruling would follow the AG’s opinion, as they generally do. On Tuesday morning, however, when the ruling came out, all hell broke loose. When I saw the press release I was doing a little shopping – and I actually ran back from the shops straight home to try to digest what the ruling meant. I certainly hadn’t expected this – and I don’t know anyone in the field who had. The ruling was strong and unequivocally against Google – and it said, clearly and simply, that we do have a right to be forgotten.

I rewrote the piece for CNN – it’s here – and the main feeling I had was that this would really shake things up. I still think that – but that this isn’t the end of the world as we know it, despite some pretty apocalyptic suggestions going around the internet.

On the positive side, the ruling effectively says that individuals (and only individuals, not corporations, government bodies or other institutions) can ask Google to remove links (and not the stories themselves) that come up as a result of searches for their names. It’s a victory for the individual over the corporate – in one way. The most obvious negative side is that it could reduce our ability to find information about other individuals – but there are other risks attached too. Most of those concern what Google does next – and that’s something which, for the moment, Google seem to be keeping very close to their chest.

On the surface, Google’s legal options seem very limited – there’s no obvious route of appeal, as the CJEU is the highest court. If they don’t comply, they could find themselves losing case after case after case – and there could be thousands of cases. There are already more than 200 in Spain alone, and this ruling effectively applies throughout Europe. If they do choose to comply, how will they do so? Will they create a mechanism to allow individuals to ask for things to be unlinked automatically? Will they ‘over-censor’ by taking things down at a simple request? They already do something rather like that when YouTube videos are accused of breaching copyright.

My suspicion is that one thing they will do is tweak their algorithm to reduce the number of possible cases – they will look at the kinds of search results that are likely to trigger requests, and try to reduce those automatically. That could mean, for example, setting their systems so that older stories have even less priority than before – producing an effect similar to Viktor Mayer-Schönberger’s ‘expiry dates’ for data, something that in my opinion might well be beneficial in the main. It could also mean, however, placing less priority on things like insolvency actions (the specific case that the ruling arose from was about debts) or other financial events, which would not have such a beneficial effect. Indeed, it could well be seen as detrimental.
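The ‘expiry dates’ idea can be sketched in a few lines. The following is purely illustrative – the half-life figure and function names are my own invention, not anything Google has said it does – but it shows how simply an age-based decay could be bolted onto an existing relevance score:

```python
from datetime import date

def decayed_score(base_relevance: float, published: date,
                  half_life_days: float = 365.0,
                  today: date = date(2014, 6, 1)) -> float:
    """Halve a result's ranking weight every `half_life_days` since publication."""
    age_in_days = (today - published).days
    return base_relevance * 0.5 ** (age_in_days / half_life_days)

# A ten-year-old story with the same base relevance as a month-old one
# ends up with roughly a thousandth of its ranking weight.
old_story = decayed_score(1.0, date(2004, 6, 1))
new_story = decayed_score(1.0, date(2014, 5, 1))
```

With a one-year half-life, a decade-old insolvency report would sink far down the results without anyone having to request anything – which is precisely both the appeal and the danger described above.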

The bigger risk, however, is to Google’s business model. Complying with this ruling could end up very costly – it effectively asks Google to make a kind of judgment call of privacy vs public interest, and making those kinds of calls is very difficult algorithmically. It might mean employing people – and people are expensive and slow… and reduce profits.  Threatening Google’s business model doesn’t just threaten Google’s shareholders – it threatens the whole ‘free services for data’ approach to the net, and that’s something we all (in general) benefit from. I don’t currently think this threat is that big – but we’re still digesting the possibilities, I think.

One other possible result – in the longer term – which I would hope to see (though I’m not holding my breath) is less of a reliance on search, and on Google in particular. There are other ways to find information on the internet, ways that this ruling would not have an impact on. One of the most direct is crowdsourcing via something like Twitter – these days I get more of my information through Twitter than I do through Google. If you have a body of informed, intelligent and helpful people out there who are scouring the internet for information in their own particular way, they can supply you in a very different way to Google. They can bypass the filters that Google already put in place, and the biases that Google has (but pretends not to have) – with your own connections there are of course other biases but they’re more obvious and out in the open.

Indeed, I would also hope that this ruling is the start of our having a more objective view of what Google is – though the reactions of some that this ruling is the end of the world suggest rather the opposite. Further, we should start to think more about the kind of internet we want to have – and how to get it. I would hope that those bemoaning the censorship that this ruling might bring are equally angry about the censorship that our government in the UK, and many others around the world, have already brought in inside the Trojan Horse of ‘porn filters’. That kind of censorship, in my opinion, offers far more of a threat to freedom of expression than the idea of a right to be forgotten. If we’re really keen on freedom of expression, we should be up in arms about that – but we mostly seem to be acquiescing to it with barely a murmur.

What this ruling actually results in is yet to be seen – but if we’re positive and creative it can be something positive rather than something negative. It should be seen as a start, and not an end.

Why I’ll be voting Green – and it’s not about the environment!

The forthcoming European elections are important in a lot of ways – but one of them, for me a critical one, is barely making the news. This European election could be crucial for privacy – and that’s one of the main reasons that I will be voting Green in May.

There are many, many issues coming into the public debate on the European elections within the UK. Our very future in the EU, for a start. The spectre of the rise of UKIP – whose campaign poster launched over the Easter weekend was truly vile and xenophobic. The likely humiliation for the Lib Dems – their duplicity and complicity in the nastiness of the Coalition government has not been forgotten, and neither should it be. The subject of privacy – and in particular data privacy – has barely been mentioned – and therein lies the problem. We in the UK do not, in general, take privacy seriously at all. The limpness of our reaction to the revelations of Edward Snowden is just one example. The way the government is attempting to sell health data (and more recently HMRC trying to sell our tax data) is another: privacy is very low on the priority list. Data protection is the best example of all – the UK resisted proper data privacy from the start, and has continued to campaign against it, with a mixture of whinging and whining and actual undermining of the legislation: the UK’s implementation of the 1995 Data Protection Directive was flawed to say the least.

That has continued with the current drive to reform the data protection regime for the internet era. Negotiations on that reform have been going on for some years – and the UK government has been doing its best to water down the potential reform, to weaken our privacy rights as much as possible. They’re doing so still, both in public and in the background – and at a time when we need those rights, that protection, more than ever. Data protection is deeply flawed and fundamentally incomplete – but it is still a crucial part of the picture, and one of the few forms of protection that we have.

One key to the reform as it is currently set out is that it is a regulation rather than a directive – which means, in effect, that it will be automatically implemented in a uniform way across the EU. Specifically, we in the UK would not be able to produce a weaker, more ‘business-friendly’ (i.e. less privacy-friendly) implementation, with more holes in it to exploit. The UK government is still lobbying against this move – though it seems unlikely that they will succeed.

However, the reform is at a pivotal stage. The European Parliament has passed it in a fairly strong (though far from perfect) form – but with the intricacies of the European system, that does not mean that everything is finished. There are several stages to go through, and the Council of Ministers (effectively the representatives of the governments of the member states) still have a hand in it. It seems entirely likely that they will attempt to water it down – effectively to reduce our privacy protection. At that point, there will need to be as strong a European Parliament as possible to resist this. If we care about privacy, we need strong data protection – and that means we need to do our best to get a European Parliament that understands the issues and is willing and able to drive this through.

That’s where the European elections come in. It would be great if the regulation was agreed before the election – but it seems very unlikely now. That, ultimately, is the reason I shall be voting Green. The Tories have consistently tried to undermine data protection. The Lib Dems have largely done what their Tory masters have told them. Labour are just as bad – and just as much in the hands of the industry lobbies as the Tories are. UKIP are so repellent in every way that even if they were fully in favour of strong data protection reform, I would never vote for them. The Greens, on the other hand, get the issue right. It’s in the manifesto of the Green candidates in my particular area – and the Greens throughout Europe have been very positive for privacy. Jan Philipp Albrecht, a Green MEP from Germany, has played the lead role in ensuring that the reform has not been driven off track by the lobbyists.

What is more, in my region at least, a Green vote could well be effective. Because there is a form of proportional representation in the European elections, in our region, the East of England Region, it will not take that much of a swing to elect a Green MEP. People should check their own regions carefully to see whether the same would work for them. If there’s a chance, I think it’s a chance worth going for.

There are of course other excellent reasons to vote Green – but this one is quite specific and is one of the particular areas where (in my opinion) European politics and European law can really help. If left to our major political parties, we in the UK would make a godawful mess of the whole thing.

Data retention: fighting for privacy!

This morning’s news that the Court of Justice of the European Union has declared the Data Retention Directive to be invalid has been greeted with joy amongst privacy advocates. It’s a big win for privacy – though far from a knockout blow to the supporters of mass surveillance – and one that should be taken very seriously indeed. As Glyn Moody put it in his excellent analysis:

“…this is a massively important ruling. It not only says that the EU’s Data Retention Directive is illegal, but that it always was from the moment it was passed. It criticises it on multiple grounds that will make it much harder to frame a replacement. That probably won’t be impossible, but it will be circumscribed in all sorts of good ways that will help to remove some of its worst elements.”

I’m not going to attempt a detailed legal analysis here – others far more expert than me have already begun the process. These are some of the best that I have seen so far:

Fiona de Londras: http://humanrights.ie/civil-liberties/cjeu-strikes-down-data-retention-directive/

Daithí Mac Síthigh: http://www.lexferenda.com/08042014/2285/

Simon McGarr: http://www.mcgarrsolicitors.ie/2014/04/08/digital-rights-ireland-ecj-judgement-on-data-retention/

The full impact of the ruling won’t become clear for some time, I suspect – and already some within the European Commission seem to be somewhat in panic mode, looking around for ways to underplay the ruling and limit the damage to their plans for more and more surveillance and data retention. Things are likely to remain in flux for some time – but there are some key things to take from this already.

The most important of these is that privacy is worth fighting for – and that when we fight for privacy, we can win, despite what may seem overwhelming odds and supremely powerful and well-resourced opponents. This particular fight exemplifies the problems faced – but also the way that they can and are being overcome. It was brought by an alliance of digital rights activists – most notably Digital Rights Ireland – and has taken a huge amount of time and energy. It is, as reported in the Irish Times by the excellent Karlin Lillington, a ‘true David versus Goliath victory‘. It is a victory for the small people, the ordinary people – for all of us – and one from which we should take great heart.

Privacy often seems as though it is dead, or at the very least dying. Each revelation from Edward Snowden seems to demonstrate that every one of our movements is being watched at all times. Each new technological development seems to have privacy implications, and the developers of the technology often seem blissfully unaware of those implications until it’s almost too late. Each new government seems to embrace surveillance and see it as a solution to all kinds of problems, from fighting terrorism to rooting out paedophiles, from combatting the ‘evil’ of music and movie piracy to protecting children from cyberbullies or online pornography, regardless of the evidence that it really doesn’t work very well in those terms, if at all. Seeing it in that way, however, misses the other side of the equation – that more and more people are coming to understand that privacy matters, and are willing to take up the fight for privacy. Sometimes those fights are doomed to failure – but sometimes, as with today’s ruling over data retention, they can succeed. We need to keep fighting.

Care.data and the community…


The latest piece of health data news, that, according to the Telegraph, the hospital records of all NHS patients have been sold to insurers, is a body-blow to the care.data scheme, but make no mistake about it, the scheme was already in deep trouble. Last week’s news that the scheme had been delayed for six months was something which a lot of people greeted as good news – and quite rightly. The whole project has been mismanaged, particularly in terms of communication, and it’s such an important project that it really needs to be done right. Less haste and much more care is needed – and with the latest blow to public confidence it may well be that even with that care the scheme is doomed, and with it a key part of the UK’s whole open data strategy.

The most recent news relates to hospital data – and the details, such as we know them so far, are depressingly predictable to those who have been following the story for a while. The care.data scheme relates to data currently held by GPs – the new scandal relates to data held by hospitals, and suggests that, as the Telegraph puts it:

“a report by a major UK insurance society discloses that it was able to obtain 13 years of hospital data – covering 47 million patients – in order to help companies “refine” their premiums.”

That is, that the hospital data was given or sold to insurers not in order to benefit public health or to help research efforts, but to help business to make more money – potentially to the detriment of many thousands of individuals, and entirely without those individuals’ consent or understanding. This exemplifies some of the key risks that privacy campaigners have been highlighting over the past weeks and months in relation to the care.data scheme – and adds fuel to their already partially successful efforts. Those efforts lay behind the recently announced six month delay – and unless the backers of care.data change their approach, this last story may well be enough to kill the project entirely.

Underestimating the community

One of the key features of the farrago so far has been the way that those behind the project have drastically underestimated the strength, desire, expertise and flexibility of the community – and in particular the online community. That community includes many real experts, in many different fields, whose expertise strikes at the heart of the care.data story. As well as many involved in health care, there are academics and lawyers whose studies cover privacy, consent and so forth, who have a direct interest in the subject. Data protection professionals with real-life knowledge of data vulnerability and the numerous ways in which the health services in particular have lost data over the years – even before this latest scandal. Computer scientists, programmers and hackers, who understand in detail the risks and weaknesses of the systems proposed to ‘anonymise’ and protect our data. Advocates and campaigners such as Privacy International, the Open Rights Group and Big Brother Watch who have experience of fighting and winning fights against privacy-invasive projects from the ID card plan to the Snoopers Charter.

All of these groups have been roused into action – and they know how to use the tools of a modern campaign, from tweeting and blogging to making their presence felt in the mainstream media. They’ve been good at it – and have to a great degree caught the proponents of care.data on the hop. Often Tim Kelsey, the NHS National Director for Patients and Information and leader of the care.data project, has come across as flustered, impatient and surprised at the resistance and criticism. How he reacts to this latest story will be telling.

Critical issues

Two specific issues have been particularly important: the ‘anonymisation’ of the data, and the way that the data will be sold or made available, and to whom. Underlying both of these is a more general issue – that people DO care about privacy, no matter what some may think.

“Anonymisation”?

On the anonymisation issue, academics and IT professionals know that the kind of ‘de-identification’ that care.data talks about is relatively easily reversed. Academics from the fields of computer science and law have demonstrated this again and again – from Latanya Sweeney as far back as 1997 to Arvind Narayanan and Vitaly Shmatikov’s “Robust De-anonymization of Large Sparse Datasets” in 2008 and Paul Ohm’s seminal 2009 piece “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization”. Given this, to be told blithely by NHS England that their anonymisation system ‘works’ – and to hear the public being told that it works, without question or doubt – naturally raises suspicion. There are very serious risks – both theoretical and practical – that must be acknowledged and taken into account. Right now, they seem either to be denied or glossed over – or characterised as scaremongering.
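The linkage attacks these papers describe need surprisingly little machinery. The toy sketch below – all the data is invented, and it is vastly simplified compared with real attacks – joins a ‘de-identified’ health dataset to a publicly available register on quasi-identifiers. Sweeney’s famous finding was that a handful of such fields (in her case ZIP code, birth date and sex) are enough to uniquely identify most people:

```python
# Invented 'de-identified' hospital records: names stripped, but
# quasi-identifiers left in.
deidentified_records = [
    {"postcode": "NR4 7TJ", "birth_year": 1976, "sex": "M", "diagnosis": "depression"},
    {"postcode": "NR2 1AA", "birth_year": 1983, "sex": "F", "diagnosis": "asthma"},
]

# Invented public register (think: electoral roll) with names attached.
public_register = [
    {"name": "A. Example", "postcode": "NR4 7TJ", "birth_year": 1976, "sex": "M"},
    {"name": "B. Sample",  "postcode": "NR2 1AA", "birth_year": 1983, "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def reidentify(health_rows, register_rows):
    """Join the two datasets on quasi-identifiers; unique matches reveal names."""
    index = {}
    for person in register_rows:
        key = tuple(person[q] for q in QUASI_IDENTIFIERS)
        index.setdefault(key, []).append(person["name"])
    matches = {}
    for row in health_rows:
        names = index.get(tuple(row[q] for q in QUASI_IDENTIFIERS), [])
        if len(names) == 1:  # a unique match re-identifies the patient
            matches[names[0]] = row["diagnosis"]
    return matches
```

A dozen lines of joining logic, plus any open dataset that shares a few fields with the ‘anonymised’ one, and names are attached to diagnoses again. This is why blanket assurances that de-identification ‘works’ ring hollow.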

The sale or misuse of data

The second key issue is that of the possible sale and misuse of data – one made particularly pertinent by the most recent revelations, which have confirmed some of the worst fears of privacy campaigners. Two factors particularly come into play. The first is that the experience of the last few years, with the increasing sense of privatisation of our health services, makes many people suspicious that here is just another asset to be sold off to the highest bidder, with the profits mysteriously finding their way into the pockets of those already rich and well-connected. That, and the way that exactly who might or might not be able to access the data has remained – apparently deliberately – obscure, makes it very hard to trust those involved. And trust is really crucial here, particularly now.

Many of us – myself included – would be happy, delighted even, for our health data to be used for the benefit of public health and better knowledge and understanding, but far less happy for our data to be used primarily to increase the profits of Big Pharma and the insurance industry, with no real benefit for the rest of us at all. The latest leak seems to suggest that this is a distinct possibility.

The second factor here, and one that seems to be missed (either deliberately or through naïveté), is the number of other, less obvious and potentially far less desirable uses that this kind of data can be put to. Things like raising insurance premiums or health-care costs for those with particular conditions, as demonstrated by the most recent story, are potentially deeply damaging – but they are only the start of the possibilities. Health data can also be used to establish credit ratings, to inform decisions by potential employers, and in other related areas – and without any transparency or hope of appeal, as such things may well be calculated by algorithm, with the algorithms protected as trade secrets and the decisions made automatically. For some particularly vulnerable groups this could be absolutely critical – people with HIV, for example, who might face all kinds of discrimination. Or, to pick a seemingly less extreme and far more numerous group, people with mental health issues. Algorithms could be set up to find anyone with any kind of history of mental health issues – prescriptions for anti-depressants, for example – and filter them out of the pool of job applicants, seeing them as potential ‘trouble’. Discriminatory? Absolutely. Illegal? Absolutely. Impossible? Absolutely not – and the experience over recent years of the use of blacklists for people connected with union activity (see for example here) shows that unscrupulous employers might well not just use but encourage the kind of filtering that would ensure that anyone seen as ‘risky’ was avoided. In a climate where there are many more applicants than places for any job, discovering that you have been discriminated against is very, very hard.
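To see how trivially such filtering could be automated, consider this deliberately crude sketch. Everything here is hypothetical – the applicant records, the field names and the ‘flagged’ category are invented – but it shows how a few lines of opaque code could silently screen out anyone with a particular prescription history, with the rejected applicants never knowing a rule existed.

```python
# Hypothetical sketch of an opaque screening algorithm that silently
# drops applicants with any 'flagged' prescription history.
# All records and field names are invented for illustration.

applicants = [
    {"name": "A", "prescriptions": ["antibiotics"]},
    {"name": "B", "prescriptions": ["anti-depressants"]},
    {"name": "C", "prescriptions": []},
]

FLAGGED = {"anti-depressants"}

def screen(candidates):
    """Return only candidates with no flagged prescription history.
    The rule is invisible: rejected applicants are never told why."""
    return [c for c in candidates
            if not FLAGGED.intersection(c["prescriptions"])]

shortlist = screen(applicants)
print([c["name"] for c in shortlist])  # applicant B silently disappears
```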

This last part is a larger privacy issue – health data is just a part of the equation, and can be added to an already potent mix of data, from the self-profiling of social networks like Facebook to the behavioural targeting of the advertising industry to search-history analytics from Google. Why, then, does care.data matter, if all the rest of it is ‘out there’? Partly because it can confirm and enrich the data gathered in other ways – as the Telegraph story seems to confirm – and partly because it makes it easy for the profilers, and that’s something we really should avoid. They already have too much power over people – we should be reducing that power, not adding to it.

People care about privacy

That leads to the bigger, more general point. The reaction to the care.data saga so far has been confirmation that, despite what some people have been suggesting, particularly over the last few years, people really do care about privacy. They don’t want their most intimate information to be made publicly available – to be bought and sold to all and sundry, and potentially to be used against them. They have a strong sense that this data is theirs – and that they should be consulted, informed, and given some degree of control over what happens to it. They particularly don’t like the feeling that they’re being lied to. It happens far too often in far too many different parts of their lives. It makes them angry – and can stir them into action. That has already happened in relation to care.data – and if those behind the project don’t want the reaction to be even stronger, even angrier, and even more likely to finish off a project that is already teetering on the brink, they need to change their whole approach.

A new approach?

  1. The first and most important step is more honesty. When people discover that they’re not being told the truth – they don’t like it. There has been a distinct level of misinformation in the public discussion of care.data – particularly on the anonymisation issue – and those of us who have understood the issues have been deeply unimpressed by the responses from the proponents of the scheme. How they react to this latest revelation will be crucial.
  2. The second is a genuine assessment of the risks – working with those who are critical – rather than a denial that those risks even exist. There are potentially huge benefits to this kind of project – but these benefits need to be weighed properly and publicly against the risks if people are to make an appropriate decision. Again, the response to the latest story is critical here – if the authorities attempt to gloss over it, minimise it or suggest that the care.data situation is totally different, they’ll be rightly attacked.
  3. The idea that such a scheme should be ‘opt-out’ rather than ‘opt-in’ is itself questionable, for a start – though the real ‘value’ of the data is in its scale, so it is understandable that an opt-out system is proposed. For that to be acceptable, however, we as a society have to be the clear beneficiaries of the project – and so far, that has not been demonstrated. Indeed, with this latest story the reverse seems far more easily shown.
  4. To begin to demonstrate this, particularly after this latest story, a clear and public set of proposals about who can and cannot get access to the data, and under what terms, needs to be put together and debated. Will insurance companies be able to access this information? Is the access for ‘researchers’ about profits for the drugs companies or for research whose results will be made available to all? Will any drugs developed be made available at cheap prices to the NHS – or to those in countries less rich than ours? We need to know – and we need to have our say about what is or is not acceptable.
  5. Those pushing the care.data project need to stand well clear of those who might be profiting from the project – in particular the lobby groups of the insurance and drug companies and others. Vested interests need to be declared if we are to entrust the people involved with our most intimate information. That trust is already rapidly evaporating.

Finding a way?

Will they be able to do this? I am not overly optimistic, particularly as my only direct interaction with Tim Kelsey has been on Twitter, where he first accused me of poor journalism after reading my piece ‘Privacy isn’t selfish’ (I am not and have never presented myself as a journalist – as a brief look at my blog would have confirmed) and then complained that a brief set of suggestions I made on Twitter was a ‘rant’. I do rant, from time to time, particularly about politics, but that conversation was quite the opposite. I hope I caught him on a bad day – and that he’s more willing to listen to criticism now than he was then. If those behind this project try to gloss over the latest scandal, and think that this six month delay is just a chance for them to explain to us that we are all wrong, are scaremongering, don’t understand or are being ‘selfish’, I’m afraid this project will be finished before it has even started. Things need to change – or they may well find that care.data never sees the light of day at all.

The community needs to be taken seriously – to be listened to as well as talked to – and its expertise and campaigning ability respected. It is more powerful than it might appear – if it’s thought of as a rag-tag mob of bloggers and tweeters, scaremongerers, luddites and conspiracy theorists, care.data could go the way of the ID card and the Snoopers Charter. Given the potential benefits, to me at least this could be a real shame – and an opportunity lost.

Time to get Angry about Data Protection!

The latest revelation from the Snowden leaks has caused a good deal of amusement: the NSA has been ‘piggybacking’ on apps like Angry Birds. The images that come to mind are indeed funny – I like the idea of a Man in Black riding on the back of an Angry Bird – but there’s a serious point and a serious risk underneath it, one that’s particularly pertinent on European Data Protection Day.

The point is very simple: the NSA can only get information from ‘leaky’ apps like Angry Birds if those apps collect the information in the first place. If we want to stop the NSA gathering data about us, then, ultimately, the key is to have less data out there and less data gathered – and that means gathering by commercial entities, not just by governments. Why, you might (and should) ask, does Angry Birds need to gather so much information about you in the first place? And, more importantly, should it be able to?

This hits at the fundamental problem that underlies the whole NSA/GCHQ mass surveillance farrago. As Bruce Schneier put it, quoted here:

“The NSA didn’t wake up and say, ‘Let’s just spy on everybody.’ They looked up and said, ‘Wow, corporations are spying on everybody. Let’s get ourselves a copy.’”

If we want to stop the NSA spying, the first and most important step is to cut down on commercial surveillance. If we want the NSA to have less access to our private and personal data, we need to stop the commercial entities from having so much of our private and personal data. If the commercial entities gather and hold the data, you can be pretty sure that, one way or another, the authorities – and others – will find a way to get access to it.

That’s where data protection should come in. One of the underlying principles of data protection is ‘data minimisation’: only the minimum of data should be held, and for the minimum length of time, for a specific purpose – one that has been explained to the people about whom the data has been gathered. Sadly, data minimisation is mostly ignored, or at best paid lip service to. It shouldn’t be – and we should be getting angry about it. Yes, we should be angry that Angry Birds is ‘leaky’ – but we should be equally angry that Angry Birds is gathering so much data about us in the first place.
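Data minimisation is not a difficult principle to apply in practice. As a sketch – the field names and ‘purposes’ below are invented, not taken from any real app – a minimising app would whitelist only the fields a declared purpose actually requires, and discard everything else before the data leaves the device:

```python
# Sketch of data minimisation: keep only the fields needed for a declared
# purpose and drop everything else. Fields and purposes are illustrative.

PURPOSE_FIELDS = {
    "crash_reporting": {"app_version", "os_version"},
    "high_scores": {"score"},
}

def minimise(event, purpose):
    """Strip an event down to the whitelisted fields for the given purpose."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in event.items() if k in allowed}

raw_event = {
    "score": 12345,
    "app_version": "4.1.0",
    "os_version": "iOS 7.0",
    "device_id": "abc-123",      # not needed for either purpose
    "location": "52.2,0.12",     # nor is this
    "contacts_hash": "9f8e7d6c", # nor this
}

print(minimise(raw_event, "high_scores"))
# Only the score survives; the identifying fields never leave the device
```

If the identifying fields are never collected, there is nothing for the NSA – or anyone else – to piggyback on.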

Whatever happens with the reform of data protection – and the reform process has been tortuous over the last two years – we shouldn’t let it be weakened. We shouldn’t let principles like data minimisation be watered down. We should strengthen them, and fight for them. Data protection has a lot of problems, but it’s still a crucial tool to protect us – not just from corporate intrusions but from the excesses of the intelligence agencies and others. On European Data Protection Day we should remember that, and do our best to support it.