Care.data and the community…

The latest piece of health data news – that, according to the Telegraph, the hospital records of all NHS patients have been sold to insurers – is a body-blow to the care.data scheme, but make no mistake about it: the scheme was already in deep trouble. Last week’s news that the scheme had been delayed for six months was greeted by a lot of people as good news – and quite rightly. The whole project has been mismanaged, particularly in terms of communication, and it’s such an important project that it really needs to be done right. Less haste and much more care is needed – and with the latest blow to public confidence it may well be that even with that care the scheme is doomed, and with it a key part of the UK’s whole open data strategy.

The most recent news relates to hospital data – and the details, such as we know them so far, are depressingly predictable to those who have been following the story for a while. The care.data scheme relates to data currently held by GPs – the new scandal relates to data held by hospitals, and suggests that, as the Telegraph puts it:

“a report by a major UK insurance society discloses that it was able to obtain 13 years of hospital data – covering 47 million patients – in order to help companies “refine” their premiums.”

That is, the hospital data was given or sold to insurers not in order to benefit public health or to help research efforts, but to help businesses make more money – potentially to the detriment of many thousands of individuals, and entirely without those individuals’ consent or understanding. This exemplifies some of the key risks that privacy campaigners have been highlighting over the past weeks and months in relation to care.data – and adds fuel to their already partially successful efforts. Those efforts lay behind the recently announced six-month delay – and unless the backers of care.data change their approach, this latest story may well be enough to kill the project entirely.

Underestimating the community

One of the key features of the farrago so far has been the way that those behind the project have drastically underestimated the strength, desire, expertise and flexibility of the community – and in particular the online community. That community includes many real experts, in many different fields, whose expertise strikes at the heart of the care.data story. As well as many involved in health care, there are academics and lawyers whose studies cover privacy, consent and so forth, and who have a direct interest in the subject. There are data protection professionals with real-life knowledge of data vulnerability and the numerous ways in which the health services in particular have lost data over the years – even before this latest scandal. There are computer scientists, programmers and hackers who understand in detail the risks and weaknesses of the systems proposed to ‘anonymise’ and protect our data. And there are advocates and campaigners such as Privacy International, the Open Rights Group and Big Brother Watch, who have experience of fighting – and winning – battles against privacy-invasive projects from the ID card plan to the Snoopers’ Charter.

All of these groups have been roused into action – and they know how to use the tools of a modern campaign, from tweeting and blogging to making their presence felt in the mainstream media. They’ve been good at it – and have to a great degree caught the proponents of care.data on the hop. Often Tim Kelsey, the NHS National Director for Patients and Information and leader of the care.data project, has come across as flustered, impatient and surprised at the resistance and criticism. How he reacts to this latest story will be telling.

Critical issues

Two specific issues have been particularly important: the ‘anonymisation’ of the data, and the way that the data will be sold or made available, and to whom. Underlying both of these is a more general issue – that people DO care about privacy, no matter what some may think.

“Anonymisation”?

On the anonymisation issue, academics and IT professionals know that the kind of ‘de-identification’ that care.data talks about is relatively easily reversed. Academics from the fields of computer science and law have demonstrated this again and again – from Latanya Sweeney as far back as 1997, to Arvind Narayanan and Vitaly Shmatikov’s “Robust De-anonymization of Large Sparse Datasets” in 2008, to Paul Ohm’s seminal 2009 piece “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization”. Given this, to be told blithely by NHS England that their anonymisation system ‘works’ – and to hear the public being told that it works, without question or doubt – naturally raises suspicion. There are very serious risks – both theoretical and practical – that must be acknowledged and taken into account. Right now, they seem either to be denied or glossed over – or characterised as scaremongering.
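To make the risk concrete, here is a deliberately toy sketch of the classic ‘linkage attack’ that this research describes – every record and field name here is hypothetical, and real attacks work at the scale of millions of records, but the principle really is this simple:

```python
# A toy linkage attack, of the kind Sweeney and Narayanan/Shmatikov
# describe: 'de-identified' health records are re-identified by joining
# them to a public dataset on quasi-identifiers. All data is invented.
import pandas as pd

# A 'de-identified' health dataset: names removed, quasi-identifiers kept
health = pd.DataFrame([
    {"postcode": "CB3 9AX", "birth_date": "1971-03-14", "sex": "F",
     "diagnosis": "depression"},
    {"postcode": "NR2 1TJ", "birth_date": "1980-11-02", "sex": "M",
     "diagnosis": "HIV"},
])

# A public dataset - an electoral roll, say - which does contain names
public = pd.DataFrame([
    {"name": "Jane Example", "postcode": "CB3 9AX",
     "birth_date": "1971-03-14", "sex": "F"},
])

# Re-identification is nothing more than a join on the quasi-identifiers
reidentified = health.merge(public, on=["postcode", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])  # Jane Example -> depression
```

Sweeney famously showed that ZIP code, birth date and sex alone are enough to uniquely identify the large majority of the US population – which is why blithe assurances that anonymisation ‘works’ ring so hollow.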

The sale or misuse of data

The second key issue is that of the possible sale and misuse of data – one made particularly pertinent by the most recent revelations, which have confirmed some of the worst fears of privacy campaigners. Two factors particularly come into play. The first is that the experience of the last few years, with the increasing sense of privatisation of our health services, makes many people suspicious that here is just another asset to be sold off to the highest bidder, with the profits mysteriously finding their way into the pockets of those already rich and well-connected. That, and the way that exactly who might or might not be able to access the data has remained – apparently deliberately – obscure, makes it very hard to trust those involved. And trust is really crucial here, particularly now.

Many of us – myself included – would be happy, delighted even, for our health data to be used for the benefit of public health and better knowledge and understanding, but far less happy for our data to be used primarily to increase the profits of Big Pharma and the insurance industry, with no real benefit for the rest of us at all. The latest leak seems to suggest that this is a distinct possibility.

The second factor here, and one that seems to be missed (either deliberately or through naïveté), is the number of other, less obvious and potentially far less desirable uses that this kind of data can be put to. Things like raising insurance premiums or health-care costs for those with particular conditions, as demonstrated by the most recent story, are potentially deeply damaging – but they are only the start of the possibilities. Health data could also be used to establish credit ratings, to screen job applicants, and in other related areas – and without any transparency or hope of appeal, as such things may well be calculated by algorithm, with the algorithms protected as trade secrets and the decisions made automatically. For some particularly vulnerable groups this could be absolutely critical – people with HIV, for example, who might face all kinds of discrimination. Or, to pick a seemingly less extreme and far more numerous group, people with mental health issues. Algorithms could be set up to find anyone with any kind of history of mental health issues – prescriptions for anti-depressants, for example – and filter them out of a pool of job applicants, seeing them as potential ‘trouble’. Discriminatory? Absolutely. Illegal? Absolutely. Impossible? Absolutely not – and the experience over recent years of the use of blacklists for people connected with union activity (see for example here) shows that unscrupulous employers might well not just use but encourage the kind of filtering that would ensure that anyone seen as ‘risky’ was avoided. In a climate where there are many more applicants than places for any job, discovering that you have been discriminated against is very, very hard.
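To underline quite how little effort such filtering would take, here is a deliberately trivial sketch – the applicants and data are entirely invented, and no real system is being described, but the logic genuinely is this short:

```python
# Hypothetical and illustrative only: filtering job applicants by
# prescription history. Illegal and discriminatory - and trivially easy
# once the health data is available.
ANTIDEPRESSANTS = {"sertraline", "fluoxetine", "citalopram"}

applicants = [
    {"name": "Applicant A", "prescriptions": ["antihistamine"]},
    {"name": "Applicant B", "prescriptions": ["sertraline"]},
]

def flagged(applicant):
    """True if the applicant has any history of anti-depressant use."""
    return any(p in ANTIDEPRESSANTS for p in applicant["prescriptions"])

shortlist = [a for a in applicants if not flagged(a)]
print([a["name"] for a in shortlist])  # ['Applicant A'] - B silently dropped
```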

This last part is a larger privacy issue – health data is just a part of the equation, and can be added to an already potent mix of data, from the self-profiling of social networks like Facebook to the behavioural targeting of the advertising industry to search-history analytics from Google. Why, then, does care.data matter, if all the rest of it is ‘out there’? Partly because it can confirm and enrich the data gathered in other ways – as the Telegraph story seems to confirm – and partly because it makes it easy for the profilers, and that’s something we really should avoid. They already have too much power over people – we should be reducing that power, not adding to it.

People care about privacy

That leads to the bigger, more general point. The reaction to the care.data saga so far has been confirmation that, despite what some people have been suggesting, particularly over the last few years, people really do care about privacy. They don’t want their most intimate information to be made publicly available – to be bought and sold to all and sundry, and potentially to be used against them. They have a strong sense that this data is theirs – and that they should be consulted, informed, and given some degree of control over what happens to it. They particularly don’t like the feeling that they’re being lied to. It happens far too often in far too many different parts of their lives. It makes them angry – and can stir them into action. That has already happened in relation to care.data – and if those behind the project don’t want the reaction to be even stronger, even angrier, and even more likely to finish off a project that is already teetering on the brink, they need to change their whole approach.

A new approach?

  1. The first and most important step is more honesty. When people discover that they’re not being told the truth – they don’t like it. There has been a distinct level of misinformation in the public discussion of care.data – particularly on the anonymisation issue – and those of us who have understood the issues have been deeply unimpressed by the responses from the proponents of the scheme. How they react to this latest revelation will be crucial.
  2. The second is a genuine assessment of the risks – working with those who are critical – rather than a denial that those risks even exist. There are potentially huge benefits to this kind of project – but these benefits need to be weighed properly and publicly against the risks if people are to make an appropriate decision. Again, the response to the latest story is critical here – if the authorities attempt to gloss over it, minimise it or suggest that the care.data situation is totally different, they’ll be rightly attacked.
  3. The idea that such a scheme should be ‘opt-out’ rather than ‘opt-in’ is itself questionable, for a start – though the real ‘value’ of the data is in its scale, so it is understandable that an opt-out system is proposed. For that to be acceptable, however, we as a society have to be the clear beneficiaries of the project – and so far, that has not been demonstrated. Indeed, with this latest story the reverse seems far more easily shown.
  4. To begin to demonstrate this, particularly after this latest story, a clear and public set of proposals about who can and cannot get access to the data, and under what terms, needs to be put together and debated. Will insurance companies be able to access this information? Is the access for ‘researchers’ about profits for the drugs companies or for research whose results will be made available to all? Will any drugs developed be made available at cheap prices to the NHS – or to those in countries less rich than ours? We need to know – and we need to have our say about what is or is not acceptable.
  5. Those pushing the care.data project need to stand well clear of those who might be profiting from the project – in particular the lobby groups of the insurance and drug companies and others. Vested interests need to be declared if we are to entrust the people involved with our most intimate information. That trust is already rapidly evaporating.

Finding a way?

Will they be able to do this? I am not overly optimistic, particularly as my only direct interaction with Tim Kelsey has been on Twitter, where he first accused me of poor journalism after reading my piece ‘Privacy isn’t selfish’ (I am not and have never presented myself as a journalist – as a brief look at my blog would have confirmed) and then complained that a brief set of suggestions I made on Twitter was a ‘rant’. I do rant, from time to time, particularly about politics, but that conversation was quite the opposite. I hope I caught him on a bad day – and that he’s more willing to listen to criticism now than he was then. If those behind this project try to gloss over the latest scandal, and think that this six-month delay is just a chance for them to explain to us that we are all wrong, are scaremongering, don’t understand or are being ‘selfish’, I’m afraid this project will be finished before it has even started. Things need to change – or they may well find that care.data never sees the light of day at all.

The community needs to be taken seriously – to be listened to as well as talked to – and its expertise and campaigning ability respected. It is more powerful than it might appear – and if it’s treated as a rag-tag mob of bloggers and tweeters, scaremongers, luddites and conspiracy theorists, care.data could go the way of the ID card and the Snoopers’ Charter. Given the potential benefits, to me at least that would be a real shame – and an opportunity lost.

Time to get Angry about Data Protection!

The latest revelation from the Snowden leaks has caused a good deal of amusement: the NSA has been ‘piggybacking’ on apps like Angry Birds. The images that come to mind are indeed funny – I like the idea of a Man in Black riding on the back of an Angry Bird – but there’s a serious point and a serious risk underneath it, one that’s particularly pertinent on European Data Protection Day.

The point is very simple: the NSA can only get information from ‘leaky’ apps like Angry Birds if those apps collect the information in the first place. If we want to stop the NSA gathering data about us then, ultimately, the key is to have less data out there: less data gathered, and less data gathering – by commercial entities, not just by governments. Why, you might (and should) ask, does Angry Birds need to gather so much information about you in the first place? And, more importantly, should it be able to?

This hits at the fundamental problem that underlies the whole NSA/GCHQ mass surveillance farrago. As Bruce Schneier put it, quoted here:

“The NSA didn’t wake up and say, ‘Let’s just spy on everybody.’ They looked up and said, ‘Wow, corporations are spying on everybody. Let’s get ourselves a copy.’”

If we want to stop the NSA spying, the first and most important step is to cut down on commercial surveillance. If we want the NSA to have less access to our private and personal data, we need to stop the commercial entities from having so much of our private and personal data. If the commercial entities gather and hold the data, you can be pretty sure that, one way or another, the authorities – and others – will find a way to get access to that data.

That’s where data protection should come in. One of the underlying principles of data protection is ‘data minimisation’: only the minimum of data should be held, and for the minimum length of time, for a specific purpose, one that has been explained to the people about whom the data has been gathered. Sadly, data minimisation is mostly ignored, or at best paid lip service to. It shouldn’t be – and we should be getting angry about it. Yes, we should be angry that Angry Birds is ‘leaky’ – but we should be equally angry that Angry Birds is gathering so much data about us in the first place.
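What would taking data minimisation seriously look like in practice? As a minimal sketch – the field names and retention period are my own hypothetical choices, not anything from a real app:

```python
# A sketch of data minimisation: keep only the fields the declared
# purpose needs, and only for as long as that purpose requires.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)                 # hold data only as long as needed
ALLOWED = {"score", "level", "collected_at"}   # what saving a game score needs

def minimise(record: dict) -> dict:
    """Drop every field the declared purpose does not require."""
    return {k: v for k, v in record.items() if k in ALLOWED}

def purge_expired(store: list) -> list:
    """Delete records held longer than the retention period."""
    now = datetime.now(timezone.utc)
    return [r for r in store if now - r["collected_at"] < RETENTION]

# A 'leaky' record: location and contacts are not needed to save a score
raw = {"score": 12345, "level": 7,
       "collected_at": datetime.now(timezone.utc),
       "location": "52.205,0.119", "contacts": ["alice", "bob"]}
print(minimise(raw))  # only score, level and the timestamp survive
```

The point is not the code – it’s that nothing about a game of Angry Birds requires your location or your contact list in the first place.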

Whatever happens with the reform of data protection – and the reform process has been tortuous over the last two years – we shouldn’t let it be weakened. We shouldn’t let principles like data minimisation be watered down. We should strengthen them, and fight for them. Data protection has a lot of problems, but it’s still a crucial tool to protect us – and not just from corporate intrusions, but from the excesses of the intelligence agencies and others. On European Data Protection Day we should remember that, and do our best to support it.

Privacy is not the enemy – rebooted…

Today, Saturday February 23rd 2013, is International Privacy Day. To mark it, I’ve done a re-boot of an old blog post: ‘Privacy is not the enemy’. The original post (which you can find here) came back in December 2011, after I attended an ‘open data’ event organised by the Oxford Internet Institute – but it’s worth repeating, because those of us who advocate for privacy often find ourselves having to defend against attack, as though ‘privacy’ were somehow the enemy of so much that is good.

Privacy is not the enemy

Privacy advocates are often used to being in a defensive position – trying to ‘shout out’ about privacy to a room full of avid data-sharers or supporters of business innovation above all things. There is a lot of antagonism. Those we speak to can sometimes feel that they are being ‘threatened’ – some of the recent debate over the proposed reform of the Data Protection regime has had very much that character. And yet I believe that many of those threatened are missing the point about privacy. Just as Guido Fawkes is wrong to characterise privacy just as a ‘euphemism for censorship’ (as I’ve written about before) and Paul McMullan was wrong to suggest to the Leveson Inquiry that ‘privacy is for paedos’, the idea that privacy is the ‘enemy’ of so many things is fundamentally misconceived. To a great extent the opposite is true.

Privacy is not the enemy of free expression – indeed, as Jo Glanville of Index on Censorship has argued, privacy is essential for free expression. Without the protection provided by privacy, people are shackled by the risk that their enemies, those that would censor them, arrest them or worse, can uncover their identities, find them and do their worst. Without privacy, there is no free expression. The two go hand-in-hand, particularly where those without ‘power’ are concerned – and just as privacy shouldn’t just be something available for the rich and powerful, free speech shouldn’t only be available to those robust enough to cope with exposure.

Privacy is not the enemy of ‘publicness’ – in a similar way, to be truly ‘public’, people need to be able to protect what is private. They need to be able to have at least some control over what they share, what they put into the public. If they have no privacy, no control at all, how can they know what to share?

Privacy is not the enemy of law enforcement – privacy is sometimes suggested to be a tool for criminals, something behind which they can hide. The old argument that ‘if you’ve got nothing to hide, you’ve got nothing to fear’ has been exposed as a fallacy many times – perhaps most notably by Daniel Solove (e.g. here) – but there is another side to the argument. Criminals will use whatever tools you present them with. If you provide an internet with privacy and anonymity, they’ll use that privacy and anonymity – but if you provide an internet without privacy, they’ll exploit that lack of privacy. Many scams related to identity theft are based around taking advantage of that lack of privacy. It would perhaps be stretching a point to suggest that privacy is a friend to law enforcement – but it is as much of an enemy to criminals as it is to law enforcement agencies. Properly implemented privacy can protect us from crime.

Privacy is not the enemy of security – in a similar way, terrorists and those behind what’s loosely described as cyberwarfare will exploit whatever environment they are provided with. If Western Law enforcement agencies demand that social networks install ‘back doors’ to allow them to pursue terrorists and criminals, you can be sure that those back doors will be used by their enemies – terrorists, criminals, agents of enemy states and so forth. Privacy International’s ‘Big Brother Inc’ campaign has revealed the extent to which surveillance products developed in the West are being sold to despotic and oppressive regimes – in an industry worth an estimated $5 billion a year. It’s systematic, and understandable. Surveillance is a double-edged sword – and privacy is a shield which faces many ways (to stretch a metaphor beyond its limits!). Proper privacy protection works against the ‘bad guys’ as well as the ‘good’. It’s a supporter of security, not an enemy.

Privacy is not the enemy of business – though it is the enemy of certain particular business models, just as ‘health’ is the enemy of the tobacco industry. Ultimately, privacy is a supporter of business, because better privacy increases trust, and trust helps business. Governments need to start to be clear that this is the case – and that by undermining privacy (for example through the oppressive and disproportionate attempts to control copyright infringement) they undermine trust, both in businesses and in themselves as governments. Privacy is certainly a challenge to business – but that’s merely reflective of the challenges that all businesses face (and should face) in developing businesses that people want to use and are willing to pay money for.

Privacy is not the enemy of open data – indeed, precisely the opposite. First of all, privacy should make it clear which data should be shared, and how. ‘Public’ data doesn’t infringe privacy – from bus timetables to meteorological records, from public accounts to parliamentary voting records. Personal data is just that – personal – and sharing it should happen with real consent. When is that consent likely to be given? When people trust that their data will be used appropriately. When will they trust? When privacy is generally in place. Better privacy means better data sharing.

All this is without addressing the question of whether (and to what extent) privacy is a fundamental right. I won’t get into that here – it’s a philosophical question and one of great interest to me, but the arguments in favour of privacy are highly practical as well as philosophical. Privacy shouldn’t be the enemy – it should be seen as something positive, something that can assist and support. Privacy builds trust, and trust helps everyone.

———————————-

Over the time since I first wrote this post, privacy has if anything become bigger news than it was. If Facebook launches a new product (e.g. Graph Search, about which I wrote here and here), it makes privacy a centre-piece of the launch, regardless of the true privacy impact of the product. Apple has now put privacy settings into iOS for its iPhone and iPad. Privacy is big news! Let’s mark International Privacy Day by reminding ourselves that privacy is not an enemy – quite the opposite…

Lobbyists: who pays the piper…

A few weeks ago I experienced first hand the role of lobbyists, when I saw them do their best to start steering the CREATe project in their own direction (see my blog here). In the time since then, two more issues have come up that have highlighted their significance – and why we need to be concerned. We should be looking much more carefully at their activities.

Copyright lobbyists

To recap: at CREATe it was the lobbyists for the ‘content’ industry – what might loosely be called ‘copyright’ lobbyists – who were trying to ensure that the project, which is amongst other things looking at copyright reform, did not dare to challenge their assumption that ‘piracy’ needs to be stomped on above all things. The copyright lobby is a very powerful one indeed, and has had huge influence on the policies of governments worldwide – in the UK, they still seem to have a firm grip on all the major parties, and were the key force behind the controversial Digital Economy Act. They are, however, only one of the lobby groups that we should be watching.

Advertising industry lobbyists

The second emerging issue concerns another key lobby – the online advertising industry. For privacy advocates like me, the advertising industry has often been a bit of a bête noire – behavioural advertising in particular generally works through significant invasions of privacy – but their recent activities in relation to the ‘Do Not Track’ initiative have been concerning. They’ve been fighting tooth and nail to block Microsoft’s idea that DNT should be ‘on’ by default in Internet Explorer – and according to Alexander Hanff they’ve also managed to co-opt privacy advocates to help undermine the DNT specification itself, allowing for ‘de-identified’ tracking without any kind of consent.

There’s a long way to go on this one, but I’m far from alone in thinking that they’ll manage to pretty much entirely neuter DNT. As security expert Nadim Kobeissi put it in a blog post yesterday, DNT is becoming ‘Dangerous and Ineffective’. We can largely thank advertising industry lobbyists for that.
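For those who haven’t followed the detail, the mechanism at stake is tiny: a browser with DNT enabled sends a ‘DNT: 1’ header with each request, and everything then depends on whether the server chooses to honour it. Here is a minimal sketch of a server that does – the handler and its responses are my own illustration, not any real ad server:

```python
# Sketch of honouring the (real) 'DNT: 1' request header. Nothing forces
# a server to do this - which is exactly why the specification fight matters.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok\n"
        self.send_response(200)
        if self.headers.get("DNT") != "1":
            # Only set a tracking cookie if the user has NOT opted out
            self.send_header("Set-Cookie",
                             "track_id=abc123; Max-Age=31536000")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8000), Handler).serve_forever()
```

The whole scheme is voluntary – the header is a polite request – which is why watering down the specification, or arguing endlessly about defaults, is enough to neuter it.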

‘Internet Industry’ Lobbyists

The third, and potentially most worrying, of all the recent lobbying activities to emerge is the story of US ‘internet industry’ lobbyists working to undermine the draft Data Protection Regulation. As the Telegraph reported:

“Tory MEPs ‘copy and paste Amazon and Google lobbyist text'”

As I also experienced first hand at the Computers, Privacy and Data Protection conference in Brussels earlier this month, industry lobbyists, particularly from the US, are very concerned by the proposed Data Protection Regulation – partly because, as drafted, it would give regulators the power to fine companies a meaningful amount of money: 2% of their global turnover. That is the kind of fine that would actually make a difference, and could actually make them change their activities.

Making changes….

That’s the key – indeed, the key for all three of the lobbying stories above: a resistance to change. The copyright lobbyists don’t want to have to change either their business model or their approach to enforcement. The advertising industry doesn’t want to have to change its privacy-invasive way of tracking people. The ‘internet industry’ companies don’t want to have to change their way of gathering and using people’s personal data. And in all three cases, they don’t seem to really care what people want or care about. In the copyright lobbyists’ case, as I noted in my blog at the time, they seem to be resisting even the gathering of evidence. In the other two cases, I suspect the same is true – because the more evidence that comes out, the clearer it is that people do care about privacy and don’t want to be tracked.

It’s not US vs EU

One of the most common arguments made in these cases is that it’s some kind of a Transatlantic conflict – a ‘cultural difference’ between the US and the EU, with we in Europe trying to ‘impose’ our values onto the US. Is it true? Well, the most recent evidence suggests otherwise – indeed, it suggests that people in the US care every bit as much as people in Europe do about privacy. According to a recent survey, 77% of Americans would select ‘do not track’ if it were available – putting them above many European countries, below only France. As David Meyer put it: ‘Think Europeans are more into data privacy than Americans? Think again.’

I suspect he’s right – and the divide isn’t a Transatlantic one. It’s a divide between individuals everywhere and the industry lobbyists. Lobbyists, by their nature, look out for those they’re lobbying on behalf of. Of course they do – that’s their job. We need to understand that – and act appropriately. What the lobbyists do should worry us – because they don’t serve our interests. Who pays the piper calls the tune – and it’s not us!

That’s not to say that they don’t have legitimate interests – they do! What the industries they represent do is crucial for all of us, for the future of the internet. However, it does need to be balanced, and right now it looks very much out of balance.

Google, privacy and a new kind of lawsuit

Today is Data Privacy Day – and a new lawsuit has been launched against Google in the UK, one which highlights a number of key issues. It could be very important – a ‘landmark case’ according to a report on Reuters. The most notable thing about the case, for me, is that it is consumer-led: UK consumers are no longer relying on the authorities, and the Information Commissioner’s Office in particular, to safeguard their privacy. They’re taking it into their own hands.

The case concerns the way that Google exploited a bug in Apple’s Safari browser to enable it to bypass customers’ privacy settings. As reported on Reuters:

“Through its DoubleClick adverts, Google designed a code to circumvent privacy settings in order to deposit the cookies on computers in order to provide user-targeted advertising. The claimants thought that cookies were being blocked on their devices because of Safari’s strict default privacy settings and separate assurances being given by Google at the time. This was not the case.”

The group of consumers has engaged noted media and telecoms lawyers Olswang for the case. Dan Tench, the partner at Olswang responsible for the case, told Reuters:

“Google has a responsibility to consumers and should be accountable for the trust placed in them. We hope that they will take this opportunity to give Safari users a proper explanation about what happened, to apologise and, where appropriate, compensate the victims of their intrusion.”

For further information – and if you want to join the action – Tench can be contacted by email at daniel.tench@olswang.com

There’s also a Facebook page for the suit: https://www.facebook.com/SafariUsersAgainstGooglesSecretTracking

What’s important here?

The case highlights several crucial aspects of privacy on the net. The first is the extent to which we can – or should be able to – rely on the settings we make on our browsers. What was happening here is that those settings were being overridden. Now it’s a moot point quite how many people use their privacy settings – or indeed even know that they exist – but if those settings are being overridden by anyone, let alone a company as big and respected as Google, it’s something that we need to know about and to fight. Browser settings – and privacy settings in general – are the key control, perhaps the only control, that individuals have over their online privacy, so we need to know that they work if we are to have any trust. A lack of trust is something that damages everyone.

The second is that the case highlights that users aren’t going to take things lying down – and neither are they going to rely on what often seem to be supine regulators, regulators unwilling to take on the ‘big boys’ of the internet, regulators who seem to take their role as supporters of business much more seriously than their role as protectors of the public. Alexander Hanff, a privacy advocate who is assisting Olswang on this case, said:

“This group action is not about getting rich by suing Google, this lawsuit is about sending a very clear message to corporations that circumventing privacy controls will result in significant consequences. The lawsuit has the potential of costing Google £10s of millions, perhaps even breaking £100m in damages given the potential number of claimants – making it the biggest group action ever launched in the UK. It should also be seen as a message to the Information Commissioner’s Office that they are in contempt of the British public and are not doing their job.”

This last point is crucial – though it may suggest not that the Information Commissioner’s Office is not doing its job, but that its job is one that needs redefining. The ICO sometimes appears to be caught between two stools – its role is more complex than simply protecting the public. It is not a Privacy Commissioner’s Office – and perhaps that is what we need: an office with teeth, whose prime task is to protect individuals’ privacy.

What happens next?

This lawsuit will be watched very carefully by everyone in the field of online privacy. The number of people who join the case is one question – there are plenty who could, as Safari, though a somewhat niche browser on computers, is the default browser on iPhones, and so is used by many millions in the UK. How it progresses has yet to be seen – there are many different possibilities. If nothing else, I hope it acts as a wake-up call for all involved: Google, the ICO, and the public.

In praise of regulation….

Travelling back from the Computers, Privacy and Data Protection conference in Brussels, I had a fascinating conversation with someone who was there right at the beginning of data protection. It was a conversation that revealed a great deal to me, first of all about the process towards the reform of the Data Protection regime, but more importantly about the whole process of reform.

The conversation was about the negotiations that led up to the adoption of the initial Data Protection Directive, back in 1995 – nearly 20 years ago – a process that has great parallels with the current, somewhat agonising process as we work towards a new data protection regime. This is something that needs proper academic study – but even at first glance the echoes are very strong. As it was outlined to me, the process had two very direct parallels with the current negotiations:

  1. Businesses were lobbying very heavily, and making predictions of total disaster: data protection was going to destroy business, ruin lives, and so on.
  2. The UK government was supporting their lobbying – helping them directly in an attempt to undermine, weaken or possibly destroy the directive.

Exactly the same seems to be happening this time around – the business lobbying is if anything even heavier and more doom-laden, and the government has been laying it on just as thick, with speeches and reports coming thick and fast, most recently suggesting that the whole ‘regulation’ approach is inappropriate.

Now, the first time around, despite the doom-mongering, the business world didn’t come to an end. Data protection hasn’t brought about the end of the world as we know it – indeed, for something created nearly 20 years ago, in a world where technology has been changing with incredible rapidity, data protection has, in my opinion, shown remarkable resilience and continuing relevance.

The world didn’t end….

If the world didn’t come to an end that time around, is it any more likely to this time around? It doesn’t seem likely – so we should take all the moaning, groaning and doom-mongering about the new regulation, and in particular about things like the right to be forgotten which is part of that regulation, with huge pinches of salt.

There are, of course, many other reasons that the world didn’t come to an end as a result of the introduction of data protection. The first, and perhaps most important, is that IT itself developed in such a way as to either circumvent the ‘disadvantages’ of data protection regulation, or as to make compliance with data protection easier. The latter could be said to be an advantage of the legislation – it brought about the beginnings of what’s now known as ‘privacy by design’. That is, systems were designed with compliance in mind – which is a good thing, if you believe in the aims of the legislation.

The former, that people found ways to circumvent the disadvantages, is also closely connected with the other main reason that data protection legislation didn’t cause the end of the world: many people simply ignored it, and went along their own merry way, either undetected or willing to take the consequences of any detection.

Will it be any different this time around?

All of these key factors – that systems will develop to make compliance easy or to avoid the legislation, or that people will just not bother to comply – are pretty much as likely to happen this time around as last time. IT will develop – it always does – and in all kinds of unpredictable ways. People will find ways to avoid, circumvent, or comply with the legislation in other ways – they always do. It will be the same cat-and-mouse story as before – as it is in pretty much all areas of law. Ultimately, though, life will go on.

Of course businesses moan – and of course they fear regulation, because regulation challenges them. It challenges them to change – because change is needed. That’s the real point. Regulation doesn’t arise in a vacuum, just because some bureaucrats have decided they want to wrap us all in a bit more red tape. Regulation arises, in general, because there’s a problem that needs addressing. Sometimes it arises because businesses or people have been behaving in ways that they shouldn’t, or ways that threaten the rights of others. Sometimes it arises because new technology or new situations demand it.

In the case of the new data protection regime, it’s a bit of both of these – some businesses are doing things they shouldn’t. They’re invading our privacy in ways that they really shouldn’t, and ways that do threaten our rights. And the technology has changed, and those changes need addressing. So we need a new regulation – and we shouldn’t be so afraid of it. Regulation isn’t all bad – indeed, it’s very often quite the opposite. Good, robust regulation helps those it regulates – as data protection has, in general, helped over the years.

Yes, regulation will challenge some business models – but business models NEED to be challenged. Some may even fail – but, frankly, some businesses need to fail. We shouldn’t be overly concerned by that – and we shouldn’t bend over backwards to support them, as we seem to do all too often. Phorm is an example in this field which springs immediately to mind…

New regulation can help support new and better businesses – and businesses that are positive and forward-looking, that build business models that respect the privacy and rights of their customers, could find that new regulations offer new opportunities. Better businesses could get competitive advantages by behaving well, rather than by behaving badly. It’s all too easy for systems to support the unethical businesses over those that are ethical and supportive of their customers, as the last few years have demonstrated all too graphically.

…so let’s embrace regulation – even privacy regulation – and see how it can help us, rather than fighting it and fearing it. That doesn’t help anyone. The new proposed Data Protection Regulation has a lot going for it – and being more positive about it, working with it, trying to understand it rather than trying to undermine it, is much more likely to get a good result, both for people and for businesses.

Scrambling for safety?

This afternoon I was at ‘Scrambling for Safety’ – a fascinating conference focussing on the proposed ‘Communications Capabilities Development Programme’, aptly if not entirely accurately dubbed the ‘snoopers’ charter’ by the media. The conference was organised by Privacy International, the Open Rights Group, the Foundation for Information Policy Research and Big Brother Watch – and had a truly stellar line-up, from Ross Anderson and Shami Chakrabarti to MPs David Davis, Julian Huppert and Tom Brake, David Smith from the ICO, Professor Douwe Korff, former Chief Police Officer Sir Chris Fox QPM, noted cryptographer Whit Diffie and industry expert and representative Trefor Davies. Some of the best and most expert people from many different areas in the field.

Overall, it was a remarkable conference – I’m not going to try to summarise what people said, just to pick out some of the key things I took away from the event. Some lessons, some observations, some confirmations of what we already knew – and, sadly, some huge barriers that will need to be overcome if we are to be successful in beating this hugely misguided and highly dangerous project.

  1. There are a LOT of people from all fields who are deeply concerned with this. The number of people – and the kind of people – who took their time to attend, at short notice, was very impressive.
  2. This problem really does matter – I know I go on about privacy and related subjects a lot, but when I attend an event like this, and listen to these kinds of people talk, it reminds me how much is at stake.
  3. The work of Privacy International, the Open Rights Group and Big Brother Watch needs to be applauded and supported! Getting this kind of an event to work in such a way was brilliant work – and Gus Hosein (PI), Eric King (PI), Jim Killock (ORG), Nick Pickles (BBW) and their colleagues did an excellent job.
  4. David Davis is really impressive – and I say that as someone generally diametrically opposed to his political views. On this subject, he really does get it, and in a way that almost no other politician in this country gets it.
  5. As David Davis said, it really isn’t a party political issue – I’ve blogged before about this (here) but what happened at Scrambling for Safety made it even clearer than before. All the parties have their problems…
  6. …and one of them was made crystal clear, by the very, very disappointing performance of Tom Brake MP, a Lib Dem MP and spokesperson on the issue. He seemed to offer nothing but a repeat of exactly the kind of propaganda spouted by apologists for the security lobby ad nauseam over the last decade or more. In fact, he said pretty much everything that Gus Hosein, in his opening to the conference, said that official spokespeople would say by way of misdirection and obfuscation. If Tom Brake is a representative of the ‘better-informed’ of MPs, we really are in trouble. It wasn’t just that his performance seemed that of a ‘yes-man’ or ‘career politician’, but that he simply didn’t seem to understand the issues, concerns, or even the technology involved.
  7. Julian Huppert, also from the Lib Dems, was far more impressive – but of course he has no ‘official’ position. That seems to be the problem: anyone who understands this kind of thing is not ‘allowed’ to be involved in the decision-making process: or perhaps once they do get involved in any ‘official’ capacity, they lose (or have stripped away from them) the capacity for independent thought…
  8. The police are NOT the enemy here – in fact, former Chief Constable Sir Chris Fox was one of the most impressive speakers, putting a strong case against this kind of thing from the perspective of the police. In the end, the police don’t really want this kind of thing any more than privacy advocates do. This kind of universal surveillance, he said, could overwhelm the police with data and detract from the kind of real police work that can actually help combat terrorism. Sir Chris was supported by another police officer in the audience – a former Special Branch officer – who confirmed all of Sir Chris’s comments.
  9. Sir Chris Fox also made what I thought was probably the most important observation about the whole counter-terrorism issue: that we have to accept there WILL be more terrorist incidents – but that this is balanced by the benefits we have from living in a free society.
  10. The problem of ignorance matters on all levels – and in many different directions: technological, legal, practical, political. That’s the real problem here. People are pushing policies that they don’t understand, to deal with problems with which they have no real experience or knowledge…. politicians, civil servants, etc, etc, etc
  11. I was very interested that Ross Anderson (who was excellent, as always) expects us to be able to defeat the CCDP – because once people understand what is at stake, they won’t accept it. He did, however, suggest that once we’ve defeated this, the next stage will be harder to defeat – that the security lobby will try to work through the providers directly, asking (for example) Google, Facebook etc to install ‘black boxes’ on their own systems, rather than through ISPs… and some of these providers will just do it… that’s harder to know about, and harder to combat.
  12. Last, but far from least, David Davis made the point that though people who know and understand these issues are few and far between (though very well represented at the conference!), they can punch above their weight – the very fact that ‘we’ know how to use social media etc means that we can have more of an impact than our numbers might suggest.

This last point is the one that stayed with me the most. We really NEED to punch above our weight – there’s a huge job to do. There was a great deal of energy, enthusiasm and expertise evident at Scrambling for Safety, but even by the end of the afternoon it was losing a bit of focus. We need to be focussed, coordinated and ‘clever’ in how we do this. Surveillance must be kept in the headlines – and we mustn’t let the kind of misdirection and distraction that politicians and their spin-doctors use far too often divert us from fighting against this.

What’s more, again as David Davis said, we don’t just need to stop this CCDP, we need to reverse the trend. The powers in RIPA, the data retention already done under the Data Retention Directive, are already too much – they need to be cut back, not extended or ‘modernised’. It will be a huge task – but one worth doing.

Truth and lies, policy and practice…

Last week it struck me that we were entering a new phase in the way that privacy is dealt with on the net. Two of the biggest players, Google and Facebook, have made significant shifts in their ‘privacy policy’ – shifts that have got some people up in arms.

I’m not going to go through the new policies in detail – lots of people have already done that, and in Google’s case in particular close legal investigation by the French data protection authority CNIL is underway. No, what interests me is something different. Is the biggest change in both Google and Facebook’s case actually something that we should be greeting with a little more positivity? Is it just that now they’re both telling a bit more of the truth? Showing a bit more of that transparency that we privacy advocates are always talking about?

Brutal Honesty

Taking Google first, the key change in their policy, it seems to me, is that they’re admitting to data aggregation. That is, they’re openly acknowledging – indeed in some ways trumpeting – the fact that they’re now bringing together the data they gather from all of their various services, and using it together. Google has a vast array of different services, from search to Gmail, their various ‘location’ services (Google Earth, Google Street View, Google Maps etc.), YouTube, Picasa, and of course Google+, so from their own perspective this makes perfect sense. Many of us in the privacy field have suspected (or even assumed) that they’ve always been doing this, or something like it – and their previous privacy policies have been vague enough or ambiguous enough that they could be read to make this sort of thing possible. Now, it seems to me, they’re being more open about it – more honest, more transparent.

That, of course, doesn’t make it any more ‘legal’ or ‘acceptable’ as a policy. Indeed, I wouldn’t be at all surprised if the CNIL investigation concludes that the new policy breaches EU data protection law – but, in reality, I wouldn’t have been at all surprised if the old policies, investigated properly, had been found in breach of EU data protection law. Even more pertinently, as I shall suggest below, I wouldn’t be at all surprised if Google’s practices, rather than their policies, were in breach of data protection law. They may well still be…

Moving on to Facebook, there is a bit of a hoo-haa about their changing the name of their ‘privacy policy’ to a ‘data use policy’. Again, it seems to me, this is actually a bit more honest, a bit more transparent. Facebook’s policy was always to use your data. Indeed, that’s the whole basis of their business model – and why we get to use Facebook for free. They give us the service, we let them use our data. For Facebook to admit that is a good thing, surely? If they’re more honest about what they do, we can make better informed decisions about whether to use them or not. If there is anyone out there who uses Facebook and doesn’t realise that Facebook are using their data – then they should be picked up and shaken, and told!

Facebook’s policy is to use your data, not to protect your privacy – isn’t it better to be open and say that?

Google’s policy is to aggregate all of your data – isn’t it better for them to be open and say that?

Policy and Practice

Finally, it should be remembered that policies are just words – what really matters isn’t what companies like Google and Facebook say they’re doing, but what they actually do. Very few people read privacy (or data use!) policies anyway. We don’t want companies to treat changing a privacy policy as a matter of good legal drafting – it should be a reflection of changing the way they actually operate: how they actually gather, hold and use our data, how they monitor us, profile us, target us and so forth. I hope that the investigation by the CNIL looks properly at that – and that the regular FTC privacy audits of both Facebook and Google do the same. I wouldn’t say I’m exactly optimistic that they will…

….at least not this time. However, I do suspect that the increase in awareness about privacy issues by both individuals and authorities is one of the reasons that policy and practice may be getting closer. Facebook and Google seem to be being more honest and open about how they deal with privacy – because they are realising that they may have to be. We’re starting to at least try to hold them to account. That must be a good thing.

Time for a change?

I attended the Westminster eForum this morning. The subject was the new Data Protection Framework, and there was a stellar cast of speakers and panellists, from the estimable Peter Hustinx (the European Data Protection Supervisor), the MoJ’s Lord McNally and the ICO’s David Smith to representatives of Facebook, Google, the online advertising industry, computer security experts Symantec, Which?, and top lawyers Allen and Overy.

Most of the forum was fairly predictable – strong and excellent stuff from Hustinx defending the new framework, even suggesting it might not go far enough in some places, alongside the expected (if carefully worded) attempts to undermine it from the politicians and most of the business people. The latter were generally disappointing in one particular way: very few of them seemed to grasp the ultimate purpose of the regulation, or the real reasons for its existence. They didn’t seem to have asked themselves two key questions: why has this regulation come about in the first place, and what is its underlying purpose?

Why has this regulation come about?

The two are of course linked – and missing the point of both is similarly linked. So why has this regulation come about? Well, we heard a lot of history this morning, all about how much had changed since the original data protection regime came into existence in 1995. All of it was undoubtedly true – the internet as it now exists was close to inconceivable back in 1995, and what we do now both as individuals and as businesses has completely changed. Is that why the regulation needed to change? In a way, of course it is – but thinking along those lines is missing the bigger point. Why was data protection regulation needed in the first place, back in 1995, and what was its intention then?

Ultimately, there were (and still are) two purposes. As Hustinx and others (including an excellent intervention from Douwe Korff) stressed, it is about what we (in Europe at least) consider to be fundamental rights. Ilias Chantzos of Symantec made the point that the original intention was to enable better cross-border data flow – and indeed both are clearly the case. Fundamental rights need protecting, and data needs to be allowed (or even encouraged) to flow, but in accordance with those rights.

All that is well and good – but still begs the underlying question: why was data protection needed? Regulation generally comes about because there is a problem – and that is the case here.

The problem was twofold: data was not flowing as freely as it should have been, and fundamental rights were not being protected. In particular, privacy was not being respected.

What has changed in the intervening period? Well, there doesn’t seem to be as much of a problem of data flowing as there used to be – but there’s still a problem of privacy not being respected. That, more than anything else, is what lies behind the need for the new regulation. That’s why the regulation is tough. If there aren’t big problems, there’s no need for tough regulation.

We have a tough regulation here – because there ARE big problems.

How do you comply with regulation?

This is where the real problem seemed to lie for me. All the businesses want to know how to comply with regulations – but they don’t seem to understand the real point. These kinds of regulations aren’t really supposed to be about ticking boxes, or finding the right words to describe your activities in order to comply with the technical details of the relevant laws. Nigel Parker from Allen and Overy gave a very revealing and detailed picture of how he had to navigate some of his multinational clients through the complexities of the different international regulations concerning data protection – but he seemed not to want to offer one particular piece of advice. He didn’t seem to want to tell his clients that they might well have to change what they do – or perhaps even decide not to do it at all.

The purpose of the very existence of these regulations is to make businesses (and governments) change what they do, or at least how they do it.

Changes!

Protecting fundamental rights when those rights are being infringed does not mean filling boxes or writing reports. It means changing what you do. Let me repeat that. It means changing what you do.

The approach to regulations seems generally to be more like ‘we’re going to do this, now help us comply with the regulations’ than ‘what do the regulations suggest is inappropriate – let’s not do it’. That’s not the real point – the point is that compliance should come by doing the right thing, not by trying to shape your ‘wrong’ thing into a form that ticks the boxes. Only the impressive Anthony House from Google seemed to grasp that – and to suggest that Google wants to do the ‘right’ thing about privacy not because the law says it should, but because it’s a good thing to do, and because its users want these kinds of things. Whether Google is actually doing this is a slightly moot point – but he did seem to understand.

Change is hard, everyone knows that – but the first stage is recognition that change is necessary. If you find that your business, or your government department, can’t seem to comply with the regulations, don’t complain about the regulations – ask yourself why your activities don’t seem to comply. Could it be that you need to change? It could, you know, it could….

Ready to Rumble?

This morning I attended a lecture given by European Commissioner Viviane Reding – and I have to say I was impressed. The lecture was at my old Alma Mater, the LSE, with the estimable Professor Andrew Murray in the chair, and was officially about the importance of data protection in keeping businesses competitive – but in practice it turned out to be a vigorous defence of the new Data Protection Regulation. Commissioner Reding was robust, forthright – and remarkably straightforward for someone in her position.

Her speech started off by looking at the changes that have taken place since the original Data Protection Directive – which was brought in in 1995. She didn’t waste much time – most of the changes are pretty much self-evident to anyone who’s paid much attention, and she knew that her audience wasn’t the kind that would need to be told. The key, though, was that she was looking from the perspective of business. The needs of businesses have changed – and as she put it, the new regulation was designed to meet those needs.

The key points from this perspective will be familiar to most who have studied the planned regulation. First and foremost, because it is a regulation rather than a directive, it applies uniformly throughout the EU, creating both an even playing field and a degree of certainty. Secondly, it is intended to remove ‘red tape’ – multinational companies will only have to deal with the data protection authorities in the country that is their primary base, rather than having to deal with a separate authority for each country they operate in. Taken together, she said that the administrative burden for companies would go down by 2.3 billion Euro a year. It was very direct and clear – she certainly seems to believe what she’s saying.

She also made the point (which she’s made before) that the right to be forgotten, which has received a lot of press, and which I’ve written about before (ad nauseam, I suspect), is NOT a threat to free expression, and not a tool for censorship, regardless of how that point seems to be misunderstood or misrepresented. The key, as she described it, is to understand that no rights are absolute, and that they have to compete with other rights – and they certainly don’t override them. As I’ve also noted before, this is something that isn’t really understood in the US as well as it is in Europe – the American ‘take’ on rights is much more absolutist, which is one of the reasons Americans accept as ‘rights’ a much narrower range of things than most of the rest of the world.

I doubt her words on the right to be forgotten will cut much mustard with the critics of the right on either side of the Atlantic – but I’m not sure that will matter that much to Commissioner Reding. She’s ready for a fight on this, it seems to me, and for quite a lot else besides. Those who might be expecting her to back down, to compromise, I think are in for a surprise. She’s ready to rumble…

The first and biggest opponent she’s ready to take on looks like being Google. She name-checked them several times both in the speech and in her answers to questions. She talked specifically about the new Google privacy policy – coming into force today – and in answer to a question I asked about the apparent resistance of US companies to data protection she freely admitted that part of the reason for the form and content of the regulation is to give the Commission teeth in its dealings with companies like Google. Now, she said, there was little that Europe could do to Google. Each of the individual countries in the EU could challenge Google, and each could potentially fine Google. ‘Peanuts’ was the word that she used about these fines, freely acknowledging that she didn’t have the weapons with which to fight. With the new regulations, however, they could fine Google 2% of their worldwide revenue. 560 million euro was the figure she quoted: enough to get even Google to stand up and take notice.
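The arithmetic behind that figure is simple enough – assuming, as the numbers imply, that it was calculated against a worldwide revenue for Google of roughly €28 billion: 2% of €28 billion is €560 million. ‘Peanuts’ it is not.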

She showed no sign of backing down on cookies either – reiterating the need for explicit, informed consent whenever data is gathered, including details of the purposes to which the data is to be put. She seemed ready for a fight on that as well.

Overall, it was a combative Commissioner who took to the lectern this morning – and I was impressed. She’s ready for the fight, whether businesses and governments want it or not. As I’ve blogged elsewhere, the UK government doesn’t share her enthusiasm for a strengthening of data protection, and the reaction from the US has been far from entirely positive either. Commissioner Reding had a few words for the US too, applauding Obama’s moves on online privacy (about which I’ve blogged here) but suggesting that the US is a good way behind the EU in dealing with privacy. They’re still playing catch-up, talking about it and suggesting ideas, but not ready to take the bull by the horns yet. We may yet lead them to the promised land, seemed to be the message… and only with her tongue half in her cheek.

She’s not going to give up – and neither should she, in my opinion. This is important stuff, and it needs fighting for. She’s one of the ‘Crazy Europeans’ about whom I’ve written before – but we need them. As @spinzo tweeted to me, there’s ‘nothing more frightening than a self-righteous regulator backed by federal fiat and federal coffers’ – but I’d LIKE some of the companies involved in privacy-invasive practices around the net to be frightened. If they behaved in a bit more of a privacy-friendly way, we wouldn’t need the likes of Commissioner Reding to be ready to rumble. They don’t – and we do!