Care.data and the community…


The latest piece of health data news – that, according to the Telegraph, the hospital records of all NHS patients have been sold to insurers – is a body-blow to the care.data scheme, but make no mistake about it: the scheme was already in deep trouble. Last week’s news that the scheme had been delayed for six months was greeted by a lot of people as good news – and quite rightly. The whole project has been mismanaged, particularly in terms of communication, and it is so important a project that it really needs to be done right. Less haste and much more care is needed – and with this latest blow to public confidence it may well be that even with that care the scheme is doomed, and with it a key part of the UK’s whole open data strategy.

The most recent news relates to hospital data – and the details, such as we know them so far, are depressingly predictable to those who have been following the story for a while. The care.data scheme relates to data currently held by GPs; the new scandal relates to data held by hospitals, and suggests that, as the Telegraph puts it:

“a report by a major UK insurance society discloses that it was able to obtain 13 years of hospital data – covering 47 million patients – in order to help companies “refine” their premiums.”

That is, the hospital data was given or sold to insurers not to benefit public health or to help research efforts, but to help business make more money – potentially to the detriment of many thousands of individuals, and entirely without those individuals’ consent or understanding. This exemplifies some of the key risks that privacy campaigners have been highlighting over the past weeks and months in relation to care.data – and adds fuel to their already partially successful efforts. Those efforts lay behind the recently announced six-month delay – and unless the backers of care.data change their approach, this latest story may well be enough to kill the project entirely.

Underestimating the community

One of the key features of the farrago so far has been the way that those behind the project have drastically underestimated the strength, desire, expertise and flexibility of the community – and in particular the online community. That community includes many real experts, in many different fields, whose expertise strikes at the heart of the care.data story. As well as many involved in health care, there are academics and lawyers whose studies cover privacy, consent and so forth, and who have a direct interest in the subject. There are data protection professionals with real-life knowledge of data vulnerability and of the numerous ways in which the health services in particular have lost data over the years – even before this latest scandal. There are computer scientists, programmers and hackers who understand in detail the risks and weaknesses of the systems proposed to ‘anonymise’ and protect our data. And there are advocates and campaigners such as Privacy International, the Open Rights Group and Big Brother Watch, who have experience of fighting and winning battles against privacy-invasive projects from the ID card plan to the Snoopers’ Charter.

All of these groups have been roused into action – and they know how to use the tools of a modern campaign, from tweeting and blogging to making their presence felt in the mainstream media. They’ve been good at it – and have to a great degree caught the proponents of care.data on the hop. Often Tim Kelsey, the NHS National Director for Patients and Information and leader of the care.data project, has come across as flustered, impatient and surprised at the resistance and criticism. How he reacts to this latest story will be telling.

Critical issues

Two specific issues have been particularly important: the ‘anonymisation’ of the data, and the way that the data will be sold or made available, and to whom. Underlying both of these is a more general issue – that people DO care about privacy, no matter what some may think.

“Anonymisation”?

On the anonymisation issue, academics and IT professionals know that the kind of ‘de-identification’ that care.data talks about is relatively easily reversed. Academics from the fields of computer science and law have demonstrated this again and again – from Latanya Sweeney as far back as 1997, to Arvind Narayanan and Vitaly Shmatikov’s “Robust De-anonymization of Large Sparse Datasets” in 2008, to Paul Ohm’s seminal 2009 piece “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization”. Given this, to be told blithely by NHS England that their anonymisation system ‘works’ – and to hear the public being told that it works, without question or doubt – naturally raises suspicion. There are very serious risks – both theoretical and practical – that must be acknowledged and taken into account. Right now, they seem either to be denied or glossed over – or characterised as scaremongering.
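The mechanics of this kind of re-identification are simple enough to sketch in a few lines. The following is a toy illustration of a Sweeney-style linkage attack – all names, postcodes and records are invented for the purpose – in which a ‘de-identified’ dataset is matched against a public register on nothing more than postcode, birth date and sex:

```python
# Toy sketch of a linkage attack: an 'anonymised' dataset (names removed)
# is re-identified by joining quasi-identifiers against a separate,
# public dataset. Every record here is invented for illustration.

anonymised_health = [
    {"postcode": "NR4 7TJ", "birth_date": "1971-03-05", "sex": "F",
     "diagnosis": "depression"},
    {"postcode": "NR2 2PA", "birth_date": "1985-11-21", "sex": "M",
     "diagnosis": "asthma"},
]

public_register = [  # e.g. an electoral-roll-style dataset, with names
    {"name": "Jane Doe", "postcode": "NR4 7TJ",
     "birth_date": "1971-03-05", "sex": "F"},
    {"name": "John Smith", "postcode": "NR2 2PA",
     "birth_date": "1985-11-21", "sex": "M"},
]

def reidentify(health_records, register):
    """Match 'de-identified' records back to names via quasi-identifiers."""
    keyed = {(p["postcode"], p["birth_date"], p["sex"]): p["name"]
             for p in register}
    return [
        {"name": keyed.get((r["postcode"], r["birth_date"], r["sex"])), **r}
        for r in health_records
    ]

for match in reidentify(anonymised_health, public_register):
    print(match["name"], "->", match["diagnosis"])
```

Sweeney’s original finding was that the great majority of the US population could be uniquely identified by just such a triple of quasi-identifiers – which is why stripping out names alone does not make a dataset anonymous.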

The sale or misuse of data

The second key issue is that of the possible sale and misuse of data – one made particularly pertinent by the most recent revelations, which have confirmed some of the worst fears of privacy campaigners. Two factors particularly come into play. The first is that the experience of the last few years, with the increasing sense of privatisation of our health services, makes many people suspicious that here is just another asset to be sold off to the highest bidder, with the profits mysteriously finding their way into the pockets of those already rich and well-connected. That, together with the fact that exactly who might or might not be able to access the data has remained – apparently deliberately – obscure, makes it very hard to trust those involved. And trust is really crucial here, particularly now.

Many of us – myself included – would be happy, delighted even, for our health data to be used for the benefit of public health and better knowledge and understanding, but far less happy for our data to be used primarily to increase the profits of Big Pharma and the insurance industry, with no real benefit for the rest of us at all. The latest leak seems to suggest that this is a distinct possibility.

The second factor here – and one that seems to be missed, either deliberately or through naïveté – is the number of other, less obvious and potentially far less desirable uses to which this kind of data can be put. Things like raising insurance premiums or health-care costs for those with particular conditions, as demonstrated by the most recent story, are potentially deeply damaging – but they are only the start of the possibilities. Health data can also be used to establish credit ratings, to screen job applicants, and in other related areas – without any transparency or hope of appeal, as such things may well be calculated by algorithm, with the algorithms protected as trade secrets and the decisions made automatically. For some particularly vulnerable groups this could be absolutely critical – people with HIV, for example, who might face all kinds of discrimination. Or, to pick a seemingly less extreme and far more numerous group, people with mental health issues. Algorithms could be set up to find anyone with any kind of history of mental health issues – prescriptions for anti-depressants, for example – and filter them out of job applications, seeing them as potential ‘trouble’. Discriminatory? Absolutely. Illegal? Absolutely. Impossible? Absolutely not – and the experience over recent years of the use of blacklists for people connected with union activity (see for example here) shows that unscrupulous employers might well not just use but encourage the kind of filtering that would ensure that anyone seen as ‘risky’ was avoided. In a climate where there are many more applicants than places for any job, discovering that you have been discriminated against is very, very hard.
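To underline how low the technical bar for such filtering is – and this is precisely the point – the whole discriminatory mechanism fits in a handful of lines. A hypothetical sketch, with invented names and an invented drug list, purely for illustration:

```python
# A deliberately crude, hypothetical sketch -- all data invented --
# of the kind of automated filtering described above: trivial to
# build, run silently, and hard for anyone affected to detect.

FLAGGED_PRESCRIPTIONS = {"fluoxetine", "sertraline", "citalopram"}

applicants = [
    {"name": "applicant-1", "prescriptions": {"sertraline"}},
    {"name": "applicant-2", "prescriptions": set()},
    {"name": "applicant-3", "prescriptions": {"salbutamol"}},
]

def silently_filter(applicants):
    """Drop anyone whose (leaked) health data shows a flagged prescription.
    The rejected applicant never learns why -- there is nothing to appeal."""
    return [a["name"] for a in applicants
            if not (a["prescriptions"] & FLAGGED_PRESCRIPTIONS)]

print(silently_filter(applicants))  # -> ['applicant-2', 'applicant-3']
```

Illegal, as the text says – but nothing in the code itself would flag it as such, which is why the only effective protection is stopping the data reaching such hands in the first place.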

This last part is a larger privacy issue – health data is just a part of the equation, and can be added to an already potent mix of data, from the self-profiling of social networks like Facebook to the behavioural targeting of the advertising industry to search-history analytics from Google. Why, then, does care.data matter, if all the rest of it is ‘out there’? Partly because it can confirm and enrich the data gathered in other ways – as the Telegraph story seems to confirm – and partly because it makes it easy for the profilers, and that’s something we really should avoid. They already have too much power over people – we should be reducing that power, not adding to it.

People care about privacy

That leads to the bigger, more general point. The reaction to the care.data saga so far has been confirmation that, despite what some people have been suggesting, particularly over the last few years, people really do care about privacy. They don’t want their most intimate information to be made publicly available – to be bought and sold to all and sundry, and potentially to be used against them. They have a strong sense that this data is theirs – and that they should be consulted, informed, and given some degree of control over what happens to it. They particularly don’t like the feeling that they’re being lied to. It happens far too often in far too many different parts of their lives. It makes them angry – and can stir them into action. That has already happened in relation to care.data – and if those behind the project don’t want the reaction to be even stronger, even angrier, and even more likely to finish off a project that is already teetering on the brink, they need to change their whole approach.

A new approach?

  1. The first and most important step is more honesty. When people discover that they’re not being told the truth – they don’t like it. There has been a distinct level of misinformation in the public discussion of care.data – particularly on the anonymisation issue – and those of us who have understood the issues have been deeply unimpressed by the responses from the proponents of the scheme. How they react to this latest revelation will be crucial.
  2. The second is a genuine assessment of the risks – working with those who are critical – rather than a denial that those risks even exist. There are potentially huge benefits to this kind of project – but these benefits need to be weighed properly and publicly against the risks if people are to make an appropriate decision. Again, the response to the latest story is critical here – if the authorities attempt to gloss over it, minimise it or suggest that the care.data situation is totally different, they’ll be rightly attacked.
  3. The idea that such a scheme should be ‘opt-out’ rather than ‘opt-in’ is itself questionable, for a start – though the real ‘value’ of the data is in its scale, so it is understandable that an opt-out system is proposed. For that to be acceptable, however, we as a society have to be the clear beneficiaries of the project – and so far, that has not been demonstrated. Indeed, with this latest story the reverse seems far more easily shown.
  4. To begin to demonstrate this, particularly after this latest story, a clear and public set of proposals about who can and cannot get access to the data, and under what terms, needs to be put together and debated. Will insurance companies be able to access this information? Is the access for ‘researchers’ about profits for the drugs companies or for research whose results will be made available to all? Will any drugs developed be made available at cheap prices to the NHS – or to those in countries less rich than ours? We need to know – and we need to have our say about what is or is not acceptable.
  5. Those pushing the care.data project need to stand well clear of those who might be profiting from the project – in particular the lobby groups of the insurance and drug companies and others. Vested interests need to be declared if we are to entrust the people involved with our most intimate information. That trust is already rapidly evaporating.

Finding a way?

Will they be able to do this? I am not overly optimistic, particularly as my only direct interaction with Tim Kelsey has been on Twitter, where he first accused me of poor journalism after reading my piece ‘Privacy isn’t selfish’ (I am not and have never presented myself as a journalist – as a brief look at my blog would have confirmed) and then complained that a brief set of suggestions I made on Twitter was a ‘rant’. I do rant, from time to time, particularly about politics, but that conversation was quite the opposite. I hope I caught him on a bad day – and that he’s more willing to listen to criticism now than he was then. If those behind this project try to gloss over the latest scandal, and think that this six-month delay is just a chance for them to explain to us that we are all wrong, are scaremongering, don’t understand or are being ‘selfish’, I’m afraid this project will be finished before it has even started. Things need to change – or they may well find that care.data never sees the light of day at all.

The community needs to be taken seriously – to be listened to as well as talked to – and its expertise and campaigning ability respected. It is more powerful than it might appear – and if it is dismissed as a rag-tag mob of bloggers and tweeters, scaremongers, luddites and conspiracy theorists, care.data could go the way of the ID card and the Snoopers’ Charter. Given the potential benefits, to me at least that would be a real shame – and an opportunity lost.

Surveillance: ten ways to fight back!


Today, 11th February 2014, is ‘The Day We Fight Back’ – a day of campaigning against mass surveillance. It’s a day when campaigners are trying to raise awareness of the issue – and to begin fighting back. The big question is how we can fight back – what can we actually do? It often seems as though privacy is dead, and that there’s nothing we can do about it. I don’t think so – there are lots of things we can do, lots of things we must do. Here are just ten….

1     Support The Day We Fight Back

One of the most important things in the whole fight is to raise awareness – and to take advantage of opportunities to spread the message that surveillance is a big issue. Days like The Day We Fight Back help to do that. Check out the website here. Tweet about it. Blog about it. Talk about it with your friends and colleagues. Make it something that people notice.

2     Lobby your politicians – or unseat them!

Let the politicians know that you care about this – because, ultimately, they are supposed to be your representatives. It may not feel as though they listen to you much – but if enough people tell them the same thing, if enough people bother them, then they may finally get up off their backsides and do something. And if they don’t, use your vote against them. Politicians make a difference here – or rather they could, if they could be bothered. Most of them don’t understand what’s going on – try to educate them! Help them to understand, and don’t let them get away with bland, meaningless reassurances.

3     Don’t let the corporations off the hook!

The Snowden revelations were shocking, revealing a degree of governmental surveillance that surprised many people, and made a lot of people angry with their governments – but we shouldn’t be fooled into thinking this is just about governments, or just about specific agencies like the NSA and GCHQ. The malaise is far deeper than that – and corporations are in it right up to their necks. In many ways corporate surveillance is worse than governmental surveillance – it can have real impact on people, messing with their credit ratings and insurance premiums, affecting their job prospects, the prices they pay for things and more.

The NSA and GCHQ to a great extent piggyback on the surveillance that the corporates do, utilise the tools that the corporates create, and mine the data that the corporates hold – if the corporates weren’t doing it, the agencies couldn’t tap into it. What’s more, corporations actively lobby to undermine privacy law, obfuscate over their privacy policies, and do a lot more to undermine the whole concept of privacy. We shouldn’t accept that – let alone allow them to portray themselves as the good guys in this story. They’re not. Right now, they’re the henchmen and sidekicks of the NSA and GCHQ – if they want our support, they need to start supporting us.

4     Don’t just demand transparency – demand less surveillance!

There’s a lot of talk of transparency, particularly in relation to governmental requests for data from the likes of Google, Facebook, Twitter and so on. Transparency is great – but it’s not nearly enough. We shouldn’t let ourselves be fobbed off with talk of transparency – we need less surveillance. We need to demand that surveillance is cut back – not just that there is better accountability and transparency. Accountability often ends up in farces like the UK’s Intelligence and Security Committee’s hearing with the heads of MI5, MI6 and GCHQ – no real scrutiny at all, just a bit of lip service and a lot of back-slapping. It’s not enough. Not nearly enough.

5     Join or support civil society

Civil society groups all over the world are key players in this – and they need your support. Here in the UK, the Open Rights Group, Privacy International and Big Brother Watch have been in the forefront of the campaigns against surveillance. In the US the Electronic Frontier Foundation have been crucial. In the Netherlands Bits of Freedom have done wonders. These, however, are not groups with the scale or resources of the governments and corporations that are behind the surveillance – so they need every bit of support they can get.

6     Challenge the media!

The mainstream media, for the most part, have not played the part that they could in the fight against mass surveillance. The Guardian has been an honourable exception – its role in making sure that the Snowden story saw the light of day has been, for me, one of the most important pieces of journalism for many years – but generally the whole issue has received far less attention than it should have. That’s sadly common – because reporting of almost all technology matters is pretty disappointing. We need to challenge that – and shame the media into doing a better job. When they misreport stories about surveillance they should be challenged – using social media, for example. And, perhaps even more importantly, when they report on technology without seeing the privacy aspects, we should challenge that too. One key example right now is ‘smart meters’ – they have deep problems in relation to privacy, but reports in much of the media talk only of the advantages, not the risks. That’s not good enough.

7     Educate yourself

Part of the reason that surveillance has grown, almost without our noticing, is that far too many of us – and I’m certainly one of them – have not kept ourselves up to date. This year is supposed to be the ‘Year of Code’ – and though that campaign is pretty farcical it does highlight the fact that most of us don’t really know how the tech we use works. If we don’t know how it works, it’ll be much harder for us to protect ourselves. I’m making a commitment right now that I’m going to learn cryptography – and that I’m going to use it.

8     Use and support privacy friendly tech

That brings me to the next point. There are a lot of privacy-friendly tools out there, and we should use them. Search with DuckDuckGo or Startpage rather than Google. Use Ghostery or Abine’s DoNotTrackMe to monitor or block those who are tracking you – remembering that commercial trackers can be hijacked by the authorities. These are just a few of the tools available – and there are more coming all the time – but they need to be used in order to succeed. They need support if they are to grow.

9     Keep your eye on the news

There are more stories about surveillance and other invasions of privacy appearing all the time – keep your eye on the news for them, and let other people know about them. It’s hard to keep up, but don’t give up. Don’t expect to know everything – but if we don’t keep up with the news, we won’t be in a position to fight. Information is power – which is a great deal of what surveillance is about. We need to be informed in order to fight back.

10     Make sure the fightback isn’t just for a day

This is the most important thing of all. Campaigns for one day are pretty meaningless – the authorities will generally let them ride, possibly with a few little comments but almost no action. Political pronouncements and political action need long-term campaigning. Shifts in attitudes don’t happen in a day – so we need to keep this campaign going…. and expect it to be a long, attritional fight. It won’t be easy – but it’s worth it.

Communications Surveillance – a miscast debate

I have just made a submission to the Intelligence and Security Committee’s call for evidence on their Privacy and Security Inquiry. The substance of the submission is set out below – the key point is that I believe the debate, and indeed the questions asked by the Intelligence and Security Committee, are miscast in such a way as to significantly understate the impact of internet surveillance, and hence to make the case for that surveillance look stronger than it really is. I am sure there will be many other excellent submissions to the inquiry – this is my small contribution.

——————————

Submission to the Intelligence and Security Committee by Dr Paul Bernal

I am making this submission in response to the Privacy and Security Call for Evidence made by the Intelligence and Security Committee on 11th December 2013, in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research in internet law and specialise in internet privacy, from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy. I have a book dealing with the subject, Internet Privacy Rights, which will be published by Cambridge University Press in March 2014. The subject of internet privacy, therefore, lies precisely within my academic field. I would be happy to provide more detailed evidence, either written or oral, if that would be of assistance to the committee.

Executive summary

There are a great many issues that are brought up by the subject of communications surveillance. This submission does not intend to deal with all of them. It focuses primarily on three key issues:

  1. The debate – and indeed the initial question asked by the ISC – which talks of a balance between ‘individual privacy’ and ‘collective security’ is a miscast one. Communications surveillance impacts upon much more than privacy. It has an impact on all the classical ‘civil liberties’: freedom of expression, freedom of assembly and association and so forth. Privacy is not a merely ‘individual’ issue. It, and the connected rights, are community rights, collective rights, and to undermine them does more than undermine individuals: it hits at the very nature of a free, democratic society.
  2. The invasion of privacy – and the impact on the other rights mentioned above – occurs at the point when data is gathered, not when it is accessed. The mass surveillance approach that appears to have been adopted – ‘gather everything, then put controls on at the access stage’ – is misconceived. The very gathering of the data has an impact on privacy, leaves the data open to misuse and vulnerable to hacking, loss or misappropriation, and has a direct chilling effect.
  3. In terms of mass surveillance, meta-data can in practice be more useful – and have more of an impact on individual rights and freedoms – than content data. It can reveal an enormous amount of information about the individuals involved, and because of its nature it is more easily and automatically analysed and manipulated.
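The third point – that metadata is more easily and automatically analysed than content – can be illustrated with a toy example. The following sketch uses entirely invented call records; note that no ‘content’ is ever read, yet the pattern of who contacts whom, and when, speaks for itself:

```python
# Illustrative sketch (all data invented): trivial call metadata --
# who called whom, when -- profiled automatically, with no access
# to the content of any call.
from collections import Counter
from datetime import datetime

call_records = [  # (caller, callee, timestamp) -- metadata only
    ("alice", "samaritans_helpline", "2014-01-03T02:14"),
    ("alice", "samaritans_helpline", "2014-01-10T01:47"),
    ("alice", "gp_surgery",          "2014-01-11T09:05"),
    ("bob",   "alice",               "2014-01-12T19:30"),
]

def profile(records, person):
    """Who does this person contact, how often, and at what hours --
    derived from metadata alone."""
    contacts = Counter(callee for caller, callee, _ in records
                       if caller == person)
    night_calls = sum(1 for caller, _, ts in records
                      if caller == person
                      and datetime.fromisoformat(ts).hour < 6)
    return contacts, night_calls

contacts, night_calls = profile(call_records, "alice")
# Repeated late-night calls to a helpline reveal something deeply
# sensitive -- without a single word of conversation being recorded.
```

This is, of course, a hypothetical miniature; the point is that the same joins and counts scale mechanically to millions of records in a way that reading content never could.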

The implications of these three issues are significant: the current debate, as presented to the public and to politicians, is misleading and incomplete. That in turn means that experts remain sceptical about the motivations of those involved in the debate in favour of surveillance – and that it is very hard for there to be real trust between the intelligence services and the public.

It also means that the bar should be placed much higher in terms of evidence that this kind of surveillance is successful in achieving the aims of the intelligence services. Those aims need to be made clear, and the successfulness of the surveillance demonstrated, if the surveillance is to be appropriate in a democratic society. Given the impact in terms of a wide spectrum of human rights – not just individual rights to privacy – the onus is on the security services to demonstrate that success, or move away from mass surveillance as a tactic.

1      A new kind of surveillance

The kind of surveillance currently undertaken – and envisaged in legislation such as the Communications Data Bill in 2012 – is qualitatively different from anything hitherto imagined. It is not like ‘old-fashioned’ wiretapping or even email interception. What makes it new is the way that we use the internet – and in particular the fact that the internet is, for most people in what might loosely be described as developed societies, used for almost every aspect of our lives. Observation of our internet activities therefore allows a level of scrutiny of our private lives vastly greater than any form of surveillance in the past could have achieved.

In particular, the growth of social networking sites and the development of profiling and behavioural tracking systems and their equivalents have changed the scope of the information available. In parallel with this, technological developments have changed the nature of the data that can be obtained by surveillance. Most directly, the increased use of mobile phones, and in particular smartphones, provides new dimensions of data such as geo-location data, and allows further levels of aggregation and analysis. Other technologies such as facial recognition, in combination with the vast growth in the use of digital, online photography – ‘selfie’ was the Oxford Dictionaries Word of the Year for 2013 – take this to a higher level.

This combination of factors means that the ‘new’ surveillance is both qualitatively and quantitatively different from what might be labelled ‘traditional’ surveillance or interception of communications. This means that the old debates, the old balances, need to be recast. Where traditional ‘communications’ surveillance was in some ways a subset of traditional privacy rights – as reflected, for example, in its place within Article 8 of the ECHR – the new form of communications has a much broader relevance, a wider scope, and brings into play a much broader array of human rights.

2      Individual right to privacy vs. collective right to security?

2.1      Privacy is not just an individual right

Privacy is often misconstrued as a purely individual right – indeed, it is sometimes characterised as an ‘anti-community’ right, a right to hide yourself away from society. Society, in this view, would be better if none of us had any privacy – a ‘transparent society’. In practice, nothing could be further from the truth: privacy is something that has collective benefit, supporting coherent societies. Privacy isn’t so much about ‘hiding’ things as being able to have some sort of control over your life. The more control people have, the more freely and positively they are likely to behave. Most of us realise this when we consider our own lives. We talk more freely with our friends and relations knowing (or assuming) that what we talk about won’t be plastered all over noticeboards, told to all our colleagues, to the police and so forth. Privacy has a crucial social function – it’s not about individuals vs. society. The opposite: societies cannot function without citizens having a reasonable expectation of privacy.

2.2      Surveillance doesn’t just impact upon privacy

The idea that surveillance impacts only upon privacy is equally misconceived. Surveillance impacts upon many different aspects of our lives – and how we function in this ‘democratic’ society. In human rights terms, it impacts upon a wide range of those rights that we consider crucial: in particular, it impacts upon freedom of expression, freedom of association and freedom of assembly, and others.

2.2.1      Freedom of expression

The issue of freedom of expression is particularly pertinent. Privacy is often misconstrued as somehow an ‘enemy’ of freedom of expression – blogger Paul Staines (a.k.a. Guido Fawkes) for example, suggested that ‘privacy is a euphemism for censorship’. He had a point in one particularly narrow context – the way that privacy law has been used by certain celebrities and politicians to attempt to prevent certain stories from being published – but it misses the much wider meaning and importance of privacy.

Without privacy, speech can be chilled. The Nightjack saga, of which the committee may be aware, is one case in point. The Nightjack blogger was a police insider, providing an excellent insight into the real lives of police officers. His blog won the Orwell Prize in 2009 – but as a result of email hacking by a journalist working for the Times he was unable to keep his name private, and ultimately he was forced to close his blog. His freedom of expression was stifled – because his privacy was not protected. In Mexico, at least four bloggers writing about the drugs cartels have not just been prevented from blogging – they have been sought out, located, and brutally murdered. There are many others for whom privacy is crucial – from dissenters in oppressive regimes to whistle-blowers to victims of spousal abuse. The internet has given them hitherto unparalleled opportunities to have their voices heard – internet surveillance can take that away. Even the possibility of being located or identified can be enough to silence them.

Internet surveillance not only impacts upon the ability to speak; it impacts upon the ability to receive information – the crucial second part of freedom of speech, as set out in both the European Convention on Human Rights and the Universal Declaration of Human Rights. If people know that the websites they visit will be tracked and observed, they are much more likely to avoid seeking out information that the authorities or others might deem ‘inappropriate’ or ‘untrustworthy’. That, potentially, is a huge chilling effect. The UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, in his report of 2013, made it clear that the link between privacy and freedom of expression is direct and crucial:

“States cannot ensure that individuals are able to freely seek and receive information or express themselves without respecting, protecting and promoting their right to privacy. Privacy and freedom of expression are interlinked and mutually dependent; and infringement upon one can be both the cause and consequence of an infringement upon the other.”

2.2.2      Freedom of association and of assembly

Freedom of association and assembly is equally at risk from surveillance. The internet offers unparalleled opportunities for groups to gather and work together – not just working online, but organising and coordinating assembly and association offline. The role the net played in the Arab Spring has probably been exaggerated – but it did play a part, and it continues to be crucial for many activists, protestors and so forth. The authorities realise this – and also that, through surveillance, they can counter it. A headline from a few months ago in the UK, “Whitehall chiefs scan Twitter to head off badger protests”, should have rung alarm bells – is ‘heading off’ a protest an appropriate use of surveillance? It is certainly a practical one – and with the addition of things like geo-location data, the opportunity for surveillance to block association and assembly, both offline and online, is one that needs serious consideration. The authorities in Ukraine recently demonstrated this, using surveillance of mobile phone geolocation data to identify people who might be protesting – and then sending threatening text messages warning those in the location that they were now on a list: a clear attempt to chill their protests. Once more, this is very much not about individual privacy – it is about collective and community rights.
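The kind of location-based identification described above is, in computational terms, trivial – which is part of what makes it so dangerous to assembly and association. A hypothetical sketch, with invented device IDs and example coordinates:

```python
# Hypothetical sketch (all data invented) of the kind of filtering
# described above: given location pings (device, lat, lon, timestamp),
# list every device present near a protest site during a time window.

PROTEST = (50.4501, 30.5234)  # example coordinates, central Kyiv
WINDOW = ("2014-01-21T20:00", "2014-01-21T23:00")

pings = [
    ("device-A", 50.4505, 30.5240, "2014-01-21T21:05"),  # at the protest
    ("device-B", 50.4010, 30.4000, "2014-01-21T21:10"),  # elsewhere
    ("device-C", 50.4498, 30.5229, "2014-01-21T22:45"),  # at the protest
]

def devices_at_protest(pings, centre, window, radius_deg=0.005):
    """Every device seen within ~radius of the centre during the window.
    ISO-format timestamps compare correctly as plain strings."""
    lat0, lon0 = centre
    start, end = window
    return sorted({dev for dev, lat, lon, ts in pings
                   if start <= ts <= end
                   and abs(lat - lat0) <= radius_deg
                   and abs(lon - lon0) <= radius_deg})

print(devices_at_protest(pings, PROTEST, WINDOW))  # devices to be 'warned'
```

A few lines of code, applied to data the networks already collect as a matter of course – which is exactly why controls at the access stage alone are not enough.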

3      Controls are required at the gathering stage

The essential approach of internet surveillance, as currently practised and as set out in the Communications Data Bill in 2012, is to gather all data first, then to place 'controls' over access to that data. That approach is fundamentally flawed – and appears to be based upon false assumptions.

3.1      Data vulnerability

Most importantly, it is a fallacy to assume that data can ever be truly securely held. There are many ways in which data can be vulnerable, both in theory and in practice. Technological weaknesses – vulnerability to hacking and the like – may be the most 'newsworthy' at a time when hacker groups like 'Anonymous' have been gathering publicity, but they are far from the most significant. Human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to 'monetise' data) may matter more – and the ways in which all these vulnerabilities can combine makes the risk greater still.

In practice, those groups, companies and individuals that might be most expected to be able to look after personal data have suffered significant data losses. The HMRC loss of child benefit data discs, the MOD losses of armed forces personnel and pension data on laptops, and the numerous and seemingly regular data losses in the NHS highlight problems within those parts of the public sector which hold the most sensitive personal data. Swiss banks' losses of account data to hacks and data theft demonstrate that even those with the highest reputation and need for secrecy – as well as the greatest financial resources – are vulnerable. The high-profile hacks of Apple, Facebook, Twitter, Sony and others show that even those with access to the highest level of technological expertise can have their security breached. These are just a few examples, and whilst in each case different issues lay behind the breach, the underlying lesson is the same: where data exists, it is vulnerable.

3.2      Function Creep

Perhaps even more important than the vulnerabilities discussed above is the risk of ‘function creep’ – that when a system is built for one purpose, that purpose will shift and grow, beyond the original intention of the designers and commissioners of the system. It is a familiar pattern, particularly in relation to legislation and technology intended to deal with serious crime, terrorism and so forth. CCTV cameras that are built to prevent crime are then used to deal with dog fouling or to check whether children live in the catchment area for a particular school. Legislation designed to counter terrorism has been used to deal with people such as anti-arms trade protestors – and even to stop train-spotters photographing trains.

In relation to internet surveillance this is a very significant risk: the ways in which it could be inappropriately used are vast and multi-faceted. What is built to deal with terrorism, child pornography and organised crime can creep towards less serious crimes, then anti-social behaviour, then the organisation of protests and so forth – there is evidence that this has already taken place. Further, there are many commercial lobbies that might push for access to this surveillance data – those attempting to combat breaches of copyright, for example, would like to monitor for suspected 'piracy'. In each individual case, the use might seem reasonable – but the function of the original surveillance, the justification for its initial imposition, and the balance between benefits and risks, can be lost. An invasion of privacy deemed proportionate for the prevention of terrorism might well be wholly disproportionate for the prevention of copyright infringement, for example.

There can be creep in terms of the types of data gathered. The split between 'metadata' and 'content' is already contentious, and as time and usage develop it is likely to become more so, with the category protected as 'content' likely to shrink. There can be creep in terms of the uses to which the data can be put: from the prevention of terrorism downwards. There can be creep in terms of the authorities able to access and use the data: from those engaged in the prevention of the most serious crime to local authorities and others. All these dimensions represent important risks: all have happened in the recent past, both to legislation (e.g. RIPA) and to systems (e.g. the London Congestion Charge CCTV system).

Prevention of function creep is inherently difficult. As with data vulnerability, the only way to guard against it is not to gather the data in the first place. That means that controls need to be placed at the data gathering stage, not at the data access stage.

4      The role of metadata

Rather than being less important, or less intrusive, than 'content', the gathering of metadata in the new kinds of internet surveillance may well be more intrusive and more significant. Metadata is the primary form of data used in the profiling of people performed by commercial operators for functions such as behavioural advertising. It is easier to analyse and aggregate, easier to extract patterns from, and much richer in its implications than content. It is also harder to 'fake': content can be concealed by the use of code words and so forth – metadata, by its nature, is more likely to be 'true'.

In relation to trust, it is important that those engaged in surveillance acknowledge this – and that those who scrutinise the intelligence services understand it. It was notable in the open session of the Intelligence and Security Committee at the end of 2013 that none of those questioning the heads of MI5, MI6 and GCHQ challenged the statements to the effect that they were not reading our emails or listening to our phone calls. Those statements may be true, but they are beside the point: it is the gathering of metadata that matters more. It can reveal, automatically and without the need for expert human intervention, a great deal of detail. As Professor Ed Felten put it in his testimony to the Senate Judiciary Committee hearing on the Continued Oversight of the Foreign Intelligence Surveillance Act:

“Metadata can expose an extraordinary amount about our habits and activities. Calling patterns can reveal when we are awake and asleep; our religion, if a person regularly makes no calls on the Sabbath, or makes a large number of calls on Christmas Day; our work habits and our social attitudes; the number of friends we have; and even our civil and political affiliations.”

Professor Felten was talking about telephony metadata – metadata from internet browsing, emails, social network activity and so forth can be even more revealing.
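Felten's point can be made concrete with a few lines of code. The sketch below uses entirely invented records and names, assumed purely for illustration: given nothing but call timestamps and counterparty numbers – no content whatsoever – trivially simple analysis yields a waking-hours profile, a regular day of silence (a possible Sabbath), and a ranking of closest contacts.

```python
# Hypothetical illustration: what bare telephony metadata can reveal.
# The records are invented; only timestamps and counterparties are used.
from datetime import datetime
from collections import Counter

# Each record: (caller, callee, timestamp) - classic call metadata.
calls = [
    ("alice", "bob",   datetime(2014, 3, 3, 9, 15)),   # Monday morning
    ("alice", "carol", datetime(2014, 3, 4, 22, 40)),  # Tuesday night
    ("alice", "bob",   datetime(2014, 3, 5, 9, 5)),    # Wednesday
    ("alice", "dave",  datetime(2014, 3, 7, 9, 20)),   # Friday
    # No calls at all on Saturday 8 March...
    ("alice", "bob",   datetime(2014, 3, 9, 11, 0)),   # Sunday
]

def active_hours(records, person):
    """Hours of the day in which `person` calls - a crude sleep/wake profile."""
    return sorted({ts.hour for caller, _, ts in records if caller == person})

def silent_weekdays(records, person):
    """Days of the week (0=Monday) on which `person` never calls."""
    called = {ts.weekday() for caller, _, ts in records if caller == person}
    return sorted(set(range(7)) - called)

def top_contacts(records, person, n=3):
    """Most frequent counterparties - a sketch of the social graph."""
    return Counter(c for caller, c, _ in records if caller == person).most_common(n)

print(active_hours(calls, "alice"))     # waking-hours profile
print(silent_weekdays(calls, "alice"))  # Saturday (5) is silent
print(top_contacts(calls, "alice"))     # "bob" is the closest contact
```

With weeks or months of real records rather than five invented ones, the same handful of aggregations would support exactly the inferences Felten describes – and none of it requires reading a single email or listening to a single call.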

5      Conclusion

The subject of internet surveillance is of critical importance. Debate is crucial if public support for the programmes of the intelligence services is to be found – and that debate must be informed, appropriate and conducted on the right terms.

It isn't a question of individual privacy, a kind of luxury in today's dangerous world, being balanced against the deadly serious issue of security. If expressed in those misleading terms, it is easy to see which way the balance will tip. Privacy matters far more than that – and it matters not just to individuals but to society as a whole. It underpins many of our most fundamental and hard-won freedoms – the civil rights of which we, as members of liberal and democratic societies, have been most proud.

Similarly, the question of where the controls are built needs to be opened up for debate – at present the assumption seems to be that gathering itself is acceptable, with controls needed only at the access stage. As noted above, that opens up a wide range of risks – risks that should be acknowledged and assessed when judging the appropriateness of surveillance.

Finally, those involved in the debate should be more open and honest about the role of metadata: the bland reassurance that 'we are not reading your emails or listening to your phone calls' should always be qualified with the acknowledgment that this offers very little protection to privacy at all.

Dr Paul Bernal
Lecturer in Information Technology, Intellectual Property and Media Law
UEA Law School
University of East Anglia
Norwich
NR4 7TJ
Email: paul.bernal@uea.ac.uk