The Snoopers’ Charter: we need a new consultation

The Communications Data Bill – more commonly (and fairly accurately) known as the ‘Snoopers’ Charter’ – is due to re-emerge at any moment. We have been expecting it for some time – and yet have seen nothing official, and there has been no sign of a proper public consultation on the subject. That, to me, is wholly inadequate – so I have written to the Home Office, copying in my MP, Dr Julian Huppert. The contents of the letter are below. If you feel strongly about this matter – and I hope you do – you could add to the pressure for a proper public consultation by using the Open Rights Group’s system, which can be found at:

http://www.openrightsgroup.org/campaigns/snoopers-charter-consultation

Here’s what I wrote – it is fairly long, but still only scratches the surface of what is wrong with the overall approach to surveillance put forward in this bill:

————————————————————

Dear Home Office,

Re: Draft Communications Data Bill

I write to you as a legal academic, specialising in data privacy, and as a member of the public with significant concerns over the progress of the Communications Data Bill. In my opinion we need a consultation – and a public and open consultation – on the Bill for many reasons.

The media storm – and the eventual and significant criticism levelled at the bill by the Parliamentary Committee hastily convened to scrutinise it the first time around – should have made it clear to the proponents of the bill that there is a huge public interest in this area. That has a number of implications:

  1. That the criticisms levelled at the bill need to be taken seriously.
  2. That all the interested groups, including advocacy groups and academics – and indeed the public – need to be talked to up front, not after all the work has been done.
  3. That a ‘fait accompli’ is not an acceptable solution.
  4. That the level of ‘proof’ provided by those putting the bill forward needs to be much more convincing – and much more open – than what has been provided to date. It is simply not sufficient to say ‘it’s dangerous and something must be done’, or ‘we can’t tell you why, but we need to do this’.

Those of us interested in the Bill have been waiting for the consultation to begin – there have been leaks to the media at intervals suggesting that it would start soon, but so far nothing has been made official or public. That is both unfortunate and ultimately unacceptable. We need that consultation to begin soon, and in an open and public way.

A targeted rather than universal approach to surveillance

Though in my view the Parliamentary Committee did a very good job in scrutinising the bill and in reviewing the huge amount of evidence submitted to it, there are a number of areas that I do not believe were sufficiently considered.

These areas hit at the very essence of the approach adopted by the bill. The whole idea of a ‘gather everything for later scrutiny’ approach misses many of the fundamental risks attached to this kind of surveillance: the risks of data and system vulnerability, of function creep, and of system misuse. Much of the evidence submitted to the committee that scrutinised the Communications Data Bill examined these risks – but the committee did not, in my opinion, see quite how fundamentally they undermined the overall approach of the Bill. Nor did it look sufficiently into a genuinely alternative approach.

That alternative is to go for targeted rather than universal surveillance. This kind of approach can significantly reduce all of these risks. Putting both the warranting and the filtration systems before the gathering stage rather than the accessing stage would reduce the amount of data that is vulnerable, make the systems harder to misuse, and reduce the likelihood – or the impact – of function creep. It is closer to the concept at the heart of British justice: that people are innocent until proven guilty.

1      Risks of Data and System Vulnerability

It is a fundamental truth of computer data that wherever data exists, and however it is held, it can be vulnerable – to hacking, to accidental loss, to corruption, to misinterpretation, to inappropriate transfers and to many other things. By gathering all the communications data, this approach sets itself up for disaster – it is like painting metaphorical ‘hack me’ signs on the databases of information. If you build it, it will be hacked.

What is more, it is not only data that’s vulnerable but systems – if ‘black boxes’ are installed at ISPs, those black boxes will also have the metaphorical ‘hack me’ signs upon them. If you make a back door, the bad people as well as the good people can come in. It doesn’t matter how secure you think your system is: it can be broken into and hacked.

The government doesn’t have an inspiring record in terms of keeping data secure – from the Child Benefit data discs and the MOD laptops to the numerous NHS data breaches – but this is not really so much a reflection of government inadequacy as of an underlying truth about data and systems. Stories of hacks and data losses are in the news almost every day – and even those with the greatest technical ability and the greatest incentives to keep data secure have been victims, from Swiss banks to pretty much every major technology company. Facebook, Apple, Microsoft and Google have all fallen victim over recent months.

Ultimately, the only data that is not vulnerable is data that doesn’t exist at all. Furthermore, the only systems that can’t be hacked are systems that don’t exist. If targeted rather than universal surveillance is used, then the vulnerability is enormously reduced.

2      Risks of Function Creep

When data is gathered, or systems are built, for a specific purpose, that purpose can very easily end up being changed. This is a phenomenon particularly common in the field of anti-terror legislation and systems. Most people are aware of RIPA having been used for such things as dog fouling and fly-tipping, or of CCTV cameras ostensibly installed for crime prevention actually being used to check children’s addresses for school catchment areas – but these are not myths, nor are they particularly atypical examples. This has become something close to a ‘norm’ in this field.

There often appear to be good reasons for this function creep – not many people argued when the CCTV system for the London Congestion Charge started being used for crime prevention, for example – but it is a phenomenon that needs to be acknowledged. There isn’t really a legislative way to deal with it – caveats in laws can be sidestepped, laws can be amended in moments of ‘need’, and so forth. The only way to prevent it, as with data vulnerability, is not to build the systems or gather the data in the first place.

Again, this is a strong argument against universal data gathering – data gathered specifically, in a targeted and warranted way, presents less of a risk of function creep. Similarly, specifically designed and targeted systems are less susceptible to function creep than huge, universal surveillance systems.

3      Risks of System and Data Misuse

Another phenomenon familiar to those who study this field is that systems can be and are misused – whether it is databases being searched for information about people against whom the searcher has a grudge, or collusion between the authorities and the press. The Leveson Inquiry should have made it entirely clear that such risks are not mythical – and anyone who believes that the police, other authorities or the press have completely changed as a result of the exposure of phone and email hacking is being extremely naïve.

The systems and data envisaged in this plan are particularly susceptible to this kind of misuse. The description of this kind of system as a ‘Google for communications data’ is entirely apt, and anyone who uses Google regularly should understand how easily the process of searching morphs from one thing to another. Human beings will use these systems – and human beings have human weaknesses, and those weaknesses lead almost inevitably to this kind of misuse. With universal data gathering built into the system, the database would be ripe for very serious misuse indeed.

Again, there is only one real way to deal with the possibility of system and data misuse – minimise the size and scope of the system and the amount of data involved. That, again, suggests that we need targeted rather than universal surveillance.

The Normalisation of Surveillance – and the Panopticon Chill

These are just a few of the problems that a system like this could bring about. There are many more – and I have written before about how this kind of surveillance impacts not only on privacy but on a whole array of human rights. By suggesting that universal surveillance is something that we should consider normal and acceptable, we are ‘normalising’ surveillance itself – which has a whole set of implications.

Jeremy Bentham’s concept of the Panopticon, one with which I am sure you are familiar, is based on the idea that if people know that they’re being observed, they modify their behaviour. He envisaged it for a prison – to help control the behaviour of potentially violent and dangerous prisoners by putting them in a position where they knew that at any time they might be observed. In effect, this is what this kind of system does – it tells everyone that whatever they do on the internet can and will be observed.

What will that mean? Well, it creates a kind of ‘chilling effect’ – what I would call the ‘Panopticon Chill’. It means people will be less free with their speech – and will feel less free about where they go, whom they associate with, and what they do online. That impacts upon many of our recognised human rights: freedom of expression, freedom of association and freedom of assembly, to start with.

There are some who would welcome this kind of Panopticon Chill – some who think that we need to control the internet more. That, however, is to treat us as though we are prisoners in an enormous online jail. Bentham’s idea was not for ordinary people, but for potentially violent and dangerous prisoners. Is that how we all want to be treated? Is that the kind of society that we want to live in?

What kind of society do we want to live in?

That is the bottom line for the Communications Data Bill. Putting this kind of bill into place would be to set up precisely the kind of surveillance society that Orwell warned us of in 1984. Is that what we want to do?

There are other huge questions – not least the question of whether it will work at all. As the huge quantity of evidence submitted in the initial consultation revealed, few real experts believe that it will – anyone with expertise will be able to sidestep the system, leaving the rest of us to suffer the downsides of this kind of surveillance without the upsides even existing.

What is more, by promoting such a system in this country we would not only give permission for more oppressive regimes to do the same (and leaders from Putin in Russia to Assad in Syria will be watching us with eagle eyes) but would also kick-start the surveillance technology industry. Whichever companies win the contracts to supply the technology that enables the Bill will be looking to exploit that technology – who will they sell their systems to? What will be the next systems to which they apply their learning and experience? The £1.8 billion (and probably more) that the UK government spends on this will bring benefits to dictators and oppressors worldwide in the coming decades.

A new draft of the Communications Data Bill?

As I understand it, only people close to the Home Office have yet seen how the Communications Data Bill will look in its new draft. I have been told that I won’t be upset when I see it – but without more information it is hard for me to be optimistic. Unless there is a fundamental change – most importantly a shift from the universal to the targeted approach, and the application of warrants and filters at the gathering rather than the accessing stage – it is hard to imagine that it will be something I can welcome, or something that will inspire public trust.

A bill based on targeted rather than universal surveillance is possible. If it is brought about, I believe it could not only be more compatible with human rights, send better messages to the rest of the world and be more cost-effective – it could also be more effective, and less wasteful of the scarce resources of the police and intelligence services.

It does, however, require a big shift in attitudes. I hope that shift in attitudes is possible – and, at the very least, that we can have this debate in public, on reasonable and sensible terms, and without the feeling that we are being railroaded into a particular solution without any real options being presented.

A proper consultation

That, ultimately, is why we need a proper consultation. I have a perspective to present – and very particular views to put forward. I believe they are worthy of consideration – and I am very much open to discussing them. That discussion has not yet taken place. We need it to happen – and we have a right to have it happen. A proper consultation should happen now – at the drafting stage – not after everything has already been set in stone.

One real key here is that the public has not been properly informed in this debate. Initially, it appeared as though the government wanted to get this bill passed so quickly that there wouldn’t even be time for Parliamentary scrutiny – it was only when the issue caused outrage that the committee was set up, and even then the consultation period was very brief, and came at a time when many academics in particular were not in a position to submit evidence. We need more time – this is a crucial issue, and the public needs to have confidence that an appropriate decision is being made. The debate was also characterised at times by language that should have no place in a serious discussion, with opponents of the bill being accused of having blood on their hands.

This time around, we need proper consultation, with sufficient time and sufficient opportunity for all stakeholders to have their say. All options need to be considered, and in a way that both encourages and supports public participation, and develops a greater level of trust.

I am copying this letter to my MP, Dr Julian Huppert, who was on the Parliamentary Committee that scrutinised the bill, and to the Open Rights Group as well as making it public on my blog.

Kind regards

Dr Paul Bernal

Lecturer in Information Technology, Intellectual Property and Media Law
UEA Law School
University of East Anglia
Norwich NR4 7TJ
Email: paul.bernal@uea.ac.uk

If you build it, they will come…

The UK’s proposed new surveillance programme – the Communications Capabilities Development Programme – has many disturbing aspects, from the whole idea that ‘security’ justifies almost any infringement of privacy to the re-emergence of the fundamentally flawed ‘if you’ve got nothing to hide you’ve got nothing to fear’ argument. The response on the internet has been impressive – I’ve read great blogs and tweets and heard excellent arguments from many directions.

One of the key areas of focus has been the question of whether the police, intelligence services or other authorities will have to obtain a warrant to get access to the data gathered – but while that is a crucial issue, and will rightly get a lot of attention, in one key way it misses the point. It presupposes that it’s OK to gather the information and monitor our communications, so long as access to that information is subject to appropriate due process and the data is held securely.

Can data ever be genuinely securely held?

That last point gives a clue to the fundamental problem. Held securely. Can data ever be held really securely? Whether that is even theoretically possible is a moot point: experience shows that it is, on a practical level, never the case. Where data is held, it is always vulnerable. What is often forgotten is quite how many ways data can be (and is) vulnerable. People think about hacking – and this kind of database practically screams out ‘hack me’ – but other vulnerabilities are both more regular and potentially more dangerous. Human error. Human malice. Weaknesses in systems. Technical and technological errors. The use of insufficiently trustworthy subcontractors. Complacency. Changes of personnel. Disgruntled employees. Drives for cost-cutting. The possibilities are almost endless…

Even those who you would most expect to keep data secure have failed again and again. The HMRC child benefit disc loss in 2007 is notorious, but the MOD lost the entire database of current and past members of the armed forces – including addresses, bank details and more – simply by leaving a laptop in a car park. Swiss banks, who should be the most careful of all about their data, lost huge amounts of client data through the ‘work’ of a subcontractor doing systems work – data which was then sold to the German tax authorities to seek out tax evaders.

Risk from function creep

Perhaps even more dangerously, once the data exists, there is often an almost overwhelming imperative to find a use for it – making ‘function creep’ all but inevitable. Cameras set up to prevent serious crime end up being used to monitor dog fouling, or to check whether parents really live in the catchment areas for schools – and even ‘single purpose’ cameras like those monitoring the Congestion Charge in London will almost certainly soon be accessible to the police. When Swedish foreign minister Anna Lindh was murdered in 2003, a DNA database designed and set up purely for medical research was accessed in the hunt for her killer – without consent from those on the database. These are just some of the many examples of function creep – there are many more.

Risk from change of situation – or change of government

One thing I’ve seen when teaching about data security is that those who’ve experienced life under oppressive regimes are often the clearest about why allowing governments access to information is a serious risk. I remember one particular class I taught, where most of the students were British, and seemed generally OK with allowing full police access to information. One student, however, came from Kazakhstan, and after listening for a while he stood up and basically told everyone they were mad. He didn’t want the government to have any of this data – he had seen what happens when it does. I’ve heard the same from many people from other former communist countries, in Eastern Europe in particular.

We in the West have a tendency to be far too complacent about what our governments might do. We may trust our government now (though of course many of us don’t) but setting systems like this in place, building databases of information, is effectively providing them for all subsequent governments and authorities, whatever their complexion.

What’s more, when the situation changes, when emergencies become more acute, even a ‘good’ government ends up doing ‘bad’ things – and ‘popular opinion’ will often ‘support’ those kinds of bad things, as the Anna Lindh case illustrated quite disturbingly.

Risk from private/public ‘cooperation’

It would be highly surprising if the gathering and holding of data in this kind of situation were done purely by ‘public servants’. Whether the form will be some kind of private/public partnership, the use of subcontractors or freelancers, or a requirement that the ISPs and others do the actual data gathering, holding and analysing is far from clear, but the private sector will almost certainly be involved in one way or another. That brings in a whole new raft of potential vulnerabilities. Private sector companies are both naturally and, generally, appropriately driven by profit rather than security – and this can mean cutting costs to the bone, particularly if competitive tendering is involved. It might also mean conflicts of jurisdiction – if the ultimate owner of a company is in the US, for example, the PATRIOT Act could come into play. What happens if a private company goes into administration? What happens if the ownership changes? Each event introduces another vulnerability.

What does this all mean?

Ultimately, if we let the data be gathered and held, it is vulnerable. Those who want to ‘abuse’ it will come.

The only way for data not to be vulnerable is for it not to exist.

Though the idea of warrants/due process in terms of the use of the data is highly important, it would be better to put controls in place at the data gathering stage as well, or else we’re building a database that is just ripe for abuse.

We need to worry not just about the data use, but the gathering of data in the first place.

What that would mean is a very different approach to data collection: targeted rather than general data gathering. If you have to go through a process to justify gathering data, then you can only gather it in a targeted way. It also means that we should demand deletion of data after a set period, unless further procedures are gone through to justify continued holding: more due process.
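
To make the distinction concrete, here is a minimal sketch – in Python, with entirely hypothetical names and structures rather than any real or proposed system – of what controls at the gathering stage could look like: data about a person is never recorded unless a live warrant covers them, and anything held beyond its retention period is deleted unless the authorisation is renewed.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Dict, List

# Illustrative sketch only: hypothetical names and structures. The point
# is WHERE the control sits – the warrant is checked before anything is
# recorded, not merely before recorded data is accessed.

@dataclass
class Warrant:
    target: str        # identifier the warrant covers
    expires: datetime  # when the authorisation lapses

@dataclass
class TargetedCollector:
    warrants: Dict[str, Warrant] = field(default_factory=dict)
    store: List[dict] = field(default_factory=list)
    retention: timedelta = timedelta(days=30)  # assumed retention period

    def observe(self, target: str, record: dict) -> bool:
        """Record communications data only if a live warrant covers the target."""
        warrant = self.warrants.get(target)
        if warrant is None or warrant.expires < datetime.utcnow():
            return False  # no warrant: the data is never gathered at all
        self.store.append({"target": target, "record": record,
                           "gathered": datetime.utcnow()})
        return True

    def purge_expired(self) -> None:
        """Delete anything held longer than the retention period, absent renewal."""
        cutoff = datetime.utcnow() - self.retention
        self.store = [r for r in self.store if r["gathered"] >= cutoff]
```

Under this design there is simply no universal database to hack, misuse or repurpose: the vulnerable surface is limited to whatever a current warrant has specifically justified.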

The very whisper of the words ‘terrorist’ or ‘paedophile’ should not be enough to make us forget the basics not just of civil liberties but of technological logic. Any kind of solution that allows data to be gathered without a warrant, and on a ‘universal’ basis, even if it has good controls at the ‘data use’ stage, is fundamentally flawed, and should be avoided.

Every which way to lose your data…

The ACS:Law ‘data leak’ story that’s been emerging fairly dramatically over the last couple of days has got pretty much everything you could hope for in this kind of story: a bit of porn, a bit of piracy, some hacking, threats of huge fines, legal action and so on. It’s already been widely reported on – Andrew Murray’s blog on the subject gives an excellent description of what ACS:Law do, and how this whole thing has to a great extent blown up in ACS:Law’s face. As he explains, it’s a prime example of how symbiotic regulation works – and why the law is not the only thing that matters when regulating the internet.

There is, however, something else that is very graphically demonstrated by the whole saga – how many different ways your personal data can be at risk. This small story alone demonstrates at least five different ways that personal data can be vulnerable:

  1. To monitoring and tracking – the initial data about the supposed copyright infringers was obtained by monitoring traffic on the internet.
  2. To ‘legal’ attack – ACS initially got a court order to demand that the ISPs involved (we know about BT, Sky and PlusNet in this case) disclose the personal details of the account holders suspected of copyright infringement, based upon this monitoring.
  3. To human error – BT have admitted that they sent this personal data in an unencrypted Excel file attached to an ordinary email, in breach of their official policies and practices (see the sketch after this list for what encrypting such a file would involve).
  4. To hacking – at least, this is part of what ACS:Law have claimed – that their systems were hacked into so that the data could be obtained and leaked.
  5. To deliberate leaking – precisely who did the leaking, and who wanted the data leaked, is far from clear, but there is certainly a possibility that someone wanted the names to be out in the public domain.
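
On item 3: actually encrypting a file before attaching it is neither difficult nor expensive. Here is a minimal sketch, assuming Python’s cryptography package and entirely made-up filenames; key management – getting the key to the recipient over a separate, secure channel – is deliberately left out of scope.

```python
from cryptography.fernet import Fernet

# Illustrative only: the filenames are invented, and real deployments
# would need proper key management rather than an ad-hoc generated key.

key = Fernet.generate_key()  # must itself be stored and shared securely
cipher = Fernet(key)

with open("account_holders.xlsx", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("account_holders.xlsx.enc", "wb") as f:
    f.write(ciphertext)  # this, not the plaintext file, is what gets attached

# Only someone holding the key can recover the original:
# plaintext = Fernet(key).decrypt(ciphertext)
```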

Of course the data itself is far from reliable. It lists only the account holders whose accounts are suspected of having been used to share illegal content, without any direct evidence that those people themselves did the sharing – which brings even more dimensions of vulnerability into play: confusion, mistaken identity, even defamation by implication. If your name is on the list, you’re being labelled not only a lawbreaker but a consumer of porn – and it might very easily not have been you at all. Other people might have been using your account, perhaps without your knowledge, perhaps without your permission, perhaps without your understanding.

Simon Davies, of Privacy International, quoted by the BBC, said that ‘You rarely find an aspect where almost every aspect of the Data Protection Act (DPA) has been breached, but this is one of them’. It’s also true that almost every aspect of data vulnerability has been demonstrated in one fell swoop.

Perhaps an even more important point, however, is the way that personal data – and individuals’ privacy – is viewed almost as ‘collateral damage’ in the ongoing battle between the entertainment industry (and their hired guns like ACS:Law) and the ‘pirates’. From the outside it looks as though, as far as the 4chan hackers and ACS:Law are concerned, it’s that battle that matters. ACS:Law wants to ‘get’ the pirates, while the 4chan hackers want to ‘get’ ACS:Law and to ‘win’ the war with the entertainment industry for the ‘cause’ of free and unfettered file-sharing. The fact that some 13,000 individuals have had their personal data released into the public domain, and face all kinds of possible consequences from embarrassment (or humiliation) to legal action and beyond, seems somehow less important. Sadly, it often seems to be that way. Privacy is squeezed by politics, law, business and a whole lot more. Every which way, privacy loses.