The proposed new surveillance programme – the Communications Capabilities Development Programme – in the UK has many disturbing aspects – from the whole idea that ‘security’ justifies almost any infringement of privacy to the re-emergence of the fundamentally flawed ‘if you’ve got nothing to hide you’ve got nothing to fear’ argument. The response on the internet has been impressive – I’ve read great blogs and tweets and heard excellent arguments from many directions.
One of the key areas of focus has been the question of whether the police, intelligence services or other authorities will have to obtain a warrant to get access to the data gathered – but while that is a crucial issue, and will rightly get a lot of attention, in one key way it misses the point. It presupposes that it’s OK to gather the information and monitor our communications in the first place, so long as access to that information is subject to appropriate due process, and the information itself is held securely.
Can data ever be genuinely securely held?
That last point gives a clue to the fundamental problem. Held securely. Can data ever really be held securely? Whether that is even theoretically possible is a moot point: experience shows that on a practical level it never is. Where data is held, it is always vulnerable. What is often forgotten is quite how many ways data can be (and is) vulnerable. People think about hacking – and this kind of database practically screams out ‘hack me’ – but other vulnerabilities are both more common and potentially more dangerous. Human error. Human malice. Weaknesses in systems. Technical and technological errors. The use of insufficiently trustworthy subcontractors. Complacency. Changes of personnel. Disgruntled employees. Drives for cost-cutting. The possibilities are almost endless…
Even those you would most expect to keep data secure have failed again and again. The HMRC child benefit disc loss in 2007 is notorious, but the MOD lost the entire database of current and past members of the armed forces – including addresses, bank details etc – simply by leaving a laptop in a car park. Swiss banks, which should be among the most careful with their data, lost huge amounts through the ‘work’ of a subcontractor doing systems work – data which was then sold to the German tax authorities to seek out tax evaders.
Risk from function creep
Perhaps even more dangerously, once the data exists, there’s an often almost overwhelming imperative to find a use for it – making ‘function creep’ all but inevitable. Cameras set up to prevent serious crime end up being used to monitor dog fouling, or even to check whether parents really live in the catchment areas for schools – and even ‘single purpose’ cameras like those monitoring the Congestion Charge in London will almost certainly soon be accessible to the police. When Swedish foreign minister Anna Lindh was murdered in 2003, a DNA database designed and set up purely for medical research was accessed in the hunt for her killer – without consent from those on the database. These are just a few examples of function creep – there are many more.
Risks from change of situation – or change of government
One thing I’ve seen when teaching about data security is that those who’ve experienced life under oppressive regimes are often the clearest about why allowing governments access to information is a serious risk. I remember one particular class I taught, where most of the students were British, and seemed generally OK with allowing full police access to information. One student, however, came from Kazakhstan, and after listening for a while he stood up and basically told everyone they were mad. He wouldn’t want the government to have any of this data – he’d seen what happens when they do. I’ve heard the same from many people from other former communist countries, in Eastern Europe in particular.
We in the West have a tendency to be far too complacent about what our governments might do. We may trust our government now (though of course many of us don’t) but setting systems like this in place, building databases of information, is effectively providing them for all subsequent governments and authorities, whatever their complexion.
What’s more, when the situation changes, when emergencies become more acute, even a ‘good’ government ends up doing ‘bad’ things – and ‘popular opinion’ will often ‘support’ those kinds of bad things, as the Anna Lindh case illustrated quite disturbingly.
Risk from private/public ‘cooperation’
It would be highly surprising if the gathering and holding of data in this kind of scheme was done purely by ‘public servants’. Whether the form would be some kind of private/public partnership, the use of subcontractors or freelancers, or even a requirement that the ISPs etc do the actual data gathering, holding and analysing is far from clear, but the private sector will almost certainly be involved in one way or another. That brings in a whole new raft of potential vulnerabilities. Private sector companies are both naturally and generally appropriately driven by profit rather than security – and this can mean cutting costs to the bone, particularly if competitive tendering is involved. It might also mean conflicts of jurisdiction – if the ultimate owner of a company is in the US, for example, the PATRIOT Act could come into play. What happens if a private company goes into administration? What happens if the ownership changes? Each event introduces another vulnerability.
What does this all mean?
Ultimately, if we let the data be gathered and held, it is vulnerable. Those who want to ‘abuse’ it will come.
The only way for data not to be vulnerable is for it not to exist.
Though the idea of warrants and due process for the use of the data is highly important, it would be better to put controls in place at the data gathering stage as well – otherwise we’re building a database that is just ripe for abuse.
We need to worry not just about the data use, but the gathering of data in the first place.
What that would mean is a very different approach to data collection: targeted rather than general data gathering. If you have to go through a process to justify gathering data, then you can only gather it in a targeted way. It also means that we should demand deletion of data after a period unless further procedures are passed to justify that further holding: more due process needed.
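The retention side of this can be made concrete. As a purely illustrative sketch (the 90-day window, the `Record` fields and the `purge_expired` function are my own assumptions for the example, not anything proposed in the programme), a ‘delete unless further due process’ rule might look like this:

```python
from datetime import datetime, timedelta

# Hypothetical retention window: after this, data must go unless
# a further authorisation has explicitly extended its life.
RETENTION = timedelta(days=90)

class Record:
    def __init__(self, subject, collected_at, extension_until=None):
        self.subject = subject                  # who the data concerns
        self.collected_at = collected_at        # when it was gathered
        self.extension_until = extension_until  # deadline granted by further due process, if any

def purge_expired(records, now):
    """Keep only records still within the retention window, or
    covered by an explicitly authorised extension."""
    kept = []
    for r in records:
        within_retention = now - r.collected_at <= RETENTION
        extended = r.extension_until is not None and now <= r.extension_until
        if within_retention or extended:
            kept.append(r)
    return kept

now = datetime(2012, 4, 10)
records = [
    Record("a", now - timedelta(days=10)),                                    # fresh: kept
    Record("b", now - timedelta(days=200)),                                   # stale, no extension: purged
    Record("c", now - timedelta(days=200), extension_until=now + timedelta(days=30)),  # extended: kept
]
print([r.subject for r in purge_expired(records, now)])  # → ['a', 'c']
```

The point of the sketch is that deletion is the default: holding data longer requires a positive, recorded act of authorisation, not mere inertia.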
The very whisper of the words ‘terrorist’ or ‘paedophile’ should not be enough to make us forget the basics not just of civil liberties but of technological logic. Any kind of solution that allows data to be gathered without a warrant, and on a ‘universal’ basis, even if it has good controls at the ‘data use’ stage, is fundamentally flawed, and should be avoided.