Contact tracing, privacy, magical thinking – and trust!

The saga of the UK’s contact tracing app has barely begun but already it is fraught with problems. Technical problems – the app barely works on iPhones, for example, and communication between iPhones requires someone with an Android phone to be in close proximity – are just the start of it. Legal problems are another issue – the app looks likely to stretch data protection law at the very least. Then there are practical problems – will the app record you as having had contact with people from whom you are separated by a wall, for example – and the huge issue of getting enough people to download it when many don’t have smartphones, many won’t be savvy enough to get it going, and many more, it seems likely, won’t trust the app enough to use it.

That’s not even to go into the bigger problems with the app. First of all, it seems unlikely to do what people want it to do – though even what is wanted is unclear, a problem I will come back to. Secondly, it rides roughshod over privacy in not just a legal but a practical way – and, despite what many might suggest, people do care about privacy enough to make decisions on its basis.

This piece is not about the technical details of the app – there are people far more technologically adept than me who have already written extensively and well about this – and nor is it about the legal details, which have also been covered extensively and well by some real experts (see the Hawktawk blog on data protection, and the opinion of Matthew Ryder QC, Edward Craven, Gayatri Sarathy & Ravi Naik for example) but rather about the underlying problems that have beset this project from the start: misunderstanding privacy, magical thinking, and failure to grasp the nature of trust.

These three issues together mean that right now, the project is likely to fail, do damage, and distract from genuine ways to help deal with the coronavirus crisis, and the best thing people should do is not download or use the app, so that the authorities are forced into a rethink and into a better way forward. It would be far from the first time during this crisis that the government has had to be nudged in a positive direction.

Misunderstanding Privacy – Part 1

Although people often underplay it – particularly in relation to other people – privacy is important to everyone. MPs, for example, will fiercely guard their own privacy whilst passing the most intrusive of surveillance laws. Journalists will fight to protect the privacy of their sources even whilst invading the privacy of the subjects of their investigations. Undercover police officers will resist even legal challenges to reveal their identities after investigations go wrong.

This is for one simple reason: privacy matters to people when things are important.

That is particularly relevant here, because the contact tracing app hits at three of the most important parts of our privacy: our health, our location, and our social interactions. Health and location data, as I detail in my most recent book, what do we know and what should we do about internet privacy, are two of the key areas of the current data world, in part because we care a lot about them and in part because they can be immensely valuable in both positive and negative ways. We care about them because they’re intensely personal and private – but that’s also why they can be valuable to those who wish to exploit or harm us. Health data, for example, can be used to discriminate – something the contact tracing app might well enable, as it could force people to self-isolate whilst others are free to move, or even act as an enabler for the ‘immunity passports’ that have been mooted but are fraught with even more problems than the contact tracing app.

Location data is another matter and something worthy of much more extensive discussion – but suffice it to say that there’s a reason we don’t like the idea of being watched and followed at all times, and that reason is real. If people know where you are or where you have been, they can learn a great deal about you – and know where you are not (if you’re not at home, you might be more vulnerable to burglars) as well as where you might be going. Authoritarian states can find dissidents. Abusive spouses can find their victims and so forth. More ‘benignly’, it can be used to advertise and sell local and relevant products – and in the aggregate can be used to ‘manage’ populations.

Relationship data – who you know, how well you know them, what you do with them and so forth – is in online terms one of the things that makes Facebook so successful and at the same time so intrusive. What a contact tracing system can do is translate that into the offline world. Indeed, that’s the essence of it: to gather data about who you come into contact with, or at least in proximity to, by getting your phone to communicate with all the phones close to you in the real world.

This is something we do and should care about, and could and should be protective over. Whilst it makes sense in relation to protecting against the spread of an infection, the potential for misuse of this kind of data is perhaps even greater than that of health and location data. Authoritarian states know this – it’s been standard practice for spies for centuries. The Stasi’s files were full of details of who had met whom and when, and for how long – this is precisely the kind of data that a contact tracing system has the potential to gather. This is also why we should be hugely wary of establishing systems that enable it to be done easily, remotely and at scale. This isn’t just privacy as some kind of luxury – this is real concern about things that are done in the real world and have been for many, many years, just not with the speed, efficiency and cheapness of installing an app on people’s phones.

Some of this people ‘instinctively’ know – they feel that the intrusions on their privacy are ‘creepy’ – and hence resist. Businesses and government often underestimate how much people care and how much they resist – and how able they are to resist. In my work I have seen this again and again. Perhaps the most relevant example here was the dramatic nine-day failure that was the Samaritans Radar app, which scanned people’s tweets to detect whether they might be feeling vulnerable or even suicidal, but didn’t understand that even this scanning would be seen as intrusive by the very people it was supposed to protect. They rebelled, and the app was abandoned almost immediately after it had started. The NHS’s own ‘care.data’ scheme, far bigger and grander, collapsed for similar reasons – it wanted to suck up data from GP practices into a great big central database, but didn’t get either the legal or the practical consent from enough people to make it work. Resistance was not futile – it was effective.

This resistance seems likely in relation to the contact tracing app too – not least because the resistance grows spectacularly when there is little trust in the people behind a project. And, as we shall see, the government has done almost everything in its power to make people distrust their project.

Magical thinking

The second part of the problem is what can loosely be called ‘magical thinking’. This is another thing that is all too common in what might loosely be called the ‘digital age’. Broadly speaking, it means treating technology as magical, and thinking that you can solve complex, nuanced and multifaceted problems with a wave of a technological wand. It is this kind of magic that Brexiters believed would ‘solve’ the Irish border problems (it won’t) and led anti-porn campaigners to think that ‘age verification’ systems online would stop kids (and often adults) from accessing porn (it won’t).

If you watched Matt Hancock launch the app at the daily Downing Street press conference, you could see how this works. He enthused about the app like a child with a new toy – and suggested that it was the key to solving all the problems. Even with the best will in the world, a contact tracing app could only be a very small part of a much bigger operation, and only make a small contribution to solving whatever problems they want it to solve (more of which later). Magical thinking, however, makes it the key, the silver bullet, the magic spell that needs only to be spoken to transform Cinderella into a beautiful princess. It will never be that, and the more it is thought of in those terms the less chance it has of working in any way at all. Magical thinking means that the real work that needs to go on is relegated to the background or eliminated altogether, replaced only by the magic of tech.

Here, the app seems to be designed to replace the need for a proper and painstaking testing regime. As it stands, it is based on self-reporting of symptoms, rather than testing. A person self-reports, and then the system alerts anyone who it thinks has been in contact with that person that they might be at risk. Regardless of the technological safeguards, that leaves the system at the mercy of hypochondriacs who will report the slightest cough or headache, thus alerting anyone they’ve been close to, or malicious self-reporters who either just want to cause mischief (scare your friends for a laugh) or who actually want to cause damage – go into a shop run by a rival, then later self-report and get all the workers in the shop worried into self-isolation.
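The self-reporting weakness is easy to see even in a toy model. The sketch below is purely illustrative – the names and structure are hypothetical, not the NHS app’s actual design – but it shows the essential point: nothing in a self-report pipeline verifies the report, so a prank triggers exactly the same fan-out of alerts as a genuine infection.

```python
from collections import defaultdict

# Hypothetical toy model of a self-report pipeline - not the real app's design.
contacts = defaultdict(set)  # person -> people their phone has been near

def record_contact(a, b):
    """Both phones log the encounter."""
    contacts[a].add(b)
    contacts[b].add(a)

def self_report(person):
    """Return the alerts triggered by one unverified symptom report.

    Note what is missing: no test result, no verification step. A
    hypochondriac's cough and a malicious prank look identical here.
    """
    return [(other, f"possible exposure via contact with {person}")
            for other in contacts[person]]

# A malicious visitor spends a minute in a rival's shop...
record_contact("visitor", "shopkeeper")
record_contact("visitor", "assistant")

# ...then self-reports, and every worker there is told to worry.
alerts = self_report("visitor")
assert len(alerts) == 2
```

A system anchored in testing would insert a verification step before the fan-out; a system anchored in self-reporting, as described above, cannot.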

These are just a couple of the possibilities. There are more. Stoics, who have symptoms but don’t take it seriously and don’t report – or people afraid to report because it might get them into trouble with work or friends. Others who don’t even recognise the symptoms. Asymptomatic people who can go around freely infecting people and not get triggered on the system at all. The magical thinking that suggests the app can do everything doesn’t take human nature into account – let alone malicious actors. History shows that whenever a technological system is developed the people who wish to find and exploit flaws in it – or different ways to use it – are ready to take advantage.

Magical thinking also means not anticipating that anything will go wrong – whether it be the malicious actors already mentioned or some kind of technical flaw that has not been foreseen. It also means assuming that all these problems must be soluble by a little bit of techy cleverness, because the techies are so clever. Of course they are clever – but there are many problems that tech alone can’t solve.

The issue of trust

One of those is trust. Tech can’t make people trust you – indeed, many people are distinctly distrustful of technology. The NHS generates trust, and those behind the app may well be assuming that they can ride on the coattails of that trust – but that itself may be wishful thinking, because they have done almost none of the things that generate real trust. And the app depends hugely on trust, because without it people won’t download or use the app.

How can they generate that trust? The first point, and perhaps the hardest, is to be trustworthy. The NHS generates trust but politicians do the opposite. These particular politicians have been demonstrably and dramatically untrustworthy, noted for their lies – Boris Johnson having been sacked from more than one job for having lied. Further, their tech people have a particularly dishonourable record – Dominic Cummings is hardly seen as a paragon of virtue even by his own side, whilst the social media manipulative tactics of the leave campaign were remarkable for their effectiveness and their dishonesty.

In those circumstances, that means you have to work hard to generate trust. There are a few keys here. The first is to distance yourself from the least trustworthy people – the vote leave campaigners should not have been let near this with a barge pole, for example. The second is to follow systems and procedures in an exemplary way, building in checks and balances at all times, and being as transparent as possible.

Here, they’ve done the opposite. It has been almost impossible to find out what was going on until the programme was already in its pilot stage. Parliament – through its committee system – was not given oversight until the pilot was already under way, and the report of the Human Rights Committee was deeply critical. There appears to have been no Data Protection Impact Assessment done in advance of the pilot – which is almost certainly in breach of the GDPR.

Further, it is still not really clear what the purpose of the project is – and this is also something crucial for the generation of trust. We need to know precisely what the aims are – and how they will be measured, so that it is possible to ascertain whether it is a success or not. We need to know the duration, what happens on completion – to the project, to the data gathered and to the data derived from the data gathered. We need to know how the project will deal with the many, many problems that have already been discussed – and we needed to know that before the project went into its pilot stage.

Being presented with a ‘fait accompli’ and told to accept it is a way to reduce trust, not to gain it. All these processes need to take place whilst there is still a chance to change the project – and change it significantly – because all the signs are that a significant change will be needed. Currently it seems unlikely that the app will do anything very useful, and it will have significant and damaging side effects.

Misunderstanding Privacy – part 2

…which brings us back to privacy. One of the most common misunderstandings of privacy is the idea that it’s about hiding something away – hence the facetious and false ‘if you’ve got nothing to hide you’ve got nothing to fear’ argument that is made all the time. In practice, privacy is complex and nuanced and more about controlling – or at least influencing – what kind of information about you is made available to whom.

This last part is the key. Privacy is relational. You need privacy from someone or something else, and you need it in different ways. Privacy scholars are often asked ‘who do you worry about most, governments or corporations?’ Are you more worried about Facebook or GCHQ? It’s a bit of a false question – because you should be (and probably are) worried about them in different ways, just as you worry in different ways about privacy from your boss, your parents, your kids or your friends. You might tell your doctor the most intimate details about your health, but you probably wouldn’t tell your boss or a bloke you meet in the pub.

With the coronavirus contact tracing app, this is also the key. Who gets access to our data, who gets to know about our health, our location, our movements and our contacts? If we know this information is going to be kept properly confidential, we might be more willing to share it. Do we trust our doctors to keep it confidential? Probably. Would we trust the politicians to keep it confidential? Far less likely. How can we be sure who will get access to it?

Without getting into too much technical detail, this is where the key current argument over the app lies. When people talk about a centralised system, they mean that the data (or rather some of the data) is uploaded to a central server when you report symptoms. A decentralised system does not do that – the data is only communicated between phones, and doesn’t get stored in a central database. This is much more privacy-friendly, precisely because it does not build up a big central database for later use and analysis.
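To make the distinction concrete, here is a loose sketch of the decentralised idea, roughly in the spirit of the DP-3T and Apple–Google designs. All the names are hypothetical, and real systems add key derivation, rotation schedules and distance estimation – but the core is this: phones broadcast random ephemeral IDs, remember what they hear locally, and matching happens on the phone, so no central record of who met whom is ever built.

```python
import secrets

class Phone:
    """Toy decentralised contact tracing client (illustrative only)."""

    def __init__(self):
        self.broadcast_ids = []   # ephemeral IDs this phone has sent out
        self.heard_ids = set()    # IDs heard nearby, stored only on the phone

    def broadcast(self):
        # A fresh random ID: unlinkable to identity or to earlier IDs.
        eid = secrets.token_hex(16)
        self.broadcast_ids.append(eid)
        return eid

    def hear(self, eid):
        self.heard_ids.add(eid)

    def check_exposure(self, published_ids):
        # Matching happens locally: the server only ever learns the IDs
        # of people who chose to report, never the contact graph itself.
        return bool(self.heard_ids & set(published_ids))

# Alice and Bob pass in the street; their phones exchange ephemeral IDs.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())
alice.hear(bob.broadcast())

# Alice reports symptoms: only her own broadcast IDs are published.
published = alice.broadcast_ids

assert bob.check_exposure(published)          # Bob is warned
assert not Phone().check_exposure(published)  # a stranger learns nothing
```

A centralised design differs at the `check_exposure` step: the heard IDs would be uploaded and matched on the server – which is exactly what builds the central database of contacts.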

This is why privacy people much prefer the idea of a decentralised system – because, amongst other things, it keeps the data out of the hands of people that we cannot and should not trust. Out of the hands of the people we need privacy from.

The government does not seem to see this. They’re keen to stress how well the data is protected in ‘security’ terms – protected from hackers and so forth – without realising (or perhaps admitting) that the people we really want privacy from, the people who present the biggest risk to the users, are the government themselves. We don’t trust this government – and we should not really trust any government, but build in safeguards and protections from those governments, and remember that what we build now will be available not just to this government but to successors, which may be even worse, however difficult that might be to imagine.

Ways forward?

Where do we go from here? It seems likely that the government will try to push on regardless, and present whatever happens as a great success. That should be fought against, tooth and nail. They can and should be challenged and pushed on every point – legal, technical, practical, and trust-related. That way they may be willing to move to a more privacy-friendly solution. Such solutions do exist, and it’s not too late to change.

what do we know and what should we do about…? internet privacy

My new book, what do we know and what should we do about internet privacy, has just been published by Sage. It is part of a series of books covering a wide range of current topics – the first ones have been on immigration, inequality, the future of work and housing.

This is a very different kind of book from my first two books – Internet Privacy Rights, and The Internet, Warts and All, both of which are large, relatively serious academic books, published by Cambridge University Press, and sufficiently expensive and academic as to be purchasable only by other academics – or more likely university libraries. The new book is meant for a much more general audience – it is short, written intentionally accessibly, and for sale at less than £10. It’s not a law book – the series is primarily social science, and in many ways I would call the book more sociology than anything else. I was asked to write the book by the excellent Chris Grey – whose Brexit blogs have been vital reading over the last few years – and I was delighted to be asked, because making this subject in particular more accessible has been something I’ve been wanting to do for a long time. Internet privacy has been a subject for geeks and nerds for years – but as this new book tries to show, it’s something that matters more and more for everyone these days.


It may be a short book (well, it is a short book, well under 100 pages) but it covers a wide range. It starts by setting the context – a brief history of privacy, a brief history of the internet, and then showing how we got from what were optimistic, liberal and free beginnings to the current situation – all-pervading surveillance, government involvement at every level, domination by a few, huge corporations with their own interests at heart. It looks at the key developments along the way – the world-wide-web, search, social networks – and their privacy implications. It then focusses on the biggest ‘new’ issues: location data, health data, facial recognition and other biometrics, the internet of things, and political data and political manipulation. It sketches out how each of these matters significantly – but how the combination of them matters even more, and what it means in terms of our privacy, our autonomy and our future.

The final part of the book – the ‘what should we do about…’ section – is by its nature rather shorter. There is not as much that we can do as many of us would like – as the book outlines, we have reached a position from which it is very difficult to escape. We have built dependencies that are hard – but not impossible – to find alternatives to. The book outlines some of the key strategies – from doing our best to extricate ourselves from the disaster that is Facebook to persuading our governments not to follow the ultimately destructive paths they currently seem determined to pursue. Two policies get particular attention: ‘real names’ requirements, which though superficially attractive are ultimately destructive and authoritarian – they fail to deal with the issues they claim to address and put vulnerable people in more danger – and the current, fundamentally misguided attempts to undermine the effectiveness of encryption.

Can we change? I have to admit this is not a very optimistic book, despite the cheery pink colour of its cover, but it is not completely negative. The starting point, I hope, is raising awareness – which is what this book is intended to do.

The book can be purchased directly from Sage here, or via Amazon here, though if you buy it through Amazon, after you’ve read the book you might feel you should have bought it another way!

 

Paul Bernal

February 2020

The Investigatory Powers Act: still a question of trust…

I read the short review of the Investigatory Powers Act by David Anderson QC, the Independent Reviewer of Terrorism Legislation, with a great deal of interest. Anderson has been exemplary in his role, and has played a very significant part in ensuring that the Investigatory Powers Act has the safeguards that it does – and the chance to be something other than the ‘Snooper’s Charter’ it is often described as.

I find myself agreeing with a great deal of what he says – though coming to rather different conclusions. As one of those who followed the process of the Act from beginning to end – and who participated in a number of the reviews, including appearing before the Joint Bill Committee and being one of those consulted by David in his Bulk Powers Review – I agree with him entirely that the bill was one of the most carefully scrutinised in recent times. That, however, also reveals the weaknesses of our scrutiny system. Some of these weaknesses are unavoidable – it would be impossible to expect parliamentarians to understand many of the issues, or even to read all of the fairly massive reports that the various reviews produced. Others are not: parliamentarians should be able to see their own weaknesses, and be willing to listen a bit more carefully to those who do understand them. As a legal academic, for example, I try to recognise my own weaknesses in understanding the technology, and defer to those who do understand it.

Where I find myself disagreeing most with the Independent Reviewer is in the weight that he appears to give to the bad features and weaknesses of the Investigatory Powers Act. Many of the problems seem to hit at the heart of the Act, and undermine its claim to be something positive overall.

  1. Internet Connection Records, which he notes he had no opportunity to evaluate, were the one area noted as being entirely new in the bill – and in the view of many (including myself) are both unproven and represent a huge risk and a huge waste of resources. They should, in my view, have been included in David Anderson’s Bulk Powers Review – though not ‘Bulk Powers’ in the technical terms of the bill, they are in a real sense every bit as ‘bulky’ and ‘powerful’. They are likely (in my view) to be highly difficult to implement and highly unlikely to be effective – and they could have been excluded from the Act, or introduced and tested on a pilot basis, with scope for a proper review.
  2. I share David Anderson’s concern over the dual lock system – and agree with him that this could and should have been done better. As another key element of the bill – and considered to be one of the key safeguards – this really matters. If the dual lock ends up being little more than a rubber stamp, its existence may do more harm than good, providing false assurance and complacency. The test of this will be in the implementation – something that needs to be watched very carefully.
  3. I also share David Anderson’s note that it is “legitimate to ask whether there are adequate advance safeguards on the exercise of some of the very extensive powers now spelled out for the first time”. This, it seems to me, is very important indeed – and hits at the heart of the problems that many of us have with the bill. The powers are extensive, and it is not at all clear that the safeguards are adequate.
  4. Finally, as David Anderson notes, the failure to recognise in statute the idea of an ‘Investigatory Powers Commission’ could be significant. The question is why it was omitted: was it, as those suspicious of the authorities might suggest, because they don’t want to put proper, independent oversight on a statutory basis for fear of its restricting their actions?

That, I think, reflects my overall difference with David Anderson – the same question that he highlighted in his review of investigatory powers in 2015. A question of trust. The biggest weakness of the Investigatory Powers Act, for me, is that it still relies on a great deal of trust, without the authorities having yet, for me, proved themselves worthy of that trust. We have to trust that the dual lock system will work. We have to trust that an investigatory powers commission will be put in place and have appropriate powers – they’re not set down in statute. We have to trust that the Technology Advisory Panel will be filled with the right kind of people, and will be able to perform its functions. We have to trust that everything is ‘OK’ with Internet Connection Records.

We have to trust (as David Anderson also notes) that the government interprets the various grey areas and ambiguities in the Act appropriately – when we really shouldn’t have to rely on trust nearly as much as we do. Things like how to deal with encryption (whether the Act allows the government to mandate ‘back doors’, for example) and extraterritoriality (how the Act will be enforced on service providers outside the UK) remain subject to a great deal of doubt – and are potentially deeply dangerous.

Whether it is possible for me to agree with David Anderson that this is a ‘victory for democracy and the rule of law’ remains to be seen. Right now, I can’t give it a round of applause. I don’t condemn it completely – but there are sufficient problems at the heart of many of the most important parts of the Act to make it impossible to applaud. A chance missed, is the best I can say at this stage.

The real test is in the implementation. On that, I wholeheartedly agree with David Anderson that the new Investigatory Powers Commission (or whatever name is given to it) is the key. It will make or break the trust that people can have in the Act, and indeed in those engaged in surveillance. As he puts it:

“the new supervisory body needs to develop a culture of high-level technical understanding, intellectual enquiry, openness and challenge.”

If it does that, I will be delighted – and, with my cynical hat on, very surprised. I hope that I am.

Warning signs – and surveillance…

There are many things being said at the Conservative Party Conference that should be worrying people – from the idea that we should be sending foreign doctors home and ‘naming and shaming’ companies that have the temerity and lack of patriotism to dare to employ foreigners onwards. Military in schools just sends one extra shiver down the spine – these things, when looked at together, do not paint a pretty picture at all. The direction our government is headed is one that is ringing alarm bells for many. Even if you don’t believe the current government is ‘extreme’, the idea that it could become extreme should be taken very seriously indeed.

That, in turn, should raise even louder alarm bells at the current plans for surveillance. The powers that are being granted to the authorities under the Investigatory Powers Bill that is currently making its final steps through parliament are extremely potent and worrying even in the hands of a trustworthy, ‘moderate’ government – but in the hands of an extreme government they become something far, far worse. Tools such as Internet Connection Records, though very poorly suited to the purpose for which they are being put forward, are very good at the kind of profile-based politically-motivated population control that totalitarian regimes thrive upon. The same for many of the ‘bulk powers’ built into the Investigatory Powers Bill. It is bad enough – dangerous enough – to give these kinds of powers to a government that can be trusted, but by putting them into law and building the ‘necessary’ systems to implement them, we are giving them to subsequent governments, governments that may be far less trustworthy, and far more worrying. Governments like those that we have seen more than glimpses of at the Conservative Party conference over the last few days.

When the recent revelation that Yahoo! secretly scanned all of its customers’ incoming emails on behalf of the US intelligence agencies is added to the equation – with the added twist that Yahoo! had been subject to a massive hack – the picture gets still worse. As I point out in my new academic piece on surveillance, it is a mistake to think of commercial and governmental surveillance as separate and entirely different: they are intimately connected and inextricably linked. If we accept, unthinking, corporate surveillance as harmless, innovative and just about a bit more annoying advertising, we miss the bigger picture. By accepting that, we accept government use of the same techniques, government ‘forcing’ of corporations to work with and for them, and so on – and not just our current, relatively benign (!) governments but future, more extreme, more alarming, more dangerous governments. If Amber Rudd wants to know whether a company is employing too many foreigners, why not scan all that company’s emails, monitor all the web-browsing from that company’s computers and use profiling to work out which of the employees are probably ‘foreign’, then target them accordingly? Naming and shaming. Labelling. Deporting.

As Bruce Schneier put it:

“It’s bad civic hygiene to build an infrastructure that can be used to facilitate a police state.”

The combination of the level of corporate surveillance, the interaction between corporates and governments, and the disturbing political developments all over the world – from the Conservative Party conference to Donald Trump (and Hillary Clinton is no saint in surveillance terms!) to extremism in Hungary and Poland and more – is making his warning too important to ignore.

It is not too late to change direction – at least we had better hope not – and we should do everything we can to change it. In the UK, all the opposition parties should fight much harder to limit and amend the Investigatory Powers Bill, for example – as should those within the Conservative Party who have any sense of the traditions of liberty that they purport to hold as important. Whether they will is another matter. This Conservative Party conference should be a warning sign for all.

A better debate on surveillance?

Back in 2015, Andrew Parker, the head of MI5, called for a ‘mature debate’ on surveillance – in advance of the Investigatory Powers Bill, the surveillance law which has now almost finished making its way through parliament and will almost certainly become law in a few months’ time. Though there has been, at least in some ways, a better debate over this bill than over previous attempts to update the UK’s surveillance law, it still seems as though the debate in both politics and the media remains distinctly superficial and indeed often deeply misleading.

It is in this context that I have a new academic paper out: “Data gathering, surveillance and human rights: recasting the debate”, in a new journal, the Journal of Cyber Policy. It is an academic piece, and access, sadly, is relatively restricted, so I wanted to say a little about the piece here, in a blog which is freely accessible to all – at least in places where censorship of the internet has not yet taken full hold.

The essence of the argument in the paper is relatively straightforward. The debate over surveillance is simplified and miscast in a number of ways, and those ways in general tend to make surveillance seem more positive and effective than it is, and its impact on ordinary people less broad and significant than it might be. The rights that it impinges upon are underplayed, and the side effects of the surveillance are barely mentioned, making surveillance seem much more attractive than it should – and hence decisions are made that might not have been made if the debate had been better informed. If the debate is improved, then the decisions will be improved – and we might have both better law and better surveillance practices.

Perhaps the most important way in which the debate needs to be improved is to understand that surveillance does not just impact upon what is portrayed as a kind of selfish, individual privacy – privacy that it is implied does not matter for those who ‘have nothing to hide’ – but upon a wide range of what are generally described as ‘civil liberties’. It has a big impact on freedom of speech – an impact that has been empirically evidenced in the last year – and upon freedom of association and assembly, both online and in the ‘real’ world. One of the main reasons for this – a reason largely missed by those who advocate for more surveillance – is that we use the internet for so many more things than we ever used telephones and letters, or even email. We work, play, romance and research our health. We organise our social lives, find entertainment, shop, discuss politics, do our finances and much, much more. There is pretty much no element of our lives that does not have a very significant online element – and that means that surveillance touches all aspects of our lives, and any chilling effect doesn’t just chill speech or invade a selfish kind of privacy, but reaches into almost everything.

This, and much more, is discussed in my paper – which I hope will contribute to the debate, and indeed stimulate debate. Some of it is contentious – the role of commercial surveillance, and the interaction between it and state surveillance, for example – but that too is intentional. Contentious issues need to be discussed.

There is one particular point that often gets missed – the question of when surveillance occurs. Is it when data is gathered, when it is algorithmically analysed, or when human eyes finally look at it? In the end, this may be a semantic point – what technically counts as ‘surveillance’ is less important than what actually has an impact on people, which begins at the data gathering stage. In my conclusion, I bring out that point by quoting our new Prime Minister, from her time as Home Secretary and chief instigator of our current manifestation of surveillance law. This is how I put it in the paper:

“Statements such as Theresa May’s that ‘the UK does not engage in mass surveillance’ though semantically arguable, are in effect deeply unhelpful. A more accurate statement would be that:

‘the UK engages in bulk data gathering that interferes not only with privacy but with freedom of expression, association and assembly, the right to a free trial and the prohibition of discrimination, and which puts people at a wide variety of unacknowledged and unquantified risks.’”

It is only when we can have clearer debate, acknowledging the real risks, that we can come to appropriate conclusions. We are probably too late for that to happen in relation to the Investigatory Powers Bill, but given that the bill includes measures such as the contentious Internet Connection Records that seem likely to fail, in expensive and probably farcical ways, the debate will be returned to again and again. Next time, perhaps it might be a better debate.

More on Corbyn’s Digital Manifesto…

Yesterday a piece I wrote about Corbyn’s Digital Manifesto was published on The Conversation – you can find it here:

https://theconversation.com/corbyns-digital-meh-nifesto-is-too-rooted-in-the-past-to-offer-much-for-the-future-65003

The natural constraints of a short piece, and the requirements of The Conversation meant that I didn’t cover all the areas, and my own tendency to, well, be a bit strident in my opinions at times means that it may not have been quite as clear as it could have been. I would like to add a few things to what I said, clarify a few more, and open up the opportunity for anyone to comment on it.

The first thing to make absolutely clear is that though I was distinctly underwhelmed by the Digital Democracy Manifesto, it is far better than anything produced by Labour to date, and vastly better than anything I have seen by the Tories. My criticism of it was not in any way supporting what the Tories are currently doing, nor what they are likely to do. I used the word ‘meh’ in my piece because I wanted (and still want) Labour to be bolder, clearer, and more forward-looking precisely so that they can provide a better opposition to the Tories – and to the generally lamentable status quo on internet policy. As I tried (but perhaps failed) to make clear, I am delighted that Corbyn has taken this initiative, and hope it sparks more discussion. There are many of us who would be delighted to contribute to the discussion and indeed to the development of policy.

The second thing to make clear is that my piece was not an exhaustive analysis of the manifesto – indeed, it largely missed some really good parts. The support of Open Source, for example – which was criticised aggressively in The Sun – is to be thoroughly applauded. You can, as usual, trust The Sun to get things completely wrong.

I would of course like to say much more about privacy – sadly the manifesto (in some ways subconsciously) repeats the all-too-common idea that privacy is a purely personal, individual right, when it actually underpins the functioning of communities. I’ve written about this many times before – one piece is here, for example – but that is for another time. Labour, for me, should change its tack on privacy completely – but I know that I am somewhat unusual in that belief. I’ll continue to plug away on that particular issue, but not here and not now.

What I would hope is that the manifesto starts an open discussion – and starts to move us to a better understanding of these issues. If we don’t understand them better, we’ll continue to be driven down very unhelpful paths. Whether you’re one of Corbyn’s supporters or his bitterest opponents, that’s something to be avoided.

How not to reclaim the internet…

The new campaign to ‘Reclaim the Internet’, to ‘take a stand against online abuse’, was launched yesterday – and it could be a really important campaign. The scale and nature of abuse online is appalling – and it is good to see that the campaign does not focus on just one kind of abuse, instead talking about ‘misogyny, sexism, racism, homophobia, transphobia’ and more. There is more than anecdotal evidence of this abuse – even if the methodology and conclusions of the particular Demos survey used at the launch have been subject to significant criticism: Dr Claire Hardaker of Lancaster University’s forensic dissection is well worth a read – and it is really important not to suggest that this kind of abuse is anything less than hideous, or that it should not be taken seriously. It should – but great care needs to be taken, and the risks attached to many of the potential strategies to ‘reclaim the internet’ are very high indeed. Many of them would have precisely the wrong effect: silencing exactly those voices that the campaign wishes to have heard.

Surveillance and censorship

Perhaps the biggest risk is that the campaign is used to enable and endorse those twin tools of oppression and control, surveillance and censorship. The idea that we should monitor everything to try to find all those who commit abuse or engage in sexism, misogyny, racism, homophobia and transphobia may seem very attractive – find the trolls, root them out and punish them – but building a surveillance infrastructure and making it seem ‘OK’ is ultimately deeply counterproductive for almost every aspect of freedom. Evidence shows that surveillance chills free speech, discourages people from seeking out information, associating and assembling with people and more – as well as enabling discrimination and exacerbating power differences. Surveillance helps the powerful to oppress the weak – so should be avoided except in the worst of situations. Any ‘solutions’ to online abuse that are based around an increase in surveillance need a thorough rethink.

Censorship is the other side of the coin – but works with surveillance to let the powerful control the weak. Again, huge care is needed to make sure that attempts to ‘reclaim’ the internet don’t become tools to enforce orthodoxy and silence voices that don’t ‘fit’ the norm. Freedom of speech matters most precisely when that speech might offend and upset – it is easy to give those you like the freedom to say what they want, much harder to give those you disagree with that freedom. It’s a very difficult area – because if we want to reduce the impact of abuse, that must mean restricting abusers’ freedom of speech – but it must be navigated very carefully, taking care not to create tools that allow the easy silencing of those who merely disagree with people rather than those who abuse them.

Real names

One particularly important trap not to fall into is that of demanding ‘real names’: it is a common idea that the way to reduce abuse is to prevent people being anonymous online, or to ban the use of pseudonyms. Not only does this not work, but it, again, damages many of those whom the idea of ‘reclaiming the internet’ is intended to support. Victims of abuse in the ‘real’ world, people who are being stalked or victimised, whistleblowers and so forth need pseudonyms in order to protect themselves from their abusers, stalkers, enemies and so on. Force ‘real names’ on people, and you put those people at risk. Many will simply not engage – chilled by the demand for real names and the fear of being revealed. That’s even without engaging with the huge issue of the right to define your own name – and the joy of playing with identity, which for some people is one of the great pleasures of the internet, from parodies to fantasies. Real names are another way that the powerful can exert their power on the weak – it is no surprise that the Chinese government is one of the most ardent supporters of the idea of forcing real names on the internet. Any ‘solution’ to reclaiming the internet that demands or requires real names should be fiercely opposed.

Algorithms and errors

Another key mistake to be avoided is over-reliance on algorithmic analysis – particularly of the content of social media posts. This is one of the areas in which the Demos survey lets itself down – it makes assumptions about the ability of algorithms to understand language. As Dr Claire Hardaker puts it:

“Face an algorithm with messy features like sarcasm, threats, allusions, in-jokes, novel metaphors, clever wordplay, typographical errors, slang, mock impoliteness, and so on, and it will invariably make mistakes. Even supposedly cut-and-dried tasks such as tagging a word for its meaning can fox a computer. If I tell you that “this is light” whilst pointing to the sun you’re going to understand something very different than if I say “this is light” whilst picking up an empty bag. Programming that kind of distinction into a software is nightmarish.”

This kind of error is bad enough in a survey – but some of the possible routes to ‘reclaiming the internet’ include using this kind of analysis to identify offending social media comments, or even to automatically block or censor them. Indeed, much internet filtering works that way – one of the posts on this blog commenting on ‘porn blocking’ was itself blocked by a filter because it used words relating to pornography a number of times. Again, reliance on algorithmic ‘solutions’ to reclaiming the internet is very dangerous – and could end up stifling conversations, reducing freedom of speech and much more.
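To see how easily this kind of keyword-based filtering misfires, here is a minimal sketch – the word list and example sentences are invented, and real filters are more sophisticated, but the underlying failure mode is the same: the filter matches words, not meaning.

```python
# A minimal sketch of a naive keyword filter. The blocked-word list and
# the example texts are invented purely for illustration.

BLOCKED_WORDS = {"porn", "pornography", "explicit"}

def is_blocked(text: str) -> bool:
    """Flag text if it contains any blocked keyword, regardless of context."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return not BLOCKED_WORDS.isdisjoint(words)

# A genuinely explicit post is caught...
assert is_blocked("Click here for explicit porn")

# ...but so is a post *criticising* porn blocking: the filter cannot
# tell discussion of a topic from the topic itself.
assert is_blocked("Why pornography filters damage legitimate speech")

# And plenty of abuse that avoids the listed words sails through.
assert not is_blocked("You are worthless and everyone hates you")
```

The last case is the mirror image of the over-blocking problem: a list of ‘bad words’ both catches innocent discussion and misses abuse phrased without them.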

Who’s trolling who? Double-edged swords…

One of the other major problems with dealing with ‘trolls’ (the quotation marks are entirely intentional) is that in practice it can be very hard to identify them. Indeed, in conflicts on the internet it is common for both sides to believe that the other side is the one doing the abuse, the other side are the ‘trolls’, and they themselves are the victims who need protecting. Anyone who observes even the most one-sided of disputes should be able to see this – from GamerGate to some of the conflicts over transphobia. Not many of those whom others would consider ‘trolls’ would consider themselves to be trolls.

The tragic case of Brenda Leyland should give everyone pause for thought. She was described and ‘outed’ as a ‘McCann troll’ – she tweeted as @Sweepyface and campaigned, as she saw it, for justice for Madeleine McCann, blaming Madeleine’s parents for her death. Sky News reporter Martin Brunt doorstepped her, and days later she was found dead, having committed suicide. Was she a ‘troll’? Was the media response to her appropriate, proportionate, or positive? These are not easy questions – because this isn’t an easy subject.

Further, one of the best defences of a ‘troll’ is to accuse the person they’re trolling of being a troll – and that is something that should be remembered whatever tools you introduce to help reduce abuse online. Those tools are double-edged swords. Bring in quick and easy ways to report abuse – things like immediate blocking of social media accounts when those accounts are accused of being abusive – and you will find those tools being used by the trolls themselves against their victims. ‘Flame wars’ have existed pretty much since the beginning of the internet – any tools you create ‘against’ abuse will be used as weapons in flame wars in the future.
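A hypothetical sketch makes the double-edged-sword point concrete. The mechanism, names and threshold below are all invented for illustration – no real platform is this crude – but any automatic block-on-N-reports rule has the same structural weakness:

```python
# A hypothetical 'quick and easy' abuse tool: automatically block any
# account once it receives a set number of reports. Everything here
# (threshold, names) is invented for illustration.

from collections import Counter

REPORT_THRESHOLD = 5

class ReportSystem:
    def __init__(self):
        self.reports = Counter()   # reports received, per account
        self.blocked = set()       # accounts automatically blocked

    def report(self, reporter: str, target: str) -> None:
        """Record a report; block the target once the threshold is hit."""
        self.reports[target] += 1
        if self.reports[target] >= REPORT_THRESHOLD:
            self.blocked.add(target)

system = ReportSystem()

# A coordinated group of trolls files reports against their victim...
for troll in ["troll1", "troll2", "troll3", "troll4", "troll5"]:
    system.report(troll, "victim")

# ...and the tool silences the victim, not the abusers.
assert "victim" in system.blocked
```

The tool does exactly what it was designed to do – and that is precisely the problem: it cannot tell a genuine wave of reports from an orchestrated one.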

No quick fixes and no silver bullets

That should remind us of the biggest point here. There are no quick fixes to this kind of problem. No silver bullets that will slay the werewolves, or magic wands that will make everything OK. Technology often encourages the feeling that if only we created this one new tool, we could solve everything. In practice, it’s almost never the case – and in relation to online abuse this is particularly true.

Some people will suggest that it’s already easy. ‘All you have to do is block your abuser’ is all very well, but if you get 100 new abusive messages every minute you’ll spend your whole time blocking. Some will say that the solution is just not to feed the trolls – but many trolls don’t need any feeding at all. Others may suggest that people are just whining – none of this really hurts you, it’s just words – but that’s not true either. Words do hurt – and most of those suggesting this haven’t been subject to the kind of abuse that happens to others. What’s more, the chilling effect of abuse is real – if you get attacked every time you go online, why on earth would you want to stay online?

The problem is real, and needs careful thought and time to address. The traps involved in addressing it – and I’ve mentioned only a few of them here – are also real, and need to be avoided and considered very carefully. There really are no quick fixes – and it is really important not to raise false hopes that it can all be solved quickly and easily. That false hope may be the biggest trap of all.

Panama, privacy and power…

David Cameron’s first reaction to the questions about his family’s involvement with the Mossack Fonseca leaks was that it was a ‘private matter’ – something that was greeted with a chorus of disapproval from his political opponents and large sections of both the social and ‘traditional’ media. Privacy scholars and advocates, however, were somewhat muted – and quite rightly, because there are complex issues surrounding privacy here, issues that should at the very least make us pause and think. Privacy, in the view of many people, is a human right. It is included in one form or another in all the major human rights declarations and conventions. This, for example, is Article 8 of the European Convention on Human Rights:

“Everyone has the right to respect for his private and family life, his home and his correspondence.”

Everyone. Not just the people we like. Indeed, the test of your commitment to human rights is how you apply them to those who you don’t like, not how you apply them to those that you do. It is easy to grant rights to your friends and allies, harder to grant them to your enemies or those you dislike. We see how many of those who shout loudly about freedom of speech when their own speech is threatened are all too ready to try to shut out their enemies: censorship of extremist speech is considered part of the key response to terrorism in the UK, for example. Those of us on the left of politics, therefore, should be very wary of overriding our principles when the likes of David Cameron and George Osborne are concerned. Even Cameron and Osborne have the right to privacy, we should be very clear about that. We can highlight the hypocrisy of their attempts to implement mass surveillance through the Investigatory Powers Bill whilst claiming privacy for themselves, but we should not deny them privacy itself without a very good cause indeed.

Privacy for the powerful?

And yet that is not the whole story. Rights, and human rights in particular, are most important when used by the weak to protect themselves from the powerful. The powerful generally have other ways to protect themselves. Privacy in particular has at times been given a very bad name because it has been used by the powerful to shield themselves from scrutiny. A stream of philandering footballers – Ryan Giggs, Rio Ferdinand and John Terry among them – have tried to use privacy law to prevent their affairs becoming public. Prince Charles’ ultimately unsuccessful attempts to keep the ‘Black Spider Memos’ from being exposed were also made on the basis of privacy. The Catholic Church covered up the abuses of its priests. Powerful people using a law which their own kind largely forged is all too common, and should not be accepted without a fight. As feminist scholar Anita Allen put it:

“[it should be possible to] rip down the doors of ‘private’ citizens in ‘private’ homes and ‘private’ institutions as needed to protect the vital interests of vulnerable people.”

This argument may have its most obvious application in relation to domestic abuse, but it also has an application to the Panama leaks – particularly at a time when the politics of austerity is being used directly against the vital interests of vulnerable people. Part of the logic of austerity is that there isn’t enough money to pay for welfare and services – and part of the reason that we don’t have ‘enough’ money is that so much tax is being avoided or evaded. There is therefore a public interest in exposing the nature and scale of tax avoidance and evasion – a public interest that might override the privacy rights of the individuals involved.

How private is financial information?

That brings the next question: should financial or taxation information be treated as private, and accorded the strongest protection? Traditions and laws vary on this. In Norway, for example, income and tax information for every citizen is publicly available. This has been true since the 19th century – from the Norwegian perspective, financial and tax transparency is part of what makes a democratic society function.

It is easy to see how this might work – and indeed, an anecdote from my own past shows it very clearly. When I was working for one of the biggest chartered accountancy firms back in the 80s, I started to get suspicious about what had happened over a particular pay rise – so I started asking my friends and colleagues, all of whom had started with the firm at the same time and progressed up the ladder in the same way, how much they were earning. I discovered to my shock that every single woman was earning less than every single man. That is, the highest paid woman earned less than the lowest paid man – and I knew them well enough to know that this was in no way a reflection of their merits as workers. The fact that salaries were considered private, and that no-one was supposed to know (or ask) what anyone else was earning, meant that what appeared to me once I knew about it to be blatant sexism was kept completely secret. Transparency would have exposed it in a moment – and probably prevented it from happening.

In the UK, however, privacy over financial matters is part of our culture. That may well be a reflection of our conservatism – it functions in a ‘conservative’ way, tending to protect the power of the powerful – but it is also something that most people, I would suggest, believe is right. Indeed, as a privacy advocate I would in general support more privacy rather than less. It might be a step too far to suggest that all our finances should be made public – but not, perhaps, to suggest that the finances of those in public office should be. The people who, in this case, are supporting or driving policies should be required to show whether they are benefiting from those policies – and whether they are being hypocritical in putting those policies forward. We should be able to find out whether they personally benefit from tax cuts or changes, for example, and whether they’re contributing appropriately when they’re requiring others to tighten their belts.

I do not, of course, expect any of this to happen. In the UK in particular the powerful have far too strong a hold on our politics to let it happen. That then brings me to one more privacy-related issue exposed by the Panama papers. If there is no legal way for information that is to the public benefit to come out, what approach should be taken to the illegal ways that information is acquired? There have been many other prominent examples – Snowden’s revelations about the NSA, GCHQ and so on, and Hervé Falciani’s data from HSBC in Switzerland in particular – where in some very direct ways the public interest could be said to be served by the leaks. Are they whistleblowers or criminals? Spies? Should they be prosecuted or cheered? And then what about other hackers like the ‘Impact Team’ who hacked Ashley Madison? Whether each of them was doing ‘good’ is a matter of perspective.

Vulnerability of data…

One thing that should be clear, however, is that no-one should be complacent about data security and data vulnerability. All data, however it is held, wherever it is held, and whoever it is held by, is vulnerable. The degree of that vulnerability, and the likelihood of any vulnerability being exploited, vary a great deal – but the vulnerability is there. That has two direct implications for the state of the internet right now. Firstly, it means that we should encourage and support encryption – and not do anything to undermine it, even for law enforcement purposes. Secondly, it means that we should avoid holding data that we don’t need to hold – let alone create unnecessary data. The Investigatory Powers Bill breaks both of those principles. It undermines rather than supports encryption, and requires the creation of massive amounts of data (the Internet Connection Records) and the gathering and/or retention of even more (via the various bulk powers). All of this adds to our vulnerability and our risks – something we should think very, very hard about before doing. I’m not sure that thinking is happening.
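The second principle – don’t hold data you don’t need – can be shown in a small sketch. The scenario and field names below are invented for illustration; the point is simply that data which is never retained cannot later be breached.

```python
# An invented illustration of data minimisation: if the stated purpose is
# (say) capacity planning, aggregate counts serve that purpose without
# retaining per-person browsing records that could later leak.

from collections import Counter

# Hypothetical raw events, each tying a person to a site they visited.
raw_events = [
    {"user": "alice", "site": "news.example",   "time": "2016-04-05T10:00"},
    {"user": "bob",   "site": "news.example",   "time": "2016-04-05T10:01"},
    {"user": "alice", "site": "health.example", "time": "2016-04-05T10:02"},
]

# Keep only what the purpose needs: per-site totals, with no user identities.
site_counts = Counter(event["site"] for event in raw_events)

# Discard the identifying raw records immediately: what is not held
# cannot be stolen, leaked or demanded later.
raw_events.clear()

assert site_counts["news.example"] == 2
assert not raw_events
```

The Internet Connection Records proposal does the opposite: it mandates creating and retaining exactly the kind of identifying record that this sketch throws away.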


Internet Connection Records: answering the wrong question?

Watching and listening to the Commons debate over the Investigatory Powers Bill, and in particular when ‘Internet Connection Records’ were mentioned, it was hard not to feel that what was being discussed had very little connection with reality. There were many mentions of how bad and dangerous things were on the internet, how the world had changed, and how we needed this law – and in particular Internet Connection Records (ICRs) – to deal with the new challenges. As I watched, I found myself imagining a distinctly unfunny episode of Yes Minister which went something like this:


Scene 1:

Minister sitting in a leather armchair, glass of brandy in his hand, while an old civil servant sits opposite, glasses perched on the end of his nose.

Minister: This internet, it makes everything so hard. How can we find all these terrorists and paedophiles when they’re using all this high tech stuff?

Civil Servant: It was easier in the old days, when they just used telephones. All we needed was itemised phone bills. Then we could find out who they were talking to, tap the phones, and find out everything we needed. Those were the days.

Minister: Ah yes, those were the days.

The Civil Servant leans back in his chair and takes a sip from his drink. The Minister rubs his forehead looking thoughtful. Then his eyes clear.

Minister: I know. Why don’t we just make the internet people make us the equivalent of itemised phone bills, but for the internet?

Civil Servant blinks, not knowing quite what to say.

Minister: Simple, eh? Solves all our problems in one go. Those techie people can do it. After all, that’s their job.

Civil Servant: Minister….

Minister: No, don’t make it harder. You always make things difficult. Arrange a meeting.

Civil Servant: Yes, Minister


Scene 2

Minister sitting at the head of a large table, two youngish civil servants sitting before him, pads of paper in front of them and well-sharpened pencils in their hands.

Minister: Right, you two. We need a new law. We need to make internet companies make us the equivalent of itemised phone bills.

Civil servant 1: Minister?

Minister: You can call them ‘Internet Connection Records’. Add them to the new Investigatory Powers Bill. Make the internet companies create them and store them, and then give them to the police when they ask for them.

Civil servant 2: Are we sure the internet companies can do this, Minister?

Minister: Of course they can. That’s their business. Just draft the law. When the law is ready, we can talk to the internet companies. Get our technical people here to write it in the right sort of way.

The two civil servants look at each other for a moment, then nod.

Civil servant 1: Yes, minister.



Scene 3

A plain, modern office, somewhere in Whitehall. At the head of the table is one of the young civil servants. Around the table are an assortment of nerdish-looking people, not very sharply dressed. In front of each is a ring-bound file, thick, with a dark blue cover.

Civil servant: Thank you for coming. We’re here to discuss the new plan for Internet Connection Records. If you look at your files, Section 3, you will see what we need.

The tech people pick up their files and leaf through them. A few of them scratch their heads. Some blink. Some rub their eyes. Many look at each other.

Civil servant: Well, can you do it? Can you create these Internet Connection Records?

Tech person 1: I suppose so. It won’t be easy.

Tech person 2: It will be very expensive.

Tech person 3: I’m not sure how much it will tell you.

Civil servant: So you can do it? Excellent. Thank you for coming.



The real problem is a deep one – but it is mostly about asking the wrong question. Internet Connection Records seem to be an attempt to answer the question ‘how can we recreate that really useful thing, the itemised phone bill, for the internet age?’ And, from most accounts, it seems clear that the real experts, the people who work in the internet industry, weren’t really consulted until very late in the day, and then were only asked that question. It’s the wrong question. If you ask the wrong question, even if the answer is ‘right’, it’s still wrong. That’s why we have the mess that is the Internet Connection Record system: an intrusive, expensive, technically difficult and likely supremely ineffective idea.

The question that should have been asked is really the one that the Minister asked right at the start: how can we find all these terrorists and paedophiles when they’re using all this high tech stuff? It’s a question that should have been asked of the industry, of computer scientists, of academics, of civil society, of hackers and more. It should have been asked openly, consulted upon widely, and given the time and energy that it deserved. It is a very difficult question – I certainly don’t have an answer – but rather than try to shoe-horn an old idea into a new situation, it needs to be asked. The industry and computer scientists in particular need to be brought in as early as possible – not presented with an idea and told to implement it, no matter how bad an idea it is.

As it is, listening to the debate, I feel sure that we will have Internet Connection Records in the final bill, and in a form not that different from the mess currently proposed. They won’t work, will cost a fortune and bring about a new kind of vulnerability, but that won’t matter. In a few years – probably rather more than the six years currently proposed for the first real review of the law – it may finally be acknowledged that it was a bad idea, but even then it may well not be. It is very hard for people to admit that their ideas have failed.


As a really helpful tweeter (@sw1nn) pointed out, there’s a ‘techie’ term for this kind of issue: an XY problem – asking about your attempted solution rather than about the underlying problem. See http://xyproblem.info. ICRs seem to be a classic example.


Labour and the #IPBill

I am a legal academic, specialising in internet privacy – a lecturer at the UEA Law School. I am the author of Internet Privacy Rights: Rights to Protect Autonomy, published by Cambridge University Press in 2014, and was one of the academics who was a witness before the Joint Parliamentary Committee on the Investigatory Powers Bill. I am also a member of the Labour Party – this piece is written from all of those perspectives.


Labour and the Investigatory Powers Bill

The Investigatory Powers Bill has its second reading on Tuesday – part of what appears to be an attempt to pass the Bill with unseemly haste. One of the biggest questions is how Labour will approach the Bill – the messages so far have been mixed. Andy Burnham’s press release on the 1st of March in response to the latest draft was from my perspective the best thing that has emerged from Labour in relation to surveillance in many decades, if not ever.

What is important is that Labour builds on this – for in taking a strong and positive response to the Investigatory Powers Bill Labour has a chance to help shape its future in other areas. What is more, Labour can tap into some of its best and most important traditions and realise the promise of some of its best moments.

Demand more time

The first and most important thing that Labour should do at this stage is demand more time for scrutiny of the bill. There are some very significant issues that have not received sufficient time – the three parliamentary committees that have examined the bill so far (the Science and Technology Committee, the Intelligence and Security Committee and the specially convened Joint Parliamentary Committee on the Investigatory Powers Bill) all made that very clear. The Independent Reviewer of Terrorism Legislation, David Anderson QC, has also been persistent in his calls for more time and more careful scrutiny – most recently in his piece in the Telegraph where he said:

“A historic opportunity now exists for comprehensive reform of the law governing electronic surveillance. Those who manage parliamentary business must ensure that adequate time – particularly in committee – is allowed before December 2016.”

David Anderson is right on all counts – this is a historic opportunity, and adequate time is required for that review. How Labour responds could well be the key to ensuring that this time is provided: a strong response now, and in particular the willingness to reject the bill in its entirety unless sufficient time is given, would put the government in a position where it has to provide that time.

As well as pushing for more time, there are a number of things that Labour – and others – should be requiring in the new bill, many of which were highlighted by the three parliamentary committees but have not been put into the new draft bill.

Proper, independent oversight

The first of these is proper, independent oversight – oversight not just of how the powers introduced or regulated by the bill are being used in a procedural way (whether warrants are being appropriately processed and so forth) but of whether the powers are actually being used in the ways that parliament envisaged and the public were told. Reassurances made need to be not just verified but re-examined – and as time moves on, as technology develops, and as the ways that people use that technology develop, it needs to be possible to keep asking whether the powers remain appropriate.

The oversight body needs not just to be independent, but to have real powers. Powers to sanction, powers to notify, and even powers to suspend the functioning of elements of the bill should those elements be found to be no longer appropriate or to have been misused.

Independent oversight – as provided, for example, by the Independent Reviewer of Terrorism Legislation – is valuable not just in itself, but also in the way it can build trust. Building trust is critical in this area: a lot of trust has been lost, as can be seen in the rancorous nature of much of the debate. It would help everyone if that rancour were reduced.

Re-examine and rebalance ‘Bulk Powers’

One of the most contentious areas in the bill is that of ‘Bulk Powers’: bulk interception, bulk acquisition (of communications data), bulk equipment interference (which includes what is generally referred to as ‘hacking’) and bulk personal datasets. These powers remain deeply contentious – and potentially legally challengeable. There are specific issues with some of them – the issues with bulk equipment interference being sufficiently serious that the Intelligence and Security Committee recommended its removal from the bill.

It is these powers that lead to the accusation that the bill involves ‘mass surveillance’ – and it is not sufficient for the Home Secretary simply to deny this. Her denials appear to be based on a semantic argument about what constitutes ‘surveillance’ – an argument that potentially puts her at odds with both the European Court of Human Rights and the Court of Justice of the European Union. It also puts the UK increasingly at odds with opinion around the world. The UN’s Special Rapporteur on the right to privacy, Joseph A. Cannataci, said in his Report to the UN Human Rights Council on 8th March:

“It would appear that the serious and possibly unintended consequences of legitimising bulk interception and bulk hacking are not being fully appreciated by the UK Government.”

Much more care is needed here if the Investigatory Powers Bill is to withstand legal challenge without damaging both people’s privacy and the worldwide reputation of the UK. Again, proper and independent oversight would help here, as would stronger limits on the powers.

An independent feasibility study for ICRs

The Home Office have described ‘Internet Connection Records’ as the one genuinely new part of the Investigatory Powers Bill: it is also one of the most concerning. Critics have come from many directions. Privacy advocates note that ICRs are potentially the most intrusive measure of all, gathering what amounts to substantially all of our internet browsing history – and creating databases of highly vulnerable data, thereby reducing rather than enhancing security and creating unnecessary risks. Industry experts have suggested they would be technically complex, extortionately expensive and extremely unlikely to achieve the aims that have been suggested. All three parliamentary committees asked for more information and clarity – and yet that clarity has not been provided. The suggestion that ICRs are like an ‘itemised phone bill’ for the internet has been roundly criticised (notably by the Joint IP Bill Committee) and yet it appears to remain the essential concept and underpinning logic of the idea.

Given all this, to introduce the idea without proper testing and discussion with the industry seems premature and ill-conceived at best. If the idea cannot be rejected outright, it should at least be properly tested – and again, with independent oversight. Instead of including it within the bill, a feasibility study could be mounted – a year of working with industry to see whether the concept can be made to work without excessive cost, and whether it can produce results that are actually useful, can be properly secured, and so forth. If at the end of the feasibility study the evidence suggests the idea is workable, it can be added back into the bill. If not, alternative routes can be taken.

Reassess encryption

Perhaps the most contentious issue of all at present is the way in which the bill addresses encryption. All three parliamentary committees demanded clarity over the matter – particularly in relation to end-to-end encryption. That clarity is conspicuous by its absence in the bill. Whether the lack of clarity is intentional or not is somewhat beside the point: the industry in particular needs clarity. Specifically, the industry needs the government to make clear in the legislation that it will not ban end-to-end encryption, will not demand that ‘back doors’ be built into systems, and will not pressurise companies to build in such back doors or otherwise weaken their encryption.

The current position not only puts the government at odds with the industry, it puts it at odds with computer scientists around the world. The best of those scientists have made their position entirely clear – and yet still the government seems unwilling to accept what both scientists and industry are telling them. This needs to change – what is being suggested right now is dangerous to privacy and security and potentially puts the UK technology industry at a serious competitive disadvantage compared to the rest of the world.

Working with industry and science

Therein lies one of the most important keys: working with rather than against the IT industry and computer scientists. Plans such as those in the Investigatory Powers Bill should have been made with the industry and science from the very start – and the real experts should be listened to, not ridden roughshod over. Inconvenient answers need to be faced up to, not rejected. Old concepts should not be used as models for new situations when the experts tell you otherwise.

This is where one of Labour’s longest traditions should come into play. Harold Wilson’s famous Scarborough speech in 1963, where he talked about the ‘white heat’ of technology is perhaps even more apt now than it was all those years ago. Labour should be a modernising party – and that means embracing technology and science, listening to scientists and learning from them, using evidence-based policy and all that entails. Currently, the Investigatory Powers Bill is very much the reverse of that – but it still could become that, if appropriate changes are made.

Protecting ordinary people

Labour should also be tapping into another strong tradition – indeed, in many ways its founding tradition. Labour was born to support and protect working people – ‘ordinary’ people in the positive sense of that word. Surveillance, in practice, often does precisely the opposite – it can be used by the powerful against those with less power. It can be politically misused – and the history of surveillance of trade unionists and left-wing activists is one of which the Labour Party should be acutely aware. Without sufficient safeguards and limitations, any surveillance system can and will be misused, and often in precisely these kinds of ways.

Labour could and should remember this – and work very hard to ensure that those safeguards and limitations are built in. Some of the measures outlined above – proper oversight, rebalancing bulk powers, a feasibility study on ICRs in particular – are intended to do precisely that.

Not ‘soft’ but strong

Building in these safeguards, working with technology industries and scientists, and protecting rather than undermining encryption should not be seen as something ‘soft’ – and any suggestion that opposing the measures currently in the Bill is somehow being ‘soft’ on terrorists and paedophiles should not just be rejected but turned on its head. The current bill will not protect us in the ways suggested – indeed, it will make us less secure, more at risk from cybercriminals, create more openings for terrorists and others, and could be a massive waste of money, time and expertise. That money, time and expertise could be directed in ways that do provide more protection.

What is more, as noted above, the current bill would be much more vulnerable to legal challenge than it should be. That is not a sign of strength: very much the opposite.

Labour’s future direction

Most of these issues are relevant to all political parties – but for Labour the issue is particularly acute. Labour is currently trying to find a new direction – and the challenge presented by the Investigatory Powers Bill could help it find one. A positive approach could build on the old traditions outlined above, as well as the human rights tradition built in Blair’s early years: the Human Rights Act is one of New Labour’s finest achievements, despite the bad treatment it receives in the press. A party that forges alliances with the technology industry and with computer science, one that embraces the internet rather than seeing it as a scary and dangerous place to be corralled and controlled, is a party that has a real future. Labour wants to engage with young people – so be the party that supports WhatsApp rather than tries to ban it or break it. Be the party that understands encryption rather than fights against it.

All this could begin right now. I hope Labour is up to the challenge.