Who needs privacy?

You might be forgiven for thinking that this government is very keen on privacy. After all, MPs all seem to enjoy the end-to-end encryption provided by the WhatsApp groups that they use to make their plots and plans, and they’ve been very keen to keep the details of their numerous parties during lockdown as private as possible – so successfully that it seems to have taken a year or more for information about evidently well-attended (work) events to become public. Some also seem enthused by the use of private email for work purposes, and to destroy evidence trails to keep other information private and thwart FOI requests – Sue Gray even provided some advice on the subject a few years back.

On the other hand, they also love surveillance – 2016’s Investigatory Powers Act gives immense powers to the authorities to watch pretty much our every move on the internet, and gather pretty much any form of data about us that’s held by pretty much anyone. They’ve also been very keen to force everyone to use ‘real names’ on social media – which, though it may not seem completely obvious, is a move designed primarily to cut privacy. And, for many years, they’ve been fighting against the expansion of the use of encryption. Indeed, a new wave of attacks on encryption is just beginning.

So what’s going on? In some ways, it’s very simple: they want privacy for themselves, and no privacy for anyone else. It fits the general pattern of ‘one rule for us, another for everyone else’, but it’s much more insidious than that. It’s not just a double-standard, it’s the reverse of what is appropriate – because it needs to be understood that privacy is ultimately about power.

People need privacy against those who have power over them – employees need privacy from their employers (something exemplified by whistleblowers’ need for privacy and anonymity), citizens need privacy from their governments, victims need privacy from their stalkers and bullies, and so on. Kids need privacy from their parents, their teachers and more. The weaker and more vulnerable people are, the more they need privacy – and the government’s approach is exactly the opposite. The powerful (themselves) get more privacy; the weaker (ordinary people, and in particular minority groups and children) get less or even no privacy. The people who should have more accountability – notably the government – get privacy to prevent that accountability, whilst the people who need more protection lose the protection that privacy can provide.

This is why moves to ban or limit the use of end-to-end encryption are so bad. Powerful people – and tech-savvy people, like the criminals used as the excuse for trying to restrict encryption – will always be able to get that encryption. You can do it yourself, if you know how. The rest of the people – the ‘ordinary’ users of things like Facebook Messenger – are the ones who need it, to protect themselves from criminals, stalkers, bullies and so on – and they are the very people that moves like this from the government would stop from getting it.

The push will be a strong one – trying to persuade us that in order to protect kids etc we need to be able to see everything they’re doing, so we need to (effectively) remove all their privacy. That’s just wrong. Making their communications ‘open’ to the authorities, to their parents etc also makes it open to their enemies – bullies, abusers, scammers etc, and indeed those parents or authority figures who are themselves dangerous to kids. We need to understand that this is wrong.

None of this is easy – and it’s very hard to give someone privacy when you don’t trust them. That’s another key here. We need to learn who to trust and how to trust them – and we need to do our best to teach our kids how to look after themselves. To a great extent they already know – kids understand privacy far more than people give them credit for – and we need to trust that too.

Contact tracing, privacy, magical thinking – and trust!

The saga of the UK’s contact tracing app has barely begun but already it is fraught with problems. Technical problems – the app barely works on iPhones, for example, and communication between iPhones requires someone with an Android phone to be in close proximity – are just the start of it. Legal problems are another issue – the app looks likely to stretch data protection law at the very least. Then there are practical problems – will the app record you as having contact with people from whom you are separated by a wall, for example – and the huge issue of getting enough people to download it when many don’t have smartphones, many won’t be savvy enough to get it going, and many more, it seems likely, won’t trust the app enough to use it.

That’s not even to go into the bigger problems with the app. First of all, it seems unlikely to do what people want it to do – though even what is wanted is unclear, a problem which I will get back to. Secondly, it rides roughshod over privacy in not just a legal but a practical way – and, despite what many might suggest, people care about privacy enough to make decisions on that basis.

This piece is not about the technical details of the app – there are people far more technologically adept than me who have already written extensively and well about this – and nor is it about the legal details, which have also been covered extensively and well by some real experts (see the Hawktawk blog on data protection, and the opinion of Matthew Ryder QC, Edward Craven, Gayatri Sarathy & Ravi Naik for example) but rather about the underlying problems that have beset this project from the start: misunderstanding privacy, magical thinking, and failure to grasp the nature of trust.

These three issues together mean that right now, the project is likely to fail, do damage, and distract from genuine ways to help deal with the coronavirus crisis, and the best thing people should do is not download or use the app, so that the authorities are forced into a rethink and into a better way forward. It would be far from the first time during this crisis that the government has had to be nudged in a positive direction.

Misunderstanding Privacy – Part 1

Although people often underplay it – particularly in relation to other people – privacy is important to everyone. MPs, for example, will fiercely guard their own privacy whilst passing the most intrusive of surveillance laws. Journalists will fight to protect the privacy of their sources even whilst invading the privacy of the subjects of their investigations. Undercover police officers will resist even legal challenges to reveal their identities after investigations go wrong.

This is for one simple reason: privacy matters to people when things are important.

That is particularly relevant here, because the contact tracing app hits at three of the most important parts of our privacy: our health, our location, and our social interactions. Health and location data, as I detail in my most recent book, What Do We Know and What Should We Do About Internet Privacy?, are two of the key areas of the current data world, in part because we care a lot about them and in part because they can be immensely valuable in both positive and negative ways. We care about them because they’re intensely personal and private – but that’s also why they can be valuable to those who wish to exploit or harm us. Health data, for example, can be used to discriminate – something the contact tracing app might well enable, as it could force people to self-isolate whilst others are free to move, or even act as an enabler for the ‘immunity passports’ that have been mooted but are fraught with even more problems than the contact tracing app.

Location data is another matter and something worthy of much more extensive discussion – but suffice it to say that there’s a reason we don’t like the idea of being watched and followed at all times, and that reason is real. If people know where you are or where you have been, they can learn a great deal about you – and know where you are not (if you’re not at home, you might be more vulnerable to burglars) as well as where you might be going. Authoritarian states can find dissidents. Abusive spouses can find their victims and so forth. More ‘benignly’, it can be used to advertise and sell local and relevant products – and in the aggregate can be used to ‘manage’ populations.

Relationship data – who you know, how well you know them, what you do with them and so forth – is in online terms one of the things that makes Facebook so successful and at the same time so intrusive. What a contact tracing system can do is translate that into the offline world. Indeed, that’s the essence of it: to gather data about who you come into contact with, or at least in proximity to, by getting your phone to communicate with all the phones close to you in the real world.

This is something we do and should care about, and could and should be protective over. Whilst it makes sense in relation to protecting against the spread of an infection, the potential for misuse of this kind of data is perhaps even greater than that of health and location data. Authoritarian states know this – it’s been standard practice for spies for centuries. The Stasi’s files were full of details of who had met whom and when, and for how long – this is precisely the kind of data that a contact tracing system has the potential to gather. This is also why we should be hugely wary of establishing systems that enable it to be done easily, remotely and at scale. This isn’t just privacy as some kind of luxury – this is real concern about things that are done in the real world and have been for many, many years, just not with the speed, efficiency and cheapness of installing an app on people’s phones.

Some of this people ‘instinctively’ know – they feel that the intrusions on their privacy are ‘creepy’ – and hence resist. Businesses and government often underestimate how much they care, how much they resist – and how able they are to resist. In my work I have seen this again and again. Perhaps the most relevant here was the dramatic nine-day failure that was the Samaritans Radar app, which scanned people’s tweets to detect whether they might be feeling vulnerable and even suicidal, but didn’t understand that even this scanning would be seen as intrusive by the very people it was supposed to protect. They rebelled, and the app was abandoned almost immediately after it launched. The NHS’s own ‘care.data’ scheme, far bigger and grander, collapsed for similar reasons – it wanted to suck up data from GP practices into a great big central database, but didn’t get either the legal or the practical consent from enough people to make it work. Resistance was not futile – it was effective.

This resistance seems likely in relation to the contact tracing app too – not least because the resistance grows spectacularly when there is little trust in the people behind a project. And, as we shall see, the government has done almost everything in its power to make people distrust their project.

Magical thinking

The second part of the problem is what can loosely be called ‘magical thinking’. This is another thing that is all too common in what might loosely be called the ‘digital age’. Broadly speaking, it means treating technology as magical, and thinking that you can solve complex, nuanced and multifaceted problems with a wave of a technological wand. It is this kind of magic that Brexiters believed would ‘solve’ the Irish border problems (it won’t) and led anti-porn campaigners to think that ‘age verification’ systems online would stop kids (and often adults) from accessing porn (it won’t).

If you watched Matt Hancock launch the app at the daily Downing Street press conference, you could have seen how this works. He enthused about the app like a child with a new toy – and suggested that it was the key to solving all the problems. Even with the best will in the world, a contact tracing app could only be a very small part of a much bigger operation, and only make a small contribution to solving whatever problems they want it to solve (more of which later). Magical thinking, however, makes it the key, the silver bullet, the magic spell that needs just to be spoken to transform Cinderella into a beautiful princess. It will never be that, and the more it is thought of in those terms the less chance it has of working in any way at all. The magical thinking means that the real work that needs to go on is relegated to the background or eliminated altogether, replaced only by the magic of tech.

Here, the app seems to be designed to replace the need for a proper and painstaking testing regime. As it stands, it is based on self-reporting of symptoms, rather than testing. A person self-reports, and then the system alerts anyone who it thinks has been in contact with that person that they might be at risk. Regardless of the technological safeguards, that leaves the system at the mercy of hypochondriacs who will report the slightest cough or headache, thus alerting anyone they’ve been close to, or malicious self-reporters who either just want to cause mischief (scare your friends for a laugh) or who actually want to cause damage – go into a shop run by a rival, then later self-report and get all the workers in the shop worried into self-isolation.
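To see why an unverified self-report is so open to abuse, here is a deliberately simplified model of that flow – hypothetical Python for illustration only, not the actual NHS app or its protocol. The names and structure are my own; the point is simply that with no verification step, a single report, honest or malicious, fans out alerts to everyone the reporter’s phone has logged.

```python
from collections import defaultdict

# Hypothetical, simplified model of a self-report contact tracing flow.
# Each phone logs which other phones it has been near; a self-report
# (no test required) alerts every logged contact.

class TracingModel:
    def __init__(self):
        # phone id -> set of ids it has been in proximity to
        self.contacts = defaultdict(set)

    def record_proximity(self, a, b):
        self.contacts[a].add(b)
        self.contacts[b].add(a)

    def self_report(self, phone):
        # No verification step: any report, honest or malicious,
        # triggers alerts to every recorded contact.
        return sorted(self.contacts[phone])

model = TracingModel()
for worker in ["w1", "w2", "w3"]:
    model.record_proximity("visitor", worker)

# A single mistaken or malicious report sends all three workers
# into self-isolation.
print(model.self_report("visitor"))  # ['w1', 'w2', 'w3']
```

The sketch makes the asymmetry visible: reporting costs the reporter nothing, while the cost lands entirely on the alerted contacts.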

These are just a couple of the possibilities. There are more. Stoics, who have symptoms but don’t take it seriously and don’t report – or people afraid to report because it might get them into trouble with work or friends. Others who don’t even recognise the symptoms. Asymptomatic people who can go around freely infecting people and not get triggered on the system at all. The magical thinking that suggests the app can do everything doesn’t take human nature into account – let alone malicious actors. History shows that whenever a technological system is developed the people who wish to find and exploit flaws in it – or different ways to use it – are ready to take advantage.

Magical thinking also means not expecting anything to go wrong – whether it be the malicious actors already mentioned or some kind of technical flaw that has not been anticipated. It also means assuming that all these problems must be soluble by a little bit of techy cleverness, because the techies are so clever. Of course they are clever – but there are many problems that tech alone can’t solve.

The issue of trust

One of those is trust. Tech can’t make people trust you – indeed, many people are distinctly distrustful of technology. The NHS generates trust, and those behind the app may well be assuming that they can ride on the coattails of that trust – but that itself may be wishful thinking, because they have done almost none of the things that generate real trust – and the app depends hugely on trust, because without it people won’t download and won’t use the app.

How can they generate that trust? The first point, and perhaps the hardest, is to be trustworthy. The NHS generates trust but politicians do the opposite. These particular politicians have been demonstrably and dramatically untrustworthy, noted for their lies – Boris Johnson having been sacked from more than one job for having lied. Further, their tech people have a particularly dishonourable record – Dominic Cummings is hardly seen as a paragon of virtue even by his own side, whilst the social media manipulative tactics of the leave campaign were remarkable for their effectiveness and their dishonesty.

In those circumstances, that means you have to work hard to generate trust. There are a few keys here. The first is to distance yourself from the least trustworthy people – the vote leave campaigners should not have been allowed anywhere near this, for example. The second is to follow systems and procedures in an exemplary way, building in checks and balances at all times, and being as transparent as possible.

Here, they’ve done the opposite. It has been almost impossible to find out what was going on until the programme was actually already in pilot stage. Parliament – through its committee system – was not given oversight until the pilot was already under way, and the report of the Human Rights Committee was deeply critical. There appears to have been no Data Protection Impact Assessment done in advance of the pilot – which is almost certainly in breach of the GDPR.

Further, it is still not really clear what the purpose of the project is – and this is also something crucial for the generation of trust. We need to know precisely what the aims are – and how they will be measured, so that it is possible to ascertain whether it is a success or not. We need to know the duration, what happens on completion – to the project, to the data gathered and to the data derived from the data gathered. We need to know how the project will deal with the many, many problems that have already been discussed – and we needed to know that before the project went into its pilot stage.

Being presented with a ‘fait accompli’ and being told to accept it is one way to reduce trust, not to gain it. All these processes need to take place whilst there is still a chance to change the project, and change it significantly – because all the signs are that a significant change will be needed. Currently it seems unlikely that the app will do anything very useful, and it will have significant and damaging side effects.

Misunderstanding Privacy – part 2

…which brings us back to privacy. One of the most common misunderstandings of privacy is the idea that it’s about hiding something away – hence the facetious and false ‘if you’ve got nothing to hide you’ve got nothing to fear’ argument that is made all the time. In practice, privacy is complex and nuanced and more about controlling – or at least influencing – what kind of information about you is made available to whom.

This last part is the key. Privacy is relational. You need privacy from someone or something else, and you need it in different ways. Privacy scholars are often asked ‘who do you worry about most, governments or corporations?’ Are you more worried about Facebook or GCHQ? It’s a bit of a false question – because you should be (and probably are) worried about them in different ways, just as you’re worried about privacy from your boss, your parents, your kids, your friends in different ways. You might tell your doctor the most intimate details about your health, but you probably wouldn’t tell your boss or a bloke you meet in the pub.

With the coronavirus contact tracing app, this is also the key. Who gets access to our data, who gets to know about our health, our location, our movements and our contacts? If we know this information is going to be kept properly confidential, we might be more willing to share it. Do we trust our doctors to keep it confidential? Probably. Would we trust the politicians to keep it confidential? Far less likely. How can we be sure who will get access to it?

Without getting into too much technical detail, this is where the key current argument is over the app. When people talk about a centralised system, they mean that the data (or rather some of the data) is uploaded to a central server when you report symptoms. A decentralised system does not do that – the data is only communicated between phones, and doesn’t get stored in a central database. This is much more privacy-friendly, but does not build up a big central database for later use and analysis.
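As a rough illustration of the decentralised idea – in the spirit of designs like DP-3T and the Apple/Google exposure notification scheme, and emphatically not a description of the NHS app itself – here is a minimal sketch. All names are invented for illustration: phones broadcast rotating random identifiers and remember the ones they hear; on diagnosis, only the reporter’s own identifiers are published, and each phone checks for matches locally.

```python
import secrets

# Illustrative sketch of a decentralised contact tracing design.
# No central server ever learns who met whom: matching happens
# entirely on each phone.

class Phone:
    def __init__(self):
        self.own_ids = []    # identifiers this phone has broadcast
        self.heard_ids = []  # identifiers heard from nearby phones

    def broadcast(self):
        eid = secrets.token_hex(8)  # fresh rotating identifier
        self.own_ids.append(eid)
        return eid

    def hear(self, eid):
        self.heard_ids.append(eid)

    def check_exposure(self, published_ids):
        # Matching is local: only a possible exposure is revealed,
        # never a central record of the contact graph.
        return any(eid in published_ids for eid in self.heard_ids)

alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())   # Alice and Bob are in proximity

published = alice.own_ids     # Alice reports; only her own IDs go public
print(bob.check_exposure(published))  # True
```

The design choice is the point: the published list reveals nothing about who Alice met, and no database of meetings ever exists to be repurposed later.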

This is why privacy people much prefer the idea of a decentralised system – because, amongst other things, it keeps the data out of the hands of people that we cannot and should not trust. Out of the hands of the people we need privacy from.

The government does not seem to see this. They’re keen to stress how well the data is protected in ‘security’ terms – protected from hackers and so forth – without realising (or perhaps admitting) that the people we really want privacy from, the people who present the biggest risk to the users, are the government themselves. We don’t trust this government – and we should not really trust any government, but build in safeguards and protections from those governments, and remember that what we build now will be available not just to this government but to successors, which may be even worse, however difficult that might be to imagine.

Ways forward?

Where do we go from here? It seems likely that the government will try to push on regardless, and present whatever happens as a great success. That should be fought against, tooth and nail. They can and should be challenged and pushed on every point – legal, technical, practical, and trust-related. That way they may be willing to move to a more privacy-friendly solution. Such solutions do exist, and it’s not too late to change.

The Investigatory Powers Act: still a question of trust…

I read the short review of the Investigatory Powers Act by David Anderson QC, Independent Reviewer of Terrorism Legislation, with a great deal of interest. Anderson has been exemplary in his role, and has played a very significant part in ensuring that the Investigatory Powers Act has the safeguards that it does, and the chance to be something other than the ‘Snooper’s Charter’ it was so often described as.

I find myself agreeing with a great deal of what he says – though coming to rather different conclusions. As one of those who followed the process of the act from beginning to end – and who participated in a number of the reviews, including appearing before the Joint Bill Committee, and being one of those consulted by David in his Bulk Powers Review – I agree with him entirely that the bill has been one of the most carefully scrutinised in recent times. That, however, also reveals the weaknesses of our scrutiny system. Some of these weaknesses are unavoidable – it would be impossible to expect parliamentarians to understand many of the issues, or even to read all the fairly massive reports that the various reviews resulted in. Others are not: parliamentarians should be able to see their own weaknesses, and be willing to listen a bit more carefully to those who do understand them. As a legal academic, for example, I try to recognise my own weaknesses in understanding the technology, and defer to those who do understand it.

Where I find myself disagreeing most with the Independent Reviewer is in the weight that he appears to give to the bad features and weaknesses of the Investigatory Powers Act. Many of the problems seem to hit at the heart of the Act, and undermine its claim to be something positive overall.

  1. Internet Connection Records, which he notes that he had no opportunity to evaluate, were the one area noted as being entirely new in the bill – and in the view of many (including myself) are both unproven and represent a huge risk and a huge waste of resources. They should, in my view, have been included in David Anderson’s Bulk Powers Review – though not ‘Bulk Powers’ in the technical terms of the bill, they are in a real sense every bit as ‘bulky’ and ‘powerful’. They are likely (in my view) to be highly difficult to implement and highly unlikely to be effective – and they could have been excluded from the Act, or introduced and tested on a pilot basis, with scope for a proper review.
  2. I share David Anderson’s concern over the dual lock system – and agree with him that this could and should have been done better. As another key element of the bill – and considered to be one of the key safeguards – this really matters. If the dual lock ends up being little more than a rubber stamp, its existence may do more harm than good, providing false assurance and complacency. The test of this will be in the implementation – something that needs to be watched very carefully.
  3. I also share David Anderson’s note that it is “legitimate to ask whether there are adequate advance safeguards on the exercise of some of the very extensive powers now spelled out for the first time”. This, it seems to me, is very important indeed – and hits at the heart of the problems that many of us have with the bill. The powers are extensive, and it is not at all clear that the safeguards are adequate.
  4. Finally as David Anderson notes, the failure to recognise in statute the idea of an ‘Investigatory Powers Commission’ could be significant. The question is why it was omitted: was it, as those suspicious of the authorities might suggest, because they don’t want to put proper, independent oversight on a statutory basis for fear of its restricting their actions?

That, I think, reflects my overall difference with David Anderson – the same question that he highlighted in his review of investigatory powers in 2015. A question of trust. The biggest weakness of the Investigatory Powers Act, for me, is that it still relies on a great deal of trust, without the authorities having yet, for me, proved themselves worthy of that trust. We have to trust that the dual lock system will work. We have to trust that an investigatory powers commission will be put in place and have appropriate powers – they’re not set down in statute. We have to trust that the Technology Advisory Panel will be filled with the right kind of people, and will be able to perform its functions. We have to trust that everything is ‘OK’ with Internet Connection Records.

We have to trust (as David Anderson also notes) that the government interprets the various grey areas and ambiguities in the Act appropriately – a degree of trust we really shouldn’t need nearly as much as we do. Things like how to deal with encryption (whether the Act allows the government to mandate ‘back doors’ etc) and extraterritoriality (how the Act will be enforced on service providers outside the UK) remain subject to a great deal of doubt – and are potentially deeply dangerous.

Whether it is possible for me to agree with David Anderson that this is a ‘victory for democracy and the rule of law’ remains to be seen. Right now, I can’t give it a round of applause. I don’t condemn it completely – but there are sufficient problems at the heart of many of the most important parts of the Act to make it impossible to applaud. A chance missed, is the best I can say at this stage.

The real test is in the implementation. On that, I wholeheartedly agree with David Anderson that the new Investigatory Powers Commission (or whatever name is given to it) is the key. It will make or break the trust that people can have in the Act, and indeed in those engaged in surveillance. As he puts it:

“the new supervisory body needs to develop a culture of high-level technical understanding, intellectual enquiry, openness and challenge.”

If it does that, I will be delighted – and, with my cynical hat on, very surprised. I hope that I am.

More on Corbyn’s Digital Manifesto…

Yesterday a piece I wrote about Corbyn’s Digital Manifesto was published on The Conversation – you can find it here:

https://theconversation.com/corbyns-digital-meh-nifesto-is-too-rooted-in-the-past-to-offer-much-for-the-future-65003

The natural constraints of a short piece, and the requirements of The Conversation meant that I didn’t cover all the areas, and my own tendency to, well, be a bit strident in my opinions at times means that it may not have been quite as clear as it could have been. I would like to add a few things to what I said, clarify a few more, and open up the opportunity for anyone to comment on it.

The first thing to make absolutely clear is that though I was distinctly underwhelmed by the Digital Democracy Manifesto, it is far better than anything produced by Labour to date, and vastly better than anything I have seen by the Tories. My criticism of it was not in any way supporting what the Tories are currently doing, nor what they are likely to do. I used the word ‘meh’ in my piece because I wanted (and still want) Labour to be bolder, clearer, and more forward-looking precisely so that they can provide a better opposition to the Tories – and to the generally lamentable status quo on internet policy. As I tried (but perhaps failed) to make clear, I am delighted that Corbyn has taken this initiative, and hope it sparks more discussion. There are many of us who would be delighted to contribute to the discussion and indeed to the development of policy.

The second thing to make clear is that my piece was not an exhaustive analysis of the manifesto – indeed, it largely missed some really good parts. The support for Open Source, for example – which was criticised aggressively in The Sun – is to be thoroughly applauded. You can, as usual, trust The Sun to get things completely wrong.

I would of course like to say much more about privacy – sadly the manifesto (in some ways subconsciously) repeats the all-too-common idea that privacy is a purely personal, individual right, when it actually underpins the functioning of communities. I’ve written about this many times before – one piece is here, for example – but that is for another time. Labour, for me, should change its tack on privacy completely – but I know that I am somewhat unusual in that belief. I’ll continue to plug away on that particular issue, but not here and not now.

What I would hope is that the manifesto starts an open discussion – and starts to move us to a better understanding of these issues. If we don’t understand them better, we’ll continue to be driven down very unhelpful paths. Whether you’re one of Corbyn’s supporters or his bitterest opponents, that’s something to be avoided.

Panama, privacy and power…

David Cameron’s first reaction to the questions about his family’s involvement with the Mossack Fonseca leaks was that it was a ‘private matter’ – something that was greeted with a chorus of disapproval from his political opponents and large sections of both the social and ‘traditional’ media. Privacy scholars and advocates, however, were somewhat muted – and quite rightly, because there are complex issues surrounding privacy here, issues that should at the very least make us pause and think. Privacy, in the view of many people, is a human right. It is included in one form or another in all the major human rights declarations and conventions. This, for example, is Article 8 of the European Convention on Human Rights:

“Everyone has the right to respect for his private and family life, his home and his correspondence.”

Everyone. Not just the people we like. Indeed, the test of your commitment to human rights is how you apply them to those who you don’t like, not how you apply them to those that you do. It is easy to grant rights to your friends and allies, harder to grant them to your enemies or those you dislike. We see how many of those who shout loudly about freedom of speech when their own speech is threatened are all too ready to try to shut out their enemies: censorship of extremist speech is considered part of the key response to terrorism in the UK, for example. Those of us on the left of politics, therefore, should be very wary of overriding our principles when the likes of David Cameron and George Osborne are concerned. Even Cameron and Osborne have the right to privacy, we should be very clear about that. We can highlight the hypocrisy of their attempts to implement mass surveillance through the Investigatory Powers Bill whilst claiming privacy for themselves, but we should not deny them privacy itself without a very good cause indeed.

Privacy for the powerful?

And yet that is not the whole story. Rights, and human rights in particular, are most important when used by the weak to protect themselves from the powerful. The powerful generally have other ways to protect themselves. Privacy in particular has at times been given a very bad name because it has been used by the powerful to shield themselves from scrutiny. A stream of philandering footballers have tried to use privacy law to prevent their affairs becoming public – Ryan Giggs, Rio Ferdinand and John Terry. Prince Charles’ ultimately unsuccessful attempts to keep the ‘Black Spider Memos’ from being exposed were also on the basis of privacy. The Catholic Church covered up the abuses of its priests. Powerful people using a law which their own kind largely forged is all too common, and should not be accepted without a fight. As feminist scholar Anita Allen put it:

“[it should be possible to] rip down the doors of ‘private’ citizens in ‘private’ homes and ‘private’ institutions as needed to protect the vital interests of vulnerable people.”

This argument may have its most obvious application in relation to domestic abuse, but it also has an application to the Panama leaks – particularly at a time when the politics of austerity is being used directly against the vital interests of vulnerable people. Part of the logic of austerity is that there isn’t enough money to pay for welfare and services – and part of the reason that we don’t have ‘enough’ money is that so much tax is being avoided or evaded, so there’s a public interest in exposing the nature and scale of tax avoidance and evasion, a public interest that might override the privacy rights of the individuals involved.

How private is financial information?

That brings us to the next question: should financial or taxation information be treated as private, and accorded the strongest protection? Traditions and laws vary on this. In Norway, for example, income and tax information for every citizen is publicly available. This has been true since the 19th century – from the Norwegian perspective, financial and tax transparency is part of what makes a democratic society function.

It is easy to see how this might work – and indeed, an anecdote from my own past shows it very clearly. When I was working for one of the biggest chartered accountancy firms back in the 80s, I started to get suspicious about what had happened over a particular pay rise – so I started asking my friends and colleagues, all of whom had started with the firm at the same time and progressed up the ladder in the same way, how much they were earning. I discovered, to my shock, that every single woman was earning less than every single man. That is, the highest-paid woman earned less than the lowest-paid man – and I knew them well enough to know that this was in no way a reflection of their merits as workers. The fact that salaries were considered private, and that no-one was supposed to know (or ask) what anyone else was earning, meant that what appeared to me, once I knew about it, to be blatant sexism was kept completely secret. Transparency would have exposed it in a moment – and probably prevented it from happening.

In the UK, however, privacy over financial matters is part of our culture. That may well be a reflection of our conservatism – it functions in a ‘conservative’ way, tending to protect the power of the powerful – but it is also something that most people, I would suggest, believe is right. Indeed, as a privacy advocate I would in general support more privacy rather than less. It might be a step too far to suggest that all our finances should be made public – but not, perhaps, to suggest that the finances of those in public office should be public. The people who, in this case, are supporting or driving policies should be required to show whether they are benefiting from those policies – and whether they are being hypocritical in putting those policies forward. We should be able to find out whether they personally benefit from tax cuts or changes, for example, and whether they’re contributing appropriately when they’re requiring others to tighten their belts.

I do not, of course, expect any of this to happen. In the UK in particular the powerful have far too strong a hold on our politics to let it happen. That then brings me to one more privacy-related issue exposed by the Panama papers. If there is no legal way for information that is to the public benefit to come out, what approach should be taken to the illegal ways in which that information is acquired? There have been many other prominent examples – Snowden’s revelations about the NSA, GCHQ and so on, Hervé Falciani’s data from HSBC in Switzerland in particular – where in some very direct ways the public interest could be said to be served by the leaks. Are they whistleblowers or criminals? Spies? Should they be prosecuted or cheered? And then what about other hackers like the ‘Impact Team’ who hacked Ashley Madison? Whether each of them was doing ‘good’ is a matter of perspective.

Vulnerability of data…

One thing that should be clear, however, is that no-one should be complacent about data security and data vulnerability. All data, however it is held, wherever it is held, and whoever it is held by, is vulnerable. The degree of that vulnerability, the likelihood of any vulnerability being exploited and so forth varies a great deal – but the vulnerability is there. That has two direct implications for the state of the internet right now. Firstly, it means that we should encourage and support encryption – and not do anything to undermine it, even for law enforcement purposes. Secondly, it means that we should avoid holding data that we don’t need to hold – let alone create unnecessary data. The Investigatory Powers Bill breaks both of those principles. It undermines rather than supports encryption, and requires the creation of massive amounts of data (the Internet Connection Records) and the gathering and/or retention of even more (via the various bulk powers). All of this adds to our vulnerability and our risks – something that we should think very, very hard about before doing. I’m not sure that thinking is happening.
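The second of those principles – data minimisation – can be made concrete with a small sketch. Everything here is illustrative (the function name and the example address are my own inventions), but the underlying technique is standard: where a service only needs to recognise a returning identifier rather than recover it, storing a salted hash instead of the raw value means a breach of the stored data leaks far less.

```python
import hashlib
import secrets

# Illustrative data-minimisation sketch: store a salted hash of an identifier
# rather than the identifier itself. Linkage still works (same input, same
# token), but the raw value never needs to be retained.
salt = secrets.token_bytes(16)  # kept server-side, never published

def pseudonymise(identifier: str) -> str:
    """Map an identifier to a one-way token using the server-side salt."""
    return hashlib.sha256(salt + identifier.encode()).hexdigest()

stored = pseudonymise("alice@example.com")

# The same input always maps to the same token, so a returning user
# can still be recognised...
assert stored == pseudonymise("alice@example.com")
# ...but different identifiers map to different tokens, and nothing
# stored can be reversed back into the original address.
assert stored != pseudonymise("bob@example.com")
```

The point is not that hashing solves everything – it does not – but that holding less raw data is an engineering choice, and legislation that mandates creating and retaining *more* data pushes in exactly the opposite direction.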

 

Labour and the #IPBill

I am a legal academic, specialising in internet privacy – a lecturer at the UEA Law School. I am the author of Internet Privacy Rights: Rights to Protect Autonomy, published by Cambridge University Press in 2014, and was one of the academics who was a witness before the Joint Parliamentary Committee on the Investigatory Powers Bill. I am also a member of the Labour Party – this piece is written from all of those perspectives.


Labour and the Investigatory Powers Bill

The Investigatory Powers Bill has its second reading on Tuesday – part of what appears to be an attempt to pass the Bill with unseemly haste. One of the biggest questions is how Labour will approach the Bill – the messages so far have been mixed. Andy Burnham’s press release on the 1st of March in response to the latest draft was, from my perspective, the best thing that has emerged from Labour in relation to surveillance in many decades, if not ever.

What is important is that Labour builds on this – for in taking a strong and positive response to the Investigatory Powers Bill Labour has a chance to help shape its future in other areas. What is more, Labour can tap into some of its best and most important traditions and realise the promise of some of its best moments.

Demand more time

The first and most important thing that Labour should do at this stage is demand more time for scrutiny of the bill. There are some very significant issues that have not received sufficient time – the three parliamentary committees that have examined the bill so far (the Science and Technology Committee, the Intelligence and Security Committee and the specially convened Joint Parliamentary Committee on the Investigatory Powers Bill) all made that very clear. The Independent Reviewer of Terrorism Legislation, David Anderson QC, has also been persistent in his calls for more time and more careful scrutiny – most recently in his piece in the Telegraph where he said:

“A historic opportunity now exists for comprehensive reform of the law governing electronic surveillance. Those who manage parliamentary business must ensure that adequate time – particularly in committee – is allowed before December 2016.”

David Anderson is right on all counts – this is a historic opportunity, and adequate time is required for that review. How Labour responds could well be the key to ensuring that this time is provided: a strong response now, and in particular the willingness to reject the bill in its entirety unless sufficient time is given, would put the government in a position where it has to provide that time.

As well as pushing for more time, there are a number of things that Labour – and others – should be requiring in the new bill, many of which were highlighted by the three parliamentary committees but have not been put into the new draft bill.

Proper, independent oversight

The first of these is proper, independent oversight – oversight not just of whether the powers introduced or regulated by the bill are being used correctly in a procedural sense (whether warrants are being appropriately processed and so forth) but of whether the powers are actually being used in the ways that parliament envisaged and that the people were told. Reassurances need to be not just verified but re-examined – and as time moves on, as technology develops and as the way that people use that technology develops, it needs to be possible to keep asking whether the powers remain appropriate.

The oversight body needs not just to be independent, but to have real powers. Powers to sanction, powers to notify, and even powers to suspend the functioning of elements of the bill should those elements be found to be no longer appropriate or to have been misused.

Independent oversight – as provided, for example, by the Independent Reviewer of Terrorism Legislation – is not just valuable in itself, but in the way that it can build trust. Building trust is critical in this area: a lot of trust has been lost, as can be seen by the rancorous nature of a lot of the debate. It would help everyone if that rancour is reduced.

Re-examine and rebalance ‘Bulk Powers’

One of the most contentious areas in the bill is that of ‘Bulk Powers’: bulk interception, bulk acquisition (of communications data), bulk equipment interference (which includes what is generally referred to as ‘hacking’) and bulk personal datasets. These powers remain deeply contentious – and potentially legally challengeable. There are specific issues with some of them – with bulk equipment interference a sufficiently big issue that the Intelligence and Security Committee recommended its removal from the bill.

It is these powers that lead to the accusation that the bill involves ‘mass surveillance’ – and it is not sufficient for the Home Secretary simply to deny this. Her denials appear based on a semantic argument about what constitutes ‘surveillance’ – an argument that potentially puts her at odds with both the European Court of Human Rights and the Court of Justice of the European Union. It also puts the UK increasingly at odds with opinion around the world. The UN’s Special Rapporteur on the right to privacy, Joseph A. Cannataci, said in his Report to the UN Human Rights Council on the 8th March:

“It would appear that the serious and possibly unintended consequences of legitimising bulk interception and bulk hacking are not being fully appreciated by the UK Government.”

Much more care is needed here if the Investigatory Powers Bill is to withstand legal challenge and avoid damaging not only people’s privacy but also the worldwide reputation of the UK. Again, proper and independent oversight would help here, as would stronger limits on the powers.

An independent feasibility study for ICRs

The Home Office have described ‘Internet Connection Records’ as the one genuinely new part of the Investigatory Powers Bill: it is also one of the most concerning. Critics have come from many directions. Privacy advocates note that they are potentially the most intrusive measure of all, gathering what amounts to substantially all of our internet browsing history – and creating databases of highly vulnerable data, undermining rather than enhancing security and creating unnecessary risks. Industry experts have suggested they would be technically complex, extortionately expensive and extremely unlikely to achieve the aims that have been suggested. All three parliamentary committees asked for more information and clarity – and yet that clarity has not been provided. The suggestion that ICRs are like an ‘itemised phone bill’ for the internet has been roundly criticised (notably by the Joint IP Bill Committee) and yet it appears to remain the essential concept and underpinning logic of the idea.
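To see why even the ‘itemised phone bill’ framing understates the intrusiveness, consider a toy sketch. The record format and the service names here are entirely hypothetical – the Bill itself leaves the definition of an ICR unclear, which is part of the criticism – but even a bare log of services-and-times, with no page-level detail, supports sensitive inferences:

```python
from datetime import datetime

# Hypothetical shape of an Internet Connection Record: just which service
# was connected to, and when - no URLs, no page content.
icr_log = [
    {"time": datetime(2016, 3, 1, 9, 14), "service": "bbc.co.uk"},
    {"time": datetime(2016, 3, 1, 9, 21), "service": "samaritans.org"},
    {"time": datetime(2016, 3, 1, 22, 3), "service": "gamblersanonymous.org.uk"},
]

def sensitive_connections(log, watchlist):
    """Return the services in the log that appear on a (purely
    illustrative) list of sensitive destinations."""
    return [entry["service"] for entry in log if entry["service"] in watchlist]

# Health, habit and belief inferences fall straight out of the 'phone bill'.
flagged = sensitive_connections(
    icr_log, {"samaritans.org", "gamblersanonymous.org.uk"}
)
```

A phone bill tells you that a number was dialled; a service-level browsing log tells you that someone contacted the Samaritans late at night. The analogy breaks down precisely where the privacy harm begins.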

Given all this, to introduce the idea without proper testing and discussion with the industry seems premature and ill-conceived at best. If the idea cannot be rejected outright, it should at least be properly tested – and again, with independent oversight. Instead of including it within the bill, a feasibility study could be mounted – a year of working with industry to see if the concept can be made to work without excessive cost, producing results that can actually be useful, can be properly secured and so forth. If at the end of the feasibility study the evidence suggests the idea is workable, it can be added back into the bill. If not, alternative routes can be taken.

Reassess encryption

Perhaps the most contentious issue of all at present is the way in which the bill addresses encryption. All three parliamentary committees demanded clarity over the matter – particularly in relation to end-to-end encryption. That clarity is conspicuous by its absence in the bill. Whether the lack of clarity is intentional or not is somewhat beside the point: the industry in particular needs clarity. Specifically, the industry needs the government to make clear in the legislation that it will neither ban end-to-end encryption, nor demand that ‘back doors’ be built into systems, nor pressurise companies to build in those back doors or weaken their encryption systems.

The current position not only puts the government at odds with the industry, it puts it at odds with computer scientists around the world. The best of those scientists have made their position entirely clear – and yet still the government seems unwilling to accept what both scientists and industry are telling them. This needs to change – what is being suggested right now is dangerous to privacy and security and potentially puts the UK technology industry at a serious competitive disadvantage compared to the rest of the world.
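Why end-to-end encryption and back doors are incompatible can be shown with a toy sketch of key agreement. This is purely illustrative – real messengers use vetted, authenticated protocols, not hand-rolled crypto, and the prime, names and hashing step below are my own assumptions – but the structural point holds: the endpoints derive a shared key that never passes through the provider.

```python
import hashlib
import secrets

# Toy Diffie-Hellman key agreement. P is a well-known Mersenne prime,
# used here only as a toy modulus; do not use this sketch in practice.
P = 2**127 - 1
G = 3

# Each endpoint keeps its private value on the device; only the public
# values ever cross the provider's servers.
alice_private = secrets.randbelow(P - 2) + 1
bob_private = secrets.randbelow(P - 2) + 1
alice_public = pow(G, alice_private, P)
bob_public = pow(G, bob_private, P)

# Both endpoints compute the same shared secret from the other's public
# value and their own private one; the provider, holding only the public
# values, cannot - unless a back door is deliberately built in.
alice_key = hashlib.sha256(str(pow(bob_public, alice_private, P)).encode()).digest()
bob_key = hashlib.sha256(str(pow(alice_public, bob_private, P)).encode()).digest()
assert alice_key == bob_key
```

This is why a demand that a provider of genuine end-to-end encryption hand over plaintext cannot be met by that provider as the system stands: compliance would require redesigning the system so that the provider *does* hold key material – which is exactly what ‘weakening encryption’ means.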

Working with industry and science

Therein lies one of the most important keys: working with rather than against the IT industry and computer scientists. Plans such as those in the Investigatory Powers Bill should have been made with the industry and science from the very start – and the real experts should be listened to, not ridden roughshod over. Inconvenient answers need to be faced up to, not rejected. Old concepts should not be used as models for new situations when the experts tell you otherwise.

This is where one of Labour’s longest traditions should come into play. Harold Wilson’s famous Scarborough speech in 1963, where he talked about the ‘white heat’ of technology is perhaps even more apt now than it was all those years ago. Labour should be a modernising party – and that means embracing technology and science, listening to scientists and learning from them, using evidence-based policy and all that entails. Currently, the Investigatory Powers Bill is very much the reverse of that – but it still could become that, if appropriate changes are made.

Protecting ordinary people

Labour should also be tapping into another strong tradition – indeed in many ways its founding tradition. Labour was born to support and protect working people – ‘ordinary’ people in the positive sense of that word. Surveillance, in practice, often does precisely the opposite – it can be used by the powerful against those with less power. It can be politically misused – and the history of surveillance of trade unionists and left-wing activists is one of which the Labour Party should be acutely aware. Without sufficient safeguards and limitations, any surveillance system can and will be misused, and often in precisely these kinds of ways.

Labour could and should remember this – and work very hard to ensure that those safeguards and limitations are built in. Some of the measures outlined above – proper oversight, rebalancing bulk powers, a feasibility study on ICRs in particular – are intended to do precisely that.

Not ‘soft’ but strong

Building in these safeguards, working with technology industries and scientists, and protecting rather than undermining encryption should not be seen as ‘soft’ – and any suggestion that opposing the measures currently in the Bill is somehow being ‘soft’ on terrorists and paedophiles should not just be rejected but turned on its head. The current bill will not protect us in the ways suggested – indeed, it will make us less secure and more at risk from cybercriminals, create more openings for terrorists and others, and could be a massive waste of money, time and expertise. That money, time and expertise could be directed in ways that do provide more protection.

What is more, as noted above, the current bill would be much more vulnerable to legal challenge than it should be. That is not a sign of strength: very much the opposite.

Labour’s future direction

Most of these issues are relevant to all political parties – but for Labour the issue is particularly acute. Labour is currently trying to find a new direction – and the challenge presented by the Investigatory Powers Bill could help it find one. A positive approach could build on the old traditions outlined above, as well as the human rights tradition built in Blair’s early years: the Human Rights Act is one of New Labour’s finest achievements, despite the bad treatment it receives in the press. A party that forges alliances with the technology industry and with computer science, one that embraces the internet rather than seeing it as a scary and dangerous place to be corralled and controlled, is a party that has a real future. Labour wants to engage with young people – so be the party that supports WhatsApp rather than tries to ban it or break it. Be the party that understands encryption rather than fights against it.

All this could begin right now. I hope Labour is up to the challenge.

 

 

The IP Bill: opaqueness on encryption?

One thing that all three of the Parliamentary committees that reviewed the Draft Investigatory Powers Bill agreed upon was that the bill needed more clarity over encryption.

This is the Intelligence and Security Committee report:

[Screenshot: the relevant recommendation from the Intelligence and Security Committee report]

This is the Science and Technology Committee report:

[Screenshot: the relevant recommendation from the Science and Technology Committee report]

This is the Joint Parliamentary Committee on the Investigatory Powers Bill:

[Screenshot: the relevant recommendation from the Joint Parliamentary Committee report]

In the new draft Bill, however, this clarity does not appear to have been provided – at least as far as most of the people who have been reading through it have been able to determine. There are three main possible interpretations of this:

  1. That the Home Office is deliberately trying to avoid providing clarity;
  2. That the Home Office has not really considered the requests for clarity seriously; or
  3. That the Home Office believes it has provided clarity.

The first would be the most disturbing – particularly as one of the key elements of the Technical Capability Notices as set out both in the original draft bill and the new version is that the person upon whom the notice is served “may not disclose the existence or contents of the notice to any other person without the permission of the Secretary of State” (S218(8)). The combination of an unclear power and the requirement to keep it secret is very dangerous.

The second possibility is almost as bad – because, as noted above, all three committees were crystal clear about how important this issue is. Indeed, their reports could be seen as models for the Home Office as to how to make language clear. Legal drafting is never quite as easy as it might be, but it can be clear and should be clear.

The third possibility – that they believe they have provided clarity – is also pretty disastrous in the circumstances, particularly as the time being made available to scrutinise and amend the Bill appears likely to be limited. This is the interpretation that the Home Office ‘response to consultations’ suggests – but people who have examined the Bill so far have not, in general, found it to be clear at all. That includes both technological experts and legal experts. Interpretation of law is of course at times difficult – but that is precisely why effort must be put in to make it as clear as possible. At the moment whether a backdoor or equivalent could be demanded depends on whether it is ‘technically feasible’ or ‘practicable’ – terms open to interpretation – and on interdependent and somewhat impenetrable definitions of ‘telecommunications operator’, ‘telecommunications service’ and ‘telecommunications system’, which may or may not cover messaging apps, hardware such as iPhones and so forth. Is it clear? It doesn’t seem clear to me – but I am often wrong, and would love to be corrected on this.

This issue is critical for the technology industry. It needs to be sorted out quickly and simply. It should have been done already – which is why the first possibility, that the lack of clarity is deliberate, looms larger than it ordinarily would. If it is true, then why have the Home Office not followed the advice of all three committees on this issue?

If on the other hand this is simply misinterpretation, then some simple, direct redrafting could solve the problems. Time will tell.

An independent review body for the IP Bill?

One of the recommendations of the Joint Parliamentary Committee on the Investigatory Powers Bill was that the Bill should include some kind of a review process or ‘sunset clause’. The new Bill, as I noted in my earlier post on the subject, has included a term that seems to answer that recommendation – but does so in such a cursory way as to be close to irrelevant. This is how it is set out:

222 Review of operation of Act

(1)  The Secretary of State must, within the period of 6 months beginning with the end of the initial period, prepare a report on the operation of this Act.

(2)  In subsection (1) “the initial period” is the period of 5 years and 6 months beginning with the day on which this Act is passed.

(3)  In preparing the report under subsection (1), the Secretary of State must, in particular, take account of any report on the operation of this Act made by a Select Committee of either House of Parliament (whether acting alone or jointly).

(4)  The Secretary of State must

(a)  publish the report prepared under subsection (1), and

(b)  lay a copy of it before Parliament.

So, effectively, this means that the Secretary of State will have to produce a report after six years and lay a copy of it before Parliament – that’s all. Six years is a long time in relation to the internet. Six years ago, for example, WhatsApp had only just been launched, and Snapchat did not even exist. Facebook had 400 million users: it now has 1.6 billion.

Even more pertinently, the Investigatory Powers Bill has some significant new and distinctly controversial powers – most directly some of the ‘Bulk Powers’ and the Internet Connection Records (ICRs), about which I have written a number of times (here and here for example). ICRs have been criticised in a number of ways: their potential intrusiveness, the difficulty in defining what they actually are, the costs involved in their collection and retention, and the likelihood of their being able to do what the Bill suggests they should. All these matter – and to a great extent all of these are a matter of conjecture. Those like myself who believe that they will end up hugely expensive, highly ineffective and potentially vulnerable are to at least some degree speculating – but so are those who believe they’ll be a crucial tool for law enforcement and the security services, a proportionate and effective response, easily safeguarded and no great burden on the relevant service providers.

Both sides of the argument believe that they’re right – and have provided evidence to back up their opinions. Personally I believe that my evidence is the more compelling – but I would believe that. I am sure that the proponents of the inclusion of Internet Connection Records believe the same about their evidence. Who is right? The best way to tell might well be to have a proper, regular and independent review of the reality. An audit of a kind, to assess all these different aspects. Is it proving easy to define ICRs in all the relevant cases? Are the ICRs being useful? Are they proving expensive to collect and retain? Have they been kept securely or have there been losses through error, hacking, technological malfunction or something similar?

This kind of audit could be required under the Act – and if the drafters had followed the advice of the Independent Reviewer of Terrorism Legislation and created an Independent Intelligence and Surveillance Commission, it could have been the perfect body to perform such an audit. If that Commission had been granted the powers to ask for a part of the bill to be suspended or subject to amendment that would make this possibility even better.

In my oral evidence to the Committee I suggested something further – that the review should include a kind of ‘contextual’ review, looking not just at how the powers were being used under the Bill, but at how people were using communications systems. In effect, assessing whether the powers were still appropriate and balanced – because how people use a service can, in practice, change how intrusive powers relating to that service can be. Undermining encryption, for example, is far less troublesome if the only people using encryption are the most technologically adept of geeks and nerds than it is if we are all reliant on encryption for our banking and confidential work.

If properly constituted and empowered, a review body could look at this – and rather than remaining in the position we are in now, where outdated laws are being misapplied to situations that have radically changed, we could keep not just the law but how it is used up to date and proportionate. We could learn where mistakes are being made, where resources are being misapplied, what works and what doesn’t work – and not just from those who have a vested interest in telling us that those powers are working and that they need the resources that they’re being given. The two examples we have in this field – the Independent Reviewer of Terrorism Legislation and the Interception of Communications Commissioner’s Office (IOCCO) – have proven their worth in a number of ways. An independent body to oversee the implementation, effectiveness and proportionality of the operations of the Investigatory Powers Bill could be similarly effective.

That, however, is not what the IP Bill currently proposes. The review as it is set out in S222 is too late, not independent, and without the power to produce any real effect. This could, however, be relatively simply changed. In their response to the consultations, the main objection to making such a change seems to be cost: the response says that it would cost an extra £0.5m/year. Though that may seem like a lot of money, in the grand scheme of things it really is not. If, as just one (small) example, ICRs are as expensive as it seems likely they will be, and the review body reveals this after three years rather than six, spending that £0.5m/year would be very cheap at the price. Other savings could be made in other areas as revealed by the reviews – and that’s not considering the significant extra level of trust that would be generated by a properly independent review body. The potential benefits are very significant: I hope that those pushing the Bill are willing to consider it.

The new IP Bill… first thoughts…

This morning, in advance of the new draft of the Investigatory Powers Bill being released, I asked six questions:

[Screenshot: the six questions]

At a first glance, they seem to have got about 2 out of 6 – perhaps better than I suspected, but not as good as I hoped.

  1. On encryption, I fear they’ve failed again – or if anything made things worse. The government claims to have clarified things in S217 and indeed in the Codes of Practice – but on a first reading this seems unconvincing. The Communications Data Draft Code of Practice section on ‘Maintenance of a Technical Capability’ relies on the idea of ‘reasonability’ which in itself is distinctly vague. No real clarification here – and still the possibility of ordering back-doors via a ‘Technical Capability Notice’ looms very large. (0 out of 1)
  2. Bulk Equipment Interference remains in the Act – large scale hacking ‘legitimised’ despite the recommendation from the usually ‘authority-friendly’ Intelligence and Security Committee that it be dropped from the Bill. (0 out of 2)
  3. A review clause has been added to the Bill – but it is so anaemic as to be scarcely worth its place. S222 of the new draft says that the Secretary of State must prepare a report by the end of the sixth year after the Bill is passed, publish it and lay it before parliament. This is not a sunset clause, and the report is not required to be independent or undertaken by a review body – just by the Secretary of State. It’s a review clause without any claws, so worth only 1/4 of a point. (1/4 out of 3)
  4. At first read-through, the ‘double-lock’ does not appear to have been notably changed; the ‘urgent’ clause has seemingly been tightened a little, from 5 days to 3, though even that isn’t entirely clear. I’d give this 1/4 of a point (so that’s 1/2 out of 4)
  5. The Codes of Practice were indeed published with the bill (and are accessible here) which is something for which the Home Office should be applauded (so that’s 1 and 1/2 out of 5)
  6. As for giving full time for scrutiny of the Bill, the jury is still out – the rumour is second reading today, which still looks like undue haste, so the best I can give them is 1/2 a point – making it a total of 2 out of 6 on my immediate questions.

That’s not quite as bad as I feared – but it’s not as good as it might have been and should have been. Overall, it looks as though the substance of the bill is largely unchanged – which is very disappointing given the depth and breadth of the criticism levelled at it by the three parliamentary committees that examined it. The Home Office may be claiming to have made ‘most’ of the changes asked for – but the changes they have made seem to have been the small, ‘easy’ changes rather than the more important substantial ones.

Those still remain. The critical issue of encryption has been further obfuscated, the most intrusive powers – the Bulk Powers and the ICRs – remain effectively untouched, as do the most controversial ‘equipment interference’ powers. The devil may well be in the detail, though, and that takes time and careful study – there are people far more able and expert than me poring over the various documents as I type, and a great deal more will come out of that study. Time will tell – if we are given that time.

 

Why is Apple fighting the FBI?

The conflict between Apple and the FBI over the San Bernardino shooter’s iPhone has already had a huge amount of coverage, and that’s likely to continue for a while. The legal details and the technical details have already been written about at great length, but what is perhaps more interesting is why Apple is making such a point here. It isn’t, as some seem to be suggesting, because Apple doesn’t take terrorism seriously, and cares more about the privacy rights of a dead terrorist than it does about its responsibilities to past and future victims of terrorism. Neither is it because Apple are the great guardians of our civil liberties and privacy, taking a stand for freedom. Apple aren’t champions of privacy any more than Google are champions of freedom of speech or Facebook are liberators of the poor people of India. Apple, Google and Facebook are businesses. Their bottom line is their bottom line. Individuals within all of those companies may well have particular political, ethical or moral stances in all these areas, but that isn’t the key. The key is business.

So why, in those circumstances, is Apple taking such a contentious stance? Why now? Why in this case? It is Apple, on the surface at least, that is making this into such a big deal – Tim Cook’s open letter didn’t just talk about the specifics of the case or indeed of iPhones, but in much broader terms:

“While we believe the FBI’s intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect.”

It’s wonderful stuff – and from the perspective of this privacy advocate at least it should be thoroughly applauded. It should, however, also be examined more carefully, with several pinches of salt, a healthy degree of scepticism and a closer look at the motivations. Ultimately, Apple is taking this stance because Apple believes it’s in Apple’s interests to take this stance. There may be a number of reasons for this. In a broad sense, Apple knows that security – and this is very much a security as well as a privacy issue – is critical for the success of the internet and of the technology sector in general. Security and privacy are critical underpinnings of trust, and trust is crucial for business success. People currently do trust Apple (in general terms) and that really matters to Apple’s business. The critical importance, again in a broad sense, of security and trust is why the other tech giants – Google, Facebook, Twitter et al – have also lined up behind Apple, though their own brands and businesses rely far less on privacy than Apple’s does. Indeed, for Google and Facebook privacy is very much a double-edged sword: their business models depend on their being able to invade our privacy for their own purposes. Trust and security, however, are crucial.

In a narrower sense, Apple has positioned itself as ‘privacy-friendly’ in recent years – partly in contrast to Google, but also in relation to the apparent overreach of governmental authorities. Apple is in a position to do this – its business model is based on shifting widgets, not harvesting data – but Apple has also taken the view that people now really care about privacy, enough to make decisions at least influenced by their sense of privacy. This is where things get interesting. In the last section of my book, Internet Privacy Rights, where I speculate about the possibility of a more privacy-friendly future, this is one of the key messages: business is the key. If businesses take privacy seriously, they’ll create a technological future where privacy is protected – but they won’t take it seriously out of high-minded principle. They’ll only take it seriously because there’s money in it for them, and there will only be money in it for them if we, their customers, take privacy seriously.

That, for me, could be the most positive thing to come from this story so far. Not just Apple but pretty much all the tech companies (in the US at least) have taken stances which suggest that they think people do take privacy seriously. A few years ago that would have been much less likely – and it is a good sign, from my perspective at least. Ultimately, as I’ve argued many times before, a privacy-friendly internet is something that we will all benefit from – even law enforcement. It is often very hard to see it that way, but in the long term the gains in security, in trust and much more will help us all.

That’s why in the UK, the Intelligence and Security Committee’s report criticised the new Investigatory Powers Bill for not making protection of privacy more prominent. As they put it:

“One might have expected an overarching statement at the forefront of the legislation, or to find universal privacy protections applied consistently throughout the draft Bill”

It is also why the FBI is playing a very dangerous game by taking on Apple in this way. Whilst it is risky for Apple to be seen as ‘on the side of the terrorists’, it may be even more risky for the FBI (and by implication the whole government of the US) to be seen as wanting to ride roughshod over everyone’s privacy. This is a battle for hearts and minds as much as a battle over the data in one phone – data that may well turn out to be pretty much useless. Right now, it is hard to tell exactly who is winning that battle – but right now my money would be on the tech companies. I hope I’m right, because in the end that would be to the benefit of us all.