A better debate on surveillance?

Back in 2015, Andrew Parker, the head of MI5, called for a ‘mature debate’ on surveillance – in advance of the Investigatory Powers Bill, the surveillance law which has now almost finished making its way through parliament and will almost certainly become law in a few months’ time. Though there has been, at least in some ways, a better debate over this bill than over previous attempts to update the UK’s surveillance law, the debate in both politics and the media remains distinctly superficial and often deeply misleading.

It is in this context that I have a new academic paper out: “Data gathering, surveillance and human rights: recasting the debate”, in a new journal, the Journal of Cyber Policy. It is an academic piece, and access, sadly, is relatively restricted, so I wanted to say a little about the piece here, in a blog which is freely accessible to all – at least in places where censorship of the internet has not yet taken full hold.

The essence of the argument in the paper is relatively straightforward. The debate over surveillance is simplified and miscast in a number of ways, and those ways in general tend to make surveillance seem more positive and effective than it is, and its impact on ordinary people less broad and significant than it might be. The rights that it impinges upon are underplayed, and the side-effects of the surveillance are barely mentioned, making surveillance seem much more attractive than it should be – and hence decisions are made that might not have been made if the debate had been better informed. If the debate is improved, then the decisions will be improved – and we might have both better law and better surveillance practices.

Perhaps the most important way in which the debate needs to be improved is to understand that surveillance does not just impact upon what is portrayed as a kind of selfish, individual privacy – privacy that it is implied does not matter for those who ‘have nothing to hide’ – but upon a wide range of what are generally described as ‘civil liberties’. It has a big impact on freedom of speech – an impact that has been empirically evidenced in the last year – and upon freedom of association and assembly, both online and in the ‘real’ world. One of the main reasons for this – a reason largely missed by those who advocate for more surveillance – is that we use the internet for so many more things than we ever used telephones and letters, or even email. We work, play, romance and research our health. We organise our social lives, find entertainment, shop, discuss politics, do our finances and much, much more. There is pretty much no element of our lives that does not have a very significant online element – and that means that surveillance touches all aspects of our lives, and any chilling effect doesn’t just chill speech or invade selfish privacy, but almost everything.

This, and much more, is discussed in my paper – which I hope will contribute to the debate, and indeed stimulate debate. Some of it is contentious – the role of commercial surveillance, and the interaction between it and state surveillance, for example – but that too is intentional. Contentious issues need to be discussed.

There is one particular point that often gets missed – the question of when surveillance occurs. Is it when data is gathered, when it is algorithmically analysed, or when human eyes finally look at it? In the end, this may be a semantic point – what technically counts as ‘surveillance’ is less important than what actually has an impact on people, which begins at the data gathering stage. In my conclusion, I bring out that point by quoting our new Prime Minister, from her time as Home Secretary and chief instigator of our current manifestation of surveillance law. This is how I put it in the paper:

“Statements such as Theresa May’s that ‘the UK does not engage in mass surveillance’ though semantically arguable, are in effect deeply unhelpful. A more accurate statement would be that:

‘the UK engages in bulk data gathering that interferes not only with privacy but with freedom of expression, association and assembly, the right to a free trial and the prohibition of discrimination, and which puts people at a wide variety of unacknowledged and unquantified risks.’”

It is only when we can have clearer debate, acknowledging the real risks, that we can come to appropriate conclusions. We are probably too late for that to happen in relation to the Investigatory Powers Bill, but given that the bill includes measures such as the contentious Internet Connection Records that seem likely to fail, in expensive and probably farcical ways, the debate will be returned to again and again. Next time, perhaps it might be a better debate.

How not to reclaim the internet…

The new campaign to ‘Reclaim the Internet’, to ‘take a stand against online abuse’, was launched yesterday – and it could be a really important campaign. The scale and nature of abuse online is appalling – and it is good to see that the campaign does not focus on just one kind of abuse, instead talking about ‘misogyny, sexism, racism, homophobia, transphobia’ and more. There is more than anecdotal evidence of this abuse – even if the methodology and conclusions of the particular Demos survey used at the launch have been subject to significant criticism: Dr Claire Hardaker of Lancaster University’s forensic dissection is well worth a read. None of that criticism should be taken to suggest that this kind of abuse is not hideous, or that it should not be taken seriously. It should – but great care needs to be taken, and the risks attached to many of the potential strategies to ‘reclaim the internet’ are very high indeed. Many of them would have precisely the wrong effect: silencing exactly those voices that the campaign wishes to have heard.

Surveillance and censorship

Perhaps the biggest risk is that the campaign is used to enable and endorse those twin tools of oppression and control, surveillance and censorship. The idea that we should monitor everything to try to find all those who commit abuse or engage in sexism, misogyny, racism, homophobia and transphobia may seem very attractive – find the trolls, root them out and punish them – but building a surveillance infrastructure and making it seem ‘OK’ is ultimately deeply counterproductive for almost every aspect of freedom. Evidence shows that surveillance chills free speech, discourages people from seeking out information, associating and assembling with people and more – as well as enabling discrimination and exacerbating power differences. Surveillance helps the powerful to oppress the weak – so should be avoided except in the worst of situations. Any ‘solutions’ to online abuse that are based around an increase in surveillance need a thorough rethink.

Censorship is the other side of the coin – and it works with surveillance to let the powerful control the weak. Again, huge care is needed to make sure that attempts to ‘reclaim’ the internet don’t become tools to enforce orthodoxy and silence voices that don’t ‘fit’ the norm. Freedom of speech matters most precisely when that speech might offend and upset – it is easy to give those you like the freedom to say what they want, much harder to give that freedom to those you disagree with. It’s a very difficult area – because if we want to reduce the impact of abuse, that must mean restricting abusers’ freedom of speech – but it must be navigated very carefully, without creating tools that make it easy to silence people for disagreeing rather than for abusing.

Real names

One particularly important trap not to fall into is that of demanding ‘real names’: it is a common idea that the way to reduce abuse is to prevent people from being anonymous online, or to ban the use of pseudonyms. Not only does this not work, but it, again, damages many of those whom the idea of ‘reclaiming the internet’ is intended to support. Victims of abuse in the ‘real’ world, people who are being stalked or victimised, whistleblowers and so forth need pseudonyms in order to protect themselves from their abusers, stalkers, enemies and so on. Force ‘real names’ on people, and you put those people at risk. Many will simply not engage – chilled by the demand for real names and the fear of being revealed. That’s even without engaging with the huge issue of the right to define your own name – and the joy of playing with identity, which for some people is one of the great pleasures of the internet, from parodies to fantasies. Real names are another way for the powerful to exert their power over the weak – it is no surprise that the Chinese government is one of the most ardent supporters of forcing real names on the internet. Any ‘solution’ to reclaiming the internet that demands or requires real names should be fiercely opposed.

Algorithms and errors

Another key mistake to be avoided is over-reliance on algorithmic analysis – particularly of the content of social media posts. This is one of the areas in which the Demos survey lets itself down – it makes assumptions about the ability of algorithms to understand language. As Dr Claire Hardaker puts it:

“Face an algorithm with messy features like sarcasm, threats, allusions, in-jokes, novel metaphors, clever wordplay, typographical errors, slang, mock impoliteness, and so on, and it will invariably make mistakes. Even supposedly cut-and-dried tasks such as tagging a word for its meaning can fox a computer. If I tell you that “this is light” whilst pointing to the sun you’re going to understand something very different than if I say “this is light” whilst picking up an empty bag. Programming that kind of distinction into a software is nightmarish.”

This kind of error is bad enough in a survey – but some of the possible routes to ‘reclaiming the internet’ involve using this kind of analysis to identify offending social media comments, or even to automatically block or censor them. Indeed, much internet filtering works that way – one of the posts on this blog, commenting on ‘porn blocking’, was itself blocked by a filter because it used words relating to pornography a number of times. Again, reliance on algorithmic ‘solutions’ to reclaiming the internet is very dangerous – and could end up stifling conversations, reducing freedom of speech and much more.
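To make the point concrete, here is a minimal sketch of the kind of naive keyword filter described above – the word list, threshold and example post are all invented for illustration:

```python
# A naive keyword filter of the kind used by some blocking systems:
# it counts 'flagged' words with no sense of context or intent.
FLAGGED_WORDS = {"porn", "pornography"}  # hypothetical blocklist
THRESHOLD = 2  # hypothetical trigger level

def is_blocked(post: str) -> bool:
    """Block a post if it contains too many flagged words."""
    words = (w.strip(".,!?:;") for w in post.lower().split())
    return sum(1 for w in words if w in FLAGGED_WORDS) >= THRESHOLD

# A post *criticising* porn blocking trips the filter just as surely
# as actual pornography would:
critique = ("Filtering pornography is a blunt instrument: "
            "porn filters end up blocking commentary about porn itself.")
print(is_blocked(critique))  # True – the critique itself is censored
```

The filter cannot tell mention from use: to it, a post about pornography and a pornographic post look identical – which is exactly how a commentary on ‘porn blocking’ ends up blocked.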

Who’s trolling who? Double-edged swords…

One of the other major problems with dealing with ‘trolls’ (the quotation marks are entirely intentional) is that in practice it can be very hard to identify them. Indeed, in conflicts on the internet it is common for both sides to believe that the other side is the one doing the abuse, that the other side are the ‘trolls’, and that they themselves are the victims who need protecting. Anyone who observes even the most one-sided of disputes should be able to see this – from GamerGate to some of the conflicts over transphobia. Few of those whom others would consider ‘trolls’ would consider themselves to be trolls.

The tragic case of Brenda Leyland should give everyone pause for thought. She was described and ‘outed’ as a ‘McCann troll’ – she tweeted as @Sweepyface and campaigned, as she saw it, for justice for Madeleine McCann, blaming Madeleine’s parents for her death. Sky News reporter Martin Brunt doorstepped her, and days later she was found dead, having committed suicide. Was she a ‘troll’? Was the media response to her appropriate, proportionate, or positive? These are not easy questions – because this isn’t an easy subject.

Further, one of the best defences for a ‘troll’ is to accuse the person they’re trolling of being a troll – and that is something that should be remembered whatever tools you introduce to help reduce abuse online. Those tools are double-edged swords. Bring in quick and easy ways to report abuse – things like immediate blocking of social media accounts when those accounts are accused of being abusive – and you will find those tools being used by the trolls themselves against their victims. ‘Flame wars’ have existed pretty much since the beginning of the internet – any tools you create ‘against’ abuse will be used as weapons in flame wars in the future.
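A toy model makes the double-edged sword visible. Suppose a platform automatically suspends any account that receives a set number of abuse reports, with no human review – the rule, threshold and account names below are all invented:

```python
from collections import Counter

REPORT_THRESHOLD = 3  # hypothetical rule: suspend after three reports, no review

def accounts_suspended(reports):
    """reports: list of (reporter, reported) pairs; returns suspended accounts."""
    counts = Counter(reported for _, reported in reports)
    return {account for account, n in counts.items() if n >= REPORT_THRESHOLD}

# Three coordinated trolls mass-report their victim, while the victim's
# single genuine report of one troll goes nowhere:
reports = [
    ("troll1", "victim"), ("troll2", "victim"), ("troll3", "victim"),
    ("victim", "troll1"),
]
print(accounts_suspended(reports))  # {'victim'} – the tool silences the target
```

The cheaper and more automatic the reporting mechanism, the cheaper it is to weaponise against exactly the people it was built to protect.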

No quick fixes and no silver bullets

That should remind us of the biggest point here. There are no quick fixes to this kind of problem. No silver bullets that will slay the werewolves, or magic wands that will make everything OK. Technology often encourages the feeling that if only we created this one new tool, we could solve everything. In practice, it’s almost never the case – and in relation to online abuse this is particularly true.

Some people will suggest that it’s already easy. ‘All you have to do is block your abuser’ is all very well, but if you get 100 new abusive messages every minute you’ll spend your whole time blocking. Some will say that the solution is just not to feed the trolls – but many trolls don’t need any feeding at all. Others may suggest that people are just whining – none of this really hurts you, it’s just words – but that’s not true either. Words do hurt – and most of those suggesting this haven’t been subject to the kind of abuse that happens to others. What’s more, the chilling effect of abuse is real – if you get attacked every time you go online, why on earth would you want to stay online?

The problem is real, and needs careful thought and time to address. The traps involved in addressing it – and I’ve mentioned only a few of them here – are also real, and need to be avoided and considered very carefully. There really are no quick fixes – and it is really important not to raise false hopes that it can all be solved quickly and easily. That false hope may be the biggest trap of all.

Panama, privacy and power…

David Cameron’s first reaction to the questions about his family’s involvement with the Mossack Fonseca leaks was that it was a ‘private matter’ – something that was greeted with a chorus of disapproval from his political opponents and large sections of both the social and ‘traditional’ media. Privacy scholars and advocates, however, were somewhat muted – and quite rightly, because there are complex issues surrounding privacy here, issues that should at the very least make us pause and think. Privacy, in the view of many people, is a human right. It is included in one form or another in all the major human rights declarations and conventions. This, for example, is Article 8 of the European Convention on Human Rights:

“Everyone has the right to respect for his private and family life, his home and his correspondence.”

Everyone. Not just the people we like. Indeed, the test of your commitment to human rights is how you apply them to those who you don’t like, not how you apply them to those that you do. It is easy to grant rights to your friends and allies, harder to grant them to your enemies or those you dislike. We see how many of those who shout loudly about freedom of speech when their own speech is threatened are all too ready to try to shut out their enemies: censorship of extremist speech is considered part of the key response to terrorism in the UK, for example. Those of us on the left of politics, therefore, should be very wary of overriding our principles when the likes of David Cameron and George Osborne are concerned. Even Cameron and Osborne have the right to privacy, we should be very clear about that. We can highlight the hypocrisy of their attempts to implement mass surveillance through the Investigatory Powers Bill whilst claiming privacy for themselves, but we should not deny them privacy itself without a very good cause indeed.

Privacy for the powerful?

And yet that is not the whole story. Rights, and human rights in particular, are most important when used by the weak to protect themselves from the powerful. The powerful generally have other ways to protect themselves. Privacy in particular has at times been given a very bad name because it has been used by the powerful to shield themselves from scrutiny. A stream of philandering footballers – Ryan Giggs, Rio Ferdinand and John Terry – have tried to use privacy law to prevent their affairs becoming public. Prince Charles’ ultimately unsuccessful attempts to keep the ‘Black Spider Memos’ from being exposed were also made on the basis of privacy. The Catholic Church covered up the abuses of its priests. Powerful people using a law which their own kind largely forged is all too common, and should not be accepted without a fight. As feminist scholar Anita Allen put it:

“[it should be possible to] rip down the doors of ‘private’ citizens in ‘private’ homes and ‘private’ institutions as needed to protect the vital interests of vulnerable people.”

This argument may have its most obvious application in relation to domestic abuse, but it also applies to the Panama leaks – particularly at a time when the politics of austerity is being used directly against the vital interests of vulnerable people. Part of the logic of austerity is that there isn’t enough money to pay for welfare and services – and part of the reason that we don’t have ‘enough’ money is that so much tax is being avoided or evaded. There is therefore a public interest in exposing the nature and scale of tax avoidance and evasion – a public interest that might override the privacy rights of the individuals involved.

How private is financial information?

That brings the next question: should financial or taxation information be treated as private, and accorded the strongest protection? Traditions and laws vary on this. In Norway, for example, income and tax information for every citizen is publicly available. This has been true since the 19th century – from the Norwegian perspective, financial and tax transparency is part of what makes a democratic society function.

It is easy to see how this might work – and indeed, an anecdote from my own past shows it very clearly. When I was working for one of the biggest chartered accountancy firms back in the 80s, I started to get suspicious about what had happened over a particular pay rise – so I started asking my friends and colleagues, all of whom had started with the firm at the same time and progressed up the ladder in the same way, how much they were earning. I discovered to my shock that every single woman was earning less than every single man. That is, the highest paid woman earned less than the lowest paid man – and I knew them well enough to know that this was in no way a reflection of their merits as workers. The fact that salaries were considered private, and that no-one was supposed to know (or ask) what anyone else was earning, meant that what was, once I knew about it, blatant sexism remained completely secret. Transparency would have exposed it in a moment – and probably prevented it from happening.
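The arithmetic behind that discovery is trivial once the data is visible – which is the whole point. A sketch, with salary figures invented purely for illustration:

```python
# Hypothetical salaries illustrating the pattern in the anecdote above:
# once the numbers are visible, a single comparison exposes the disparity
# that secrecy had kept hidden.
women = {"Alice": 14000, "Beth": 14500, "Clare": 15000}
men = {"Dave": 15500, "Ed": 16000, "Frank": 17000}

highest_paid_woman = max(women.values())
lowest_paid_man = min(men.values())

# Every woman earns less than every man:
print(highest_paid_woman < lowest_paid_man)  # True
```

With salary secrecy, no individual comparison is ever possible, so the pattern stays invisible; with transparency, one line of arithmetic exposes it.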

In the UK, however, privacy over financial matters is part of our culture. That may well be a reflection of our conservatism – it functions in a ‘conservative’ way, tending to protect the power of the powerful – but it is also something that most people, I would suggest, believe is right. Indeed, as a privacy advocate I would in general support more privacy rather than less. It might be a step too far to suggest that all our finances should be made public – but not, perhaps, that the finances of those in public office should be. The people who, in this case, are supporting or driving policies should be required to show whether they are benefiting from those policies – and whether they are being hypocritical in putting those policies forward. We should be able to find out whether they personally benefit from tax cuts or changes, for example, and whether they’re contributing appropriately when they’re requiring others to tighten their belts.

I do not, of course, expect any of this to happen. In the UK in particular the powerful have far too strong a hold on our politics to let it happen. That brings me to one more privacy-related issue exposed by the Panama papers. If there is no legal way for information that is to the public benefit to come out, what approach should be taken to the illegal ways in which that information is acquired? There have been many other prominent examples – Snowden’s revelations about the NSA, GCHQ and so on, and Hervé Falciani’s data from HSBC in Switzerland in particular – where in some very direct ways the public interest could be said to be served by the leaks. Are those responsible whistleblowers or criminals? Spies? Should they be prosecuted or cheered? And then what about other hackers, like the ‘Impact Team’ who hacked Ashley Madison? Whether each of them was doing ‘good’ is a matter of perspective.

Vulnerability of data…

One thing that should be clear, however, is that no-one should be complacent about data security and data vulnerability. All data, however it is held, wherever it is held, and whoever it is held by, is vulnerable. The degree of that vulnerability, the likelihood of any vulnerability being exploited and so forth varies a great deal – but the vulnerability is there. That has two direct implications for the state of the internet right now. Firstly, it means that we should encourage and support encryption – and not do anything to undermine it, even for law enforcement purposes. Secondly, it means that we should avoid holding data that we don’t need to hold – let alone create unnecessary data. The Investigatory Powers Bill breaks both of those principles. It undermines rather than supports encryption, and requires the creation of massive amounts of data (the Internet Connection Records) and the gathering and/or retention of even more (via the various bulk powers). All of this adds to our vulnerability and our risks – something that we should think very, very hard before doing. I’m not sure that thinking is happening.


Internet Connection Records: answering the wrong question?

Watching and listening to the Commons debate over the Investigatory Powers Bill, and in particular when ‘Internet Connection Records’ were mentioned, it was hard not to feel that what was being discussed had very little connection with reality. There were many mentions of how bad and dangerous things were on the internet, how the world had changed, and how we needed this law – and in particular Internet Connection Records (ICRs) – to deal with the new challenges. As I watched, I found myself imagining a distinctly unfunny episode of Yes Minister which went something like this:

Scene 1:

Minister sitting in leather arm chair, glass of brandy in his hand, while old civil servant sits opposite, glasses perched on the end of his nose.

Minister: This internet, it makes everything so hard. How can we find all these terrorists and paedophiles when they’re using all this high tech stuff?

Civil Servant: It was easier in the old days, when they just used telephones. All we needed was itemised phone bills. Then we could find out who they were talking to, tap the phones, and find out everything we needed. Those were the days.

Minister: Ah yes, those were the days.

The Civil Servant leans back in his chair and takes a sip from his drink. The Minister rubs his forehead looking thoughtful. Then his eyes clear.

Minister: I know. Why don’t we just make the internet people make us the equivalent of itemised phone bills, but for the internet?

Civil Servant blinks, not knowing quite what to say.

Minister: Simple, eh? Solves all our problems in one go. Those techie people can do it. After all, that’s their job.

Civil Servant: Minister….

Minister: No, don’t make it harder. You always make things difficult. Arrange a meeting.

Civil Servant: Yes, Minister

Scene 2

Minister sitting at the head of a large table, two youngish civil servants sitting before him, pads of paper in front of them and well-sharpened pencils in their hands.

Minister: Right, you two. We need a new law. We need to make internet companies make us the equivalent of Itemised Phone Bill.

Civil servant 1: Minister?

Minister: You can call them ‘Internet Connection Records’. Add them to the new Investigatory Powers Bill. Make the internet companies create them and store them, and then give them to the police when they ask for them.

Civil servant 2: Are we sure the internet companies can do this, Minister?

Minister: Of course they can. That’s their business. Just draft the law. When the law is ready, we can talk to the internet companies. Get our technical people here to write it in the right sort of way.

The two civil servants look at each other for a moment, then nod.

Civil servant 1: Yes, minister.


Scene 3

A plain, modern office, somewhere in Whitehall. At the head of the table is one of the young civil servants. Around the table are an assortment of nerdish-looking people, not very sharply dressed. In front of each is a ring-bound file, thick, with a dark blue cover.

Civil servant: Thank you for coming. We’re here to discuss the new plan for Internet Connection Records. If you look at your files, Section 3, you will see what we need.

The tech people pick up their files and leaf through them. A few of them scratch their heads. Some blink. Some rub their eyes. Many look at each other.

Civil servant: Well, can you do it? Can you create these Internet Connection Records?

Tech person 1: I suppose so. It won’t be easy.

Tech person 2: It will be very expensive.

Tech person 3: I’m not sure how much it will tell you.

Civil servant: So you can do it? Excellent. Thank you for coming.


The real problem is a deep one – but it is mostly about asking the wrong question. Internet Connection Records seem to be an attempt to answer the question ‘how can we recreate that really useful thing, the itemised phone bill, for the internet age?’ And, from most accounts, it seems clear that the real experts, the people who work in the internet industry, weren’t really consulted until very late in the day, and then were only asked that question. It’s the wrong question. If you ask the wrong question, even if the answer is ‘right’, it’s still wrong. That’s why we have the mess that is the Internet Connection Record system: an intrusive, expensive, technically difficult and most likely supremely ineffective idea.
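The mismatch between the two worlds is easy to see in miniature. One phone call produces one record on an itemised bill; one visit to a single web page produces a tangle of connections, most of them to third parties the user never chose and none of them a clean record of intent. A toy illustration – all numbers and hostnames are invented:

```python
# One phone call -> one meaningful line on an itemised bill:
phone_bill = [("2016-03-16 10:15", "0207 946 0000", "3 min")]

# One visit to a single news article -> many connections, most of them
# to advertising, analytics and content-delivery third parties:
page_visit_connections = [
    "news-site.example", "cdn.news-site.example", "fonts.example",
    "analytics.example", "ads1.example", "ads2.example",
    "social-widget.example", "video-embed.example",
]

print(len(phone_bill), len(page_visit_connections))  # 1 8
```

An ‘itemised bill’ for the internet must either record all of this noise or throw most of it away – and either way it is nothing like what the phone-bill analogy promises.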

The question that should have been asked is really the one that the Minister asked right at the start: how can we find all these terrorists and paedophiles when they’re using all this high tech stuff? It’s a question that should have been asked of the industry, of computer scientists, of academics, of civil society, of hackers and more. It should have been asked openly, consulted upon widely, and given the time and energy that it deserved. It is a very difficult question – I certainly don’t have an answer – but rather than try to shoe-horn an old idea into a new situation, it needs to be asked. The industry and computer scientists in particular need to be brought in as early as possible – not presented with an idea and told to implement it, no matter how bad an idea it is.

As it is, listening to the debate, I feel sure that we will have Internet Connection Records in the final bill, and in a form not that different from the mess currently proposed. They won’t work, will cost a fortune and bring about a new kind of vulnerability, but that won’t matter. In a few years – probably rather more than the six years currently proposed for the first real review of the law – it may finally be acknowledged that it was a bad idea, but even then it may well not be. It is very hard for people to admit that their ideas have failed.

As a really helpful tweeter (@sw1nn) pointed out, there’s a ‘techie’ term for this kind of issue: an ‘XY problem’! See http://xyproblem.info. ICRs seem to be a classic example.


The IP Bill: opaqueness on encryption?

One thing that all three of the Parliamentary committees that reviewed the Draft Investigatory Powers Bill agreed upon was that the bill needed more clarity over encryption.

This is the Intelligence and Security Committee report:

[screenshot of the Intelligence and Security Committee’s recommendation on encryption]

This is the Science and Technology Committee report:

[screenshot of the Science and Technology Committee’s recommendation on encryption]

This is the Joint Parliamentary Committee on the Investigatory Powers Bill:

[screenshot of the Joint Committee’s recommendation on encryption]

In the new draft Bill, however, this clarity does not appear to have been provided – at least as far as most of the people who have been reading through it have been able to determine. There are three main possible interpretations of this:

  1. That the Home Office is deliberately trying to avoid providing clarity;
  2. That the Home Office has not really considered the requests for clarity seriously; or
  3. That the Home Office believes it has provided clarity

The first would be the most disturbing – particularly as one of the key elements of the Technical Capability Notices, as set out both in the original draft bill and the new version, is that the person upon whom a notice is served “may not disclose the existence or contents of the notice to any other person without the permission of the Secretary of State” (S218(8)). The combination of an unclear power and a requirement to keep it secret is a very dangerous one.

The second possibility is almost as bad – because, as noted above, all three committees were crystal clear about how important this issue is. Indeed, their reports could be seen as models for the Home Office as to how to make language clear. Legal drafting is never quite as easy as it might be, but it can be clear and should be clear.

The third possibility – that the Home Office believes it has provided clarity – is also pretty disastrous in the circumstances, particularly as the time being made available to scrutinise and amend the Bill appears likely to be limited. This is the interpretation that the Home Office ‘response to consultations’ suggests – but people who have examined the Bill so far have not, in general, found it to be clear at all. That includes both technological experts and legal experts. Interpretation of law is of course at times difficult – but that is precisely why effort must be put in to make it as clear as possible. At the moment, whether a backdoor or equivalent could be demanded depends on whether it is ‘technically feasible’ or ‘practicable’ – terms open to interpretation – and on interdependent and somewhat impenetrable definitions of ‘telecommunications operator’, ‘telecommunications service’ and ‘telecommunications system’, which may or may not cover messaging apps, hardware such as iPhones and so forth. Is it clear? It doesn’t seem clear to me – but I am often wrong, and would love to be corrected on this.

This issue is critical for the technology industry. It needs to be sorted out quickly and simply. It should have been done already – which is why the first possibility, that the lack of clarity is deliberate, looms larger than it ordinarily would. If it is true, then why have the Home Office not followed the advice of all three committees on this issue?

If on the other hand this is simply misinterpretation, then some simple, direct redrafting could solve the problems. Time will tell.

The new IP Bill…. first thoughts…

This morning, in advance of the new draft of the Investigatory Powers Bill being released, I asked six questions:

[screenshot of the six questions, posted that morning]

At a first glance, they seem to have got about 2 out of 6, which is perhaps better than I suspected, but not as good as I hoped.

  1. On encryption, I fear they’ve failed again – or if anything made things worse. The government claims to have clarified things in S217 and indeed in the Codes of Practice – but on a first reading this seems unconvincing. The Communications Data Draft Code of Practice section on ‘Maintenance of a Technical Capability’ relies on the idea of ‘reasonability’ which in itself is distinctly vague. No real clarification here – and still the possibility of ordering back-doors via a ‘Technical Capability Notice’ looms very large. (0 out of 1)
  2. Bulk Equipment Interference remains in the Bill – large-scale hacking ‘legitimised’ despite the recommendation from the usually ‘authority-friendly’ Intelligence and Security Committee that it be dropped. (0 out of 2)
  3. A review clause has been added to the Bill – but it is so anaemic as to be scarcely worth its place. S222 of the new draft says that the Secretary of State must prepare a report by the end of the sixth year after the Bill is passed, publish it and lay it before parliament. This is not a sunset clause, and the report is not required to be independent or undertaken by a review body, just by the Secretary of State. It’s a review clause without any claws, so worth only 1/4 of a point. (1/4 out of 3)
  4. At first read-through, the ‘double-lock’ does not appear to have been notably changed; the ‘urgent’ clause has seemingly been tightened a little, from 5 days to 3, though even that isn’t entirely clear. I’d give this 1/4 of a point. (so that’s 1/2 out of 4)
  5. The Codes of Practice were indeed published with the bill (and are accessible here) which is something for which the Home Office should be applauded (so that’s 1 and 1/2 out of 5)
  6. As for giving full time for scrutiny of the Bill, the jury is still out – the rumour is second reading today, which still looks like undue haste, so the best I can give them is 1/2 a point – making it a total of 2 out of 6 on my immediate questions.

That’s not quite as bad as I feared – but it’s not as good as it might have been and should have been. Overall, it looks as though the substance of the bill is largely unchanged – which is very disappointing given the depth and breadth of the criticism levelled at it by the three parliamentary committees that examined it. The Home Office may be claiming to have made ‘most’ of the changes asked for – but the changes they have made seem to have been the small, ‘easy’ changes rather than the more important substantial ones.

Those still remain. The critical issue of encryption has been further obfuscated, the most intrusive powers – the Bulk Powers and the ICRs – remain effectively untouched, as do the most controversial ‘equipment interference’ powers. The devil may well be in the detail, though, and that takes time and careful study – there are people far more able and expert than me poring over the various documents as I type, and a great deal more will come out of that study. Time will tell – if we are given that time.


Why is Apple fighting the FBI?

The conflict between Apple and the FBI over the San Bernardino shooter’s iPhone has already had a huge amount of coverage, and that’s likely to continue for a while. The legal details and the technical details have already been written about at great length, but what is perhaps more interesting is why Apple is making such a point here. It isn’t, as some seem to be suggesting, because Apple doesn’t take terrorism seriously, or because it cares more about the privacy rights of a dead terrorist than about its responsibilities to past and future victims of terrorism. Neither is it because Apple are the great guardians of our civil liberties and privacy, taking a stand for freedom. Apple aren’t champions of privacy any more than Google are champions of freedom of speech or Facebook are liberators of the poor people of India. Apple, Google and Facebook are businesses. Their bottom line is their bottom line. Individuals within all of those companies may well have particular political, ethical or moral stances in all these areas, but that isn’t the key. The key is business.

So why, in those circumstances, is Apple taking such a contentious stance? Why now? Why in this case? It is Apple, on the surface at least, that is making this into such a big deal – Tim Cook’s open letter didn’t just talk about the specifics of the case or indeed of iPhones, but in much broader terms:

“While we believe the FBI’s intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect.”

It’s wonderful stuff – and from the perspective of this privacy advocate at least it should be thoroughly applauded. It should, however, also be examined more carefully, with several pinches of salt, a healthy degree of scepticism and a closer look at the motivations. Ultimately, Apple is taking this stance because Apple believes it’s in Apple’s interests to take this stance. There may be a number of reasons for this. In a broad sense, Apple knows that security – and this is very much a security as well as a privacy issue – is critical for the success of the internet and of the technology sector in general. Security and privacy are critical underpinnings of trust, and trust is crucial for business success. People currently do trust Apple (in general terms) and that really matters to Apple’s business. The critical importance, again in a broad sense, of security and trust is why the other tech giants – Google, Facebook, Twitter et al – have also lined up behind Apple, though their own brands and businesses rely far less on privacy than Apple’s does. Indeed, for Google and Facebook privacy is very much a double-edged sword: their business models depend on their being able to invade our privacy for their own purposes. Trust and security, however, are crucial.

In a narrower sense, Apple has positioned itself as ‘privacy-friendly’ in recent years – partly in contrast to Google, but also in relation to the apparent overreach of governmental authorities. Apple is in a position to do this – its business model is based on shifting widgets, not harvesting data – but Apple has also taken the view that people now really care about privacy, enough to make decisions at least influenced by their sense of privacy. This is where things get interesting. In the last section of my book, Internet Privacy Rights, where I speculate about the possibility of a more privacy-friendly future, this is one of the key messages: business is the key. If businesses take privacy seriously, they’ll create a technological future where privacy is protected – but they won’t take it seriously out of high-minded principle. They’ll only take it seriously because there’s money in it for them, and there will only be money in it for them if we, their customers, take privacy seriously.

That, for me, could be the most positive thing to come from this story so far. Not just Apple but pretty much all the tech companies (in the US at least) have taken stances which suggest that they think people do take privacy seriously. A few years ago that would have been much less likely – and it is a good sign, from my perspective at least. Ultimately, as I’ve argued many times before, a privacy-friendly internet is something that we will all benefit from – even law enforcement. It is often very hard to see it that way, but in the long term the gains in security, in trust and much more will help us all.

That’s why in the UK, the Intelligence and Security Committee’s report criticised the new Investigatory Powers Bill for not making protection of privacy more prominent. As they put it:

“One might have expected an overarching statement at the forefront of the legislation, or to find universal privacy protections applied consistently throughout the draft Bill”

It is also why the FBI is playing a very dangerous game by taking on Apple in this way. Whilst it is risky for Apple to be seen as ‘on the side of the terrorists’, it may be even more risky for the FBI (and by implication the whole US government) to be seen as wanting to ride roughshod over everyone’s privacy. This is a battle for hearts and minds as much as a battle over the data in one phone – data that may very well turn out to be pretty much useless. It is hard to tell exactly who is winning that battle, but right now my money would be on the tech companies. I hope I’m right, because in the end that would be to the benefit of us all.