Who needs privacy?

You might be forgiven for thinking that this government is very keen on privacy. After all, MPs all seem to enjoy the end-to-end encryption provided by the WhatsApp groups that they use to make their plots and plans, and they’ve been very keen to keep the details of their numerous parties during lockdown as private as possible – so successfully that it seems to have taken a year or more for information about evidently well-attended (work) events to become public. Some also seem enthused by the use of private email for work purposes, and by destroying evidence trails to keep other information private and thwart FOI requests – Sue Gray even provided some advice on the subject a few years back.

On the other hand, they also love surveillance – 2016’s Investigatory Powers Act gives immense powers to the authorities to watch pretty much our every move on the internet, and gather pretty much any form of data about us that’s held by pretty much anyone. They’ve also been very keen to force everyone to use ‘real names’ on social media – which, though it may not seem completely obvious, is a move designed primarily to cut privacy. And, for many years, they’ve been fighting against the expansion of the use of encryption. Indeed, a new wave of attacks on encryption is just beginning.

So what’s going on? In some ways, it’s very simple: they want privacy for themselves, and no privacy for anyone else. It fits the general pattern of ‘one rule for us, another for everyone else’, but it’s much more insidious than that. It’s not just a double-standard, it’s the reverse of what is appropriate – because it needs to be understood that privacy is ultimately about power.

People need privacy against those who have power over them – employees need privacy from their employers (something exemplified by the needs of whistleblowers for privacy and anonymity), citizens need privacy from their governments, victims need privacy from their stalkers and bullies and so on. Kids need privacy from their parents, their teachers and more. The weaker and more vulnerable people are, the more they need privacy – and the approach by the government is exactly the opposite. The powerful (themselves) get more privacy; the weaker (ordinary people, and in particular minority groups and children) get less or even no privacy. The people who should have more accountability – notably the government – get privacy to prevent that accountability, whilst the people who need more protection lose the protection that privacy can provide.

This is why moves to ban or limit the use of end-to-end encryption are so bad. Powerful people – and tech-savvy people, like the criminals that they use as the excuse for trying to restrict encryption – will always be able to get that encryption. You can do it yourself, if you know how. The rest of the people – the ‘ordinary’ users of things like Facebook Messenger – are the ones who need it, to protect themselves from criminals, stalkers, bullies and so on – and they are the very people that moves like this from the government would stop from getting it.
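Just how accessible ‘do it yourself’ encryption already is can be shown in a few lines of code. The sketch below uses the freely available PyNaCl library – one of many open-source options, and an illustration rather than any particular app’s implementation. Banning end-to-end encryption in mainstream products does nothing to stop anyone who can run something like this:

```python
# A minimal end-to-end encryption sketch using the open-source PyNaCl
# library (pip install pynacl). Illustrative only: real messaging apps
# add key verification, forward secrecy and more - but the core
# capability really is this accessible.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only the public halves are ever shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and his public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"Meet at the usual place at 8")

# Only Bob, holding his private key, can decrypt it.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'Meet at the usual place at 8'
```

Any intermediary – a platform, an ISP, a government – sees only the ciphertext. The tech-savvy will always have this; restrictions only strip it from ordinary users.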

The push will be a strong one – trying to persuade us that in order to protect kids etc we need to be able to see everything they’re doing, so we need to (effectively) remove all their privacy. That’s just wrong. Making their communications ‘open’ to the authorities, to their parents etc also makes them open to their enemies – bullies, abusers, scammers etc, and indeed those parents or authority figures who are themselves dangerous to kids. We need to understand that this is wrong.

None of this is easy – and it’s very hard to give someone privacy when you don’t trust them. That’s another key here. We need to learn who to trust and how to trust them – and we need to do our best to teach our kids how to look after themselves. To a great extent they already know – kids understand privacy far more than people give them credit for – and we need to trust that too.

Contact tracing, privacy, magical thinking – and trust!

The saga of the UK’s contact tracing app has barely begun but already it is fraught with problems. Technical problems – the app barely works on iPhones, for example, and communication between iPhones requires someone with an Android phone to be in close proximity – are just the start of it. Legal problems are another issue – the app looks likely to stretch data protection law at the very least. Then there are practical problems – will the app record you as having contact with people from whom you are blocked by a wall, for example – and the huge issue of getting enough people to download it when many don’t have smartphones, many won’t be savvy enough to get it going, and many more, it seems likely, won’t trust the app enough to use it.

That’s not even to go into the bigger problems with the app. First of all, it seems unlikely to do what people want it to do – though even what is wanted is unclear, a problem which I will get back to. Secondly, it rides roughshod over privacy in not just a legal but a practical way – and despite what many might suggest, people care about privacy enough to make decisions on its basis.

This piece is not about the technical details of the app – there are people far more technologically adept than me who have already written extensively and well about this – nor is it about the legal details, which have also been covered extensively and well by some real experts (see the Hawktawk blog on data protection, and the opinion of Matthew Ryder QC, Edward Craven, Gayatri Sarathy & Ravi Naik, for example). Rather, it is about the underlying problems that have beset this project from the start: misunderstanding privacy, magical thinking, and failure to grasp the nature of trust.

These three issues together mean that right now, the project is likely to fail, do damage, and distract from genuine ways to help deal with the coronavirus crisis, and the best thing people should do is not download or use the app, so that the authorities are forced into a rethink and into a better way forward. It would be far from the first time during this crisis that the government has had to be nudged in a positive direction.

Misunderstanding Privacy – Part 1

Although people often underplay it – particularly in relation to other people – privacy is important to everyone. MPs, for example, will fiercely guard their own privacy whilst passing the most intrusive of surveillance laws. Journalists will fight to protect the privacy of their sources even whilst invading the privacy of the subjects of their investigations. Undercover police officers will resist even legal challenges to reveal their identities after investigations go wrong.

This is for one simple reason: privacy matters to people when things are important.

That is particularly relevant here, because the contact tracing app hits at three of the most important parts of our privacy: our health, our location, and our social interactions. Health and location data, as I detail in my most recent book, what do we know and what should we do about internet privacy, are two of the key areas of the current data world, in part because we care a lot about them and in part because they can be immensely valuable in both positive and negative ways. We care about them because they’re intensely personal and private – but that’s also why they can be valuable to those who wish to exploit or harm us. Health data, for example, can be used to discriminate – something the contact tracing app might well enable, as it could force people to self-isolate whilst others are free to move, or even act as an enabler for the ‘immunity passports’ that have been mooted but are fraught with even more problems than the contact tracing app.

Location data is another matter and something worthy of much more extensive discussion – but suffice it to say that there’s a reason we don’t like the idea of being watched and followed at all times, and that reason is real. If people know where you are or where you have been, they can learn a great deal about you – and know where you are not (if you’re not at home, you might be more vulnerable to burglars) as well as where you might be going. Authoritarian states can find dissidents. Abusive spouses can find their victims and so forth. More ‘benignly’, it can be used to advertise and sell local and relevant products – and in the aggregate can be used to ‘manage’ populations.

Relationship data – who you know, how well you know them, what you do with them and so forth – is in online terms one of the things that makes Facebook so successful and at the same time so intrusive. What a contact tracing system can do is translate that into the offline world. Indeed, that’s the essence of it: to gather data about who you come into contact with, or at least in proximity to, by getting your phone to communicate with all the phones close to you in the real world.

This is something we do care about and should care about – and something we could and should be protective of. Whilst it makes sense in relation to protecting against the spread of an infection, the potential for misuse of this kind of data is perhaps even greater than that of health and location data. Authoritarian states know this – it’s been standard practice for spies for centuries. The Stasi’s files were full of details of who had met whom and when, and for how long – this is precisely the kind of data that a contact tracing system has the potential to gather. This is also why we should be hugely wary of establishing systems that enable it to be done easily, remotely and at scale. This isn’t just privacy as some kind of luxury – this is real concern about things that are done in the real world and have been for many, many years, just not with the speed, efficiency and cheapness of installing an app on people’s phones.

Some of this people ‘instinctively’ know – they feel that the intrusions on their privacy are ‘creepy’ – and hence resist. Businesses and government often underestimate how much people care, how much they resist – and how able they are to resist. In my work I have seen this again and again. Perhaps the most relevant here was the dramatic nine-day failure that was the Samaritans Radar app, which scanned people’s tweets to detect whether they might be feeling vulnerable and even suicidal, but didn’t understand that even this scanning would be seen as intrusive by the very people it was supposed to protect. They rebelled, and the app was abandoned almost as soon as it had started. The NHS’s own ‘care.data’ scheme, far bigger and grander, collapsed for similar reasons – it wanted to suck up data from GP practices into a great big central database, but didn’t get either the legal or the practical consent from enough people to make it work. Resistance was not futile – it was effective.

This resistance seems likely in relation to the contact tracing app too – not least because the resistance grows spectacularly when there is little trust in the people behind a project. And, as we shall see, the government has done almost everything in its power to make people distrust their project.

Magical thinking

The second part of the problem is what can loosely be called ‘magical thinking’. This is another thing that is all too common in what might loosely be called the ‘digital age’. Broadly speaking, it means treating technology as magical, and thinking that you can solve complex, nuanced and multifaceted problems with a wave of a technological wand. It is this kind of magic that Brexiters believed would ‘solve’ the Irish border problems (it won’t) and led anti-porn campaigners to think that ‘age verification’ systems online would stop kids (and often adults) from accessing porn (it won’t).

If you watched Matt Hancock launch the app at the daily Downing Street press conference, you could have seen how this works. He enthused about the app like a child with a new toy – and suggested that it was the key to solving all the problems. Even with the best will in the world, a contact tracing app could only be a very small part of a much bigger operation, and only make a small contribution to solving whatever problems they want it to solve (more of which later). Magical thinking, however, makes it the key, the silver bullet, the magic spell that needs only to be spoken to transform Cinderella into a beautiful princess. It will never be that, and the more it is thought of in those terms the less chance it has of working in any way at all. The magical thinking means that the real work that needs to go on is relegated to the background or eliminated altogether, replaced only by the magic of tech.

Here, the app seems to be designed to replace the need for a proper and painstaking testing regime. As it stands, it is based on self-reporting of symptoms, rather than testing. A person self-reports, and then the system alerts anyone who it thinks has been in contact with that person that they might be at risk. Regardless of the technological safeguards, that leaves the system at the mercy of hypochondriacs who will report the slightest cough or headache, thus alerting anyone they’ve been close to, or malicious self-reporters who either just want to cause mischief (scare your friends for a laugh) or who actually want to cause damage – go into a shop run by a rival, then later self-report and get all the workers in the shop worried into self-isolation.
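The structural weakness is easy to see if you sketch the reporting logic. The snippet below is a deliberately crude illustration with invented names – not the actual NHSX code – but it captures the point: an unverified self-report simply fans out alerts to everyone the phone has recently been near.

```python
# A deliberately crude sketch of unverified self-reporting (invented
# names, not the actual NHSX code): one report alerts every recent contact.
def self_report(user_id, contact_log, send_alert):
    """Alert every recent contact of user_id - with no verification step."""
    for contact_id in contact_log.get(user_id, []):
        send_alert(contact_id, "You may have been exposed - please self-isolate")

# Nothing distinguishes a genuine case from a hypochondriac or a malicious
# reporter: one tap, and a rival shop's whole staff is told to stay home.
contact_log = {"mallory": ["shopworker1", "shopworker2", "shopworker3"]}
self_report("mallory", contact_log, lambda cid, msg: print(cid, "->", msg))
```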

These are just a couple of the possibilities. There are more. Stoics, who have symptoms but don’t take them seriously and don’t report – or people afraid to report because it might get them into trouble with work or friends. Others who don’t even recognise the symptoms. Asymptomatic people who can go around freely infecting people without ever triggering the system at all. The magical thinking that suggests the app can do everything doesn’t take human nature into account – let alone malicious actors. History shows that whenever a technological system is developed, the people who wish to find and exploit flaws in it – or different ways to use it – are ready to take advantage.

Magical thinking also means not thinking anything will go wrong – whether it be the malicious actors already mentioned or some kind of technical flaw that has not been anticipated. It also means assuming that all these problems must be soluble by a little bit of techy cleverness, because the techies are so clever. Of course they are clever – but there are many problems that tech alone can’t solve.

The issue of trust

One of those is trust. Tech can’t make people trust you – indeed, many people are distinctly distrustful of technology. The NHS generates trust, and those behind the app may well be assuming that they can ride on the coattails of that trust – but that itself may be wishful thinking, because they have done almost none of the things that generate real trust. And the app depends hugely on trust: without it, people won’t download the app, let alone use it.

How can they generate that trust? The first point, and perhaps the hardest, is to be trustworthy. The NHS generates trust but politicians do the opposite. These particular politicians have been demonstrably and dramatically untrustworthy, noted for their lies – Boris Johnson having been sacked from more than one job for having lied. Further, their tech people have a particularly dishonourable record – Dominic Cummings is hardly seen as a paragon of virtue even by his own side, whilst the manipulative social media tactics of the leave campaign were remarkable for both their effectiveness and their dishonesty.

In those circumstances, that means you have to work hard to generate trust. There are a few keys here. The first is to distance yourself from the least trustworthy people – the vote leave campaigners should not have been let near this with a barge pole, for example. The second is to follow systems and procedures in an exemplary way, building in checks and balances at all times, and being as transparent as possible.

Here, they’ve done the opposite. It has been almost impossible to find out what was going on until the programme was actually already in pilot stage. Parliament – through its committee system – was not given oversight until the pilot was already under way, and the report of the Human Rights Committee was deeply critical. There appears to have been no Data Protection Impact Assessment done in advance of the pilot – which is almost certainly in breach of the GDPR.

Further, it is still not really clear what the purpose of the project is – and this is also something crucial for the generation of trust. We need to know precisely what the aims are – and how they will be measured, so that it is possible to ascertain whether it is a success or not. We need to know the duration, what happens on completion – to the project, to the data gathered and to the data derived from the data gathered. We need to know how the project will deal with the many, many problems that have already been discussed – and we needed to know that before the project went into its pilot stage.

Being presented with a ‘fait accompli’ and being told to accept it is one way to reduce trust, not to gain it. All these processes need to take place whilst there is still a chance to change the project – and change it significantly – because all the signs are that a significant change will be needed. Currently it seems unlikely that the app will do anything very useful, and it will have significant and damaging side effects.

Misunderstanding Privacy – part 2

…which brings us back to privacy. One of the most common misunderstandings of privacy is the idea that it’s about hiding something away – hence the facetious and false ‘if you’ve got nothing to hide you’ve got nothing to fear’ argument that is made all the time. In practice, privacy is complex and nuanced and more about controlling – or at least influencing – what kind of information about you is made available to whom.

This last part is the key. Privacy is relational. You need privacy from someone or something else, and you need it in different ways. Privacy scholars are often asked ‘who do you worry about most, governments or corporations?’ Are you more worried about Facebook or GCHQ? It’s a bit of a false question – because you should be (and probably are) worried about them in different ways, just as you’re worried about privacy from your boss, your parents, your kids, your friends in different ways. You might tell your doctor the most intimate details about your health, but you probably wouldn’t tell your boss or a bloke you meet in the pub.

With the coronavirus contact tracing app, this is also the key. Who gets access to our data, who gets to know about our health, our location, our movements and our contacts? If we know this information is going to be kept properly confidential, we might be more willing to share it. Do we trust our doctors to keep it confidential? Probably. Would we trust the politicians to keep it confidential? Far less likely. How can we be sure who will get access to it?

Without getting into too much technical detail, this is where the key current argument is over the app. When people talk about a centralised system, they mean that the data (or rather some of the data) is uploaded to a central server when you report symptoms. A decentralised system does not do that – the data is only communicated between phones, and doesn’t get stored in a central database. This is much more privacy-friendly, but does not build up a big central database for later use and analysis.
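For the non-technical reader, a rough sketch may help. The code below is loosely modelled on decentralised proposals such as DP-3T and the Apple/Google approach – heavily simplified, with invented names, and not the NHSX design. Phones broadcast short-lived random identifiers and remember what they hear; matching happens on the phone itself, so no central register of who met whom is ever built.

```python
# A heavily simplified sketch of a decentralised contact-tracing scheme,
# loosely in the spirit of DP-3T / the Apple-Google design. Invented names;
# real schemes derive identifiers from rotating keys, add timing data, etc.
import secrets

class Phone:
    def __init__(self):
        self.my_ids = []        # random identifiers this phone has broadcast
        self.heard_ids = set()  # identifiers heard from nearby phones

    def broadcast_id(self):
        """Generate and 'broadcast' a fresh, unlinkable random identifier."""
        eph_id = secrets.token_hex(16)
        self.my_ids.append(eph_id)
        return eph_id

    def hear(self, eph_id):
        """Record an identifier received over Bluetooth; it stays on the phone."""
        self.heard_ids.add(eph_id)

    def check_exposure(self, published_ids):
        """Match locally against identifiers published by diagnosed users."""
        return bool(self.heard_ids & set(published_ids))

alice, bob = Phone(), Phone()
bob.hear(alice.broadcast_id())           # the two phones pass in the street

# Alice tests positive and publishes only her own broadcast identifiers.
print(bob.check_exposure(alice.my_ids))  # True - computed on Bob's phone
```

A centralised system differs at exactly one point: the ‘heard’ identifiers are uploaded to a central server, which does the matching – and in doing so accumulates precisely the kind of social graph described above.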

This is why privacy people much prefer the idea of a decentralised system – because, amongst other things, it keeps the data out of the hands of people that we cannot and should not trust. Out of the hands of the people we need privacy from.

The government does not seem to see this. They’re keen to stress how well the data is protected in ‘security’ terms – protected from hackers and so forth – without realising (or perhaps admitting) that the people we really want privacy from, the people who present the biggest risk to the users, are the government themselves. We don’t trust this government – and we should not really trust any government, but build in safeguards and protections from those governments, and remember that what we build now will be available not just to this government but to successors, which may be even worse, however difficult that might be to imagine.

Ways forward?

Where do we go from here? It seems likely that the government will try to push on regardless, and present whatever happens as a great success. That should be fought against, tooth and nail. They can and should be challenged and pushed on every point – legal, technical, practical, and trust-related. That way they may be willing to move to a more privacy-friendly solution. Such solutions do exist, and it’s not too late to change.

what do we know and what should we do about…? internet privacy

My new book, what do we know and what should we do about internet privacy, has just been published by Sage. It is part of a series of books covering a wide range of current topics – the first ones have been on immigration, inequality, the future of work and housing.

This is a very different kind of book from my first two books – Internet Privacy Rights, and The Internet, Warts and All, both of which are large, relatively serious academic books, published by Cambridge University Press, and sufficiently expensive and academic as to be purchasable only by other academics – or more likely university libraries. The new book is meant for a much more general audience – it is short, written intentionally accessibly, and for sale at less than £10. It’s not a law book – the series is primarily social science, and in many ways I would call the book more sociology than anything else. I was asked to write the book by the excellent Chris Grey – whose Brexit blogs have been vital reading over the last few years – and I was delighted to be asked, because making this subject in particular more accessible has been something I’ve been wanting to do for a long time. Internet privacy has been a subject for geeks and nerds for years – but as this new book tries to show, it’s something that matters more and more for everyone these days.


It may be a short book (well, it is a short book, well under 100 pages) but it covers a wide range. It starts by setting the context – a brief history of privacy, a brief history of the internet, and then showing how we got from what were optimistic, liberal and free beginnings to the current situation – all-pervading surveillance, government involvement at every level, domination by a few, huge corporations with their own interests at heart. It looks at the key developments along the way – the world-wide-web, search, social networks – and their privacy implications. It then focusses on the biggest ‘new’ issues: location data, health data, facial recognition and other biometrics, the internet of things, and political data and political manipulation. It sketches out how each of these matters significantly – but how the combination of them matters even more, and what it means in terms of our privacy, our autonomy and our future.

The final part of the book – the ‘what should we do about…’ section – is by its nature rather shorter. There is not as much that we can do as many of us would like – as the book outlines, we have reached a position from which it is very difficult to escape. We have built dependencies that are hard to find alternatives to – but not impossible. The book outlines some of the key strategies – from doing our best to extricate ourselves from the disaster that is Facebook to persuading our governments not to follow the ultimately destructive paths that they seem determined to pursue. Two policies get particular attention: ‘real names’ policies, which though superficially attractive are ultimately destructive and authoritarian, failing to deal with the issues they claim to address and putting vulnerable people in more danger; and the current, fundamentally misguided attempts to undermine the effectiveness of encryption.

Can we change? I have to admit this is not a very optimistic book, despite the cheery pink colour of its cover, but it is not completely negative. I hope that the starting point is raising awareness, which is what this book is intended to do.

The book can be purchased directly from Sage here, or via Amazon here, though if you buy it through Amazon, after you’ve read the book you might feel you should have bought it another way!


Paul Bernal

February 2020

A disturbing plan for control…

The Conservative Manifesto, unlike the Labour Manifesto, has some quite detailed proposals for digital policy – and in particular for the internet. Sadly, however, though there are a few bright spots, the major proposals are deeply disturbing and will send shivers down the spine of anyone interested in internet freedom.

Their idea of a ‘digital charter’ is safe, bland, motherhood and apple-pie stuff about safety and security online, with all the appropriate buzzwords of prosperity and growth. It seems a surprise, indeed, that they haven’t talked about having a ‘strong and stable internet’. They want Britain to be the best place to start and run a digital business, and to make Britain the safest place in the world to be online. Don’t we all?

When the detail comes in, some of it sounds very familiar to people who know what the law already says – and in particular what EU law already says: the eIDAS Regulation, the E-Commerce Directive and the Consumer Rights Directive already say much of what the Tory Manifesto says. Then, moving on to data protection, it gets even more familiar:

“We will give people new rights to ensure they are in control of their own data, including the ability to require major social media platforms to delete information held about them at the age of 18, the ability to access and export personal data, and an expectation that personal data held should be stored in a secure way.”

This is all from the General Data Protection Regulation (GDPR), passed in 2016, and due to come into force in 2018. Effectively, the Tories are trying to take credit for a piece of EU law – or they’re committing (as they’ve almost done before) to keeping compliant with that law after we’ve left the EU. That will be problematic, given that our surveillance law may make compliance impossible, but that’s for another time…

“…we will institute an expert Data Use and Ethics Commission to advise regulators and parliament on the nature of data use and how best to prevent its abuse.”

This is quite interesting – though notable that the word ‘privacy’ is conspicuous by its absence. It is, perhaps, the only genuinely positive thing in the Tory manifesto as it relates to the internet.

“We will make sure that our public services, businesses, charities and individual users are protected from cyber risks.”

Of course you will. The Investigatory Powers Act, however, does the opposite, as does the continued rhetoric against encryption. The NHS cyber attack, it must be remembered, was performed using a tool developed by GCHQ’s partners in the NSA. If the Tories really want to protect public services, businesses, charities and individuals, they need to change tack on this completely, and start promoting and supporting good practice and good, secure technology. Instead, they again double down in the fight against encryption (and thus against security):

“….we do not believe that there should be a safe space for terrorists to communicate online and will work to prevent them from having this capability.”

…but as anyone with any understanding of technology knows, if you stop terrorists communicating safely, you stop all of us from communicating safely.

Next:

“…we also need to take steps to protect the reliability and objectivity of information that is essential to our democracy and a free and independent press.”

This presumably means some kind of measures against ‘fake news’. Most proposed measures elsewhere in the world are likely to amount to censorship – and given what else is in the manifesto (see below) I think that is the only reasonable conclusion here.

“We will ensure content creators are appropriately rewarded for the content they make available online.”

This looks as though it almost certainly means harsher and more intense copyright enforcement. That, again, is only to be expected.

Then, on internet safety, they say:

“…we must take steps to protect the vulnerable… …online rules should reflect those that govern our lives offline…”

Yes, we already do.

“We will put a responsibility on industry not to direct users – even unintentionally – to hate speech, pornography, or other sources of harm”

Note that this says ‘pornography’, not ‘illegal pornography’, and the ‘unintentionally’ part begins the more disturbing part of the manifesto. Intermediaries seem likely to be stripped of much of their ‘mere conduit’ protection – and be required to monitor much more closely what happens through their systems. This, in general, has two effects: to encourage surveillance, and to encourage caution about content (effectively to chill speech). This needs to be watched very carefully indeed.

“…we will establish a regulatory framework in law to underpin our digital charter and to ensure that digital companies, social media platforms and content providers abide by these principles. We will introduce a sanctions regime to ensure compliance, giving regulators the ability to fine or prosecute those companies that fail in their legal duties, and to order the removal of content where it clearly breaches UK law.”

This is the most worrying part of the whole piece. Essentially it looks like a clampdown on the social media – and, to all intents and purposes, the establishment of a full-scale internet censorship system (see the ‘fake news’ point above). Where the Tories are refusing to implement statutory regulation for the press (the abandonment of part 2 of Leveson is mentioned specifically in the manifesto, along with the repeal of Section 40 of the Crime and Courts Act 2013, which was one of the few bits of Leveson part 1 that was implemented) they look very much as though they want to impose it upon the online media. The Daily Mail will have more freedom than blogging platforms, Facebook and Twitter – and you can draw your own conclusions from that.

When this is all combined with the Investigatory Powers Act, it looks very much like a solid clampdown on internet freedom. Surveillance has been enabled – this will strengthen the second part of the authoritarian pincer movement, the censorship side. Privacy has been wounded, now it’s the turn of freedom of expression to be attacked. I can see how this will be attractive to some – and will go down very well indeed with both the proprietors and the readers of the Daily Mail – but anyone interested in internet freedom should be very much disturbed.


A better debate on surveillance?

Back in 2015, Andrew Parker, the head of MI5, called for a ‘mature debate’ on surveillance – in advance of the Investigatory Powers Bill, the surveillance law which has now almost finished making its way through parliament, and will almost certainly become law in a few months’ time. Though there has been, at least in some ways, a better debate over this bill than over previous attempts to update the UK’s surveillance law, it still seems as though the debate in both politics and the media remains distinctly superficial and indeed often deeply misleading.

It is in this context that I have a new academic paper out: “Data gathering, surveillance and human rights: recasting the debate”, in a new journal, the Journal of Cyber Policy. It is an academic piece, and access, sadly, is relatively restricted, so I wanted to say a little about the piece here, in a blog which is freely accessible to all – at least in places where censorship of the internet has not yet taken full hold.

The essence of the argument in the paper is relatively straightforward. The debate over surveillance is simplified and miscast in a number of ways, and those ways in general tend to make surveillance seem more positive and effective than it is, and its impact on ordinary people less broad and significant than it might be. The rights that it impinges upon are underplayed, and the side effects of the surveillance are barely mentioned, making surveillance seem much more attractive than it should be – and hence decisions are made that might not have been made if the debate had been better informed. If the debate is improved, then the decisions will be improved – and we might have both better law and better surveillance practices.

Perhaps the most important way in which the debate needs to be improved is to understand that surveillance does not just impact upon what is portrayed as a kind of selfish, individual privacy – privacy that it is implied does not matter for those who ‘have nothing to hide’ – but upon a wide range of what are generally described as ‘civil liberties’. It has a big impact on freedom of speech – an impact that has been empirically evidenced in the last year – and upon freedom of association and assembly, both online and in the ‘real’ world. One of the main reasons for this – a reason largely missed by those who advocate for more surveillance – is that we use the internet for so many more things than we ever used telephones and letters, or even email. We work, play, romance and research our health. We organise our social lives, find entertainment, shop, discuss politics, do our finances and much, much more. There is pretty much no element of our lives that does not have a very significant online element – and that means that surveillance touches all aspects of our lives, and any chilling effect doesn’t just chill speech or invade selfish privacy, but almost everything.

This, and much more, is discussed in my paper – which I hope will contribute to the debate, and indeed stimulate debate. Some of it is contentious – the role of commercial surveillance, and the interaction between it and state surveillance – but that too is intentional. Contentious issues need to be discussed.

There is one particular point that often gets missed – the question of when surveillance occurs. Is it when data is gathered, when it is algorithmically analysed, or when human eyes finally look at it? In the end, this may be a semantic point – what technically counts as ‘surveillance’ is less important than what actually has an impact on people, which begins at the data gathering stage. In my conclusion, I bring out that point by quoting our new Prime Minister, from her time as Home Secretary and chief instigator of our current manifestation of surveillance law. This is how I put it in the paper:

“Statements such as Theresa May’s that ‘the UK does not engage in mass surveillance’ though semantically arguable, are in effect deeply unhelpful. A more accurate statement would be that:

‘the UK engages in bulk data gathering that interferes not only with privacy but with freedom of expression, association and assembly, the right to a free trial and the prohibition of discrimination, and which puts people at a wide variety of unacknowledged and unquantified risks.’”

It is only when we can have clearer debate, acknowledging the real risks, that we can come to appropriate conclusions. We are probably too late for that to happen in relation to the Investigatory Powers Bill, but given that the bill includes measures such as the contentious Internet Connection Records that seem likely to fail, in expensive and probably farcical ways, the debate will be returned to again and again. Next time, perhaps it might be a better debate.

How not to reclaim the internet…

The new campaign to ‘Reclaim the Internet’, to ‘take a stand against online abuse’ was launched yesterday – and it could be a really important campaign. The scale and nature of abuse online is appalling – and it is good to see that the campaign does not focus on just one kind of abuse, instead talking about ‘misogyny, sexism, racism, homophobia, transphobia’ and more. There is more than anecdotal evidence of this abuse – even if the methodology and conclusions of the particular Demos survey used at the launch have been subject to significant criticism: Dr Claire Hardaker of Lancaster University’s forensic dissection is well worth a read – and it is really important not to suggest that this kind of abuse is not hideous or that it should not be taken seriously. It should – but great care needs to be taken, and the risks attached to many of the potential strategies to ‘reclaim the internet’ are very high indeed. Many of them would have precisely the wrong effect: silencing exactly those voices that the campaign wishes to have heard.

Surveillance and censorship

Perhaps the biggest risk is that the campaign is used to enable and endorse those twin tools of oppression and control, surveillance and censorship. The idea that we should monitor everything to try to find all those who commit abuse or engage in sexism, misogyny, racism, homophobia and transphobia may seem very attractive – find the trolls, root them out and punish them – but building a surveillance infrastructure and making it seem ‘OK’ is ultimately deeply counterproductive for almost every aspect of freedom. Evidence shows that surveillance chills free speech, discourages people from seeking out information, associating and assembling with people and more – as well as enabling discrimination and exacerbating power differences. Surveillance helps the powerful to oppress the weak – so should be avoided except in the worst of situations. Any ‘solutions’ to online abuse that are based around an increase in surveillance need a thorough rethink.

Censorship is the other side of the coin – but works with surveillance to let the powerful control the weak. Again, huge care is needed to make sure that attempts to ‘reclaim’ the internet don’t become tools to enforce orthodoxy and silence voices that don’t ‘fit’ the norm. Freedom of speech matters most precisely when that speech might offend and upset – it is easy to give those you like the freedom to say what they want, much harder to give those you disagree with that freedom. It’s a very difficult area – because if we want to reduce the impact of abuse, that must mean restricting abusers’ freedom of speech – but it must be navigated very carefully, taking care not to create tools that make it easy to silence those who merely disagree rather than those who actually abuse.

Real names

One particularly important trap not to fall into is that of demanding ‘real names’: it is a common idea that the way to reduce abuse is to prevent people being anonymous online, or to ban the use of pseudonyms. Not only does this not work, but it, again, damages many of those who the idea of ‘reclaiming the internet’ is intended to support. Victims of abuse in the ‘real’ world, people who are being stalked or victimised, whistleblowers and so forth need pseudonyms in order to protect themselves from their abusers, stalkers, enemies and so on. Force ‘real names’ on people, and you put those people at risk. Many will simply not engage – chilled by the demand for real names and the fear of being revealed. That’s even without engaging with the huge issue of the right to define your own name – and the joy of playing with identity, which for some people is one of the great pleasures of the internet, from parodies to fantasies. Real names are another way that the powerful can exert their power on the weak – it is no surprise that the Chinese government are one of the most ardent supporters of the idea of forcing real names on the internet. Any ‘solution’ to reclaiming the internet that demands or requires real names should be fiercely opposed.

Algorithms and errors

Another key mistake to be avoided is over-reliance on algorithmic analysis – particularly of the content of social media posts. This is one of the areas in which the Demos survey lets itself down – it makes assumptions about the ability of algorithms to understand language. As Dr Claire Hardaker puts it:

“Face an algorithm with messy features like sarcasm, threats, allusions, in-jokes, novel metaphors, clever wordplay, typographical errors, slang, mock impoliteness, and so on, and it will invariably make mistakes. Even supposedly cut-and-dried tasks such as tagging a word for its meaning can fox a computer. If I tell you that “this is light” whilst pointing to the sun you’re going to understand something very different than if I say “this is light” whilst picking up an empty bag. Programming that kind of distinction into a software is nightmarish.”

This kind of error is bad enough in a survey – but some of the possible routes to ‘reclaiming the internet’ include using this kind of analysis to identify offending social media comments, or even to automatically block or censor social media comments. Indeed, much internet filtering works that way – one of the posts on this blog which was commenting on ‘porn blocking’ was blocked by a filter as it had words relating to pornography in it a number of times. Again, reliance on algorithmic ‘solutions’ to reclaiming the internet is very dangerous – and could end up stifling conversations, reducing freedom of speech and much more.
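The problem is easy to demonstrate. The following is a deliberately naive keyword filter of the general kind that underlies much internet blocking – not any specific vendor’s system – and it cannot tell criticism of a thing from the thing itself:

```python
# A deliberately naive keyword filter, of the general kind that underlies
# much internet blocking - not any specific vendor's product.
BLOCKED_WORDS = {"porn", "pornography"}

def should_block(text, threshold=2):
    """Block any text mentioning blocked words 'too often'."""
    words = text.lower().split()
    hits = sum(words.count(word) for word in BLOCKED_WORDS)
    return hits >= threshold

critique = ("Porn blocking is a flawed policy: filters meant to stop "
            "pornography routinely block articles about pornography itself.")
print(should_block(critique))  # True - the critique of blocking gets blocked
```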

Who’s trolling who? Double-edged swords…

One of the other major problems with dealing with ‘trolls’ (the quotation marks are entirely intentional) is that in practice it can be very hard to identify them. Indeed, in conflicts on the internet it is common for both sides to believe that the other side is the one doing the abuse, that the other side are the ‘trolls’, and that they themselves are the victims who need protecting. Anyone who observes even the most one-sided of disputes should be able to see this – from GamerGate to some of the conflicts over transphobia. Few of those whom others would consider ‘trolls’ would consider themselves to be trolls.

The tragic case of Brenda Leyland should give everyone pause for thought. She was described and ‘outed’ as a ‘McCann troll’ – she tweeted as @Sweepyface and campaigned, as she saw it, for justice for Madeleine McCann, blaming Madeleine’s parents for her death. Sky News reporter Martin Brunt doorstepped her, and days later she was found dead, having committed suicide. Was she a ‘troll’? Was the media response to her appropriate, proportionate, or positive? These are not easy questions – because this isn’t an easy subject.

Further, one of the best defences of a ‘troll’ is to accuse the person they’re trolling of being a troll – and that is something that should be remembered whatever the tools you introduce to help reduce abuse online. Those tools are double-edged swords. Bring in quick and easy ways to report abuse – things like immediate blocking of social media accounts when those accounts are accused of being abusive – and you will find those tools being used by the trolls themselves against their victims. ‘Flame wars’ have existed pretty much since the beginning of the internet – any tools you create ‘against’ abuse will be used as weapons in flame wars in the future.

No quick fixes and no silver bullets

That should remind us of the biggest point here. There are no quick fixes to this kind of problem. No silver bullets that will slay the werewolves, or magic wands that will make everything OK. Technology often encourages the feeling that if only we created this one new tool, we could solve everything. In practice, it’s almost never the case – and in relation to online abuse this is particularly true.

Some people will suggest that it’s already easy. ‘All you have to do is block your abuser’ is all very well, but if you get 100 new abusive messages every minute you’ll spend your whole time blocking. Some will say that the solution is just not to feed the trolls – but many trolls don’t need any feeding at all. Others may suggest that people are just whining – none of this really hurts you, it’s just words – but that’s not true either. Words do hurt – and most of those suggesting this haven’t been subject to the kind of abuse that happens to others. What’s more, the chilling effect of abuse is real – if you get attacked every time you go online, why on earth would you want to stay online?

The problem is real, and needs careful thought and time to address. The traps involved in addressing it – and I’ve mentioned only a few of them here – are also real, and need to be avoided and considered very carefully. There really are no quick fixes – and it is really important not to raise false hopes that it can all be solved quickly and easily. That false hope may be the biggest trap of all.

Panama, privacy and power…

David Cameron’s first reaction to the questions about his family’s involvement with the Mossack Fonseca leaks was that it was a ‘private matter’ – something that was greeted with a chorus of disapproval from his political opponents and large sections of both the social and ‘traditional’ media. Privacy scholars and advocates, however, were somewhat muted – and quite rightly, because there are complex issues surrounding privacy here, issues that should at the very least make us pause and think. Privacy, in the view of many people, is a human right. It is included in one form or another in all the major human rights declarations and conventions. This, for example, is Article 8 of the European Convention on Human Rights:

“Everyone has the right to respect for his private and family life, his home and his correspondence.”

Everyone. Not just the people we like. Indeed, the test of your commitment to human rights is how you apply them to those who you don’t like, not how you apply them to those that you do. It is easy to grant rights to your friends and allies, harder to grant them to your enemies or those you dislike. We see how many of those who shout loudly about freedom of speech when their own speech is threatened are all too ready to try to shut out their enemies: censorship of extremist speech is considered part of the key response to terrorism in the UK, for example. Those of us on the left of politics, therefore, should be very wary of overriding our principles when the likes of David Cameron and George Osborne are concerned. Even Cameron and Osborne have the right to privacy, we should be very clear about that. We can highlight the hypocrisy of their attempts to implement mass surveillance through the Investigatory Powers Bill whilst claiming privacy for themselves, but we should not deny them privacy itself without a very good cause indeed.

Privacy for the powerful?

And yet that is not the whole story. Rights, and human rights in particular, are most important when used by the weak to protect themselves from the powerful. The powerful generally have other ways to protect themselves. Privacy in particular has at times been given a very bad name because it has been used by the powerful to shield themselves from scrutiny. A stream of philandering footballers have tried to use privacy law to prevent their affairs becoming public – Ryan Giggs, Rio Ferdinand and John Terry among them. Prince Charles’ ultimately unsuccessful attempts to keep the ‘Black Spider Memos’ from being exposed were also on the basis of privacy. The Catholic Church covered up the abuses of its priests. Powerful people using a law which their own kind largely forged is all too common, and should not be accepted without a fight. As feminist scholar Anita Allen put it:

“[it should be possible to] rip down the doors of ‘private’ citizens in ‘private’ homes and ‘private’ institutions as needed to protect the vital interests of vulnerable people.”

This argument may have its most obvious application in relation to domestic abuse, but it also has an application to the Panama leaks – particularly at a time when the politics of austerity is being used directly against the vital interests of vulnerable people. Part of the logic of austerity is that there isn’t enough money to pay for welfare and services – and part of the reason that we don’t have ‘enough’ money is that so much tax is being avoided or evaded, so there’s a public interest in exposing the nature and scale of tax avoidance and evasion, a public interest that might override the privacy rights of the individuals involved.

How private is financial information?

That brings the next question: should financial or taxation information be treated as private, and accorded the strongest protection? Traditions and laws vary on this. In Norway, for example, income and tax information for every citizen is publicly available. This has been true since the 19th century – from the Norwegian perspective, financial and tax transparency is part of what makes a democratic society function.

It is easy to see how this might work – and indeed, an anecdote from my own past shows it very clearly. When I was working for one of the biggest chartered accountancy firms back in the 80s, I started to get suspicious about what had happened over a particular pay rise – so I started asking my friends and colleagues, all of whom had started with the firm at the same time and progressed up the ladder in the same way, how much they were earning. I discovered, to my shock, that every single woman was earning less than every single man. That is, the highest paid woman earned less than the lowest paid man – and I knew them well enough to know that this was in no way a reflection of their merits as workers. The fact that salaries were considered private, and that no-one was supposed to know (or ask) what anyone else was earning, meant that what appeared to me – once I knew about it – to be blatant sexism was kept completely secret. Transparency would have exposed it in a moment – and probably prevented it from happening.

In the UK, however, privacy over financial matters is part of our culture. That may well be a reflection of our conservatism – it functions in a ‘conservative’ way, tending to protect the power of the powerful – but it is also something that most people, I would suggest, believe is right. Indeed, as a privacy advocate I would in general support more privacy rather than less. It might be a step too far to suggest that all our finances should be made public – but not, perhaps, to suggest that the finances of those in public office should be. The people who, in this case, are supporting or driving policies should be required to show whether they are benefiting from those policies – and whether they are being hypocritical in putting those policies forward. We should be able to find out whether they personally benefit from tax cuts or changes, for example, and whether they’re contributing appropriately when they’re requiring others to tighten their belts.

I do not, of course, expect any of this to happen. In the UK in particular the powerful have far too strong a hold on our politics to let it happen. That then brings me to one more privacy-related issue exposed by the Panama papers: if there is no legal way for information that is to the public benefit to come out, what approach should be taken to the illegal ways that information is acquired? There have been many other prominent examples – Snowden’s revelations about the NSA, GCHQ and so on, and Hervé Falciani’s data from HSBC in Switzerland in particular – where in some very direct ways the public interest could be said to be served by the leaks. Are they whistleblowers or criminals? Spies? Should they be prosecuted or cheered? And then what about other hackers like the ‘Impact Team’ who hacked Ashley Madison? Whether each of them was doing ‘good’ is a matter of perspective.

Vulnerability of data…

One thing that should be clear, however, is that no-one should be complacent about data security and data vulnerability. All data, however it is held, wherever it is held, and whoever it is held by, is vulnerable. The degree of that vulnerability, the likelihood of any vulnerability being exploited and so forth varies a great deal – but the vulnerability is there. That has two direct implications for the state of the internet right now. Firstly, it means that we should encourage and support encryption – and not do anything to undermine it, even for law enforcement purposes. Secondly, it means that we should avoid holding data that we don’t need to hold – let alone create unnecessary data. The Investigatory Powers Bill breaks both of those principles. It undermines rather than supports encryption, and requires the creation of massive amounts of data (the Internet Connection Records) and the gathering and/or retention of even more (via the various bulk powers). All of this adds to our vulnerability and our risks – something that we should think very, very hard before doing. I’m not sure that thinking is happening.


Internet Connection Records: answering the wrong question?

Watching and listening to the Commons debate over the Investigatory Powers Bill, and in particular when ‘Internet Connection Records’ were mentioned, it was hard not to feel that what was being discussed had very little connection with reality. There were many mentions of how bad and dangerous things were on the internet, how the world had changed, and how we needed this law – and in particular Internet Connection Records (ICRs) – to deal with the new challenges. As I watched, I found myself imagining a distinctly unfunny episode of Yes Minister which went something like this:


Scene 1:

Minister sitting in leather arm chair, glass of brandy in his hand, while old civil servant sits opposite, glasses perched on the end of his nose.

Minister: This internet, it makes everything so hard. How can we find all these terrorists and paedophiles when they’re using all this high tech stuff?

Civil Servant: It was easier in the old days, when they just used telephones. All we needed was itemised phone bills. Then we could find out who they were talking to, tap the phones, and find out everything we needed. Those were the days.

Minister: Ah yes, those were the days.

The Civil Servant leans back in his chair and takes a sip from his drink. The Minister rubs his forehead looking thoughtful. Then his eyes clear.

Minister: I know. Why don’t we just make the internet people make us the equivalent of itemised phone bills, but for the internet?

Civil Servant blinks, not knowing quite what to say.

Minister: Simple, eh? Solves all our problems in one go. Those techie people can do it. After all, that’s their job.

Civil Servant: Minister….

Minister: No, don’t make it harder. You always make things difficult. Arrange a meeting.

Civil Servant: Yes, Minister


Scene 2

Minister sitting at the head of a large table, two youngish civil servants sitting before him, pads of paper in front of them and well-sharpened pencils in their hands.

Minister: Right, you two. We need a new law. We need to make internet companies make us the equivalent of Itemised Phone Bill.

Civil servant 1: Minister?

Minister: You can call them ‘Internet Connection Records’. Add them to the new Investigatory Powers Bill. Make the internet companies create them and store them, and then give them to the police when they ask for them.

Civil servant 2: Are we sure the internet companies can do this, Minister?

Minister: Of course they can. That’s their business. Just draft the law. When the law is ready, we can talk to the internet companies. Get our technical people here to write it in the right sort of way.

The two civil servants look at each other for a moment, then nod.

Civil servant 1: Yes, minister.



Scene 3

A plain, modern office, somewhere in Whitehall. At the head of the table is one of the young civil servants. Around the table are an assortment of nerdish-looking people, not very sharply dressed. In front of each is a ring-bound file, thick, with a dark blue cover.

Civil servant: Thank you for coming. We’re here to discuss the new plan for Internet Connection Records. If you look at your files, Section 3, you will see what we need.

The tech people pick up their files and leaf through them. A few of them scratch their heads. Some blink. Some rub their eyes. Many look at each other.

Civil servant: Well, can you do it? Can you create these Internet Connection Records?

Tech person 1: I suppose so. It won’t be easy.

Tech person 2: It will be very expensive

Tech person 3: I’m not sure how much it will tell you

Civil servant: So you can do it? Excellent. Thank you for coming.



The real problem is a deep one – but it is mostly about asking the wrong question. Internet Connection Records seem to be an attempt to answer the question ‘how can we recreate that really useful thing, the itemised phone bill, for the internet age’? And, from most accounts, it seems clear that the real experts, the people who work in the internet industry, weren’t really consulted until very late in the day, and then were only asked that question. It’s the wrong question. If you ask the wrong question, even if the answer is ‘right’, it’s still wrong. That’s why we have the mess that is the Internet Connection Record system: an intrusive, expensive, technically difficult and likely to be supremely ineffective idea.
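The mismatch with reality is easy to illustrate. The bill does not fix an exact format for an ICR, so the records below are invented for illustration – but something like them is what the ‘itemised phone bill’ framing imagines. A single visit to one news page generates a whole cluster of them, to advertisers, analytics firms and content delivery networks, while HTTPS hides which pages were actually read:

```python
# Hypothetical illustration of 'internet connection records' - the bill
# does not fix an exact format, and these entries are invented.
from dataclasses import dataclass

@dataclass
class ConnectionRecord:
    timestamp: str
    service: str  # the domain connected to - not the page read (HTTPS hides that)

# One person loading a single news article might generate all of these:
one_page_visit = [
    ConnectionRecord("2016-03-16T10:16:58", "news-site.example"),
    ConnectionRecord("2016-03-16T10:16:58", "cdn.example"),        # images, scripts
    ConnectionRecord("2016-03-16T10:16:59", "ad-network.example"), # advertising
    ConnectionRecord("2016-03-16T10:16:59", "analytics.example"),  # tracking
    # ...and typically dozens more third-party connections.
]

# Unlike an itemised phone bill, none of this says what was actually read -
# yet in aggregate it still exposes health sites, politics and much more.
for record in one_page_visit:
    print(record.timestamp, record.service)
```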

The question that should have been asked is really the one that the Minister asked right at the start: how can we find all these terrorists and paedophiles when they’re using all this high-tech stuff? It’s a question that should have been asked of the industry, of computer scientists, of academics, of civil society, of hackers and more. It should have been asked openly, consulted upon widely, and given the time and energy that it deserved. It is a very difficult question – I certainly don’t have an answer – but it needs to be asked, rather than shoe-horning an old idea into a new situation. The industry and computer scientists in particular need to be brought in as early as possible – not presented with an idea and told to implement it, no matter how bad that idea is.

As it is, listening to the debate, I feel sure that we will have Internet Connection Records in the final bill, and in a form not that different from the mess currently proposed. They won’t work, will cost a fortune and will bring about a new kind of vulnerability – but that won’t matter. In a few years – probably rather more than the six years currently proposed for the first real review of the law – it may finally be acknowledged that this was a bad idea; but even then it may well not be acknowledged at all. It is very hard for people to admit that their ideas have failed.


As a really helpful tweeter (@sw1nn) pointed out, there’s a ‘techie’ term for this kind of issue: an XY problem! See http://xyproblem.info. ICRs seem to be a classic example.
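
For anyone unfamiliar with the term, the classic illustration runs along these lines – a minimal Python sketch, with the filename my own choice: someone asks how to get the last three characters of a filename (the ‘X’ they have fixated on), when what they really want is the file extension (the underlying ‘Y’). ICRs are the legislative equivalent of the first question.

```python
import os

filename = "notes.html"

# X - the question actually asked: 'how do I get the last three characters?'
# It 'works' for three-character extensions and quietly fails otherwise.
last_three = filename[-3:]              # gives 'tml' - not the extension

# Y - the question that should have been asked: 'how do I get the extension?'
root, extension = os.path.splitext(filename)

print(last_three, extension)            # 'tml' vs '.html'
```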


The IP Bill: opaqueness on encryption?

One thing that all three of the Parliamentary committees that reviewed the Draft Investigatory Powers Bill agreed upon was that the bill needed more clarity over encryption.

This is the Intelligence and Security Committee report:

[Screenshot: the passage on encryption from the Intelligence and Security Committee report]

This is the Science and Technology Committee report:

[Screenshot: the passage on encryption from the Science and Technology Committee report]

This is the Joint Parliamentary Committee on the Investigatory Powers Bill:

[Screenshot: the passage on encryption from the Joint Committee’s report]

In the new draft Bill, however, this clarity does not appear to have been provided – at least as far as most of the people who have been reading through it have been able to determine. There are three main possible interpretations of this:

  1. That the Home Office is deliberately trying to avoid providing clarity;
  2. That the Home Office has not really considered the requests for clarity seriously; or
  3. That the Home Office believes it has provided clarity.

The first would be the most disturbing – particularly as one of the key elements of the Technical Capability Notices, as set out both in the original draft bill and in the new version, is that the person upon whom a notice is served “may not disclose the existence or contents of the notice to any other person without the permission of the Secretary of State” (S218(8)). The combination of an unclear power and a requirement to keep it secret is a very dangerous one.

The second possibility is almost as bad – because, as noted above, all three committees were crystal clear about how important this issue is. Indeed, their reports could serve as models for the Home Office in how to make language clear. Legal drafting is never quite as easy as it might be, but it can be clear – and it should be.

The third possibility – that they believe they have provided clarity – is also pretty disastrous in the circumstances, particularly as the time being made available to scrutinise and amend the Bill appears likely to be limited. This is the interpretation that the Home Office ‘response to consultations’ suggests – but the people who have examined the Bill so far have not, in general, found it to be clear at all. That includes both technological experts and legal experts. Interpretation of law is of course at times difficult – but that is precisely why effort must be put in to make it as clear as possible. At the moment, whether a backdoor or its equivalent could be demanded depends on whether it is ‘technically feasible’ or ‘practicable’ – terms open to interpretation – and on interdependent and somewhat impenetrable definitions of ‘telecommunications operator’, ‘telecommunications service’ and ‘telecommunications system’, which may or may not cover messaging apps, hardware such as iPhones, and so forth. Is it clear? It doesn’t seem clear to me – but I am often wrong, and would love to be corrected on this.
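
One illustration of why the feasibility question is so hard – a minimal sketch using the (real) PyNaCl library; the scenario, names and messages are mine, not anything drawn from the Bill. In an end-to-end encrypted service the operator in the middle only ever relays ciphertext and holds no key, so what it would be ‘practicable’ for a Technical Capability Notice to demand of it is far from obvious:

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; the operator never
# sees the private keys.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The operator relays the message but can only ever see ciphertext -
# 'removing the electronic protection' would mean re-engineering the
# protocol itself, which is the backdoor question in a nutshell.
what_the_operator_sees = bytes(ciphertext)

# Only Bob, holding his own private key, can decrypt.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```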

This issue is critical for the technology industry. It needs to be sorted out quickly and simply. It should have been sorted out already – which is why the first possibility, that the lack of clarity is deliberate, looms larger than it ordinarily would. If it is true, then why have the Home Office not followed the advice of all three committees on this issue?

If on the other hand this is simply misinterpretation, then some simple, direct redrafting could solve the problems. Time will tell.

The new IP Bill… first thoughts…

This morning, in advance of the new draft of the Investigatory Powers Bill being released, I asked six questions:

[Screenshot: the six questions I asked this morning]

At a first glance, they seem to have got about 2 out of 6, which is perhaps better than I suspected, but not as good as I hoped.

  1. On encryption, I fear they’ve failed again – or, if anything, made things worse. The government claims to have clarified things in S217, and indeed in the Codes of Practice – but on a first reading this seems unconvincing. The Communications Data Draft Code of Practice section on ‘Maintenance of a Technical Capability’ relies on the idea of ‘reasonability’, which is itself distinctly vague. No real clarification here – and the possibility of ordering back-doors via a ‘Technical Capability Notice’ still looms very large. (0 out of 1)
  2. Bulk Equipment Interference remains in the Bill – large-scale hacking ‘legitimised’, despite the recommendation from the usually ‘authority-friendly’ Intelligence and Security Committee that it be dropped. (0 out of 2)
  3. A review clause has been added to the Bill – but it is so anaemic as to be scarcely worth its place. S222 of the new draft says that the Secretary of State must prepare a report by the end of the sixth year after the Bill is passed, publish it and lay it before Parliament. This is not a sunset clause, and the report is not required to be independent or undertaken by a review body – just by the Secretary of State. It’s a review clause without any claws, so worth only 1/4 of a point. (1/4 out of 3)
  4. At first read-through, the ‘double-lock’ does not appear to have been notably changed. The ‘urgent’ clause has seemingly been tightened a little, from 5 days to 3 – though even that isn’t entirely clear. I’d give this 1/4 of a point (so that’s 1/2 out of 4).
  5. The Codes of Practice were indeed published with the bill (and are accessible here), which is something for which the Home Office should be applauded (so that’s 1 and 1/2 out of 5).
  6. As for giving full time for scrutiny of the Bill, the jury is still out – the rumour is a second reading today, which still looks like undue haste – so the best I can give them is 1/2 of a point, making it a total of 2 out of 6 on my immediate questions.

That’s not quite as bad as I feared – but it’s not as good as it might have been and should have been. Overall, it looks as though the substance of the bill is largely unchanged – which is very disappointing given the depth and breadth of the criticism levelled at it by the three parliamentary committees that examined it. The Home Office may be claiming to have made ‘most’ of the changes asked for – but the changes they have made seem to have been the small, ‘easy’ changes rather than the more important substantial ones.

Those still remain. The critical issue of encryption has been further obfuscated, the most intrusive powers – the Bulk Powers and the ICRs – remain effectively untouched, as do the most controversial ‘equipment interference’ powers. The devil may well be in the detail, though, and that takes time and careful study – there are people far more able and expert than me poring over the various documents as I type, and a great deal more will come out of that study. Time will tell – if we are given that time.