The Internet is not a Telephone System

One of the most important statements in the report of the Joint Committee on the Draft Investigatory Powers Bill is also one that may seem, on the surface at least, to be little more than a matter of presentation.

“We do not believe that ICRs are the equivalent of an itemised telephone bill. However well-intentioned, this comparison is not a helpful one.”

The committee had to make this statement because a number of the advocates for the Bill – and for the central place that Internet Connection Records play in the Bill – have been using this comparison. Many of the witnesses to both this committee and the two other parliamentary committees that have scrutinised and reported on the Bill (the Science and Technology Committee and the Intelligence and Security Committee) have been deeply critical of the comparison. The criticisms come from a number of different directions. One is the level of intrusion: this is Big Brother Watch, in the IP Bill Committee report:

“A telephone bill reveals who you have been speaking to, when and for how long. Your internet activity on the other hand reveals every single thing you do online.”

Some criticised the technological complexity. This is from Professors John Naughton and David Vincent’s evidence to the IP Bill Committee:

“The Secretary of State said that an Internet Connection Record was “simply the modern equivalent of an itemised phone bill”. It is a deeply misleading analogy, because—whatever it turns out to be—an ICR in the current technological context will be significantly more complex and harder to compile than an itemised bill.”

Others, including myself, made the point that the way that people actually use the internet – and the way that the current communications systems function – simply does not fit the whole idea. Andrews & Arnold Limited put it like this:

“If the mobile provider was even able to tell that [a person] had used Twitter at all (which is not as easy as it sounds), it would show that the phone had been connected to Twitter 24 hours a day, and probably Facebook as well. This is because the very nature of messaging and social media applications is that they stay connected so that they can quickly alert you to messages, calls, or amusing cat videos, without any delay.”

This is Richard Clayton:

“The ICR data will be unable to distinguish between a visit to a jihadist website and visiting a blog where, unbeknown to the visitor (and the blog owner) the 329th comment (of 917) on the current article contains an image which is served by that jihadist site. So an ICR will never be evidence of intent—it merely records that some data has flowed over the Internet and so it is seldom going to be ‘evidence’ rather than just ‘intelligence’.”

The Home Secretary, however, effectively dismissed these objections – but at the same time highlighted why the mistaken comparison is more important, and more revealing than just a question of presentation.

“As people move from telephony to communications on the internet, the use of apps and so forth, it is necessary to take that forward to be able to access similar information in relation to the use of the internet. I would say it is not inaccurate and it was a genuine attempt to try to draw out for people a comparison as to what was available to the law enforcement agencies now—why there is now a problem—because people communicate in different ways, and how that will be dealt with in the future.”

There were two ways to interpret the initial comparison. One interpretation was that it was a deliberate attempt to oversimplify, to sell the idea to the people, all the time knowing that it was an inappropriate comparison – one that simultaneously downplayed the intrusiveness of the records, underestimated the difficulty there would be in creating them, and overestimated their likely effectiveness in assisting the police and the security services. The other was that those advocating the implementation of Internet Connection Records genuinely believed that the comparison was a valid and valuable one. The evidence of the Home Secretary – and indeed of others backing the Bill – seems to suggest that the second of these interpretations is closer to the truth. And though the first may seem the more worrying, as it suggests a level of deception and dissembling that should disturb anyone, it is the second that should worry us more, as it betrays a far more problematic mindset – one that sadly can be seen elsewhere in the debate over surveillance, and indeed over regulating, policing and controlling the internet in other ways.

It suggests that rather than facing up to the reality of the way the internet works, those in charge of the lawmaking (and perhaps even the policing itself) are trying to legislate as though the internet were the kind of communication system they are used to. The kind they already understand. They’re not just comparing the internet to a telephone system, they’re acting as though it is a telephone system, and trying to force everything to fit that belief. With the concept of Internet Connection Records, they’re saying to the providers of modern, complex, interactive, constantly connecting, multifaceted systems that they’ve got to create data as though their modern, complex, interactive, constantly connecting and multifaceted systems were actually old-fashioned telephone systems.

The problem is that the internet is not an old-fashioned telephone system. Pretending that it is won’t work. The problems highlighted – in particular the technical difficulties and the inevitable ineffectiveness – won’t go away no matter how much the Home Office wish for them to do so. It is a little disappointing to me that the report of the committee was not strong enough to say this directly – instead they emphasise that the government needs to explain how it will address the issues that have been raised.

Sadly it seems almost certain that the government will continue to push this idea. The future seems all too easy to predict. A few years down the line they will still be trying to get the idea to work, still trying to make it useful, still trying to prove to themselves that the internet is just like a telephone system. Many millions will have been spent, huge amounts of effort and expertise will have been wasted on a fruitless, irrelevant and ultimately self-defeating project – money, effort and expertise that could, instead, have been put into finding genuinely effective ways to police the internet as it is, rather than as they wish it still was.

 

 

The Saga Of the Privacy Shield…


(With apologies to all poets everywhere)

 

Listen to the tale I tell

Of Princes bold and monsters fell

A tale of dangers well conceal’d

And of a bright and magic shield

 

There was a land, across the bay

A fair land called the USA

A land of freedom: true and just

A land that all the world might trust

 

Or so, at least, its people cheered

Though others thought this far from clear

From Europe all the Old Folk scowled

And in the darkness something howled

 

For a monster grew across the bay

A beast they called the NSA,

It lived for one thing: information

And for this it scoured that nation

 

It watched where people went and came

It listened and looked with naught of shame

The beast, howe’er, was very sly

And hid itself from prying eyes

 

It watched while folk from all around

Grew wealthy, strong and seeming’ sound

And Merchant Princes soon emerged

Their wealth it grew surge after surge

 

They gathered data, all they could

And used it well, for their own good

They gave the people things they sought

While keeping more than p’rhaps they ought

 

And then they looked across the bay

Saw Old Folk there, across the way

And knew that they could farm those nations

And take from them their information

 

But those Old Folk were not the same

They did not play the Princes’ game

They cared about their hope and glory

Their laws protected all their stories

 

‘You cannot have our information

Unless we have negotiations

Unless our data’s safe and sound

We’ll not let you plough our ground’

 

The Princes thought, and then procured

A harbour safe and quite secure

Or so they thought, and so they said

And those Old Folk gave them their trade

 

And so that trade just grew and grew

The Old Folks loved these ideas new

They trusted in that harbour’s role

They thought it would achieve its goal

 

But while the Princes’ realms just grew

The beast was learning all they knew

Its tentacles reached every nook

Its talons gripped each face, each book

 

It sucked up each and ev’ry drop:

None knew enough to make it stop

Indeed, they knew not what it did

‘Til one brave man, he raised his head

 

And told us all, around the world

‘There is a beast, you must be told’

He told us of this ‘NSA’

And how it watched us day by day

 

He told us of each blood-drenched claw

He named each tentacle – and more

And with each word, he made us fear

That this beast’s evil held us near

 

In Europe one man stood up tall

“Your harbour is not safe at all!

You can’t protect us from that beast

That’s not enough, not in the least!”

 

He went unto Bourg of Luxem

The judges listened care’fly to him

‘A beast ‘cross the bay sees ev’rywhere

Don’t send our secrets over there!’

 

The judges liked not what they saw

‘That’s no safe harbour,’ they all swore

“No more stories over there!

Sort it out! We do all care!”

 

The Princes knew not what to do

They could not see a good way through

The beast still lurked in shadows dark

The Princes’ choices seemed quite stark

 

Their friends and fellows ‘cross the bay

Tried to help them find a way

They whispered, plotted, thought and plann’d

And then the Princes raised their hands

 

“Don’t worry now, the beast is beaten

It’s promised us you won’t be eaten

It’s changed its ways; it’s kindly now

And on this change you have our vow

 

Behold, here is our mighty shield

And in its face, the mighty yield

It’s magic, and its trusty steel

Is strong enough for all to feel

 

Be brave, be bold, you know you should

You know we only want what’s good”

But those old folk, they still were wary

That beast, they knew, was mighty scary

 

“That beast of yours, is it well chained?

Its appetites, are they contained?

Does it still sniff at every door?

Its tentacles, on every floor?”

 

The Princes stood up tall and proud

“We need no chains”, they cried aloud

“Our beast obeys us, and our laws

You need not fear its blunted claws.”

 

“Besides,” they said, “you are contrary

You have your own beasts, just as scary”

The Old Folk looked a mite ashamed

‘Twas true their own beasts were not tamed

 

“‘Tis true our beasts remain a blight

But two wrongs never make a right

It’s your beast now that we all fear

Tell us now, and make it clear!”

 

“Look here” the Princes cried aloud

“Of this fair shield we all are proud,

Its face is strong, its colours bright

There’s no more need for any fright.”


The Old Folk took that shield in hand

‘Twas shiny, coloured, bright and grand

But as they held it came a worry

Why were things in such a hurry?

 

Was this shield just made of paper?

Were their words just naught but vapour?

Would that beast still suck them dry?

And their privacy fade and die?

 

Did they trust the shield was magic?

The consequences could be tragic

The monster lurked and sucked its claws

It knew its might meant more than laws

 

Whatever happened, it would win

Despite the tales the Princes spin

It knew that well, and so did they

In that fair land across the bay.

 

 

 

 

Does the UK engage in ‘mass surveillance’?


When giving evidence to the Parliamentary Committee on the Draft Investigatory Powers Bill Home Secretary Theresa May stated categorically that the UK does not engage in mass surveillance. The reaction from privacy advocates and many in the media was something to see – words like ‘delusional’ have been mentioned – but it isn’t actually as clear cut as it might seem.

Both the words ‘mass’ and ‘surveillance’ are at issue here. The Investigatory Powers Bill uses the word ‘bulk’ rather than ‘mass’ – and Theresa May and her officials still refuse to give examples or evidence to identify how ‘bulky’ these ‘bulk’ powers really are. While they refuse, the question of whether ‘bulk’ powers count as ‘mass’ surveillance is very hard to determine. As a consequence, Theresa May will claim that they don’t, while sceptics will understandably assume that they do. Without more information, neither side can ‘prove’ they’re right.

The bigger difference, though, is with the word ‘surveillance’. Precisely what constitutes surveillance is far from agreed. In the context of the internet (and other digital data surveillance) there are, very broadly speaking, three stages: the gathering or collecting of data, the automated analysis of the data (including algorithmic filtering), and then the ‘human’ examination of the results of that analysis or filtering. This is where the difference lies: privacy advocates and others might argue that the ‘surveillance’ happens at the first stage – when the data is gathered or collected – while Theresa May, David Omand and those who work for them would be more likely to argue that it happens at the third stage – when human beings are involved.

If the surveillance occurs when the data is gathered, there is little doubt that the powers envisaged by the Investigatory Powers Bill would constitute mass surveillance – the Internet Connection Records, which appear to apply to pretty much everyone (so clearly ‘mass’) would certainly count, as would the data gathered through ‘bulk’ powers, whether it be by interception, through ICRs, or through the mysterious ‘bulk personal datasets’ about which we are still being told very little.

If, however, the surveillance only occurs when human beings are involved in the process, then Theresa May can argue her point: the amount of information looked at by humans may well not be ‘massive’, regardless of how much data is gathered. That, I suspect, is her point here. The UK doesn’t engage in ‘mass surveillance’ on her terms.

Who is right? Analogies are always dangerous in this area, but it would be like installing a camera in every room of every house in the UK, turning that camera on, having the footage recorded and stored for a year – but having police officers only look at limited amounts of the footage and only when they feel they really need to.

Does the surveillance happen when the cameras are installed? When they’re turned on? When the footage is stored? When it’s filtered? Or when the police officers actually look at it? That is the issue here. Theresa May can say, and be right, that the UK does not engage in mass surveillance, if and only if it is accepted that surveillance only occurs at the later stages of the process.

In the end, however, it is largely a semantic point. Privacy invasion occurs when the camera is installed and the capability of looking at the footage is enabled. That’s been consistently shown by recent rulings at both the Court of Justice of the European Union and of the European Court of Human Rights. Whether it is called ‘surveillance’ or something else, it invades privacy – which is a fundamental right. That doesn’t mean that it is automatically wrong – but that the balancing act between the rights of privacy (and freedom of expression, of assembly and association etc that are protected by that privacy) and the need for ‘security’ needs to be considered at the gathering stage, and not just at the stage when people look at the data.

In practice, too, the middle of the three stages – the automated analysis, filtering or equivalent – may be more important than the last one. Decisions are already made at that stage, and this is likely to increase. Surveillance by algorithm is likely to be (and may already be) more important than surveillance by human eyes, ears and minds. That means that we need to change our mindset about which part of the surveillance process matters. Whether we call it ‘mass surveillance’ or something else is rather beside the point.

Global letter on Encryption – why it matters.

I am one of the signatories on an open letter to the governments of the world that has been released today. The letter has been organised by Access Now and there are 195 signatories – companies, organisations and individuals from around the world.

The letter itself can be found here. The key demands are the following

[Screenshot: the key demands set out in the letter]

It’s an important letter, and one that should be shared as widely as possible. Encryption matters, and not just for technical reasons and not just for ‘technical’ people. Even more than that, the arguments over encryption are a manifestation of a bigger argument – and, I would argue, a massive misunderstanding that needs to be addressed: the idea that privacy and security are somehow ‘alternatives’, or at the very least that privacy is something that needs to be ‘sacrificed’ for security. The opposite is the case: privacy and security are not alternatives, they’re critical partners. Privacy needs security and security needs privacy.

The famous (and much misused) saying often attributed (probably erroneously) to Benjamin Franklin, “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety” is not, in this context at least, strong enough. In relation to the internet, those who would give up essential privacy to purchase a little temporary security will get neither. It isn’t a question of what they ‘deserve’ – we all deserve both security and privacy – but that by weakening privacy on the internet we weaken security.

The conflict over encryption exemplifies this. Build in backdoors, weaken encryption, prevent or limit the ways in which people can use it, and you both reduce their privacy and their security. The backdoors, the weaknesses, the vulnerabilities that are provided for the ‘good guys’ can and will be used by the ‘bad guys’. Ordinary people will be more vulnerable to criminals and scammers, oppressive regimes will be able to use them against dissidents, overreaching authorities against whistleblowers, abusive spouses against their targets and so forth. People may think they have ‘nothing to hide’ from the police and intelligence agencies – but that is to fundamentally miss the point. Apart from everything else, it is never just the police and the intelligence agencies that our information needs protection from.

What is just as important is that there is no reason (nor evidence) to suggest that building backdoors or undermining encryption helps even in the terms suggested by those advocating it. No examples have been provided – and whenever they are suggested (as in the aftermath of the Paris terrorist attacks) they quickly dissolve when examined. From a practical perspective this makes sense. ‘Tech-savvy’ terrorists will find their own ways around these approaches – DIY encryption at their own end, for example – while non-tech-savvy terrorists (the Paris attackers seem to have used unencrypted SMSs) can be caught by other means, if we take a different and more intelligent approach. Undermining or ‘back-dooring’ encryption puts us all at risk without even helping. The superficial attractiveness of the idea is just that: superficial.

The best protection for us all is a strong, secure, robust and ‘privacy-friendly’ infrastructure, and those who see the bigger picture understand this. This is why companies such as Apple, Google, Microsoft, Yahoo, Facebook and Twitter have all submitted evidence to the UK Parliament’s Committee investigating the draft Investigatory Powers Bill – which includes provisions concerning encryption that are ambiguous at best. It is not because they’re allies of terrorists or because they make money from paedophiles, nor because they’re putty in the hands of the ‘privacy lobby’. Very much the opposite. It is because they know how critical encryption is to the way that the internet works.

That matters to all of us. The internet is fundamental to the way that we live our lives these days. Almost every element of our lives has an online aspect. We need the internet for our work, for our finances, for our personal and social lives, for our dealings with governments, corporations and more. It isn’t a luxury any more – and neither is our privacy. Privacy isn’t an indulgence – and neither is security. Encryption supports both. We should support it, and tell our governments so.

Read the letter here – and please pass it on.

The Surveillance Elephant in the Room…


Yesterday’s decision in the Court of Justice of the European Union (CJEU) in what has been dubbed the ‘Europe vs Facebook’ case was, as the Open Rights Group puts it, a ‘landmark victory for privacy rights’. Much has already been written about it. I do not propose to cover the same territory in any depth – the Open Rights Group blog post linked to above gives much of the background – but instead to examine the response of the European Commission, and the elephant in the Commission’s room: surveillance.

The judgment was published yesterday morning, and its essence was very simple. The ‘safe harbor’ agreement, which effectively allows personal data to be transferred from the EU to the US by some 4,000 or so companies, was declared invalid, because though under the agreement the relevant US companies promise to provide protection for that data in many ways – security, promising not to repurpose it, misuse it, hold it longer than necessary and so forth, essentially along the lines of European Data Protection law – there was one thing that it could not provide protection from: surveillance by the US authorities.

As the CJEU put it (paragraph 94 of the ruling):

“…legislation permitting the public authorities to have access on a generalised basis to the content of electronic communications must be regarded as compromising the essence of the fundamental right to respect for private life…”

This is where the European Commission comes in. It was the Commission that made the ‘safe harbor’ decision, setting up the safe harbor system, which should, in accordance with data protection law, have ensured that data was adequately protected in the US. The Commission did not ensure that – and did not even state that it did – primarily because the state of US surveillance law (and, as far as we know, US surveillance practice) could not allow it. US surveillance law means that ‘national security, public interest, or law enforcement requirements’ override privacy and other rights where non-US citizens are concerned, and EU citizens have no form of protection against this, or legal remedies available.

The Elephant in the Room

This, it must be clear, is a fundamental issue. If the US can do this, without control or redress, then whatever systems are in place, whatever systems are brought in to replace the now invalidated ‘Safe Harbor’, will similarly breach fundamental privacy rights. No new ‘safe harbor’, no individual arrangements for particular companies, no other sidestepping plans would seem to be possible. Unless US surveillance law – and US surveillance practice – is changed, no safe harbor would seem to be possible.

The Commission, however, does not seem willing – or perhaps ready – to confront this issue. Their brief statement in response to the ruling, published yesterday afternoon, does not mention surveillance even once. That in itself is quite remarkable. The closest it gets to accepting what is, in fact, the essence of the ruling, is a tangential reference to ‘the Snowden revelations in 2013’ without mentioning anything about what those revelations related to. There is no mention of US surveillance law, of the NSA, of national security or of anything else relating to it. The surveillance elephant in the room looms over everything but the Commission seems to be pretending that it does not even exist.

The US authorities, however, are quite aware of the elephant – in a somewhat panicky press release last week, between the Advocate General’s opinion that presaged the CJEU ruling and the ruling itself, the ‘US Mission to the European Union’ said that the ‘United States does not and has not engaged in indiscriminate surveillance of anyone, including ordinary European citizens’. They do not, however, seem to have convinced the CJEU of this. Far from it.

Heads in the sand

In a way it should not be a surprise that the Commission seems to have their heads in the sand about this issue. It is not at all easy to see a way out of this. Will the US stop or change its surveillance practices and law? It is hard to imagine that they would, particularly in response to a ruling in a European court. Can they provide convincing evidence that they are not engaging in mass, indiscriminate surveillance? Again it seems unlikely, primarily because the evidence increasingly points in precisely the opposite direction.

There are big questions about what actually constitutes ‘surveillance’ – does surveillance occur when data is ‘collected’, when it is accessed automatically or analysed algorithmically, or when human eyes are involved? The US (and UK) authorities suggest the last of these, but the European Courts (both the CJEU and the European Court of Human Rights) have found that privacy rights are engaged when data is gathered or held – and rightly so, in the view of most privacy scholars. There are many reasons for this. There is a chilling effect of the existence of the surveillance apparatus itself and the ‘panopticon’ issue: we alter our behaviour when we believe we might be being watched, not just when we are watched. There is the question of data vulnerability – if data has been gathered, then it might be hacked, lost or leaked even before it is analysed. The very existence of the Snowden leaks makes it clear that even the NSA isn’t able to guarantee its data security. Fundamentally, where data exists, it is vulnerable. There are other arguments – the strength of algorithmic analysis, for example, may well mean that there is more effective intrusion without human involvement in the process, the importance of meta-data and so forth – but they all point in the same direction. Data gathering, despite what the US and UK authorities might wish to say, does interfere with our privacy. That means, in the end, that fundamental rights are engaged.

What happens next?

That is the big question. The invalidation of safe harbor has huge repercussions and there will be some manic lobbying taking place behind the scenes. The Commission will have to consider the surveillance elephant in the room soon. It isn’t going away on its own.

And behind that elephant there are other elephants: if US surveillance and surveillance law is a problem, then what about UK surveillance? Is GCHQ any less intrusive than the NSA? It does not seem so – and this puts even more pressure on the current reviews of UK surveillance law taking place. If, as many predict, the forthcoming Investigatory Powers Bill will be even more intrusive and extensive than current UK surveillance laws this will put the UK in a position that could rapidly become untenable. If the UK decides to leave the EU, will that mean that the UK is not considered a safe place for European data? Right now that seems the only logical conclusion – but the ramifications for UK businesses could be huge.

More huge elephants are also looming – the various worldwide trade agreements currently being semi-secretly negotiated, from the TPP (Trans-Pacific Partnership – between various Pacific Rim countries including the US, Australia, NZ and Japan) to the TISA (the Trade in Services Agreement), TTIP (Transatlantic Trade and Investment Partnership – between the EU and the US) and CETA (Comprehensive Economic and Trade Agreement – between Canada and the EU) – appear to involve data flows (and freedom from government interference with those data flows) that would fly directly in the face of the CJEU ruling. If data needs to be safe from surveillance, it cannot be allowed to flow freely into places where surveillance is too indiscriminate and uncontrolled. That means the US. These agreements would also seem likely to allow (or even require) various forms of surveillance to let copyright holders ensure their rights are upheld – and if surveillance for national security and public safety is an infringement of fundamental rights, so would surveillance to enforce copyright.

What happens next, therefore, is hard to foresee. What cannot be done, however, is to ignore the elephant in the room. The issue of surveillance has to be taken on. The conflict between that surveillance and fundamental human rights is not a merely semantic one, or one for lawyers and academics, it’s a real one. In the words of historian and philosopher Quentin Skinner “the current situation seems to me untenable in a democratic society.” The conflict over Safe Harbor is in many ways just a symptom of that far bigger problem. The biggest elephant of all.

The ethical case for ad-blocking

The ad-blocking wars have been hotting up over the last few months – triggered in part by Apple’s integration of ad-blocking into the new version of iOS, the operating system for iPhones and iPads. Some of the commentary, particularly from those associated with the advertising industry, has been more than a touch hyperbolic. Seasoned internet-watchers will be very familiar with ‘such-and-such will break the internet’ stories: the number of things that we’ve been told will break the internet over the years is huge. It’s as familiar as the ‘such-and-such technology/practice will kill music’ stories that have been around since the advent of recording – from home-taping to file-sharing, music has died almost as often as Sean Bean in the movies. And yet music still lives. And thrives. As does the internet, despite all the things that should have killed it.

The latest idea is that ad-blockers will break the internet. A particular piece in The Verge – one that has been very widely read and shared – puts forward the entirely believable suggestion that Apple has included ad-blocking in iOS as part of its global war with Google and Facebook. The overall premise is highly convincing – of course Apple will do whatever it can to ‘win’ against Google and Facebook, and of course this is an opportunity to make some ground. Both Google and Facebook do make their money (or most of it) from advertising, so restricting, controlling or blocking advertising could potentially reduce that income. And Apple is a business, and will be looking for opportunities that give it a commercial advantage over its rivals. So, however, are Google and Facebook – despite their efforts to portray themselves as providers of free and wonderful services to all, guardians and supporters of freedom of expression, and so fundamental to the infrastructure of the internet we love that any challenges to them (and their business models) are challenges to the internet itself.

Publishers and the advertising industry – and in particular bodies that ‘represent’ the advertising industry – are equally aggressive, suggesting that ad-blocking is ‘unethical’, ‘hypocritical’ or worse. They have pursued ad-block software providers in the courts in Europe – consistently losing, most recently in Germany last week, where the makers of AdblockPlus made their fourth successful defence against a legal challenge. The media onslaught has been extensive, and supported by many commentators. And yet Adblock software seems to be increasingly popular and successful, both on computers and on mobile.

Why is this? Is it because those who use ad-blocking software are unethical? Because they come from the ‘something for nothing’ culture? Because they don’t understand the economics of the internet, and so are blindly going down a route that can lead only to disaster? I don’t think so. Quite the reverse: I think that users of ad-blocking software are taking a positive route, both ethically and economically. If anything, it is by extending the use of ad-blocking software that the future of the internet is being secured, not the reverse. The more people that use ad-blockers, the better the future for the internet.

Why do I think this? Well, first of all, I look at some of the positives and negatives of the use of adblockers.

In favour of ad-blocking:

  1. Makes your screen clearer and makes it easier to find and read the content (particularly important on mobile)
  2. Makes the experience cleaner, clearer and less annoying
  3. Speeds up your connection – stops those processor-hungry video ads in particular
  4. Saves you money if you pay for data (which many people do)
  5. Reduces your chances of picking up malware
  6. Protects (to some degree) your privacy by stopping trackers and profilers
  7. Protects (to some degree) your privacy by stopping others (e.g. government agencies) from piggybacking on the trackers and profilers
  8. It’s your freedom of choice to put whatever software you like on your own equipment.

Against ad-blocking:

  1. Disrupts the current advertising model that supports much of the free content on the internet
  2. Stops you receiving relevant and attractive ads tailored to your profile and behaviour

This second anti-ad-blocking point is a stretch to say the least, though it is one that the advertising industry likes to push. I am far from convinced. That then leaves only the first point, that using adblockers disrupts the advertising model. And it does, no question about it. It has the potential to disrupt it hugely, which is why the advertising industry and the publishers that are supported by it are in such turmoil.

The points in favour of ad-blocking, however, include some very strong ones. Fundamentally – and this is the point that the advertising industry seems very reluctant to admit – the current model is broken. Very badly broken, from the point of view of the user, and particularly the mobile user. The first four points are critical: speed of connection for mobile is a fundamental issue, most people pay for data, and the screens of even the biggest phones (I have an iPhone 6 Plus) are small enough that advertisements often make pages all but impossible to read. One of my favourite newspapers, The Independent, was completely unreadable on my phone until I installed an ad-blocker.
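The mechanics behind these benefits are simple enough to sketch. At their core, most ad-blockers compare each outgoing request against a filter list of known advertising and tracking domains, and drop matching requests before they ever consume bandwidth or processor time. A minimal illustration – the blocklist entries below are purely hypothetical, and real blockers use filter lists such as EasyList with far richer matching syntax:

```python
# Minimal sketch of the core of an ad-blocker: match outgoing request
# hostnames against a blocklist. The domains here are illustrative only.
from urllib.parse import urlparse

BLOCKLIST = {
    "ads.example.com",      # hypothetical ad server
    "tracker.example.net",  # hypothetical tracking domain
}

def is_blocked(url: str) -> bool:
    """Return True if the request's host, or any parent domain, is listed."""
    host = urlparse(url).hostname or ""
    # Check the host and every parent domain, so that
    # "cdn.ads.example.com" is caught by an "ads.example.com" entry.
    parts = host.split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("https://ads.example.com/banner.js"))     # True
print(is_blocked("https://www.example.org/article.html"))  # False
```

Because a blocked request is never made at all, the page loads faster, the data never counts against your allowance, and the tracker never sees you – the first four points above come almost for free from this one mechanism.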

The remaining points are more ‘niche’ – I am a privacy advocate, so the privacy points really matter to me, but I realise that not all people care as much as I do, even if I believe they should – but the first four are strong enough that the points against ad-blocking would need to be very compelling, and ultimately, to me at least, they are not. Indeed, precisely the opposite.

The current situation is unsustainable

Let me return to the main point against ad-blockers. They disrupt the current advertising model that underpins much of the ‘free’ internet. Two key words: disrupt and current. Privacy-invasive, processor-intensive, screen-filling advertising is very much the current system, not something that has always existed nor something that need always exist. To assume that the current model is a ‘required’ model – a necessary model that will (and must) last forever – is ridiculous in the face of the most cursory examination of history. Things change all the time – and sometimes that change is necessary. For many people (as the uptake of adblockers reveals) the change in the current advertising model is necessary right now.

The need for disruption

The question then is how the situation can change – and part of that is the need for disruption. Without disruption, nothing will change. That is where adblockers come in, and why the use of them is a positive ethical step. If we want change, we have to act in order to make that change happen. Without adblockers, would the advertising industry be willing to change their model? The evidence points strongly against that. Advertisements have become more intrusive, more processor-hungry, more screen-filling over recent months and years, not less so. The past record of the advertising industry is not one to be celebrated. Here are just a few examples:

  • They have pretty consistently fought against attempts to make advertising less intrusive, and supported the worst excesses of advertisers. Phorm, the creepiest and most privacy-invasive of all, which thought it was OK to monitor people’s entire internet activity without consent, and even engaged in extensive secret trials without telling anyone, was supported directly by the industry bodies right until the end, when its model was ditched in the face of legal threats, EU action and abandonment by its business partners.
  • The Do Not Track initiative – through which advertisers were intended to abide by user choices set out in their browsers – was so heavily undermined by the advertisers that it fell apart. Firstly they turned ‘do not track’ into ‘do not target’ – still tracking those who opted out, gathering data and profiling them, but not serving them with targeted ads. Then they refused to accept the idea that ‘not being tracked’ could be set as the default, saying that they would ignore that choice.
  • Google and others appear to have effectively side-stepped the do not track settings in the Safari browser, still tracking users even though they had actively chosen not to be tracked: this is the background to the Google v Vidal-Hall case.

This is just a part of it – and does not even touch on the many other ethical issues connected to advertising. For advertisers to lecture others on ethics is more than a little hard to swallow.
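It is worth noting how technically trivial the Do Not Track initiative was to honour: a browser simply sends a single `DNT: 1` header with each request, and a respectful server checks for it before tracking. A minimal sketch of that server-side check – the function and its logic are illustrative, not drawn from any particular implementation:

```python
# Do Not Track is just one HTTP request header ("DNT: 1"). Honouring it
# is technically trivial: check the header and disable tracking if set.
def should_track(headers: dict) -> bool:
    """Respect the user's DNT choice: no tracking if DNT is set to '1'."""
    return headers.get("DNT") != "1"

print(should_track({"DNT": "1"}))  # False: the user has opted out
print(should_track({}))            # True: no preference expressed
```

The simplicity is rather the point: the initiative failed not because compliance was hard, but because the industry chose to reinterpret or ignore the signal.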

How, then, can the advertising industry be persuaded to change its ways? The use of disruptive technology is one key tool. If the current dysfunctional situation is to be changed – and that would seem to many to be a good thing – then more use of that disruptive technology would seem to be necessary. Just as civil disobedience is sometimes critical to get social change, the same is true on the internet. It might be pushing it too far to say that we have a duty to use ad-blockers, but I don’t think it’s that much of a push.

There are some signs that some advertisers are taking the hint. The Electronic Frontier Foundation reported last week that ‘Adblockers and Innovative Ad Companies are Working Together to Build a More Privacy-Friendly Web’ – and I hope that this is a sign of better things to come. Would the ad companies have taken this kind of step without the uptake of adblockers? I think it highly unlikely.

What is clear to me, however, is that we need a new economic model to replace the current broken one. I do not know what that model will be, but I am confident that it will emerge. The internet will not ‘break’, any more than the music industry will collapse. Our disruption is part of how that new model will be created and developed. We should not be cowed by the advertising industry, particularly on ethical grounds.

Ethical policing of the internet?

The question of how to police the internet – and how the police can or should use the internet, which is a different but overlapping issue – is one that is often discussed on the internet. Yesterday afternoon, ACPO, the Association of Chief Police Officers, took what might (just might, at this stage) be a step in a positive direction towards finding a better way to do this. They asked for help – and it felt, for the most part at least, that they were asking with a genuine intent. I was one of those that they asked.

It was a very interesting gathering – a lot of academics, from many fields and most far more senior and distinguished than me – some representatives of journalism and civil society (though not enough of the latter), people from the police itself, from oversight bodies, from the internet industry and others. The official invitation had called the event a ‘Seminar to discuss possible Board of Ethics for the police use of communications data’ but in practice it covered far more than that, including the policing of social media, politics, the intelligence services, data retention and much more.

That in itself felt like a good thing – the breadth of discussion, and the openness of the people around the table really helped. The Chatham House Rule applied (so I won’t mention any names) but the discussion was robust from the start – very robust at one moment, when a couple of us caused a bit of a ruction and one even almost got ejected. That particular ruction came from a stated assumption that one of the weaknesses of ‘pressure groups’ was a lack of technical and legal knowledge – when those of us with experience of these ‘pressure groups’ (such as Privacy International, the Open Rights Group and Big Brother Watch) know that in many ways their technical knowledge is about as good as it can be. Indeed, some of the best brains in the field on the planet work closely with those pressure groups.

That, however, was also indicative of one of the best things about the event: the people from ACPO were willing to tell us what they thought and believed, and let us challenge them on their assumptions, and tell them what we thought. And, to a great extent, we did. The idea behind all of this was to explore the possibility of establishing a kind of ‘Board of Ethics’ drawing upon academia, civil society, industry and others – and if so, what could such a board look like, what could and should it be able to do, and whether it would be a good idea to start with. This was very much early days – and personally I felt more positive after the event than I did before, mainly because I think many of the big problems with such an idea were raised, and the ACPO people did seem to understand them.

The first, and to me the most important, of those objections is to be quite clear that a board of this kind must not be just a matter of presentation. Alarm bells rang in the minds of a number of us when one of the points made by the ACPO presentation was that the police had ‘lost the narrative’ of the debate – there were slides of the media coverage, reference to the use of the term ‘snoopers’ charter’ and so forth. If the idea behind such a board is just to ‘regain the narrative’, or to provide better presentation of the existing actions of the police so as to reassure the public that everything is for the best in the best of all possible worlds, then it is not something that many of the people around the table would have wanted to be involved in. Whilst a board like this could not (and probably should not) be involved in day-to-day operational matters, it must have the ability to criticise the actions, tactics and strategies of the police, and preferably in a way that could actually change those actions, tactics and strategies. One example given was the Met Police’s now notorious gathering of communications data from journalists – if such actions had been suggested to a ‘board of ethics’, that board, if the voices around the table yesterday were anything to go by, would have said ‘don’t do it’. Saying that would have to have an effect – or if it had no effect, would have had to be made public – if the board is to be anything other than a fig leaf.

I got the impression that this was taken on board – and though there were other things that also rang alarm bells in quite a big way, including the reference on one of the slides to ‘technology driven deviance’ and the need to address it (Orwell might have rather liked that particular expression) it felt, after three hours of discussion, as though there were more possibilities to this idea than I had expected at the outset. For me, that’s a very good thing. The net must be policed – at least that’s how I feel – but getting that policing right, ensuring that it isn’t ‘over-policed’, and ensuring that the policing is by consent (which was something that all the police representatives around the table were very clear about) is vitally important. I’m currently far from sure that’s where we are – but it was good to feel that at least some really senior police officers want it to be that way.

I’m not quite clear what the next steps along this path will be – but I hope we find out soon. It is a big project, and at the very least ACPO should be applauded for taking it on.

So who’s breaking the internet this time?

I’m not sure how many times I’ve been told that the internet is under dire threat over the last few years. It sometimes seems as though there’s an apocalypse just around the corner pretty much all the time. Something’s going to ‘break’ the internet unless we do something about it right away. These last few weeks there seem to have been a particularly rich crop of apocalyptic warnings – Obama’s proposal about net neutrality yesterday being the most recent. The internet as we know it seems as though it’s always about to end.

Net neutrality will destroy us all…

If we are to believe the US cable companies, Obama’s proposals will pretty much break the internet, putting development back 20 years. How many of us remember what the internet was like in 1994? Conversely, many have been saying that if we don’t have net neutrality – and Obama’s proposals are pretty close to what most people I know would understand by net neutrality – then the cable companies will break the internet. It’s apocalypse one way, and apocalypse the other: no half measures here.

The cable companies are raising the spectre of government control of the net, something that has been a terror of internet freedom activists for a very long time – in our internet law courses we start by looking at John Perry Barlow’s 1996 ‘Declaration of the Independence of Cyberspace’, with its memorable opening:

“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.” 

Another recent incarnation of this terror has been the formerly much-hyped fear that the UN, through the International Telecommunication Union (ITU), was about to take over the internet, crushing our freedom and ending the Internet as we know it. Anyone with real experience of the way that UN bodies work would have realised this particular apocalypse had next-to-no chance of ever coming to fruition, and last week that must have become clear to most of even the more paranoid of internet freedom fighters, as the ITU effectively resolved not to even try… Not that apocalypse, at least not now.

More dire warnings and apocalyptic worries have been circling about the notorious ‘right to be forgotten’ – either in its data protection reform version or in the Google Spain ruling back in May. The right to be forgotten, we were told, is the biggest threat to freedom of speech in the coming decade, and will change the internet as we know it. Another thing that’s going to break the internet. And yet, even though it’s now effectively in force in one particular way, there’s not much sign that the internet is broken yet…

The deep, dark, disturbing web…

At times we’re also told that a lack of privacy will break the net – or that privacy itself will break the net. Online behavioural advertisers have said that if they’re not allowed to track us, we’ll break the economic model that sustains the net, so the net itself will break. We need to let ourselves be tracked, profiled and targeted or the net itself will collapse.  The authorities seem to have a similar view – recent pronouncements by Metropolitan Police Commissioner Bernard Hogan-Howe and new head of GCHQ Robert Hannigan are decidedly apocalyptic, trying to terrify us with the nightmares of what they seemingly interchangeably call the ‘dark’ web or the ‘deep’ web. Dark or deep, it’s designed to disturb and frighten us – and warn us that if we keep on using encryption, claiming anonymity or pseudonymity or, in practice, any kind of privacy, we’ll turn the internet into a paradise only for paedophiles, murderers, terrorists and criminals. It’s the end of the internet as we know it, once more.

And of course there’s the converse view – that mass surveillance and intrusion by the NSA, GCHQ etc, as revealed by Edward Snowden – is itself destroying the internet as we know it.

Money, money, money

Mind you, there are also dire threats from other directions. Internet freedom fighters have fought against things like SOPA, PIPA and ACTA – ways in which the ‘copyright lobby’ sought to gain even more control over the internet. Again, the arguments go both ways. The content industry suggest that uncontrolled piracy is breaking the net – while those who fought against SOPA etc think that the iron fist of copyright enforcement is doing the same. And for those that have read Zittrain’s ‘The Future of the Internet and How to Stop It’, it’s something else that’s breaking the net – ‘appliancization’ and ‘tethering’. To outrageously oversimplify, it’s the iPhone that’s breaking the net, turning it from a place of freedom and creativity into a place for consumerist sheep.

It’s the end of the internet as we know it…..

…or as we think we know it. We all have different visions of the internet, some historical, some pretty much entirely imaginary, most with elements of history and elements of wishful thinking. It’s easy to become nostalgic about what we imagine was some golden age, and fearful about the future, without taking a step back and wondering whether we’re really right. The internet was never a ‘wild west’ – and even the ‘wild west’ itself was mostly mythical – and ‘freedom of speech’ has never been as absolute as its most ardent advocates seem to believe. We’ve always had some control and some freedom – but the thing about the internet is that, in reality, it’s pretty robust. We, as an internet community, are stronger and more wilful than some of those who wish to control it might think. Attempts to rein it in often fail – either they’re opposed or they’re side-stepped, or they’re just absorbed into the new shape of the internet, because the internet is always changing, and we need to understand that. The internet as we know it is always ending – and the internet as we don’t know it is always beginning.

That doesn’t mean that we shouldn’t fight for what we want – precisely the opposite. We should always do so. What it does mean is that we have to understand that sometimes we will win, and sometimes we will lose. Sometimes it will be good that we win, sometimes it will be good when we lose. Whatever happens, we have to find a way – and we probably will.

Surveillance and Consent

I was fortunate enough to speak at the Internet and Human Rights Conference at the Human Rights Law Centre at the University of Nottingham on Wednesday. My talk was on the topic of internet surveillance – as performed both by governments and by commercial entities. This is approximately what I said – I very rarely have fully written texts when I talk or lecture, and this was no exception. As you can see, I had one ‘official’ title, but the talk had a number of alternative titles…

Surveillance and Consent

Or

Big Brother is watching you – and so are his commercial partners

Or

What Edward Snowden can teach us about the commercial Internet

Or

To what do we consent, when we enter the Internet?

In particular, do we consent to surveillance? If we do, by whom? When? And on what terms? There are three parts to this talk:

1) Government surveillance and consent

2) Commercial surveillance and consent

3) Forging a (more) privacy friendly future?

1: Government surveillance and consent.

Big Brother is Watching You. He really is. Some of us have always thought so – even if we’ve sometimes been called conspiracy theorists when we’ve articulated those thoughts. Since the revelations of Edward Snowden this summer, we’ve been taken a bit more seriously – and quite rightly so.

The first and perhaps most important question to ask is why the authorities perform surveillance. Counter-terrorism? That’s the one most commonly mentioned. Detection and enforcement of criminal law? Crime prevention? Prevention of disorder? Dealing with child abuse images and tracking down paedophiles? Monitoring of social trends? There are different degrees to all these areas – and potentially some very slippery slopes. Some of the surveillance is clearly beneficial – but some is highly debatable. When looking at crime and disorder this is particularly true when one considers police tactics in the past, from dealing with the anti-nuclear movements in the sixties, seventies and eighties to the shocking revelations about the infiltration of environmental activists more recently. Even this summer, the government admitted that it monitored people’s social media activities in order to ‘head off’ the badger cull protests. Was that right? Are other forms of ‘social control’ through surveillance acceptable? They should at least raise questions.

When looking at government surveillance, we need to ask what is acceptable? Where do we draw the line? Who draws that line? How much of this do we consent to? There are a number of different ways to look at this.

Societal consent?

Do we, as societies, consent to this kind of surveillance? It is not at all clear that we do, even in the UK, if the furore that led to the defeat of the Snoopers’ Charter is anything to go by, or the reaction to Edward Snowden’s revelations in most of the world (though not so much in the UK) is any guide. Do we, as societies, understand the level of surveillance that our governments are performing? It doesn’t seem likely given the surprise shown as more and more of the reality of the situation is revealed. Can we, as societies, understand all of this? Perhaps not fully, but certainly a lot more than we currently do.

Parliamentary consent?

Do we effectively consent by delegating our decisions to our political representatives? By electing them, are we consenting to their decision-making, both in general and in the particular area of internet surveillance? This is a big political question in any situation – but anyone who has observed MPs, even supposedly expert MPs, knows that the level of knowledge and understanding of either the internet or surveillance is appalling. Labour’s Helen Goodman, the Tories’ Claire Perry and the Lib Dems’ Tom Brake, all of whom have been (and still are) in positions of power and responsibility within their own parties in relation to the internet, have a level of understanding that would be disappointing in a secondary school pupil.

The Intelligence and Security Committee, who made their first public appearance in November, demonstrated that they were pretty much entirely incapable of providing the scrutiny necessary to represent us – and to hold Big Brother to account on our behalf. Most of the Home Affairs Committee – and the chair, Keith Vaz, in particular, demonstrated this even more dramatically this Tuesday, when questioning Guardian Editor Alan Rusbridger. Keith Vaz’s McCarthy-esque question to Rusbridger ‘do you love your country’ was sadly indicative of the general tone and level of much of the questioning.

There are some MPs who could understand this, but they are few and far between – Lib Dem Julian Huppert, Labour’s Tom Watson, the Tories’ David Davis are the best and perhaps only real examples, but they are mavericks. None are on the front benches, and none seem to have that much influence on their political bosses. Parliament, therefore, seems to offer little help. Whether it could ever offer that help – whether we could ever have politicians with enough understanding of the issues to act on our behalf in a meaningful way, is another question. I hope so – but I may well be pipe dreaming.

Automatic or assumed consent?

Perhaps none of this matters. Could this kind of government surveillance be something we automatically consent to when we use the Internet? Simply by using the net, do we automatically consent to being observed? Is this the price that we have to pay – and that we can be assumed to be willing to pay – in order to use the internet? Scott McNealy’s infamous quote – you have zero privacy anyway, get over it – may be old enough to represent common knowledge. Can we assume that everyone knows they have no privacy? Would that be reasonable, even if it were true? It isn’t true of the public telephone system – wholesale wiretapping isn’t acceptable or accepted, not even of the metadata.

I don’t think any of these – societal, parliamentary or ‘assumed’ – really work, or would be sufficient even if they did, because, amongst other things, we simply haven’t known what was going on. Our consent, such as it existed, could not have been informed consent, in either of the two ways that term can be understood. We did not have the information. We were deliberately kept in the dark. And experience suggests that when we do know more, we tend to object more – as events like the defeat of the Snoopers’ Charter demonstrate.

Do we know what we are consenting to?

Do we understand what the implications of this surveillance actually are? This isn’t just about privacy, no matter how much people like Malcolm Rifkind try to frame it that way. It isn’t just about the individual either – sometimes through this kind of framing it can seem as though asking for privacy is an act of selfishness, and that we should be ashamed of ourselves, and sacrifice our privacy for the greater good – for security.

This is quite wrong – and in many ways framing it in this way is deliberately deceptive. There is a significant impact on many kinds of human rights, not just on privacy. Freedom of expression is chilled – both by overt surveillance through the panopticon effect and through covert surveillance through the imbalance of power that allows control to be exerted. Freedom of association and assembly are deeply affected – both online through the disruption and chilling of online communities, and offline through the disruption of the organisation of ‘real world’ protest and so forth. There’s more too – profiling can allow for discrimination. Indeed, as we shall see, discrimination of a different form is fundamental to commercial surveillance – so can be easily enabled in other ways. Ultimately, too, it can even impact upon freedom of thought – as profiling develops, it could allow the profiler to know what you want even before you do.

So even if we have given consent before, that consent is not really valid. The internet is not like old-fashioned communications. We do more online than we ever did through other forms of communication. The nature of the surveillance itself has changed – and so has its impact. Any old consent that did exist should be revoked. If Big Brother wants to keep watching us, He needs to ask again.

2: Commercial surveillance and consent

This is an issue much closer to the common legal understanding of consent – and one that has been much debated. It’s one of the key subjects of the current discussions over the reform of the data protection regime. Edward Snowden, however, has thrown a bit of a spanner into that debate, and those discussions.

To understand what this means, we need to understand commercial surveillance better. Who does ‘commercial’ surveillance? What do I mean by commercial surveillance? Surveillance where money is the motivation – or, to be more precise, where commercial benefit is the motivation. This means things like behavioural tracking – for various purposes – but it also means profiling, it means analysis, all of which are done extensively by all the big players on the Internet, with little or no real idea of consent.

Does commercial surveillance matter?

Commercial surveillance does not often seem to be something people (other than a few privacy geeks like me) care about that much. It’s just about advertising, isn’t it? Doesn’t do anyone any harm? Opt-out’s OK, those paranoid privacy geeks can avoid it if they want, and for the rest of us it’s what pays for the net, right? For people like me, there are big concerns – and in some ways it might matter more for most people than surveillance by the NSA and GCHQ. The idea – the one that’s being sold to us – is that it’s about ‘tailoring’ or ‘personalisation’ of your web experience. We can get more relevant content and more appropriate advertising…

…but that also means that it can have a real impact on real people, from price and service discrimination to an influence on such things as credit ratings, insurance premiums and job prospects. Real things that matter to almost all of us. There’s even the possibility of political manipulation – from personalised political advertising to detailed targeting of key ‘swing’ voters, putting even more political influence into the hands of those with the deepest pockets – for it is the deepest pockets that allow access to the ‘biggest’ data, and the most sophisticated profiling and targeting systems.

What Edward Snowden could teach us…

Some parts of the revelations from Edward Snowden should make us think again. PRISM, in particular, should change people’s attitudes to commercial surveillance. This is what Edward Snowden has to teach us. Look at the purported nature of the PRISM program. ‘Direct access’ to the servers of the big Internet companies – including Google and Facebook. Who does commercial surveillance more than Google and Facebook? What’s more, the interaction between governments and businesses is much closer than it might immediately seem. They share technology – and businesses have even let governments subvert their technology, building backdoors, undermining encryption systems and so forth. They share techniques – and even share data, whether willingly or otherwise.

Shared techniques…

Behavioural profiling is just what governments want to do. Behavioural analysis is just what governments want to do. Behavioural targeting is just what governments want to do. Is identifying potential customers any different from identifying potential suspects? Is identifying potential markets any different from identifying potential protest groups (such as those involved in the aforementioned badger cull protests)? Or potential dissidents? Is predicting political trends and political risks any different from predicting market trends? Is ‘nudging’ a market that different from manipulating politics? The Internet companies have built engines to do all the authorities’ work for them (well, OK, most of the authorities’ work for them). They just need to tap into those engines. Tailor them a bit. It’s perfect surveillance, and we’ve helped build it. We’ve ‘consented’ to it.

Who is undermining privacy?

So who is undermining privacy? The spooks with their secret surveillance… ….or the business leaders telling us to share everything and that, as Mark Zuckerberg put it, ‘privacy is no longer a social norm’? This ‘de-normalisation’ of privacy – apologies for the word, which I suspect doesn’t really exist – amounts to an attempt to normalise surveillance. The extent to which this desired and pushed-for ‘de-normalisation’ has contributed to the increasing levels of surveillance is essentially a matter for conjecture, but it’s hard not to see a connection.

Paranoid privacy geeks like me have been warning about this for a while – but just because we’re paranoid, it doesn’t mean we’re wrong. In this case, it’s looking increasingly as though we were right all along – and that the situation is even worse than we thought.

Is this what we consented to when we signed up for Facebook? Is this what we consent to each time we do a Google search? Is this what we expect when we watch a YouTube video or play a game of Words with Friends? I don’t think so. With new information there should come new understanding – and a reassessment of the situation. We need to decide.

3: A (more) privacy-friendly future?

A three-way consensus is needed. People, businesses and governments need to come to an agreement about what the parameters are, about what is acceptable. About what we consent to. All three groups have power – but at the moment only the authorities seem to be really wielding theirs.

Imagine what would happen if Facebook’s Mark Zuckerberg, Google’s Sergey Brin, Apple’s Tim Cook and their fellows from Microsoft, eBay, Twitter etc all came together and said to the US government ‘No’! Would they be locked up? Would their companies be viciously punished? It seems unlikely – they are much more powerful than they realise. We often talk about the power of the corporate lobbyists – this power could be wielded in a positive way, not just a negative way…

…but it only will if there’s a profit in it for the companies concerned. And that’s where we come in.

We have a key part to play. We need to keep making noises. We need to keep informing people, keep lobbying. Make sure that the companies know that we care about privacy – and not just in relation to governments. Then the companies might start to make a move that helps us.

There are some signs that this might be the case – from the noises from Zuckerberg and so on about how upset they are about the NSA to the current crop of ‘Outlook.com’ advertisements that proclaim loudly how they don’t scan your emails the way that Google do – though it is difficult to tell whether this is just lip service. They talk a lot about transparency, not so much about a reduction in actual surveillance by government – let alone by themselves. If they can wield this power in our favour it could help a lot – but it will only be wielded in this positive way if we make them. So we must be clear that we do not consent to the current situation. We do not consent to surveillance.