The snoopers' charter

I have just made a ‘short submission’ to the Joint Committee on Human Rights (JCHR) regarding the Draft Communications Data Bill – I’ve reproduced the contents below. I have reformatted it in order to make it more readable here, but other than the formatting this is what I sent to the committee.

The JCHR will not be the only committee looking at the bill – at the very least there will be a special committee for the bill itself. The JCHR is important, however, because, as I set out in my short submission, internet surveillance should be viewed very much as a human rights issue. In the submission I refer to a number of the Articles of the European Convention on Human Rights (available online here). For reference, the Articles I refer to are the following: Article 8 (Right to Respect for Private and Family Life), Article 9 (Freedom of Thought, Conscience and Religion), Article 10 (Freedom of Expression), Article 11 (Freedom of Assembly and Association) and Article 14 (Prohibition of Discrimination).

Here is the submission in full:

——————————————————–

Submission to the Joint Committee on Human Rights

Re: Draft Communications Data Bill

The Draft Communications Data Bill raises significant human rights issues – most directly in relation to Article 8 of the Convention, but also potentially in relation to Articles 9, 10, 11 and 14. These issues are raised not by the detail of the bill but by its fundamental approach. Addressing them would, in my opinion, require such a significant re-drafting of the bill that the better approach would be to withdraw the bill in its entirety and rethink the way that security and surveillance on the Internet is addressed.

I am making this submission in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research in internet law and specialise in internet privacy from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy, and set out a possible rights-based approach to internet privacy. The Draft Communications Data Bill therefore lies precisely within my academic field. I would be happy to provide more detailed evidence, either written or oral, if that would be of assistance to the committee.

1            The fundamental approach of the bill

As set out in Part 1 of the draft bill, the approach adopted is that all communications data should be captured and made available to the police and other relevant public authorities. The regulatory regime set out in Part 2 concerns accessing the data, not gathering it: gathering is intended to be automatic and universal. Communications data is defined in Part 3 Clause 28 very broadly, via the categories of ‘traffic data’, ‘use data’ and ‘subscriber data’, each of which is defined in such a way as to attempt to ensure that all internet and other communications activity is covered, with the sole exception of the ‘content’ of a communication.

The all-encompassing nature of these definitions is necessary if the broad aims of the bill are to be supported: if the definitions do not cover any particular form of internet activity (whether existent or under development), then the assumption would be that those who the bill would intend to ‘catch’ would use that form. That the ‘content’ of communications is not captured (though it is important in relation to more conventional forms of communication such as telephone calls, letters and even emails) is of far less significance in relation to internet activity, as shall be set out below.

2            The nature of ‘Communications Data’

As noted above, the definition of  ‘communications data’ is deliberately broad in the bill. This submission will focus on one particular form of data – internet browsing data – to demonstrate some of the crucial issues that arise. Article 8 of the Convention states that:

“Everyone has the right to respect for his private and family life, his home and his correspondence”

On the surface, it might appear that ‘communications data’ relates to the ‘correspondence’ part of this clause – and indeed communications like telephone calls, emails, text messages, tweets and so forth do fit into this category – but internet browsing data has a much broader impact upon the ‘private life’ part of the clause. A person’s browsing can reveal far more intimate, important and personal information about them than might be immediately obvious. It would reveal which websites are visited, which search terms are used, which links are followed, which files are downloaded – and also when, and how long sites are perused, and so forth. This kind of data can reveal habits, preferences and tastes – and can uncover, to a reasonable probability, religious persuasion, sexual preferences, political leanings and so on.

What is more, analytical methods through which more personal and private data can be derived from browsing habits have already been developed, and are continuing to be refined and extended, most directly by those involved in the behavioural advertising industry. Significant amounts of money and effort are being spent in this direction by those in the internet industry – it is a key part of the business models of Google, Facebook and others. It is already advanced – but we can expect the profiling and predictive capabilities to develop further.

What this means is that by gathering, automatically and for all people, ‘communications data’, we would be gathering the most personal and intimate information about everyone. When considering this bill, that must be clearly understood. This is not about gathering a small amount of technical data that might help in combating terrorism or other crime – it is about universal surveillance and, ultimately, profiling. That ‘content’ data is not gathered is of far less significance – and focussing on it is an old-fashioned argument, based on a world of pen and paper that is to a great extent one of the past.

3            Articles 9, 10, 11 and 14

The kind of profiling discussed above is what brings Articles 9, 10, 11 and 14 into play: it is possible to determine (to a reasonable probability) individuals’ religions and philosophies, their languages used and even their ethnic origins, and then use that information to monitor them both online and offline. When communications (and in particular the internet) are used to organise meetings, to communicate as groups, to assemble both offline and online, this can become significant. Meetings can be monitored or even prevented from occurring, groups can be targeted and so forth. It can enable discrimination – and even potentially automate it. Oppressive regimes throughout the world have recognised and indeed used this ability – recently, for example, the former regime in Tunisia hacked into both Facebook and Twitter to attempt to monitor the activities of potential rebels.

It is of course this kind of profiling that can make internet monitoring potentially useful in counterterrorism – but making it universal rather than targeted will impact directly on the rights of the innocent, rights that according to Articles 8, 9, 10, 11 and 14 should be respected.

4            The vulnerability of data

The approach taken by the bill is to gather all data, then to put ‘controls’ over access to that data. That approach is flawed for a number of reasons.

Firstly, it is a fallacy to assume that data can ever be truly securely held. There are many ways in which data can be vulnerable, both from a theoretical perspective and in practice. Technological weaknesses – vulnerability to ‘hackers’ etc – may be the most ‘newsworthy’ in a time when hacker groups like ‘Anonymous’ have been gathering publicity, but they are far from the most significant. Human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to ‘monetise’ data) may be more significant – and the ways that all these vulnerabilities can combine makes the risk even greater.

In practice, those groups, companies and individuals that might be most expected to be able to look after personal data have been subject to significant data losses. The HMRC loss of child benefit data discs, the MOD losses of armed forces personnel and pension data and the numerous and seemingly regular data losses in the NHS highlight problems within those parts of the public sector which hold the most sensitive personal data. Swiss banks’ losses of account data to hacks and data theft demonstrate that even those with the highest reputation and need for secrecy – as well as the greatest financial resources – are vulnerable to human intervention. The high-profile hacks of Sony’s online gaming systems show that even those with access to the highest level of technological expertise can have their security breached. These are just a few examples, and whilst in each case different issues lay behind the breach, the underlying issue is the same: where data exists, it is vulnerable.

What is more, designing and building systems to implement legislation like the Communications Data Bill exacerbates the problem. The bill is not prescriptive as to the methods that would be used to gather and store the data, but whatever method is used would present a ‘target’ for potential hackers and others: where there are data stores, they can be hacked, where there are ‘black boxes’ to feed real-time data to the authorities, those black boxes can be compromised and the feeds intercepted. Concentrating data in this way increases vulnerability – and creating what are colloquially known as ‘back doors’ for trusted public authorities to use can also allow those who are not trusted – of whatever kind – to find a route of access.

Once others have access to data – or to data monitoring – the rights of those being monitored are even further compromised, particularly given the nature of the internet. Information, once released, can spread without control.

5            Function Creep

As important as the vulnerabilities discussed above is the risk of ‘function creep’ – that when a system is built for one purpose, that purpose will shift and grow, beyond the original intention of the designers and commissioners of the system. It is a familiar pattern, particularly in relation to legislation and technology intended to deal with serious crime, terrorism and so forth. CCTV cameras that are built to prevent crime are then used to deal with dog fouling or to check whether children live in the catchment area for a particular school. Legislation designed to counter terrorism has been used to deal with people such as anti-arms trade protestors – and even to stop train-spotters photographing trains.

In relation to the Communications Data Bill this is a very significant risk – if a universal surveillance infrastructure is put into place, the ways that it could be inappropriately used are vast and multi-faceted. What is built to deal with terrorism, child pornography and organised crime might creep towards less serious crimes, then anti-social behaviour, then the organisation of protests and so forth. Further to that, there are many commercial lobbies that might push for access to this surveillance data – those attempting to combat breaches of copyright, for example, would like to monitor for suspected examples of ‘piracy’. In each individual case, the use might seem reasonable – but the function of the original surveillance, and the justification for its initial imposition, can be lost.

Prevention of function creep through legislation is inherently difficult. Though it is important to be appropriately prescriptive and definitive about the functions for which the legislation, and any systems put in place to implement it, may be used, function creep can and does occur through the development of different interpretations of legislation, amendments to legislation and so forth. The only real way to guard against function creep is not to build the systems in the first place: a key reason to reject this proposed legislation in its entirety rather than to look for ways to refine or restrict it.

6            Conclusions

The premise of the Communications Data Bill is fundamentally flawed. By its very design, innocent people’s data will be gathered (and hence become vulnerable) and their activities will be monitored. Universal data gathering or monitoring is almost certain to be disproportionate at best, highly counterproductive at worst.

Even without considering the issues discussed above, there is a potentially even bigger flaw with the bill: on the surface, it appears very unlikely to be effective. The people that it might wish to catch are the least likely to be caught – those who are expert with the technology will be able to find ways around the surveillance, or ways to ‘piggy back’ on other people’s connections and draw more innocent people into the net. As David Davis put it, only the incompetent and the innocent will get caught.

The entire project needs a thorough rethink. Warrants (or similar processes) should be put in place before the gathering of the data or the monitoring of the activity, not before the accessing of data that has already been gathered, or the ‘viewing’ of a feed that is already in place. A more intelligent, targeted rather than universal approach should be developed. No evidence has been made public to support the suggestion that a universal approach like this would be effective – it should not be sufficient to just suggest that it is ‘needed’ without that evidence.

That brings a bigger question into the spotlight, one that the Joint Committee on Human Rights might think is the most important of all. What kind of a society do we want to build – one where everyone’s most intimate activities are monitored at all times just in case they might be doing something wrong? That, ultimately, is what the Draft Communications Data Bill would build. The proposals run counter to some of the basic principles of a liberal, democratic society – a society where there should be a presumption of innocence rather than of suspicion, and where privacy is the norm rather than the exception.

Dr Paul Bernal
Lecturer in Information Technology, Intellectual Property and Media Law
UEA Law School
University of East Anglia
Norwich NR4 7TJ

——————————————

Votes for kids?

Earlier today I retweeted a tweet suggesting that we lower the voting age from 18 to 16. It was just a ‘casual’ retweet, on the spur of the moment, but when the excellent Owen Blacker (@owenblacker on twitter) challenged me to blog about it, I started thinking more about the subject… and the more I thought about it, the clearer I became that I’m definitely in favour of lowering the voting age, and possibly even beyond 16.

There are a number of reasons for this, some of which have really come to the fore over recent weeks and months. Some are very direct and practical, others much more philosophical. Some are based on what’s happening right now – others on aspirations for the future. Some are because I think the process of voting will benefit the kids, some will benefit the whole of society. Some are based on my own experience as a kid, others on my observations and work with children and young people over the years – and I should say, just to make it clear, I’m 47 years old, and have a 6 year old daughter!

Looking first at the kids: one of the main accusations made about kids is that they’re irresponsible. Putting them into a situation where they have the chance to vote could help shift that – if they understand that how they vote could actually change things, they might think more about it. I’ve worked a little in ‘democratic schools’, where the kids get to vote on all aspects of school policy, and though they sometimes put forward fairly silly policies, in general they showed immense capacity for taking responsibility. The more responsibility you give them, the better they are at respecting that responsibility.

The second, related factor is that habits formed at that age can last a lifetime. Just as I (and many people I know) still listen to the music that was around in our youth (I’m still a huge fan of the Clash, the Cure, Elvis Costello, the Specials etc), the political habits we form as youths have a huge influence on the rest of our lives. The habit formed by most is one of apathy – and the fact that as a 16 year old you know you have no influence supports that apathy. Bring in the vote for 16 year olds and you have a chance – not necessarily a big chance, but a chance – that you will find more politically engaged adults emerging. That, for me, would be a very good thing. At the moment, our voter turnout is generally abysmal and our engagement with the issues often superficial at best. That isn’t good for anyone!

The next issue is one that has come to the fore over the last few days: education policy. If 16-18 year olds had more of an influence over education policy, I don’t think it would be quite as easy for policies like Michael Gove’s horrendous suggestion of bringing back O levels to come about. As this policy highlights, it is all too easy for politicians to base their judgment and their policies on their own experience – and that experience is often so far from the real experience of real children in schools as to be useless at best, highly damaging at worst. Giving children at least some say could help improve things in that way. If more voters are closer to the ‘coalface’ of education, then politicians will have to pay a little bit more attention to the reality rather than the ideological dogma and their half-remembered childhoods.

In a somewhat similar way, if we had more young people voting – and if politicians had to pay more than lip-service to the youth – we might have more chance of a sensible set of digital policies. As I’ve blogged before, government digital policy is generally dire – partly because they’re almost completely out of touch with the reality of the internet. Young people are much more likely to understand, and to ‘get it’. They’d be much less likely to push absurdities like the snoopers charter, or to allow hideous abuses like the extradition of Richard O’Dwyer (which is being fought against by many – see here, for example). That, from someone who works in the digital field, would be wonderful.

There are other, more general reasons that I would support the lowering of the voting age – and why I think the arguments against lowering the voting age have less weight than they might. The first is about idealism: some suggest that the young are too idealistic, that they don’t understand the realities of the world, so they shouldn’t be allowed to vote. Frankly, I think that is a reason TO allow them to vote, not a reason to oppose it. We need more idealism, more radicalism, more willingness to challenge the status quo, not less. Have we ‘oldsters’ done such a good job after all? The Dalai Lama, who I had the privilege to hear lecture last week (see my blog here) made one of the key messages of his talk an emphasis on youth…. and he was right!

The next is a simple one: anything that increases awareness of democracy, of the responsibilities of the democratic process, must be, in general, a good thing. Democracies are far from faultless, but they’re the best we’ve got, and they must work at their best with more engagement and more understanding. Letting 16 year olds vote would surely help that.

Finally, what’s the downside? What would be wrong with letting 16 year-olds vote? Are they so bad? Maybe they do spend a lot of time agonising about their boyfriends, girlfriends or lack of either, maybe they do care a lot about music, and celebs – but so do huge numbers of grown-ups. What’s more, the 16 year-olds who will bother to vote are likely to be the more interested, intelligent and engaged 16 year-olds, and anyone who has worked with kids knows that some of them are wonderful, inspiring and interesting – and would vote every bit as responsibly and appropriately as any adults – and a good deal better than most.

The Dalai Lama at the LSE…

I had the deep privilege of attending the Dalai Lama’s lecture at the LSE this morning – and it really was a privilege. The subject was ‘tolerance’… …and, frankly, I thought he was remarkable.

I’m not sure exactly what I was expecting, but not what I heard and saw. Pretty much everyone knows who he is – or at least they think they do. Yes, he’s the spiritual leader of Tibetan Buddhism, and yes, he’s referred to as ‘his holiness’ – but though his speech was full of deep thoughts and ideas, that wasn’t what struck me or what impressed me. Some of what he said would be familiar to anyone who’s studied Buddhism even a little – about calmness, clearing the mind, about reducing attachments and so on – but that was only a small part of it. No, the things that impressed me weren’t what people always connect with spirituality. Three connected things stood out.

The first of these – and ultimately the point of the talk – was that life was ultimately to be enjoyed. And it was clear that he, the Dalai Lama, the deep spiritual leader, knew how to enjoy himself. He was funny! Really funny. He made jokes, he teased Conor Gearty, who was chairing the session, he laughed, he played – and completely naturally. I don’t know how to get this across properly – it was just fun.

The second he said directly: that wishful thinking, that prayer, didn’t change anything. To change things you have to take action. Real action, in the real world. I’m sure that wasn’t what people expected either – but it fitted perfectly with the first point. There was a connection between the philosophy and the real world.

The third was that he didn’t actually mention his religion at all – I don’t remember him mentioning Buddha or Buddhism once. He talked in general and very practical terms – though with philosophy behind what he said – terms that could, at least broadly, be accepted by anyone of any religion, by atheists, by agnostics. He didn’t try to proselytise, he didn’t in any sense suggest that his ‘way’ was the best or only way – or denigrate anyone else’s religion.

There was of course a lot more to what he said – the podcast and transcripts of what he said will doubtless find their way onto the LSE website soon (or may even be there already) – but, from my perspective at least, the details really didn’t matter nearly as much as the overall impression. I came away from the lecture full of hope and a sense of joy. That’s enough.

Labour and the ‘Snoopers’ Charter’…

The draft Communications Data Bill – dubbed, pretty accurately in my view, as the ‘Snoopers’ charter’ – has already been the subject of a great deal of scrutiny. I’ve blogged about it a number of times, as have many others far more expert than me. My own MP, Lib Dem Julian Huppert, will be on one of the parliamentary committees scrutinising the bill, and has spoken out about aspects of the bill with some vehemence. David Davis MP, Tory backbencher and former minister, has been one of the most vocal and eloquent opponents of the whole idea of the bill. His speech at the Scrambling for Safety conference a few months ago (which I blogged about here) was hugely impressive. I’m sure he will keep up the pressure – and I’m equally sure that there are a significant number of Tories and Lib Dems who will have at the very least sympathies for the respective positions of Davis and Huppert.

But what about Labour? No Labour MPs even appeared at the Scrambling for Safety conference – and very few have said anything much about it even since the draft bill was released. Tom Watson MP, one of very few MPs who really ‘gets’ the internet, and one who really understands privacy, has of course had one or two other things on his mind…. but what about the rest of them? All we’ve heard is cautious and even supportive noises from Yvette Cooper, and little else. That, for me, is deeply disturbing. It’s disturbing for two reasons:

  1. If we’re going to defeat this bill – and we need to defeat this bill – then we’re going to need to get the Labour Party on board, and not just because they’re the ‘opposition’.
  2. More importantly, because the Labour Party SHOULD oppose the kind of measures put forward in this bill, if they’re really the party of the ordinary person, if they’re in any sense a ‘progressive’ party, and if they’re any sort of a ‘positive’ party.

This second point is particularly important. I’ve blogged before about the problems that all our political parties have over the whole issue of privacy, but the issues for the Labour Party are particularly acute – and the challenge is particularly difficult. In order to take a positive and progressive stance on the Snoopers’ Charter, they need to make a break from the past. They need to recognise that all the anti-terror rhetoric that surrounded the invasion of Iraq and its repercussions was misguided at best – and deeply counter-productive at worst. They need to somehow acknowledge that mistakes were made both in approach and in detail. Can they do this?

It’s always hard for a politician to make a real break from the past – accusations of U-turns, of ‘flip-flopping’, of being indecisive and so forth abound, and politicians are often deeply scared of appearing ‘weak’. Moreover, the Labour Party, as I discussed in my earlier blog, can be very afraid of appearing not to understand the ‘harsh realities’ of the world. They want to appear tough, to be able to make the ‘tough decisions’ – and not to let the Tories be the ‘party of law and order’, and of ‘security’. The scars of the unilateral nuclear disarmament policies of the 80s are still not really healed.

…and yet, I think there might be a chance. Even now, with the infighting over the ‘Progress’ organisation, the soul of the Labour Party is in some ways being reforged. That could open up opportunities and not just old wounds – an opportunity for the Labour Party to assess what it actually stands for. If it makes the decisions that I hope it does – that it’s a party for ‘little people’, a party for ‘freedom’, a party that looks forward rather than back, and a party that understands the modern world, that understands young people – then it could be willing to take a positive stance over the Snoopers Charter.

The Snoopers’ Charter is an inherently repressive and retrograde piece of legislation, both in approach and in detail. It sets out a relationship between state and citizen that is not the kind of relationship that a progressive, liberal and positive political party should accept – and it works on the basis of an old kind of thinking, an old kind of fear. That should be the bottom line. I hope we can get the Labour Party to understand that.

A police state?

Yesterday saw the release of the details of the Draft Communications Data Bill – and, despite the significant competing interest in David Cameron’s appearance at the Leveson Inquiry, its arrival was greeted with a lot of attention and reaction, both positive and negative. Theresa May characterised those of us who oppose the bill as ‘conspiracy theorists’, something that got even the Daily Mail into a bit of a state. Could she, however, have a point? Are we over-egging the pudding by suggesting that the kind of thing in the Bill moves us in the direction of a police state? I’ve been challenged myself over my fairly regular use of that famous quote by the excellent Bruce Schneier:

“It’s bad civic hygiene to build an infrastructure that can be used to facilitate a police state.” (see his blog here)

One of the things I was questioned on was what we actually mean by a ‘police state’ – and it started me thinking. I’ve looked at definitions (e.g. the ever-reliable(!) wikipedia entry on ‘police state’ here) – it’s not a simple definition, and no single thing can be seen as precisely characterising what constitutes a police state. I’m no political scientist – and this is not a political science blog – but we all need to think about these things in the current climate. The primary point for me, as triggered by the Schneier quote, is that the difference between a ‘police state’ and a ‘liberal’ state is about assumptions and defaults (something I find myself writing about a lot!).

Police states

In a police state, the assumption is one of suspicion and distrust. People are assumed to be untrustworthy, and as a consequence generalised and universal surveillance makes perfect sense – and the legal, technical and bureaucratic systems are built with that universal surveillance in mind. The two police states I have most direct experience of, Burma and pre-revolutionary Romania, both worked very much in that way – the question of definitions of ‘police state’ is of course a difficult one, but when you’ve seen or experienced the real thing, it does change things.

When I visited Burma back in 1991, I now know that every local who even spoke to me in the street was picked up and ‘questioned’ after the event – I don’t know quite how ‘severely’ they were questioned, but when I first heard about it after I returned to the UK it shook me. It said a great deal – firstly, that I was being watched at all times, and secondly that even talking to me was considered suspicious, and in need of investigation. The assumption was of suspicion. The default was guilt.

My wife is Romanian, and was brought up under the Ceaușescu regime – and she generally laughs when people talk about trusting their government and believing government assurances about how they can be trusted. From all the stories she’s told me, Ceaușescu would have loved the kind of surveillance facilities and access to information that we in the UK seem to be willing to grant our government. So would Honecker in East Germany. So do all the despotic regimes currently holding power around the world – monitoring social networks and the like comes naturally to them, as it does to anyone wanting to control through information. Everyone is a suspect, everyone might be a terrorist, a subversive, a paedophile, a criminal.

‘Liberal’ states

In a liberal state the reverse should be true – people are (or should be) generally assumed to be trustworthy and worthy of respect, with the ‘criminals’, ‘subversives’ and ‘terrorists’ very much the exception. The idea of ‘innocent until proven guilty’ is a reflection (though not a precise one) of this. That is both an ideal and something that, in general, I believe works in practice. What that means, relating back to the Schneier quote, is that we should avoid putting into place anything that is generalised rather than targeted, anything that assumes suspicion (or even guilt), anything that doesn’t have appropriately powerful checks and balances (rather than bland assurances) in place. It means that you should think very, very carefully about the advantages of things before putting them in place ‘just in case’.

At the recent ‘Scrambling for Safety’ meeting about the new UK surveillance plans (which I’ve blogged about before) two of the police representatives confirmed in no uncertain terms that the idea of universal rather than targeted surveillance was something that they neither supported nor believed was effective. They prefer the ‘intelligent’ and targeted approach – and not to put in place the kind of infrastructural surveillance systems and related legal mechanisms that people like me would call ‘bad civic hygiene’.

In a liberal rather than police state, policing should be by consent – the police ‘represent’ the people, enforcing rules and laws that the people generally believe in and support. The police aren’t enemies of the people – and the people aren’t enemies of the police. The police generally know that – on twitter, in particular, I have a lot of contact with some excellent police people, and I’m sure they don’t want to be put in that kind of position.

The Communications Bill

So where does the new bill come into this? As well as the detailed issues with it (which I will be looking into over the next few weeks and months) there’s a big question of assumptions going on. It’s not just the details that matter, it’s the thinking that lies behind it. The idea that universal surveillance is a good thing – and make no mistake, that’s what’s being envisaged here – should itself be challenged, not just the details. That’s the idea that lies behind a police state.

Anonymity, trolls – and defamation?

A headline on the BBC’s website this morning reads:

“Websites to be forced to identify trolls under new measures”

Beneath it, the first sentence says something somewhat different:

“Websites will soon be forced to identify people who have posted defamatory messages online”

It’s interesting that the two ideas are considered equivalent. Are ‘trolls’ those who post defamatory messages online? Are those who post defamatory messages online ‘trolls’? For me, at least, neither of those statements is really true – though of course the idea of a ‘troll’ is something that’s hard to define with any precision. Trolls, for me at least (and I’m a bit of an old hand in internet terms), are people who try to provoke and offend, to get people to ‘bite’ – not necessarily, or even particularly often, through the use of defamation. They use a variety of tactics, from just saying stupid and annoying things to the most direct, offensive – and intimidating – things imaginable. Defamation may indeed be one of their tools, but at best it’s a side issue.

Taking that a step further, the trigger for this appears to have been the Nicola Brookes case (see e.g. here) – which was about bullying, abuse and harassment much more than it was about defamation. Sure, being called a paedophile and a drug-dealer was technically defamatory, but I don’t think defamation was what bothered Nicola Brookes. She wasn’t worrying about her reputation – she was being harassed, even ‘tortured’ in her own words.

It’s not about defamation

So why are the stories about defamation – and why is Ken Clarke suggesting changes to the Defamation Bill to deal with them? Are there other motives here? Is there something quite different going on? I suspect so – and I fear that this may be yet another attempt to use a hideous event to bring in powers that can and will be used for something quite different from that which the event concerns.

We already have the law to deal with trolls and bullies – which is why Nicola Brookes won her case, and why the man who trolled Louise Mensch was convicted, and quite rightly, in my opinion. Harassment and bullying need dealing with – but we have to be very careful about how we balance things here. Anonymity may sometimes be used to cloak bullies and trolls – but it is also crucial to protect whistleblowers, to protect victims of domestic abuse from being tracked down by their abusers, and to enable people to express important and valid opinions without fear of oppression or retribution.

Anonymity matters

This may not appear obvious in a country like ours – but what about in places like China? In Syria? The extremes demonstrate the point – and when situations become more extreme, even ‘liberal’ governments can reveal their authoritarian tendencies. We need to be sure that we don’t set in place the infrastructure – both legal and technical – that allows those authoritarian tendencies to be used too easily. My favourite quote, from the excellent Bruce Schneier, is apt here (again):

“It’s bad civic hygiene to build an infrastructure that can be used to facilitate a police state.” (see his blog here)

Acting to give too many powers (and imposing too many duties) to break anonymity would be a step in this direction – particularly through confusing (intentionally or otherwise) defamation and trolling! We should resist it, and resist it strongly.

Opt-in is no red herring…

Briefly, very briefly, Microsoft looked like being surprising but serious ‘good guys’ in relation to Internet privacy. They announced that Internet Explorer 10 would be launched with ‘do not track’ set to ‘on’ by default. That is, that out of the box (or more likely when downloaded), Internet Explorer would be set to prevent tracking by behavioural advertisers. When I read the story, I was shocked, momentarily delighted, and then instantly cynical… and the cynic ended up being right, because within a week, and before it was launched, action was taken to stop it happening.

As Wired reported it, the new draft of the Do Not Track specification, less than a week after Microsoft’s announcement, required that the system be set to ‘opt-out’, rather than ‘opt-in’: users must make a specific decision NOT to be tracked, rather than a specific decision to allow tracking. The idea of Microsoft as heroes of privacy died a quick and sadly unsurprising death…..
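For the technically minded, the mechanics here are very simple – ‘do not track’ is just an HTTP request header that the browser sends with every request, which trackers are then expected (but not legally required) to honour. The following is a minimal sketch, based on the W3C Tracking Preference Expression draft, of how a server-side tracker might read that header; the function name `should_track` is my own illustration, not anything from the specification:

```python
def should_track(headers):
    """Decide whether a server may track, based on the DNT request header.

    Per the W3C Tracking Preference Expression draft:
      DNT: 1  -> the user opts out of tracking
      DNT: 0  -> the user consents to tracking
      absent  -> no preference expressed (the contested default)
    """
    dnt = headers.get("DNT")
    if dnt == "1":
        return False  # explicit "do not track"
    if dnt == "0":
        return True   # explicit consent to tracking
    return True       # no signal: most trackers track by default
```

The whole opt-in/opt-out argument is really about that last line: whether an absent header should be read as consent (the advertisers’ preference) or as refusal – and whether browsers may set `DNT: 1` on the user’s behalf, as Microsoft proposed.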

Why did this happen?

…and why does it matter? Well, I’ve banged on a large number of times about the importance of ‘opt-in’ – partly because I’m in general an ‘autonomy’ person, who likes the idea of us having as much freedom of action as possible, and partly because I understand the importance of defaults. Defaults matter. They really matter. From a philosophical perspective they matter because they suggest (and even sometimes set) the norms of the society. Is our ‘norm’ that we’re happy to be tracked and surveilled? That’s what setting the default to ‘opt-out’ means. It means that ‘normal’ people don’t mind being tracked, and that it’s only extremists and privacy geeks who care – and they’ll find their own way to turn the tracking off. I don’t know about the rest of you, but that’s a norm I don’t want to accept!

More importantly, perhaps, defaults matter for a simple, practical reason: the majority of people don’t ever bother to change their settings – so what they’re given to start with is what they’ll stick with. The internet advertisers know that, and know it very well, which is why Microsoft’s initial announcement must have sent shivers down their spines – and why they made sure that it was quickly and relatively quietly killed. They don’t want ‘normal’ people to avoid being tracked – or even to think about whether they’re being tracked, or about the implications of being tracked.

Opt-in is NOT a red herring

At a few conferences recently I’ve been told that opt-in is a red herring, that it doesn’t matter, and that only old fuddy-duddies who really don’t ‘get it’ still care about it. At a Westminster e-Forum, the panel basically refused to answer my question about it, and tried to get the audience to laugh rather than respond. There have been good pieces written about the downside of opt-in – most notably ‘Opt-in Dystopias’ by Lundblad and Masiello (which you can find here) – and it cannot be denied that opt-in is far from a panacea. We all know that when given terms and conditions we generally just scroll through them without reading them, and click ‘OK’ when we’re asked.

That, however, does not mean that we should abandon the idea of opt-in: it just means that we should be more intelligent and flexible about it. Find a way that emphasises the important bits about something rather than giving us page after page after page of mind-numbing legalese. Use the interactive and user-friendly nature of modern software to make the process work better – rather than make it work so badly that people ignore it.

The advertisers and others who want to track us understand this very well – and they’re almost certainly delighted that, to a great extent, they’ve managed to shift the discussion away from the opt-in/opt-out agenda, and to pull the wool over the eyes of even some very experienced and quite expert privacy activists, drawing them into accepting the advertisers’ own agenda. We should not let this happen.

Defaults matter. Opt-in matters. This little story with Internet Explorer shows that the advertisers know this. Those of us working in the privacy field should remember it too.