Surveillance and Austerity

One of the most depressing aspects of the passing of the Data Retention and Investigatory Powers Act (DRIP) this week was the level of political consensus. All three major parties backed it, aside from a few mavericks in Tory and Labour ranks. Despite some excellent speeches in the Lords, it passed through there in double-quick time, without their Lordships even deeming it worthy of a vote. It got me thinking: what else has a similar level of consensus? The obvious answer, sadly, was austerity. Ed Miliband is due to give a speech today to Labour’s National Policy Forum which, it seems, will confirm Labour’s commitment to it.

There is no alternative…

There are more parallels between surveillance and austerity than we should feel comfortable with. Our main political parties view both surveillance and austerity as ‘given’, and as though there are no alternatives even worth considering, let alone exploring in any detail. Both, we are told, are for our own good. Those who resist both, we are told, are unrealistic dreamers or worse. If we don’t embrace both, we are told, there will be disasters, and the future is bleak.

Divisive and simplistic…

Both also rely on divisive and simplistic assumptions.

The essence of the drive to welfare ‘reform’, in particular, is the idea that there are ‘strivers’ and ‘scroungers’, and that the former are being made to suffer by the latter. The former, the ‘good’ people, don’t need welfare, and won’t suffer from the results of austerity.

The essence of the drive for surveillance is that there are ‘good’ people and ‘bad’ people – and that the ‘good’ people are being made to suffer by the ‘bad’. The former, the ‘good’ people, don’t need privacy, and won’t suffer from the results of surveillance.

In neither case are the divisive and simplistic assumptions true. As anyone who studies the details knows, the majority of people on benefits are also in work. People shift from being in work to being out of work, from needing support to being able to do without it. The whole idea of ‘scroungers’ is overplayed and divisive, particularly in relation to people with disabilities. Similarly, the idea that ‘good’ people have nothing to hide, so don’t need privacy, is one of the classic misunderstandings of privacy. We all need privacy – it’s part of what we need as humans, part of our dignity, our autonomy. It’s a pragmatic necessity too, as those in power do not always use their powers for good – the latest of the Snowden revelations, that the NSA pass around naked pictures of ordinary people found through their snooping, is just another example of how this works. Privacy isn’t about hiding – it’s about what we need as people.

It’s all about power

Ultimately, though, the thing that surveillance and austerity really have in common is power. They’re ways that those with power can keep control over those without it. Keep poor people poor and desperate, and they’re more malleable and controllable. They’ll take jobs on whatever conditions those offering them suggest. Surveillance is ultimately about control – the more information those in power have, the more they can wield that control, whether it’s monitoring social media in order to stop protests or manipulating it to make people feel happier or warm to particular products or services.

What we can do about it is another question. The real point about the people in power is that they have power… and reducing that power is hard. We should, however, at least do our best not to have the wool pulled over our eyes. This isn’t for our benefit. It’s for theirs.

DRIP: normalising the surveillance state.

Yesterday’s shameful passing of the Data Retention and Investigatory Powers Act, nodded through without amendment and without even the perceived need for a vote in the House of Lords, was not just very bad news for the UK, it was bad news for the world. The ease with which it was passed, the speed with which it was passed, and the breadth of the powers granted send signals around the world. Some of us have been warning about this effect for a long time – what we do in the UK is being watched around the world. If we, as a supposedly mature liberal democracy, believe that mass surveillance is OK, then that means that anyone could do it. Indeed, that any sensible state should do it.

I’ve been accused of paranoia for making such a suggestion. After all, this is just ‘emergency’ legislation, a mere stop-gap while a proper review of investigatory powers and data gathering goes on. Well, within a few short hours of the passing of DRIP, its echoes were already being heard on the other side of the world. Australia’s Attorney-General, George Brandis, used DRIP as an example, seemingly to help push forward his own proposals for data retention. As reported in ZDNet, he said:

“The question of data retention is under active consideration by the government. I might point out to you as recently as yesterday, the House of Commons passed a new data retention statute. This is very much the way in which western nations are going,”

This is how it goes – and one of the many reasons that the passing of DRIP yesterday was so shameful. If the UK does it, Australia does it. Then New Zealand and Canada. Each new country adds to the weight of the argument. Everyone’s doing it, why not us? If the UK thinks it needs this to keep its citizens safe, surely we need it too. By the time the long-distant sunset clause kicks in at the end of 2016, every new country that’s added a data retention law to its books, however temporary, will be another reason to extend our own security services’ powers. It’s a vicious or virtuous circle, depending on your perspective.

Of course the normalisation works in different ways too. Less scrupulous nations will be able to say that if the Brits do it, so can we – and we won’t be able to claim that they’re oppressing their populations if we do the same to our own. Further, our security services will require more and more technology to do the surveillance – and the people who develop that technology will be looking for new markets. They may sell it to the Australians – but more likely they’ll find ready markets in governments with less of a tradition of liberalism and democracy. There’s a fine selection of such nations all around the world. They’ll also find markets of other kinds – businesses wishing to use surveillance for their own purposes… whether scrupulous or not. The very criminals that the supporters of DRIP like to scare us with will be looking too – there are so many uses for surveillance that it’s hard to know where to start.

Well, actually, it should have been easy to know where to start. To make a stand. To try to normalise freedom and privacy, respect for citizens’ fundamental rights and a willingness for open, honest debate on the subject. That, however, would have required rejecting DRIP. We didn’t do that. Shame on us.

 

DRIP: Parliament in disrepute?

I watched and listened to the parliamentary debate on the Data Retention and Investigatory Powers bill (DRIP) with a kind of grim fascination. The outcome was always inevitable – I knew that, as, I think, did all opponents of the bill – but the debate itself seemed to me to be worth paying attention to. Not really in terms of the result, but in terms of the process, and in terms of the way in which parliament was engaging with the issues. There were, it has to be said, some quite wonderful speeches in opposition to the bill, and from many different directions. MPs like John McDonnell, Dominic Raab, Caroline Lucas, Diane Abbott, Pete Wishart, David Winnick, Duncan Hames, Clive Betts, Charles Walker, Dennis Skinner and of course Tom Watson and David Davis were all excellent. Indeed, as someone said at the time, the opponents didn’t lose the debate, they lost the vote.

Therein lies the problem – what was the point of the debate? The chamber was all but empty for most of it. In the middle of the debate, I got so angry I tweeted a picture of the chamber – with a comment attached. The tweet went a bit wild – retweeted 870 times at the last count, and included by Liberty in their summary of the debate.

[Screenshot: the tweeted picture of the near-empty Commons chamber during the DRIP debate]

I did, however, also get some serious criticism for the tweet. Some suggested I had faked it, because I missed out the caption at the bottom. Fair enough – I was too angry to get the screen capture right, but I don’t fake things. I satirise and parody, tease and joke – but I don’t fake. For the avoidance of doubt, I took another soon after, this time with the caption:

[Screenshot: a second picture of the chamber, this time including the caption]

Another criticism I received, quite aggressively, was that it was misleading to tweet the picture, and that most of the MPs were likely to be in their offices or their committee rooms, working hard, but following and listening to the debate as it was being broadcast throughout the House. That may well be true – and in no way was I suggesting that MPs don’t work hard. They do – well, a great many of them do – but at this particular moment, and on this particular issue, their attention was elsewhere, as was their physical presence.

I don’t blame the MPs for that part of it. Of course their attention was elsewhere – after all, they’d had this emergency debate foisted upon them at the last minute, and they already have busy lives and huge amounts of work to do, particularly with the parliamentary recess coming up, and with a reshuffle happening at that very moment. Naturally, MPs are distracted by the reshuffle – coalition MPs because their jobs are on the line, Labour MPs because they have to be ready to respond to the reshuffle. Naturally their jobs, their careers, their responsibilities come first.

That, though, is really where my tweet comes in. I said ‘This is how seriously our MPs take our privacy’. I meant it. They showed disrespect to the issue not just by not listening to the debate, but by accepting a process that meant that they only had a few hours of debate to listen to, and almost nothing to read or discuss about it. They accepted an unnecessary fast-tracking, effectively on trust – because they don’t really take our privacy seriously.

Frankly, I’m not convinced that they were listening to the debate – but if they were, that makes their voting even worse. If they listened to the debate and still voted the way they did, that is in a way even more depressing than the more natural assumption that they were largely ignoring the debate and voting according to the whip. It would mean that they either didn’t understand the strong arguments against the bill, both analytical and impassioned – or that they dismissed them as unimportant. Either way, it suggests they didn’t take our privacy seriously. At least, not seriously enough to think it needed proper, lengthy, public debate bringing in expert opinions and analysis. I’m a legal academic, specialising in internet privacy. I’ve written a book on the subject, and I’m one of the signatories of this open letter concerning DRIP – and frankly I haven’t had nearly enough time to properly analyse and understand this bill and its implications. We’ve only had a chance for the most basic of analyses – and if I can’t fully grasp it, how much understanding can MPs have of it?

As David Winnick, a veteran MP and member of the Home Affairs Select Committee put it:

“I consider this to be an outright abuse of parliamentary procedure. Even if one is in favour of what the home secretary intends to do, to do so in the manner in which it is intended, to pass all stages in one go, surely makes a farce of our responsibilities as MPs”

He’s right. It does. It brings parliament into disrepute. MPs should be ashamed of themselves.

Open letter from UK legal academic experts re DRIP

I’m one of the signatories to the letter below – one of many very serious legal academics, some of the most distinguished in the field.


 

Tuesday 15th July 2014

To all Members of Parliament,

Re: An open letter from UK internet law academic experts

On Thursday 10 July the Coalition Government (with support from the Opposition) published draft emergency legislation, the Data Retention and Investigatory Powers Bill (“DRIP”). The Bill was posited as doing no more than extending the data retention powers already in force under the EU Data Retention Directive, which was recently ruled incompatible with European human rights law by the Grand Chamber of the Court of Justice of the European Union (CJEU) in the joined cases brought by Digital Rights Ireland (C-293/12) and Seitlinger and Others (C-594/12) handed down on 8 April 2014.

In introducing the Bill to Parliament, the Home Secretary framed the legislation as a response to the CJEU’s decision on data retention, and as essential to preserve current levels of access to communications data by law enforcement and security services. The government has maintained that the Bill does not contain new powers.

On our analysis, this position is false. In fact, the Bill proposes to extend investigatory powers considerably, increasing the British government’s capabilities to access both communications data and content. The Bill will increase surveillance powers by authorising the government to:

  • compel any person or company – including internet services and telecommunications companies – outside the United Kingdom to execute an interception warrant (Clause 4(2));
  • compel persons or companies outside the United Kingdom to execute an interception warrant relating to conduct outside of the UK (Clause 4(2));
  • compel any person or company outside the UK to do anything, including complying with technical requirements, to ensure that the person or company is able, on a continuing basis, to assist the UK with interception at any time (Clause 4(6));
  • order any person or company outside the United Kingdom to obtain, retain and disclose communications data (Clause 4(8)); and
  • order any person or company outside the United Kingdom to obtain, retain and disclose communications data relating to conduct outside the UK (Clause 4(8)).

The legislation goes far beyond simply authorising data retention in the UK. In fact, DRIP attempts to extend the territorial reach of the British interception powers, expanding the UK’s ability to mandate the interception of communications content across the globe. It introduces powers that are not only completely novel in the United Kingdom, they are some of the first of their kind globally.

Moreover, since mass data retention by the UK falls within the scope of EU law, as it entails a derogation from the EU’s e-privacy Directive (Article 15, Directive 2002/58), the proposed Bill arguably breaches EU law to the extent that it falls within the scope of EU law, since such mass surveillance would still fall foul of the criteria set out by the Court of Justice of the EU in the Digital Rights and Seitlinger judgment.

Further, the bill incorporates a number of changes to interception whilst the purported urgency relates only to the striking down of the Data Retention Directive. Even if there was a real emergency relating to data retention, there is no apparent reason for this haste to be extended to the area of interception.

DRIP is far more than an administrative necessity; it is a serious expansion of the British surveillance state. We urge the British Government not to fast track this legislation and instead apply full and proper parliamentary scrutiny to ensure Parliamentarians are not misled as to what powers this Bill truly contains.

Signed,

 

Dr Subhajit Basu, University of Leeds

Dr Paul Bernal, University of East Anglia

Professor Ian Brown, Oxford University

Ray Corrigan, The Open University

Professor Lilian Edwards, University of Strathclyde

Dr Andres Guadamuz, University of Sussex

Dr Theodore Konstadinides, University of Surrey

Professor Chris Marsden, University of Sussex

Dr Karen Mc Cullagh, University of East Anglia

Dr. Daithí Mac Síthigh, Newcastle University

Professor Viktor Mayer-Schönberger, Oxford University

Professor David Mead, University of East Anglia

Professor Andrew Murray, London School of Economics

Professor Steve Peers, University of Essex

Julia Powles, University of Cambridge

Judith Rauhofer, University of Edinburgh

Professor Burkhard Schafer, University of Edinburgh

Professor Lorna Woods, University of Essex

Theresa May – even more reason to worry about DRIP…

I watched and listened to the session of the Home Affairs Select Committee this afternoon: Home Secretary Theresa May was being questioned about a number of things, including DRIP. The session was, I suspect, intended to reassure us that everything was OK, and that we needn’t worry about DRIP. The result, for me at least, was precisely the opposite: it left me feeling even more concerned.

Theresa May is the minister responsible for DRIP, and her performance before the committee suggested neither competence in managing the process nor an understanding of what the issues were or why people would be concerned. It was a performance that mixed the incompetent with the contemptuous, not just failing to provide answers but suggesting that she didn’t think the questions were even worth asking.

Many things about it were poor. May failed to explain why the legislation had to be rushed through – she could not (or would not) explain why nothing had happened publicly since the ECJ ruling in April, and she could not (or would not) provide details as to why there was pressure now. Next, she could not answer the key question on extraterritoriality – whether the powers in DRIP were in fact new. She claimed to have had advice that the powers did exist before – but couldn’t say whether or not they had ever been used.

Most importantly, though, when pushed by David Winnick on the key point – compliance with the ECJ ruling that struck down the Data Retention Directive – she fumbled and obfuscated. She either did not understand or deliberately pretended not to understand that the key point of the ruling was that blanket gathering of data was in conflict with fundamental rights. Ultimately, that’s the real point here – and she either could not or would not answer it.

To put it directly, the ruling said that blanket gathering of data, gathering data on everyone, regardless of suspicion, guilt or innocence, or any particular reason, was not appropriate. That is what the Data Retention Directive (DRD) did, and why the ECJ struck it down. They’re right, too. This isn’t some esoteric or obscure point, it’s a fundamental one, parallel to the idea of the presumption of innocence. The DRD did it, and DRIP does it – which is why at the very least we need to discuss it in much more depth. The session with Theresa May left me thinking that she either didn’t understand it or she dismissed it as unimportant. Now you may disagree on proportionality, and believe that mass surveillance is a proportionate response, but to dismiss the issue as unimportant and unworthy of discussion is indefensible.

Mind you, I don’t think people will be talking that much about this – because Theresa May’s performance when questioned about the appointment and subsequent resignation of Lady Butler-Sloss was even worse, if that can be believed. All in all, Theresa May looked neither trustworthy nor competent. It’s hard to imagine someone less appropriate to trust with the open-ended and extensive powers granted by something like DRIP.

DRIP: a shabby process for a shady law.

[An earlier version of this post appeared at The Justice Gap, here]

Thursday’s announcement by David Cameron and Nick Clegg that the coalition was going to expedite emergency surveillance legislation is something that should concern all of us, not just privacy activists. The speed with which the Data Retention and Investigatory Powers bill (‘DRIP’) is being brought into play, the lack of consultation and the breadth of its powers should matter to everyone. There is a reason that legislation usually requires time and careful consideration – and with a contentious issue like surveillance this is especially true. This is a shabby process, for what seems to be a very shady law. And, as David Davis MP has suggested, the ‘emergency’ is theatrical, not real. The need for new legislation was entirely predictable – and politicians and civil servants should have known this.

A predictable emergency

The trigger for the legislation was the ruling by the ECJ, on 8th April, that the Data Retention Directive was invalid – more than three months ago – but the signs that new legislation was needed have been there for far longer. The ruling by the ECJ exceeded the expectations of privacy advocates – but not that significantly, and the declaration that the directive was invalid should have been an outcome that civil servants and politicians were prepared for. Indeed, the Data Retention Directive has been subject to significant challenge since its inception in 2005. Peter Hustinx, the European Data Protection Supervisor, called it in 2010:

“…without doubt the most privacy invasive instrument ever adopted by the EU in terms of scale and the number of people it affects.”

Across Europe there have been protests and legal challenges to data retention throughout its history, from 30,000 people on the streets of Germany in 2007 to the declaration that data retention itself was unconstitutional in Romania. The challenge that eventually brought down the directive began in 2013.

The signs have been there in the UK too, and for far longer than three months. The Communications Data Bill – more commonly and appropriately known as the Snoopers’ Charter – was effectively abandoned well over a year ago, after a specially set-up parliamentary committee, after taking detailed evidence, issued a damning report. At that stage, even before the revelations of Edward Snowden reared their ugly head, the need for further legislation was evident.

So why, given all these warnings, has this emergency been manufactured, and why is legislation being pushed through so quickly? Is it that those behind the bill are concerned that, if it received full and detailed scrutiny, the full scale and impact of the bill would become evident and that, like the Snoopers’ Charter before it, it would fail? It is hard not to think that this has played some part in the tactics being employed here. What would there be to lose by delaying this a few months?

Companies like data too…

The suggestion that if the legislation isn’t pushed through this quickly then companies will suddenly start deleting all their communications data is naïve to say the least. Firstly, it’s hardly in most communications providers’ interest to delete all that data – actually, rather the opposite. Back in 2007, Google attempted to use the existence of data retention legislation as an excuse not to delete search logs – companies generally like having more data, as they (just like the authorities) believe they can get value from it. Moreover, businesses don’t often change their practices at the drop of a hat, even if they want to. They might, however, if they’re required to by law – and that may well be the real key here. Legal challenges to specific practices by specific companies in terms of data retention may well be in the offing – but this would take time, far more time than the few days – less than a week – that MPs are being given to pass this legislation.

Fundamental Rights

The underlying point here is that there is a reason that the Data Retention Directive was declared invalid by the ECJ, and a reason that both privacy advocates and academics have been concerned about it from the very beginning. The mass collection of communications data breaches fundamental rights – and DRIP, just like the Communications Data Bill before it, does authorise the mass collection of this data. It has the same fundamental flaws as that bill – and a few extras to boot. With the very limited time available to review the bill so far, it appears to extend the powers available through the contentious Regulation of Investigatory Powers Act (RIPA) rather than limit or modernise them (see for example the analysis by David Allen Green in the FT here – registration needed), and to attempt to extend powers outside the UK in a way that is at the very least contentious – and in need of much more scrutiny and consideration.

Most importantly, it still works on the assumption that there is no problem with collecting data, and that the only place for controls or targeting is at the accessing stage. This is a fundamentally flawed assumption – morally, legally and practically. At the moral level, it treats us all as suspects. Legally it has been challenged and beaten many times – consistently in the European Court of Human Rights, in cases from as far back as Leander in 1987, and now in the ECJ in the declaration of invalidity of the Data Retention Directive. Practically, it means that data gathered is vulnerable in many ways – from the all too evident risks of function creep that RIPA has demonstrated over the years (dog-fouling, fly-tippers etc) to vulnerability to leaking, hacking, human error, human malice and so forth. Moreover, it is the gathering of data that creates the chilling effect – impacting upon our freedom of speech, of assembly and association and so forth. This isn’t just about privacy.

Safeguards?

Nick Clegg made much of the concessions and safeguards in the new bill, emphasising that this isn’t a Snoopers’ Charter Mark 2, but it is hard to be enthusiastic about them at this stage. There is a sunset clause, meaning that DRIP will expire in December 2016 – but there is nothing in the bill itself to say that it won’t be replaced by similar ‘emergency’ legislation, railroaded through parliament in a similar way. Moreover, December 2016 is well after the election – and the Lib Dems are currently unlikely to still have any influence at that stage. Julian Huppert in particular, my MP in Cambridge, is in a very precarious position. Without him, it’s hard to see much Lib Dem resistance to either the Tories or the Labour Party, who set the ball rolling on the mass surveillance state in the Blair years.

The rest of the safeguards are difficult to evaluate at this stage – they were originally said to be contained in secondary legislation that was not published with the bill itself, but when that secondary legislation was actually released, at around 4pm on Friday afternoon, it contained almost none of what had been promised. For example, the promised restriction of the number of bodies able to use RIPA was entirely absent. That list doesn’t just include the police and intelligence services, but pretty much all local authorities, and bodies like the Food Standards Agency and the Charity Commission – another part of the function creep of RIPA. The breadth and depth of the surveillance that this bill, in combination with RIPA, would not only allow but effectively normalise is something that should be of the deepest concern to anyone who takes civil liberties seriously.

The shabbiest of processes

This is just one part of the shabbiness of the process. Two more crucial documents, ‘Impact Assessments’ performed by the Home Office concerning the data retention and interception aspects of the bill, were also released – but without even a mention, so that the first that was heard of them by most concerned people was early on Saturday morning, when vigilant investigators found them all but hidden on the Home Office website. Two documents, full of technical details looking at why the laws were ‘needed’, what their risks and benefits would be, the alternatives and so forth – pretty much hidden away. These, together with the Bill itself and the Regulations, combine to produce something with a serious level of both legal and technical complexity – something that needs very careful study and expert analysis. And to do this analysis, we are given essentially one weekend, and no warning.

How serious this is was highlighted by a brief twitter conversation between David Allen Green and MP Julian Huppert this morning:

[Screenshot: the Twitter exchange between David Allen Green and Julian Huppert]

 

David Allen Green (@JackofKent) is asking a straight and direct, technical and legal question – and Julian Huppert can’t answer it. Julian is perhaps the most technically expert member of the entire House of Commons – if he doesn’t understand the bill, its impact and how it changes the current situation, how much less can other MPs understand it? And yet they are expected to debate the bill on Monday, and pass it almost immediately. This is patently wrong – and highlights exactly why parliament generally has significant time for analysis and for debate, and parliamentary committees call experts to give testimony, to tease out these kinds of answers. Julian Huppert should not be criticised for not knowing the answer to the question – but he should be criticised for supporting a bill without allowing the time for these questions to be asked, investigated and answered. They need to be.

This is a wholly unsatisfactory state of affairs – and in a democratic society, it should be an unacceptable one. That our MPs seem willing to accept it speaks volumes.

——————–

The key documents can be found here: study them if you have time!

The draft bill

The draft regulations

The impact assessment for interception

The impact assessment for data retention.

Communications Surveillance – a miscast debate

I have just made a submission to the Intelligence and Security Committee’s call for evidence on their Privacy and Security Inquiry. The substance of the submission is set out below – the key point is that I believe that the debate, and indeed the questions asked by the Intelligence and Security Committee, are miscast in such a way as to significantly understate the impact of internet surveillance and hence make the case for that surveillance stronger than it really is. I am sure there will be many other excellent submissions to the inquiry – this is my small contribution.

——————————

Submission to the Intelligence and Security Committee by Dr Paul Bernal

I am making this submission in response to the Privacy and Security Call for Evidence made by the Intelligence and Security Committee on 11th December 2013, in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research in internet law and specialise in internet privacy from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy. I have a book dealing with the subject, Internet Privacy Rights, which will be published by Cambridge University Press in March 2014. The subject of internet privacy, therefore, lies precisely within my academic field. I would be happy to provide more detailed evidence, either written or oral, if that would be of assistance to the committee.

Executive summary

There are a great many issues that are brought up by the subject of communications surveillance. This submission does not intend to deal with all of them. It focuses primarily on three key issues:

  1. The debate – and indeed the initial question asked by the ISC – which talks of a balance between ‘individual privacy’ and ‘collective security’ is a miscast one. Communications surveillance impacts upon much more than privacy. It has an impact on all the classical ‘civil liberties’: freedom of expression, freedom of assembly and association and so forth. Privacy is not a merely ‘individual’ issue. It, and the connected rights, are community rights, collective rights, and to undermine them does more than undermine individuals: it hits at the very nature of a free, democratic society.
  2. The invasion of privacy, and the impact on the other rights mentioned above, occurs at the point when data is gathered, not when data is accessed. The mass surveillance approach that appears to have been adopted – ‘gather it all, put controls on at the access stage’ – is misconceived. The very gathering of the data has an impact on privacy, leaves data open to misuse, vulnerable to hacking, loss or misappropriation, and has a direct chilling effect.
  3. In terms of mass surveillance, meta-data can in practice be more useful – and have more of an impact on individual rights and freedoms – than content data. It can reveal an enormous amount of information about the individuals involved, and because of its nature it is more easily and automatically analysed and manipulated.

The implications of these three issues are significant: the current debate, as presented to the public and to politicians, is misleading and incomplete. That in turn means that experts remain sceptical about the motivations of those involved in the debate in favour of surveillance – and that it is very hard for there to be real trust between the intelligence services and the public.

It also means that the bar should be placed much higher in terms of evidence that this kind of surveillance is successful in achieving the aims of the intelligence services. Those aims need to be made clear, and the effectiveness of the surveillance demonstrated, if the surveillance is to be appropriate in a democratic society. Given the impact in terms of a wide spectrum of human rights – not just individual rights to privacy – the onus is on the security services to demonstrate that success, or move away from mass surveillance as a tactic.

1      A new kind of surveillance

The kind of surveillance currently undertaken – and envisaged in legislation such as the Communications Data Bill in 2012 – is qualitatively different from that hitherto imagined. It is not like ‘old-fashioned’ wiretapping or even email interception. What also makes it new is the way that we use the internet – and in particular the way that the internet is, for most people in what might loosely be described as developed societies, used for almost every aspect of our lives. Observing our internet activities therefore provides a level of scrutiny of our private lives vastly higher than any form of surveillance could have achieved in the past.

In particular, the growth of social networking sites and the development of profiling and behavioural tracking systems and their equivalents change the scope of the information available. In parallel with this, technological developments have changed the nature of the data that can be obtained by surveillance – most directly, the increased use of mobile phones, and in particular smartphones, provides new dimensions of data such as geo-location data, and allows further levels of aggregation and analysis. Other technologies such as facial recognition, in combination with the vast growth of use of digital, online photography – ‘selfie’ was the Oxford Dictionaries Word of the Year for 2013 – take this to a higher level.

This combination of factors means that the ‘new’ surveillance is both qualitatively and quantitatively different from what might be labelled ‘traditional’ surveillance or interception of communications. This means that the old debates, the old balances, need to be recast. Where traditional ‘communications’ was in some ways a subset of traditional privacy rights – as reflected, for example, in its place within Article 8 of the ECHR – the new form of communications has a much broader relevance, a wider scope, and brings into play a much broader array of human rights.

2      Individual right to privacy vs. collective right to security?

2.1      Privacy is not just an individual right

Privacy is often misconstrued as a purely individual right – indeed, it is sometimes characterised as an ‘anti-community’ right, a right to hide yourself away from society. Society, in this view, would be better if none of us had any privacy – a ‘transparent society’. In practice, nothing could be further from the truth: privacy is something that has collective benefit, supporting coherent societies. Privacy isn’t so much about ‘hiding’ things as being able to have some sort of control over your life. The more control people have, the more freely and positively they are likely to behave. Most of us realise this when we consider our own lives. We talk more freely with our friends and relations knowing (or assuming) that what we talk about won’t be plastered all over noticeboards, told to all our colleagues, to the police and so forth. Privacy has a crucial social function – it’s not about individuals vs. society. The opposite: societies cannot function without citizens having a reasonable expectation of privacy.

2.2      Surveillance doesn’t just impact upon privacy

The idea that surveillance impacts only upon privacy is equally misconceived. Surveillance impacts upon many different aspects of our lives – and how we function in this ‘democratic’ society. In human rights terms, it impacts upon a wide range of those rights that we consider crucial: in particular, it impacts upon freedom of expression, freedom of association and freedom of assembly, and others.

2.2.1      Freedom of expression

The issue of freedom of expression is particularly pertinent. Privacy is often misconstrued as somehow an ‘enemy’ of freedom of expression – blogger Paul Staines (a.k.a. Guido Fawkes), for example, suggested that ‘privacy is a euphemism for censorship’. He had a point in one particularly narrow context – the way that privacy law has been used by certain celebrities and politicians to attempt to prevent certain stories from being published – but it misses the much wider meaning and importance of privacy.

Without privacy, speech can be chilled. The Nightjack saga, of which the committee may be aware, is one case in point. The Nightjack blogger was a police insider, providing an excellent insight into the real lives of police officers. His blog won the 2009 Orwell Prize – but as a result of email hacking by a journalist working for the Times, he was unable to keep his name private, and ultimately he was forced to close his blog. His freedom of expression was stifled – because his privacy was not protected. In Mexico, at least four bloggers writing about the drugs cartels have not just been prevented from blogging – they’ve been sought out, located, and brutally murdered. There are many others for whom privacy is crucial – from dissenters in oppressive regimes to whistle-blowers to victims of spousal abuse. The internet has given them hitherto unparalleled opportunities to have their voices heard – internet surveillance can take that away. Even the possibility of being located or identified can be enough to silence them.

Internet surveillance not only impacts upon the ability to speak, it impacts upon the ability to receive information – the crucial second part to freedom of speech, as set out in both the European Convention on Human Rights and the Universal Declaration of Human Rights. If people know that the websites they visit will be tracked and observed, they’re much more likely to avoid seeking out information that the authorities or others might deem ‘inappropriate’ or ‘untrustworthy’. That, potentially, is a huge chilling effect. The UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, in his report of 2013, made it clear that the link between privacy and freedom of expression is direct and crucial.

“States cannot ensure that individuals are able to freely seek and receive information or express themselves without respecting, protecting and promoting their right to privacy. Privacy and freedom of expression are interlinked and mutually dependent; and infringement upon one can be both the cause and consequence of an infringement upon the other.”

2.2.2      Freedom of association and of assembly

Freedom of association and assembly is equally at risk from surveillance. The internet offers unparalleled opportunities for groups to gather and work together – not just working online, but organising and coordinating assembly and association offline. The role the net played in the Arab Spring has probably been exaggerated – but it did play a part, and it continues to be crucial for many activists, protestors and so forth. The authorities realise this, and also that through surveillance they can counter it. A headline from a few months ago in the UK, “Whitehall chiefs scan Twitter to head off badger protests”, should have rung the alarm bells – is ‘heading off’ a protest an appropriate use of surveillance? It is certainly a practical one – and with the addition of things like geo-location data, the opportunities for surveillance to block association and assembly both offline and online need serious consideration. The authorities in Ukraine recently demonstrated this by using surveillance of mobile phone geolocation data to identify people who might be protesting – and then sending threatening text messages warning those in the area that they were now on a list: a clear attempt to chill their protests. Once more, this is very much not about individual privacy – it is about collective and community rights.
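To illustrate quite how little technical sophistication this kind of location-based targeting requires, here is a minimal sketch in Python. Everything in it – the subscriber identifiers, the coordinates, the timestamps – is hypothetical and invented purely for illustration; the point is simply that, once location pings are retained, compiling a list of everyone near a particular gathering at a particular time is a matter of a few lines of code.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical retained location pings: (subscriber, timestamp, latitude, longitude)
pings = [
    ("subscriber-001", "2014-01-21 20:15", 50.4501, 30.5234),
    ("subscriber-002", "2014-01-21 20:40", 50.4489, 30.5215),
    ("subscriber-003", "2014-01-21 20:30", 51.5074, -0.1278),  # nowhere near the gathering
]

# A gathering at a known place, during a known time window
gathering_lat, gathering_lon = 50.4501, 30.5234
start = datetime(2014, 1, 21, 19, 0)
end = datetime(2014, 1, 21, 23, 0)

# Everyone whose phone pinged within 1 km of the gathering during the window
present = sorted({
    who for who, ts, lat, lon in pings
    if start <= datetime.strptime(ts, "%Y-%m-%d %H:%M") <= end
    and distance_km(lat, lon, gathering_lat, gathering_lon) < 1.0
})
print(present)  # this list could then be used to send 'you are now on a list' messages
```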

3      Controls are required at the gathering stage

The essential approach in the current form of internet surveillance, as currently practiced and as set out in the Communications Data Bill in 2012, is to gather all data, then to put ‘controls’ over access to that data. That approach is fundamentally flawed – and appears to be based upon false assumptions.

3.1      Data vulnerability

Most importantly, it is a fallacy to assume that data can ever be truly securely held. There are many ways in which data can be vulnerable, both from a theoretical perspective and in practice. Technological weaknesses – vulnerability to ‘hackers’ etc – may be the most ‘newsworthy’ in a time when hacker groups like ‘Anonymous’ have been gathering publicity, but they are far from the most significant. Human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to ‘monetise’ data) may be more significant – and the ways in which all these vulnerabilities can combine make the risk even greater.

In practice, those groups, companies and individuals that might be most expected to be able to look after personal data have been subject to significant data losses. The HMRC loss of child benefit data discs, the MOD losses of armed forces personnel and pension data in laptops, and the numerous and seemingly regular data losses in the NHS highlight problems within those parts of the public sector which hold the most sensitive personal data. Swiss banks’ losses of account data to hacks and data theft demonstrate that even those with the highest reputation and need for secrecy – as well as the greatest financial resources – are vulnerable. The high profile hacks of Apple, Facebook, Twitter, Sony and others show that even those that have access to the highest level of technological expertise can have their security breached. These are just a few examples, and whilst in each case different issues lay behind the breach, the underlying issue is the same: where data exists, it is vulnerable.

3.2      Function Creep

Perhaps even more important than the vulnerabilities discussed above is the risk of ‘function creep’ – that when a system is built for one purpose, that purpose will shift and grow, beyond the original intention of the designers and commissioners of the system. It is a familiar pattern, particularly in relation to legislation and technology intended to deal with serious crime, terrorism and so forth. CCTV cameras that are built to prevent crime are then used to deal with dog fouling or to check whether children live in the catchment area for a particular school. Legislation designed to counter terrorism has been used to deal with people such as anti-arms trade protestors – and even to stop train-spotters photographing trains.

In relation to internet surveillance this is a very significant risk: the ways that it could be inappropriately used are vast and multi-faceted. What is built to deal with terrorism, child pornography and organised crime can creep towards less serious crimes, then anti-social behaviour, then the organisation of protests and so forth – there is evidence that this has already taken place. Further to that, there are many commercial lobbies that might push for access to this surveillance data – those attempting to combat breaches of copyright, for example, would like to monitor for suspected examples of ‘piracy’. In each individual case, the use might seem reasonable – but the function of the original surveillance, the justification for its initial imposition, and the balance between benefits and risks, can be lost. An invasion of privacy deemed proportionate for the prevention of terrorism might well be wholly disproportionate for the prevention of copyright infringement, for example.

There can be creep in terms of the types of data gathered. The split between ‘metadata’ and ‘content’ is already contentious, and as time and usage develop it is likely to become more so, with the restrictions on what counts as ‘content’ likely to shrink. There can be creep in terms of the uses to which the data can be put: from the prevention of terrorism downwards. There can be creep in terms of the authorities able to access and use the data: from those engaged in the prevention of the most serious crime to local authorities and others. All these different dimensions represent important risks: all have happened in the recent past to legislation (e.g. RIPA) and systems (e.g. the London Congestion Charge CCTV system).

Prevention of function creep is inherently difficult. As with data vulnerability, the only way to guard against it is not to gather the data in the first place. That means that controls need to be placed at the data gathering stage, not at the data access stage.

4      The role of metadata

Rather than being less important, or less intrusive, than ‘content’, the gathering of metadata in the new kinds of internet surveillance may well be more intrusive and more significant. Metadata is the primary form of data used in the profiling of people performed by commercial operators for functions such as behavioural advertising. It is easier to analyse and aggregate, easier for patterns to be determined from, and much richer in its implications than content. It is also harder to ‘fake’: content can be concealed by the use of code words and so forth – metadata by its nature is more likely to be ‘true’.

In relation to trust, it is important that those who are engaged in surveillance acknowledge this, and that those who scrutinise the intelligence services understand it. It was notable in the open session of the Intelligence and Security Committee at the end of 2013 that none of those questioning the heads of MI5, MI6 and GCHQ made the point, or questioned the use of statements to the effect that they were not reading our emails or listening to our phone calls. Those statements may be true, but they are beside the point: it is the gathering of metadata that matters more. It can reveal, automatically – without the need for expert human intervention – a great deal. As Professor Ed Felten put it in his testimony to the Senate Judiciary Committee hearing on the Continued Oversight of the Foreign Intelligence Surveillance Act:

“Metadata can expose an extraordinary amount about our habits and activities. Calling patterns can reveal when we are awake and asleep; our religion, if a person regularly makes no calls on the Sabbath, or makes a large number of calls on Christmas Day; our work habits and our social attitudes; the number of friends we have; and even our civil and political affiliations.”

Professor Felten was talking about telephony metadata – metadata from internet browsing, emails, social network activity and so forth can be even more revealing.
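A very crude sketch in Python of the kind of inference Professor Felten describes may make the point concrete. The call records below are entirely invented, and any real analysis would be far more sophisticated, but even this toy version pulls apparent waking hours and a possible day of religious observance out of nothing more than who called when – no content at all.

```python
from collections import defaultdict
from datetime import datetime

# Invented telephony metadata: (caller, timestamp of call) - no content whatsoever
records = [
    ("alice", "2014-07-11 23:45"), ("alice", "2014-07-13 09:10"),
    ("alice", "2014-07-13 14:20"), ("alice", "2014-07-14 10:05"),
    ("bob",   "2014-07-12 08:05"), ("bob",   "2014-07-12 22:30"),
    ("bob",   "2014-07-14 07:55"),
]

calls = defaultdict(list)
for caller, ts in records:
    calls[caller].append(datetime.strptime(ts, "%Y-%m-%d %H:%M"))

for person, times in calls.items():
    hours = sorted(t.hour for t in times)
    days_active = {t.strftime("%A") for t in times}
    # Crude inferences of the kind Felten describes: the spread of calling hours
    # hints at a sleep pattern; consistent silence on one day of the week may
    # hint at religious observance.
    note = "no calls on Saturday" if "Saturday" not in days_active else "calls on Saturday"
    print(f"{person}: active roughly {hours[0]:02d}:00-{hours[-1]:02d}:00, {note}")
```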

5      Conclusion

The subject of internet surveillance is of critical importance. Debate is crucial if public support for the programmes of the intelligence services is to be found – and that debate must be informed, appropriate and on the right terms.

It isn’t a question of individual privacy, a kind of luxury in today’s dangerous world, being balanced against the deadly serious issue of security. If expressed in those misleading terms it is easy to see which direction the balance will go. Privacy matters far more than that – and it matters not just to individuals but to society as a whole. It underpins many of our most fundamental and hard-won freedoms – the civil rights that have been something we, as members of liberal and democratic societies, have been most proud of.

Similarly, the question of where the controls are built needs to be opened up for debate – at present the assumption seems to be made that gathering is acceptable even without controls. As noted above, that opens up a wide range of risks, risks that should be acknowledged and assessed in relation to the appropriateness of surveillance.

Finally, those involved in the debate should be more open and honest about the role of meta-data: the bland reassurances that ‘we are not reading your emails or listening to your phone calls’ should always be qualified with the acknowledgment that this does not really offer much protection to privacy at all.

Dr Paul Bernal
Lecturer in Information Technology, Intellectual Property and Media Law
UEA Law School
University of East Anglia, Norwich
NR4 7TJ
Email: paul.bernal@uea.ac.uk

‘Individual privacy vs collective security’? NO!

As reported in the BBC, “Parliament’s intelligence watchdog is to hear evidence from the public as part of a widening of its inquiry into UK spy agencies’ intercept activities.”

Whilst in many ways this is to be welcomed, the piece includes a somewhat alarming but extremely revealing statement from Sir Malcolm Rifkind, the Chairman of the Intelligence and Security Committee:

“There is a balance to be found between our individual right to privacy and our collective right to security.”

This hits at the heart of the problem – it reveals fundamental misconceptions of the nature and importance of privacy, as well as the impact on society of the kind of universal surveillance that the authorities in the UK, US and elsewhere are undertaking.

Privacy is not just an individual right

Privacy is often misconstrued as a purely individual right – indeed, it is sometimes characterised as an ‘anti-community’ right, a right to hide yourself away from society. Society, in this view, would be better if none of us had any privacy – a ‘transparent society’. In practice, nothing could be further from the truth: privacy is something that has collective benefit, supporting coherent societies. Privacy isn’t so much about ‘hiding’ things as being able to have some sort of control over your life. The more control people have, the more freely and positively they are likely to behave. Most of us realise this when we consider our own lives. We wear clothes, we present ourselves in particular ways, and we behave more positively as a result. We talk more freely with our friends and relations knowing (or assuming) that what we talk about won’t be plastered all over noticeboards, told to all our colleagues, to the police and so forth. Privacy has a crucial social function – it’s not about individuals vs. society. Very much the opposite.

Surveillance doesn’t just impact upon privacy

The idea that surveillance impacts only upon privacy is equally misconceived. Surveillance impacts upon many different aspects of our lives – and how we function in this ‘democratic’ society of ours. In human rights terms, it impacts upon a wide range of those rights that we consider crucial: in particular, as well as privacy it impacts upon freedom of expression, freedom of association and freedom of assembly, amongst others.

Freedom of expression

The issue of freedom of expression is particularly pertinent. Again, privacy is often misconstrued as somehow an ‘enemy’ of freedom of expression – Guido Fawkes, for example, suggested that ‘privacy is a euphemism for censorship’. He had a point in one particularly narrow context – the way that privacy law has been used by certain celebrities and politicians to attempt to prevent certain stories from being published – but it misses the much wider meaning and importance of privacy.

Without privacy, speech can be chilled. The Nightjack saga is one case in point – because the Nightjack blogger was unable to keep his name private, he had to stop providing an excellent ‘insider’ blog. In Mexico, at least four bloggers writing about the drugs cartels have not just been prevented from blogging – they’ve been sought out, located, and brutally murdered. There are many others for whom privacy is crucial – from whistleblowers to victims of spousal abuse. The internet has given them hitherto unparalleled opportunities to have their voices heard – internet surveillance can take that away. Even the possibility of  being located can be enough to silence them.

Internet surveillance not only impacts upon the ability to speak, it impacts upon the ability to receive information – the crucial second part to freedom of speech. If people know that the websites they visit will be tracked and observed, they’re much more likely to avoid seeking out information that the authorities or others might deem ‘inappropriate’ or ‘untrustworthy’. That, potentially, is a huge chilling effect. It should not be a surprise that the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, sees the link between privacy and freedom of expression as direct and crucial.

“States cannot ensure that individuals are able to freely seek and receive information or express themselves without respecting, protecting and promoting their right to privacy. Privacy and freedom of expression are interlinked and mutually dependent; and infringement upon one can be both the cause and consequence of an infringement upon the other.”

Freedom of association and assembly

Freedom of association and assembly is equally at risk from surveillance. The internet offers unparalleled opportunities for groups to gather and work together – not just working online, but organising and coordinating assembly and association offline. The role the net played in the Arab Spring has almost certainly been exaggerated – but it did play a part, and it continues to be crucial for many activists, protestors and so forth. The authorities realise this, and also that through surveillance they can counter it. A headline from a few months ago in the UK, “Whitehall chiefs scan Twitter to head off badger protests”, should have rung the alarm bells – is ‘heading off’ a protest an appropriate use of surveillance? It is certainly a practical one – and with the addition of things like geo-location data, the opportunities for surveillance to block association and assembly both offline and online need serious consideration.

A serious debate

All this matters. It isn’t a question of ‘quaint’ and ‘individual’ privacy, a kind of luxury in today’s dangerous world, being balanced against the heavy, important and deadly-serious issue of security. If expressed in those misleading terms it is easy to see which direction the balance will go. Privacy matters far more than that – and it matters not just to individuals but to society as a whole. It underpins many of our most fundamental and hard-won freedoms – the civil rights that have been something we, as members of liberal and democratic societies, have been most proud of.

Security matters – of course it does – but even the suggestion that this kind of surveillance improves our security should be taken with a distinct pinch of salt. The evidence put forward to suggest that it works has been sketchy at best, and in many cases quickly and easily debunked when put forward. Much more has to be done to persuade people that this kind of surveillance is actually necessary. The evidential bar should be very high – because the impact of this surveillance can be very significant.

If privacy is dead, we need to resurrect it!

Back in 1999, Scott McNealy, then CEO of Sun Microsystems, told journalists that privacy was dead.

“You have zero privacy anyway,” he said, “Get over it.”

In internet terms, 1999 was a very long time ago. It was before Facebook even existed. Before the iPhone was even a glint in Steve Jobs’ eye. Google was barely a year old. And yet even then, serious people in the computer industry had already given up on privacy.

The reactions of many politicians around the world – and particularly in the US – to the revelations of the activities of the NSA, GCHQ and others have echoed this sentiment. Privacy was already dead, many of them seem to assume; the only problem here is transparency. ‘We should have told you what we were doing’ seems to be one of the most common lines, ‘and we’ll find a way to be more open about it in the future’. The big companies echo that line, wanting to be allowed to say more about when they’ve handed over information, about how many requests for data there have been and so forth – rather than calling for anything stronger, rather than saying that they in any way resisted the authorities’ desire for surveillance. Indeed, the suspicion of many observers from outside the industry is that rather than resisting government agencies’ surveillance plans, some of these companies were actively cooperative or even complicit.

It’s not just about transparency

For me, that’s not enough. This shouldn’t be an issue of transparency – because it’s not just transparency over surveillance and privacy that matters, it’s the surveillance itself. At the Society of Legal Scholars conference in Edinburgh yesterday, I listened to Neil Richards talk about the dangers of surveillance (his written paper can be found here) and found myself in total agreement. Surveillance in itself is harmful to people, in a number of ways – it can chill action and even thought, it creates and exacerbates power imbalances, it allows for sorting and discrimination, and it can be, and often is, misused for personal or inappropriate reasons.

There are benefits to surveillance too – and reasons that surveillance is sometimes necessary – but the kind of total and generally secret surveillance apparently being performed by both government agencies (and the NSA in particular) and corporations seems totally out of balance – and it seems to be based, to some degree, on the assumption that privacy is dead anyway. For many, the only question seems to be how they can convince people to ‘get over it’. That is not enough. Yes, privacy may be dead – but if it is, we need to resurrect it. It may take a miracle – but it still needs to be done.

Can privacy be resurrected?

In an excellent article in the Guardian, Bruce Schneier talks about the role of engineers in the process. As he puts it:

“By subverting the internet at every level to make it a vast, multi-layered and robust surveillance platform, the NSA has undermined a fundamental social contract. The companies that build and manage our internet infrastructure, the companies that create and sell us our hardware and software, or the companies that host our data: we can no longer trust them to be ethical internet stewards.

This is not the internet the world needs, or the internet its creators envisioned. We need to take it back.

And by we, I mean the engineering community.”

Schneier knows what he is talking about – he is one of the real experts in the subject – and his piece is both compelling and surprisingly hopeful. Effectively he suggests – and I think he’s right – that there could be a way to re-engineer the internet, to take out the back doors, to rebuild the infrastructure of the internet so that surveillance is no longer the paradigm.

Schneier’s piece outlines what might be a technical route to the resurrection of privacy – but that resurrection needs more than just the technical possibility. It needs action from more than just the engineering community – it needs political will, and that means it needs action from a whole lot of us. It needs lawyers, advocates and academics to continue to challenge the legal justification for this kind of surveillance – the defeat last year of the Communications Data Bill (the UK’s ‘Snoopers’ Charter’) demonstrates that this kind of thing is possible. It needs journalists and bloggers to keep on writing about the subject – to make sure that surveillance and privacy aren’t just matters of passing interest, forgotten after a few weeks.

It needs ordinary people to keep taking an interest – because ordinary people can and do make a difference. They make a difference to the companies who operate on the internet – Microsoft’s recent advertising campaign’s strap-line was ‘your privacy is our priority’, demonstrating that they at least thought that the idea of privacy could be a selling point, even if their complicity in the PRISM programme has made the words seem pretty hollow. Ordinary people matter to politicians, at least when election time comes around – and it’s worth noting that in the televised election debate in Germany happening right now, the candidates were asked specifically about NSA surveillance. There IS public and political interest in this subject. The more there is, the more chance there is of action.

Ultimately, we need to challenge the very assumptions that underlie the surveillance. We need to challenge the idea that the threat of ‘International Terrorism’ is so great that almost anything that can be done to fight it should be done without question or fetter. That’s necessary for more than just privacy, of course, as a vast array of our civil liberties have been curtailed in the name of counter-terrorism – but it is still necessary.

Is it all doomed to failure?

It might be that privacy really is dead. It might be that resurrecting it is effectively impossible – and it will certainly be incredibly difficult. The strength of the security lobby, the power of those in whose interests the surveillance is carried out, from the commercial to the governmental, is more than intimidating. The whole thing may be doomed to failure – but even if it is, it’s a fight worth fighting. There’s a huge amount at stake. And miracles do happen.

Twitter abuse: one click to save us all?

A great deal has been said already about the Twitter abuse issue – and I suspect a great deal more will be said, because this really is an important issue. The level and nature of the abuse that some people have been receiving – not just now, but for pretty much as long as Twitter has existed – has been hideous. Anyone who suggests otherwise, or who suggests that those receiving the abuse, the threats, should get ‘thicker skins’, or shrug it off, is, in my opinion, very much missing the point. I’m lucky enough never to have been a victim of this sort of thing – but as a straight, white, able-bodied man I’m not one of the likely targets of the kind of people who generally perpetrate such abuse. It’s easy, from such a position, to tell others that they should rise above it. Easy, but totally unfair.

The effect of this kind of abuse, this kind of attack, is to stifle speech: to chill speech. That isn’t just bad for Twitter, it’s bad for all of us. There are very good reasons that ‘free expression’ is considered one of the most fundamental of human rights, included in every human rights declaration and pretty much every democratic country’s constitution. It’s crucial for holding the powerful to account – whether they be governments, companies or just powerful individuals.

Free speech, however, does need protection, moderation, if it is to avoid becoming just a shouting match, won by those with the loudest voice and the most powerful friends – so everywhere, even in the US, there are laws and regulations that make some kinds of speech unacceptable. How much speech is unacceptable varies from place to place – direct threats are unacceptable pretty much everywhere, for example, but racism, bullying, ‘hate speech’ and so forth have laws against them in some places, not in others.

In the UK, we have a whole raft of laws – some might say too many – and from what I have seen, a great deal of the kind of abuse that Caroline Criado-Perez, Stella Creasy, Mary Beard and many more have received recently falls foul of those laws. Those laws are likely to be enforced on a few examples – there has already been at least one arrest – but how can you enforce laws like this on thousands of seemingly anonymous online attackers? And should Twitter themselves be taken to task, and asked to do more about this?

That’s the big question, and lots of people have been coming up with ‘solutions’. The trouble with those solutions is that they, in themselves, are likely to have their own chilling effect – and perhaps even more significant consequences.

The Report Abuse Button?

The idea of a ‘report abuse’ button seems to be the most popular – indeed, Twitter have suggested that they’ll implement it – but it has some serious drawbacks. There are parallels with David Cameron’s nightmarish porn filter idea (about which I’ve blogged a number of times, starting here): it could be done ‘automatically’ or ‘manually’. The automatic method would use some kind of algorithmic solution when a report is made – perhaps the number of reports made in a short time, or the nature of the accounts (number of followers, length of time they have existed etc), or a scan of the reported tweet for key words, or some combination of these factors.
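
To make the point concrete, here is a minimal, purely hypothetical sketch (in Python) of what that kind of automatic triage might look like – combining report velocity, a couple of crude account features and a keyword scan into a single score. The names, signals and thresholds are all invented for illustration; this is emphatically not how Twitter actually works.

```python
# Hypothetical sketch of an 'automatic' abuse-report triage system.
# All signals, names and thresholds are invented for illustration only.
from dataclasses import dataclass


@dataclass
class ReportedTweet:
    text: str
    reports_last_hour: int        # how many users reported it recently
    author_followers: int         # crude proxy for an 'established' account
    author_account_age_days: int  # brand-new accounts look more suspicious


SUSPECT_WORDS = {"kill", "rape", "bomb"}  # toy keyword list


def abuse_score(t: ReportedTweet) -> float:
    """Combine the signals into a single 'suspicion' score (higher = worse)."""
    score = 0.0
    score += min(t.reports_last_hour, 50) * 0.1          # report velocity
    if t.author_account_age_days < 30:                    # very new account
        score += 1.0
    if t.author_followers < 10:                           # hardly any followers
        score += 0.5
    words = {w.strip(".,!?").lower() for w in t.text.split()}
    score += 2.0 * len(words & SUSPECT_WORDS)             # keyword hits
    return score


def should_auto_suspend(t: ReportedTweet, threshold: float = 3.0) -> bool:
    """The 'one-click' decision: act automatically once the score is high enough."""
    return abuse_score(t) >= threshold
```

Even in this toy form the weakness is obvious: every one of those inputs can be manufactured by a coordinated group of reporters.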

The trouble with these automatic systems is that they’re likely to include some tweets that are not really abusive, and miss others that are. More importantly, they allow for misuse – if you were a troll, you could report your enemies for abuse, even if they were innocent, and get your trollish friends and followers to do the same. Twitterstorms catch the innocent as well as the guilty – and a Twitterstorm, combined with a report button and an automatic banning system, would mean mob rule: if you’ve got enough of a mob behind you, the torches and pitchforks would have direct effect.

What’s more, the kind of people who orchestrate the sort of attacks suffered by Caroline Criado-Perez, Stella Creasy, Mary Beard and others are likely to be exactly the kind who will be able to ‘game’ an automatic system: work out how it can be triggered, and think it’s ‘fun’ to use it to get people banned. Even a temporary ban while an investigation is going on could be a nightmare.

The alternative to an automated system is to have every report of abuse examined by a real human being – but given that there are now more than half a billion users on Twitter, this is pretty much guaranteed to fail – it will be slow, clunky and disappointing, and people will make mistakes because they’ll find themselves overwhelmed by the number of reports they have to deal with. Twitter, moreover, is a free service (of which more later) and doesn’t really have the resources to deal with this kind of thing. I would like it to remain free – and if it had to pay for a huge ‘abuse report centre’, staying free would be highly unlikely.

There are other, more subtle technological ideas. @flayman’s idea of a ‘panic mode’ – which you could go into if you found yourself under attack, blocking everyone from tweeting to you unless you follow them and they follow you – has a lot going for it, and could even be combined with some kind of recording system that notes down all the tweets of those attacking you, potentially putting together a report that can be used for later investigation.
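
For what it’s worth, here is a rough, hypothetical sketch of how that kind of ‘panic mode’ might work – only mutual followers get through, and everything else is held back and logged for a later report. The class and method names are my own invention, not any real Twitter feature or API.

```python
# Hypothetical sketch of @flayman's 'panic mode' idea: while enabled, only
# mutual followers get through; everything else is logged as evidence.
from typing import NamedTuple


class Tweet(NamedTuple):
    sender: str
    text: str


class PanicMode:
    """Filter incoming tweets down to mutual followers and record the rest."""

    def __init__(self, following: set[str], followers: set[str]):
        self.mutuals = following & followers   # people you follow who follow you back
        self.blocked_log: list[Tweet] = []     # held-back tweets, kept as evidence

    def allow(self, tweet: Tweet) -> bool:
        """Deliver tweets from mutuals; quietly record everything else."""
        if tweet.sender in self.mutuals:
            return True
        self.blocked_log.append(tweet)
        return False

    def report(self) -> list[Tweet]:
        """Everything held back while panic mode was on, ready for investigation."""
        return list(self.blocked_log)


# Example: the friend's tweet is delivered, the stranger's is blocked and logged.
panic = PanicMode(following={"friend"}, followers={"friend", "stranger"})
print(panic.allow(Tweet("friend", "hang in there")))      # True
print(panic.allow(Tweet("stranger", "abusive message")))  # False
print(panic.report())                                     # the blocked tweet
```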

I would like to think that Twitter are looking into these possibilities – but more complex solutions are less likely to be attractive, or to be understood and properly used. Most, too, can be ‘gamed’ by people who want to misuse them. They offer a very partial solution at best – and I suspect the broadly specified abuse button, as noted above, will have more drawbacks than advantages in practice. What’s more, as a relatively neutral observer of a number of Twitter conflicts – for example between the supporters and opponents of Julian Assange, or between different sides of the complex arguments over intersectional feminism – it’s sometimes hard to see who is the ‘abuser’ and who is the ‘abused’. With the Criado-Perez, Creasy and Beard cases it’s obvious – but that’s not always so. We need to be very careful not to build systems that end up reinforcing power relationships, helping the powerful to put their enemies in their place.

Real names?

A second idea that has come up is that we should do more against anonymity and pseudonymity – we should make people use their ‘real’ names on Twitter, so that they can’t hide behind masks. That, for me, is even worse – and we should avoid it at all costs. The fact that the Chinese government are key backers of the idea should ring alarm bells – they want to be able to find dissidents, to stifle debate and to control their population. That’s what real names policies do – because if you know someone’s real name, you can find them in the real world.

Dissidents in oppressive regimes are one thing – but whistleblowers and victims of domestic abuse and violent partners need anonymity every bit as much, as do people who want to be able to explore their sexuality, who are concerned with possible medical problems, who are victims of bullying (including cyberbullying) and even people who are just a bit shy. Real names policies will have a chilling effect on all these people – and, disproportionately, on women, as women are more likely to be victims of abuse and violence from partners.

Enforcing real names policies helps the powerful to silence their critics, and reinforces power relationships. It should also be no surprise that the other big proponent of ‘real names’ is Facebook – because they know they can make more money out of you and out of your data if they know your real name. They can ‘fix’ you in the real world, and find ways to sell that information to more and more people. They don’t have your interests at heart – quite the opposite.

Paying for Twitter?

A third idea that has come up is that we should have to pay for Twitter – a nominal sum has been mentioned, at least nominal to relatively rich people in countries like ours – but this is another idea that I don’t like at all. The strength of Twitter is its freedom, and the power it has to encourage debate would be much reduced if it were to require payment. It could easily become a ‘club’ for a certain class of people – well, more of a club than it already is – and lose what makes it such a special place, such a good forum for discussion.

Things like the ‘Spartacus’ campaign against the abysmal actions of our government towards people with disability would be far less likely to happen if Twitter cost money: people on the edge, people without ‘disposable’ income or whose belts have already been tightened as far as they can go would lose their voice. Right now, more than ever, they need that voice.

Dealing with the real issues…

In the short term, I think Criado-Perez had the best idea – we need to do everything we can to ‘stand together’, to support the victims of abuse, to make sure that they know that the vast, vast majority of us are on their side and will do everything we can to support them and to emphasise the ‘good’ side of Twitter. Twitter can be immensely supportive as well as destructive – we need to make sure that, as much as possible, we help provide that support to those who need it.

The longer-term problem is far more intractable. At the very least, it’s good that this stuff is getting more publicity – because, as I said, it matters very much. Misogyny and ‘rape culture’ are real. Very real indeed – and deeply damaging, not just to the victims. What’s more, casual sexism is real – and shouldn’t be brushed off as irrelevant in this context. For me, there’s a connection between what John Inverdale said about Marion Bartoli, what Boris Johnson said about women only going to universities to find husbands, and the sort of abuse suffered by Criado-Perez, Creasy, Beard and others. It’s about the way that women are considered in our society – about objectifying women, trivialising women, suggesting women should be put in ‘their’ place.

That’s what we need to address, and to face up to. No ‘report abuse’ button is going to solve that. We also need to stop looking for scapegoats – to blame Twitter for what is a problem with our whole society. There’s also a similarity here with David Cameron’s porn filter: in both situations there’s a real, complex problem that’s deep-rooted in our society, and in both cases we seem to be looking for a quick, easy, one-click solution.

One click to save us all? It won’t work – and suggesting that it would both trivialises the problem and distracts us from finding real solutions. Those solutions aren’t easy. They won’t be fast. They’ll force us to face up to some very ugly things about ourselves – things that many people don’t want to face up to. In the end, we’ll have to.