How not to reclaim the internet…

The new campaign to ‘Reclaim the Internet’, to ‘take a stand against online abuse’, was launched yesterday – and it could be a really important campaign. The scale and nature of abuse online is appalling – and it is good to see that the campaign does not focus on just one kind of abuse, instead talking about ‘misogyny, sexism, racism, homophobia, transphobia’ and more. There is more than anecdotal evidence of this abuse – even if the methodology and conclusions of the particular Demos survey used at the launch have been subject to significant criticism: the forensic dissection by Dr Claire Hardaker of Lancaster University is well worth a read. None of this is to suggest that this kind of abuse is anything other than hideous, or that it should not be taken seriously. It should – but great care needs to be taken, and the risks attached to many of the potential strategies to ‘reclaim the internet’ are very high indeed. Many of them would have precisely the wrong effect: silencing exactly those voices that the campaign wishes to have heard.

Surveillance and censorship

Perhaps the biggest risk is that the campaign is used to enable and endorse those twin tools of oppression and control, surveillance and censorship. The idea that we should monitor everything to try to find all those who commit abuse or engage in sexism, misogyny, racism, homophobia and transphobia may seem very attractive – find the trolls, root them out and punish them – but building a surveillance infrastructure and making it seem ‘OK’ is ultimately deeply counterproductive for almost every aspect of freedom. Evidence shows that surveillance chills free speech, discourages people from seeking out information, associating and assembling with people and more – as well as enabling discrimination and exacerbating power differences. Surveillance helps the powerful to oppress the weak – so should be avoided except in the worst of situations. Any ‘solutions’ to online abuse that are based around an increase in surveillance need a thorough rethink.

Censorship is the other side of the coin – and it works with surveillance to let the powerful control the weak. Again, huge care is needed to make sure that attempts to ‘reclaim’ the internet don’t become tools to enforce orthodoxy and silence voices that don’t ‘fit’ the norm. Freedom of speech matters most precisely when that speech might offend and upset – it is easy to give those you like the freedom to say what they want, much harder to give that freedom to those you disagree with. It’s a very difficult area – because if we want to reduce the impact of abuse, that must mean restricting abusers’ freedom of speech – but it must be navigated very carefully, without creating tools that make it easy to silence people for disagreeing rather than for abusing.

Real names

One particularly important trap not to fall into is that of demanding ‘real names’: it is a common idea that the way to reduce abuse is to prevent people being anonymous online, or to ban the use of pseudonyms. Not only does this not work, but it, again, damages many of those who the idea of ‘reclaiming the internet’ is intended to support. Victims of abuse in the ‘real’ world, people who are being stalked or victimised, whistleblowers and so forth need pseudonyms in order to protect themselves from their abusers, stalkers, enemies and so on. Force ‘real names’ on people, and you put those people at risk. Many will simply not engage – chilled by the demand for real names and the fear of being revealed. That’s even without engaging with the huge issue of the right to define your own name – and the joy of playing with identity, which for some people is one of the great pleasures of the internet, from parodies to fantasies. Real names are another way that the powerful can exert their power on the weak – it is no surprise that the Chinese government are one of the most ardent supporters of the idea of forcing real names on the internet. Any ‘solution’ to reclaiming the internet that demands or requires real names should be fiercely opposed.

Algorithms and errors

Another key mistake to avoid is over-reliance on algorithmic analysis – particularly of the content of social media posts. This is one of the areas in which the Demos survey lets itself down – it makes assumptions about the ability of algorithms to understand language. As Dr Claire Hardaker puts it:

“Face an algorithm with messy features like sarcasm, threats, allusions, in-jokes, novel metaphors, clever wordplay, typographical errors, slang, mock impoliteness, and so on, and it will invariably make mistakes. Even supposedly cut-and-dried tasks such as tagging a word for its meaning can fox a computer. If I tell you that “this is light” whilst pointing to the sun you’re going to understand something very different than if I say “this is light” whilst picking up an empty bag. Programming that kind of distinction into a software is nightmarish.”

This kind of error is bad enough in a survey – but some of the possible routes to ‘reclaiming the internet’ include using this kind of analysis to identify offending social media comments, or even to block or censor them automatically. Indeed, much internet filtering works that way – one of the posts on this blog, commenting on ‘porn blocking’, was itself blocked by a filter because it used words relating to pornography a number of times. Again, reliance on algorithmic ‘solutions’ to reclaiming the internet is very dangerous – and could end up stifling conversations, reducing freedom of speech and much more.
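To make the point concrete, here is a toy sketch of the kind of naive keyword counting that produces exactly this failure. The word list, threshold and function name are entirely hypothetical – nothing like any real filter’s implementation – but the underlying weakness is the one Hardaker describes: matching words without understanding context or intent.

```python
# Toy sketch of naive keyword-based filtering (hypothetical word list and
# threshold, for illustration only). The filter counts listed words and
# rejects a post once they appear often enough, regardless of what the
# post is actually saying.

BLOCKLIST = {"porn", "pornography"}  # hypothetical blocklist
THRESHOLD = 3                        # hypothetical: block at this many matches

def is_blocked(text: str, blocklist=BLOCKLIST, threshold=THRESHOLD) -> bool:
    # Strip simple punctuation and lower-case each word before matching.
    words = [w.strip(".,!?;:'\"()").lower() for w in text.split()]
    hits = sum(1 for w in words if w in blocklist)
    return hits >= threshold

# A post *criticising* porn blocking necessarily mentions the words often...
critique = ("Porn blocking is a blunt instrument: filters aimed at "
            "pornography end up blocking discussion of pornography itself.")

# ...while genuinely abusive text that avoids the listed words sails through.
abuse = "You are worthless and everyone would be better off without you."

print(is_blocked(critique))  # True  - a false positive
print(is_blocked(abuse))     # False - a false negative
```

Real filters are more sophisticated than this, but the failure mode – penalising the discussion of a topic rather than the abuse itself – is the same.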

Who’s trolling who? Double-edged swords…

One of the other major problems with dealing with ‘trolls’ (the quotation marks are entirely intentional) is that in practice it can be very hard to identify them. Indeed, in conflicts on the internet it is common for both sides to believe that the other side is the one doing the abuse, the other side are the ‘trolls’, and they themselves are the victims who need protecting. Anyone who observes even the most one-sided of disputes should be able to see this – from GamerGate to some of the conflicts over transphobia. Few of those whom others would consider ‘trolls’ would consider themselves to be trolls.

The tragic case of Brenda Leyland should give everyone pause for thought. She was described and ‘outed’ as a ‘McCann troll’ – she tweeted as @Sweepyface and campaigned, as she saw it, for justice for Madeleine McCann, blaming Madeleine’s parents for her death. Sky News reporter Martin Brunt doorstepped her, and days later she was found dead, having committed suicide. Was she a ‘troll’? Was the media response to her appropriate, proportionate, or positive? These are not easy questions – because this isn’t an easy subject.

Further, one of the best defences of a ‘troll’ is to accuse the person they’re trolling of being a troll – and that is something that should be remembered whatever tools you introduce to help reduce abuse online. Those tools are double-edged swords. Bring in quick and easy ways to report abuse – things like immediate blocking of social media accounts when those accounts are accused of being abusive – and you will find those tools being used by the trolls themselves against their victims. ‘Flame wars’ have existed pretty much since the beginning of the internet – any tools you create ‘against’ abuse will be used as weapons in flame wars in the future.

No quick fixes and no silver bullets

That should remind us of the biggest point here. There are no quick fixes to this kind of problem. No silver bullets that will slay the werewolves, or magic wands that will make everything OK. Technology often encourages the feeling that if only we created this one new tool, we could solve everything. In practice, it’s almost never the case – and in relation to online abuse this is particularly true.

Some people will suggest that it’s already easy. ‘All you have to do is block your abuser’ is all very well, but if you get 100 new abusive messages every minute you’ll spend your whole time blocking. Some will say that the solution is just not to feed the trolls – but many trolls don’t need any feeding at all. Others may suggest that people are just whining – none of this really hurts you, it’s just words – but that’s not true either. Words do hurt – and most of those suggesting this haven’t been subject to the kind of abuse that happens to others. What’s more, the chilling effect of abuse is real – if you get attacked every time you go online, why on earth would you want to stay online?

The problem is real, and needs careful thought and time to address. The traps involved in addressing it – and I’ve mentioned only a few of them here – are also real, and need to be avoided and considered very carefully. There really are no quick fixes – and it is really important not to raise false hopes that it can all be solved quickly and easily. That false hope may be the biggest trap of all.

That British Bill of Rights…

The much discussed ‘British Bill of Rights’ is already being drafted. I can exclusively bring you some extracts* of the current draft.


Article 1 – Right to Life

Everyone shall have the right to life, unless their death is deemed necessary in the interests of national security, or if they cannot afford the relevant insurance to pay for hospital bills.

—-

Article 6 – Right to a Fair Trial

Everyone shall have the right to a fair trial unless they cannot afford it or the Home Secretary should decide that such a trial is not necessary in the interests of national security.

—-

Article 8 – Right to a Private Life

Everyone shall have the right to respect for their private and family life, except if any intrusion in that private or family life is performed by the police, the security services, tabloid newspapers, Google, Facebook or any other commercial enterprise as agreed with the Secretary of State for Business, Innovation and Skills.

—-

Article 10 – Right to Freedom of Expression

Everyone shall have the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers, except if such information is deemed unsuitable, extreme, or otherwise inappropriate by the Home Secretary, the Prime Minister, Rupert Murdoch, Paul Dacre or the Taxpayers Alliance.

—-

Article 11 – Freedom of assembly and association

Everyone has the right to freedom of peaceful assembly and to freedom of association with others, excluding the right to form and to join trade unions for the protection of his interests, and excluding any form of assembly or association that the Home Secretary should deem disorderly, embarrassing, annoying or otherwise objectionable.

—-

Article 12 – Right to Marriage

Everyone has the right to marry and found a family, but the choice of partner shall be considered subject to approval by the Home Secretary, the Minister for Inequality and the media.

—-

Scope of these rights

These rights shall be accorded to all British Citizens, except Scots, Welsh people, Irish people, those who the Home Secretary determines are undeserving of rights, or decides to strip citizenship from, or are determined by the media to be scroungers, immigrants or children of immigrants, internet trolls or persons otherwise objectionable in what the Prime Minister deems to be a democratic society.


This is understood to be the current draft, but it is believed that certain members of the cabinet believe these rights are too extensive and too generous.

*This may not actually be the real thing.

Debates, impartiality, Cameron and Chickens…

[AMENDED TO REFLECT THAT THE BBC TRUST RATHER THAN OFCOM ADMINISTER THE RULES]

The saga of the TV debates for the General Election has rumbled on over the weekend. The accusations that David Cameron is ‘chickening out’ of the debates have been gaining in volume, and the broadcasters let it be known that they might ‘empty chair’ Cameron if he continues to refuse to participate. Then, on Sunday, a story appeared in the Independent that suggested that the real chickens won’t be Cameron but the BBC. According to the Independent, the BBC are trying to avoid confronting Cameron, and are even considering giving him a separate programme if he ducks out of the planned debate.

‘Sources at the BBC’ have told the Independent that:

“…to comply with election and Ofcom rules about impartiality, if it hosts a debate without Mr Cameron, it would feel compelled to let him have his own programme, an in-depth interview or allow an extended party political broadcast. It is believed that the other broadcasters would follow a similar approach as the BBC.”

But can this actually be true? On the face of it, this appears to be the opposite of impartiality – indeed, it appears to be rewarding Cameron for his avoiding the debates. The BBC’s rules on impartiality are derived from the Communications Act 2003 (as amended) and the Broadcasting Act 1996 (as amended) – that is, they are backed up by legislation. They are effectively similar to those rules set out in the Ofcom Broadcasting Code (which can be found online here), though it is the BBC Trust rather than Ofcom who administer the rules. There are two things immediately worthy of note:

  1. That the words in the act – including the word impartiality – are generally to be interpreted literally; and
  2. That there is no specific guidance about debates – mainly because debates are not a traditional part of the electoral process in the UK.

Still, it should be possible to work out what the rules mean in this context. The relevant parts of the code are as follows:

5.5 Due impartiality on matters of political or industrial controversy and matters relating to current public policy must be preserved on the part of any person providing a service (listed above). This may be achieved within a programme or over a series of programmes taken as a whole.
Meaning of “series of programmes taken as a whole”:
This means more than one programme in the same service, editorially linked, dealing with the same or related issues within an appropriate period and aimed at a like audience. A series can include, for example, a strand, or two programmes (such as a drama and a debate about the drama) or a ‘cluster’ or ‘season’ of programmes on the same subject.

No precise guidance is given about how to apply this to debates – because, as noted above, there is no guidance specifically about debates. You could interpret the piece about a ‘series of programmes taken as a whole’ to mean that another programme would fit the bill – and perhaps the BBC does – but helpfully there is an extra bit of guidance in the section on elections that seems to establish the principle. This is in Section 6, which applies to elections, in the part about ‘[c]onstituency coverage and electoral area coverage in elections’:

6.9 If a candidate takes part in an item about his/her particular constituency, or electoral area, then candidates of each of the major parties must be offered the opportunity to take part. (However, if they refuse or are unable to participate, the item may nevertheless go ahead.)

So what does this all mean? Well, first of all, it seems entirely clear that the Ofcom Broadcasting Code does not ‘require’ the BBC or other broadcasters to offer Cameron his own programme. As I’ve mentioned twice before, there are no specific rules about debates that would bind them in this way. Indeed, as also shown, the general approach should be the opposite: the rules about specific constituency events should set the principle. If someone refuses or is unable to participate, then as long as they have been offered the opportunity (and Cameron has), the debates should nevertheless go ahead.

Indeed, impartiality should mean that if the BBC does offer Cameron a separate programme, Ed Miliband, Nick Clegg (and potentially others) should be able to demand one for themselves. I suspect each of these leaders’ advisers has already worked this out – if they haven’t, they should have!

Personally, I rather dislike the debates – they make our electoral system seem too ‘presidential’, they increase the focus on personality rather than policy, and they end up giving ‘telegenic’ politicians an advantage that may bear little relation to their intelligence, morality etc etc – but if we are to have them, we should be fair about it. If Cameron thinks they’re a bad idea, he should have been honest about that from the start – but he wasn’t.

I’m not sure Cameron is a ‘chicken’ about this – I suspect he’s making precise calculations about risks and benefits – but if the BBC is really trying to suggest that they are bound to give him his own programme, I think they really are being chickens, and that there is certainly not the obligation on them that has been suggested. They should be simple and straightforward: if Cameron wants to be included, include him. If he doesn’t, then go ahead anyway. That’s what the law, and the Ofcom Broadcasting Code, as I see it, requires. The BBC Trust, which follows the same legislation that backs up the Ofcom rules, should take the same approach.

Ethical policing of the internet?

The question of how to police the internet – and how the police can or should use the internet, which is a different but overlapping issue – is one that is often discussed on the internet. Yesterday afternoon, ACPO, the Association of Chief Police Officers, took what might (just might, at this stage) be a step in a positive direction towards finding a better way to do this. They asked for help – and it felt, for the most part at least, that they were asking with a genuine intent. I was one of those that they asked.

It was a very interesting gathering – a lot of academics, from many fields and most far more senior and distinguished than me – some representatives of journalism and civil society (though not enough of the latter), people from the police itself, from oversight bodies, from the internet industry and others. The official invitation had called the event a ‘Seminar to discuss possible Board of Ethics for the police use of communications data’ but in practice it covered far more than that, including the policing of social media, politics, the intelligence services, data retention and much more.

That in itself felt like a good thing – the breadth of discussion, and the openness of the people around the table, really helped. Chatham House rules applied (so I won’t mention any names) but the discussion was robust from the start – very robust at one moment, when a couple of us caused a bit of a ruction and one even almost got ejected. That particular ruction came from a stated assumption that one of the weaknesses of ‘pressure groups’ was a lack of technical and legal knowledge – when those of us with experience of these ‘pressure groups’ (such as Privacy International, the Open Rights Group and Big Brother Watch) know that in many ways their technical and legal knowledge is about as good as it gets. Indeed, some of the best brains in the field on the planet work closely with those pressure groups.

That, however, was also indicative of one of the best things about the event: the people from ACPO were willing to tell us what they thought and believed, and let us challenge them on their assumptions, and tell them what we thought. And, to a great extent, we did. The idea behind all of this was to explore the possibility of establishing a kind of ‘Board of Ethics’ drawing upon academia, civil society, industry and others – and if so, what could such a board look like, what could and should it be able to do, and whether it would be a good idea to start with. This was very much early days – and personally I felt more positive after the event than I did before, mainly because I think many of the big problems with such an idea were raised, and the ACPO people did seem to understand them.

The first, and to me the most important, of those objections is to be quite clear that a board of this kind must not be just a matter of presentation. Alarm bells rang in the minds of a number of us when one of the points made by the ACPO presentation was that the police had ‘lost the narrative’ of the debate – there were slides of the media coverage, reference to the use of the term ‘snoopers’ charter’ and so forth. If the idea behind such a board is just to ‘regain the narrative’, or to provide better presentation of the existing actions of the police so as to reassure the public that everything is for the best in the best of all possible worlds, then it is not something that many of the people around the table would have wanted to be involved in. Whilst a board like this could not (and probably should not) be involved in day-to-day operational matters, it must have the ability to criticise the actions, tactics and strategies of the police, and preferably in a way that could actually change those actions, tactics and strategies. One example given was the Met Police’s now notorious gathering of communications data from journalists – if such actions had been suggested to a ‘board of ethics’, that board, if the voices around the table yesterday were anything to go by, would have said ‘don’t do it’. Saying that would have to have an effect – or, if it had no effect, would have had to be made public – if the board is to be anything other than a fig leaf.

I got the impression that this was taken on board – and though there were other things that also rang alarm bells in quite a big way, including the reference on one of the slides to ‘technology driven deviance’ and the need to address it (Orwell might have rather liked that particular expression) it felt, after three hours of discussion, as though there were more possibilities to this idea than I had expected at the outset. For me, that’s a very good thing. The net must be policed – at least that’s how I feel – but getting that policing right, ensuring that it isn’t ‘over-policed’, and ensuring that the policing is by consent (which was something that all the police representatives around the table were very clear about) is vitally important. I’m currently far from sure that’s where we are – but it was good to feel that at least some really senior police officers want it to be that way.

I’m not quite clear what the next steps along this path will be – but I hope we find out soon. It is a big project, and at the very least ACPO should be applauded for taking it on.

Surveillance, power and chill…. and the Chatham House Rule

Yesterday I attended a conference at Wilton Park about privacy and security – some really stellar people from all the ‘stakeholders’, industry, government, civil society, academia and others, and from all over the world. A version of the Chatham House Rule applied, making the discussion robust and open… something to which I will return.

At one point, in a conversation over coffee, one of the other delegates asked me a direct question: had I seen any evidence of the ‘chilling effect’ of surveillance. They’d been told the previous day by someone from civil society that in the US there had been a direct chill – in particular of advocacy groups – as a result of the Snowden revelations, something that has been reported before a number of times, but that it’s hard to ‘prove’ in ways that seem to convince people. As I sipped my coffee I thought about it – and realised that I, personally, had seen two different but very graphic and direct examples of chill in the past few weeks, though I hadn’t thought of either of them in that kind of a direct way.

The first was the Samaritans Radar debacle. Not just theoretically, but individually I had been told by more than one person that they were keeping off Twitter for a while as a result of feeling under observation as a result of Samaritans Radar. Their tweets could be being scanned, and by people who they didn’t trust, and who they felt could do them harm. The second was even more direct, but I can’t give details. Another person, who felt under real, direct threat – their life in danger – told me they would be keeping offline for a while.

In both cases they felt threatened – not just because they felt under surveillance, but because they felt themselves under surveillance by others who have power over them. The power, it seemed to me, was one of the keys – and one of the reasons that so many people, particularly in the UK, don’t find surveillance threatening. Where Samaritans Radar was concerned, a lot of the people affected were the sort of people who are vulnerable in various ways – partly because of their mental health issues, but more directly because they were under threat, whether from trolls and stalkers or from certain people in positions of authority. Some have very good reason to worry about how the local authorities or even mental health services might treat them. Or how their relatives might treat them. For my other friend, the threat was even more direct – and proven.

So yes, the chill of surveillance is real. And, perhaps most importantly, it’s real for precisely those people that need support in freedom of expression terms. People whose voices are heard the least often – and people who have the most need to be able to take advantage of the opportunities that our modern communications systems offer. The internet can enable a great deal, particularly for people in those kinds of positions – from freedom of expression to freedom of assembly and association and much, much more – but surveillance can not just jeopardise that but reverse it. If it only enables freedom of speech for those already with power, it exacerbates the power differences, and makes those already quiet even quieter, whilst those with power and voice can get their messages across even more powerfully.

…which brings me back to the conference, and the Chatham House Rule. Even the existence of the rule makes it clear that we understand that the chilling effect exists. If we know that for people to really speak freely, they need to know that their comments will not be attributed to them – the essence of the rule – then we must make the leap to recognise that surveillance chills. Surveillance is precisely about linking people’s communications to them as individuals – not just what they say, but what they seek out to read. At our conference, we gave ourselves – the vast majority of us people with at least some power and influence – the benefits of this. Surveillance, and mass surveillance by others with power over us – whether that means our or other governments, massive corporations (Google, Facebook etc) or others – denies that benefit to us all.

Trolls, threats, the law and deterrence…

[Image: a troll from the Norwegian film ‘Trollhunter’]

“Internet trolls face up to two years in jail under new laws” screamed the headline on the BBC’s website yesterday, after Chris Grayling decided to “take a stand against a baying cyber-mob”. It’s not the first time that so-called ‘trolls’ have been made the subject of a government ‘stand’ – and a media furore. This particular one arose after TV presenter Chloë Madeley suffered online abuse – that abuse itself triggered by the comments about rape made by her mother, Judy Finnigan, also a TV presenter, on Loose Women.

Twitter ‘trolls’ seem to be a big theme at the moment. Just a few weeks ago we had the tragic case of Brenda Leyland, who it appears committed suicide after being doorstepped by Sky News, accused of ‘trolling’ the parents of Madeleine McCann. A month ago, Peter Nunn was jailed for 18 weeks after a series of abusive tweets aimed at MP Stella Creasy. There are others – not forgetting the ongoing saga of GamerGate (one of my favourite posts on this is here), though that seems to be far bigger news in the US than it is here in the UK. The idea of a troll isn’t something new, and it doesn’t seem to be going away. Nothing’s very clear, though – and what I’ve set out below is very much my personal view.

What is a troll?

There’s still doubt about where the term comes from. It’s not clear that it refers to the kind of beast in the picture above – from the weirdly wonderful Norwegian film ‘Trollhunter’. A few years ago, I was certain it came from a totally different source – ‘trolling’, a kind of fishing where you trail a baited line behind your boat as you row, hoping that some fish comes along and bites it – but I understand now that even that’s in doubt. Most people think of monsters – perhaps hiding under bridges, ready to be knocked off them by billy goats, or perhaps huge, stupid Tolkienian hulks – but what they are on the internet seems very contentious. In the old days, again, trolls were often essentially harmless – teasing, annoying, trying to get a rise out of people. The kind of thing that I might do on Twitter by writing a poem about UKIP, for example – but what happens now can be quite different. The level of nastiness can get seriously extreme – from simple abuse to graphic threats of rape and murder. The threats can be truly hideous – and, from my perspective at least, if you haven’t been a victim of this kind of thing, it’s not possible to really understand what it’s like. I’ve seen some of the tweets – but only a tiny fraction, and I know that what I’ve seen is far from the worst.

The law

The first thing to say is that Grayling’s announcement doesn’t actually seem to be anything new: the ‘quadrupling of sentences’ was brought in in March this year, as an amendment to the Malicious Communications Act 1988.  This is just one of a number of laws that could apply to some of the activities that are described as ‘trolling’. Section 127 of the Communications Act 2003 is another, which includes the provision that a person is guilty of an offence if he: “sends by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene or menacing character”.  The infamous ‘Twitter Joke Trial’ of Paul Chambers was under this Act. There have also been convictions for social media posting under the Public Order Act 1986 Section 4A, which makes it an offence to “…with intent to cause a person harassment, alarm or distress…  use[s] threatening, abusive or insulting words or behaviour, or disorderly behaviour, or …displays any writing, sign or other visible representation which is threatening, abusive or insulting,” Then there’s the Protection from Harassment Act 1997, and potentially Defamation Law too (though that’s civil rather than criminal law).  The law does apply to the internet, and plenty of so-called ‘trolls’ have been prosecuted – and indeed jailed.

What is a threat?

One of the most common reactions that I’ve seen when these issues come up is to say that ‘threats’ should be criminalised, but ‘offensive language’ should not. It’s quite right that freedom of speech should include the freedom to be offensive – if we only allow speech that we agree with, that’s not freedom of speech at all. The problem is that it’s not always possible to tell what is a threat and what is just an offensive opinion – or even a joke. If we think jokes are ‘OK’, then people who really are threatening and offensive will try to say that what they said was just a joke – Peter Nunn did so about his tweets to Stella Creasy. If we try to set rules about what is an opinion and what is a threat, we may find that those who want to threaten couch their language in a way that makes it possible to argue that it’s an opinion.

For example, tweeting to someone that you’re going to rape and murder them is clearly a threat, but tweeting to a celebrity who’s had naked pictures leaked onto the internet that ‘celebrities who take naked pictures of themselves deserve to be raped’ could, potentially, be argued to be an opinion, however offensive. And yet it would almost certainly actually be a threat. A little ‘cleverness’ with language can mask a hideous threat – a threat with every bit as nasty an effect on the person receiving it. It’s not just the words, it’s the context, it’s the intent. It’s whether it’s part of a concerted campaign – or a spontaneous twitter storm.

One person’s troll is another person’s freedom fighter…

The other thing that’s often missed here is that many (perhaps most) so-called trolls wouldn’t consider themselves to be trolls. Indeed, quite the opposite. A quick glance at GamerGate shows that: many of those involved think they’re fighting for survival against forces of oppression. There’s the same story elsewhere: those involved in the so-called ‘trolling’ of the McCanns would (and do) say that they’re campaigning to expose a miscarriage of justice, to fight on behalf of a dead child. Whether someone’s a terrorist or a freedom fighter can depend on the perspective – and that means that laws presented in terms like those Grayling used are even less likely to have any kind of deterrent effect. If you don’t consider yourself a troll, why would a law against trolls have any impact?

Whether increasing sentences has any deterrent effect to start with is also deeply questionable. Do those ‘trolling’ even consider the possible sentence? Do they know that what they’re doing is against the law? Even with the many laws enumerated above, and the series of convictions under them, many seem to think that the law doesn’t really apply on the internet. Many believe (falsely) that their ‘anonymity’ will protect them – despite the evidence that it won’t. It’s hard to see that longer sentences are likely to make any real difference at all to ‘trolling’.

There are no silver bullets…

The problem is that there really isn’t a simple answer to the various things that are labelled ‘trolling’. A change in law won’t make a difference on its own. A change in technology won’t make a difference on its own – those who think that better enforcement by Twitter itself will make everything OK are sadly far too optimistic. What’s more, any tools – legal or technological – can be used by the wrong people in the wrong way as well as by the right people in the right way. Put in a better abuse reporting system and the ‘trolls’ themselves will use it to report their erstwhile ‘victims’ for abuse. What used to be called ‘flame wars’, where two sides of an argument continually accuse each other of abuse, still exist. Laws will be misused – the Twitter Joke Trial is just one example of the prosecutors really missing the point.

There is no simple ‘right’ answer. The various problems lumped together under the vague and misleading term ‘trolling’ are complex societal problems – so solving them is a complex process. Making the law work better is one tiny part – and that doesn’t mean just making it harsher. Indeed, my suspicion is that the kind of pronouncement that Chris Grayling made is likely to make things worse, not better: it doesn’t help understanding at all, and understanding is the most important thing. If we don’t know what we mean by the term ‘troll’, and we don’t understand why people do it, how can we solve – or at least reduce – the problems that arise?

Posturing – and obscuring

The thing is, I’m not convinced that the politicians necessarily even want to solve these problems. Internet trolls are very convenient villains – they’re scary, they’re hidden, they’re dangerous, they’re new, they’re nasty. It’s very easy for the fog of fear to be built up very quickly when internet trolling comes up as a subject. Judy Finnigan’s original (and in my view deeply offensive) remarks about Ched Evans’ rape conviction have been hidden under this troll-fog. Trolls make a nice soundbite, a nice headline – and they’re in some ways classical ‘folk devils’ upon which to focus anger and hate. Brenda Leyland’s death was a stark reminder of how serious this can get. A little more perspective, a little more intelligence and a little less posturing could really help here.