How not to reclaim the internet…

The new campaign to ‘Reclaim the Internet’, to ‘take a stand against online abuse’, was launched yesterday – and it could be a really important campaign. The scale and nature of abuse online is appalling – and it is good to see that the campaign does not focus on just one kind of abuse, instead talking about ‘misogyny, sexism, racism, homophobia, transphobia’ and more. There is more than anecdotal evidence of this abuse – even if the methodology and conclusions of the particular Demos survey used at the launch have been subject to significant criticism: the forensic dissection by Dr Claire Hardaker of Lancaster University is well worth a read. None of this is to suggest that this kind of abuse is not hideous or that it should not be taken seriously. It should – but great care needs to be taken, because the risks attached to many of the potential strategies to ‘reclaim the internet’ are very high indeed. Many of them would have precisely the wrong effect: silencing exactly those voices that the campaign wishes to have heard.

Surveillance and censorship

Perhaps the biggest risk is that the campaign is used to enable and endorse those twin tools of oppression and control, surveillance and censorship. The idea that we should monitor everything to try to find all those who commit abuse or engage in sexism, misogyny, racism, homophobia and transphobia may seem very attractive – find the trolls, root them out and punish them – but building a surveillance infrastructure and making it seem ‘OK’ is ultimately deeply counterproductive for almost every aspect of freedom. Evidence shows that surveillance chills free speech, discourages people from seeking out information, associating and assembling with people and more – as well as enabling discrimination and exacerbating power differences. Surveillance helps the powerful to oppress the weak – so should be avoided except in the worst of situations. Any ‘solutions’ to online abuse that are based around an increase in surveillance need a thorough rethink.

Censorship is the other side of the coin – and it works with surveillance to let the powerful control the weak. Again, huge care is needed to make sure that attempts to ‘reclaim’ the internet don’t become tools to enforce orthodoxy and silence voices that don’t ‘fit’ the norm. Freedom of speech matters most precisely when that speech might offend and upset – it is easy to give those you like the freedom to say what they want, much harder to give that freedom to those you disagree with. It’s a very difficult area – because if we want to reduce the impact of abuse, that must mean restricting abusers’ freedom of speech – but it must be navigated very carefully, and tools must not be created that make it easy to silence those who merely disagree with people rather than those who abuse them.

Real names

One particularly important trap not to fall into is that of demanding ‘real names’: it is a common idea that the way to reduce abuse is to prevent people being anonymous online, or to ban the use of pseudonyms. Not only does this not work, but it, again, damages many of those whom the idea of ‘reclaiming the internet’ is intended to support. Victims of abuse in the ‘real’ world, people who are being stalked or victimised, whistleblowers and so forth need pseudonyms in order to protect themselves from their abusers, stalkers, enemies and so on. Force ‘real names’ on people, and you put those people at risk. Many will simply not engage – chilled by the demand for real names and the fear of being revealed. That’s even without engaging with the huge issue of the right to define your own name – and the joy of playing with identity, which for some people is one of the great pleasures of the internet, from parodies to fantasies. Real names are another way that the powerful can exert their power over the weak – it is no surprise that the Chinese government is one of the most ardent supporters of forcing real names on the internet. Any ‘solution’ to reclaiming the internet that demands or requires real names should be fiercely opposed.

Algorithms and errors

Another key mistake to be avoided is over-reliance on algorithmic analysis – particularly of the content of social media posts. This is one of the areas in which the Demos survey lets itself down – it makes assumptions about the ability of algorithms to understand language. As Dr Claire Hardaker puts it:

“Face an algorithm with messy features like sarcasm, threats, allusions, in-jokes, novel metaphors, clever wordplay, typographical errors, slang, mock impoliteness, and so on, and it will invariably make mistakes. Even supposedly cut-and-dried tasks such as tagging a word for its meaning can fox a computer. If I tell you that “this is light” whilst pointing to the sun you’re going to understand something very different than if I say “this is light” whilst picking up an empty bag. Programming that kind of distinction into a software is nightmarish.”

This kind of error is bad enough in a survey – but some of the possible routes to ‘reclaiming the internet’ include using this kind of analysis to identify offending social media comments, or even to automatically block or censor them. Indeed, much internet filtering works that way – one of the posts on this blog, commenting on ‘porn blocking’, was itself blocked by a filter because it used words relating to pornography a number of times. Again, reliance on algorithmic ‘solutions’ to reclaiming the internet is very dangerous – and could end up stifling conversations, reducing freedom of speech and much more.
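To make the over-blocking problem concrete, here is a toy sketch of the kind of naive keyword filter described above. The word list and example post are invented for illustration – no real filter is quite this crude, though many share the underlying flaw of matching words with no regard for context:

```python
# A minimal sketch of naive keyword-based filtering.
# BLOCKLIST and the example post are hypothetical.

BLOCKLIST = {"porn", "pornography", "xxx"}

def is_blocked(post: str) -> bool:
    """Flag a post if any blocklisted word appears, regardless of context."""
    words = {w.strip(".,!?:;'\"").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

# A critique of porn blocking trips the filter just as surely as
# the material the filter was meant to catch: a false positive.
critique = "Why pornography filters over-block: porn is not the real issue"
print(is_blocked(critique))  # True - commentary blocked as if it were porn
```

Context-blind matching of this kind is exactly how a blog post criticising porn blocking ends up blocked as pornography.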

Who’s trolling who? Double-edged swords…

One of the other major problems with dealing with ‘trolls’ (the quotation marks are entirely intentional) is that in practice it can be very hard to identify them. Indeed, in conflicts on the internet it is common for both sides to believe that the other side is the one doing the abuse, that the other side are the ‘trolls’, and that they themselves are the victims who need protecting. Anyone who observes even the most one-sided of disputes should be able to see this – from GamerGate to some of the conflicts over transphobia. Not many of those whom others would consider ‘trolls’ would consider themselves to be trolls.

The tragic case of Brenda Leyland should give everyone pause for thought. She was described and ‘outed’ as a ‘McCann troll’ – she tweeted as @Sweepyface and campaigned, as she saw it, for justice for Madeleine McCann, blaming Madeleine’s parents for her death. Sky News reporter Martin Brunt doorstepped her, and days later she was found dead, having committed suicide. Was she a ‘troll’? Was the media response to her appropriate, proportionate, or positive? These are not easy questions – because this isn’t an easy subject.

Further, one of the best defences of a ‘troll’ is to accuse the person they’re trolling of being a troll – and that is something that should be remembered whatever the tools you introduce to help reduce abuse online. Those tools are double-edged swords. Bring in quick and easy ways to report abuse – things like immediate blocking of social media accounts when those accounts are accused of being abusive – and you will find those tools being used by the trolls themselves against their victims. ‘Flame wars’ have existed pretty much since the beginning of the internet – any tools you create ‘against’ abuse will be used as weapons in flame wars in the future.

No quick fixes and no silver bullets

That should remind us of the biggest point here. There are no quick fixes to this kind of problem. No silver bullets that will slay the werewolves, or magic wands that will make everything OK. Technology often encourages the feeling that if only we created this one new tool, we could solve everything. In practice, it’s almost never the case – and in relation to online abuse this is particularly true.

Some people will suggest that it’s already easy. ‘All you have to do is block your abuser’ is all very well, but if you get 100 new abusive messages every minute you’ll spend your whole time blocking. Some will say that the solution is just not to feed the trolls – but many trolls don’t need any feeding at all. Others may suggest that people are just whining – none of this really hurts you, it’s just words – but that’s not true either. Words do hurt – and most of those suggesting this haven’t been subject to the kind of abuse that happens to others. What’s more, the chilling effect of abuse is real – if you get attacked every time you go online, why on earth would you want to stay online?

The problem is real, and needs careful thought and time to address. The traps involved in addressing it – and I’ve mentioned only a few of them here – are also real, and need to be avoided and considered very carefully. There really are no quick fixes – and it is really important not to raise false hopes that it can all be solved quickly and easily. That false hope may be the biggest trap of all.

Dear Tristram Hunt

Dear Tristram Hunt

I was very interested to read about your speech at the University of Sheffield last night – sorry not to have been able to attend, but having read various reports, including some tweeted by your good self, I wonder if you have really understood some of the issues you’re discussing. I mean, there is a great deal that I agree with in what you say, but there is one particular issue that you have highlighted that I suspect needs more careful analysis: the role of social media, and of Twitter in particular.

You are quoted as saying that the Labour Party pays too much attention to the ‘narrow online world of Twitter’, and that ‘What the algorithms which underpin our digital lives do is take information about us and fire similar information back at us’. There is a good deal of truth in that – indeed, academics and other experts have been discussing the issue for some time. Professor Cass Sunstein, in his seminal work ‘Republic.com’, raised the issue of political polarisation within online communities back in 2001. Eli Pariser’s ‘The Filter Bubble’ in 2011 addressed the effect of Google algorithms on what we see and don’t see on the net, while my own Internet Privacy Rights in 2014 discusses what I call ‘Back-door Balkanisation’, through which communities are automatically polarised by the combination of Google algorithms, invasions of privacy and the desires of commercial enterprises. It is a known effect, albeit one known within fairly narrow communities. It is not, however, as simple as ‘algorithms firing back similar information at us’: it is more complex than that, and I’d recommend some serious study in the area.

Most importantly, it is not something to be afraid of, but something to be understood and to be harnessed. It is something powerful and important – and something modern that you, as a self-proclaimed ‘moderniser’ should embrace. It is a feature of online communities that isn’t going away, either, no matter how many speeches are made against it, or how many articles are written about it in the Spectator or the New Statesman.

You see, there are two fundamental problems with dismissing the ‘narrow online world’: firstly that it consists of real people, and secondly that those people are likely to be exactly the politically engaged people who are crucial in getting a political party moving – particularly a party like the Labour Party, which doesn’t have the mainstream media on its side and doesn’t have massive donations from vested interests. Labour needs its activists, and those activists are more likely than most to use social media. The clue is in the ‘social’. Dismissing social media means dismissing the very people that you need on your side.

The fact that you and the other ‘modernisers’ dismiss the online world is sadly characteristic of your problems in the Labour leadership contest: a misreading of the nature of the contest. Many ‘modernisers’ seemed to think they were fighting a general election, trying to win the middle ground, to persuade the readers of the Daily Mail that their candidates were the best – when the contest was actually with Labour members and activists. Those members and activists were far from persuaded by the appeals to the Daily Mail. They were actively put off by the appearance of Tony Blair, by the interventions of John McTernan (calling the nominators of Corbyn morons, for example) and by the suggestions that anyone voting for Corbyn was stupid. In your speech, Tristram, you suggest that Labour is losing touch with the voters – why did you not apply that logic to the leadership contest? It was the self-styled ‘modernisers’ and ‘moderates’ who had lost touch with the voters in the leadership contest – and they seemed to have forgotten who those voters actually were.

And that brings me back to the online world, in its narrow, polarised, echo-chamber form. As I noted at the start, it is true that this effect can and does happen. However, it happens only when there are voices to echo, and when those echoes resonate. That is what happened with Corbyn and his enormous victory both in the social media and in the leadership contest. His words and views resonated within the relevant community, and gained power as a result.

The lesson to learn is not that this is irrelevant and should be avoided – but, as I said earlier, that it should be understood and harnessed. In some situations – and a leadership election is one of them – it is critical, and if the ‘modernisers’ had been modern enough to understand the online world they might have done a lot better in that contest. The online world can have great power and effect in some situations. It works really well for some forms of activism – and the ‘echo-chamber’ effect is actually one of the reasons for that.

That doesn’t mean, of course, that it is the only tool, or that this lesson means we should spend all our time and effort in online campaigning. The ‘Twitter bubble’ is a bubble, just as the ‘Westminster bubble’ is a bubble, and the ‘media bubble’ is a bubble. Social media has its place, just as focus groups have their place, and working with the mainstream media has its place. Each has strengths and weaknesses, and different uses at different times. Each should be used with a huge pinch of salt – but should be used. Labour, and you and your fellow ‘modernisers’, need to understand that. Don’t dismiss the online world. If you are truly a ‘moderniser’ you should embrace it, understand it, and engage with it. Don’t treat Twitter as somewhere to broadcast your views, but as the interactive and responsive medium that it can be at its best. Then you might harness its power rather than fear it.

Kind regards

Paul Bernal

P.S. There are a great many people on Twitter and elsewhere who have the best interests of the Labour Party very much at heart, and who would be not only willing but able to help you and others with better engagement and understanding of the often unruly and sometimes intimidating online world. I am one – and having recently rejoined Labour I would be very happy to do my bit.

#TweetlikeanMP?

A year or two back, the hashtag #TweetlikeanMP trended – and it was fun. Inane tweets about meeting and greeting constituents, about party loyalty, about attending crucial meetings with business groups, lovely photo opportunities and so on. It was funny because it was, to a great extent, true – and because it revealed something about the way that our politics works. It also showed how badly MPs generally used Twitter – how they missed the opportunities that Twitter provides, opportunities to genuinely engage with their constituencies, to listen as well as to broadcast to the populace how wonderful they are. Opportunities to show that they’re human – and not just this remote, elite group looking down on the rest of us.

In the last couple of years, I’ve ‘met’ a fair few MPs who have been able to do it differently – to understand how Twitter can really work, and to engage with it. My own MP, Julian Huppert, is one of them – in practice, tweeting ‘to’ him is the best way to engage with him. I get answers – and real ones – most of the time, and I get the sense that he’s actually listening. He’s not alone – and it’s not been, so far, a party thing. I’ve engaged with MPs of all parties online, and of a wide range of views within each party: Michael Fabricant, Jamie Reed and Caroline Lucas amongst others – and members of the House of Lords from Ralph Lucas and Meral Hussein-Ece to Steve Bassam. I’ve even exchanged tweets with Nigel Farage. It felt as though Twitter gave an opportunity to reach out to politicians, and to actually engage with them…

….which is one of the reasons I’m deeply saddened by what happened to Emily Thornberry last night. It’s not that I think her tweeted picture was anything but foolish, ill-judged, insensitive and revealing. It was all of those things… but the consequences are likely to be that MPs will retreat into their shells on social media. The way that she resigned just a few hours after ‘the’ tweet will have sent shivers down the spines of MPs across the spectrum – and party whips will be, well, cracking the whip, to keep their MPs in line for these next six months. We’ll see less humanity, less engagement, less humour – and much more ‘tweeting like an MP’ from everyone. An opportunity for politics to become more engaged will be lost – and at a time when the detachment of MPs from ordinary people is one of the main problems in politics, as Thornberry’s tweet sadly shows.

Of course there are other reasons to find yesterday’s turn of events saddening – from the level of abuse that Thornberry got on Twitter (regardless of what you think of the tweet, abuse like that is deeply unpleasant) to the fact that we’ve lost another woman from frontline politics, and another of those increasingly rare lawmakers who actually understands law has departed for the backbenches, at a time when parliament is trying to put through legal absurdities like Chris Grayling’s ‘Social Action, Responsibility and Heroism Bill’ (SARAH).

Don’t misunderstand me – in the circumstances I’m not at all surprised that Thornberry resigned, and I do understand why Labour MPs like Chris Bryant were so sure that she was right to do so. I do, however, think that the consequences may be wider than we suspect – and one part is that we’ll see far more MPs just tweeting like MPs, not like human beings. That, regardless of the rest, is sad.

Samaritans Radar: understanding how people use twitter…

On the Samaritans website, in a recent ‘update’ on Samaritans Radar, they note:

“We understand that there are some people who use Twitter as a broadcast platform to followers they don’t know personally, and others who use Twitter to communicate with friends. Samaritans Radar is aimed particularly at Twitter users who are more likely to use Twitter to keep in touch with friends and people they know.”

So the people behind Samaritans Radar – and I don’t believe for a moment that this is the Samaritans as a whole – think that there are basically two modes of usage of Twitter: broadcasting information to people you don’t know, and communicating with friends. Now I’m a pretty prolific Twitter user – I’m closing in on 150,000 tweets – but I would say that even now I’ve only scratched the surface of the possible uses of Twitter, and the possible ways to use it. I’ve developed my own way of using Twitter – and I suspect pretty much everyone who uses Twitter has done the same. Indeed, that’s one of the great things about Twitter: it’s relatively non-prescriptive. There’s no particular ‘way’ to use Twitter – there are an infinite number of ways. Just off the top of my head, I can think of a whole range of distinctly different reasons that I use Twitter.

  1. To keep up with the news – people I follow post links to fascinating stories, often far faster than mainstream media news
  2. To get updates from people I know in a professional capacity – I’m an academic, working in law and privacy, and there’s a great community of legal and privacy people on Twitter.
  3. To publicise my blog – it’s the best way to get readers (and yes, that fits the broadcast platform idea)
  4. To make contacts – some become friends, some are professional, some both
  5. To exchange ideas with people that I know – and with people that I don’t know. These may be work ideas, or just general ideas
  6. To live-tweet events that I’m attending, to allow those not present to learn what’s happening
  7. To have fun! I play hashtag games, watch silly videos, make jokes and so on.
  8. To follow live events and programmes – following BBC’s Question Time via the #bbcqt hashtag is much more fun than watching the real thing
  9. To have political arguments – some of my ‘favourites’ at the moment are fights with UKIP supporters…
  10. To let off steam – when I’m angry or annoyed about something
  11. To express pleasure – if I’ve enjoyed something, I like to say so! Watching a good TV programme, for example
  12. To access and read material about subjects I’m interested in
  13. To follow my football team (the mighty Wolverhampton Wanderers)
  14. To support people I like – whether they’re friends or not
  15. To Retweet tweets or links to blogs from people that I like – and new blogs that I’ve not found before. A kind of ‘blog-networking’
  16. To spread interesting stories to the people that follow me
  17. To keep in touch with friends (yes, that fits the Samaritans idea) and to be there when they need support
  18. To feel in contact with current events and issues – not just news, but the ‘buzz’
  19. To experiment with ideas…
  20. To crowdsource the answer to questions – ‘ask twitter’
  21. Creating online performance art! (h/t @SusanhallUK)
  22. Just to see what happens – something wonderful might! Serendipity (h/t @LizIxer)
  23. Getting helpful corrections to blog posts!
  24. Receiving/conveying first-hand information from people ‘on the scene’ regarding events in the news (#Ferguson, say) (via @Doremus42)

This is only part of it – the ones that I could think of in a few minutes – and they overlap, merge, combine and produce new things all the time. I have around 9,000 followers, and follow around 7,500 people, and the relationships I have with each of them vary immensely. Some I know in ‘real life’ and consider my friends. Some are colleagues. Some I know well online but have never met. Some I have no idea about at all, but it seemed like a good thing at the time to follow them – or, presumably, they thought it might be interesting to follow me. Some are my political ‘allies’, some very much my opponents. Some I will tweet personally with, others I will just exchange professional information with. Some I will tease – and some I will offend immensely. I try to be sensitive – but often fail. What I do know, though, is that there’s no one way to use Twitter. There’s no prescriptive model. Twitter is particularly adaptable…

…which is one of the reasons it’s particularly suitable for many people with mental health issues. People can use Twitter as they want to, and find a way that suits them, their own personality, their own views, their own way of being. And that’s one of the many reasons that ideas like Samaritans Radar are misconceived. As set out in their update, they have a particular model in mind – and have not properly considered that this model is only one of a vast range of possibilities. Their idea fits their preconception – but it doesn’t fit the ways that other people use Twitter. And when those other people – particularly people who are vulnerable – have other ways of using Twitter, their ideas don’t fit, and end up being potentially deeply damaging. Further, when the Samaritans fail to listen to exactly those people when those people say ‘that’s not how it is for me’, they make things worse. Far worse.

Again, I’d like to appeal to the Samaritans to reconsider this whole project. Withdraw it now, and have a rethink. An organisation that listens should be able to do that.

Samaritans Radar: misunderstanding privacy and ‘publicness’

The furore over the launch of the Samaritans Radar app has many dimensions: whether it’s ethical, whether it will help, whether it will chill – putting vulnerable people off using Twitter – and whether it’s legal (there are huge data protection issues) are just a start. Many excellent pieces have been written about it from all these angles, and they almost all leave me thinking that the whole thing is misconceived, however positive its motivations may be.

I’m not going to go over all of these, but want to look at one particular angle where it seems to me that the creators of the app have made a fundamental error. To recap: once someone authorises the Samaritans Radar app, the app automatically scans the tweets of all the people that person follows, looking for potentially worrying words or phrases – triggers that suggest that the tweeter may be at risk. The tweeter does not know that their tweets are being scanned, as it is only the person who authorised the app whose consent has been sought – and it’s important to remember that we don’t generally have control over who follows us. Yes, we can block people, but that often seems an overly aggressive act. I very rarely block, for example.

The logic behind the Samaritans Radar approach to privacy is simple: tweets are ‘public’, therefore they’re fair game to be scanned and analysed. Their response to suggestions that this might not be right is that people always have the option of making their twitter accounts private – thus effectively locking themselves out of the ‘public’ part of Twitter. On the surface this is logical – but only if you think that ‘private-public’ is a two-valued, black-and-white issue. Either something is ‘public’ and available to all, or it’s ‘private’ and hidden. Privacy, both in the ‘real’ world and on Twitter, doesn’t work like that. It’s far more complex and nuanced than that – and anyone who thinks in those simple terms is fundamentally misunderstanding privacy.

The two extremes are fairly obvious. If you sit in a TV studio on a live programme being broadcast to millions, everything you say is clearly public. If you’re in a private, locked room with one other person, and have sworn them to secrecy, what you say is clearly private. Between the two, however, there is a whole spectrum, and defining precisely where things fit is hard. You can have an intimate, private conversation in a public place – whispering to a friend in a pub, for example. Anyone who’s been to a football match, or been on a protest march, knows theoretically that it’s a public place, but might well have private conversations, whether wisely or not. Chatting around the dinner table when you don’t know all the guests – where would that fit in? In law, we can analyse what we call a ‘reasonable expectation of privacy’, but it’s not always an easy analysis – and many people who might be potentially interested in the Samaritans should not be expected to understand the nuances of the law, or even the technicalities of Twitter.

On Twitter, too, we have very different expectations of how ‘visible’ or obscure what we tweet might be. We’re not all Stephen Fry, with millions of followers and an expectation that everything we write is read by everyone. Very much the opposite. We know how many followers we have – and some might assume, quite reasonably, that this is a fair representation of how many people might see our Tweet. It’s very different having 12 followers to having 12 million – and there are vastly more at the bottom end. Indeed, analysis at the end of 2013 suggested that 60% of active Twitter accounts have fewer than 100 followers, and 97% have fewer than 1000. That, to start with, suggests that most Twitter users might quite reasonably imagine that their tweets are only seen by a relatively small number of people – particularly as at any time only a fraction of those who follow you may be online and bother to read your tweet.

Further, not all tweets are equally visible – and experienced tweeters should know that. There are ways to make your tweets a little more intimate, and ways to make them more easily visible. If you tweet in response to someone, and leave their Twitter tag at the start of the tweet, it will only appear on the timelines of people who follow both you and the person you are responding to. That’s why people sometimes put a ‘.’ in front of the tag.

A tweet like this, for example, would only be immediately visible to myself and the first tweeter named, and people who follow both of us, which is not likely to be a very large number.

[Screenshot of an example reply tweet, addressed to @ABeautifulMind1]

If I had put a ‘.’ (or indeed any other characters) in front of @ABeautifulMind1, it would have been visible to all of the 9,000+ people who follow me. I made the decision not to do that – choosing to limit the visibility of the tweet. Having a semi-private conversation in a very public forum. Of course other people could find the tweet, but it would be harder – just as other people could hear a conversation on a public street, but it would be harder.

You can do the reverse, and try to make your tweet more rather than less visible. Adding a hashtag, for example, highlights the tweet to people following that hashtag – live tweeting my anger at BBC Question Time by adding the hashtag #bbcqt, for example. I could mention the name of a prominent tweeter, in the hope that they would read the tweet and choose to re-tweet it to their thousands or millions of followers. I could even ‘direct-message’ someone asking them to retweet my tweet as a special favour. All of these things can and do change the visibility – and, in effect, the publicness of the tweet.
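The visibility mechanics described in the last two paragraphs can be modelled very roughly in code. This is a simplification of old-style Twitter behaviour, with invented handles and follower sets – not a description of any real API:

```python
# Rough model of the reply-visibility rule discussed above:
# a tweet *beginning* with an @-mention only appears on the timelines
# of people who follow both the author and the mentioned account;
# any character before the mention (such as '.') restores full reach.
# All names and follower sets here are invented for illustration.

def timeline_audience(tweet: str, author_followers: set, mentioned_followers: set) -> set:
    if tweet.startswith("@"):
        return author_followers & mentioned_followers  # mutual followers only
    return author_followers  # ordinary tweet: all of the author's followers

my_followers = {"alice", "bob", "carol"}
their_followers = {"bob", "dave"}

print(timeline_audience("@friend thanks!", my_followers, their_followers))   # {'bob'}
print(timeline_audience(".@friend thanks!", my_followers, their_followers))  # all three
```

The point of the sketch is simply that a single leading character changes the effective ‘publicness’ of the same words – which is why ‘public or private’ is the wrong question to ask of a tweet.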

Some people will understand all this. Some people won’t. Some people will have the two-valued idea about privacy that seems to underlie the Samaritans Radar logic – but, by both their thoughts and their actions, most people are unlikely to. We don’t all guard our thoughts on Twitter – indeed, that’s part of its attraction, and part of its benefit for people with mental health issues, or indeed people potentially interested in the services of the Samaritans. Many people use Twitter for their private conversations in the pub – and that’s great. Anyone who uses Twitter often, and anyone with any understanding of vulnerable people, should know that – and see beyond the technical question of whether a tweet is ‘public’ or not.

The Samaritans responded to some of these questions, after their initial and depressing ‘you can lock your account’ response, by suggesting that people could join a ‘white list’ that says their tweets should not be scanned by Samaritans Radar – but that doesn’t just fail to solve the real issue, it might even exacerbate it. First of all, you have to be aware that you’re being scanned in order to want to be on the white list. Secondly, you’re adding yourself to a list – and not only is that list potentially vulnerable (both to misuse and to being acquired, somehow, by people with less than honourable motives), but the very idea of being added to yet another list is off-putting in the extreme. Anyone with negative experiences of the mental health services, for example, would immediately worry that being on that list marks them out as ‘of interest’. We don’t like lists, and with good reason.

At the very least, the system should be the other way around – you should have to actively ‘opt-in’ to being scanned. Having an opt-in system would be closer to the Samaritans’ role: the person would say ‘please, watch me, look after me’, as though they were phoning Samaritans. Even then, it’s far from perfect, as a decision to let people watch you at one point may not be relevant later. People’s minds change, their sensitivity changes, their level of trust changes. They should be able to revoke that decision to be watched – but even making them do that could be a negative. Why should it be up to them to say ‘stop scanning me’? With sensitive, vulnerable people, that could be yet another straw on the camel’s back.
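The difference between the two consent models argued over here can be stated very simply in code. This is a hypothetical sketch of the logic, not the app’s actual implementation – all names are invented:

```python
# Hypothetical contrast between opt-out (the 'white list' approach)
# and opt-in (what the paragraph above argues for). Neither reflects
# the real Samaritans Radar code.

opted_out = set()  # opt-out: users must learn of the scanning, then object
opted_in = set()   # opt-in: users are only scanned if they actively ask

def may_scan_opt_out(user: str) -> bool:
    return user not in opted_out  # default: everyone is scanned

def may_scan_opt_in(user: str) -> bool:
    return user in opted_in  # default: no one is scanned

# Under opt-out, a user who has never heard of the app is scanned anyway.
print(may_scan_opt_out("unaware_user"))  # True
print(may_scan_opt_in("unaware_user"))   # False
```

The asymmetry is the whole point: under opt-out, the burden of knowledge and action falls on exactly the vulnerable people the service is meant to help.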

Personally, I’d like the Samaritans to withdraw the app and have a rethink. This isn’t just a theoretical exercise, or a bit of neat technology – these are real issues for real people. It needs sensitivity, it needs care, it needs a willingness to admit ‘Oh, we hadn’t realised that, and we were wrong.’ With Samaritans Radar, I think the Samaritans have really got it wrong, in many ways. The privacy and publicness issue is just one of them. It does, however, add weight to the feeling that this whole idea was misconceived.

Trolls, threats, the law and deterrence…

[Image: a troll, from the Norwegian film ‘Trollhunter’]

“Internet trolls face up to two years in jail under new laws” screamed the headline on the BBC’s website yesterday, after Chris Grayling decided to “take a stand against a baying cyber-mob”. It’s not the first time that so-called ‘trolls’ have been made the subject of a government ‘stand’ – and a media furore. This particular one arose after TV presenter Chloë Madeley suffered online abuse – that abuse itself triggered by the comments about rape made by her mother, Judy Finnigan, also a TV presenter, on Loose Women.

Twitter ‘trolls’ seem to be a big theme at the moment. Just a few weeks ago we had the tragic case of Brenda Leyland, who it appears committed suicide after being doorstepped by Sky News, accused of ‘trolling’ the parents of Madeleine McCann. A month ago, Peter Nunn was jailed for 18 weeks after a series of abusive tweets aimed at MP Stella Creasy. There are others – not forgetting the ongoing saga of GamerGate (one of my favourite posts on this is here), though that seems to be far bigger news in the US than it is here in the UK. The idea of a troll isn’t something new, and it doesn’t seem to be going away. Nothing’s very clear, though – and what I’ve set out below is very much my personal view.

What is a troll?

There’s still doubt about where the term comes from. It’s not clear that it refers to the kind of beast in the picture above – from the weirdly wonderful Norwegian film ‘Trollhunter’. A few years ago, I was certain it came from a totally different source – ‘trolling’, a kind of fishing where you trail a baited line behind your boat, hoping that some fish comes along and bites it – but I understand now that even that’s in doubt. Most people think of monsters – perhaps hiding under bridges, ready to be knocked off them by billy goats, or perhaps huge, stupid Tolkienian hulks – but what they are on the internet seems very contentious. In the old days, again, trolls were often essentially harmless – teasing, annoying, trying to get a rise out of people. The kind of thing that I might do on Twitter by writing a poem about UKIP, for example – but what happens now can be quite different. The level of nastiness can get seriously extreme – from simple abuse to graphic threats of rape and murder. The threats can be truly hideous – and, from my perspective at least, if you haven’t been a victim of this kind of thing, it’s not possible to really understand what it’s like. I’ve seen some of the tweets – but only a tiny fraction, and I know that what I’ve seen is far from the worst.

The law

The first thing to say is that Grayling’s announcement doesn’t actually seem to be anything new: the ‘quadrupling of sentences’ was brought in in March this year, as an amendment to the Malicious Communications Act 1988. This is just one of a number of laws that could apply to some of the activities that are described as ‘trolling’. Section 127 of the Communications Act 2003 is another, which includes the provision that a person is guilty of an offence if he: “sends by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene or menacing character”. The infamous ‘Twitter Joke Trial’ of Paul Chambers was under this Act. There have also been convictions for social media posting under the Public Order Act 1986 Section 4A, which makes it an offence to “…with intent to cause a person harassment, alarm or distress… use[s] threatening, abusive or insulting words or behaviour, or disorderly behaviour, or …displays any writing, sign or other visible representation which is threatening, abusive or insulting”. Then there’s the Protection from Harassment Act 1997, and potentially Defamation Law too (though that’s civil rather than criminal law). The law does apply to the internet, and plenty of so-called ‘trolls’ have been prosecuted – and indeed jailed.

What is a threat?

One of the most common reactions that I’ve seen when these issues come up is to say that ‘threats’ should be criminalised, but ‘offensive language’ should not. It’s quite right that freedom of speech should include the freedom to be offensive – if we only allow speech that we agree with, that’s not freedom of speech at all. The problem is that it’s not always possible to tell what is a threat and what is just an offensive opinion – or even a joke. If we think jokes are ‘OK’, then people who really are threatening and offensive will try to say that what they said was just a joke – Peter Nunn did so about his tweets to Stella Creasy. If we try to set rules about what is an opinion and what is a threat, we may find that those who want to threaten couch their language in a way that makes it possible to argue that it’s an opinion.

For example, tweeting to someone that you’re going to rape and murder them is clearly a threat, but tweeting to a celebrity who’s had naked pictures leaked onto the internet that ‘celebrities who take naked pictures of themselves deserve to be raped’ could, potentially, be argued to be an opinion, however offensive. And yet in reality it would almost certainly be a threat. A little ‘cleverness’ with language can mask a hideous threat – a threat with every bit as nasty an effect on the person receiving it. It’s not just the words, it’s the context, it’s the intent. It’s whether it’s part of a concerted campaign – or a spontaneous Twitter storm.

One person’s troll is another person’s freedom fighter…

The other thing that’s often missed here is that many (perhaps most) so-called trolls wouldn’t consider themselves to be trolls. Indeed, quite the opposite. A quick glance at GamerGate shows that: many of those involved think they’re fighting for survival against forces of oppression. There’s the same story elsewhere: those involved in the so-called ‘trolling’ of the McCanns would (and do) say that they’re campaigning to expose a miscarriage of justice, to fight on behalf of a dead child. Whether someone’s a terrorist or a freedom fighter can depend on the perspective – and that means that laws presented in the terms Grayling used are even less likely to have any kind of deterrent effect. If you don’t consider yourself a troll, why would a law against trolls have any impact?

Whether increasing sentences has any deterrent effect to start with is also deeply questionable. Do those ‘trolling’ even consider the possible sentence? Do they know that what they’re doing is against the law? Even with the many laws enumerated above, and the series of convictions under them, many seem to think that the law doesn’t really apply on the internet. Many believe (falsely) that their ‘anonymity’ will protect them – despite the evidence that it won’t. It’s hard to see that sentences are likely to make any real difference at all to ‘trolling’.

There are no silver bullets…

The problem is that there really isn’t a simple answer to the various things that are labelled ‘trolling’. A change in law won’t make the difference on its own. A change in technology won’t make a difference on its own – those who think that better enforcement by Twitter themselves will make everything OK are sadly far too optimistic. What’s more, any tools – legal or technological – can be used by the wrong people in the wrong way as well as by the right people in the right way. Put in a better abuse reporting system and the ‘trolls’ themselves will use it to report their erstwhile ‘victims’ for abuse. What used to be called ‘flame wars’ – where the two sides of an argument continually accuse each other of abuse – still exist. Laws will be misused – the Twitter Joke Trial is just one example of the prosecutors really missing the point.

There is no simple ‘right’ answer. The various problems lumped together under the vague and misleading term ‘trolling’ are complex societal problems – so solving them is a complex process. Making the law work better is one tiny part – and that doesn’t mean just making it harsher. Indeed, my suspicion is that the kind of pronouncement that Chris Grayling made is likely to make things worse, not better: it doesn’t help understanding at all, and understanding is the most important thing. If we don’t know what we mean by the term ‘troll’, and we don’t understand why people do it, how can we solve – or at least reduce – the problems that arise?

Posturing – and obscuring

The thing is, I’m not convinced that the politicians necessarily even want to solve these problems. Internet trolls are very convenient villains – they’re scary, they’re hidden, they’re dangerous, they’re new, they’re nasty. It’s very easy for the fog of fear to be built up very quickly when internet trolling comes up as a subject. Judy Finnigan’s original (and in my view deeply offensive) remarks about Ched Evans’ rape conviction have been hidden under this troll-fog. Trolls make a nice soundbite, a nice headline – and they’re in some ways classical ‘folk devils’ upon which to focus anger and hate. Brenda Leyland’s death was a stark reminder of how serious this can get. A little more perspective, a little more intelligence and a little less posturing could really help here.