A great deal has been said already about the Twitter abuse issue – and I suspect a great deal more will be said, because this really is an important issue. The level and nature of the abuse that some people have been receiving – not just now, but pretty much as long as Twitter has existed – has been hideous. Anyone who suggests otherwise, or who suggests that those receiving the abuse, the threats, should get ‘thicker skins’, or shrug it off, is, in my opinion, very much missing the point. I’m lucky enough never to have been a victim of this sort of thing – but as a straight, white, able-bodied man I’m not one of the likely targets of the kind of people that generally perpetrate such abuse. It’s easy, from such a position, to tell others that they should rise above it. Easy, but totally unfair.
The effect of this kind of abuse, this kind of attack, is to stifle speech: to chill speech. That isn’t just bad for Twitter, it’s bad for all of us. There are very good reasons that ‘free expression’ is considered one of the most fundamental of human rights, included in every human rights declaration and pretty much every democratic country’s constitution. It’s crucial for holding the powerful to account – whether they be governments, companies or just powerful individuals.
Free speech, however, does need protection, moderation, if it is to avoid becoming just a shouting match, won by those with the loudest voice and the most powerful friends – so everywhere, even in the US, there are laws and regulations that make some kinds of speech unacceptable. How much speech is unacceptable varies from place to place – direct threats are unacceptable pretty much everywhere, for example, but racism, bullying, ‘hate speech’ and so forth have laws against them in some places, not in others.
In the UK, we have a whole raft of laws – some might say too many – and from what I have seen, a great deal of the kind of abuse that Caroline Criado-Perez, Stella Creasy, Mary Beard and many more have received recently falls foul of those laws. Those laws are likely to be enforced on a few examples – there has already been at least one arrest – but how can you enforce laws like this on thousands of seemingly anonymous online attackers? And should Twitter themselves be taken to task, and asked to do more about this?
That’s the big question, and lots of people have been coming up with ‘solutions’. The trouble with those solutions is that they, in themselves, are likely to have their own chilling effect – and perhaps even more significant consequences.
The Report Abuse Button?
The idea of a ‘report abuse’ button seems to be the most popular – indeed, Twitter have suggested that they’ll implement it – but it has some serious drawbacks. There are parallels with David Cameron’s nightmarish porn filter idea (about which I’ve blogged a number of times, starting here): it could be done ‘automatically’ or ‘manually’. The automatic method would use some kind of algorithmic solution when a report is made – perhaps the number of reports made in a short time, or the nature of the accounts (number of followers, length of time it has existed etc), or a scan of the reported tweet for key words, or some combination of these factors.
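To make concrete what such an algorithmic approach might look like, here is a minimal sketch of a scoring heuristic combining the signals described above. Every field name, threshold and keyword is my own invention for illustration – Twitter’s actual systems are not public, and this is not their method.

```python
from dataclasses import dataclass

@dataclass
class Report:
    reports_last_hour: int           # how many users flagged this tweet recently
    reporter_account_age_days: int   # age of the reporting account
    tweet_text: str                  # content of the reported tweet

# Crude illustrative keyword list – an assumption, not a real filter.
FLAGGED_WORDS = {"kill", "rape", "die"}

def abuse_score(r: Report) -> float:
    """Combine several weak signals into one score, as the post describes."""
    score = 0.0
    if r.reports_last_hour > 10:           # a sudden burst of reports
        score += 0.4
    if r.reporter_account_age_days < 7:    # brand-new accounts count for less
        score -= 0.2
    words = set(r.tweet_text.lower().split())
    if words & FLAGGED_WORDS:              # naive key-word scan
        score += 0.5
    return score

def should_auto_flag(r: Report, threshold: float = 0.6) -> bool:
    return abuse_score(r) >= threshold
```

Even this toy version shows the problem discussed next: each signal can be manufactured by a coordinated group, so the output is only as trustworthy as the reporters.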
The trouble with these automatic systems is that they’re likely to include some tweets that are not really abusive, and miss others that are. More importantly, they allow for misuse – if you were a troll, you could report your enemies for abuse, even if they were innocent, and get your trollish friends and followers to do the same. Twitterstorms catch the innocent as well as the guilty – and a Twitterstorm, with a report button and an automatic banning system, would mean mob rule: if you’ve got enough of a mob behind you, the torches and pitchforks would have direct effect.
What’s more, the kind of people who orchestrate the sort of attacks suffered by Caroline Criado-Perez, Stella Creasy, Mary Beard and others are likely to be exactly the kind who will be able to ‘game’ an automatic system: work out how it can be triggered, and think it’s ‘fun’ to use it to get people banned. Even a temporary ban while an investigation is going on could be a nightmare.
The alternative to an automated system is to have every report of abuse examined by a real human being – but given that there are now more than half a billion users on Twitter, this is pretty much guaranteed to fail: it will be slow, clunky and disappointing, and people will make mistakes because they’ll find themselves overwhelmed by the number of reports they have to deal with. Twitter, moreover, is a free service (of which more later) and doesn’t really have the resources to deal with this kind of thing. I would like it to remain free, and that’s highly unlikely if it has to pay for a huge ‘abuse report centre’.
There are other, more subtle technological ideas – @flayman’s idea of a ‘panic mode’ which you can go into if you find yourself under attack, blocking all people from tweeting to you unless you follow them and they follow you has a lot going for it, and could even be combined with some kind of recording system that notes down all the tweets of those attacking you, potentially putting together a report that can be used for later investigation.
I would like to think that Twitter are looking into these possibilities – but more complex solutions are less likely to be attractive or to be understood and properly used. Most, too, can be ‘gamed’ by people who want to misuse them. They offer a very partial solution at best – and the broadly-specified abuse button, as I noted above, I suspect will have more drawbacks than advantages in practice. What’s more, as a relatively neutral observer of a number of Twitter conflicts – for example between the supporters and opponents of Julian Assange, or between different sides of the complex arguments over intersectional feminism – it’s sometimes hard to see who is the ‘abuser’ and who is the ‘abused’. With the Criado-Perez, Creasy and Beard cases it’s obvious – but that’s not always so. We need to be very careful not to build systems that end up reinforcing power-relationships, helping the powerful to put their enemies in their place.
Real names?
A second idea that has come up is that we should do more against anonymity and pseudonymity – we should make people use their ‘real’ names on Twitter, so that they can’t hide behind masks. That, for me, is even worse – and we should avoid it at all costs. The fact that the Chinese government are key backers of the idea should ring alarm bells – they want to be able to find dissidents, to stifle debate and to control their population. That’s what real names policies do – because if you know someone’s real name, you can find them in the real world.
Dissidents in oppressive regimes are one thing – but whistleblowers and victims of domestic abuse and violent partners need anonymity every bit as much, as do people who want to be able to explore their sexuality, who are concerned with possible medical problems, who are victims of bullying (including cyberbullying) and even people who are just a bit shy. Real names policies will have a chilling effect on all these people – and, disproportionately, on women, as women are more likely to be victims of abuse and violence from partners.
Enforcing real names policies helps the powerful to silence their critics, and reinforces power relationships. It should also be no surprise that the other big proponent of ‘real names’ is Facebook – because they know they can make more money out of you and out of your data if they know your real name. They can ‘fix’ you in the real world, and find ways to sell that information to more and more people. They don’t have your interests at heart – quite the opposite.
Paying for Twitter?
A third idea that has come up is that we should have to pay for Twitter – a nominal sum has been mentioned, at least nominal to relatively rich people in countries like ours – but this is another idea that I don’t like at all. The strength of Twitter is its freedom, and the power that it has to encourage debate would be much reduced if it were to require payment. It could easily become a ‘club’ for a certain class of people – well, more of a club than it already is – and lose what makes it such a special place, such a good forum for discussion.
Things like the ‘Spartacus’ campaign against the abysmal actions of our government towards people with disability would be far less likely to happen if Twitter cost money: people on the edge, people without ‘disposable’ income or whose belts have already been tightened as far as they can go would lose their voice. Right now, more than ever, they need that voice.
Dealing with the real issues…
In the short term, I think Criado-Perez had the best idea – we need to do everything we can to ‘stand together’, to support the victims of abuse, to make sure that they know that the vast, vast majority of us are on their side and will do everything we can to support them and to emphasise the ‘good’ side of Twitter. Twitter can be immensely supportive as well as destructive – we need to make sure that, as much as possible, we help provide that support to those who need it.
The longer term problem is far more intractable. At the very least, it’s good that this stuff is getting more publicity – because, as I said, it matters very much. Misogyny and ‘rape culture’ are real. Very real indeed – and deeply damaging, not just to the victims. What’s more, casual sexism is real – and shouldn’t be brushed off as irrelevant in this context. For me, there’s a connection between what John Inverdale said about Marion Bartoli, and what Boris Johnson said about women only going to universities to find husbands, and the sort of abuse suffered by Criado-Perez, Creasy, Beard and others. It’s about the way that women are considered in our society – about objectifying women, trivialising women, suggesting women should be put in ‘their’ place.
That’s what we need to address, and to face up to. No ‘report abuse’ button is going to solve that. We also need to stop looking for scapegoats – to blame Twitter for what is a problem with our whole society. There’s also a similarity here with David Cameron’s porn filter. In both situations there’s a real, complex problem that’s deep-rooted in our society, and in both cases we seem to be looking for a quick, easy, one-click solution.
One click to save us all? It won’t work, and suggesting that it would both trivialises the problem and could distract us from finding real solutions. Those solutions aren’t easy. They won’t be fast. They’ll force us to face up to some very ugly things about ourselves – things that many people don’t want to face up to. In the end, we’ll have to.