Facebook And Twitter – Handling Extremism And Disorder

After extensive consultation, FAT-HEAD has been amended to take into account its lack of clarity over costs (see 8) and the unfortunate limitation of extent (see 9).


 

Facebook And Twitter – Handling Extremism And Disorder Bill (‘FAT-HEAD’)

Contents:

  1. When this Act applies
  2. Facebook and Twitter
  3. Social and Moral Responsibility
  4. Code of conduct
  5. Extremism
  6. Disorder
  7. Acceptance of blame
  8. Costs
  9. Extent, commencement and short title

A

Bill

to

Make provision as to matters concerning the social and moral responsibility of Facebook and Twitter, to ensure that proper cooperation is made with the authorities in relation to morality, extremism and disorder.

BE IT ENACTED by the Queen’s most Excellent Majesty, by and with the advice and consent of the Lords Spiritual and Temporal, and Commons, in this present Parliament assembled, and by the authority of the same, as follows:—

1. When this Act applies

This Act applies whenever an event of such significance, as determined by the Secretary of State, requires it to. Events include but are not restricted to acts of extremism, of disorder and of embarrassment to the Secretary of State, the government, the intelligence and security services and the police, or any other event deemed appropriate by the Secretary of State.

2. Facebook and Twitter

The powers conferred through this Act apply to Facebook, Twitter and any other online services, systems, or their equivalents, successors or alternatives (‘the services’) as determined by the Secretary of State.

3. Social and moral responsibility

The services shall recognise that they have a social and moral responsibility above and beyond any requirements hitherto imposed by the law. The requirements that constitute this social and moral responsibility shall be determined by the Secretary of State, in consultation with the editors of the Sun and the Daily Mail.

4. Code of Conduct

The Secretary of State shall prepare a Code of Conduct to cover the actions of the services, in accordance with the social and moral responsibility as set out in section 3. This code of conduct shall cover extremism, disorder, obscenity, dissent and other factors as determined by the Secretary of State.

5. Extremism

i)  The services shall monitor the activities of all those who use their services for evidence of extremism, including but not limited to reading all their posts, messages and other communications, analysing all photographs, monitoring all location information, all music listened to and all areas of the internet linked to.

ii)  The services shall provide real-time access to all of their servers and all user information to the security services, the police and any others authorised by the Secretary of State, including the provision of tools to enable that access.

iii)  The services shall prepare reports on all their users’ activities, including but not limited to those activities relating to extremism, including contact information, personal details, locations visited and any other information that may be determined from such information.

iv)  The services shall provide these reports to the security services, the police and any others authorised by the Secretary of State.

v) The services shall delete the accounts of any user upon the request of the security services, the police or any others authorised by the Secretary of State.

vi)  The services may not report that they have provided the access or these reports to anyone without the express permission of the Secretary of State.

6. Disorder

At a time of disorder, as determined by the Secretary of State, the security services or a police officer, the services shall provide the following:

i) Immediate access to location data of all users.

ii) Immediate access to all communications data of all users.

iii) Detailed information on all accounts that have any relationship to the disorder.

iv) Deletion of accounts of any users deemed to be involved, or likely to be involved, in disorder.

v) Upon order by the Secretary of State, the security services or a police officer, the services shall block all access to their services in an area to be determined by the Secretary of State.

7. Acceptance of Blame

The services shall recognise that their social and moral responsibility includes the requirement to accept the blame for the existence, escalation or consequences of any extremism or disorder. This acceptance of blame must be acknowledged in writing and in the broadcast media, ensuring that the government, the security services and the police are not held responsible for their own roles in such extremism or disorder or their consequences.

8. Costs

All costs of developing, implementing, monitoring, updating and supporting the systems required for the services to comply with the Facebook And Twitter – Handling Extremism And Disorder Act 2014 shall be borne by the services.

9. Extent, commencement and short title

i) This Act extends to England, Wales, and anywhere else on the entire planet, and in addition to inner and outer space, the moon, any planets, comets and other bodies as deemed appropriate by the Secretary of State.

ii) This Act comes into force on the day on which this Act is passed.

iii) This Act may be cited as the Facebook And Twitter – Handling Extremism And Disorder Act 2014.


 

Trolls, threats, the law and deterrence…

[Image: a troll from the Norwegian film ‘Trollhunter’]

“Internet trolls face up to two years in jail under new laws” screamed the headline on the BBC’s website yesterday, after Chris Grayling decided to “take a stand against a baying cyber-mob”. It’s not the first time that so-called ‘trolls’ have been made the subject of a government ‘stand’ – and a media furore. This particular one arose after TV presenter Chloë Madeley suffered online abuse – that abuse itself triggered by the comments about rape made by her mother, Judy Finnigan, also a TV presenter, on Loose Women.

Twitter ‘trolls’ seem to be a big theme at the moment. Just a few weeks ago we had the tragic case of Brenda Leyland, who it appears committed suicide after being doorstepped by Sky News, accused of ‘trolling’ the parents of Madeleine McCann. A month ago, Peter Nunn was jailed for 18 weeks after a series of abusive tweets aimed at MP Stella Creasy. There are others – not forgetting the ongoing saga of GamerGate (one of my favourite posts on this is here), though that seems to be far bigger news in the US than it is here in the UK. The idea of a troll isn’t something new, and it doesn’t seem to be going away. Nothing’s very clear, though – and what I’ve set out below is very much my personal view.

What is a troll?

There’s still doubt about where the term comes from. It’s not clear that it refers to the kind of beast in the picture above – from the weirdly wonderful Norwegian film ‘Trollhunter’. A few years ago, I was certain it came from a totally different source – ‘trolling’, a kind of fishing where you trail a baited line behind your boat as you row, hoping that some fish comes along and bites it – but I understand now that even that’s in doubt. Most people think of monsters – perhaps hiding under bridges, ready to be knocked off them by billy goats, or perhaps huge, stupid Tolkienian hulks – but what they are on the internet seems very contentious. In the old days, again, trolls were often essentially harmless – teasing, annoying, trying to get a rise out of people. The kind of thing that I might do on Twitter by writing a poem about UKIP, for example – but what happens now can be quite different. The level of nastiness can get seriously extreme – from simple abuse to graphic threats of rape and murder. The threats can be truly hideous – and, from my perspective at least, if you haven’t been a victim of this kind of thing, it’s not possible to really understand what it’s like. I’ve seen some of the tweets – but only a tiny fraction, and I know that what I’ve seen is far from the worst.

The law

The first thing to say is that Grayling’s announcement doesn’t actually seem to be anything new: the ‘quadrupling of sentences’ was brought in in March this year, as an amendment to the Malicious Communications Act 1988. This is just one of a number of laws that could apply to some of the activities that are described as ‘trolling’. Section 127 of the Communications Act 2003 is another, which includes the provision that a person is guilty of an offence if he “sends by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene or menacing character”. The infamous ‘Twitter Joke Trial’ of Paul Chambers was brought under this Act. There have also been convictions for social media posting under the Public Order Act 1986 Section 4A, which makes it an offence to “…with intent to cause a person harassment, alarm or distress… use[s] threatening, abusive or insulting words or behaviour, or disorderly behaviour, or …displays any writing, sign or other visible representation which is threatening, abusive or insulting”. Then there’s the Protection from Harassment Act 1997, and potentially defamation law too (though that’s civil rather than criminal law). The law does apply to the internet, and plenty of so-called ‘trolls’ have been prosecuted – and indeed jailed.

What is a threat?

One of the most common reactions that I’ve seen when these issues come up is to say that ‘threats’ should be criminalised, but ‘offensive language’ should not. It’s quite right that freedom of speech should include the freedom to be offensive – if we only allow speech that we agree with, that’s not freedom of speech at all. The problem is that it’s not always possible to tell what is a threat and what is just an offensive opinion – or even a joke. If we think jokes are ‘OK’, then people who really are threatening and offensive will try to say that what they said was just a joke – Peter Nunn did so about his tweets to Stella Creasy. If we try to set rules about what is an opinion and what is a threat, we may find that those who want to threaten couch their language in a way that makes it possible to argue that it’s an opinion.

For example, tweeting to someone that you’re going to rape and murder them is clearly a threat, but tweeting to a celebrity who’s had naked pictures leaked onto the internet that ‘celebrities who take naked pictures of themselves deserve to be raped’ could, potentially, be argued to be an opinion, however offensive. And yet it would almost certainly actually be a threat. A little ‘cleverness’ with language can mask a hideous threat – a threat with every bit as nasty an effect on the person receiving it. It’s not just the words, it’s the context, it’s the intent. It’s whether it’s part of a concerted campaign – or a spontaneous twitter storm.

One person’s troll is another person’s freedom fighter…

The other thing that’s often missed here is that many (perhaps most) so-called trolls wouldn’t consider themselves to be trolls. Indeed, quite the opposite. A quick glance at GamerGate shows that: many of those involved think they’re fighting for survival against forces of oppression. There’s the same story elsewhere: those involved in the so-called ‘trolling’ of the McCanns would (and do) say that they’re campaigning to expose a miscarriage of justice, to fight on behalf of a dead child. Whether someone’s a terrorist or a freedom fighter can depend on the perspective – and that means that laws presented in terms like those Grayling used are even less likely to have any kind of deterrent effect. If you don’t consider yourself a troll, why would a law against trolls have any impact?

Whether increasing sentences has any deterrent effect to start with is also deeply questionable. Do those ‘trolling’ even consider the possible sentence? Do they even know that what they’re doing is against the law? Even with the many laws enumerated above, and the series of convictions under them, many seem to think that the law doesn’t really apply on the internet. Many believe (falsely) that their ‘anonymity’ will protect them – despite the evidence that it won’t. It’s hard to see that tougher sentences are likely to make any real difference at all to ‘trolling’.

There are no silver bullets…

The problem is that there really isn’t a simple answer to the various things that are labelled ‘trolling’. A change in the law won’t make a difference on its own. A change in technology won’t make a difference on its own – those who think that better enforcement by Twitter themselves will make everything OK are sadly far too optimistic. What’s more, any tools – legal or technological – can be used by the wrong people in the wrong way as well as by the right people in the right way. Put in a better abuse reporting system and the ‘trolls’ themselves will use it to report their erstwhile ‘victims’ for abuse. What used to be called ‘flame wars’, where two sides of an argument continually accuse each other of abuse, still exist. Laws will be misused – the Twitter Joke Trial is just one example of the prosecutors really missing the point.

There is no simple ‘right’ answer. The various problems lumped together under the vague and misleading term ‘trolling’ are complex societal problems – so solving them is a complex process. Making the law work better is one tiny part – and that doesn’t mean just making it harsher. Indeed, my suspicion is that the kind of pronouncement that Chris Grayling made is likely to make things worse, not better: it doesn’t help understanding at all, and understanding is the most important thing. If we don’t know what we mean by the term ‘troll’, and we don’t understand why people do it, how can we solve – or at least reduce – the problems that arise?

Posturing – and obscuring

The thing is, I’m not convinced that the politicians necessarily even want to solve these problems. Internet trolls are very convenient villains – they’re scary, they’re hidden, they’re dangerous, they’re new, they’re nasty. It’s very easy for the fog of fear to be built up very quickly when internet trolling comes up as a subject. Judy Finnigan’s original (and in my view deeply offensive) remarks about Ched Evans’ rape conviction have been hidden under this troll-fog. Trolls make a nice soundbite, a nice headline – and they’re in some ways classical ‘folk devils’ upon which to focus anger and hate. Brenda Leyland’s death was a stark reminder of how serious this can get. A little more perspective, a little more intelligence and a little less posturing could really help here.

The Ballad of KipperNick

[Image: Nick Robinson]

In the run-up to the local and European elections, I became increasingly frustrated by the way that the BBC were dealing with them. It wasn’t really something new so much as an accumulation of frustrations over the last few years – the way that, it seemed to me, the BBC had played a pivotal part in the rise of UKIP. Anyway, more of that later. I decided to have a little experiment. I created a Twitter account, @KipperNick – a parody of Nick Robinson, who seemed to be playing the role of cheerleader-in-chief for Nigel Farage and UKIP. The main reason was to vent a little of my anger at the BBC, but I also thought I would have some fun – and I really did. I learned a little bit too…

This was @KipperNick’s first tweet:

[Screenshot: @KipperNick’s first tweet]

I followed it with a few along similar lines – I didn’t try to hide the fact that this was a parody account. The name ‘KipperNick’ should have made it pretty obvious for a start, and the bio clearly described it as a parody. Perhaps my humour was a little dark – though I think that darkness was appropriate for the subject matter. Anyway, the little parody was pretty successful from the start – a lot of RTs (as in that case), including a couple with over 200:

[Screenshots: two @KipperNick tweets, each retweeted over 200 times]

All in all, it was fun and a bit strange – I found it surprisingly easy to parrot the kind of language that Nick Robinson uses, and a lot of fun to tease him. I did wonder whether the man himself ever read the tweets – I did @mention him a couple of times – but I doubt it very much. There were, however, a couple of things that happened that surprised me. The first was that within about 10 tweets, the account was briefly suspended – I imagine someone reported me for something. On my main account, I’ve never been suspended – I’ve done over 127,000 tweets with it, some of them pretty provocative – but with @KipperNick it took no time at all. I assumed at the time it was a disgruntled UKIPper… they do seem to be a bit trigger happy.

The second thing that surprised me was the number of people who thought I was the real Nick Robinson. As I’ve said, I didn’t exactly disguise the account very well, but I had a lot of people tweet at me as though I was the real Nick. Some thought I was serious about there being interviews with Nigel Farage on the hour every hour on election day. Others were seemingly genuinely angry with the BBC’s obsession with UKIP, and thought my tweets were the real thing. It wasn’t just one or two, but lots.

[Screenshot: @KipperNick tweet about the BBC’s French Open coverage]

The trouble was, I don’t think my parody was far from the mark at all. When the BBC really did try to link a report from the French Open tennis to Nigel Farage, it was beyond the level of parody. When I posted this, people didn’t believe it – but it was the one entirely genuine post of the whole story of @KipperNick.

So what does all of this mean? Well, for me, it means that the BBC should be thoroughly ashamed of themselves – and as I listened to David Dimbleby’s increasingly nervous chuckle during the European election broadcast, I think they were beginning to feel a little of that themselves. They’re not stupid – well, I don’t think so.

The idea of putting Nigel Farage on Question Time regularly probably seemed like fun to start with – and the broadcasters do like to shake things up. Mainstream politics IS incredibly dull at the moment, with three main parties pursuing seemingly identical policies in most ways, with candidates looking pretty much identical and sounding pretty much identical. Having a ‘funny’ character like Farage on to spice things up sounds like a great idea – but the more they did it, and without serious criticism, the bigger a hole they were digging. When you add to the equation the huge amounts of xenophobia, homophobia and misogyny in the tabloid press in particular, the momentum starts to build.

The BBC is hardly blameless in other ways – and the rest of the TV industry could be even worse. The amount of ‘poverty porn’ on our screens over the last few years has been part of a larger level of encouragement of a divisive, blame-based approach to our problems. It fosters hate – and the UKIP agenda feeds directly into it. Over the last few weeks the media seems to have realised this a little, and started to scrutinise UKIP a bit more – but until James O’Brien’s interview on LBC mere days before the election, Farage had never been called properly to account either on TV or on radio. The BBC should feel thoroughly ashamed of their role in this – and there should be some serious soul-searching going on.

Mind you, I doubt very much that any is happening at all. Memories seem almost as absent as consciences in the BBC.

A Defence of Responsible Tweeting…

I presented a paper at the Society of Legal Scholars conference in Edinburgh with the title ‘Twitter Defamation: A Defence of Responsible Tweeting’. I’ve put a little movie version of the slides of my presentation at the bottom of this blog post.

The primary idea behind the paper was to develop a little further an idea that I had soon after the Sally Bercow/Lord McAlpine business, and which I blogged about for The Justice Gap at the time. At a detailed level, the question I am asking is whether there should be a specific form of defence against defamation available for tweeters – a ‘defence of responsible tweeting’ – when tweeters have behaved ‘responsibly’ in terms that make sense for twitter, rather than for conventional journalism. As Alex Andreou asked in the New Statesman at the time, ‘Can every Twitter user be expected to fact check Newsnight?’

I think not – and in my paper (see the slides below) I set out a broad-brush, first draft idea of the kind of level of fact checking and verification that I think would be reasonable and suit the nature of Twitter, as well as how this might fit with the law. As I said, this is very much a work in progress…

More research is needed, and some of the ideas are still rudimentary – but the more I have looked into the subject, the clearer it has seemed to me that our defamation law, even after the reforms in the Defamation Act 2013, has not taken on board the changes that have come about as a result of the development of social media, and of Twitter in particular. It is still law based in the ‘old’ world, designed to deal with conventional journalism – and the reforms have been designed to shift the balance more in favour of freedom of expression in that old sense, to help conventional journalists. The defences provided also seem to suit conventional journalists rather than bloggers – and tweeters in particular.

I hope this can change – and that a way can be found to help Tweeters more – because, as well as outlining a legal defence of ‘responsible tweeting’, in the end my paper is intended as a ‘real’ defence of responsible tweeting. For me, tweeting is important, and makes a valuable contribution to freedom of expression – it does things that conventional journalism in particular fails to do. It is a two way process – and though people often seem to forget it, freedom of expression, as set out in the various human rights documents (and in particular the European Convention on Human Rights, which celebrated 60 years of existence yesterday) includes the right to both impart and receive information. Twitter, and other forms of social media, allow that two-way process in a way that has never been possible before. It is also a process that is available to ordinary people, not just professional journalists – and freedom of expression is a human right, not a journalists’ right.

This is not just a theoretical right – Twitter has a practical and real impact on freedom of speech. It’s pretty much impossible to list all the ways in which Twitter enables freedom of speech, but one particular set of ways relates to its interaction with conventional media. It allows people to comment on things in the conventional media, to correct errors, to criticise and highlight bias or prejudices, to add value by adding links to more information. It can take programmes or stories that have small audiences and disseminate them to much, much wider audiences. It can spread stories from one part of the world to another – so we can make comparisons and see things in perspective. It provides a voice for people who aren’t professional journalists, politicians or celebrities – people who find it very hard to have a voice through the conventional media.

All of this matters – and all of this is worth defending. Of course there are some hideous problems with Twitter, and some thoroughly irresponsible uses, from the horrendous threats and abuse we’ve seen recently, to hate speech, to rumour-mongering and defamation – but we shouldn’t forget the great benefits and throw the baby out with the bathwater. Responsible tweeting matters.

These are the slides – I hope that there will be a proper written paper in the reasonably near future.

Twitter abuse: one click to save us all?

A great deal has been said already about the twitter abuse issue – and I suspect a great deal more will be said, because this really is an important issue. The level and nature of the abuse that some people have been receiving – not just now, but pretty much as long as Twitter has existed – has been hideous. Anyone who suggests otherwise, or who suggests that those receiving the abuse, the threats, should get ‘thicker skins’, or shrug it off, is, in my opinion, very much missing the point. I’m lucky enough never to have been a victim of this sort of thing – but as a straight, white, able-bodied man I’m not one of the likely targets of the kind of people that generally perpetrate such abuse. It’s easy, from such a position, to tell others that they should rise above it. Easy, but totally unfair.

The effect of this kind of abuse, this kind of attack, is to stifle speech: to chill speech. That isn’t just bad for Twitter, it’s bad for all of us. There are very good reasons that ‘free expression’ is considered one of the most fundamental of human rights, included in every human rights declaration and pretty much every democratic country’s constitution. It’s crucial for holding the powerful to account – whether they be governments, companies or just powerful individuals.

Free speech, however, does need protection, moderation, if it is to avoid becoming just a shouting match, won by those with the loudest voice and the most powerful friends – so everywhere, even in the US, there are laws and regulations that make some kinds of speech unacceptable. How much speech is unacceptable varies from place to place – direct threats are unacceptable pretty much everywhere, for example, but racism, bullying, ‘hate speech’ and so forth have laws against them in some places, not in others.

In the UK, we have a whole raft of laws – some might say too many – and from what I have seen, a great deal of the kind of abuse that Caroline Criado-Perez, Stella Creasy, Mary Beard and many more have received recently falls foul of those laws. Those laws are likely to be enforced on a few examples – there has already been at least one arrest – but how can you enforce laws like this on thousands of seemingly anonymous online attackers? And should Twitter themselves be taken to task, and asked to do more about this?

That’s the big question, and lots of people have been coming up with ‘solutions’. The trouble with those solutions is that they, in themselves, are likely to have their own chilling effect – and perhaps even more significant consequences.

The Report Abuse Button?

The idea of a ‘report abuse’ button seems to be the most popular – indeed, Twitter have suggested that they’ll implement it – but it has some serious drawbacks. There are parallels with David Cameron’s nightmarish porn filter idea (about which I’ve blogged a number of times, starting here): it could be done ‘automatically’ or ‘manually’. The automatic method would use some kind of algorithmic solution when a report is made – perhaps the number of reports made in a short time, or the nature of the accounts (number of followers, length of time they have existed, etc.), or a scan of the reported tweet for keywords, or some combination of these factors.
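To make the weaknesses of that approach concrete, here is a minimal sketch (in Python) of what such an automatic scoring system might look like. Everything in it is hypothetical: the Report and Account structures, the keyword list, the weights and the threshold are invented purely for illustration, not drawn from anything Twitter has actually proposed.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

# Hypothetical structures, invented purely for illustration.
@dataclass
class Account:
    followers: int
    created_at: datetime

@dataclass
class Report:
    reported_at: datetime
    tweet_text: str
    reported_account: Account

# An invented keyword list; a real system would be far more sophisticated.
SUSPECT_KEYWORDS = {"kill", "rape", "die"}

def abuse_score(reports: List[Report], now: datetime) -> float:
    """Combine the factors mentioned above: volume of recent reports,
    the nature of the reported account, and a crude keyword scan."""
    recent = [r for r in reports if now - r.reported_at < timedelta(hours=1)]
    if not recent:
        return 0.0

    score = float(len(recent))  # many reports in a short time push the score up

    account = recent[-1].reported_account
    if now - account.created_at < timedelta(days=7):
        score += 2.0  # a very new account looks more suspicious
    if account.followers < 10:
        score += 1.0  # so does a tiny follower count

    for r in recent:
        if set(r.tweet_text.lower().split()) & SUSPECT_KEYWORDS:
            score += 3.0  # naive keyword match on the reported tweet

    return score

def should_suspend(reports: List[Report], now: datetime, threshold: float = 10.0) -> bool:
    # The threshold is arbitrary, which is exactly the problem: a coordinated
    # mob can cross it simply by filing enough reports in a short time.
    return abuse_score(reports, now) >= threshold
```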

The trouble with these automatic systems is that they’re likely to include some tweets that are not really abusive, and miss others that are. More importantly, they allow for misuse – if you’re a troll, you could report your enemies for abuse, even if they’re innocent, and get your trollish friends and followers to do the same. Twitterstorms get the innocent as well as the guilty – and a Twitterstorm, with a report button and an automatic banning system, would mean mob rule: if you’ve got enough of a mob behind you, the torches and pitchforks would have direct effect.

What’s more, the kind of people who orchestrate the sort of attacks suffered by Caroline Criado-Perez, Stella Creasy, Mary Beard and others are likely to be exactly the kind who will be able to ‘game’ an automatic system: work out how it can be triggered, and think it’s ‘fun’ to use it to get people banned. Even a temporary ban while an investigation is going on could be a nightmare.

The alternative to an automated system is to have every report of abuse examined by a real human being – but given that there are now more than half a billion users on Twitter, this is pretty much guaranteed to fail: it will be slow, clunky and disappointing, and people will make mistakes because they’ll find themselves overwhelmed by the number of reports they have to deal with. Twitter, moreover, is a free service (of which more later) and doesn’t really have the resources to deal with this kind of thing. I would like it to remain free, and if it has to pay for a huge ‘abuse report centre’, remaining free becomes highly unlikely.

There are other, more subtle technological ideas. @flayman’s idea of a ‘panic mode’ – which you could switch on if you find yourself under attack, blocking anyone from tweeting to you unless you follow them and they follow you – has a lot going for it, and could even be combined with some kind of recording system that notes down all the tweets of those attacking you, potentially putting together a report that can be used for later investigation.
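Below is a very rough sketch of how such a ‘panic mode’ might behave. The PanicFilter class, its method names and the mutual-follow rule are my own guesses at the shape of the idea, not a description of anything Twitter or @flayman has actually specified.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class PanicFilter:
    """Hypothetical per-user filter: when panic mode is on, only mutual
    follows get through, and everything else is logged for a possible
    abuse report later."""
    following: Set[str]                 # accounts this user follows
    followers: Set[str]                 # accounts that follow this user
    panic_mode: bool = False
    blocked_log: List[str] = field(default_factory=list)

    def allow(self, sender: str, tweet: str) -> bool:
        if not self.panic_mode:
            return True
        # Mutual-follow test: the sender must both follow and be followed.
        if sender in self.following and sender in self.followers:
            return True
        # Record the blocked tweet so a report can be compiled later.
        self.blocked_log.append(f"@{sender}: {tweet}")
        return False

    def compile_report(self) -> str:
        return "\n".join(self.blocked_log)

# Example: under attack, the user switches panic mode on.
f = PanicFilter(following={"friend"}, followers={"friend", "stranger"})
f.panic_mode = True
print(f.allow("friend", "hang in there"))      # True: mutual follow
print(f.allow("stranger", "abusive message"))  # False: logged for the report
print(f.compile_report())                      # "@stranger: abusive message"
```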

I would like to think that Twitter are looking into these possibilities – but more complex solutions are less likely to be attractive or to be understood and properly used. Most, too, can be ‘gamed’ by people who want to misuse them. They offer a very partial solution at best – and, as I noted above, I suspect the broadly specified abuse button will have more drawbacks than advantages in practice. What’s more, as a relatively neutral observer of a number of Twitter conflicts – for example between the supporters and opponents of Julian Assange, or between different sides of the complex arguments over intersectional feminism – it’s sometimes hard to see who is the ‘abuser’ and who is the ‘abused’. With the Criado-Perez, Creasy and Beard cases it’s obvious – but that’s not always so. We need to be very careful not to build systems that end up reinforcing power relationships, helping the powerful to put their enemies in their place.

Real names?

A second idea that has come up is that we should do more against anonymity and pseudonymity – we should make people use their ‘real’ names on Twitter, so that they can’t hide behind masks. That, for me, is even worse – and we should avoid it at all costs. The fact that the Chinese government are key backers of the idea should ring alarm bells – they want to be able to find dissidents, to stifle debate and to control their population. That’s what real names policies do – because if you know someone’s real name, you can find them in the real world.

Dissidents in oppressive regimes are one thing – but whistleblowers and victims of domestic abuse and violent partners need anonymity every bit as much, as do people who want to be able to explore their sexuality, who are concerned with possible medical problems, who are victims of bullying (including cyberbullying) and even people who are just a bit shy. Real names policies will have a chilling effect on all these people – and, disproportionately, on women, as women are more likely to be victims of abuse and violence from partners.

Enforcing real names policies helps the powerful to silence their critics, and reinforces power relationships. It should also be no surprise that the other big proponent of ‘real names’ is Facebook – because they know they can make more money out of you and out of your data if they know your real name. They can ‘fix’ you in the real world, and find ways to sell that information to more and more people. They don’t have your interests at heart – quite the opposite.

Paying for Twitter?

A third idea that has come up is that we should have to pay for twitter – a nominal sum has been mentioned, at least nominal to relatively rich people in countries like ours – but this is another idea that I don’t like at all. The strength of Twitter is its freedom, and the power that it has to encourage debate would be much reduced if it were to require payment. It could easily become a ‘club’ for a certain class of people – well, more of a club than it already is – and lose what makes it such a special place, such a good forum for discussion.

Things like the ‘Spartacus’ campaign against the abysmal actions of our government towards people with disability would be far less likely to happen if Twitter cost money: people on the edge, people without ‘disposable’ income or whose belts have already been tightened as far as they can go would lose their voice. Right now, more than ever, they need that voice.

Dealing with the real issues…

In the short term, I think Criado-Perez had the best idea – we need to do everything we can to ‘stand together’, to support the victims of abuse, to make sure that they know that the vast, vast majority of us are on their side and will do everything we can to support them and to emphasise the ‘good’ side of Twitter. Twitter can be immensely supportive as well as destructive – we need to make sure that, as much as possible, we help provide that support to those who need it.

The longer-term problem is far more intractable. At the very least, it’s good that this stuff is getting more publicity – because, as I said, it matters very much. Misogyny and ‘rape culture’ are real. Very real indeed – and deeply damaging, not just to the victims. What’s more, casual sexism is real – and shouldn’t be brushed off as irrelevant in this context. For me, there’s a connection between what John Inverdale said about Marion Bartoli, and what Boris Johnson said about women only going to universities to find husbands, and the sort of abuse suffered by Criado-Perez, Creasy, Beard and others. It’s about the way that women are considered in our society – about objectifying women, trivialising women, suggesting women should be put in ‘their’ place.

That’s what we need to address, and to face up to. No ‘report abuse’ button is going to solve that. We also need to stop looking for scapegoats – to blame Twitter for what is a problem with our whole society. There’s also a similarity here with David Cameron’s porn filter. In both situations there’s a real, complex problem that’s deep-rooted in our society, and in both cases we seem to be looking for a quick, easy, one-click solution.

One click to save us all? It won’t work, and suggesting that it would both trivialises the problem and could distract us from finding real solutions. Those solutions aren’t easy. They won’t be fast. They’ll force us to face up to some very ugly things about ourselves – things that many people don’t want to face up to. In the end, we’ll have to.

Leveson: Bloggers and the Royal Charter

One of the immediate reactions to the last minute deal over the implementation of the Leveson recommendations was that it would hit bloggers and tweeters very hard. I’m not sure that’s really true – and will set out here why. I should say these are just a few first thoughts – it will be quite some time before everything becomes clear, partly because the Royal Charter itself needs careful and detailed analysis and partly because it’s not just the Charter itself that matters, but the documents and guidelines that follow. The Royal Charter is only part of the story. It sets out terms for a ‘recognition panel’ that ‘recognises’ regulators – it doesn’t set up the regulators themselves. As Cameron and others have been at pains to point out, the idea is that the ‘press’ sets up the regulator(s) itself.  We have yet to see what form any regulator the press sets up will take. It has to be good enough for the recognition panel to accept – that’s the key…

So what about bloggers?

Attention has been focused on Schedule 4 of the Royal Charter (which can be found here), which sets out two definitions:

“relevant publisher” means a person (other than a broadcaster) who publishes in the United Kingdom:

i. a newspaper or magazine containing news-related material, or

ii. a website containing news-related material (whether or not related to a newspaper or magazine);”

“news-related material” means:

i. news or information about current affairs;

ii. opinion about matters relating to the news or current affairs; or

iii. gossip about celebrities, other public figures or other persons in the news.”

So, according to those definitions, many – perhaps most – bloggers would count as ‘relevant publishers’. Certainly I would say that my own blog – this one – would fit the definition. This seems to have caused many people to panic – but you need to look a little further: in particular, what does it mean to say that I’m a ‘relevant publisher’?

On a quick review of the Royal Charter, all it appears to mean at present is whether I would be eligible to be part of the ‘recognition’ panel, or to be employed by that recognition panel – part of the rules intended to keep the recognition panel independent of the press, one of the key parts of the Leveson recommendations.

It may of course mean more than that in time – but we don’t know. We need to see more – the real details of how this will work have yet to emerge beyond the initial Royal Charter Draft. The fact that the definitions are there doesn’t mean much – though it could be a pointer as to the direction that the new regulatory regime is headed. It may indeed be that the new scheme is intended to ‘regulate the web’ but it doesn’t do so yet.

What’s the difference between a newspaper’s website and a blog?

That’s the big question that has yet to be answered. There’s a clear difference between the Guardian Online and my little blog – but where do things like Conservative Home, Liberal Conspiracy and Guido’s Order Order fit into the spectrum? There were even rumours last year that the Guardian was going to abandon its ‘real’ paper and focus only on its online version – they were quickly scotched, but they were believable enough for a lot of people to accept them. If that had happened, should the Guardian Online have been regulated as though it were a newspaper?

If the press is to be regulated at all – and the consensus between the political parties that lay behind yesterday’s deal suggests that non-regulation is not an option – then online newspapers that are effectively the same as ‘paper’ newspapers should have to be regulated too. Small blogs shouldn’t – and Cameron and others have been quick to say that social media won’t be covered, though quite how they bring that into action has yet to be seen. The difficulty lies in the greyer areas, and that’s where we have to be vigilant – the devil will be in the detail.

What about those huge fines?

The Charter actually says the body should have “…the power to impose appropriate and proportionate sanctions (including but not limited to financial sanctions up to 1% of turnover attributable to the publication concerned with a maximum of £1,000,000)…”

Appropriate and proportionate sanctions for a non-profit blogger would therefore be likely to be qualitative – remedies like proper and prominent apologies come to mind. The fining capability – the £1,000,000 that has made its way into press headlines – may mean something to big newspapers, but it’s effectively irrelevant to bloggers. We don’t have ‘turnovers’ of any significance – and big fines would (in general) be inappropriate and disproportionate.

The real key is the idea of ‘exemplary damages’, introduced by the Crime and Courts Bill. That, however, uses a different definition of ‘relevant publisher’. It says:

“(1) In sections (Awards of exemplary damages) to (Awards of costs), “relevant publisher” means a person who, in the course of a business (whether or not carried on with a view to profit), publishes news-related material—

(a) which is written by different authors, and

(b) which is to any extent subject to editorial control.”

That means that individual bloggers are automatically exempt – but leaves the bigger bloggers like Conservative Home, Liberal Conspiracy and Guido’s Order Order subject to possible exemplary damages.
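Purely as a toy illustration (a loose paraphrase of the clause quoted above, and certainly not legal advice), the test can be read as three limbs that must all be met; the field names in the sketch below are my own invention.

```python
from dataclasses import dataclass

@dataclass
class Publication:
    # Field names invented for this sketch.
    in_course_of_business: bool   # "in the course of a business" (profit or not)
    multiple_authors: bool        # "written by different authors"
    editorial_control: bool       # "to any extent subject to editorial control"

def relevant_publisher(p: Publication) -> bool:
    """Loose paraphrase of the Crime and Courts Bill test quoted above:
    all three limbs must be satisfied before exemplary damages are even
    a possibility."""
    return p.in_course_of_business and p.multiple_authors and p.editorial_control

# A single-author personal blog fails at least the 'different authors' limb...
solo_blog = Publication(in_course_of_business=False,
                        multiple_authors=False,
                        editorial_control=False)
print(relevant_publisher(solo_blog))    # False

# ...while a multi-author, edited group blog could meet all three limbs.
group_blog = Publication(in_course_of_business=True,
                         multiple_authors=True,
                         editorial_control=True)
print(relevant_publisher(group_blog))   # True
```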

Personally I don’t think the risk is at all high – exemplary damages are highly unlikely to apply except in the most extreme of circumstances, but it is still something to be alert to.

…and anyway, blogs are already subject to the law

This is a key point that many seem to miss. This regulatory framework isn’t acting in a vacuum. Bloggers and tweeters are already subject to the law – to defamation law, to privacy law, to copyright law, to public order law, to laws concerning hate speech, to obscenity law. This framework would do nothing to change that. Those laws are complex and variably effective – and variably enforced.

Personally, that’s what I’d be concerned about, much more than Leveson. The illiberal use of public order and related law against tweeters and bloggers is, for me, a far more dangerous trend than anything this Royal Charter could bring about.

Keep vigilant

These are just some first thoughts – there’s a long way to go with this. Monday wasn’t the last word in this. Far from it – we need to watch very carefully and lobby very strongly if things seem to be moving the wrong way, but we shouldn’t be distracted and forced into a panic over anything at this stage.

Personally, I wonder whether those who are against the regulation for their own reasons are just trying to scare bloggers and tweeters, and enlist them on their side. Not me. Not yet.

12 days…. of privacy?

NOW ALSO AVAILABLE ON VIDEO (if you want proof that I can’t sing): here

Privacy is the gift that keeps on giving…. and for privacy advocates and lawyers, this year particularly! To keep festive, here’s a little song for the season…. Now if only I could sing!

—————————————–
On the first day of Christmas
My true love gave to me
The Leveson Inquiry
—————————————–
On the second day of Christmas
My true love gave to me
Two Royal boobies (1)
And the Leveson Inquiry
—————————————–
 On the third day of Christmas
My true love gave to me
Three data breaches (2)
Two Royal boobies
And the Leveson Inquiry
 —————————————–
On the fourth day of Christmas
My true love gave to me
Four Cops resigning (3)
Three data breaches
Two Royal boobies
And the Leveson Inquiry
 —————————————–
On the fifth day of Christmas
My true love gave to me
The News – of the – World
Four cops resigning
Three data breaches
Two Royal boobs
And the Leveson Inquiry
—————————————– 
On the sixth day of Christmas
My true love gave to me
Six BBC fiascos (4)
The News – of the – World
Four cops resigning
Three data breaches
Two Royal boobs
And the Leveson Inquiry
 —————————————–
On the seventh day of Christmas
My true love gave to me
Seven super-injunctions (5)
Six BBC fiascos
The News – of the – World
Four cops resigning
Three data breaches
Two Royal boobs
And the Leveson Inquiry
 —————————————–
On the eighth day of Christmas
My true love gave to me
Eight hacks arrested (6)
Seven super-injunctions
Six BBC fiascos
The News – of the – World
Four cops resigning
Three data breaches
Two Royal boobs
And the Leveson Inquiry
—————————————– 
On the ninth day of Christmas
My true love gave to me
Nine leakers leaking
Eight hacks arrested
Seven super-injunctions
Six BBC fiascos
The News – of the – World
Four cops resigning
Three data breaches
Two Royal boobs
And the Leveson Inquiry
—————————————– 
On the tenth day of Christmas
My true love gave to me
Ten snoopers snooping (7)
Nine leakers leaking
Eight hacks arrested
Seven super-injunctions
Six BBC fiascos
The News – of the – World
Four cops resigning
Three data breaches
Two Royal boobs
And the Leveson Inquiry
 —————————————–
On the eleventh day of Christmas
My true love gave to me
Eleven bloggers blogging
Ten snoopers snooping
Nine leakers leaking
Eight hacks arrested
Seven super-injunctions
Six BBC fiascos
The News – of the – World
Four cops resigning
Three data breaches
Two Royal boobs
And the Leveson Inquiry
 —————————————–
On the twelfth day of Christmas
My true love gave to me
Twelve tweeters tweeting
Eleven bloggers blogging
Ten snoopers snooping
Nine leakers leaking
Eight hacks arrested
Seven super-injunctions
Six BBC fiascos
The News – of the – World
Four cops resigning
Three data breaches
Two Royal boobs
And the Leveson Inquiry
—————————————–
Notes:
(1) The Duchess of Cambridge was photographed topless in France… and if you don’t remember that farrago, lucky you.
(2) Actually far, far more than three data breaches…….
(3) To be more accurate, two resigned, one was suspended and one put on extended leave. 
(4) There may be fewer, but it feels like at least six, from the Savile and Newsnight cases downwards! Poetic license….
(5) It’s not clear precisely how many have been granted – but far fewer than people might think!
(6) Actually significantly more, even in connection with Leveson alone. A lot. 
(7) If the Home Office had had its way, we’d have had far, far more than 10 snoopers snooping with the Snoopers’ Charter (Communications Data Bill). Fortunately, we managed to head them off at the pass, at least for now!