Do we even need an Online Safety Bill?

There are many reasons to be concerned about the #OnlineSafetyBill, the latest manifestation of which has just been launched, to a mixture of fanfares and fury. The massive attacks on privacy (including an awful general monitoring requirement) and freedom of speech (most directly through the highly contentious ‘legal but harmful’ concept) are just the starting point. The likely use of the ‘duty of care’ demanded of online service providers to limit or even ban both encryption and anonymity, thereby making all of us – and in particular children – less safe and less free, is another. The political control of censorship via Ofcom is in some ways even worse – as is the near certain inability of Ofcom to do the gargantuan tasks being required of it – and that’s not even starting on the mammoth and costly bureaucratic burdens being foisted on people operating online services. Cans of worms like age verification and other digital identity issues are just waiting to be opened, without their extensive downsides even being mentioned. And that’s not all – it’s such a huge and all-encompassing bill that there are too many problems with it to mention in a blog post.

All that, however, misses the main point. Why are we even doing this? Do we even need an Online Safety Bill?

The main reasons the government seem to be doing this are based on a classic misunderstanding of the internet. In my 2018 book, The Internet, Warts and All, I wrote about how the way we look at the internet overall shapes how we think it should be regulated. The net is a complex, messy and confusing place at times – it has many warts. The challenge is to see it warts and all: to look at the big picture, to see the messy reality, and to approach it accordingly.

Some people don’t even see the warts, so don’t think anything needs to be done – we should leave the internet alone, let it regulate itself. Others see only the warts, and miss the big picture. That’s what lies behind the Online Safety Bill. An obsession with the warts, and a desire to eradicate them with the strongest of caustic medicine, regardless of the damage to the face itself. That’s the view of the internet as a ‘Wild West’, full only of trolls and bots, ravaged by abuse and misinformation, where no-one dares roam without their trusty six-shooter.

The thing is, it’s just not true. Almost all the time, for the vast majority of people, the internet is something they use without much problem. They work, they shop, they get their news and their entertainment, they converse and socialise. They find romance. They buy their cars and homes – not just their books and groceries. They live. The internet does have warts – and no-one should underestimate the impact of trolling or misinformation in particular (there’s a chapter on each in The Internet, Warts and All) – but neither should we forget what the internet really is.

If we see only the warts, we end up with disastrous legislation like the Online Safety Bill. If we see the warts, but treat them as warts, we have a chance to do regulation more reasonably, and not do untold damage on the way. As an example, the inclusion of cyber flashing in the bill is very welcome. It’s a wart that can be treated, and without anything in the way of negative consequences. Smaller, piecemeal legislation dealing with particular harms is a far more logical – and effective – way of dealing with the problems we have on the net than grand gestures like the Online Safety Bill, which will almost certainly do far more harm than good.

The gaping void at the heart of the Online Safety Bill

The latest manifestation of the much heralded Online Safety Bill is due to make its appearance tomorrow. It’s a massive bill, covering a wide range of topics and a huge number of issues about what happens online – and yet there’s a gaping void at its heart, a void that means that it will have almost no chance of succeeding in any of its key aims.

There are many things that should worry us about the Online Safety Bill. The vagueness of the ‘duty of care’ that it imposes on online service providers. The deliberately grey area of ‘harmful but legal’ content. Its focus on content rather than behaviour (which means it misses a massive amount of trolling, bullying and hate). The inevitable inadequacy of Ofcom as a regulator for something it knows very little about – clever trolls and others will run rings around it, and will even take joy in doing so. And, indeed, its aim – why do we want the U.K. to be the safest place to be online rather than the most creative, the most productive, even the best place to be online?

All that is vital, and most of it has been written about by people much more expert than I am in the field. That, however, is not what this piece focuses on. This is about something rather different: a blind spot at the heart of the bill. For all its focus on online harms and online safety, the bill misses how a great deal of the harms take place – because those harms come from the people behind the bill itself. It is easy to focus on evil, anonymous trolls and bots, and on hidden Russian creators of fake news – they’re convenient enemies, particularly right now – but at the heart of a great deal of harm are people very different: mainstream politicians and journalists. Blue tick accounts. The Press. The Online Safety bill says almost nothing about them, and as a result it is highly unlikely to have any kind of success, except on the periphery.

Trolling begins at home

Everyone hates trolls – indeed, the idea that the internet is full of evil trolls was one of the reasons behind the whole online harms approach – but people rarely think the whole thing through. What is generally considered to be trolling encompasses a lot of different activities – but most people’s ideas of what a troll looks like seem to be relatively consistent. Sad, angry, anonymous people – images of furious men tapping away at their keyboards in the basement of their parents’ homes are very common. There is of course some truth in this kind of image – but it’s a tiny part of the picture. Indeed, it’s very much a symptom rather than the disease itself.

Two factors are rarely discussed enough. One is the observation that many (perhaps most) trolls don’t consider themselves to be trolls. Indeed, very much the opposite: they consider their enemies to be trolls, and they themselves are either the victims of trolls or the noble warriors fighting against evil trolls. This is true not only of those debates where there is some kind of relative equality of argument or of power, but of those where to most relatively neutral observers there’s clearly a ‘good’ side and a ‘bad’ side.

The other is to ask how trolls find their victims. How they choose who to target, who to victimise, who to abuse. One of the most direct ways is through a pile-on. That is, someone points at a potential victim, saying ‘look at this idiot,’ or words to that effect, hinting that they deserve to be attacked. When the person pointing has thousands (or millions) of followers, those followers then pile on to the victim.

Who’s the troll here? The big account who just said something relatively innocent (‘look at this idiot’) or the followers who add the abuse, the racism, the misogyny, the death or rape threats? The big account stands back, claiming innocence, and pretending that the trolls had nothing to do with them. And of course those big accounts can be politicians or journalists – indeed some of the worst pile-ons are instigated by the biggest and most mainstream of accounts. MPs. Journalists from big newspapers or broadcasters.

That’s not the only kind of trolling that MPs and journalists engage in – without recognising or acknowledging that it is trolling. Indeed, the minister responsible for the Online Safety Bill, Nadine Dorries, has herself been called out for what many would describe as trolling. And yet she would vehemently deny being a troll – and believe that she is right in doing so.

The trouble is, not only are these kinds of activities by MPs and journalists actually trolling, but they’re much more dangerous trolling than that of the small, anonymous accounts that people tend to focus on. One relatively innocent tweet by someone with 100,000 followers can bring about thousands of vicious attacks. If we want to deal with the viciousness, we need to look at the big accounts, and at the structural trolling that goes on as a result. The Online Safety Bill does nothing for that at all – because it would mean both challenging the whole structure of social media and admitting the role that politicians themselves play in the online harm they claim to be dealing with.

Fakery begins at home

It’s a similar – or even worse – story with harmful misinformation. Again, the pantomime villains are Russian trolls, creating fake news in troll farms outside St Petersburg. These, of course, do exist – but again, they’re just a small part of the picture. As I’ve written before, mainstream politicians such as Jacob Rees-Mogg employ some of the same tactics and methods as those we usually think of as spreading fake news – and he’s far from alone. Fake news and other forms of misinformation do not exist in a vacuum – very much the opposite. Fake news works when it fits with people’s existing prejudices and biases, when it confirms what they already think. So, to make fake news work, you create it to fit those prejudices – and you twist reality to fit them too.

If this sounds familiar, it should. Fake news isn’t something new, it’s just a new manifestation of the techniques employed by politicians and (particularly tabloid) journalists ever since politics and journalism have existed. Of course neither the politicians nor the journalists would be happy to acknowledge this. ‘Spin’ sounds much better than misinformation. And yet the relationship is very close. Spin helps create a fake narrative that is every bit as damaging as actual fake news – and far harder to detect, disprove or oppose.

As with trolling, the effect of all of this is much greater if the accounts spreading it have both credibility and large numbers of followers. That means that the ones that matter are the big, blue tick accounts rather than the dodgy anonymous trolls – and, again, the structure of social media that allows information to be spread so rapidly via those big, blue tick accounts. And again, this is not the focus of the Online Safety Bill. Safer to focus on the obviously villainous than acknowledge our own role in villainy.

Who gets a free (press) pass?

One final thought. If the Online Safety Bill gets passed – and it almost certainly will – it will mean that the press is the only part of the media not subject to statutory regulation. Broadcast media has had statutory regulation for a long time – with Ofcom as the regulator. After the Online Safety Bill, the same will be true of social media. And yet those of us with memories long enough to remember the Leveson Inquiry will remember the vehemence with which the press resisted any idea of statutory regulation of the press, as though it were an intolerable affront to free speech.

I don’t think they were necessarily wrong – but they should be clear that statutory regulation of social media is every bit as much of an affront to free speech. Indeed, in many ways a worse one – as it is ordinary people, rather than the relatively privileged people who run the newspapers and magazines, whose free speech is being curtailed. That ought to matter.

A gaping void

As it is, the Online Safety Bill looks likely to attack the symptoms rather than the causes of online harms. Unless it finds a way to address the underlying problems – and to confront the massive blind spot it has for the role of politicians and journalists – it will be just yet another massive game of Whac-A-Mole, doomed to failure and disappointment.

That, frankly, is what I expect to happen. The bill will be passed, everyone will trumpet how we’re finally taming the Wild West, but nothing will really happen. Trolls will continue trolling – new ones replacing those who do get caught – and misinformation will continue to spread. The powerful will still be unscathed, and the hate will still spread. And a few years later we will have another go. With the same result.

In praise of hiding…

The new government anti-encryption campaign, ‘No Place to Hide’, has a great many problems. It’s based on many false assumptions, but the biggest of all of these is the whole idea that hiding is a bad thing. It can be, of course, when ‘bad guys’ hide from the authorities, which is what the government is grasping at, but in practice we *all* need to be able to hide sometimes.

Indeed, the weaker and more vulnerable we are, the more we need places to hide. The more predators we face, the more we need places to hide. And if we believe – and the government campaign is based on this assumption – that there are a lot of dangerous predators around on the internet, that becomes especially important. Places to hide become critical. Learning how to hide becomes critical. Having the tools and techniques available not just for a few specially talented or trained individuals but for everyone, including the most vulnerable, becomes critical.

This means that the tools and systems used by those people – the mainstream systems, the most popular networks and messaging services – are the ones where safety is the most important, where privacy is the most important. Geeks and nerds can always find their own way to do this – it’s no problem for an adept to use their own encryption tools, or to communicate using secure systems such as Signal, or even to build their own tools. They’re not the ones that are the issue here. It’s the mainstream that matters – which is why the government campaign is so fundamentally flawed. They want to stop Facebook rolling out end-to-end encryption on Facebook’s messenger – when that’s exactly what’s needed to help.
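
(A quick aside to make this concrete. The sketch below is a rough illustration only, assuming the freely available PyNaCl library, of how little it takes for a technically adept person to layer their own strong encryption over any channel at all. That is exactly why restricting the mainstream services only hurts ordinary users.)

    # Sketch: an 'adept' layering their own encryption over an ordinary, unencrypted channel.
    # Assumes: pip install pynacl; the key is shared with the recipient by some other safe route.
    import nacl.secret
    import nacl.utils

    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)  # 32-byte secret key
    box = nacl.secret.SecretBox(key)

    ciphertext = box.encrypt(b"meet at the usual place")  # a random nonce is generated and prepended
    # 'ciphertext' can be pasted into any messaging service; the service sees only random-looking bytes.

    plaintext = nacl.secret.SecretBox(key).decrypt(ciphertext)
    assert plaintext == b"meet at the usual place"

The point is not that anyone should actually do this, but that those determined to hide can already do so in a dozen lines. The people a ban really affects are those who rely on the mainstream services doing it for them.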

We should be encouraging more end-to-end encryption, not less. We should be teaching our kids how to be more secure and more private online – and letting them teach us at the same time. They know more about the need for privacy than we often give them credit for. We need to learn how to trust them too.

Who needs privacy?

You might be forgiven for thinking that this government is very keen on privacy. After all, MPs all seem to enjoy the end-to-end encryption provided by the WhatsApp groups that they use to make their plots and plans, and they’ve been very keen to keep the details of their numerous parties during lockdown as private as possible – so successfully that it seems to have taken a year or more for information about evidently well-attended (work) events to become public. Some also seem enthused by the use of private email for work purposes, and to destroy evidence trails to keep other information private and thwart FOI requests – Sue Gray even provided some advice on the subject a few years back.

On the other hand, they also love surveillance – 2016’s Investigatory Powers Act gives immense powers to the authorities to watch pretty much our every move on the internet, and gather pretty much any form of data about us that’s held by pretty much anyone. They’ve also been very keen to force everyone to use ‘real names’ on social media – which, though it may not seem completely obvious, is a move designed primarily to cut privacy. And, for many years, they’ve been fighting against the expansion of the use of encryption. Indeed, a new wave of attacks on encryption is just beginning.

So what’s going on? In some ways, it’s very simple: they want privacy for themselves, and no privacy for anyone else. It fits the general pattern of ‘one rule for us, another for everyone else’, but it’s much more insidious than that. It’s not just a double-standard, it’s the reverse of what is appropriate – because it needs to be understood that privacy is ultimately about power.

People need privacy against those who have power over them – employees need privacy from their employers (something exemplified by the needs of whistleblowers for privacy and anonymity), citizens need privacy from their governments, victims need privacy from their stalkers and bullies, and so on. Kids need privacy from their parents, their teachers and more. The weaker and more vulnerable people are, the more they need privacy – and the approach taken by the government is exactly the opposite. The powerful (themselves) get more privacy; the weaker (ordinary people, and in particular minority groups and children) get less or even no privacy. The people who should have more accountability – notably the government – get privacy that prevents that accountability, whilst the people who need more protection lose the protection that privacy can provide.

This is why moves to ban or limit the use of end-to-end encryption are so bad. Powerful people – and tech-savvy people, like the criminals they use as the excuse for trying to restrict encryption – will always be able to get that encryption. You can do it yourself, if you know how. The rest of the people – the ‘ordinary’ users of things like Facebook Messenger – are the ones who need it, to protect themselves from criminals, stalkers, bullies etc – and they are the ones that moves like this from the government would stop from getting it.

The push will be a strong one – trying to persuade us that in order to protect kids etc we need to be able to see everything they’re doing, so we need to (effectively) remove all their privacy. That’s just wrong. Making their communications ‘open’ to the authorities, to their parents etc also makes them open to their enemies – bullies, abusers, scammers etc, and indeed those parents or authority figures who are themselves dangerous to kids. We need to understand that this is wrong.

None of this is easy – and it’s very hard to give someone privacy when you don’t trust them. That’s another key here. We need to learn who to trust and how to trust them – and we need to do our best to teach our kids how to look after themselves. To a great extent they already know – kids understand privacy far more than people give them credit for – and we need to trust that too.

BT’s ‘Walk me home’: tech solutionism at its worst

Magical thinking

It’s all too easy to see a difficult, societal problem and try to solve it with a technological ‘magic wand’. We tend to treat technology as magical a lot of the time – Arthur C Clarke’s Third Law, from as far back as 1973, that “Any sufficiently advanced technology is indistinguishable from magic”, has a great deal of truth to it, and is the route to a great many problems. BT’s idea that women can ‘opt in’, probably via an app, to being tracked in real time as they walk home alone is just the latest in a long series of these kinds of ideas. Click on the app, and you have a fairy godmother watching you, ready to protect you from the evil monsters who might be out to get you.

More surveillance doesn’t mean more security

That’s the essence of this kind of thinking. By tech, we can sort everything out. And, as so often, the method by which this tech will solve everything is surveillance. It’s another classical trap – the idea that as long as we can monitor things, track things, gather more data, we can solve the problems. If only we knew, if only we were able to watch, everything would be OK.

This is the logic that lies behind ideas such as backdoors into encryption – still being touted on a big scale by many in governments all over the world – which would mean, just as BT’s ‘Walk me home’ would, actually reducing security and increasing risks for most of those involved. Just as breaking encryption will make children more vulnerable, getting women to put themselves under real-time surveillance at their key moments of risk is likely to make them more vulnerable rather than less.

Look at the downsides….

It will make them easier to identify, and easier to locate. They will be effectively ‘registered’ on the system through downloading and activating the app, which will record their location, their regular routes – and the times they use them – their phone numbers and more. It will identify them as vulnerable – and make them even more of a target.

This, again, is a classic trap of tech solutionism. It’s easy just to look at a piece of tech in terms of how it’s intended to be used, and in terms of the intended user. In this case, the assumption is that the people tracking the relevant woman will only be people who have her best interests at heart, and who will only intervene in the best way, as the system intends. The good police officer, acting in the best possible way.

All systems – and all data – will be misused

This is in itself magical thinking, and the opposite of the way we should be looking at this. We have to be aware that all systems will be misused. History shows this – particularly in relation to technology. Just as one example, there is a whole series of data protection cases involving police officers misusing their ‘authorised’ access to data – from the Bignell case in 1998, where officers used their access to a motor vehicle database to find out details of cars for personal purposes, onwards. It must never be forgotten that Wayne Couzens was a serving police officer when he abducted, raped and murdered Sarah Everard.

This kind of a system will also create a database of vulnerable women – together with their personal details, their phone numbers, their home addresses, the routes they take to get home – including when they use them – and the fact that they feel vulnerable coming home. This will be a honeypot of data for any potential stalker – and again, we must not forget that Wayne Couzens was a serving police officer, and that he planned the abduction, rape and murder of Sarah Everard carefully. Systems like this would be a perfect tool for another would-be Wayne Couzens – and also for ‘smaller scale’ creeps and misogynists. The plethora of stories about police officers and others misusing their position to pester women – and worse – that have come out in the wake of the abduction, rape and murder of Sarah Everard should make it abundantly clear that this isn’t a minor concern.

A route to victim-blaming – and infringing women’s rights

Perhaps even more importantly, systems like this are part of a route to blame the victim for the crime. ‘If only she’d used her ‘walk me home’ she would have been OK’ could be the new ‘if only she hadn’t dressed provocatively’. It puts pressure on women to let themselves be tracked and monitored – as well as making it their fault if they don’t use this ‘tool’ to save themselves.

This in itself is an infringement on women’s rights. Not just the right to be safe – which is fundamental – but the right to privacy, to freedom of action, and much more. It’s treating women as though they are like pets, to be microchipped for their own protection, registered on a database so that men can protect them. And if they don’t take advantage of this, well, they deserve what they get.

Avoiding the issue – and avoiding responsibility

All of which brings us back to the real problem: male violence. Tech solutionism is about attempting to use tech to solve societal problems – and the societal problem here is male violence. So long as the focus is on the tech, and the tech that can be used by the women, the focus is off the men whose violence is the real problem. And so long as we think that problem can be solved with an app, we fail to acknowledge how serious a problem it is, how deep a problem it is, and how serious a solution it requires.

It also means that many of those involved avoid taking the responsibility that they have for the problem. The police. The Home Office. Men. Avoiding responsibility has become an art form for the Metropolitan Police, and for Cressida Dick in particular. Some of the officers who shared abusive messages with Wayne Couzens are still working at the Met – and those are just the ones that we know about. This problem is deep-set. It is societal.

Societal problems need societal solutions

The bottom line here is that this is a massive societal problem – and that is something that won’t be solved by an app. It requires a societal solution – and that isn’t easy, it isn’t quick, and it isn’t something that can be done without pain and sacrifice. The pain and sacrifice, though, should not come from the victims. At the moment, and with ‘solutions’ like BT’s ‘Walk me home’, it is only the victims who are being expected to sacrifice anything. That is straightforwardly wrong.

The starting point should be with the police. That there have been no resignations – least of all from Cressida Dick – is no surprise at all. Beyond a few pseudo-apologies and a concerted attempt to present Couzens as an ‘ex’ police officer, there’s been almost nothing. He was a serving officer when he did the crime. The Met should be facing radical change – if it expects to regain trust, it must change. Societal solutions mean that we need to be able to trust the police.

It is only when we can trust the police that technological tools like BT’s ‘Walk Me Home’ have a chance of playing a part – a small part – in helping women. The trust has to come first. The change in the police has to come first. Without that, we have no chance.

Children need anonymity and encryption!

In recent weeks, two of the oldest issues on the internet have reared their ugly heads again: the demand that people use their ‘real names’ on social media, and the suggestion that we should undermine or ban the use of encryption – in particular end-to-end encryption. As has so often been the case, the argument has been made that we need to do this to ‘protect’ children. ‘Won’t someone think of the children’ has been a regular cry from people seeking to ‘rein in’ the internet for decades – this is the latest manifestation of something with which those concerned with internet governance are very familiar.

Superficially, both these ideas are attractive. If we force people to use their real names, bullies and paedophiles will be easier to catch, and trolls won’t dare do their trolling – for shame, perhaps, or because it’s only the mask of anonymity that gives them the courage to be bad. Similarly, if we ban encryption we’ll be able to catch the bullies and paedophiles, as the police will be able to see their messages, the social media companies will be able to algorithmically trawl through kids’ feeds to see if they’re being targeted, and so forth. That, however, is very much only the superficial view. In reality, forcing real names and banning or restricting end-to-end encryption will make everyone less safe and secure, but will be particularly damaging for kids. For a whole series of reasons, kids benefit from both anonymity and encryption. Indeed, it can be argued that they need to have both anonymity and encryption available to them. A real ‘duty of care’ – as suggested by the Online Safety Bill – should mean that all social media systems implement end-to-end encryption for their messaging and make anonymity and pseudonymity easily available for all.

Children need anonymity

The issues surrounding anonymity on the internet have a long history – Peter Steiner’s seminal cartoon ‘On the Internet, Nobody Knows You’re a Dog’ was in the New Yorker in 1993, before social media in its current form was even conceived: Mark Zuckerberg was 9 years old.

It’s barely true these days – indeed, very much the reverse a lot of the time, as the profiling and targeting systems of the social media and advertising companies often mean they know more about us than we know ourselves – but it makes a key point about anonymity on the net. It can allow people to at least hide some things about themselves.

This is seen by many as a bad thing – but for children, particularly children who are the victims of bullies and worse, it’s a critical protector. As those who bully kids are often those who know the kids – from school, for example – being forced to use your real name means leaving yourself exposed to exactly those bullies. Real names become a tool for bullies online – and will force victims either to accept the bullying or to avoid using the internet. This, of course, is not just true for bullies, but for overbearing parents, sadistic teachers and much worse. It is really important not to think only about good parents and protective teachers. For vulnerable children, parents and teachers can be exactly the people they need to avoid – and there’s a good reason for that, as we shall see.

Some of those who have advocated for real names have recognised this negative impact, and instead suggest a system of ‘verified IDs’. That is, people don’t have to use their real names, but in order to use social media they need to prove to the social media company who they are – providing some kind of ID verification documentation (passports, even birth certificates etc) – and can then use a pseudonym. This might help a little – but it has another fundamental flaw. The information that is gathered – the ID data – will be a honeypot of critically important and dangerous data, both a target for hackers and a temptation for the social media companies to use for other purposes – profiling and targeting in particular. Being able to access this kind of information about kids in particular is critically dangerous. Hacking and selling such information to child abusers isn’t just a risk, it is pretty much inevitable. The only way to safeguard this kind of data is not to gather it at all, let alone put it in a database that might as well have the words ‘hack me’ written in red letters a hundred feet tall.

Children need encryption

Encryption is a vital protection for almost everything we do that really matters on the internet. It’s what makes online banking even possible, for example. This is just as true for kids as it is for adults – indeed, in some particular ways it is even more true for kids. End-to-end encryption is especially important – that is, the kind of encryption that means only the sender and recipient of a message can read it, not even the service that the message is sent on.

The example that Priti Patel and others are fighting in particular is the implementation of end-to-end encryption across all of Facebook’s messaging systems – it already exists on WhatsApp. End-to-end encryption would mean that not even Facebook could read the messages sent over the system. Opponents of the idea think it means that they won’t be able to find out when bullies, paedophiles etc are communicating with kids – bullying or grooming, for example – but that misses a key part of the problem. Encryption doesn’t just protect ‘bad guys’, it protects everyone. Breaking encryption doesn’t just give a way in for the police and other authorities, it gives a way in for everyone. It removes the protection that the kids have from those who might target them.
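
(To make the ‘not even Facebook’ point concrete, here is a minimal sketch of the general technique of public-key, end-to-end encryption, using the PyNaCl library as an assumed example. It illustrates the principle, not how any particular messenger actually implements it.)

    # Minimal end-to-end encryption sketch (assumes: pip install pynacl).
    # The service in the middle only ever handles ciphertext it cannot read.
    from nacl.public import PrivateKey, Box

    # Each user generates a keypair on their own device; private keys never leave it.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts for Bob using her private key and Bob's public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"see you at school")

    # The messaging service stores and forwards 'ciphertext': random-looking bytes to it.
    # Only Bob, holding his private key, can decrypt (and verify it came from Alice).
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    assert plaintext == b"see you at school"

A ‘backdoor’ means deliberately weakening that last property, that only the intended recipient can read the message, and it weakens it for everyone, not just for the targets of an investigation.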

End-to-end encryption protects against one other group that can and does pose a very significant risk to kids: the social media companies themselves. It should be remembered that the profiling and targeting of kids that is done by the social media companies is itself a significant danger to kids. In 2017, for example, a leaked document revealed that Facebook in Australia was offering advertisers (and hence not just advertisers) the opportunity to target vulnerable teens in real time.

“…By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel “stressed”, “defeated”, “overwhelmed”, “anxious”, “nervous”, “stupid”, “silly”, “useless”, and a “failure”,”

Facebook, of course, backed off from this particular programme when it was revealed – but it should not be seen as an anomaly but as part of the way that this kind of system works, and of the harm that the social media services themselves can represent for kids. End-to-end encryption could begin to limit this kind of thing – only to a certain extent, as the profiling and targeting mechanisms work on much more than just the content of messages. It could be a start though, and as kids move towards more private messaging systems the capabilities of this kind of harm could be reduced. If more secure, private and encrypted systems become the norm, children in particular will be safer and more secure.

Children need privacy

The reason that kids need anonymity and encryption is a fundamental one. It’s because they need privacy, and in the current state of the internet anonymity and encryption are key protectors of privacy. More fundamentally than that, we need to remember that everyone needs privacy. This is especially true for children – because privacy is about power. We need privacy from those who have power over us – an employee needs privacy from their employer, a citizen from their government, everyone needs privacy from criminals, terrorists and so forth. For children this is especially intense, because so many kinds of people have power over children. By their nature, they’re more vulnerable – which is why we have the instinct to wish to protect them. We need to understand, though, what that protection could and should really mean.

As noted at the start, ‘won’t someone think of the children‘ has been a regular mantra – but it only gives one small side of the story. We need not just to think of the children, but think like the children and think with the children. Move more to thinking from their perspective, and not just treat them as though they need to be wrapped in cotton wool and watched like hawks. We also need to prepare them for adulthood – which means instilling in them good practices and arming them with the skills they need for the future. That means anonymity and encryption too.

Duty of care?

Priti Patel has suggested the duty of care could mean no end-to-end encryption, and Dowden has suggested everyone should have verified ID. There’s a stronger argument in both cases precisely the opposite way around – that a duty of care should mean that end-to-end encryption is mandatory on all messenger apps and messaging systems within social networking services, and that real names mandates should not be allowed on social networking systems. If we really have a duty of care to our kids, that’s what we should do.

Paul Bernal is Professor of Information Technology Law at UEA Law School

Verified IDs for social media? I don’t think so….

‘Verified ID’ is almost (but not quite) as bad an idea as ‘real names’: this is why. It does have some advantages. It’s not as quick a stalker’s tool as real names. It doesn’t chill quite as much as real names. But it still has some very bad features.

Firstly, and most importantly, it is very unlikely to solve any problems. It still assumes that trolls act rationally and are ashamed of their trolling, or expect to be punished for their trolling if caught. If real names don’t do that, why would verified ID? That is, it doesn’t provide any real deterrent to trolling, or to racial abuse. So what problem are you trying to solve with it? If you want to make subsequent investigation and prosecutions easier, you’re still missing the point: we don’t have the capacity.

You need to specify very carefully what you’re trying to solve first. Deterrence won’t work. Supporting prosecutions won’t work. So what is it? Set that down first, before you suggest this as a solution. Remember that many trolls think their comments are justified. Trolls tend not to think of themselves as trolls or their activities as trolling – they think their abuse of Diane Abbott is really about her mathematical skills, and so forth – so measures aimed at ‘trolls’ don’t apply to them.

Next, the downsides. Who will hold all this vital ID data? The social media companies? They’re the last people who should get vital information to add to their databases. Giving them more power is disastrous for all of us. Some ‘trusted’ third party? Who? How? Why? Remember who the government wanted to be ‘trusted’ over Age Verification? MindGeek, which owns Pornhub. Who would they get here? Dido Harding? Trust is critical here, and trust is missing.

Next, the chilling effect. The people most in need of protection, the ones most at risk from Real Names, will still be chilled. Will someone with mental health issues want to give information that might be handed over to a service that might get them sectioned? And people who don’t trust the government or the police? Remember that the Investigatory Powers Act means the authorities can get access to all that data. This will chill them. Maybe that’s the intention.

Then we have the data itself. Whoever holds it, it’s vulnerable to misuse and to hacking. It’s a honeypot of data that will be vulnerable. Experience makes that very clear. Even those with the best intentions make mistakes. There are hackers, leakers and more.

So if we want to do this, we need the benefits to outweigh these risks. So far, the benefits are minimal if they exist at all. The risks are not minimal at all. And that still leaves the biggest elephant in the room: what lies behind the REAL problem…

…because the real problem with racial abuse is the racism in our society. The racism in our media. The racism in our politicians. I like to blame Mark Zuckerberg for a huge amount – but here, he’s less responsible than Boris Johnson, Priti Patel, Nigel Farage etc.

So let’s not be distracted. I’m not against this in the absolute way I am about real names – but there are so many obstacles to be overcome before it could be made to work I find it hard to believe that it’s a realistic solution. AND it’s a distraction from the real problem.

Real Names: the wrong tool for the wrong problem

The drive towards enforcing ‘real names’ on the internet – and on social media in particular – is gathering momentum at the moment. Katie Price’s petition to require people to provide verifiable ID before opening a social media account is just a new variation on a very old theme – and though well intentioned (as are many of the similar drives) it is badly misdirected. Not only is it unlikely to solve any of the problems it is intended to solve, it would make things worse – and make it even harder to find genuine solutions to what are, for the most part, genuine problems.

The attraction of ending anonymity

Ending – or significantly curbing – anonymity on social media is superficially attractive. ‘They wouldn’t behave that way if they had to use their real names’ is one argument. ‘They only do it because we can’t find them’ is another. Neither of these things is really true. Evidence that people are less aggressive or less antagonistic if they are forced to use their real names is mixed at best – and indeed some large scale studies have shown that trolls can be worse if they have to use their real names. More importantly, however, curtailing anonymity would have very damaging consequences for many vulnerable people, as well as distracting us from the real problems behind a lot of trolling. It isn’t the anonymity that’s the problem, it’s the trolling – and the reasons for the trolling are far deeper than the names people use when they troll. It isn’t the anonymity, it’s the aggression, it’s the anger, it’s the hate and it’s the lies. Whilst anger, hate and lies are endemic in our society – and notably successful in our media and our politics – that anger, hate and lies will be manifested online, and in the social media in particular.

Trolls don’t need anonymity….

There are many assumptions behind the idea that real names would stop trolling. One is that trolls are ashamed of their trolling, so would no longer do it if they were forced to do it using their real names. For some trolls, this may be the case – but for others exactly the opposite is true. They may even be proud of their trolling, happy to be seen to be calling out their enemies and abusing them. Still others don’t consider themselves to be trolls, so wouldn’t think this applies to them. In troll-fights, it’s very common for both sides to think they’re the good guys, fighting the good fight against the evil on the other side. Their victims are the real trolls; they’re just defending themselves or fighting their own corner. This has been a characteristic of many of the major trolling phenomena of the last few decades – GamerGate is one of the most dramatic examples. Neither side in a conflict thinks they’re the Nazis; they both think they’re the French Resistance.

The downsides of ‘real names’.

Another is that forcing real names only has downsides for trolls – that no-one else has anything to fear from having to use their real names, or from having to provide verifiable IDs for their social media accounts. Very much the opposite is true. There are many people who rely on anonymity or pseudonymity – some for their own protection, as they have enemies who might target them (whistle-blowers, victims of spousal abuse, gay teens with conservative parents, people living under oppressive regimes etc), others to enable their freedom of speech (people in responsible positions who might be compromised are just one example), including those who want their words to be taken at face value rather than being judged because of who has said them. ‘Real’ names can reveal things about a person that make them a target – ethnicity, religion, gender, age, class, and much more – and in the current era that revelation can be more precise, more detailed and more damaging because of the kind of profiling possible through data analysis. Forcing real names is something that privileged people (including people like me) may not understand the impact of – because it won’t damage them or put them at risk. For millions of others, it would. People in that kind of privileged position should think twice before assuming their own position is the only one that matters.

Real names make the link between the online person and the ‘real’ person easier. That’s good when you think it will allow you to ‘catch’ the bad guy – but bad when you realise it will allow the bad guys to catch their victims. There’s a reason ‘doxxing’ is a classic troll tool – revealing documents about their victims is a way to punish them. Forcing real names makes doxxing much easier – in practice, it’s like automatically doxxing people. Moreover, even if you don’t force real names but you do require some kind of verified ID, you’re providing an immediate source of doxxing information for the trolls to use to find their victims. You might as well be painting ‘HACK ME PLEASE’ in red letters 100 feet high on your database of IDs. It’s a recipe for disaster for a great many people.

What is the real problem?

This is the question that is often missed. What are we worried about? There are many forms of trolling – but there are two that are particularly important here. The first is specific, individual, direct and vicious attacks – death and rape threats, misogyny and racism etc. Real names won’t stop this – even if it could be enforced – and we already have tools to deal with it, even if they’re not as often or easily applied as they should be. ‘Anonymous’ trolls can be and are identified and prosecuted for these kinds of attacks. We have the technological tools to do this, and the law is in place to prosecute them (the Malicious Communications Act 1988, S127 of the Communications Act 2003 and more). People have been successfully prosecuted and jailed for trolling of this kind. There wasn’t any need for real names or digital IDs for this. It’s not easy, it’s not quick, and it’s not ‘summary justice’ – but it can be done.

The second is the ‘pile-on’ where a victim gets attacked by hundreds or thousands of smaller scale bits of nastiness simultaneously – often from many anonymous accounts. Some of the attacks are as vicious as the individual direct attacks mentioned above, but many won’t be – and wouldn’t easily be prosecuted under the laws mentioned above. It can be the sheer weight of the numbers of attacks that can be overwhelming – you can block one or two attackers, you can mute more, you can ignore some others, but when there are hundreds every minute it is impossible to deal with other than by locking your account or withdrawing from social media. This is where technological solutions – and social media company action – could help, and indeed is helping. The ability on Twitter, for example, to automatically mute all people with default pictures, can clean up a timeline a bit – taking out the most obvious of trolls. More of this is happening all the time – and again, does not require real names or digital IDs.

What is more important in the latter example – and indeed in the former – is why it happens. Pile-ons happen because they’re instigated – and they’re instigated not by anonymous trolls, but by exactly the opposite. By the big names, the ‘blue ticks’, the mainstream media, the mainstream politicians. When a blue tick (and I’m a blue tick) quote-tweets someone with a sarcastic comment, the thousands (or millions) of followers who see that tweet can and will pile in on the person quote-tweeted. The sarcastic comment from a big name is the cause of the pile-on, though in itself it isn’t harmful (and certainly not a prosecutable death threat or piece of hate speech). If you go after the individual (and sometimes anonymous) account that makes the death threat without considering the reason they targeted that person, you don’t really do anything to solve the problem.

And that’s the bottom line. Right now, our political climate encourages hatred and anger. The ‘war on woke’, Trump, Brexit, Le Pen, Modi, the Daily Mail, all encourage it. Anonymity on social media isn’t the problem. Our society and our political climate are the problem. Ending anonymity would cause vast and permanent damage to exactly the people who we need to protect, and for only a slight chance of making it easier to catch a small subsection of those who cause problems online. It should be avoided strenuously.

(For a more serious academic analysis of this issue, see Chapter 8 of my 2018 book The Internet, Warts and All, or my 2020 book What do we know and what should we do about internet privacy)

Why a real names policy won’t solve trolling

I don’t know how many times I’ve had to write about it, but it’s a lot. It comes up again and again. Anyway, once more I see that ‘real names’ are being touted as the solution to trolling. They aren’t. They won’t ever be – and in fact they’re highly likely to be counterproductive and deeply damaging to many of the vulnerable people they’re supposed to be protecting. Anyway, I’m not going to write something new, but give you an extract from my 2020 book, ‘What do we know and what should we do about Internet Privacy’ – which is relatively cheap (less than £10) and written, I hope, in language even an MP can understand. You can find it here or at any decent online bookseller.


Whenever there is any kind of ‘nastiness’ on social media – trolling, hate speech, cyber bullying, ‘revenge porn’ – there are immediate calls to force people to use their real names. It is seen as some kind of panacea, based in part on the idea that ‘hiding’ behind a false name makes people feel free to behave badly, and the related idea that they would be ashamed to do so if they were forced to reveal their real names. ‘Sunlight is the best disinfectant’ is a compelling argument on the surface, but when examined more closely it is not just likely to be ineffective but counterproductive, discriminatory and with the side effect of putting many different groups of people at significant risk. Moreover, there are already both technical and legal methods to discover who is behind an online account without the negative side effects.

The empirical evidence, counterintuitive though it might seem, suggests that when forced to use their real names internet trolls actually become more rather than less aggressive. There are a number of possible explanations for this. It might be seen as a ‘badge of honour’. Sometimes being a troll is something to boast about – and showing your real identity gives you kudos. Having to use your real name might actually free you from the shackles of wanting to hide. Perhaps it just makes trolls feel there’s nothing more to hide.

Whatever the explanation, forcing real names on people does not seem to stem the tide of nastiness. Platforms where real names are required – Facebook is the most obvious here – are conspicuously not free from harmful material, bullying and trolling. The internet is no longer anything like the place where ‘nobody knows you’re a dog’, even if you are operating under a pseudonym. There are many technological ways to know all kinds of things about someone on the internet regardless of ‘real-names’ policies. The authorities can break down most forms of pseudonymity and anonymity when they need to, while others can use a particular legal mechanism, the Norwich Pharmacal Order, to require the disclosure of information about an apparently anonymous individual from service providers when needed.

Even more importantly, requirements for real names can be deeply damaging to many people, as they provide the link between the online and ‘real-world’ identities. People operating under oppressive regimes – it should be no surprise that the Chinese government is very keen on real-names policies – are perhaps the most obvious, but whistle-blowers, people in positions of responsibility like police officers or doctors who want to share important insider stories, victims of domestic violence, young people who quite reasonably might not want their parents to know what they are doing, people with illnesses who wish to find out more about those illnesses, are just a start.

There are some specific groups who can and do suffer discrimination as a result of real-names policies: people with names that identify their religion or ethnicity, for a start, and indeed their gender. Transgender people suffer particularly badly – who defines what their ‘real’ name is, and how? Real names can also allow trolls and bullies to find and identify their victims – damaging exactly the people that the policies are intended to protect. It is not a coincidence that a common trolling tactic is doxxing – releasing documents about someone so that they can be targeted for abuse in the real world.

When looked at in the round, rather than requiring real names we should be looking to enshrine the right to pseudonymity online. It is a critical protection for many of exactly the people who need protection. Sadly, just as with encryption, it is much more likely that the authorities will push in exactly the wrong direction on this.


Our Dom

A tavern in the shadow of a castle, somewhere in France, or perhaps County Durham. Dom sits in a large chair, looking a little morose. In comes a young, northern lad, a salt of the earth type, who looks over at Dom and stops.

Darren (for it is he): Why are you looking so sad, Dom? What’s wrong?

Dom looks up, but barely registers Darren’s existence. Darren is unfazed, and comes up to Dom and tries to cheer him up with a smile. In the background, a brass band (good Northern stuff) starts up, in a tune recognisable as coming from Disney’s Beauty and the Beast.

Darren starts, in a sing-song voice

Gosh, it disturbs me to see you Our Dom
Looking so down in the dumps
Every guy here’d love to be you, Our Dom
Even when taking your lumps
You’re Boris’s trusted adviser
You’re Laura K’s favourite source
I’ve never met anyone wiser
There’s a reason for that: it’s because…

the band strikes up a jaunty tune…

No… one… lies like our Dom
Fakes his cries like our Dom
Cannot tell the truth if he tries like our Dom

His lying can never be bested
From London to Durham and more
To drive so his eyesight is tested?
I laughed so much my ribs were sore…

No… one… cheats like our Dom
Does deceit like our Dom
Turns his enemies white as a sheet like our Dom

When it comes down to rewriting history
There’s no folk who can quite compare
Why people believe it’s a mystery
But it drives his foes’ hearts to despair…

No… one… takes like our Dom
On the make like our Dom
Makes his news quite so perfectly fake like our Dom

His lies they are brash, they are brazen
But the media just doesn’t care
He crafts lies for ev’ry occasion
And his army of trolls and of bots can then share…

No… one… drives like our Dom
Coaches wives like our Dom
Cares nothing for old people’s lives like our Dom

He can break any law with impunity
In elections and lockdown who cares?
His denials of planned herd immunity
Are about as convincing as Donald Trump’s hair…

No… one… sneers like our Dom
Stokes up fears like our Dom
No… one… lies like our Dom
Porkie pies like our Dom

He’s….
Our…
Dom!

Darren sits down, exhausted. Dom just ignores him, but a secret smile just touches his eyes…

With apologies to anyone even slightly associated with Disney.