Verified IDs for social media? I don’t think so….

‘Verified ID’ is almost (but not quite) as bad an idea as ‘real names’: this is why. It does have some advantages. It’s not as quick a stalker’s tool as real names, and it doesn’t chill quite as much as real names, but it still has some very bad features.

Firstly, and most importantly, it is very unlikely to solve any problems. It still assumes that trolls act rationally and are ashamed of their trolling, or expect to be punished for it if caught. If real names don’t do that, why would verified ID? That is, it doesn’t provide any real deterrent to trolling, or to racial abuse. So what problem are you trying to solve with it? If you want to make subsequent investigations and prosecutions easier, you’re still missing the point: we don’t have the capacity.

You need to specify very carefully what you’re trying to solve first. Deterrence won’t work. Supporting prosecutions won’t work. So what is it? Set that down first, before you suggest verified ID as a solution. Remember that many trolls think their comments are justified. Trolls tend not to think of themselves as trolls or of their activities as trolling – they think their abuse of Diane Abbott is really about her mathematical skills and so forth – so measures aimed at ‘trolls’ don’t, in their own minds, apply to them.

Next, the downsides. Who will hold all this vital ID data? The social media companies? They’re the last people who should get vital information to add to their databases. Giving them more power is disastrous for all of us. Some ‘trusted’ third party? Who? How? Why? Remember who the government wanted to be ‘trusted’ over Age Verification? MindGeek, who own Pornhub. Who would they get here? Dido Harding? Trust is critical here, and trust is missing.

Next, the chilling effect. The people most in need of protection, the ones most at risk from Real Names, will still be chilled. Will someone with mental health issues want to give information that might be handed over to a service that might get them sectioned? And people who don’t trust the government or the police? Remember that the Investigatory Powers Act means they can get access to all that data. This will chill them. Maybe that’s the intention.

Then we have the data itself. Whoever holds it, it’s vulnerable to misuse and to hacking. It’s a honeypot of data that will be vulnerable. Experience makes that very clear. Even those with the best intentions make mistakes. There are hackers, leakers and more.

So if we want to do this, we need the benefits to outweigh these risks. So far, the benefits are minimal if they exist at all. The risks are not minimal at all. And that still leaves the biggest elephant in the room: what lies behind the REAL problem…

…because the real problem with racial abuse is the racism in our society. The racism in our media. The racism in our politicians. I like to blame Mark Zuckerberg for a huge amount – but here, he’s less responsible than Boris Johnson, Priti Patel, Nigel Farage etc.

So let’s not be distracted. I’m not against this in the absolute way I am about real names – but there are so many obstacles to be overcome before it could be made to work I find it hard to believe that it’s a realistic solution. AND it’s a distraction from the real problem.

Real Names: the wrong tool for the wrong problem

The drive towards enforcing ‘real names’ on the internet – and on social media in particular – is gathering momentum at the moment. Katie Price’s petition to require people to provide verifiable ID before opening a social media account is just a new variation on a very old theme – and though well intentioned (as are many of the similar drives) it is badly misdirected. Not only is it unlikely to solve any of the problems it is intended to solve, it would make things worse – and make it even harder to find genuine solutions to what are, for the most part, genuine problems.

The attraction of ending anonymity

Ending – or significantly curbing – anonymity on social media is superficially attractive. ‘They wouldn’t behave that way if they had to use their real names’ is one argument. ‘They only do it because we can’t find them’ is another. Neither of these things is really true. Evidence that people are less aggressive or less antagonistic if they are forced to use their real names is mixed at best – and indeed some large-scale studies have shown that trolls can be worse if they have to use their real names. More importantly, however, curtailing anonymity would have very damaging consequences for many vulnerable people, as well as distracting us from the real problems behind a lot of trolling. It isn’t the anonymity that’s the problem, it’s the trolling – and the reasons for the trolling are far deeper than the names people use when they troll. It isn’t the anonymity, it’s the aggression, it’s the anger, it’s the hate and it’s the lies. Whilst anger, hate and lies are endemic in our society – and notably successful in our media and our politics – that anger, hate and lies will be manifested online, and in social media in particular.

Trolls don’t need anonymity….

There are many assumptions behind the idea that real names would stop trolling. One is that people imagine that trolls are ashamed of their trolling, so would no longer do it if they were forced to do it using their real names. For some trolls, this may be the case – but for others exactly the opposite is true. They may even be proud of their trolling, happy to be seen to be calling out their enemies and abusing them. Still others don’t consider themselves to be trolls, so wouldn’t think this applies to them. In troll-fights, it’s very common for both sides to think they’re the good guys, fighting the good fight against the evil on the other side. Their victims are the real trolls; they’re just defending themselves or fighting their own corner. This has been a characteristic of many of the major trolling phenomena of the last few decades – GamerGate is one of the most dramatic examples. Neither side in a conflict thinks they’re the Nazis; they both think they’re the French Resistance.

The downsides of ‘real names’.

Another assumption is that forcing real names only has downsides for trolls – that no-one else has anything to fear from having to use their real names, or from having to provide verifiable IDs for their social media accounts. Very much the opposite is true. There are many people who rely on anonymity or pseudonymity – some for their own protection, as they have enemies who might target them (whistle-blowers, victims of spousal abuse, gay teens with conservative parents, people living under oppressive regimes etc) – others to enable their freedom of speech (people in responsible positions who might be compromised are just one of the examples), including those who want their words to be taken at face value rather than being judged because of who has said them. ‘Real’ names can reveal things about a person that make them a target – revealing ethnicity, religion, gender, age, class, and much more – and in the current era that revelation can be more precise, more detailed and more damaging because of the kind of profiling possible through data analysis. Forcing real names is something that privileged people (including people like me) may not understand the impact of – because it won’t damage them or put them at risk. For millions of others, it would. People who are in that kind of privileged position should think twice before assuming their own position is the only one that matters.

Real names make the link between the online person and the ‘real’ person easier. That’s good when you think it will allow you to ‘catch’ the bad guy – but bad when you realise it will allow the bad guys to catch their victims. There’s a reason ‘doxxing’ is a classic troll tool – revealing documents about victims is a way to punish them. Forcing real names makes doxxing much easier – in practice, it’s like automatically doxxing people. Moreover, even if you don’t force real names but you do require some kind of verified ID, you’re providing an immediate source of doxxing information for the trolls to use to find their victims. You might as well be painting ‘HACK ME PLEASE’ in red letters 100 feet high on your database of IDs. It’s a recipe for disaster for a great many people.

What is the real problem?

This is the question that is often missed. What are we worried about? There are many forms of trolling – but there are two that are particularly important here. The first is the specific, direct and vicious individual attack – death and rape threats, misogyny and racism etc. Real names won’t stop this – even if such a policy could be enforced – and we already have tools to deal with it, even if they’re not applied as often or as easily as they should be. ‘Anonymous’ trolls can be and are identified and prosecuted for these kinds of attacks. We have the technological tools to do this, and the law is in place to prosecute them (the Malicious Communications Act 1988, S127 of the Communications Act 2003 and more). People have been successfully prosecuted and jailed for trolling of this kind. There wasn’t any need for real names or digital IDs for this. It’s not easy, it’s not quick, and it’s not ‘summary justice’ – but it can be done.

The second is the ‘pile-on’ where a victim gets attacked by hundreds or thousands of smaller scale bits of nastiness simultaneously – often from many anonymous accounts. Some of the attacks are as vicious as the individual direct attacks mentioned above, but many won’t be – and wouldn’t easily be prosecuted under the laws mentioned above. It can be the sheer weight of the numbers of attacks that can be overwhelming – you can block one or two attackers, you can mute more, you can ignore some others, but when there are hundreds every minute it is impossible to deal with other than by locking your account or withdrawing from social media. This is where technological solutions – and social media company action – could help, and indeed is helping. The ability on Twitter, for example, to automatically mute all people with default pictures, can clean up a timeline a bit – taking out the most obvious of trolls. More of this is happening all the time – and again, does not require real names or digital IDs.

What is more important in the latter example – and indeed in the former – is why it happens. Pile-ons happen because they’re instigated – and they’re instigated not by anonymous trolls, but by exactly the opposite. By the big names, the ‘blue ticks’, the mainstream media, the mainstream politicians. When a blue tick (and I’m a blue tick) quote-tweets someone with a sarcastic comment, the thousands (or millions) of followers who see that tweet can and will pile in on the person quote-tweeted. The sarcastic comment from a big name is the cause of the pile-on, though in itself it isn’t harmful (and certainly not a prosecutable death threat or piece of hate speech). If you go after the individual (and sometimes anonymous) account that makes the death threat without considering the reason they targeted that person, you don’t really do anything to solve the problem.

And that’s the bottom line. Right now, our political climate encourages hatred and anger. The ‘war on woke’, Trump, Brexit, Le Pen, Modi, the Daily Mail, all encourage it. Anonymity on social media isn’t the problem. Our society and our political climate are the problem. Ending anonymity would cause vast and permanent damage to exactly the people who we need to protect, and for only a slight chance of making it easier to catch a small subsection of those who cause problems online. It should be strenuously avoided.

(For a more serious academic analysis of this issue, see Chapter 8 of my 2018 book The Internet, Warts and All, or my 2020 book What Do We Know and What Should We Do About Internet Privacy?)

Why a real names policy won’t solve trolling

I don’t know how many times I’ve had to write about it, but it’s a lot. It comes up again and again. Anyway, once more I see that ‘real names’ are being touted as the solution to trolling. They aren’t. They won’t ever be – and in fact they’re highly likely to be counterproductive and deeply damaging to many of the vulnerable people they’re supposed to be protecting. Anyway, I’m not going to write something new, but give you an extract from my 2020 book, ‘What do we know and what should we do about Internet Privacy’ – which is relatively cheap (less than £10) and written, I hope, in language even an MP can understand. You can find it here or at any decent online bookseller.

Whenever there is any kind of ‘nastiness’ on social media – trolling, hate speech, cyber bullying, ‘revenge porn’ – there are immediate calls to force people to use their real names. It is seen as some kind of panacea, based in part on the idea that ‘hiding’ behind a false name makes people feel free to behave badly, and the related idea that they would be ashamed to do so if they were forced to reveal their real names. ‘Sunlight is the best disinfectant’ is a compelling argument on the surface, but when examined more closely it is not just likely to be ineffective but counterproductive and discriminatory, with the side effect of putting many different groups of people at significant risk. Moreover, there are already both technical and legal methods to discover who is behind an online account without the negative side effects.

The empirical evidence, counterintuitive though it might seem, suggests that when forced to use their real names internet trolls actually become more rather than less aggressive. There are a number of possible explanations for this. It might be seen as a ‘badge of honour’. Sometimes being a troll is something to boast about – and showing your real identity gives you kudos. Having to use your real name might actually free you from the shackles of wanting to hide. Perhaps it just makes trolls feel there’s nothing more to hide.

Whatever the explanation, forcing real names on people does not seem to stem the tide of nastiness. Platforms where real names are required – Facebook is the most obvious here – are conspicuously not free from harmful material, bullying and trolling. The internet is no longer anything like the place where ‘nobody knows you’re a dog’, even if you are operating under a pseudonym. There are many technological ways to know all kinds of things about someone on the internet regardless of ‘real-names’ policies. The authorities can break down most forms of pseudonymity and anonymity when they need to, while others can use a particular legal mechanism, the Norwich Pharmacal Order, to require the disclosure of information about an apparently anonymous individual from service providers when needed.

Even more importantly, requirements for real names can be deeply damaging to many people, as they provide the link between the online and ‘real-world’ identities. People operating under oppressive regimes – it should be no surprise that the Chinese government is very keen on real-names policies – are perhaps the most obvious, but whistle-blowers, people in positions of responsibility like police officers or doctors who want to share important insider stories, victims of domestic violence, young people who quite reasonably might not want their parents to know what they are doing, people with illnesses who wish to find out more about those illnesses, are just a start.

There are some specific groups who can and do suffer discrimination as a result of real-names policies: people with names that identify their religion or ethnicity, for a start, and indeed their gender. Transgender people suffer particularly badly – who defines what their ‘real’ name is, and how? Real names can also allow trolls and bullies to find and identify their victims – damaging exactly the people that the policies are intended to protect. It is not a coincidence that a common trolling tactic is doxxing – releasing documents about someone so that they can be targeted for abuse in the real world.

When looked at in the round, rather than requiring real names we should be looking to enshrine the right to pseudonymity online. It is a critical protection for many of exactly the people who need protection. Sadly, just as with encryption, it is much more likely that the authorities will push in exactly the wrong direction on this.

Our Dom

A tavern in the shadow of a castle, somewhere in France, or perhaps County Durham. Dom sits in a large chair, looking a little morose. In comes a young, northern lad, a salt of the earth type, who looks over at Dom and stops.

Darren (for it is he): Why are you looking so sad, Dom? What’s wrong?

Dom looks up, but barely registers Darren’s existence. Darren is unfazed, and comes up to Dom and tries to cheer him up with a smile. In the background, a brass band (good Northern stuff) starts up, in a tune recognisable as coming from Disney’s Beauty and the Beast.

Darren starts, in a sing-song voice

Gosh, it disturbs me to see you Our Dom
Looking so down in the dumps
Every guy here’d love to be you, Our Dom
Even when taking your lumps
You’re Boris’s trusted adviser
You’re Laura K’s favourite source
I’ve never met anyone wiser
There’s a reason for that: it’s because…

the band strikes up a jaunty tune…

No… one… lies like our Dom
Fakes his cries like our Dom
Cannot tell the truth if he tries like our Dom

His lying can never be bested
From London to Durham and more
To drive so his eyesight is tested?
I laughed so much my ribs were sore…

No… one… cheats like our Dom
Does deceit like our Dom
Turns his enemies white as a sheet like our Dom

When it comes down to rewriting history
There’s no folk who can quite compare
Why people believe it’s a mystery
But it drives his foes’ hearts to despair…

No… one… takes like our Dom
On the make like our Dom
Makes his news quite so perfectly fake like our Dom

His lies they are brash, they are brazen
But the media just doesn’t care
He crafts lies for ev’ry occasion
And his army of trolls and of bots can then share…

No… one… drives like our Dom
Coaches wives like our Dom
Cares nothing for old people’s lives like our Dom

He can break any law with impunity
In elections and lockdown who cares?
His denials of planned herd immunity
Are about as convincing as Donald Trump’s hair…

No… one… sneers like our Dom
Stokes up fears like our Dom
No… one… lies like our Dom
Porkie pies like our Dom


Darren sits down, exhausted. Dom just ignores him, but a secret smile just touches his eyes…

With apologies to anyone even slightly associated with Disney.

Contact tracing, privacy, magical thinking – and trust!

The saga of the UK’s contact tracing app has barely begun but already it is fraught with problems. Technical problems – the app barely works on iPhones, for example, and communication between iPhones requires someone with an Android phone to be in close proximity – are just the start of it. Legal problems are another issue – the app looks likely to stretch data protection law at the very least. Then there are practical problems – will the app record you as having contact with people from whom you are separated by a wall, for example – and the huge issue of getting enough people to download it when many don’t have smartphones, many won’t be savvy enough to get it going, and many more, it seems likely, won’t trust the app enough to use it.

That’s not even to go into the bigger problems with the app. First of all, it seems unlikely to do what people want it to do – though even what is wanted is unclear, a problem which I will get back to. Secondly, it rides roughshod over privacy in not just a legal but a practical way, and despite what many might suggest, people do care about privacy enough to make decisions on its basis.

This piece is not about the technical details of the app – there are people far more technologically adept than me who have already written extensively and well about this – and nor is it about the legal details, which have also been covered extensively and well by some real experts (see the Hawktawk blog on data protection, and the opinion of Matthew Ryder QC, Edward Craven, Gayatri Sarathy & Ravi Naik for example) but rather about the underlying problems that have beset this project from the start: misunderstanding privacy, magical thinking, and failure to grasp the nature of trust.

These three issues together mean that right now, the project is likely to fail, do damage, and distract from genuine ways to help deal with the coronavirus crisis, and the best thing people can do is not download or use the app, so that the authorities are forced into a rethink and into a better way forward. It would be far from the first time during this crisis that the government has had to be nudged in a positive direction.

Misunderstanding Privacy – Part 1

Although people often underplay it – particularly in relation to other people – privacy is important to everyone. MPs, for example, will fiercely guard their own privacy whilst passing the most intrusive of surveillance laws. Journalists will fight to protect the privacy of their sources even whilst invading the privacy of the subjects of their investigations. Undercover police officers will resist even legal challenges to reveal their identities after investigations go wrong.

This is for one simple reason: privacy matters to people when things are important.

That is particularly relevant here, because the contact tracing app hits at three of the most important parts of our privacy: our health, our location, and our social interactions. Health and location data, as I detail in my most recent book, What Do We Know and What Should We Do About Internet Privacy?, are two of the key areas of the current data world, in part because we care a lot about them and in part because they can be immensely valuable in both positive and negative ways. We care about them because they’re intensely personal and private – but that’s also why they can be valuable to those who wish to exploit or harm us. Health data, for example, can be used to discriminate – something the contact tracing app might well enable, as it could force people to self-isolate whilst others are free to move, or even act as an enabler for the ‘immunity passports’ that have been mooted but are fraught with even more problems than the contact tracing app.

Location data is another matter and something worthy of much more extensive discussion – but suffice it to say that there’s a reason we don’t like the idea of being watched and followed at all times, and that reason is real. If people know where you are or where you have been, they can learn a great deal about you – and know where you are not (if you’re not at home, you might be more vulnerable to burglars) as well as where you might be going. Authoritarian states can find dissidents. Abusive spouses can find their victims and so forth. More ‘benignly’, it can be used to advertise and sell local and relevant products – and in the aggregate can be used to ‘manage’ populations.

Relationship data – who you know, how well you know them, what you do with them and so forth – is in online terms one of the things that makes Facebook so successful and at the same time so intrusive. What a contact tracing system can do is translate that into the offline world. Indeed, that’s the essence of it: to gather data about who you come into contact with, or at least in proximity to, by getting your phone to communicate with all the phones close to you in the real world.

This is something we do and should care about, and could and should be protective over. Whilst it makes sense in relation to protecting against the spread of an infection, the potential for misuse of this kind of data is perhaps even greater than that of health and location data. Authoritarian states know this – it’s been standard practice for spies for centuries. The Stasi’s files were full of details of who had met whom and when, and for how long – this is precisely the kind of data that a contact tracing system has the potential to gather. This is also why we should be hugely wary of establishing systems that enable it to be done easily, remotely and at scale. This isn’t just privacy as some kind of luxury – this is real concern about things that are done in the real world and have been for many, many years, just not with the speed, efficiency and cheapness of installing an app on people’s phones.

Some of this people ‘instinctively’ know – they feel that the intrusions on their privacy are ‘creepy’ – and hence resist. Businesses and government often underestimate how much they care and how much they resist – and how able they are to resist. In my work I have seen this again and again. Perhaps the most relevant here was the dramatic nine day failure that was the Samaritans Radar app, which scanned people’s tweets to detect whether they might be feeling vulnerable and even suicidal, but didn’t understand that even this scanning would be seen as intrusive by the very people it was supposed to protect. They rebelled, and the app was abandoned almost immediately it had started. The NHS’s own ‘’ scheme, far bigger and grander, collapsed for similar reasons – it wanted to suck up data from GP practices into a great big central database, but didn’t get either the legal or the practical consent from enough people to make it work. Resistance was not futile – it was effective.

This resistance seems likely in relation to the contact tracing app too – not least because the resistance grows spectacularly when there is little trust in the people behind a project. And, as we shall see, the government has done almost everything in its power to make people distrust their project.

Magical thinking

The second part of the problem is what can loosely be called ‘magical thinking’. This is another thing that is all too common in what might loosely be called the ‘digital age’. Broadly speaking, it means treating technology as magical, and thinking that you can solve complex, nuanced and multifaceted problems with a wave of a technological wand. It is this kind of magic that Brexiters believed would ‘solve’ the Irish border problems (it won’t) and led anti-porn campaigners to think that ‘age verification’ systems online would stop kids (and often adults) from accessing porn (it won’t).

If you watched Matt Hancock launch the app at the daily Downing Street press conference, you could have seen how this works. He enthused about the app like a child with a new toy – and suggested that it was the key to solving all the problems. Even with the best will in the world, a contact tracing app could only be a very small part of a much bigger operation, and only make a small contribution to solving whatever problems they want it to solve (more of which later). Magical thinking, however, makes it the key, the silver bullet, the magic spell that needs only to be spoken to transform Cinderella into a beautiful princess. It will never be that, and the more it is thought of in those terms the less chance it has of working in any way at all. The magical thinking means that the real work that needs to go on is relegated to the background or eliminated altogether, replaced only by the magic of tech.

Here, the app seems to be designed to replace the need for a proper and painstaking testing regime. As it stands, it is based on self-reporting of symptoms, rather than testing. A person self-reports, and then the system alerts anyone who it thinks has been in contact with that person that they might be at risk. Regardless of the technological safeguards, that leaves the system at the mercy of hypochondriacs who will report the slightest cough or headache, thus alerting anyone they’ve been close to, or malicious self-reporters who either just want to cause mischief (scare your friends for a laugh) or who actually want to cause damage – go into a shop run by a rival, then later self-report and get all the workers in the shop worried into self-isolation.
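As a toy illustration – entirely hypothetical code, not the app’s actual design or protocol – the self-reporting flow described above can be sketched in a few lines. Note that nothing in it can distinguish an honest report from a hypochondriac’s or a malicious one:

```python
from collections import defaultdict

class ContactTracer:
    """Toy model of a self-report-based tracing system.
    Hypothetical: not the real app's code or protocol."""

    def __init__(self):
        # Contact log: person -> set of people their phone has been near.
        self.contacts = defaultdict(set)

    def record_contact(self, a, b):
        self.contacts[a].add(b)
        self.contacts[b].add(a)

    def self_report(self, person):
        # No test result or verification is required: any report,
        # honest, hypochondriac or malicious, alerts every logged contact.
        return sorted(self.contacts[person])

tracer = ContactTracer()
tracer.record_contact("malicious_visitor", "worker_1")
tracer.record_contact("malicious_visitor", "worker_2")

# The visitor self-reports after walking through a rival's shop:
print(tracer.self_report("malicious_visitor"))  # ['worker_1', 'worker_2']
```

The point of the sketch is that the alert logic has no way to weigh the reporter’s motives: the stoic simply never calls `self_report`, while the mischief-maker calls it freely.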

These are just a couple of the possibilities. There are more. Stoics, who have symptoms but don’t take them seriously and don’t report – or people afraid to report because it might get them into trouble with work or friends. Others who don’t even recognise the symptoms. Asymptomatic people who can go around freely infecting people and never get triggered on the system at all. The magical thinking that suggests the app can do everything doesn’t take human nature into account – let alone malicious actors. History shows that whenever a technological system is developed, the people who wish to find and exploit flaws in it – or different ways to use it – are ready to take advantage.

Magical thinking also means not thinking anything will go wrong – whether it be the malicious actors already mentioned or some kind of technical flaw that has not been anticipated. It also means assuming that all these problems must be soluble by a little bit of techy cleverness, because the techies are so clever. Of course they are clever – but there are many problems that tech alone can’t solve.

The issue of trust

One of those is trust. Tech can’t make people trust you – indeed, many people are distinctly distrustful of technology. The NHS generates trust, and those behind the app may well be assuming that they can ride on the coattails of that trust – but that itself may be wishful thinking, because they have done almost none of the things that generate real trust – and the app depends hugely on trust, because without it people won’t download or use the app.

How can they generate that trust? The first point, and perhaps the hardest, is to be trustworthy. The NHS generates trust, but politicians do the opposite. These particular politicians have been demonstrably and dramatically untrustworthy, noted for their lies – Boris Johnson having been sacked from more than one job for having lied. Further, their tech people have a particularly dishonourable record – Dominic Cummings is hardly seen as a paragon of virtue even by his own side, whilst the manipulative social media tactics of the leave campaign were remarkable for both their effectiveness and their dishonesty.

In those circumstances, that means you have to work hard to generate trust. There are a few keys here. The first is to distance yourself from the least trustworthy people – the vote leave campaigners should not have been let near this with a barge pole, for example. The second is to follow systems and procedures in an exemplary way, building in checks and balances at all times, and being as transparent as possible.

Here, they’ve done the opposite. It has been almost impossible to find out what was going on until the programme was already in pilot stage. Parliament – through its committee system – was not given oversight until the pilot was already under way, and the report of the Human Rights Committee was deeply critical. There appears to have been no Data Protection Impact Assessment done in advance of the pilot – which is almost certainly in breach of the GDPR.

Further, it is still not really clear what the purpose of the project is – and this is also something crucial for the generation of trust. We need to know precisely what the aims are – and how they will be measured, so that it is possible to ascertain whether it is a success or not. We need to know the duration, what happens on completion – to the project, to the data gathered and to the data derived from the data gathered. We need to know how the project will deal with the many, many problems that have already been discussed – and we needed to know that before the project went into its pilot stage.

Being presented with a ‘fait accompli’ and being told to accept it is one way to reduce trust, not to gain it. All these processes need to take place whilst there is still a chance to change the project, and change it significantly – because all the signs are that a significant change will be needed. Currently it seems unlikely that the app will do anything very useful, and it will have significant and damaging side effects.

Misunderstanding Privacy – part 2

…which brings us back to privacy. One of the most common misunderstandings of privacy is the idea that it’s about hiding something away – hence the facetious and false ‘if you’ve got nothing to hide you’ve got nothing to fear’ argument that is made all the time. In practice, privacy is complex and nuanced and more about controlling – or at least influencing – what kind of information about you is made available to whom.

This last part is the key. Privacy is relational. You need privacy from someone or something else, and you need it in different ways. Privacy scholars are often asked ‘who do you worry about most, governments or corporations?’ Are you more worried about Facebook or GCHQ? It’s a bit of a false question – because you should be (and probably are) worried about them in different ways, just as you’re worried about privacy from your boss, your parents, your kids, your friends in different ways. You might tell your doctor the most intimate details about your health, but you probably wouldn’t tell your boss or a bloke you meet in the pub.

With the coronavirus contact tracing app, this is also the key. Who gets access to our data, who gets to know about our health, our location, our movements and our contacts? If we know this information is going to be kept properly confidential, we might be more willing to share it. Do we trust our doctors to keep it confidential? Probably. Would we trust the politicians to keep it confidential? Far less likely. How can we be sure who will get access to it?

Without getting into too much technical detail, this is where the key current argument is over the app. When people talk about a centralised system, they mean that the data (or rather some of the data) is uploaded to a central server when you report symptoms. A decentralised system does not do that – the data is only communicated between phones, and doesn’t get stored in a central database. This is much more privacy-friendly, but does not build up a big central database for later use and analysis.
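The decentralised idea can be sketched in a few lines of Python. This is a deliberately simplified toy of my own devising – the class and method names are illustrative, loosely modelled on decentralised proposals such as DP-3T – but it shows the essential point: phones exchange random tokens, and exposure matching happens on each phone, so no central contact database is ever built.

```python
import secrets

class Phone:
    """Toy model of a phone in a decentralised contact tracing scheme."""

    def __init__(self):
        self.sent_tokens = []      # ephemeral random tokens this phone has broadcast
        self.heard_tokens = set()  # tokens heard from nearby phones over Bluetooth

    def broadcast(self):
        # In a real scheme a fresh token is derived and rotated frequently
        token = secrets.token_hex(16)
        self.sent_tokens.append(token)
        return token

    def receive(self, token):
        self.heard_tokens.add(token)

    def check_exposure(self, published_infected_tokens):
        # Matching happens locally, on the handset: only the random tokens
        # of those who report symptoms are ever published, not who met whom
        return bool(self.heard_tokens & set(published_infected_tokens))

alice, bob, carol = Phone(), Phone(), Phone()

# Alice and Bob meet; Carol meets no one
bob.receive(alice.broadcast())
alice.receive(bob.broadcast())

# Alice reports symptoms: only her own meaningless random tokens are published
published = alice.sent_tokens

assert bob.check_exposure(published) is True    # Bob learns he was exposed
assert carol.check_exposure(published) is False  # Carol learns nothing happened
```

In a centralised design, by contrast, Alice’s phone would upload the tokens she had *heard* to a government-run server, which would then work out – centrally – who to notify, building up exactly the kind of database of contacts that privacy advocates object to.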

This is why privacy people much prefer the idea of a decentralised system – because, amongst other things, it keeps the data out of the hands of people that we cannot and should not trust. Out of the hands of the people we need privacy from.

The government does not seem to see this. They’re keen to stress how well the data is protected in ‘security’ terms – protected from hackers and so forth – without realising (or perhaps admitting) that the people we really want privacy from, the people who present the biggest risk to the users, are the government themselves. We don’t trust this government – and we should not really trust any government, but build in safeguards and protections from those governments, and remember that what we build now will be available not just to this government but to successors, which may be even worse, however difficult that might be to imagine.

Ways forward?

Where do we go from here? It seems likely that the government will try to push on regardless, and present whatever happens as a great success. That should be fought against, tooth and nail. They can and should be challenged and pushed on every point – legal, technical, practical, and trust-related. That way they may be willing to move to a more privacy-friendly solution. They do exist, and it’s not too late to change.

what do we know and what should we do about…? internet privacy

My new book, what do we know and what should we do about internet privacy has just been published, by Sage. It is part of a series of books covering a wide range of current topics – the first ones have been on immigration, inequality, the future of work and housing.

This is a very different kind of book from my first two books – Internet Privacy Rights, and The Internet, Warts and All, both of which are large, relatively serious academic books, published by Cambridge University Press, and sufficiently expensive and academic as to be purchasable only by other academics – or more likely university libraries. The new book is meant for a much more general audience – it is short, written intentionally accessibly, and for sale at less than £10. It’s not a law book – the series is primarily social science, and in many ways I would call the book more sociology than anything else. I was asked to write the book by the excellent Chris Grey – whose Brexit blogs have been vital reading over the last few years – and I was delighted to be asked, because making this subject in particular more accessible has been something I’ve been wanting to do for a long time. Internet privacy has been a subject for geeks and nerds for years – but as this new book tries to show, it’s something that matters more and more for everyone these days.


It may be a short book (well, it is a short book, well under 100 pages) but it covers a wide range. It starts by setting the context – a brief history of privacy, a brief history of the internet, and then showing how we got from what were optimistic, liberal and free beginnings to the current situation – all-pervading surveillance, government involvement at every level, domination by a few, huge corporations with their own interests at heart. It looks at the key developments along the way – the world-wide-web, search, social networks – and their privacy implications. It then focusses on the biggest ‘new’ issues: location data, health data, facial recognition and other biometrics, the internet of things, and political data and political manipulation. It sketches out how each of these matters significantly – but how the combination of them matters even more, and what it means in terms of our privacy, our autonomy and our future.

The final part of the book – the ‘what should we do about…’ section – is by its nature rather shorter. There is not as much that we can do as many of us would like – as the book outlines, we have reached a position from which it is very difficult to escape. We have built dependencies that are hard to find alternatives to – but not impossible. The book outlines some of the key strategies – from doing our best to extricate ourselves from the disaster that is Facebook to persuading our governments not to follow the ultimately destructive paths that they currently seem determined to pursue. Two policies get particular attention: Real Names – superficially attractive but ultimately destructive and authoritarian, failing to deal with the issues they claim to address and putting vulnerable people in more danger – and the current, fundamentally misguided attempts to undermine the effectiveness of encryption.

Can we change? I have to admit this is not a very optimistic book, despite the cheery pink colour of its cover, but it is not completely negative. I hope that the starting point is raising awareness, which is what this book is intended to do.

The book can be purchased directly from Sage here, or via Amazon here, though if you buy it through Amazon, after you’ve read the book you might feel you should have bought it another way!


Paul Bernal

February 2020

For Brexit

When hate and lies

Found wings to fly

And ignorance

Gained prominence

Those Empire songs

And Big Ben bongs

Nostalgic dreams

Weren’t what they seemed

Rose-tinted specs

With dire effect

And science dies

Beneath those lies

With knowledge lost

Old friendships tossed

For hateful thoughts

A mood they caught

And migrants blamed

Old hates inflamed

“Take back control”

And lose your soul

And so we go

Although we know

That wounded future

Finds no suture

All the madness

Leaves just sadness

It’s over now.

And how.

P Bernal

The BBC’s problems are no conspiracy theory…

The BBC’s latest response to their challenges over their election coverage, in a piece in the Guardian by Fran Unsworth, their director of news and current affairs, has a very welcome headline:

“At the BBC, impartiality is precious. We will protect it”

Fran, and the BBC, are right that their impartiality is precious – as well as being required by law – but by dismissing those who are challenging them as conspiracy theorists they are doing the opposite of protecting it. They’re helping to ensure its demise.

Not a conspiracy theory

The first and most important thing to say is that very few people – and no-one serious – are suggesting there is any kind of conspiracy going on here. To suggest that they are is a classic straw man argument. Conspiracy theories are easily dismissed, and often make little sense when analysed. Of course it’s impossible to get a large number of independent minded journalists and individual editors to follow a conspiracy. We know that very well – but it’s absolutely not what the BBC is being accused of, so attacking it and dismissing it bears no relationship to the real problem – or real problems, because there are a number of connected problems involved here.

The problems with the BBC are qualitatively different. Unconscious or subconscious bias. A tendency to groupthink. Subservience to authority. High-handedness to the rest of us. This, coupled with a kind of naïveté and misunderstanding of the new media environment, is what produces the problems that we see with the BBC – and which the BBC either don’t see or don’t want to see or address.

Making mistakes

Everyone makes mistakes – though many might take issue with Fran Unsworth’s description of ‘a couple of editorial mistakes’ as something of an underestimate – and no-one expects all mistakes to be avoided. The big questions, though, are what kind of mistakes are made, how they are corrected and avoided in the future, and what kind of apologies are made for them. That’s where the question of unconscious or subconscious bias comes in. The two mistakes Fran Unsworth is presumably referring to are using the wrong clip for Boris Johnson at the Cenotaph and editing out the laughter that followed his answer about trust in the Question Time debate, but there are a number of others. The most noticeable thing about them, however, is not the individual errors, but that they all lean in the same direction. All tend to favour Boris Johnson. That’s where the question of bias comes in. Not a conspiracy theory that the mistakes are made deliberately, under some kind of orders, but that they tend to follow the subconscious bias.

Subservience to authority

This is closely related to the accusation – made in particular by Peter Oborne – that the BBC is too servile to the Prime Minister’s Office. Again, this isn’t a conspiracy theory, but an observation, and certainly not one restricted to the BBC. Robert Peston fits the profile every bit as much as Laura Kuenssberg, for example. This is nothing new for the BBC, however, as the role of being a state broadcaster has consequences, but it has a particular significance in a time when those in authority – and those in Number 10 in particular – are notably less trustworthy than in the past.

Being willing to make compromises in order to get access is normal journalistic practice, but there are balances to be found and the main accusation is that the balance has been tipped too far. When Number 10 is restricting other media – bans on Channel 4 News and on the Daily Mirror for example – it should ring alarm bells in the minds of any journalists. When the criticisms of Peter Oborne are taken into account, those alarm bells should be listened to even more carefully.  Denying that it’s even possible that the balance may have been missed, rather than critical self-examination, is a recipe for disaster.

Fran Unsworth assures us that the BBC are not ‘cowed or unconfident’. I hope she’s right, but the evidence does not really support her. The other ‘mistake’ – failing to secure a date for an Andrew Neil interview with Boris Johnson whilst telling the other leaders (or at the very least hinting) that they had – does not look at all good. Acquiescing to Johnson’s subsequent request to get the Sunday morning chat with Andrew Marr rather than the evening grilling by Neil makes it look even worse. A strong, ‘uncowed’ BBC would not have let either of those things happen.

Understanding the new media

Another key aspect of the current political climate – and again, the current occupants of Number 10 are critical here – is that the relationship between the old and the new media is vitally important. It is very easy for the ‘old media’ to get ‘played’ by skilful operators of the new media. Selectively RTing poorly phrased and incomplete tweets by BBC journalists, taking them out of context and not mentioning critiques that had been put in separate tweets is just one example. Using clips from interviews similarly selectively or even editing them to create an effect (making Keir Starmer pause and look as though he didn’t answer a question that he did, or editing out the laughter that followed Boris Johnson’s answer on trust) is pretty standard practice now – and the BBC should be aware of that.

There are things that the BBC journalists could do to slow down this manipulation – such as including the criticism within the tweet rather than separately. “Mr Johnson again mentioned the 50,000 new nurses” in a tweet leaves it open to magnification without criticism; “Mr Johnson again claimed the debunked number of 50,000 new nurses” does not. Taking more care over words: say that a politician ‘says’ or ‘claims’ rather than ‘reveals’ something if the thing they are claiming is at all dubious. Being cynical in the face of people with a track record of dishonesty isn’t being unfair, it’s being a proper journalist.

High-handedness to critics

The responses to criticism – and Fran Unsworth’s is just the latest of many – have been perhaps the most disappointing of all. Anyone even slightly criticising the BBC is dismissed as a conspiracy theorist, fobbed off with straw man arguments or worse. Huw Edwards suggested Peter Oborne looked ‘crackers’ for suggesting the clipped version of Boris Johnson’s response on trust had been edited – and even when the BBC eventually admitted it had been edited there has been no apology from Edwards.

This is pretty much the definition of gaslighting – and the BBC should know this and should find a much, much better way.

Trusting the BBC

Right now, we need the BBC to be working well. We need to be able to trust the BBC – and the BBC needs us to trust them. Calling its critics conspiracy theorists and miscasting their criticism as ‘crackers’ is pretty much guaranteed to damage that trust. It is already close to breaking point. Unless the BBC starts to understand this – and to openly acknowledge it, because I am quite sure there are a fair number of journalists and others in the BBC who are quite aware of the problems – that trust will be gone. The BBC needs to understand how it appears to others.

The dramatic cartoon in the Dutch newspaper Volkskrant, showing Boris Johnson raping Britain whilst Nigel Farage and Jacob Rees-Mogg et al hold her down, has the BBC pushing away the crowd saying the Dutch equivalent of  ‘move along, nothing to see here’. This should really give the BBC pause for thought. What role are they taking? How do they want to be remembered? When the rest of the world can see it but the BBC themselves can’t, things have got very bad. This may be the BBC’s last chance. I hope it takes it.

Tories, Twitter and Fake News

The furore surrounding the Conservative Party’s ‘rebranding’ of its press office Twitter account as ‘FactcheckUK’ during the leadership debate has been quite spectacular.

The BBC’s Emily Maitlis called it ‘dystopian’ on Newsnight, and the reaction on Twitter itself was, as many things on Twitter are, a mixture of outrage, anger, defensiveness and humour. And yet the full impact and the real importance of this seemingly small piece of deception do not seem to have been properly appreciated by many – at least in part because they need to be considered in the context of the much misunderstood phenomenon of ‘fake news’. It is not just that the Tories were contributing to fake news – using some well known techniques – but that their activities directly undermined some of the few effective tools that exist to combat (or at least reduce the impact of) fake news.

Fake news is very difficult to fight. It is not a new phenomenon – its history can be traced back pretty much as far as human history. Classic examples include its use to demonise Vlad the Impaler in 15th Century Wallachia: its use is one of the reasons he became a byword for brutality and the basis of the myth of Dracula. What is different now, and what makes it more of a problem in the current environment is the way that social media works – the speed and sharing networks of Facebook and Twitter, the gameable curation algorithms of YouTube, the ease with which content can be created and tailored all contribute to something which can have a huge effect, particularly in times of political turbulence.

The question of how to deal with this has been wrestled with by lawyers, academics, tech companies, governments and more, and many suggestions have been made and many ‘solutions’ suggested – including, importantly, the use of law to clamp down on fake news, tech to detect fake news and ‘rules’ applied and enforced by the social media companies. The UK government introduced its ‘Online Harms White Paper’ earlier in the year, and one of the key harms it aimed to deal with was misinformation… so one of the first reactions to seeing the UK’s governing party engaging in fakery should be to question their suitability to govern the regulation of fake news.

This isn’t just a particular problem for the Tory Party in the UK. All over the world governments are wringing their hands about fake news, bringing in often harsh and censorious laws and worse – whilst using fake news themselves. Fake news, from a government perspective – and certainly from the Tory Party perspective – is only a bad thing when other people engage in it.

Most of the ideas and tools suggested to ‘deal with’ fake news are unlikely to be effective, can be gamed or sidestepped, or have significant and damaging side effects. All, however, do rely on one key factor: if we are to have any chance of dealing with fake news and other forms of misinformation we need to have some kind of ‘anchor points’ of reality to judge the fakery against. The Tory Party’s little deception yesterday directly undermined two of the main ways that those anchor points are established.

The first of these is the verified account – the ‘blue tick’. This is not, as some seem to think, a badge of honour, or a status symbol, but is intended to be a way that you can be sure that the account is what it says it is. For someone with a verified account to be misleading as to what they are is to directly undermine this – and when CCHQ ‘relabelled’ itself as a seemingly neutral ‘fact checker’ it was being directly misleading, and in a way specifically forbidden by Twitter in the terms and conditions for a verified account.

The second is the existence of fact checkers themselves – they’re intended to provide those anchor points, to measure claims against reality. By creating a fake fact check account, and then by using it to do fake fact checks, spreading propaganda, they were not only being misleading but undermining the whole concept of fact checking, damaging another of the key ways in which people have a chance to work out what is true.

Dominic Raab tried to suggest this didn’t matter, because anyone looking at the account would see from the details that it was still ‘@CCHQPress’, so know it wasn’t really a fact checker. “No one who looks at it for more than a moment will have been fooled,” he said, missing the key point that one of the main techniques of fake news is to create things precisely for those who only glance at them for a moment, who catch them in passing. Empirical evidence shows that even one impression of a headline (or a tweet) can have an effect and make a story more likely to be believed. That’s how Twitter, in particular, very often works. The ‘rebranding’ included a simple, bright colour and a large tick mark, just like those of real fact checkers. The immediate impression for those glancing for a moment was that of a fact checker – and what other reason did they have for doing the ‘rebrand’?

Some of the other defences of the approach from the Tories have attempted to suggest that this is all normal, that no-one outside the Westminster Bubble or the media geeks would be interested. Others have feigned naïveté, as though this isn’t any kind of ‘trick’ or deception, or acted as though they don’t understand why people are upset about it. To believe these ‘explanations’ is the real naïveté: the strategists involved in the Tory campaign may not be geniuses, but they do have a more than working knowledge of how social media works. Boris Johnson has surrounded himself with the people who worked for the ‘leave’ campaign that used social media as central to their strategy, who used profiling and targeted ads and all kinds of other related practices, and used them very effectively during the Brexit campaign. This is their area.

The response from Twitter when alerted to this breach of their rules was fast – but very disappointing. CCHQ Press had to revert to their real name, and were told not to do it again or they would be punished properly. A slap on the wrist at best, and a chance to laugh it off – as they’ve been trying to do ever since. A much more appropriate punishment – and one available to Twitter under their rules – would have been to take away their verified status for a period. Until the General Election, perhaps? A verified status matters, and removing it would make the point that it is both a privilege and a recognition of ‘truth’. CCHQ broke the rules, undermined the concept of a verified account, and damaged the integrity of the system. They directly opposed the truth. Taking away that verified status would make that point – and without any form of ‘censorship’. They can still tweet, but their tweets would not carry the authority that they did. That would seem entirely appropriate.

Without it, it is easy for the Tories to continue the tactics – indeed, when asked, Michael Gove doubled down on the approach, saying it was the right thing to do, and that they were the ones who were working for the truth. Again, this is a classic tactic of misinformation – and one familiar from all the many years that those in power have engaged in propaganda – to accuse your enemies of the things you are guilty of, shifting the blame and muddying the waters at the same time. That muddying of the waters, the blurring of the issues, is all about making truth harder to find, and creating a kind of exhaustion amongst those who seek to find it. Given the knowledge and understanding that Boris Johnson’s team have of social media, we can expect more and different examples of the use of social media ‘dark arts’ in the rest of the campaign.

We need to be ready for this, and in particular to be ready to counter it, to alert people to it, and to fight. Misinformation is hard to fight, but even harder to fight if we take away those few tools we have. If we don’t fight to keep those – and verified accounts and relatively reliable fact-checkers are two of those tools – we will lose the bigger fights for truth and for an even slightly functional democracy. Right now, it looks as though that’s exactly what’s happening.

UPDATE: Since I wrote this post, which included a warning that the Tories would engage in ‘more and different examples of the use of social media ‘dark arts” they have provided an excellent example – their fake/spoof Labour manifesto, which they’ve linked via paid advertisements to searches on Google for ‘labour’. Not only have they done this, but they’ve done what in the past we would have called ‘cybersquatting’ – registering a domain name that looks as though it’s ‘official’ with a deliberate attempt to mislead. In this case, it’s… looks real, makes no mention of the fact that it’s run by the Tories…. Yup, we’re in for a lot of ‘games’ this election…

Note that whilst the manifesto itself says that it’s by the Tories, the domain name doesn’t, and the advertisement and headline that appeared in the Google search results didn’t in their original form – and that’s all you would see if you didn’t click on it! This has now been corrected – seemingly it was an error on Google’s part.

GEEK POINT: This isn’t immediately illegal, though IT law people might well suggest that it’s an ‘abusive registration’ of a domain name, and that if Labour applied to Nominet to take over the domain, they might well be successful. By that time, of course, the damage would have been done….

Response to Online Harms White Paper

My submission to the Online Harms White Paper consultation is set out below. This has been one of the hardest government consultations for me to respond to. In part this is because the White Paper covers so much ground that there is far more to say than can fit into a reasonably sized response – one that stands a chance of being read properly – but in part it is because the consultation looks very much as though it has already assumed the main answers. The questions as set out in the consultation are very much at the detail level, about how to do what they’ve already decided to do, although a great deal of what they’ve decided to do is at best questionable, at worst extremely likely to be not just ineffective but actually counterproductive, as well as restricting crucial internet freedom for many of the people who need it the most.

That means my response is somewhat ‘bitty’, covering only a few select areas as well as giving general comments. Fortunately there are some other really excellent responses out there.

Response to the Online Harms White Paper consultation

I am making this submission in my capacity as Senior Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research into internet law and specialise in internet regulation from both a theoretical and a practical perspective. My first book, Internet Privacy Rights – Rights to Protect Autonomy, was published by Cambridge University Press in 2014. My second book, The Internet, Warts and All: Free Speech, Privacy and Truth, was also published by Cambridge University Press in August 2018, and has the question of regulation of the Internet as one of its central themes. ‘Online harms’, as set out in the White Paper, are central to that book, from the chapters covering freedom of speech and fake news to the chapter on the nature and practice of trolling. There are direct recommendations about regulation of all of this contained within that book.

I have previously responded to a series of government consultations in this and related fields, including the House of Lords Internet Regulation Inquiry and the DCMS Fake News Inquiry in 2018, and was involved in the Law Commission Abusive and Offensive Online Communications Project that same year. This area falls squarely within my field of expertise and I have written extensively about it in forms other than the two academic books already mentioned. I would be happy to contribute further if that would be of assistance.

Introduction to this submission

Whilst the problem of online harms is a significant one, there are significant dangers associated with inappropriate and excessive regulation. As well as potentially putting freedom of speech, freedom of association and assembly and other human rights at risk, many of the methods suggested could end up being counterproductive, actually causing more harm than they address. They can encourage countermeasures that mean that the real ‘villains’ avoid being held to account, they can create tools that are used by what might loosely be called ‘trolls’ against their victims, and they can produce more arbitrary punishments that make it harder to respect the laws and those attempting to enforce them.

It is important not to be persuaded by inaccurate characterisations of the internet as a ‘wild west’ that is ungoverned and needs ‘reining in’. For the vast majority of people, the vast majority of the time, the internet is a place that provides great benefits and an essentially safe and secure environment to socialise, do business, find information and much more. Moreover, the internet is already regulated by a wide range of laws, from those governing speech (such as S127 of the Communications Act 2003 and the Malicious Communications Act 1988) and public order law to data protection, copyright, fraud and ‘revenge porn’, as well as civil law such as defamation law, misuse of private information and much more. Regulators such as Ofcom and the Information Commissioner’s Office already have extensive powers to operate online. This is not in any real sense an unregulated area – indeed, in many ways, speech online is subject to tighter control and more regulation than speech ‘offline’.

Further, the nature of the online world means that rather than being a place where anonymity provides excessive ‘protection’, it is an environment where records are more precise, more persistent and more easily analysed than in the ‘offline’ world, often making people more accountable for their speech than in the past. There are technological and legal mechanisms to both locate and potentially prosecute perpetrators of online harms – and many have indeed been prosecuted in ways that might well have been seen as disproportionate if their speech had been offline rather than on the internet.

All this means that much more care needs to be taken about how – and indeed whether – to regulate speech any more harshly than it is currently regulated. There are some specific areas where it might be appropriate, but a heavy-handed approach to the regulation of online speech, though politically attractive, will almost certainly cause much more harm than good. Moreover, it can create a sense of complacency about dealing with much more important problems in the online environment, as well as providing inappropriate reassurance that distracts from the critical need to encourage people to be self-supportive and ‘savvy’ online, which is much more important than any regulator could be.

This is perhaps the most important point, and it is good to see that a section of the White Paper is devoted to awareness and in particular to empowering users. This should be emphasised in all communications – and the idea that we can somehow create an internet that is completely ‘safe’ should not be promoted so positively. A ‘safe’ internet can become a sterile internet, losing the creativity and dynamism that is the lifeblood of the environment. We should neither overplay the dangers – as the portrayal of the internet as a lawless ‘wild west’ suggests – nor exaggerate the capability to remove those dangers entirely, as the idea of making the internet ‘safe’ implies.

Similarly, if the government does try to regulate along the lines set out in the White Paper, it is important not to expect too much. This kind of regulation is highly unlikely to have a significant impact on the level of ‘online harms’ that are encountered. The risks associated with this kind of regulation, as well as the significant costs involved in setting it up, make it hard to justify pursuing it as it stands.

1          Focussing on illegal content and activity

1.1       That the White Paper starts by referring to illegal ‘and unacceptable’ content and activity should be a concern from the start. If something is really ‘unacceptable’, it should be made illegal – and unless it is illegal, it should not be deemed unacceptable. If acceptability can be determined by policy or politics rather than law, the scope for abuse, uncertainty and bias is enormous. Setting what amounts to a ‘moral’ or ‘ethical’ view of acceptability is a very slippery slope.

1.2       Further, setting one set of standards of ‘acceptability’ for the whole of the internet is not only doomed to failure but likely to destroy some online communities that are in most ways positive and supportive for people who spend time in them – something that should be strenuously avoided. One of the key strengths of the internet is that it allows space for the existence of very different communities and very different platforms – this has been true since the beginning of the ‘social’ internet in particular. Imposing a set of standards ‘from above’ that do not meet either the needs or the expectations of those communities is not only unlikely to succeed in any meaningful way but is likely to cause anger and resentment.

1.3       Where content and behaviour is illegal, the law should apply across all platforms and communities. Deciding ‘acceptability’ should be left to the platforms and communities themselves. This way the different platforms and communities can develop in ways that suit them. Encouraging a diversity of platforms and communities has the additional potential benefit of dispersing the power currently wielded by the internet giants – and reducing vulnerability to things like fake news and political manipulation, as part of the reason for the effectiveness of both has been the concentration of data and audience on particular platforms, Facebook and YouTube in particular.[1]

2          Online harms

2.1       The online harms discussed in the White Paper need to be considered in the light of this. The first main types discussed in the White Paper fit clearly into the illegal category: CSEA, terrorist content, content uploaded from prisons, the sale of illegal opioids. Law already exists to address all of these, and to a significant degree this law is already effective, insofar as it can be effective given the nature of the problem. A new online regulator, following the lines discussed in the White Paper, is unlikely to have a significant effect on any of these areas – more resources for law enforcement, for prisons (to take more control over the supply of mobile phones, for example) and so forth are much more likely to be effective.

2.2       The other harms discussed, ‘[b]eyond illegal activity’, from section 1.15 of the paper on, are another matter. Cyber bullying, misogyny and other forms of online abuse can cross the threshold into illegality, and many have been successfully prosecuted (e.g. under the Malicious Communications Act 1988 and S127 of the Communications Act 2003). This does not mean that further law or regulation is required, but that more consistency, better training and clarity from those enforcing the law, and more resources for them, could improve matters, particularly where the application of these laws has appeared arbitrary and out of touch. The notorious ‘Twitter Joke Trial’ of Paul Chambers in 2012, which eventually saw the conviction quashed after a series of appeals, left those enforcing the law looking more than foolish. This was not a result of too little law or too little regulation but of authorities that did not understand the online world.

3          Anonymity online

3.1       The White Paper notes that ‘tackling online anonymous abuse’ is a key concern. This has been a subject of discussion for those studying the internet for many years – and it is important to raise a strong, cautionary note against the idea that requiring ‘real names’ would be an effective tool against online abuse. In practice, there is little evidence to suggest that it might be, and significant evidence that it would not – and that it would put vulnerable people in particular situations at risk.[2]

3.2       It may seem counterintuitive, but empirical evidence has shown that ‘trolls’ required to use their real names online actually become more rather than less aggressive. Trolls often ‘waive their anonymity’ online, becoming even more aggressive when posting with their real names.[3] As I note in my 2018 book, The Internet, Warts and All, it may be that having a real name displayed emboldens trolls, adding credibility and kudos to their trolling activities. It may also be that they feel they have less to lose and less to protect when their names are revealed – or that it creates a ‘badge of honour’. Whatever the reason, the evidence does not suggest that requiring real names deters trolls or trolling.

3.3       Further, forcing people to use real names puts some people at risk – from whistle-blowers to victims of spousal abuse, to people with religious or ethnically identifying names, and many more groups. It also makes the victims of online abuse more vulnerable, as their attackers can learn more about them and use that to abuse or threaten them – finding out their personal details and using them against them, threatening to report them or tell lies about them to their families and friends, employers and so forth. The classic troll tactic of ‘doxxing’ – releasing documents about a victim – is made much easier by a real names policy.

3.4       There are already legal and technical methods for revealing who is behind an anonymous account – anonymity online is never more than a basic protection – which can and should be used when required. There are also platforms where real names are already required – Facebook for one – but there is little evidence that they provide more protection from abuse. What could help, as noted above, is a greater diversity of platforms and communities online, so that people can find places that are safer for them online. The rise of group-based private social media systems like WhatsApp may be in part a response to this problem: groups kept private and secure are less open to external abusers.

4          Young people online

4.1       The note in the paper that most children have a positive experience online is very welcome: it is really important not to portray the online world as somewhere fundamentally dangerous for children and young people. An overly protective approach to children online would reflect a mischaracterisation and misunderstanding of how the internet works for children, and any regulation that restricts rather than supports children online should be avoided.

4.2       It is critical in understanding this not to put too much emphasis on the worries of parents about their children’s online activities, particularly when those worries are actively encouraged by the ways that parents are questioned about them. Such worries can be a reflection of the way that parents misunderstand what their children are doing, and feel out of touch. A greater emphasis on educating parents so that they worry less would be very welcome.

4.3       Recent studies also show that concerns about the impact of ‘screen time’ on adolescents’ mental health are likely to be unfounded.[4] This fits into a common pattern of misplaced fears and concerns based on misunderstanding of both technology and the lives of young people. It is important not to overreact to ideas and fears spread through ignorance. An overly onerous regulatory approach towards young people online should be avoided. This is not to underplay the importance of dealing with key issues such as self-harm and suicide, sexting and revenge porn, but to place them in context. It is also important to understand the causality here: where there are correlations between online activity and self-harm, for example, the correlation does not establish which causes which.

4.4       An area where the government is already attempting to regulate in relation to children – age verification for access to pornographic and other ‘adult’ content – is another example where regulation is highly unlikely to be effective, and a prime example of another classic failure of regulation, the failure to listen to experts. Almost everyone in the technology industry has advised against the path that the government has taken: it won’t in practice help protect children from harm, will encourage complacency, has already encouraged countermeasures both technical (including the rise in usage of VPNs) and tactical (using privacy groups and so forth), and does not address the real issue of harm. Moreover, it is likely to be very expensive and technologically almost impossible to make work well. It was and remains a regulatory trap – the government should do its best not to fall into similar regulatory traps in other areas. Caution, care, and a willingness to listen to experts even when they go against what might seem ‘obvious’, are very much needed in the area of internet regulation.

4.5       One area where regulation in relation to children could, however, be useful, is privacy – in common with other areas mentioned in this submission, privacy underpins protection in other ways. A requirement for real names, as noted above, would be likely to harm rather than help children at risk of cyber bullying and other online abuse. The ability for children to protect their privacy is critical – and restricting the gathering of data about children by social media platforms, advertisers and so forth should be encouraged.

5          Privacy, fake news and political manipulation

5.1       That leads to the more general point about privacy and personal data: the gathering and use of personal information underpins many of the worst problems on the internet at present. Privacy invasion and profiling lies behind the current manifestation of the fake news phenomenon and the broader issue of political manipulation (as graphically illustrated by the Cambridge Analytica saga) discussed in the White Paper, as well as providing tools for scammers and other criminals, creating vulnerabilities that can be exploited and much more.

5.2       Indeed, rather than focus on the symptoms of fake news and related harms, as the White Paper seems to do in paragraphs 7.25 onwards, focus should be placed on privacy, on data gathering, profiling and targeting. It is these techniques (again, as graphically illustrated by the Cambridge Analytica saga) that make misinformation and political manipulation so particularly effective in the current internet. The White Paper notes that it will be looking at advertising online – but does not make the connection between the techniques used by online advertisers and those used by people spreading fake news and misinformation. They are, in practice, the same methods, the same techniques (data analysis, profiling and targeting), and whilst these are seen as essentially harmless, normal business practices, any attempts to ‘deal with’ fake news, political manipulation and electoral interference are bound to fail. ‘Fact checking’ and labelling of fake news or unreliable sources has been empirically demonstrated to be counterproductive, making people more likely to believe the fake news – one of the reasons Facebook abandoned the practice in 2017.[5] Making this kind of labelling part of any ‘duty of care’ would be directly counterproductive to combatting this kind of online harm.

5.3       Privacy and personal data is also an area where extensive law already exists. Data protection law, and in particular the new General Data Protection Regulation, has the potential to provide a good deal of support for individual privacy – but only if it is enforced with sufficient rigour and support. The Information Commissioner’s Office (‘ICO’) needs to be given more resources both in terms of finance and expertise, and perhaps more responsibilities.

6          The role of a regulator

6.1       As noted in various sections above, there are many areas discussed in the White Paper for which either regulation already exists or further regulation is likely to be counterproductive. The idea of imposing a ‘duty of care’ on internet platforms for some of the subjects discussed in the White Paper should therefore be viewed with great caution. There are further areas which internet platforms are already working extensively to address, and where the question of whether a regulator is really needed should be asked. These include the online abuse of public figures – much of what is suggested is already being done, particularly by Facebook and Twitter, and it is easy to fall into the trap of saying ‘it’s all the fault of the social media companies’ when there is a much bigger underlying issue at a societal level. The online abuse of public figures is closely connected with racism and misogyny – female and ethnic minority public figures are subjected to more, and more virulent, abuse than others – and whilst these are still tightly embedded in our society, blaming the social media companies for the existence of such abuse can easily become a form of deflection or avoidance.

6.2       Codes of practice could be welcome in these areas, but as noted above, imposing one set of standards on all (or most) platforms is likely to be ineffective and to have significant side effects. Enforcing that code of practice is likely to be difficult and hard to make consistent, fair or appropriate.

6.3       As noted above, privacy is of critical importance, and yet some of the suggestions for the ‘duty of care’ involve actually invading or weakening privacy for precisely the people who need it the most. In 7.35, for example, it is suggested that ‘vulnerable users and users who actively search for…’ certain content should be monitored – how is this to be done without extensive invasions of privacy, and how are those invasions of privacy to be done in ways that do not put the specifically vulnerable users at further risk? Again, the likelihood that this kind of monitoring will encourage people to take countermeasures and to develop tools and techniques to avoid it should not be underestimated. Much of this kind of content will be driven to areas where it is less easy to provide support and help for people who really need it.

6.4       These are just some of the examples that indicate quite how difficult effective regulation of this kind is likely to be. It is vital that this regulatory exercise be understood to be highly challenging, very likely to be ineffective, and extremely expensive. Expectations as to its effectiveness, in particular, should be kept in check, as should the potential damage to internet freedom at precisely the time when it is most needed.

7          Internet freedom

7.1       It is easy to blame the internet for problems that have other causes, and easy to see it as something that needs to be ‘reined in’ or controlled. As noted in various sections above, this is a mischaracterisation of the current situation: for the vast majority of people, the vast majority of the time, the internet is something immensely positive, productive and supportive, providing ordinary people with forms of communication and access to information that were previously the province only of the extremely privileged. Part of the reason for this huge positive impact is the amount of freedom that we currently have – and that freedom underpins many of our human rights, from freedom of expression to assembly and association, both online and off, freedom from discrimination, the right to a fair trial and more.

7.2       This freedom is something that should not be lightly sacrificed, particularly on the basis of myths and misunderstandings, or from an intention to assuage particular sections of the media. Almost all of the measures suggested in the White Paper have an impact on both freedom of speech and access to information, and many have a significant impact on privacy and the other vital human rights already mentioned. That is not to say that they should not be considered, but that those impacts need to be considered very seriously, and regulation not undertaken lightly. Excessive regulation can end up arbitrary and unfair, it can exacerbate existing problems, it can be gamed by people to the detriment of their enemies – and internet trolls and others wishing harm can be experts in such gaming, using tools created to protect people to actively harm them.

7.3       It should also be borne in mind that tools created now, under authorities that we deem to be benign, can be used by successor authorities that are less benign – we need to learn the lessons of history about this, and avoid setting up systems that can end up being used to oppress rather than protect. This is another key reason for caution about regulating too harshly.

8          Responses to specific questions in the consultation

This response has focussed on the overall effect of the White Paper, and on some particular areas where problems might arise, rather than on the specific consultation questions. Some of the questions are beyond the scope of this response but some do warrant a specific answer. In particular:

Q1:       The first and most important thing that the government should do is demonstrate more transparency, trust and accountability itself. The government should lead by example – and a code of conduct for ministers in relation to things like misinformation would be a good start. In practice, ministers not only spread misinformation themselves but contribute to an environment in which information is not trusted. Proper accountability should begin with the government.

Q4        Any regulator needs to be fully accountable to Parliament, through a parliamentary committee, rather than through the DCMS itself. It should be responsible to Parliament rather than to the government, particularly as it needs at times to hold the government to account (see response to Q1).

Q5        As noted throughout this submission, great care needs to be taken to avoid excessive regulation.

Q6-7     These are crucial questions, but I am afraid they betray a misunderstanding of the nature of privacy, something discussed in depth in Chapter 6 of my book The Internet, Warts and All. Privacy is not ‘two-valued’, with some communications being private and others public. It is much more nuanced than that, and sometimes ‘public’ forums include extremely private conversations and communications. The infamous Samaritans Radar failed precisely because it misunderstood this – and the ICO confirmed at the time that private and personal information can exist on ‘public’ social media platforms.[6] Much more care and thought is needed here, rather than assuming that private and public can be easily separated. Moreover, if the criteria for what counts as private become known, they can (a) drive people to more private forms of communication that mean they are less easily helped and (b) create an opportunity for ‘gaming’ the regulations.

Q8        As noted throughout this submission, this is the big question for the whole plan. Much more time and thought is needed to avoid the regulation being both heavy-handed and ineffective.

Q10      The bigger question is whether the regulator should exist at all in the form proposed. The government should be asking that bigger question before looking at the precise legal form. If a regulator is definitely decided upon, a new public body would seem more appropriate than an existing one: the ICO has too much to do already, broadcast and related areas are too dissimilar for Ofcom to have much chance of succeeding, the BBFC is struggling over the contentious issue of age verification.

Q11      Making a regulator ‘cost neutral’ is laudable but brings with it the risk of even more potent lobbying than already exists – and the lobbies of Google, Facebook et al are already remarkably powerful. Whatever funding mechanism is determined needs to be clear, simple and not gameable – and that is very difficult given the expertise of those likely to be required to pay.

Q12      i) Unless any regulator has the power to disrupt business activities it is unlikely to have any impact at all. ii) ISP blocking already exists in relation to copyright, CSEA (via the IWF) and other areas. Extending it to new areas should be very much resisted, as the impact on freedom of expression is direct and significant – though given that it already exists for those areas, there is little logical reason why it could not be extended. iii) Senior management liability, though attractive, is unlikely to be sustainable.

Q13      Under terms similar to the GDPR.

Q14      Yes, but the details would depend very much on precisely how the regulations are set out.

Q15      The risks associated with Brexit, and the excessive nature of our surveillance regime – in particular things like demands for backdoors to encryption – are the biggest barriers to innovation in the UK technology industry. Both are understandably beyond the remit of this consultation, but those involved should be aware how damaging both are to the technology industry.

Q17-18  See section 4 above. Children need empowerment more than protection, and parents need to learn more than the children do. The regulator could play an informative role – but that role is very limited, and expectations of its success should not be set too high.

I hope this response is helpful. If you need any further information, or links to the research that underpins any of the answers, please let me know.


Dr Paul Bernal

Senior Lecturer in Information Technology, Intellectual Property and Media Law

UEA Law School

University of East Anglia

Norwich NR4 7TJ


[1] See my article in the Northern Ireland Legal Quarterly in December 2018, Fakebook: why Facebook makes the fake news problem inevitable, online at

[2] This area is covered in depth in Chapter 8 of my book The Internet, Warts and All: Free Speech, Privacy and Truth, published by Cambridge University Press, 2018.

[3] Most notably the 2016 study from the University of Zurich, reported in Rost, Stahel and Frey, Digital Social Norm Enforcement: Online Firestorms in Social Media, PLoS ONE 11 (6)

[4] See Orben and Przybylski, Screens, Teens, and Psychological Well-Being: Evidence From Three Time-Use-Diary Studies, 2019


[6] The Samaritans Radar story is the central case study of Chapter 6 of The Internet, Warts and All. It involved analysing social media postings in order to identify when vulnerable people might be contemplating suicide, and failed within ten days of its launch as its privacy invasions were found to be deeply intrusive to exactly the online community it intended to support, and seen as putting them at intense risk.