Our Dom

A tavern in the shadow of a castle, somewhere in France, or perhaps County Durham. Dom sits in a large chair, looking a little morose. In comes a young, northern lad, a salt of the earth type, who looks over at Dom and stops.

Darren (for it is he): Why are you looking so sad, Dom? What’s wrong?

Dom looks up, but barely registers Darren’s existence. Darren is unfazed, and comes up to Dom and tries to cheer him up with a smile. In the background, a brass band (good Northern stuff) starts up, in a tune recognisable as coming from Disney’s Beauty and the Beast.

Darren starts, in a sing-song voice

Gosh, it disturbs me to see you Our Dom
Looking so down in the dumps
Every guy here’d love to be you, Our Dom
Even when taking your lumps
You’re Boris’s trusted adviser
You’re Laura K’s favourite source
I’ve never met anyone wiser
There’s a reason for that: it’s because…

the band strikes up a jaunty tune…

No… one… lies like our Dom
Fakes his cries like our Dom
Cannot tell the truth if he tries like our Dom

His lying can never be bested
From London to Durham and more
To drive so his eyesight is tested?
I laughed so much my ribs were sore…

No… one… cheats like our Dom
Does deceit like our Dom
Turns his enemies white as a sheet like our Dom

When it comes down to rewriting history
There's no folk who can quite compare
Why people believe it's a mystery
But it drives his foes' hearts to despair…

No… one… takes like our Dom
On the make like our Dom
Makes his news quite so perfectly fake like our Dom

His lies they are brash, they are brazen
But the media just doesn’t care
He crafts lies for ev’ry occasion
And his army of trolls and of bots can then share…

No… one… drives like our Dom
Coaches wives like our Dom
Cares nothing for old people’s lives like our Dom

He can break any law with impunity
In elections and lockdown who cares?
His denials of planned herd immunity
Are about as convincing as Donald Trump’s hair…

No… one… sneers like our Dom
Stokes up fears like our Dom
No… one… lies like our Dom
Porkie pies like our Dom


Darren sits down, exhausted. Dom just ignores him, but a secret smile just touches his eyes…

With apologies to anyone even slightly associated with Disney.

Contact tracing, privacy, magical thinking – and trust!

The saga of the UK’s contact tracing app has barely begun but already it is fraught with problems. Technical problems – the app barely works on iPhones, for example, and communication between iPhones requires someone with an Android phone to be in close proximity – are just the start of it. Legal problems are another issue – the app looks likely to stretch data protection law at the very least. Then there are practical problems – will the app record you as having contact with people from whom you are blocked by a wall, for example – and the huge issue of getting enough people to download it when many don’t have smartphones, many won’t be savvy enough to get it going, and many more, it seems likely, won’t trust the app enough to use it.

That’s not even to go into the bigger problems with the app. First of all, it seems unlikely to do what people want it to do – though even what is wanted is unclear, a problem which I will get back to. Secondly, it rides roughshod over privacy in not just a legal but a practical way, and despite what many might suggest people do care about privacy enough to make decisions on its basis.

This piece is not about the technical details of the app – there are people far more technologically adept than me who have already written extensively and well about this – and nor is it about the legal details, which have also been covered extensively and well by some real experts (see the Hawktawk blog on data protection, and the opinion of Matthew Ryder QC, Edward Craven, Gayatri Sarathy & Ravi Naik for example) but rather about the underlying problems that have beset this project from the start: misunderstanding privacy, magical thinking, and failure to grasp the nature of trust.

These three issues together mean that right now, the project is likely to fail, do damage, and distract from genuine ways to help deal with the coronavirus crisis, and the best thing people should do is not download or use the app, so that the authorities are forced into a rethink and into a better way forward. It would be far from the first time during this crisis that the government has had to be nudged in a positive direction.

Misunderstanding Privacy – Part 1

Although people often underplay it – particularly in relation to other people – privacy is important to everyone. MPs, for example, will fiercely guard their own privacy whilst passing the most intrusive of surveillance laws. Journalists will fight to protect the privacy of their sources even whilst invading the privacy of the subjects of their investigations. Undercover police officers will resist even legal challenges to reveal their identities after investigations go wrong.

This is for one simple reason: privacy matters to people when things are important.

That is particularly relevant here, because the contact tracing app hits at three of the most important parts of our privacy: our health, our location, and our social interactions. Health and location data, as I detail in my most recent book, what do we know and what should we do about internet privacy, are two of the key areas of the current data world, in part because we care a lot about them and in part because they can be immensely valuable in both positive and negative ways. We care about them because they’re intensely personal and private – but that’s also why they can be valuable to those who wish to exploit or harm us. Health data, for example, can be used to discriminate – something the contact tracing app might well enable, as it could force people to self-isolate whilst others are free to move, or even act as an enabler for the ‘immunity passports’ that have been mooted but are fraught with even more problems than the contact tracing app.

Location data is another matter and something worthy of much more extensive discussion – but suffice it to say that there’s a reason we don’t like the idea of being watched and followed at all times, and that reason is real. If people know where you are or where you have been, they can learn a great deal about you – and know where you are not (if you’re not at home, you might be more vulnerable to burglars) as well as where you might be going. Authoritarian states can find dissidents. Abusive spouses can find their victims and so forth. More ‘benignly’, it can be used to advertise and sell local and relevant products – and in the aggregate can be used to ‘manage’ populations.

Relationship data – who you know, how well you know them, what you do with them and so forth – is in online terms one of the things that makes Facebook so successful and at the same time so intrusive. What a contact tracing system can do is translate that into the offline world. Indeed, that’s the essence of it: to gather data about who you come into contact with, or at least in proximity to, by getting your phone to communicate with all the phones close to you in the real world.
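The core mechanism being described can be sketched in a few lines of code. The following is a simplified, hypothetical illustration – not the NHS app's actual code or protocol – of how proximity-based contact logging typically works: each phone broadcasts a frequently rotating random identifier over Bluetooth and records the identifiers it hears from nearby phones, building up exactly the kind of offline relationship data discussed above.

```python
import os
import time

class Phone:
    """Toy model of proximity logging via rotating ephemeral IDs (illustrative only)."""

    def __init__(self):
        self.my_ids = []   # ephemeral IDs this phone has broadcast
        self.heard = []    # (ephemeral_id, timestamp) pairs observed from nearby phones

    def new_ephemeral_id(self):
        # A fresh random identifier, rotated regularly so that passive
        # observers cannot link one broadcast to the next over time.
        eid = os.urandom(16).hex()
        self.my_ids.append(eid)
        return eid

    def hear(self, eid):
        # Record an identifier broadcast by a phone in close proximity.
        self.heard.append((eid, time.time()))

# Two phones pass each other in the street: each hears the other's broadcast.
alice, bob = Phone(), Phone()
bob.hear(alice.new_ephemeral_id())
alice.hear(bob.new_ephemeral_id())
```

Even in this toy form, the sensitivity is visible: the `heard` list is a timestamped log of encounters – precisely the who-met-whom-and-when data the Stasi example describes, gathered automatically.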

This is something we do and should care about, and could and should be protective over. Whilst it makes sense in relation to protecting against the spread of an infection, the potential for misuse of this kind of data is perhaps even greater than that of health and location data. Authoritarian states know this – it’s been standard practice for spies for centuries. The Stasi’s files were full of details of who had met whom and when, and for how long – this is precisely the kind of data that a contact tracing system has the potential to gather. This is also why we should be hugely wary of establishing systems that enable it to be done easily, remotely and at scale. This isn’t just privacy as some kind of luxury – this is real concern about things that are done in the real world and have been for many, many years, just not with the speed, efficiency and cheapness of installing an app on people’s phones.

Some of this people ‘instinctively’ know – they feel that the intrusions on their privacy are ‘creepy’ – and hence resist. Businesses and government often underestimate how much they care and how much they resist – and how able they are to resist. In my work I have seen this again and again. Perhaps the most relevant here was the dramatic nine day failure that was the Samaritans Radar app, which scanned people’s tweets to detect whether they might be feeling vulnerable and even suicidal, but didn’t understand that even this scanning would be seen as intrusive by the very people it was supposed to protect. They rebelled, and the app was abandoned almost immediately it had started. The NHS’s own ‘care.data’ scheme, far bigger and grander, collapsed for similar reasons – it wanted to suck up data from GP practices into a great big central database, but didn’t get either the legal or the practical consent from enough people to make it work. Resistance was not futile – it was effective.

This resistance seems likely in relation to the contact tracing app too – not least because the resistance grows spectacularly when there is little trust in the people behind a project. And, as we shall see, the government has done almost everything in its power to make people distrust their project.

Magical thinking

The second part of the problem is what can loosely be called ‘magical thinking’. This is another thing that is all too common in what might loosely be called the ‘digital age’. Broadly speaking, it means treating technology as magical, and thinking that you can solve complex, nuanced and multifaceted problems with a wave of a technological wand. It is this kind of magic that Brexiters believed would ‘solve’ the Irish border problems (it won’t) and led anti-porn campaigners to think that ‘age verification’ systems online would stop kids (and often adults) from accessing porn (it won’t).

If you watched Matt Hancock launch the app at the daily Downing Street press conference, you could have seen how this works. He enthused about the app like a child with a new toy – and suggested that it was the key to solving all the problems. Even with the best will in the world, a contact tracing app could only be a very small part of a much bigger operation, and only make a small contribution to solving whatever problems they want it to solve (more of which later). Magical thinking, however, makes it the key, the silver bullet, the magic spell that needs just to be spoken to transform Cinderella into a beautiful princess. It will never be that, and the more it is thought of in those terms the less chance it has of working in any way at all. The magical thinking means that the real work that needs to go on is relegated to the background or eliminated altogether, replaced only by the magic of tech.

Here, the app seems to be designed to replace the need for a proper and painstaking testing regime. As it stands, it is based on self-reporting of symptoms, rather than testing. A person self-reports, and then the system alerts anyone who it thinks has been in contact with that person that they might be at risk. Regardless of the technological safeguards, that leaves the system at the mercy of hypochondriacs who will report the slightest cough or headache, thus alerting anyone they’ve been close to, or malicious self-reporters who either just want to cause mischief (scare your friends for a laugh) or who actually want to cause damage – go into a shop run by a rival, then later self-report and get all the workers in the shop worried into self-isolation.

These are just a couple of the possibilities. There are more. Stoics, who have symptoms but don’t take it seriously and don’t report – or people afraid to report because it might get them into trouble with work or friends. Others who don’t even recognise the symptoms. Asymptomatic people who can go around freely infecting people and not get triggered on the system at all. The magical thinking that suggests the app can do everything doesn’t take human nature into account – let alone malicious actors. History shows that whenever a technological system is developed the people who wish to find and exploit flaws in it – or different ways to use it – are ready to take advantage.

Magical thinking also means not thinking anything will go wrong – whether it be the malicious actors already mentioned or some kind of technical flaw that has not been anticipated. It also means assuming that all these problems must be soluble by a little bit of techy cleverness, because the techies are so clever. Of course they are clever – but there are many problems that tech alone can't solve.

The issue of trust

One of those is trust. Tech can’t make people trust you – indeed, many people are distinctly distrustful of technology. The NHS generates trust, and those behind the app may well be assuming that they can ride on the coattails of that trust – but that itself may be wishful thinking, because they have done almost none of the things that generate real trust – and the app depends hugely on trust, because without it people won’t download and won’t use the app.

How can they generate that trust? The first point, and perhaps the hardest, is to be trustworthy. The NHS generates trust but politicians do the opposite. These particular politicians have been demonstrably and dramatically untrustworthy, noted for their lies – Boris Johnson having been sacked from more than one job for having lied. Further, their tech people have a particularly dishonourable record – Dominic Cummings is hardly seen as a paragon of virtue even by his own side, whilst the social media manipulative tactics of the leave campaign were remarkable for their effectiveness and their dishonesty.

In those circumstances, that means you have to work hard to generate trust. There are a few keys here. The first is to distance yourself from the least trustworthy people – the vote leave campaigners should not have been let near this with a barge pole, for example. The second is to follow systems and procedures in an exemplary way, building in checks and balances at all times, and being as transparent as possible.

Here, they’ve done the opposite. It has been almost impossible to find out what was going on until the programme was actually already in pilot stage. Parliament – through its committee system – was not given oversight until the pilot was already under way, and the report of the Human Rights Committee was deeply critical. There appears to have been no Data Protection Impact Assessment done in advance of the pilot – which is almost certainly in breach of the GDPR.

Further, it is still not really clear what the purpose of the project is – and this is also something crucial for the generation of trust. We need to know precisely what the aims are – and how they will be measured, so that it is possible to ascertain whether it is a success or not. We need to know the duration, what happens on completion – to the project, to the data gathered and to the data derived from the data gathered. We need to know how the project will deal with the many, many problems that have already been discussed – and we needed to know that before the project went into its pilot stage.

Being presented with a ‘fait accompli’ and being told to accept it is one way to reduce trust, not to gain it. All these processes need to take place whilst there is still a chance to change the project, and change it significantly – because all the signs are that a significant change will be needed. Currently it seems unlikely that the app will do anything very useful, and it will have significant and damaging side effects.

Misunderstanding Privacy – part 2

…which brings us back to privacy. One of the most common misunderstandings of privacy is the idea that it’s about hiding something away – hence the facetious and false ‘if you’ve got nothing to hide you’ve got nothing to fear’ argument that is made all the time. In practice, privacy is complex and nuanced and more about controlling – or at least influencing – what kind of information about you is made available to whom.

This last part is the key. Privacy is relational. You need privacy from someone or something else, and you need it in different ways. Privacy scholars are often asked ‘who do you worry about most, governments or corporations?’ Are you more worried about Facebook or GCHQ? It’s a bit of a false question – because you should be (and probably are) worried about them in different ways, just as you’re worried about privacy from your boss, your parents, your kids, your friends in different ways. You might tell your doctor the most intimate details about your health, but you probably wouldn’t tell your boss or a bloke you meet in the pub.

With the coronavirus contact tracing app, this is also the key. Who gets access to our data, who gets to know about our health, our location, our movements and our contacts? If we know this information is going to be kept properly confidential, we might be more willing to share it. Do we trust our doctors to keep it confidential? Probably. Would we trust the politicians to keep it confidential? Far less likely. How can we be sure who will get access to it?

Without getting into too much technical detail, this is where the key current argument is over the app. When people talk about a centralised system, they mean that the data (or rather some of the data) is uploaded to a central server when you report symptoms. A decentralised system does not do that – the data is only communicated between phones, and doesn’t get stored in a central database. This is much more privacy-friendly, but does not build up a big central database for later use and analysis.
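The practical difference between the two models can be sketched as follows. This is a hypothetical illustration under simplified assumptions, not the actual NHSX or Google/Apple implementation: in the centralised model the reporter uploads their contact log to a server, while in the decentralised model only the reporter's own broadcast identifiers are published, and each phone checks for matches locally.

```python
import os

def report_centralised(server_db, my_contact_log):
    # Centralised: the contact log itself is uploaded, so whoever runs
    # the server can reconstruct who met whom.
    server_db.extend(my_contact_log)

def report_decentralised(bulletin_board, my_broadcast_ids):
    # Decentralised: only the reporter's OWN ephemeral IDs are published;
    # the server never sees anyone's contact log.
    bulletin_board.extend(my_broadcast_ids)

def check_exposure(bulletin_board, my_contact_log):
    # Matching happens locally, on each user's phone.
    published = set(bulletin_board)
    return any(eid in published for eid in my_contact_log)

# Alice broadcast a random ID; Bob's phone heard it at close range.
alice_ids = [os.urandom(16).hex()]
bob_heard = list(alice_ids)

# Alice reports symptoms under the decentralised model:
board = []
report_decentralised(board, alice_ids)

# Bob's phone discovers the exposure without any server learning they met.
bob_at_risk = check_exposure(board, bob_heard)
```

The design trade-off is exactly as described: in the decentralised sketch no party ever holds the contact graph, which is what makes it privacy-friendly – and also why it produces no central database for later analysis.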

This is why privacy people much prefer the idea of a decentralised system – because, amongst other things, it keeps the data out of the hands of people that we cannot and should not trust. Out of the hands of the people we need privacy from.

The government does not seem to see this. They’re keen to stress how well the data is protected in ‘security’ terms – protected from hackers and so forth – without realising (or perhaps admitting) that the people we really want privacy from, the people who present the biggest risk to the users, are the government themselves. We don’t trust this government – and we should not really trust any government, but build in safeguards and protections from those governments, and remember that what we build now will be available not just to this government but to successors, which may be even worse, however difficult that might be to imagine.

Ways forward?

Where do we go from here? It seems likely that the government will try to push on regardless, and present whatever happens as a great success. That should be fought against, tooth and nail. They can and should be challenged and pushed on every point – legal, technical, practical, and trust-related. That way they may be willing to move to a more privacy-friendly solution. They do exist, and it’s not too late to change.

what do we know and what should we do about…? internet privacy

My new book, what do we know and what should we do about internet privacy, has just been published by Sage. It is part of a series of books covering a wide range of current topics – the first ones have been on immigration, inequality, the future of work and housing.

This is a very different kind of book from my first two books – Internet Privacy Rights, and The Internet, Warts and All, both of which are large, relatively serious academic books, published by Cambridge University Press, and sufficiently expensive and academic as to be purchasable only by other academics – or more likely university libraries. The new book is meant for a much more general audience – it is short, written intentionally accessibly, and for sale at less than £10. It’s not a law book – the series is primarily social science, and in many ways I would call the book more sociology than anything else. I was asked to write the book by the excellent Chris Grey – whose Brexit blogs have been vital reading over the last few years – and I was delighted to be asked, because making this subject in particular more accessible has been something I’ve been wanting to do for a long time. Internet privacy has been a subject for geeks and nerds for years – but as this new book tries to show, it’s something that matters more and more for everyone these days.


It may be a short book (well, it is a short book, well under 100 pages) but it covers a wide range. It starts by setting the context – a brief history of privacy, a brief history of the internet, and then showing how we got from what were optimistic, liberal and free beginnings to the current situation – all-pervading surveillance, government involvement at every level, domination by a few, huge corporations with their own interests at heart. It looks at the key developments along the way – the world-wide-web, search, social networks – and their privacy implications. It then focusses on the biggest ‘new’ issues: location data, health data, facial recognition and other biometrics, the internet of things, and political data and political manipulation. It sketches out how each of these matters significantly – but how the combination of them matters even more, and what it means in terms of our privacy, our autonomy and our future.

The final part of the book – the ‘what should we do about…’ section – is by its nature rather shorter. There is not as much that we can do as many of us would like – as the book outlines, we have reached a position from which it is very difficult to escape. We have built dependencies that are hard to find alternatives to – but not impossible. The book outlines some of the key strategies – from doing our best to extricate ourselves from the disaster that is Facebook to persuading our governments not to follow the ultimately destructive paths they seem determined to pursue. Two policies get particular attention: Real Names, which though superficially attractive are ultimately destructive and authoritarian, failing to deal with the issues they claim to address and putting vulnerable people in more danger; and the current, fundamentally misguided attempts to undermine the effectiveness of encryption.

Can we change? I have to admit this is not a very optimistic book, despite the cheery pink colour of its cover, but it is not completely negative. I hope that the starting point is raising awareness, which is what this book is intended to do.

The book can be purchased directly from Sage here, or via Amazon here, though if you buy it through Amazon, after you’ve read the book you might feel you should have bought it another way!


Paul Bernal

February 2020

For Brexit

When hate and lies

Found wings to fly

And ignorance

Gained prominence

Those Empire songs

And Big Ben bongs

Nostalgic dreams

Weren’t what they seemed

Rose-tinted specs

With dire effect

And science dies

Beneath those lies

With knowledge lost

Old friendships tossed

For hateful thoughts

A mood they caught

And migrants blamed

Old hates inflamed

“Take back control”

And lose your soul

And so we go

Although we know

That wounded future

Finds no suture

All the madness

Leaves just sadness

It’s over now.

And how.

P Bernal

The BBC’s problems are no conspiracy theory…

The BBC’s latest response to their challenges over their election coverage, in a piece in the Guardian by Fran Unsworth, their director of news and current affairs, has a very welcome headline:

“At the BBC, impartiality is precious. We will protect it”

Fran, and the BBC, are right that their impartiality is precious – as well as being required by law – but by dismissing those who are challenging them as conspiracy theorists they are doing the opposite of protecting it. They’re helping to ensure its demise.

Not a conspiracy theory

The first and most important thing to say is that very few people – and no-one serious – are suggesting there is any kind of conspiracy going on here. To suggest that they are is a classic straw man argument. Conspiracy theories are easily dismissed, and often make little sense when analysed. Of course it’s impossible to get a large number of independent-minded journalists and individual editors to follow a conspiracy. We know that very well – but it’s absolutely not what the BBC is being accused of, so attacking it and dismissing it bears no relationship to the real problem – or real problems, because there are a number of connected problems involved here.

The problems with the BBC are qualitatively different. Unconscious or subconscious bias. A tendency to groupthink. Subservience to authority. High-handedness to the rest of us. This, coupled with a kind of naïveté and misunderstanding of the new media environment, is what produces the problems that we see with the BBC – and which the BBC either don’t see or don’t want to see or address.

Making mistakes

Everyone makes mistakes – though many might take issue with Fran Unsworth’s description of ‘a couple of editorial mistakes’ as something of an underestimate – and no-one expects all mistakes to be avoided. The big questions, though, are what kind of mistakes are made, how they are corrected and avoided in the future, and what kind of apologies are made for them. That’s where the question of unconscious or subconscious bias comes in. The two mistakes Fran Unsworth is presumably referring to are using the wrong clip for Boris Johnson at the Cenotaph and editing out the laughter that followed his answer about trust in the Question Time debate, but there are a number of others. The most noticeable thing about them, however, is not the individual errors, but that they all lean in the same direction. All tend to favour Boris Johnson. That’s where the question of bias comes in. Not a conspiracy theory that the mistakes are made deliberately, under some kind of orders, but that they tend to follow the subconscious bias.

Subservience to authority

This is closely related to the accusation – made in particular by Peter Oborne – that the BBC is too servile to the Prime Minister’s Office. Again, this isn’t a conspiracy theory, but an observation, and certainly not one restricted to the BBC. Robert Peston fits the profile every bit as much as Laura Kuenssberg, for example. This is nothing new for the BBC, however, as the role of being a state broadcaster has consequences, but it has a particular significance in a time when those in authority – and those in Number 10 in particular – are notably less trustworthy than in the past.

Being willing to make compromises in order to get access is normal journalistic practice, but there are balances to be found and the main accusation is that the balance has been tipped too far. When Number 10 is restricting other media – bans on Channel 4 News and on the Daily Mirror for example – it should ring alarm bells in the minds of any journalists. When the criticisms of Peter Oborne are taken into account, those alarm bells should be listened to even more carefully. Denying that it’s even possible that the balance may have been misjudged, rather than engaging in critical self-examination, is a recipe for disaster.

Fran Unsworth assures us that the BBC are not ‘cowed or unconfident’. I hope she’s right, but the evidence does not really support her. The other ‘mistake’ – failing to secure a date for an Andrew Neil interview with Boris Johnson whilst telling the other leaders (or at the very least hinting to them) that they had – does not look at all good. Acquiescing to Johnson’s subsequent request to get the Sunday morning chat with Andrew Marr rather than the evening grilling by Neil makes it look even worse. A strong, ‘uncowed’ BBC would not have let either of those things happen.

Understanding the new media

Another key aspect of the current political climate – and again, the current occupants of Number 10 are critical here – is that the relationship between the old and the new media is vitally important. It is very easy for the ‘old media’ to get ‘played’ by skilful operators of the new media. Selectively RTing poorly phrased and incomplete tweets by BBC journalists, taking them out of context and not mentioning critiques that had been put in separate tweets is just one example. Using clips from interviews similarly selectively or even editing them to create an effect (making Keir Starmer pause and look as though he didn’t answer a question that he did, or editing out the laughter that followed Boris Johnson’s answer on trust) is pretty standard practice now – and the BBC should be aware of that.

There are things that the BBC journalists could do to slow down this manipulation – including the criticism within the tweet rather than separately, for example. “Mr Johnson again mentioned the 50,000 new nurses” in a tweet leaves it open to magnification without criticism; “Mr Johnson again claimed the debunked number of 50,000 new nurses” does not. Taking more care over words would also help: say that a politician ‘says’ or ‘claims’ rather than ‘reveals’ something if the thing they are claiming is dubious. Being cynical in the face of people with a track record of dishonesty isn’t being unfair, it’s being a proper journalist.

High-handedness to critics

The responses to criticism – and Fran Unsworth’s is just the latest of many – have been perhaps the most disappointing of all. Anyone even slightly criticising the BBC is dismissed as a conspiracy theorist, fobbed off with straw man arguments or worse. Huw Edwards suggested Peter Oborne looked ‘crackers’ for suggesting the clipped version of Boris Johnson’s response on trust had been edited – and even when the BBC eventually admitted it had been edited there has been no apology from Edwards.

This is pretty much the definition of gaslighting – and the BBC should know this and should find a much, much better way.

Trusting the BBC

Right now, we need the BBC to be working well. We need to be able to trust the BBC – and the BBC needs us to trust them. Calling its critics conspiracy theorists and miscasting their criticism as ‘crackers’ is pretty much guaranteed to damage that trust. It is already close to breaking point. Unless the BBC starts to understand this – and to openly acknowledge it, because I am quite sure there are a fair number of journalists and others in the BBC who are quite aware of the problems – that trust will be gone. The BBC needs to understand how it appears to others.

The dramatic cartoon in the Dutch newspaper Volkskrant, showing Boris Johnson raping Britain whilst Nigel Farage and Jacob Rees-Mogg et al hold her down, has the BBC pushing away the crowd saying the Dutch equivalent of  ‘move along, nothing to see here’. This should really give the BBC pause for thought. What role are they taking? How do they want to be remembered? When the rest of the world can see it but the BBC themselves can’t, things have got very bad. This may be the BBC’s last chance. I hope it takes it.

Tories, Twitter and Fake News

The furore surrounding the Conservative Party’s ‘rebranding’ of its press office Twitter account as ‘FactcheckUK’ during the leadership debate has been quite spectacular.

The BBC’s Emily Maitlis called it ‘dystopian’ on Newsnight, and the reaction on Twitter itself was, as many things on Twitter are, a mixture of outrage, anger, defensiveness and humour. And yet the full impact and the real importance of this seemingly small piece of deception do not seem to have been properly appreciated by many – at least in part because they need to be considered in the context of the much misunderstood phenomenon of ‘fake news’. It is not just that the Tories were contributing to fake news – using some well known techniques – but that their activities directly undermined some of the few effective tools that exist to combat (or at least reduce the impact of) fake news.

Fake news is very difficult to fight. It is not a new phenomenon – its history can be traced back pretty much as far as human history. Classic examples include its use to demonise Vlad the Impaler in 15th-century Wallachia – one of the reasons he became a byword for brutality and the basis of the myth of Dracula. What is different now, and what makes it more of a problem in the current environment, is the way that social media works – the speed and sharing networks of Facebook and Twitter, the gameable curation algorithms of YouTube, and the ease with which content can be created and tailored all contribute to something which can have a huge effect, particularly in times of political turbulence.

The question of how to deal with this has been wrestled with by lawyers, academics, tech companies, governments and more, and many suggestions and ‘solutions’ have been put forward – including, importantly, the use of law to clamp down on fake news, tech to detect it, and ‘rules’ applied and enforced by the social media companies. The UK government introduced its ‘Online Harms White Paper’ earlier in the year, and one of the key harms it aimed to deal with was misinformation… so one of the first reactions to seeing the UK’s governing party engaging in fakery should be to question their suitability to govern the regulation of fake news.

This isn’t just a particular problem for the Tory Party in the UK. All over the world governments are wringing their hands about fake news, bringing in laws that are often harsh and censorious – and, worse, using fake news themselves. Fake news, from a government perspective – and certainly from the Tory Party perspective – is only a bad thing when other people engage in it.

Most of the ideas and tools suggested to ‘deal with’ fake news are unlikely to be effective, can be gamed or sidestepped, or have significant and damaging side effects. All, however, do rely on one key factor: if we are to have any chance of dealing with fake news and other forms of misinformation we need to have some kind of ‘anchor points’ of reality to judge the fakery against. The Tory Party’s little deception yesterday directly undermined two of the main ways that those anchor points are established.

The first of these is the verified account – the ‘blue tick’. This is not, as some seem to think, a badge of honour, or a status symbol, but is intended to be a way that you can be sure that the account is what it says it is. For someone with a verified account to be misleading as to what they are is to directly undermine this – and when CCHQ ‘relabelled’ itself as a seemingly neutral ‘fact checker’ it was being directly misleading, and in a way specifically forbidden by Twitter in the terms and conditions for a verified account.

The second is the existence of fact checkers themselves – they’re intended to provide those anchor points, to measure claims against reality. By creating a fake fact check account, and then by using it to do fake fact checks, spreading propaganda, they were not only being misleading but undermining the whole concept of fact checking, damaging another of the key ways in which people have a chance to work out what is true.

Dominic Raab tried to suggest this didn’t matter, because anyone looking at the account would see from the details that it was still ‘@CCHQPress’, and so know it wasn’t really a fact checker. “No one who looks at it for more than a moment will have been fooled,” he said, missing the key point that one of the main techniques of fake news is to create things precisely for those who only glance at them for a moment, who catch them in passing. Empirical evidence shows that even one impression of a headline (or a tweet) can have an effect and make a story more likely to be believed. That is very often how Twitter, in particular, works. The ‘rebranding’ included a simple, bright colour and a large tick mark, just like those of real fact checkers. The immediate impression for those glancing for a moment was that of a fact checker – and what other reason did they have for doing the ‘rebrand’?

Some of the other defences of the approach from the Tories have attempted to suggest that this is all normal, and that no-one outside the Westminster Bubble or the media geeks would be interested. Others have feigned naïveté, as though this isn’t any kind of ‘trick’ or deception, or acted as though they don’t understand why people are upset about it. To believe these ‘explanations’ is the real naïveté: the strategists involved in the Tory campaign may not be geniuses, but they do have a more than working knowledge of how social media works. Boris Johnson has surrounded himself with the people who worked for the ‘leave’ campaign that placed social media at the centre of its strategy, using profiling, targeted ads and all kinds of other related practices very effectively during the Brexit campaign. This is their area.

The response from Twitter when alerted to this breach of their rules was fast – but very disappointing. CCHQ Press had to revert to their real name, and were told not to do it again or they would be punished properly. A slap on the wrist at best, and a chance to laugh it off – as they’ve been trying to do ever since. A much more appropriate punishment – and one available to Twitter under their rules – would have been to take away their verified status for a period. Until the General Election, perhaps? A verified status matters, and removing it would make the point that it is both a privilege and a recognition of ‘truth’. CCHQ broke the rules, undermined the concept of a verified account, and damaged the integrity of the system. They directly opposed the truth. Taking away that verified status would make that point – and without any form of ‘censorship’. They can still tweet, but their tweets would not carry the authority that they did. That would seem entirely appropriate.

Without it, it is easy for the Tories to continue the tactics – indeed, when asked, Michael Gove doubled down on the approach, saying it was the right thing to do, and that they were the ones who were working for the truth. Again, this is a classic tactic of misinformation – and one familiar from all the many years that those in power have engaged in propaganda – to accuse your enemies of the things you are guilty of, shifting the blame and muddying the waters at the same time. That muddying of the waters, the blurring of the issues, is all about making truth harder to find, and creating a kind of exhaustion amongst those who seek to find it. Given the knowledge and understanding that Boris Johnson’s team have of social media, we can expect more and different examples of the use of social media ‘dark arts’ in the rest of the campaign.

We need to be ready for this, and in particular to be ready to counter it, to alert people to it, and to fight. Misinformation is hard to fight, but even harder to fight if we take away those few tools we have. If we don’t fight to keep those – and verified accounts and relatively reliable fact-checkers are two of those tools – we will lose the bigger fights for truth and for an even slightly functional democracy. Right now, it looks as though that’s exactly what’s happening.

UPDATE: Since I wrote this post, which included a warning that the Tories would engage in ‘more and different examples of the use of social media ‘dark arts” they have provided an excellent example – their fake/spoof Labour manifesto, which they’ve linked via paid advertisements to searches on Google for ‘labour’. Not only have they done this, but they’ve done what in the past we would have called ‘cybersquatting’ – registering a domain name that looks as though it’s ‘official’ with a deliberate attempt to mislead. In this case, it’s labourmanifesto.co.uk… looks real, makes no mention of the fact that it’s run by the Tories…. Yup, we’re in for a lot of ‘games’ this election…

Note that whilst the manifesto itself says that it’s by the Tories, the domain name doesn’t, and nor, in its original form, did the advertisement and headline that appeared on the Google search results – which is all you would see if you didn’t click on it. This has now been corrected – seemingly it was an error on Google’s part.

GEEK POINT: This isn’t immediately illegal, though IT law people might well suggest that it’s an ‘abusive registration’ of a domain name, and that if Labour applied to Nominet to take over the domain, they might well be successful. By that time, of course, the damage would have been done….

Response to Online Harms White Paper

My submission to the Online Harms White Paper consultation is set out below. This has been one of the hardest government consultations for me to respond to. In part this is because the White Paper covers so much ground that there is far more to say than can fit into a reasonably sized response – one that stands a chance of being read properly – but in part it is because the consultation looks very much as though it has already assumed the main answers. The questions as set out in the consultation are very much on the detail level about how to do what they’ve already decided to do, although a great deal of what they’ve decided to do is at best questionable, at worst extremely likely to be not just ineffective but actually counterproductive, as well as restricting crucial internet freedom for many of the people who need it the most.

That means my response is somewhat ‘bitty’, covering only a few select areas as well as giving general comments. Fortunately there are some other really excellent responses out there.

Response to the Online Harms White Paper consultation

I am making this submission in my capacity as Senior Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research into internet law and specialise in internet regulation from both a theoretical and a practical perspective. My first book, Internet Privacy Rights – Rights to Protect Autonomy, was published by Cambridge University Press in 2014. My second book, The Internet, Warts and All: Free Speech, Privacy and Truth, was also published by Cambridge University Press in August 2018, and has the question of regulation of the Internet as one of its central themes. ‘Online harms’, as set out in the White Paper, are central to that book, from the chapters covering freedom of speech and fake news to the chapter on the nature and practice of trolling. There are direct recommendations about regulation of all of this contained within that book.

I have previously responded to a series of government consultations in this and related fields, including the House of Lords Internet Regulation Inquiry and the DCMS Fake News Inquiry in 2018, and was involved in the Law Commission Abusive and Offensive Online Communications Project that same year. This area falls squarely within my field of expertise and I have written extensively about it in forms other than the two academic books already mentioned. I would be happy to contribute further if that would be of assistance.

Introduction to this submission

Whilst the problem of online harms is a significant one, there are significant dangers associated with inappropriate and excessive regulation. As well as potentially putting freedom of speech, freedom of association and assembly and other human rights at risk, many of the methods suggested could end up being counterproductive, actually causing more harm than they address. They can encourage countermeasures that mean that the real ‘villains’ avoid being held to account, they can create tools that are used by what might loosely be called ‘trolls’ against their victims, and they can produce more arbitrary punishments that make it harder to respect the laws and those attempting to enforce them.

It is important not to be persuaded by inaccurate characterisations of the internet as a ‘wild west’ that is ungoverned and needs ‘reining in’. For the vast majority of people, the vast majority of the time, the internet is a place that provides great benefits and an essentially safe and secure environment to socialise, do business, find information and much more. Moreover, the internet is already regulated by a wide range of laws, from those governing speech (such as S127 of the Communications Act 2003 and the Malicious Communications Act 1988) and public order law to data protection, copyright, fraud and ‘revenge porn’, as well as civil law such as defamation law, misuse of private information and much more. Regulators such as Ofcom and the Information Commissioner’s Office already have extensive powers to operate online. This is not in any real sense an unregulated area – indeed, in many ways, speech online is subject to tighter control and more regulation than speech ‘offline’.

Further, the nature of the online world means that rather than being a place where anonymity provides excessive ‘protection’, it is an environment where records are more precise, more persistent and more easily analysed than the ‘offline’ world, often making people more accountable for their speech than in the past. There are technological and legal mechanisms to both locate and potentially prosecute perpetrators of online harms – and many have indeed been prosecuted in ways that might well have been seen as disproportionate if their speech had been offline rather than on the internet.

All this means that much more care needs to be taken about how – and indeed whether – to regulate speech any more harshly than it is currently regulated. There are some specific areas where it might be appropriate, but a heavy-handed approach to the regulation of online speech, though politically attractive, will almost certainly cause much more harm than good. Moreover, it can create a sense of complacency about dealing with much more important problems in the online environment, as well as providing inappropriate reassurance that distracts from the critical need to encourage people to be self-supportive and ‘savvy’ online, which is much more important than any regulator could be.

This is perhaps the most important point, and it is good to see that a section of the White Paper is devoted to awareness and in particular to empowering users. This should be emphasised in all communications – and the idea that we can somehow create an internet that is completely ‘safe’ should not be promoted so positively. A ‘safe’ internet can become a sterile internet, losing the creativity and dynamism that is the lifeblood of the environment. We should neither overplay the dangers – as the portrayal of the internet as a lawless ‘wild west’ suggests – nor exaggerate the capability to remove those dangers entirely, as the idea of making the internet ‘safe’ implies.

Similarly, if the government does try to regulate along the lines set out in the White Paper, it is important not to expect too much. This kind of regulation is highly unlikely to have a significant impact on the level of ‘online harms’ that are encountered. The risks associated with this kind of regulation, as well as the significant costs involved in setting it up, make it hard to justify pursuing it as it stands.

1          Focussing on illegal content and activity

1.1       That the White Paper starts by referring to illegal ‘and unacceptable’ content and activity should be a concern from the start. If something is really ‘unacceptable’, it should be made illegal – and unless it is illegal, it should not be deemed unacceptable. If acceptability can be determined by policy or politics rather than law, the scope for abuse, uncertainty and bias is enormous. Setting what amounts to a ‘moral’ or ‘ethical’ view of acceptability is a very slippery slope.

1.2       Further, setting one set of standards of ‘acceptability’ for the whole of the internet is not only doomed to failure but likely to destroy some online communities that are in most ways positive and supportive for people who spend time in them – something that should be strenuously avoided. One of the key strengths of the internet is that it allows space for the existence of very different communities and very different platforms – this has been true since the beginning of the ‘social’ internet in particular. Imposing a set of standards ‘from above’ that do not meet either the needs or the expectations of those communities is not only unlikely to succeed in any meaningful way but is likely to cause anger and resentment.

1.3       Where content and behaviour is illegal, the law should apply across all platforms and communities. Deciding ‘acceptability’ should be left to the platforms and communities themselves. This way the different platforms and communities can develop in ways that suit them. Encouraging a diversity of platforms and communities has the additional potential benefit of dispersing the power currently wielded by the internet giants – and reducing vulnerability to things like fake news and political manipulation, as part of the reason for the effectiveness of both has been the concentration of data and audience on particular platforms, Facebook and YouTube in particular.[1]

2          Online harms

2.1       The online harms discussed in the White Paper need to be considered in the light of this. The first main types discussed in the White Paper fit clearly into the illegal category: CSEA, terrorist content, content uploaded from prisons, the sale of illegal opioids. Law already exists to address all of these, and to a significant degree this law is already effective, insofar as it can be effective given the nature of the problem. A new online regulator, following the lines discussed in the White Paper, is unlikely to have a significant effect on any of these areas – more resources for law enforcement, for prisons (to take more control over the supply of mobile phones, for example) and so forth are much more likely to be effective.

2.2       The other harms discussed, ‘[b]eyond illegal activity’, from section 1.15 of the paper on, are another matter. Cyber bullying, misogyny and other forms of online abuse can cross the threshold into illegality, and many have been successfully prosecuted (e.g. under the Malicious Communications Act 1988 and S127 of the Communications Act 2003). This does not mean that further law or regulation is required, but that more consistency, better training and clarity from those enforcing the law, and more resources for them, could improve matters, particularly where the application of these laws has been seen to appear arbitrary and out of touch. The notorious ‘Twitter Joke Trial’ of Paul Chambers in 2012, which eventually saw the conviction quashed after a series of appeals, left those enforcing the law looking more than foolish. This was not a result of too little law or too little regulation but of authorities that did not understand the online world.

3          Anonymity online

3.1       The White Paper notes that ‘tackling online anonymous abuse’ is a key concern. This has been a subject of discussion for those studying the internet for many years – and it is important to raise a strong, cautionary note against the idea that requiring ‘real names’ would be an effective tool against online abuse. In practice, there is little evidence to suggest that it might be, and significant evidence that it would not – and that it would put vulnerable people in particular situations at risk.[2]

3.2       It may seem counterintuitive, but empirical evidence has shown that ‘trolls’ required to use their real names online actually become more rather than less aggressive. Trolls often ‘waive their anonymity’ online, becoming even more aggressive when posting with their real names.[3] As I note in my 2018 book, The Internet, Warts and All, it may be that having a real name displayed emboldens trolls, adding credibility and kudos to their trolling activities. It may also be that they feel they have less to lose and less to protect when their names are revealed – or that it creates a ‘badge of honour’. Whatever the reason, the evidence does not suggest that requiring real names deters trolls or trolling.

3.3       Further, forcing people to use real names puts some people at risk – from whistle-blowers to victims of spousal abuse, to people with religious or ethnically identifying names and many more groups. It also makes the victims of online abuse more vulnerable, as their attackers can learn more about them and use that to abuse or threaten them – finding out their personal details and using them against them, threatening to report them or tell lies about them to their families and friends, employers and so forth. The classic troll tactic of ‘doxxing’ – releasing documents about a victim – is made much easier by a real names policy.

3.4       There are already legal and technical methods for revealing who lies behind an anonymous account – anonymity online is never more than a basic protection – which can and should be used when required. There are also platforms where real names are already required – Facebook for one – but there is little evidence that they provide more protection from abuse. What could help, as noted above, is a greater diversity of platforms and communities online, so that people can find places that are safer for them online. The rise of group-based private social media systems like WhatsApp may be in part a response to this problem: groups kept private and secure are less open to external abusers.

4          Young people online

4.1       The note in the paper that most children have a positive experience online is very welcome: it is really important not to portray the online world as somewhere fundamentally dangerous for children and young people. An overly protective approach to children online would reflect a mischaracterisation and misunderstanding of how the internet works for children, and any regulation that restricts rather than supports children online should be avoided.

4.2       It is critical in understanding this not to put too much emphasis on the worries of parents about their children’s online activities, particularly when those worries are actively encouraged by the ways that they are questioned about it. It can be a reflection of the way that parents misunderstand what their children are doing, and feel out of touch. A greater emphasis on educating parents so that they don’t worry would be very welcome.

4.3       Recent studies also show that concerns about the impact of ‘screen time’ on adolescents’ mental health are likely to be unfounded.[4] This fits into a common pattern of misplaced fears and concerns based on misunderstanding of both technology and the lives of young people. It is important not to overreact to ideas and fears spread through ignorance. An overly onerous regulatory approach towards young people online should be avoided. This is not to underplay the importance of dealing with key issues such as self-harm and suicide, sexting and revenge porn, but to place them in context. It is also important to understand the causality here: where there are correlations between online activity and self-harm, for example, it does not follow that the online activity caused the harm.

4.4       An area where the government is already attempting to regulate in relation to children – age verification for access to pornographic and other ‘adult’ content – is another example where regulation is highly unlikely to be effective, and a prime example of another classic failure of regulation, the failure to listen to experts. Almost everyone in the technology industry has advised against the path that the government has taken: it won’t in practice help protect children from harm, will encourage complacency, has already encouraged countermeasures both technical (including the rise in usage of VPNs) and tactical (using privacy groups and so forth), and does not address the real issue of harm. Moreover, it is likely to be very expensive and, technologically, almost impossible to make work well. It was and remains a regulatory trap – the government should do its best not to fall into similar regulatory traps in other areas. Caution, care, and a willingness to listen to experts even when they go against what might seem ‘obvious’, are very much needed in the area of internet regulation.

4.5       One area where regulation in relation to children could, however, be useful, is privacy – in common with other areas mentioned in this submission, privacy underpins protection in other ways. A requirement for real names, as noted above, would be likely to harm rather than help children at risk of cyber bullying and other online abuse. The ability for children to protect their privacy is critical – and restricting the gathering of data about children by social media platforms, advertisers and so forth should be encouraged.

5          Privacy, fake news and political manipulation

5.1       That leads to the more general point about privacy and personal data: the gathering and use of personal information underpins many of the worst problems on the internet at present. Privacy invasion and profiling lie behind the current manifestation of the fake news phenomenon and the broader issue of political manipulation (as graphically illustrated by the Cambridge Analytica saga) discussed in the White Paper, as well as providing tools for scammers and other criminals, creating vulnerabilities that can be exploited and much more.

5.2       Indeed, rather than focus on the symptoms of fake news and related harms, as the White Paper seems to do in paragraphs 7.25 onwards, focus should be placed on privacy, on data gathering, profiling and targeting. It is these techniques (again, as graphically illustrated by the Cambridge Analytica saga) that make misinformation and political manipulation so particularly effective on the current internet. The White Paper notes that it will be looking at advertising online – but does not make the connection between the techniques used by online advertisers and those used by people spreading fake news and misinformation. They are, in practice, the same methods, the same techniques (data analysis, profiling and targeting), and whilst these are seen as essentially harmless, normal business practices, any attempts to ‘deal with’ fake news, political manipulation and electoral interference are bound to fail. ‘Fact checking’ and labelling of fake news or unreliable sources has been empirically demonstrated to be counterproductive, making people more likely to believe the fake news – one of the reasons Facebook abandoned the practice in 2017.[5] Making this kind of labelling part of any ‘duty of care’ would be directly counterproductive to combatting this kind of online harm.

5.3       Privacy and personal data is also an area where extensive law already exists. Data protection law, and in particular the new General Data Protection Regulation, has the potential to provide a good deal of support for individual privacy – but only if it is enforced with sufficient rigour and support. The Information Commissioner’s Office (‘ICO’) needs to be given more resources both in terms of finance and expertise, and perhaps more responsibilities.

6          The role of a regulator

6.1       As noted in various sections above, there are many areas discussed in the White Paper for which either regulation already exists or further regulation is likely to be counterproductive. The idea of imposing a ‘duty of care’ on internet platforms for some of the subjects discussed in the White Paper should therefore be viewed with great caution. There are further areas that internet platforms are already working extensively to address, and where the question of whether a regulator is really needed should be asked. These include the online abuse of public figures – much of what is suggested is already being done, particularly by Facebook and Twitter, and it is easy to fall into a trap of saying ‘it’s all the fault of the social media companies’ when there is a much bigger, underlying issue that is on a societal level. The online abuse of public figures is closely connected with racism and misogyny – female and ethnic minority public figures are subjected to more, and more virulent, abuse than others – and whilst these are still tightly embedded in our society, blaming the social media companies for the existence of such abuse can easily become a form of deflection or avoidance.

6.2       Codes of practice could be welcome in these areas, but as noted above, imposing one set of standards on all (or most) platforms is likely to be ineffective and to have significant side effects. Enforcing that code of practice is likely to be difficult and hard to make consistent, fair or appropriate.

6.3       As noted above, privacy is of critical importance, and yet some of the suggestions for the ‘duty of care’ involve actually invading or weakening privacy for precisely the people who need it the most. In 7.35, for example, it is suggested that ‘vulnerable users and users who actively search for…’ certain content should be monitored – how is this to be done without extensive invasions of privacy, and how are those invasions of privacy to be done in ways that do not put the specifically vulnerable users at further risk? Again, the likelihood that people will take countermeasures and develop tools and techniques to avoid this kind of monitoring should not be underestimated. Much of this kind of content will be driven to areas where it is less easy to provide support and help for people who really need it.

6.4       These are just some of the examples that indicate quite how difficult effective regulation of this kind is likely to be. It is vital that this regulatory exercise be understood to be highly challenging, very likely to be ineffective, and extremely expensive. Expectations as to its effectiveness should be kept in check, and the potential damage to internet freedom – at precisely the time when it is most needed – should not be underestimated.

7          Internet Freedom

7.1       It is easy to blame the internet for problems that have other causes, and easy to see it as something that needs to be ‘reined in’ or controlled. This is, as noted in various sections above, a mischaracterisation of the current situation: for the vast majority of people, the vast majority of the time, the internet is something immensely positive, productive and supportive, providing for most ordinary people forms of communication and access to information that were previously the province only of the extremely privileged. Part of the reason for this huge positive is the amount of freedom that we currently have – and how it underpins many of our human rights, from freedom of expression to assembly and association, both online and off, freedom from discrimination, the right to a fair trial and more.

7.2       This freedom is something that should not be lightly sacrificed, particularly on the basis of myths and misunderstandings, or from an intention to assuage particular sections of the media. Almost all of the measures suggested in the White Paper have an impact on both freedom of speech and access to information, and many have a significant impact on privacy and the other vital human rights already mentioned. That is not to say that they should not be considered, but that those impacts need to be considered very seriously, and regulation not undertaken lightly. Excessive regulation can end up arbitrary and unfair, it can exacerbate existing problems, it can be gamed by people to the detriment of their enemies – and internet trolls and others wishing harm can be experts in such gaming, using tools created to protect people to actively harm them.

7.3       It should also be borne in mind that tools created now, with authorities that we deem to be benign, can be used by successor authorities that are less benign – we need to learn the lessons from history about this, and to avoid setting things up that can end up being used to oppress rather than protect. This is another key reason for caution in regulating too harshly.

8          Responses to specific questions in the consultation

This response has focussed on the overall effect of the White Paper, and on some particular areas where problems might arise, rather than on the specific consultation questions. Some of the questions are beyond the scope of this response but some do warrant a specific answer. In particular:

Q1:       The first and most important thing that the government should do is demonstrate more transparency, trust and accountability itself. The government should lead by example – and a code of conduct for ministers in relation to things like misinformation would be a good start. In practice, ministers not only spread misinformation themselves but contribute to an environment in which information is not trusted. Proper accountability should begin with the government.

Q4        Any regulator needs to be fully accountable to Parliament, through parliamentary committee, rather than through the DCMS itself. It should be responsible to Parliament rather than to the government, particularly as it needs at times to hold the government to account (see response to Q1).

Q5        As noted throughout this submission, great care needs to be taken to avoid excessive regulation.

Q6-7     These are crucial questions, but I am afraid they betray a misunderstanding of the nature of privacy, something discussed in depth in Chapter 6 of my book The Internet, Warts and All. Privacy is not ‘two-valued’, with some communications being private and others public. It is much more nuanced than that, and sometimes ‘public’ forums include extremely private conversations and communications. The infamous Samaritans Radar failed precisely because it misunderstood this – and the ICO confirmed at the time that private and personal information can exist on ‘public’ social media platforms.[6] Much more care and thought is needed here, rather than assuming that private and public can be easily separated. Moreover, if the criteria for what counts as private become known, they can (a) drive people to more private forms of communication that mean they are less easily helped and (b) create an opportunity for ‘gaming’ the regulations.

Q8        As noted throughout this submission, this is the big question for the whole plan. Much more time and thought is needed to avoid the regulation being both heavy-handed and ineffective.

Q10      The bigger question is whether the regulator should exist at all in the form proposed. The government should be asking that bigger question before looking at the precise legal form. If a regulator is definitely decided upon, a new public body would seem more appropriate than an existing one: the ICO has too much to do already; broadcast and related areas are too dissimilar for Ofcom to have much chance of succeeding; and the BBFC is struggling over the contentious issue of age verification.

Q11      Making a regulator ‘cost neutral’ is laudable but brings the risk of even more potent lobbying than already exists – and the lobbies of Google, Facebook et al are already remarkably powerful. Whatever funding mechanism is determined needs to be clear, simple and not gameable, and that is very difficult to achieve given the expertise of those likely to be required to pay.

Q12      i) Unless any regulator has the power to disrupt business activities it is unlikely to have any impact at all. ii) ISP blocking already exists in relation to copyright, CSEA (via the IWF) and other areas. Extending it to new areas should be very much resisted, as the impact on freedom of expression is direct and significant, but given that blocking already exists for those areas there is little logical reason not to extend it. iii) Senior management liability, though attractive, is unlikely to be sustainable.

Q13      Under terms similar to the GDPR.

Q14      Yes, but the details would depend very much on precisely how the regulations are set out.

Q15      The risks associated with Brexit and the excessive nature of our surveillance regime – in particular things like demands for backdoors to encryption – are the biggest barriers to innovation in the UK technology industry. Both are understandably beyond the remit of this consultation, but those involved should be aware how damaging both are to the technology industry.

Q17-18  See section 4 above. Children need empowerment more than protection, and parents need to learn more than the children do. The regulator should play an informative role – but be aware that this is very limited, and not place too heavy an expectation on its success.

I hope this response is helpful. If you need any further information, or links to the research that underpins any of the answers, please let me know.


Dr Paul Bernal

Senior Lecturer in Information Technology, Intellectual Property and Media Law

UEA Law School

University of East Anglia

Norwich NR4 7TJ

Email: paul.bernal@uea.ac.uk

[1] See my article in the Northern Ireland Legal Quarterly in December 2018, Fakebook: why Facebook makes the fake news problem inevitable, online at https://nilq.qub.ac.uk/index.php/nilq/article/view/189

[2] This area is covered in depth in Chapter 8 of my book The Internet, Warts and All: Free Speech, Privacy and Truth, published by Cambridge University Press, 2018.

[3] Most notably the 2016 study from the University of Zurich, reported in Rost, Stahel and Frey, Digital Social Norm Enforcement: Online Firestorms in Social Media, PLoS ONE 11 (6)

[4] See Orben and Przybylski, Screens, Teens, and Psychological Well-Being: Evidence From Three Time-Use-Diary Studies, 2019 https://journals.sagepub.com/doi/10.1177/0956797619830329

[5] See https://www.newsweek.com/facebook-label-fake-news-believe-hoaxes-756426

[6] The Samaritans Radar story is the central case study of Chapter 6 of The Internet, Warts and All. It involved analysing social media postings in order to identify when vulnerable people might be contemplating suicide, and failed within ten days of its launch as its privacy invasions were found to be deeply intrusive to exactly the online community it intended to support, and seen as putting them at intense risk.

Impartiality and the BBC…

The issue of the BBC and impartiality seems never far from the surface these days – and during the highly charged Brexit process it seems to have erupted more and more. So much so that Ofcom, which has been the BBC’s regulator since taking over the role from the BBC Trust in 2017, is now undertaking a review.

This review is very much to be welcomed – but it needs to be understood too. This is not happening, as some seem to think, as a recognition of BBC bias, or an acknowledgement that something is wrong in practice with the BBC’s output. The terms of the review give a somewhat less direct explanation. The Terms of Reference of the review state that:

“We will seek to understand more clearly the importance the audience places on the BBC’s impartiality; whether they are satisfied that the current tools used to ensure due impartiality are effective; and how audience attitudes to impartiality, accuracy and trust relate to one another.”

This is much more about perception of impartiality than the practice. It is primarily a review of audience opinion rather than of the BBC’s practices. It is, however, an in-depth review and is very much to be welcomed – but whether it will go far enough or deep enough to satisfy the BBC’s critics is another matter entirely. The first signs are not as positive as they might be. When the BBC’s Director General was reported to have said that “We must stand up for it and defend our role like never before,” it set alarm bells ringing. Defensiveness, in many ways, is the last thing that the BBC should be thinking about now.

Indeed, in relation to impartiality in particular, the BBC’s defensiveness has been a big part of the problem. Whenever the BBC has been seen to be partial, its first reaction – and indeed that of its senior journalists and producers, particularly when responding on social media – has been a kind of aggressive and dismissive defensiveness. ‘How dare you suggest that’ and ‘don’t be ridiculous’ has been the general tone pretty much every time, even when it is pretty clear that the BBC has made a mistake, an error of judgment, or been ‘played’ by someone seeking an advantage. These things happen – and the BBC’s often intense denial that they are even conceivable not only looks ridiculous in itself but undermines the very trust that the BBC seeks to protect.

Neutrality and impartiality

It is important to be clear in what way the BBC is required to be impartial. The Broadcasting Code requires that the BBC (and other broadcasters) “…ensure that news, in whatever form, is reported with due accuracy and presented with due impartiality.”

Note that the requirements for accuracy and impartiality sit together – they are two parts of the same requirement, and quite rightly. Whilst ensuring impartiality, broadcasters should not take their eye off accuracy. If one side of a debate is telling the truth and the other is telling lies, it is not a breach of impartiality to call out the lies, even if that looks as though it is being harder on one side than the other. If one side is lying, the ‘due’ in ‘due impartiality’ actually requires the lies to be called out.

The code makes a big point of what ‘due’ is supposed to imply:

“Due” is an important qualification to the concept of impartiality. Impartiality itself means not favouring one side over another. “Due” means adequate or appropriate to the subject and nature of the programme. So “due impartiality” does not mean an equal division of time has to be given to every view, or that every argument and every facet of every argument has to be represented.

This is where, at least in perception, the BBC starts to get itself into trouble. That trouble has been official in relation to climate change, when it was rebuked by Ofcom for not challenging Lord Lawson sufficiently in an interview on Radio 4’s Today Programme in 2017, but has been made more intense over Brexit, when many people have suggested that the BBC’s journalists have not challenged the claims made, particularly by leading Brexiters, or called out things that are known to be untrue. In order to be seen to be impartial, it looks as though the BBC has let the requirement for accuracy slip, and potentially to a dangerous degree.

Being criticised by both sides isn’t evidence of impartiality

The BBC is of course criticised by both sides in the Brexit debate. Remainers claim the BBC is biased towards Brexit. Brexiters claim the BBC is biased towards Remainers. It is sometimes claimed, even by people in the BBC, that this is in some ways evidence that the BBC is impartial, or is getting the balance right. It is really important to understand that this is a logical fallacy. If you are getting the balance right, then you might well be criticised by both sides – but the converse is not true. One of the sides may be criticising you fairly, the other side unfairly. If one side sees that the bias is going their way, then they may well criticise anyway, to try to keep that bias in place, and to try to cancel out the criticism from their opponents. Bad faith criticism, to try to bully the journalists and indeed the BBC, is to be expected – particularly when the facts and evidence surrounding a particular issue point clearly one way rather than the other. If you don’t have the facts on your side, using underhand methods is one of the tools at your disposal – and that includes bad faith attacks on the broadcasters.

This does not, of course, mean that all criticism of the BBC and other broadcasters is done in bad faith – very much the opposite – but it does mean that the argument that because both sides are attacking we must be being neutral or impartial is fundamentally flawed. Criticisms should be taken seriously, but taken with a distinct pinch of salt.

Particular problems in the current era

Times are particularly challenging right now, and not just because politics is particularly heated. The BBC is facing a complex environment that puts particular pressures on its news and current affairs role – and particularly its impartiality.

One of the most important is the danger that it faces in being ‘played’ by people with a vested interest. This manifests itself in many different ways, from murkily funded ‘think tanks’ pretending to be neutral and portraying themselves as researchers when they are really highly manipulative lobbyists, to people trying to gain fame or push their own personal agendas. Problems like the ‘fake vicar‘ who appeared on Newsnight might have many different explanations, but they certainly do not inspire trust.

Newsnight’s use of one of Stephen Yaxley-Lennon’s (‘Tommy Robinson’) own propaganda pictures as the backdrop for its feature on him might also fit into this category – it was certainly a significant mistake, but whether it was accidental or Newsnight being played we may never find out, particularly as the BBC refuses even to countenance the possibility that it was wrong. The regular problems with apparent ‘plants’ in the audiences of Question Time might fit along the same lines – again it is not always clear whether these people blag their way into the audiences or are actually invited, as one UKIP candidate who managed to get on the programme three times has claimed.

Another challenge to the BBC is the apparent untouchability of its ‘big beasts’ – presenters such as Andrew Neil and perhaps most prominently John Humphrys – who headline the highest profile programmes. This points to another of the BBC’s biggest dilemmas – balancing ‘box office’ with informative, impartial and accurate journalism. When the presenters become the stars, that balance is challenged.

Headlines and Tweets

Another huge challenge, and one faced by the whole of the media, not just the BBC, is a failure to grasp the importance of headlines and summaries. Traditionally journalists have had little or no control over the headlines that accompany their stories – and in the past this has mostly been an annoyance, but little more. Now, however, it matters in many more ways. It is not just that people will often only see the headlines – that was always true, as you look at a newspaper in a shop, or see someone else reading it on the train – but that those headlines can often be the only thing that can be seen. It is what appears in search results, for example, or in a Twitter or Facebook feed. It is what is automatically generated if you click the button to tweet a story – you have to manually go in and change it if you want to say something different.

It can be screen-shotted without providing a link to the actual story. It can be used by manipulative politicians to imply something quite different from the intention of the story itself – Jacob Rees-Mogg has a particular track record here, but he is far from alone. There are people trying to manipulate news output all the time – it is one of the key features of the ‘post-truth’ era.

What needs to be done?

The first thing is to welcome this current review – and to encourage Ofcom to take it seriously, and the BBC to address it properly, and without this instinctive defensiveness that has characterised their approach to criticism so far. That, indeed, might be the single most important thing for the BBC to do. We all make mistakes – and we all know that others make mistakes. That includes the BBC.

Without acknowledging, let alone apologising for, its mistakes, the BBC looks worse and worse. The kind of dismissive responses to questions about impartiality, from the Corbyn/Kremlin backdrop to the Fake Vicar to the ‘jokes’ about Diane Abbott prior to airtime on Question Time, do the BBC a lot of harm. The BBC can easily seem aloof, arrogant, looking down its nose at its audience – that really needs to change.

One way it needs to change is through more openness about the debates that are going on. I am sure, from the people that I know at the BBC, that when something bad happens there are many people worried about it, wondering whether they misjudged the issue, and worse. I know there are people in the BBC, for example, who are concerned about the BBC’s role in the rise to prominence of Nigel Farage, and others who are embarrassed about the arrogance of some presenters on the Today Programme. These debates must be happening in the BBC behind closed doors – there should be some way to show the public that the BBC is at least aware that there are problems, and problems with practice, not just with the perception of audiences.

If the BBC could, just once, say something along the lines of ‘yes, we may have misjudged that, and with hindsight we shouldn’t have used that picture or invited that person to be interviewed, or we could have been tougher in our questioning of that politician’, rather than going straight down the ‘how dare you criticise us’ road, it would really help.

Taking genuine experts more seriously would also help – and again, I know the BBC does try hard here, and I know that many experts are hard to reach or less good ‘box office’ than politicians or ‘think tank’ representatives, but it really matters. That last part, the use of lobbyists without knowing or acknowledging their real background, funding and so forth, has been a perennial problem.

We need the BBC

The importance of the BBC in the current era cannot be overstated. We need high-quality, relatively impartial, accurate and informative journalism now more than ever. In the struggle with fake news and other forms of misinformation, the existence of reliable ‘real’ news is a crucial tool. The BBC ought to be able to provide it – it holds a unique and critical position. Its grip on that position, however, is far from firm. Changes need to be made if it is to be maintained. I hope the BBC is brave enough to make them.

(This is of course my own personal, biased and far from impartial perspective).

SLS 2019 – University of Central Lancashire, Preston


Here’s the official call for papers for the Cyberlaw section of the SLS

SLS Cyberlaw Section: Call for Papers/Panels for 2019 SLS Annual Conference at the University of Central Lancashire, Preston

This is a call for papers and panels for the Cyberlaw section of the 2019 Society of Legal Scholars Annual Conference to be held at the University of Central Lancashire in Preston, from Tuesday 3rd September – Friday 6th September. This year’s theme is ‘Central Questions About Law’.

The Cyberlaw section will meet in the first half of the conference on Tuesday 3rd and Wednesday 4th September.

If you are interested in delivering a paper or organising a panel, please submit your paper abstract or panel details by 11:59pm UK time on Monday 18th March 2019. All abstracts and panel details must be submitted through the Oxford Abstracts conference system which can be accessed using the following link – https://app.oxfordabstracts.com/stages/1028/submission – and following the instructions (select ‘Track’ for the relevant subject section). If you registered for Oxford Abstracts for last year’s conference, please ensure that you use the same e-mail address this year if that address remains current. If you experience any issues in using Oxford Abstracts, please contact slsconference@mosaicevents.co.uk.

Decisions will be communicated by the end of April.

I would welcome proposals for papers and panels on any issue relating to social media regulation, data protection, copyright reform and surveillance, including those addressing this year’s conference theme and, though it might seem hard to predict, the impact of Brexit on all aspects of cyberlaw. We welcome proposals representing a full range of intellectual perspectives in the subject section, and from those at all stages of their careers.

Those wishing to present a paper should submit a title and abstract of around 300 words. Those wishing to propose a panel should submit a document outlining the theme and rationale for the panel and the names of the proposed speakers (who must have agreed to participate) and their abstracts. Sessions are 90 minutes in length and so we recommend panels of three to four speakers, though the conference organisers reserve the right to add speakers to panels in the interests of balance and diversity.

As the SLS is keen to ensure that as many members with good quality papers as possible are able to present, we discourage speakers from presenting more than one paper at the conference. With this in mind, when you submit an abstract via Oxford Abstracts you will be asked to note if you are also responding to calls for papers or panels from other sections.

Please also note that the SLS offers a Best Paper Prize which can be awarded to academics at any stage of their career and which is open to those presenting papers individually or within a panel. The Prize carries a £250 monetary award and the winning paper will, subject to the usual process of review and publisher’s conditions, appear in Legal Studies. To be eligible:

  • speakers must be fully paid-up members of the SLS (where a paper has more than one author, all authors eligible for membership of the Society under its rule 3 must be members; the decision as to eligibility of any co-authors will be taken by the Membership Secretary, whose decision will be final);
  • papers must not exceed 12,000 words including footnotes (as counted in Word);
  • papers must be uploaded to the paperbank by 11:59pm UK time on Monday 26th August;
  • papers must not have been published previously or have been accepted or be under consideration for publication; and
  • papers must have been accepted by a convenor in a subject section and an oral version of the paper must be presented at the Annual Conference.

I have also been asked to remind you that all speakers will need to book and pay to attend the conference and that they will need to register for the conference by Friday 14th of June in order to secure their place within the programme, though please do let me know if this deadline is likely to pose any problems for you. Booking information will be circulated in due course, and will open after the decisions on the response to the calls are made.

With best wishes,

Paul Bernal

Corbyn and those European Courts

Jeremy Corbyn caused some distress amongst legal commentators over the weekend when he said to Andrew Marr that the European Court of Human Rights was ‘only in part an EU institution’. That simply isn’t true: the European Court of Human Rights (‘ECtHR’) is not in any way an EU institution. It is a Council of Europe court – and the Council of Europe is an organisation both broader and older than the European Union. The ECtHR exists to enforce the European Convention on Human Rights (the ‘ECHR’ – yes, all these abbreviations are confusing), something that was created in the aftermath of the Second World War and the Holocaust, agreed in 1950 and entering into force in 1953. Brits played a key part in its creation – it is something that for the most part the British legal community are justifiably proud of. So no, the ECtHR is not in any way an EU institution.

There is a link to the EU in a way, as a number of people have mentioned – but not in a way that makes the ECtHR in any way an EU institution. This link is that new member states of the EU are required to have signed up to the European Convention on Human Rights – and thus come under the jurisdiction of the ECtHR. This is because the EU recognises that the ECHR represents a minimum standard of Human Rights compliance – not that the ECHR is an EU document or the ECtHR is an EU institution. It isn’t even legally certain that existing members of the EU are required to be signatories of the ECHR – they all are, however, and no sensible or even slightly humane member state would be considering leaving the ECHR.

This is because the ECHR is a thoroughly positive document, and anyone who supports human rights should support our continuing to be a signatory. That certainly includes any Labour Party member – let alone any Labour Party leader, particularly one like Jeremy Corbyn with a history of supporting, indeed championing, human rights.

There is, however, at least one person who has suggested that we leave the ECHR: Theresa May. She’s been frustrated by the ECtHR more than once – and it is hard not to conclude that she’s far from a fan of human rights. Indeed, some have suggested that her antipathy for the ECJ – the European Court of Justice, which is in Luxembourg, as opposed to the ECtHR, which is in Strasbourg – arises because she has confused the two courts.

That confusion is why so many legal commentators reacted so angrily to Corbyn’s remarks. Muddying waters that are already pretty murky feeds into the confusion between the courts – and potentially puts human rights even further at risk than they already are.

There is another potential reason that Corbyn might not want to be completely clear about this. If you support the European Court of Human Rights – which you should do if you support human rights – then that gives yet another reason to oppose Brexit. Whilst we’re still in the EU, it’s harder for the likes of Theresa May to achieve their aim of removing us from the ECHR – though technically, as noted, existing members may not be required to be signatories of the ECHR, that point has not been tested and is highly unlikely to be. Keeping us in the EU provides another layer of protection for human rights. That, in these somewhat troubling times, might be crucial.