BT’s ‘Walk me home’: tech solutionism at its worst

Magical thinking

It’s all too easy to see a difficult societal problem and try to solve it with a technological ‘magic wand’. We tend to treat technology as magical a lot of the time – Arthur C Clarke’s Third Law, from as far back as 1962, that “Any sufficiently advanced technology is indistinguishable from magic”, has a great deal of truth to it – and that magical thinking is the route to a great many problems. This one, BT’s idea that women can ‘opt in’, probably via an app, to being tracked in real time as they walk home alone, is just the latest in a long series of such ideas. Click on the app, and you have a fairy godmother watching you, ready to protect you from the evil monsters who might be out to get you.

More surveillance doesn’t mean more security

That’s the essence of this kind of thinking: with tech, we can sort everything out. And, as so often, the method by which this tech will solve everything is surveillance. It’s another classic trap – the idea that as long as we can monitor things, track things, gather more data, we can solve the problems. If only we knew, if only we were able to watch, everything would be OK.

This is the logic that lies behind ideas such as backdoors into encryption – still being touted on a big scale by many in governments all over the world – which would, just as BT’s ‘walk me home’ would, actually reduce security and increase risks for most of those involved. Just as breaking encryption will make children more vulnerable, getting women to put themselves under real-time surveillance at their key moments of risk is likely to make them more vulnerable rather than less.

Look at the downsides….

It will make them easier to identify, and easier to locate. They will be effectively ‘registered’ on the system through downloading and activating the app; it will record their location, their regular routes (and the times they use them), their phone numbers and more. It will identify them as vulnerable – and make them even more of a target.

This, again, is a classic trap of tech solutionism. It’s easy just to look at a piece of tech in terms of how it’s intended to be used, and in terms of the intended user. In this case, the assumption is that the people tracking the relevant woman will only be people who have her best interests at heart, and who will only intervene in the best way, as the system intends. The good police officer, acting in the best possible way.

All systems – and all data – will be misused

This is in itself magical thinking, and the opposite of the way we should be looking at this. We have to be aware that all systems will be misused. History shows this – particularly in relation to technology. Just as one example, there is a whole series of data protection cases involving police officers misusing their ‘authorised’ access to data – from the Bignell case in 1998, where officers used their access to a motor vehicle database to find out details of cars for personal purposes, onwards. It must never be forgotten that Wayne Couzens was a serving police officer when he abducted, raped and murdered Sarah Everard.

This kind of system will also create a database of vulnerable women – together with their personal details, their phone numbers, their home addresses, the routes they take to get home (and when they use them) – and the fact that they feel vulnerable coming home. This will be a honeypot of data for any potential stalker – and again, we must not forget that Wayne Couzens was a serving police officer, and that he planned the abduction, rape and murder of Sarah Everard carefully. Systems like this would be a perfect tool for another would-be Wayne Couzens – and also for ‘smaller scale’ creeps and misogynists. The plethora of stories about police officers and others misusing their position to pester women – and worse – that have come out in the wake of Sarah Everard’s murder should make it abundantly clear that this isn’t a minor concern.

A route to victim-blaming – and infringing women’s rights

Perhaps even more importantly, systems like this are part of a route to blame the victim for the crime. ‘If only she’d used her ‘walk me home’ she would have been OK’ could be the new ‘if only she hadn’t dressed provocatively’. It puts pressure on women to let themselves be tracked and monitored – as well as making it their fault if they don’t use this ‘tool’ to save themselves.

This in itself is an infringement on women’s rights. Not just the right to be safe – which is fundamental – but the right to privacy, to freedom of action, and much more. It’s treating women as though they are like pets, to be microchipped for their own protection, registered on a database so that men can protect them. And if they don’t take advantage of this, well, they deserve what they get.

Avoiding the issue – and avoiding responsibility

All of which brings us back to the real problem: male violence. Tech solutionism is about attempting to use tech to solve societal problems – and the societal problem here is male violence. So long as the focus is on the tech, and on the tech that can be used by the women, the focus is off the men whose violence is the real problem. And so long as we think the problem can be solved with an app, we fail to acknowledge how serious a problem it is, how deep a problem it is, and how serious a solution it requires.

It also means that many of those involved avoid taking the responsibility that they have for the problem. The police. The Home Office. Men. Avoiding responsibility has become an art form for the Metropolitan Police, and for Cressida Dick in particular. Some of the officers who shared abusive messages with Wayne Couzens are still working at the Met – and those are just the ones that we know about. This problem is deep-set. It is societal.

Societal problems need societal solutions

The bottom line here is that this is a massive societal problem – and that is something that won’t be solved by an app. It requires a societal solution – and that isn’t easy, it isn’t quick, and it isn’t something that can be done without pain and sacrifice. The pain and sacrifice, though, should not come from the victims. At the moment, and with ‘solutions’ like BT’s ‘Walk me home’, it is only the victims who are being expected to sacrifice anything. That is straightforwardly wrong.

The starting point should be with the police. That there have been no resignations – least of all from Cressida Dick – is no surprise at all. Beyond a few pseudo-apologies and a concerted attempt to present Couzens as an ‘ex’ police officer, there’s been almost nothing. He was a serving officer when he did the crime. The Met should be facing radical change – if it expects to regain trust, it must change. Societal solutions mean that we need to be able to trust the police.

It is only when we can trust the police that technological tools like BT’s ‘Walk me home’ have a chance of playing a part – a small part – in helping women. The trust has to come first. The change in the police has to come first. Without that, we have no chance.

Children need anonymity and encryption!

In recent weeks, two of the oldest issues on the internet have reared their ugly heads again: the demand that people use their ‘real names’ on social media, and the suggestion that we should undermine or ban the use of encryption – in particular end-to-end encryption. As has so often been the case, the argument has been made that we need to do this to ‘protect’ children. ‘Won’t someone think of the children’ has been a regular cry from people seeking to ‘rein in’ the internet for decades – this is the latest manifestation of something with which those concerned with internet governance are very familiar.

Superficially, both these ideas are attractive. If we force people to use their real names, bullies and paedophiles will be easier to catch, and trolls won’t dare do their trolling – for shame, perhaps, or because it’s only the mask of anonymity that gives them the courage to be bad. Similarly, if we ban encryption we’ll be able to catch the bullies and paedophiles, as the police will be able to see their messages, the social media companies will be able to algorithmically trawl through kids’ feeds and see if they’re being targeted, and so forth. That, however, is very much only the superficial view. In reality, forcing real names and banning or restricting end-to-end encryption will make everyone less safe and secure – and will be particularly damaging for kids. For a whole series of reasons, kids benefit from both anonymity and encryption. Indeed, it can be argued that they need to have both anonymity and encryption available to them. A real ‘duty of care’ – as suggested by the Online Safety Bill – should mean that all social media systems implement end-to-end encryption for their messaging and make anonymity and pseudonymity easily available for all.

Children need anonymity

The issues surrounding anonymity on the internet have a long history – Peter Steiner’s seminal cartoon ‘On the Internet, Nobody Knows You’re a Dog’ appeared in the New Yorker in 1993, before social media in its current form was even conceived: Mark Zuckerberg was 9 years old.

It’s barely true these days – indeed, very much the reverse a lot of the time, as the profiling and targeting systems of the social media and advertising companies often mean they know more about us than we know ourselves – but it makes a key point about anonymity on the net. It can allow people to at least hide some things about themselves.

This is seen by many as a bad thing – but for children, particularly children who are the victims of bullies and worse, it’s a critical protector. As those who bully kids are often those who know the kids – from school, for example – being forced to use your real name means leaving yourself exposed to exactly those bullies. Real names become a tool for bullies online – and will force victims either to accept the bullying or to avoid using the internet. This, of course, is not just true for bullies, but for overbearing parents, sadistic teachers and much worse. It is really important not to think just about good parents and protective teachers. For vulnerable children, parents and teachers can be exactly the people they need to avoid – and there’s a good reason for that, as we shall see.

Some of those who have advocated for real names have recognised this negative impact, and instead suggest a system of ‘verified IDs’. That is, people don’t have to use their real names, but in order to use social media they need to prove to the social media company who they are – providing some kind of ID verification documentation (passports, even birth certificates etc) – and can then use a pseudonym. This might help a little – but it has another fundamental flaw. The information that is gathered – the ID data – will be a honeypot of critically important and dangerous data, both a target for hackers and a temptation for the social media companies to use for other purposes – profiling and targeting in particular. Being able to access this kind of information about kids in particular is critically dangerous. Such information being hacked and sold to child abusers isn’t just a risk, it is pretty much inevitable. The only way to safeguard this kind of data is not to gather it at all, let alone put it in a database that might as well have the words ‘hack me’ written in red letters a hundred feet tall.

Children need encryption

Encryption is a vital protection for almost everything we do that really matters on the internet. It’s what makes online banking even possible, for example. This is just as true for kids as it is for adults – indeed, in some particular ways it is even more true for kids. End-to-end encryption is especially important – that is, the kind of encryption that means only the sender and recipient of a message can read it, not even the service that the message is sent over.
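For the technically curious, here is a minimal sketch of the idea in Python, using the PyNaCl library. This is illustrative only – real messengers such as WhatsApp use the far more elaborate Signal protocol, with rotating keys and forward secrecy – but it shows the essential property:

```python
# Minimal sketch of end-to-end encryption using PyNaCl's public-key Box.
# Illustrative only - real messaging apps use more elaborate protocols.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device. The private keys
# never leave the devices, so the service in the middle holds nothing
# that can decrypt the traffic.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only the *public* halves are exchanged (the service can relay these).
alice_box = Box(alice_private, bob_private.public_key)
bob_box = Box(bob_private, alice_private.public_key)

ciphertext = alice_box.encrypt(b"see you at the usual place")
# The service relays ciphertext it cannot read...
plaintext = bob_box.decrypt(ciphertext)  # ...only Bob's device can.
assert plaintext == b"see you at the usual place"
```

The point to notice is that ‘breaking’ this for the police means changing the scheme itself – adding an extra key or a copy of the plaintext somewhere – and that opening is then available to every attacker, not just to the authorities.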

The example that Priti Patel and others are fighting in particular is the implementation of end-to-end encryption across all of Facebook’s messaging systems – it already exists on WhatsApp. End-to-end encryption would mean that not even Facebook could read the messages sent over the system. Opponents of the idea think it means that they won’t be able to find out when bullies, paedophiles etc are communicating with kids – bullying or grooming, for example – but that misses a key part of the problem. Encryption doesn’t just protect ‘bad guys’, it protects everyone. Breaking encryption doesn’t just give a way in for the police and other authorities, it gives a way in for everyone. It removes the protection that kids have from those who might target them.

End-to-end encryption protects against one other group that can and does pose a very significant risk to kids: the social media companies themselves. It should be remembered that the profiling and targeting of kids that is done by the social media companies is itself a significant danger to kids. In 2017, for example, a leaked document revealed that Facebook in Australia was offering advertisers (and hence not just advertisers) the opportunity to target vulnerable teens in real time.

“…By monitoring posts, pictures, interactions and internet activity in real-time, Facebook can work out when young people feel “stressed”, “defeated”, “overwhelmed”, “anxious”, “nervous”, “stupid”, “silly”, “useless”, and a “failure”,”

Facebook, of course, backed off from this particular programme when it was revealed – but it should not be seen as an anomaly, rather as part of the way that this kind of system works, and of the harm that the social media services themselves can represent for kids. End-to-end encryption could begin to limit this kind of thing – only to a certain extent, as the profiling and targeting mechanisms work on much more than just the content of messages. It could be a start though, and as kids move towards more private messaging systems the capabilities for this kind of harm could be reduced. If more secure, private and encrypted systems become the norm, children in particular will be safer and more secure.

Children need privacy

The reason that kids need anonymity and encryption is a fundamental one. It’s because they need privacy, and in the current state of the internet anonymity and encryption are key protectors of privacy. More fundamentally than that, we need to remember that everyone needs privacy. This is especially true for children – because privacy is about power. We need privacy from those who have power over us – an employee needs privacy from their employer, a citizen from their government, everyone needs privacy from criminals, terrorists and so forth. For children this is especially intense, because so many kinds of people have power over children. By their nature, they’re more vulnerable – which is why we have the instinct to wish to protect them. We need to understand, though, what that protection could and should really mean.

As noted at the start, ‘won’t someone think of the children’ has been a regular mantra – but it only gives one small side of the story. We need not just to think of the children, but to think like the children and think with the children. We should think more from their perspective, and not just treat them as though they need to be wrapped in cotton wool and watched like hawks. We also need to prepare them for adulthood – which means instilling in them good practices and arming them with the skills they need for the future. That means anonymity and encryption too.

Duty of care?

Priti Patel has suggested that the duty of care could mean no end-to-end encryption, and Dowden has suggested everyone should have verified ID. There’s a stronger argument in both cases pointing precisely the opposite way: that a duty of care should mean that end-to-end encryption is mandatory on all messenger apps and messaging systems within social networking services, and that real names mandates should not be allowed on social networking systems. If we really have a duty of care to our kids, that’s what we should do.

Paul Bernal is Professor of Information Technology Law at UEA Law School

Verified IDs for social media? I don’t think so….

‘Verified ID’ is almost (but not quite) as bad an idea as ‘real names’: this is why. It does have some advantages. It’s not as quick a stalker’s tool as real names. It doesn’t chill quite as much as real names. But it still has some very bad features.

Firstly, and most importantly, it is very unlikely to solve any problems. It still assumes that trolls act rationally and are ashamed of their trolling, or expect to be punished for their trolling if caught. If real names don’t do that, why would verified ID? That is, it doesn’t provide any real deterrent to trolling, or to racial abuse. So what problem are you trying to solve with it? If you want to make subsequent investigations and prosecutions easier, you’re still missing the point: we don’t have the capacity.

You need to specify very carefully what you’re trying to solve first. Deterrence won’t work. Supporting prosecutions won’t work. So what is it? Set that down first, before you suggest verified ID as a solution. Remember that many trolls think their comments are justified. Trolls tend not to think of themselves as trolls or of their activities as trolling – they think their abuse of Diane Abbott is really about her mathematical skills and so forth – so measures aimed at ‘trolls’ don’t apply to them.

Next, the downsides. Who will hold all this vital ID data? The social media companies? They’re the last people who should get vital information to add to their databases. Giving them more power is disastrous for all of us. Some ‘trusted’ third party? Who? How? Why? Remember who the government wanted to be ‘trusted’ over Age Verification? MindGeek, who own Pornhub. Who would they get here? Dido Harding? Trust is critical here, and trust is missing.

Next, the chilling effect. The people most in need of protection, the ones most at risk from real names, will still be chilled. Will someone with mental health issues want to give information that might be handed over to a service that might get them sectioned? And people who don’t trust the government or the police? Remember that the Investigatory Powers Act means the authorities can get access to all that data. This will chill them. Maybe that’s the intention.

Then we have the data itself. Whoever holds it, it’s vulnerable to misuse and to hacking – a honeypot of data. Experience makes that very clear. Even those with the best intentions make mistakes. There are hackers, leakers and more.

So if we want to do this, we need the benefits to outweigh these risks. So far, the benefits are minimal, if they exist at all. The risks are not minimal at all. And that still leaves the biggest elephant in the room: the REAL problem, and what lies behind it…

…because the real problem with racial abuse is the racism in our society. The racism in our media. The racism in our politicians. I like to blame Mark Zuckerberg for a huge amount – but here, he’s less responsible than Boris Johnson, Priti Patel, Nigel Farage etc.

So let’s not be distracted. I’m not against this in the absolute way I am against real names – but there are so many obstacles to be overcome before it could be made to work that I find it hard to believe it’s a realistic solution. AND it’s a distraction from the real problem.

Real Names: the wrong tool for the wrong problem

The drive towards enforcing ‘real names’ on the internet – and on social media in particular – is gathering momentum at the moment. Katie Price’s petition to require people to provide verifiable ID before opening a social media account is just a new variation on a very old theme – and though well intentioned (as are many of the similar drives) it is badly misdirected. Not only is it unlikely to solve any of the problems it is intended to solve, it would make things worse – and make it even harder to find genuine solutions to what are, for the most part, genuine problems.

The attraction of ending anonymity

Ending – or significantly curbing – anonymity on social media is superficially attractive. ‘They wouldn’t behave that way if they had to use their real names’ is one argument. ‘They only do it because we can’t find them’ is another. Neither of these things is really true. Evidence that people are less aggressive or less antagonistic if they are forced to use their real names is mixed at best – and indeed some large-scale studies have shown that trolls can be worse if they have to use their real names. More importantly, however, curtailing anonymity would have very damaging consequences for many vulnerable people, as well as distracting us from the real problems behind a lot of trolling. It isn’t the anonymity that’s the problem, it’s the trolling – and the reasons for the trolling are far deeper than the names people use when they troll. It isn’t the anonymity, it’s the aggression, it’s the anger, it’s the hate and it’s the lies. Whilst anger, hate and lies are endemic in our society – and notably successful in our media and our politics – they will be manifested online, and on social media in particular.

Trolls don’t need anonymity….

There are many assumptions behind the idea that real names would stop trolling. One is that trolls are ashamed of their trolling, so would no longer do it if they were forced to do it under their real names. For some trolls, this may be the case – but for others exactly the opposite is true. They may even be proud of their trolling, happy to be seen to be calling out their enemies and abusing them. Still others don’t consider themselves to be trolls, so wouldn’t think this applies to them. In troll-fights, it’s very common for both sides to think they’re the good guys, fighting the good fight against the evil on the other side. Their victims are the real trolls; they’re just defending themselves or fighting their own corner. This has been a characteristic of many of the major trolling phenomena of the last few decades – GamerGate is one of the most dramatic examples. Neither side in a conflict thinks they’re the Nazis; they both think they’re the French Resistance.

The downsides of ‘real names’.

Another assumption is that forcing real names only has downsides for trolls – that no-one else has anything to fear from having to use their real names, or from having to provide verifiable IDs for their social media accounts. Very much the opposite is true. There are many people who rely on anonymity or pseudonymity – some for their own protection, as they have enemies who might target them (whistle-blowers, victims of spousal abuse, gay teens with conservative parents, people living under oppressive regimes etc), others to enable their freedom of speech (people in responsible positions who might be compromised are just one example), including those who want their words to be taken at face value rather than being judged by who has said them. ‘Real’ names can reveal things about a person that make them a target – ethnicity, religion, gender, age, class, and much more – and in the current era that revelation can be more precise, more detailed and more damaging because of the kind of profiling possible through data analysis. Forcing real names is something that privileged people (including people like me) may not understand the impact of – because it won’t damage them or put them at risk. For millions of others, it would. People in that kind of privileged position should think twice before assuming their own position is the only one that matters.

Real names make the link between the online person and the ‘real’ person easier. That’s good when you think it will allow you to ‘catch’ the bad guy – but bad when you realise it will allow the bad guys to catch their victims. There’s a reason ‘doxxing’ is a classic troll tool – revealing documents about victims is a way to punish them. Forcing real names makes doxxing much easier – in practice, it’s like automatically doxxing people. Moreover, even if you don’t force real names but do require some kind of verified ID, you’re providing an immediate source of doxxing information for the trolls to use to find their victims. You might as well be painting ‘HACK ME PLEASE’ in red letters 100 feet high on your database of IDs. It’s a recipe for disaster for a great many people.

What is the real problem?

This is the question that is often missed. What are we worried about? There are many forms of trolling – but there are two that are particularly important here. The first is the specific, individual direct and vicious attacks – death and rape threats, misogyny and racism etc. Real names won’t stop this – even if it can be enforced – and we already have tools to deal with it, even if they’re not as often or easily applied as they should be. ‘Anonymous’ trolls can be and are identified and prosecuted for these kinds of attacks. We have the technological tools to do this, and the law is in place to prosecute them (the Malicious Communications Act 1988, S127 of the Communications Act 2003 and more). People have been successfully prosecuted and jailed for trolling of this kind. There wasn’t any need for real names or digital IDs for this. It’s not easy, it’s not quick, and it’s not ‘summary justice’ – but it can be done.

The second is the ‘pile-on’, where a victim gets attacked by hundreds or thousands of smaller-scale bits of nastiness simultaneously – often from many anonymous accounts. Some of the attacks are as vicious as the individual direct attacks mentioned above, but many won’t be – and wouldn’t easily be prosecuted under the laws mentioned above. It is the sheer weight of the number of attacks that can be overwhelming – you can block one or two attackers, you can mute more, you can ignore some others, but when there are hundreds every minute it is impossible to deal with other than by locking your account or withdrawing from social media. This is where technological solutions – and social media company action – could help, and indeed are helping. The ability on Twitter, for example, to automatically mute all people with default pictures can clean up a timeline a bit – taking out the most obvious of trolls. More of this is happening all the time – and again, it does not require real names or digital IDs.

What is more important in the latter example – and indeed in the former – is why it happens. Pile-ons happen because they’re instigated – and they’re instigated not by anonymous trolls, but by exactly the opposite: by the big names, the ‘blue ticks’, the mainstream media, the mainstream politicians. When a blue tick (and I’m a blue tick) quote-tweets someone with a sarcastic comment, the thousands (or millions) of followers who see that tweet can and will pile in on the person quote-tweeted. The sarcastic comment from a big name is the cause of the pile-on, though in itself it isn’t harmful (and certainly not a prosecutable death threat or piece of hate speech). If you go after the individual (and sometimes anonymous) account that makes the death threat without considering the reason it targeted that person, you don’t really do anything to solve the problem.

And that’s the bottom line. Right now, our political climate encourages hatred and anger. The ‘war on woke’, Trump, Brexit, Le Pen, Modi, the Daily Mail, all encourage it. Anonymity on social media isn’t the problem. Our society and our political climate is the problem. Ending anonymity would cause vast and permanent damage to exactly the people who we need to protect, and for only a slight chance of making it easier to catch a small subsection of those who cause problems online. It should be avoided strenuously.

(For a more serious academic analysis of this issue, see Chapter 8 of my 2018 book The Internet, Warts and All, or my 2020 book What do we know and what should we do about internet privacy.)

Why a real names policy won’t solve trolling

I don’t know how many times I’ve had to write about it, but it’s a lot. It comes up again and again. Anyway, once more I see that ‘real names’ are being touted as the solution to trolling. They aren’t. They won’t ever be – and in fact they’re highly likely to be counterproductive and deeply damaging to many of the vulnerable people they’re supposed to be protecting. Anyway, I’m not going to write something new, but give you an extract from my 2020 book, ‘What do we know and what should we do about Internet Privacy’ – which is relatively cheap (less than £10) and written, I hope, in language even an MP can understand. You can find it here or at any decent online bookseller.


Whenever there is any kind of ‘nastiness’ on social media – trolling, hate speech, cyber bullying, ‘revenge porn’ – there are immediate calls to force people to use their real names. It is seen as some kind of panacea, based in part on the idea that ‘hiding’ behind a false name makes people feel free to behave badly, and the related idea that they would be ashamed to do so if they were forced to reveal their real names. ‘Sunlight is the best disinfectant’ is a compelling argument on the surface, but when examined more closely it is not just likely to be ineffective but counterproductive and discriminatory, with the side effect of putting many different groups of people at significant risk. Moreover, there are already both technical and legal methods to discover who is behind an online account without the negative side effects.

The empirical evidence, counterintuitive though it might seem, suggests that when forced to use their real names internet trolls actually become more rather than less aggressive. There are a number of possible explanations for this. It might be seen as a ‘badge of honour’. Sometimes being a troll is something to boast about – and showing your real identity gives you kudos. Having to use your real name might actually free you from the shackles of wanting to hide. Perhaps it just makes trolls feel there’s nothing more to hide.

Whatever the explanation, forcing real names on people does not seem to stem the tide of nastiness. Platforms where real names are required – Facebook is the most obvious here – are conspicuously not free from harmful material, bullying and trolling. The internet is no longer anything like the place where ‘nobody knows you’re a dog’, even if you are operating under a pseudonym. There are many technological ways to know all kinds of things about someone on the internet regardless of ‘real names’ policies. The authorities can break down most forms of pseudonymity and anonymity when they need to, while others can use a particular legal mechanism, the Norwich Pharmacal Order, to require the disclosure of information about an apparently anonymous individual from service providers when needed.

Even more importantly, requirements for real names can be deeply damaging to many people, as they provide the link between the online and ‘real-world’ identities. People operating under oppressive regimes – it should be no surprise that the Chinese government is very keen on real-names policies – are perhaps the most obvious, but whistle-blowers, people in positions of responsibility like police officers or doctors who want to share important insider stories, victims of domestic violence, young people who quite reasonably might not want their parents to know what they are doing, people with illnesses who wish to find out more about those illnesses, are just a start.

There are some specific groups who can and do suffer discrimination as a result of real-names policies: people with names that identify their religion or ethnicity, for a start, and indeed their gender. Transgender people suffer particularly badly – who defines what their ‘real’ name is, and how? Real names can also allow trolls and bullies to find and identify their victims – damaging exactly the people that the policies are intended to protect. It is not a coincidence that a common trolling tactic is doxxing – releasing documents about someone so that they can be targeted for abuse in the real world.

When looked at in the round, rather than requiring real names we should be looking to enshrine the right to pseudonymity online. It is a critical protection for many of exactly the people who need protection. Sadly, just as with encryption, it is much more likely that the authorities will push in exactly the wrong direction on this.


Our Dom

A tavern in the shadow of a castle, somewhere in France, or perhaps County Durham. Dom sits in a large chair, looking a little morose. In comes a young, northern lad, a salt of the earth type, who looks over at Dom and stops.

Darren (for it is he): Why are you looking so sad, Dom? What’s wrong?

Dom looks up, but barely registers Darren’s existence. Darren is unfazed, and comes up to Dom and tries to cheer him up with a smile. In the background, a brass band (good Northern stuff) starts up, in a tune recognisable as coming from Disney’s Beauty and the Beast.

Darren starts, in a sing-song voice

Gosh, it disturbs me to see you Our Dom
Looking so down in the dumps
Every guy here’d love to be you, Our Dom
Even when taking your lumps
You’re Boris’s trusted adviser
You’re Laura K’s favourite source
I’ve never met anyone wiser
There’s a reason for that: it’s because…

the band strikes up a jaunty tune…

No… one… lies like our Dom
Fakes his cries like our Dom
Cannot tell the truth if he tries like our Dom

His lying can never be bested
From London to Durham and more
To drive so his eyesight is tested?
I laughed so much my ribs were sore…

No… one… cheats like our Dom
Does deceit like our Dom
Turns his enemies white as a sheet like our Dom

When it comes down to rewriting history
There’s no folk who can quite compare
Why people believe it’s a mystery
But it drives his foes’ hearts to despair…

No… one… takes like our Dom
On the make like our Dom
Makes his news quite so perfectly fake like our Dom

His lies they are brash, they are brazen
But the media just doesn’t care
He crafts lies for ev’ry occasion
And his army of trolls and of bots can then share…

No… one… drives like our Dom
Coaches wives like our Dom
Cares nothing for old people’s lives like our Dom

He can break any law with impunity
In elections and lockdown who cares?
His denials of planned herd immunity
Are about as convincing as Donald Trump’s hair…

No… one… sneers like our Dom
Stokes up fears like our Dom
No… one… lies like our Dom
Porkie pies like our Dom

He’s….
Our…
Dom!

Darren sits down, exhausted. Dom just ignores him, but a secret smile touches his eyes…

With apologies to anyone even slightly associated with Disney.

Contact tracing, privacy, magical thinking – and trust!

The saga of the UK’s contact tracing app has barely begun but already it is fraught with problems. Technical problems – the app barely works on iPhones, for example, and communication between iPhones requires someone with an Android phone to be in close proximity – are just the start of it. Legal problems are another issue – the app looks likely to stretch data protection law at the very least. Then there are practical problems – will the app record you as having contact with people from whom you are blocked by a wall, for example – and the huge issue of getting enough people to download it when many don’t have smartphones, many won’t be savvy enough to get it going, and many more, it seems likely, won’t trust the app enough to use it.

That’s not even to go into the bigger problems with the app. First of all, it seems unlikely to do what people want it to do – though even what is wanted is unclear, a problem which I will get back to. Secondly, it rides roughshod over privacy in not just a legal but a practical way – and despite what many might suggest, people do care about privacy, enough to make decisions on its basis.

This piece is not about the technical details of the app – there are people far more technologically adept than me who have already written extensively and well about this – and nor is it about the legal details, which have also been covered extensively and well by some real experts (see the Hawktawk blog on data protection, and the opinion of Matthew Ryder QC, Edward Craven, Gayatri Sarathy & Ravi Naik for example) but rather about the underlying problems that have beset this project from the start: misunderstanding privacy, magical thinking, and failure to grasp the nature of trust.

These three issues together mean that right now, the project is likely to fail, do damage, and distract from genuine ways to help deal with the coronavirus crisis, and the best thing people should do is not download or use the app, so that the authorities are forced into a rethink and into a better way forward. It would be far from the first time during this crisis that the government has had to be nudged in a positive direction.

Misunderstanding Privacy – Part 1

Although people often underplay it – particularly in relation to other people – privacy is important to everyone. MPs, for example, will fiercely guard their own privacy whilst passing the most intrusive of surveillance laws. Journalists will fight to protect the privacy of their sources even whilst invading the privacy of the subjects of their investigations. Undercover police officers will resist even legal challenges to reveal their identities after investigations go wrong.

This is for one simple reason: privacy matters to people when things are important.

That is particularly relevant here, because the contact tracing app hits at three of the most important parts of our privacy: our health, our location, and our social interactions. Health and location data, as I detail in my most recent book, what do we know and what should we do about internet privacy, are two of the key areas of the current data world, in part because we care a lot about them and in part because they can be immensely valuable in both positive and negative ways. We care about them because they’re intensely personal and private – but that’s also why they can be valuable to those who wish to exploit or harm us. Health data, for example, can be used to discriminate – something the contact tracing app might well enable, as it could force people to self-isolate whilst others are free to move, or even act as an enabler for the ‘immunity passports’ that have been mooted but are fraught with even more problems than the contact tracing app.

Location data is another matter and something worthy of much more extensive discussion – but suffice it to say that there’s a reason we don’t like the idea of being watched and followed at all times, and that reason is real. If people know where you are or where you have been, they can learn a great deal about you – and know where you are not (if you’re not at home, you might be more vulnerable to burglars) as well as where you might be going. Authoritarian states can find dissidents. Abusive spouses can find their victims and so forth. More ‘benignly’, it can be used to advertise and sell local and relevant products – and in the aggregate can be used to ‘manage’ populations.

Relationship data – who you know, how well you know them, what you do with them and so forth – is in online terms one of the things that makes Facebook so successful and at the same time so intrusive. What a contact tracing system can do is translate that into the offline world. Indeed, that’s the essence of it: to gather data about who you come into contact with, or at least in proximity to, by getting your phone to communicate with all the phones close to you in the real world.

This is something we do and should care about, and could and should be protective over. Whilst it makes sense in relation to protecting against the spread of an infection, the potential for misuse of this kind of data is perhaps even greater than that of health and location data. Authoritarian states know this – it’s been standard practice for spies for centuries. The Stasi’s files were full of details of who had met whom and when, and for how long – this is precisely the kind of data that a contact tracing system has the potential to gather. This is also why we should be hugely wary of establishing systems that enable it to be done easily, remotely and at scale. This isn’t just privacy as some kind of luxury – this is real concern about things that are done in the real world and have been for many, many years, just not with the speed, efficiency and cheapness of installing an app on people’s phones.

Some of this people ‘instinctively’ know – they feel that the intrusions on their privacy are ‘creepy’ – and hence resist. Businesses and government often underestimate how much they care and how much they resist – and how able they are to resist. In my work I have seen this again and again. Perhaps the most relevant example here was the dramatic nine-day failure that was the Samaritans Radar app, which scanned people’s tweets to detect whether they might be feeling vulnerable and even suicidal, but didn’t understand that even this scanning would be seen as intrusive by the very people it was supposed to protect. They rebelled, and the app was abandoned almost as soon as it had started. The NHS’s own ‘care.data’ scheme, far bigger and grander, collapsed for similar reasons – it wanted to suck up data from GP practices into a great big central database, but didn’t get either the legal or the practical consent from enough people to make it work. Resistance was not futile – it was effective.

This resistance seems likely in relation to the contact tracing app too – not least because the resistance grows spectacularly when there is little trust in the people behind a project. And, as we shall see, the government has done almost everything in its power to make people distrust their project.

Magical thinking

The second part of the problem is what can loosely be called ‘magical thinking’. This is another thing that is all too common in what might loosely be called the ‘digital age’. Broadly speaking, it means treating technology as magical, and thinking that you can solve complex, nuanced and multifaceted problems with a wave of a technological wand. It is this kind of magic that Brexiters believed would ‘solve’ the Irish border problems (it won’t) and led anti-porn campaigners to think that ‘age verification’ systems online would stop kids (and often adults) from accessing porn (it won’t).

If you watched Matt Hancock launch the app at the daily Downing Street press conference, you could see how this works. He enthused about the app like a child with a new toy – and suggested that it was the key to solving all the problems. Even with the best will in the world, a contact tracing app could only be a very small part of a much bigger operation, and only make a small contribution to solving whatever problems they want it to solve (more of which later). Magical thinking, however, makes it the key, the silver bullet, the magic spell that need only be spoken to transform Cinderella into a beautiful princess. It will never be that, and the more it is thought of in those terms the less chance it has of working in any way at all. Magical thinking means that the real work that needs to go on is relegated to the background or eliminated altogether, replaced only by the magic of tech.

Here, the app seems to be designed to replace the need for a proper and painstaking testing regime. As it stands, it is based on self-reporting of symptoms, rather than testing. A person self-reports, and then the system alerts anyone who it thinks has been in contact with that person that they might be at risk. Regardless of the technological safeguards, that leaves the system at the mercy of hypochondriacs who will report the slightest cough or headache, thus alerting anyone they’ve been close to, or malicious self-reporters who either just want to cause mischief (scare your friends for a laugh) or who actually want to cause damage – go into a shop run by a rival, then later self-report and get all the workers in the shop worried into self-isolation.

These are just a couple of the possibilities. There are more. Stoics, who have symptoms but don’t take it seriously and don’t report – or people afraid to report because it might get them into trouble with work or friends. Others who don’t even recognise the symptoms. Asymptomatic people who can go around freely infecting people and not get triggered on the system at all. The magical thinking that suggests the app can do everything doesn’t take human nature into account – let alone malicious actors. History shows that whenever a technological system is developed the people who wish to find and exploit flaws in it – or different ways to use it – are ready to take advantage.

Magical thinking also means not thinking anything will go wrong – whether it be the malicious actors already mentioned or some kind of technical flaw that has not been anticipated. It also means assuming that all these problems must be soluble by a little bit of techy cleverness, because the techies are so clever. Of course they are clever – but there are many problems that tech alone can’t solve.

The issue of trust

One of those is trust. Tech can’t make people trust you – indeed, many people are distinctly distrustful of technology. The NHS generates trust, and those behind the app may well be assuming that they can ride on the coattails of that trust – but that itself may be wishful thinking, because they have done almost none of the things that generate real trust – and the app depends hugely on trust, because without it people won’t download and won’t use the app.

How can they generate that trust? The first point, and perhaps the hardest, is to be trustworthy. The NHS generates trust but politicians do the opposite. These particular politicians have been demonstrably and dramatically untrustworthy, noted for their lies – Boris Johnson having been sacked from more than one job for having lied. Further, their tech people have a particularly dishonourable record – Dominic Cummings is hardly seen as a paragon of virtue even by his own side, whilst the manipulative social media tactics of the Leave campaign were remarkable for both their effectiveness and their dishonesty.

In those circumstances, that means you have to work hard to generate trust. There are a few keys here. The first is to distance yourself from the least trustworthy people – the Vote Leave campaigners should not have been let anywhere near this, for example. The second is to follow systems and procedures in an exemplary way, building in checks and balances at all times, and being as transparent as possible.

Here, they’ve done the opposite. It was almost impossible to find out what was going on until the programme was already in pilot stage. Parliament – through its committee system – was not given oversight until the pilot was already under way, and the report of the Human Rights Committee was deeply critical. There appears to have been no Data Protection Impact Assessment done in advance of the pilot – which is almost certainly in breach of the GDPR.

Further, it is still not really clear what the purpose of the project is – and this is also something crucial for the generation of trust. We need to know precisely what the aims are – and how they will be measured, so that it is possible to ascertain whether it is a success or not. We need to know the duration, what happens on completion – to the project, to the data gathered and to the data derived from the data gathered. We need to know how the project will deal with the many, many problems that have already been discussed – and we needed to know that before the project went into its pilot stage.

Being presented with a ‘fait accompli’ and told to accept it is one way to reduce trust, not to gain it. All these processes need to take place whilst there is still a chance to change the project – and change it significantly, because all the signs are that a significant change will be needed. Currently it seems unlikely that the app will do anything very useful, and it will have significant and damaging side effects.

Misunderstanding Privacy – Part 2

…which brings us back to privacy. One of the most common misunderstandings of privacy is the idea that it’s about hiding something away – hence the facetious and false ‘if you’ve got nothing to hide you’ve got nothing to fear’ argument that is made all the time. In practice, privacy is complex and nuanced and more about controlling – or at least influencing – what kind of information about you is made available to whom.

This last part is the key. Privacy is relational. You need privacy from someone or something else, and you need it in different ways. Privacy scholars are often asked ‘who do you worry about most, governments or corporations?’ Are you more worried about Facebook or GCHQ? It’s a bit of a false question – because you should be (and probably are) worried about them in different ways, just as you’re worried about privacy from your boss, your parents, your kids, your friends in different ways. You might tell your doctor the most intimate details about your health, but you probably wouldn’t tell your boss or a bloke you meet in the pub.

With the coronavirus contact tracing app, this is also the key. Who gets access to our data, who gets to know about our health, our location, our movements and our contacts? If we know this information is going to be kept properly confidential, we might be more willing to share it. Do we trust our doctors to keep it confidential? Probably. Would we trust the politicians to keep it confidential? Far less likely. How can we be sure who will get access to it?

Without getting into too much technical detail, this is where the key current argument is over the app. When people talk about a centralised system, they mean that the data (or rather some of the data) is uploaded to a central server when you report symptoms. A decentralised system does not do that – the data is only communicated between phones, and doesn’t get stored in a central database. This is much more privacy-friendly, but does not build up a big central database for later use and analysis.
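To make the distinction concrete, here is a rough sketch of how a decentralised scheme can work, loosely modelled on the DP-3T design favoured by privacy researchers – the names and simplifications are mine, not the code of any actual app:

```python
# Rough sketch of a decentralised contact tracing scheme (DP-3T style).
# Simplified and illustrative - not any real app's actual design.
import hashlib
import os

def daily_key() -> bytes:
    # Each phone generates its own random key; it never leaves the device.
    return os.urandom(32)

def ephemeral_ids(key: bytes, n: int = 96) -> list:
    # Short-lived rotating identifiers, derived from the key and broadcast
    # over Bluetooth. Without the key, nobody can link them to a person.
    return [hashlib.sha256(key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(n)]

# Bob's phone stores, locally, the ephemeral IDs it hears nearby.
alice_key = daily_key()
bob_heard = set(ephemeral_ids(alice_key)[:4])  # Bob was briefly near Alice

# If Alice tests positive, only her daily key is published. Every phone
# then checks *on the device* whether anything it heard matches - no
# central record of who met whom is ever created.
bob_exposed = any(eid in bob_heard for eid in ephemeral_ids(alice_key))
print("Bob possibly exposed:", bob_exposed)  # True
```

The matching happens on the phones themselves: the only thing ever uploaded is a key that says nothing about who its owner met.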

This is why privacy people much prefer the idea of a decentralised system – because, amongst other things, it keeps the data out of the hands of people that we cannot and should not trust. Out of the hands of the people we need privacy from.

The government does not seem to see this. They’re keen to stress how well the data is protected in ‘security’ terms – protected from hackers and so forth – without realising (or perhaps admitting) that the people we really want privacy from, the people who present the biggest risk to the users, are the government themselves. We don’t trust this government – and we should not really trust any government, but build in safeguards and protections from those governments, and remember that what we build now will be available not just to this government but to successors, which may be even worse, however difficult that might be to imagine.

Ways forward?

Where do we go from here? It seems likely that the government will try to push on regardless, and present whatever happens as a great success. That should be fought against, tooth and nail. They can and should be challenged and pushed on every point – legal, technical, practical, and trust-related. That way they may be willing to move to a more privacy-friendly solution. They do exist, and it’s not too late to change.

what do we know and what should we do about…? internet privacy

My new book, what do we know and what should we do about internet privacy, has just been published by Sage. It is part of a series of books covering a wide range of current topics – the first ones have been on immigration, inequality, the future of work and housing.

This is a very different kind of book from my first two books – Internet Privacy Rights, and The Internet, Warts and All, both of which are large, relatively serious academic books, published by Cambridge University Press, and sufficiently expensive and academic as to be purchasable only by other academics – or more likely university libraries. The new book is meant for a much more general audience – it is short, written intentionally accessibly, and for sale at less than £10. It’s not a law book – the series is primarily social science, and in many ways I would call the book more sociology than anything else. I was asked to write the book by the excellent Chris Grey – whose Brexit blogs have been vital reading over the last few years – and I was delighted to be asked, because making this subject in particular more accessible has been something I’ve been wanting to do for a long time. Internet privacy has been a subject for geeks and nerds for years – but as this new book tries to show, it’s something that matters more and more for everyone these days.


It may be a short book (well, it is a short book, well under 100 pages) but it covers a wide range. It starts by setting the context – a brief history of privacy, a brief history of the internet – and then shows how we got from what were optimistic, liberal and free beginnings to the current situation: all-pervading surveillance, government involvement at every level, domination by a few huge corporations with their own interests at heart. It looks at the key developments along the way – the world-wide-web, search, social networks – and their privacy implications. It then focusses on the biggest ‘new’ issues: location data, health data, facial recognition and other biometrics, the internet of things, and political data and political manipulation. It sketches out how each of these matters significantly – but how the combination of them matters even more, and what it means in terms of our privacy, our autonomy and our future.

The final part of the book – the ‘what should we do about…’ section – is by its nature rather shorter. There is not as much that we can do as many of us would like – as the book outlines, we have reached a position from which it is very difficult to escape. We have built dependencies to which it is hard, but not impossible, to find alternatives. The book outlines some of the key strategies – from doing our best to extricate ourselves from the disaster that is Facebook to persuading our governments not to follow the ultimately destructive paths they currently seem determined to pursue. Two policies get particular attention: ‘real names’, which though superficially attractive is ultimately destructive and authoritarian, failing to deal with the issues it claims to address and putting vulnerable people in more danger; and the current, fundamentally misguided attempts to undermine the effectiveness of encryption.

Can we change? I have to admit this is not a very optimistic book, despite the cheery pink colour of its cover, but it is not completely negative. I hope that the starting point is raising awareness, which is what this book is intended to do.

The book can be purchased directly from Sage here, or via Amazon here, though if you buy it through Amazon, after you’ve read the book you might feel you should have bought it another way!


Paul Bernal

February 2020

For Brexit

When hate and lies

Found wings to fly

And ignorance

Gained prominence

Those Empire songs

And Big Ben bongs

Nostalgic dreams

Weren’t what they seemed

Rose-tinted specs

With dire effect

And science dies

Beneath those lies

With knowledge lost

Old friendships tossed

For hateful thoughts

A mood they caught

And migrants blamed

Old hates inflamed

“Take back control”

And lose your soul

And so we go

Although we know

That wounded future

Finds no suture

All the madness

Leaves just sadness

It’s over now.

And how.

P Bernal

The BBC’s problems are no conspiracy theory…

The BBC’s latest response to the challenges to its election coverage, in a piece in the Guardian by Fran Unsworth, its director of news and current affairs, has a very welcome headline:

“At the BBC, impartiality is precious. We will protect it”

Fran, and the BBC, are right that their impartiality is precious – as well as being required by law – but by dismissing those who are challenging them as conspiracy theorists they are doing the opposite of protecting it. They’re helping to ensure its demise.

Not a conspiracy theory

The first and most important thing to say is that very few people – and no-one serious – are suggesting there is any kind of conspiracy going on here. To suggest that they are is a classic straw man argument. Conspiracy theories are easily dismissed, and often make little sense when analysed. Of course it’s impossible to get a large number of independent-minded journalists and individual editors to follow a conspiracy. We know that very well – but it’s absolutely not what the BBC is being accused of, so attacking it and dismissing it bears no relationship to the real problem – or real problems, because there are a number of connected problems involved here.

The problems with the BBC are qualitatively different. Unconscious or subconscious bias. A tendency to groupthink. Subservience to authority. High-handedness to the rest of us. This, coupled with a kind of naïveté and misunderstanding of the new media environment, is what produces the problems that we see with the BBC – and which the BBC either don’t see or don’t want to see or address.

Making mistakes

Everyone makes mistakes – though many might take issue with Fran Unsworth’s description of ‘a couple of editorial mistakes’ as something of an underestimate – and no-one expects all mistakes to be avoided. The big questions, though, are what kind of mistakes are made, how they are corrected and avoided in the future, and what kind of apologies are made for them. That’s where the question of unconscious or subconscious bias comes in. The two mistakes Fran Unsworth is presumably referring to are using the wrong clip for Boris Johnson at the Cenotaph and editing out the laughter that followed his answer about trust in the Question Time debate, but there are a number of others. The most noticeable thing about them, however, is not the individual errors, but that they all lean in the same direction. All tend to favour Boris Johnson. That’s where the question of bias comes in: not a conspiracy theory that the mistakes are made deliberately, under some kind of orders, but that they tend to follow the subconscious bias.

Subservience to authority

This is closely related to the accusation – made in particular by Peter Oborne – that the BBC is too servile to the Prime Minister’s Office. Again, this isn’t a conspiracy theory, but an observation, and certainly not one restricted to the BBC. Robert Peston fits the profile every bit as much as Laura Kuenssberg, for example. This is nothing new for the BBC, however, as the role of being a state broadcaster has consequences, but it has a particular significance in a time when those in authority – and those in Number 10 in particular – are notably less trustworthy than in the past.

Being willing to make compromises in order to get access is normal journalistic practice, but there are balances to be found and the main accusation is that the balance has been tipped too far. When Number 10 is restricting other media – bans on Channel 4 News and on the Daily Mirror for example – it should ring alarm bells in the minds of any journalists. When the criticisms of Peter Oborne are taken into account, those alarm bells should be listened to even more carefully.  Denying that it’s even possible that the balance may have been missed, rather than critical self-examination, is a recipe for disaster.

Fran Unsworth assures us that the BBC are not ‘cowed or unconfident’. I hope she’s right, but the evidence does not really support her. The other ‘mistake’ – failing to secure a date for an Andrew Neil interview with Boris Johnson whilst telling the other leaders (or at the very least hinting to them) that one had been secured – does not look at all good. Acquiescing to Johnson’s subsequent request to get the Sunday morning chat with Andrew Marr rather than the evening grilling by Neil makes it look even worse. A strong, ‘uncowed’ BBC would not have let either of those things happen.

Understanding the new media

Another key aspect of the current political climate – and again, the current occupants of Number 10 are critical here – is that the relationship between the old and the new media is vitally important. It is very easy for the ‘old media’ to get ‘played’ by skilful operators of the new media. Selectively RTing poorly phrased and incomplete tweets by BBC journalists, taking them out of context and not mentioning critiques that had been put in separate tweets is just one example. Using clips from interviews similarly selectively or even editing them to create an effect (making Keir Starmer pause and look as though he didn’t answer a question that he did, or editing out the laughter that followed Boris Johnson’s answer on trust) is pretty standard practice now – and the BBC should be aware of that.

There are things that BBC journalists could do to slow down this manipulation – such as including the criticism within the tweet rather than separately. “Mr Johnson again mentioned the 50,000 new nurses” in a tweet leaves it open to amplification without criticism; “Mr Johnson again claimed the debunked number of 50,000 new nurses” does not. Taking more care over words would help too: say that a politician ‘says’ or ‘claims’ rather than ‘reveals’ something if the thing they are claiming is at all dubious. Being cynical in the face of people with a track record of dishonesty isn’t being unfair, it’s being a proper journalist.

High-handedness to critics

The responses to criticism – and Fran Unsworth’s is just the latest of many – have been perhaps the most disappointing of all. Anyone even slightly criticising the BBC is dismissed as a conspiracy theorist, fobbed off with straw man arguments or worse. Huw Edwards suggested Peter Oborne looked ‘crackers’ for suggesting the clipped version of Boris Johnson’s response on trust had been edited – and even when the BBC eventually admitted it had been edited there has been no apology from Edwards.

This is pretty much the definition of gaslighting – and the BBC should know this and should find a much, much better way.

Trusting the BBC

Right now, we need the BBC to be working well. We need to be able to trust the BBC – and the BBC needs us to trust them. Calling its critics conspiracy theorists and miscasting their criticism as ‘crackers’ is pretty much guaranteed to damage that trust. It is already close to breaking point. Unless the BBC starts to understand this – and to openly acknowledge it, because I am quite sure there are a fair number of journalists and others in the BBC who are quite aware of the problems – that trust will be gone. The BBC needs to understand how it appears to others.

The dramatic cartoon in the Dutch newspaper de Volkskrant, showing Boris Johnson raping Britain whilst Nigel Farage, Jacob Rees-Mogg et al hold her down, has the BBC pushing away the crowd, saying the Dutch equivalent of ‘move along, nothing to see here’. This should really give the BBC pause for thought. What role are they taking? How do they want to be remembered? When the rest of the world can see it but the BBC themselves can’t, things have got very bad. This may be the BBC’s last chance. I hope it takes it.