The story of Google’s AI subsidiary DeepMind took a not-unexpected turn this week when the ICO ruled that the Royal Free NHS Foundation Trust failed to comply with the Data Protection Act when it provided patient details to DeepMind. This is the latest step in a saga that looks set to rumble on for some time – and one from which there are many, many lessons to be learned. One of those – sadly one that does not seem likely to be heeded as much as it should be – is that those involved in projects like this should pay more attention to those who can loosely be described as ‘privacy geeks’.
Two in particular have been critically involved in this process – Hal Hodson (@halhod) and Julia Powles (@juliapowles). Hal started the ball rolling with a serious piece of investigative journalism in New Scientist in April 2016, which brought the issue to light. As well as further journalistic work, Hal and Julia wrote a piece of ‘proper’ academic work – ‘Google DeepMind and healthcare in an age of algorithms’, in the journal Health and Technology. This led, ultimately, to the ICO’s investigation and ruling – though it has to be noted that the ICO’s ruling concerns DeepMind’s trial with the Royal Free: the real test will come when DeepMind’s work rolls out. The ICO has asked the Royal Free, amongst other things, to conduct a full ‘privacy impact assessment’ prior to further work. That they did not do so before the previous trial is one of the serious shortcomings of the project. As Julia Powles put it in the Guardian yesterday:
“The ruling states that by transferring this data and using it for app testing, the Royal Free breached four data protection principles, as well as patient confidentiality under the common law. The transfer was not fair, transparent, lawful, necessary or proportionate. Patients wouldn’t have expected it, they weren’t told about it and their information rights weren’t available to them.”
These are serious matters – and they could have been avoided, if only DeepMind had listened to the right people. To the privacy geeks. That they didn’t is part of a pattern that has been seen on many occasions in the past. It was one of the reasons that Samaritans Radar – the ill-conceived and ill-fated Twitter app launched by the Samaritans, one of the most respected charities in the world – had to be abandoned within ten days. It was one of the reasons that NHS England’s massive data project ‘care.data’ failed. Going further back, it was why the behavioural advertising firm Phorm failed – after conducting secret trials monitoring thousands of people’s web activity back in 2006 – and what led to all those annoying ‘cookie warnings’ you see at the top of websites.
In all these cases, the warning signs were there, if only the people involved had been willing to listen. The same will happen again – because the privacy geeks know what they’re doing. All too often those involved in these kinds of projects – people from businesses and from big public sector organisations – see those who raise concerns either as easily-dismissed tinfoil-hat-wearing conspiracy theorists, or as people who can cause a little trouble on Twitter but little more than that. Nothing to be taken seriously, little more than an annoyance. Moreover, they’re seen as barriers to innovation, people just raising trouble for its own sake, luddites or worse.
None of this is true. Firstly, the people involved – whether they’re journalists, academics or ‘activists’ (and often they wear more than one of those hats) – are often genuine experts. Hal Hodson’s degree from Trinity College Dublin is in Astrophysics, for example, whilst Julia Powles has a PhD in Law from Cambridge. Their concerns aren’t foolish, and the issues they raise aren’t raised just for the sake of it.
Secondly, they know how to use the media – both the social media and the ‘traditional’ media. Hal’s original work was in the New Scientist, and he’s now The Economist’s Technology Correspondent. Julia writes regularly for the Guardian. Both know people all over the media and academia – and they’re far from alone. The failure of care.data and Samaritans Radar involved different people (there are many of us) but similar patterns – blogs, articles in the mainstream media, academic attention and more.
Thirdly, and perhaps most importantly, the people involved are far from a barrier to innovation. I have labelled them (and I’m very much one of them!) ‘geeks’ for a reason. We’re not geeks only about privacy – we’re real geeks. We like technology, we like innovation. We play with all the new technological toys, and see the potential in all kinds of directions – but we want these innovations to work for the people, to work responsibly, to be sustainable. Indeed, this last point is critical – it is a central tenet of much of my own academic work that if privacy is not considered properly, it is not just that a project should fail, but that it will fail. People will reject it – who now remembers the wonderful Google Glass, for example? Despite the sexy technology and the backing of Google’s deep pockets it died a death. It may well re-emerge at some point, but it need not have failed…
…and the same is true of many other projects. There are some great ideas, great innovations, that could avoid suffering the fate of Samaritans Radar, care.data and Google Glass. If they are to do so, the people involved should start listening to the privacy geeks, and sooner rather than later. Don’t see us as the enemy. Don’t try to hide what you do – it is very tempting to do everything you can ‘under the radar’, but when it is revealed it looks even worse. That was true of DeepMind’s deal with the Royal Free – and just as true of Phorm’s ‘secret trials’ with BT and others back in 2006. One thing that people really should have learned is that these things do get discovered, one way or another. When they do, and it looks as though they’ve been done secretly or without proper scrutiny, they look even worse than they are.
It can all be avoided – but it rarely is. Sadly I expect to have to write similar pieces to this many times in the future.