Samaritans Radar is not a ‘privacy’ story!

The story of the Samaritans Radar app continues to rage on. I’ve written about it before, and how it demonstrates a misunderstanding of privacy – but that doesn’t mean that it’s really a ‘privacy’ story at all. Quite the opposite. Privacy is largely incidental to the story of Samaritans Radar, and not even close to the central problems with the system – and I say that as a privacy scholar and a privacy advocate.

This has come out twice for me during the day. Firstly, when I was accused by one of the supporters of the Samaritans Radar app of being one of a handful of privacy advocates trying to derail an app that will save lives. Secondly, when one of the opponents of the app pointed out that the mainstream media seems to be treating this primarily as a privacy story, and suggesting that the main problems with the app relate to privacy.

Now privacy is an important part of the problem with the Samaritans Radar app – but only in an instrumental way. As I see it, the real problems with the app are that it makes already vulnerable people more vulnerable, that it disrupts what is, for many people, a genuinely positive online community – a place where they can find support in a natural, human way – and that it breaks down trust. It is trust that is critical here, not privacy.

I was alerted to the existence of the app, and to the problems with it, by the tweeter @latentexistence, an activist in mental health and disability – and it’s important to understand that the first people concerned about the app were people who are part of the mental health community on Twitter, not the privacy advocates. They remain the core of the ‘resistance’ to the app – privacy advocates like me are supporters of theirs, not the instigators of the resistance. I’ve worked a bit in the mental health field, but peripherally – I was the finance director of a mental health charity. Enough, though, to recognise the importance of trust, and how privacy is critical for that trust. If vulnerable people are to feel safe, they need privacy – not as some abstract or airy-fairy right, but as something to protect them.

There’s a strong and effective community of mental health professionals, and people who have mental health issues, on Twitter. Twitter offers some significant advantages to some people with both physical and mental health issues – that’s why things like the remarkable ‘Spartacus Report’ happened. I’ve been deeply inspired by them, and feel happy to support them in any way that I can. That’s why I got involved in the campaign against the Samaritans Radar app – not just because it fits ‘my’ privacy agenda.

…and anyway, the way that I see privacy is primarily instrumental. My book is called ‘Internet Privacy Rights: Rights to Protect Autonomy’, because that’s what I see as the most important purpose of privacy: to protect our autonomy, as much as it can. Vulnerable people, people who might one day contact the Samaritans, need that autonomy. They want that autonomy – and the Samaritans, in their non-online form, respect that absolutely. It’s up to people to call them – and when they call, they are listened to and respected, not judged. The Samaritans Radar app reverses that – the fact that it invades privacy in order to do so is not nearly as important as the way that it breaks their trust and disrespects them and their autonomy.

Samaritans Radar: misunderstanding privacy and ‘publicness’

The furore over the launch of the Samaritans Radar app has many dimensions: whether it’s ethical, whether it will help, whether it will chill (putting vulnerable people off using Twitter), and whether it’s legal (there are huge data protection issues) are just a start. Many excellent pieces have been written about it from all these angles, and they almost all leave me thinking that the whole thing is misconceived, however positive its motivations may be.

I’m not going to go over many of these, but want to look at one particular angle where it seems to me that the creators of the app have fundamentally misunderstood something. To recap, once someone authorises the Samaritans Radar app, that app will automatically scan the tweets of all the people that person follows, looking for signs in those tweets of potentially worrying words or phrases: triggers that suggest that the tweeter may be at risk. The tweeter does not know that their tweets are being scanned, as it’s only the person who’s authorised the app whose consent has been sought – and it’s important to remember that we don’t generally have control over who follows us. Yes, we can block people, but that often seems an overly aggressive act. I very rarely block, for example.
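To make the consent problem concrete, here is a minimal sketch of the kind of scanning described above: an app, authorised by one user, checking the tweets of everyone that user follows against a list of worrying phrases. The trigger phrases and function names here are my own hypothetical illustrations, not the Samaritans Radar implementation.

```python
# Hypothetical trigger phrases -- illustrative only, not Radar's actual list.
TRIGGER_PHRASES = ["want to die", "can't go on", "no one cares"]

def flag_worrying_tweets(followed_users_tweets):
    """Return (author, tweet) pairs containing a trigger phrase.

    `followed_users_tweets` maps an author's handle to their recent
    tweets. Note who is missing from this function's arguments: the
    authors themselves, none of whom consented to being scanned.
    """
    flagged = []
    for author, tweets in followed_users_tweets.items():
        for tweet in tweets:
            text = tweet.lower()
            if any(phrase in text for phrase in TRIGGER_PHRASES):
                flagged.append((author, tweet))
    return flagged

sample = {
    "@someone": ["Lovely day out", "I just can't go on like this"],
    "@another": ["Match was great last night"],
}
print(flag_worrying_tweets(sample))
```

The point of the sketch is structural: consent sits entirely with the person who installed the app, while the scanning falls entirely on the people they follow.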

The logic behind the Samaritans Radar approach to privacy is simple: tweets are ‘public’, therefore they’re fair game to be scanned and analysed. Their response to suggestions that this might not be right is that people always have the option of making their twitter accounts private – thus effectively locking themselves out of the ‘public’ part of Twitter. On the surface this is logical – but only if you think that ‘private-public’ is a two-valued, black-and-white issue. Either something is ‘public’ and available to all, or it’s ‘private’ and hidden. Privacy, both in the ‘real’ world and on Twitter, doesn’t work like that. It’s far more complex and nuanced than that – and anyone who thinks in those simple terms is fundamentally misunderstanding privacy.

The two extremes are fairly obvious. If you sit in a TV studio on a live programme being broadcast to millions, everything you say is clearly public. If you’re in a private, locked room with one other person, and have sworn them to secrecy, what you say is clearly private. Between the two, however, there is a whole spectrum, and defining precisely where things fit is hard. You can have an intimate, private conversation in a public place – whispering to a friend in a pub, for example. Anyone who’s been to a football match, or been on a protest march, knows theoretically that it’s a public place, but might well have private conversations, whether wisely or not. Chatting around the dinner table when you don’t know all the guests – where would that fit in? In law, we can analyse what we call a ‘reasonable expectation of privacy’, but it’s not always an easy analysis – and many people who might be potentially interested in the Samaritans should not be expected to understand the nuances of the law, or even the technicalities of Twitter.

On Twitter, too, we have very different expectations of how ‘visible’ or obscure what we tweet might be. We’re not all Stephen Fry, with millions of followers and an expectation that everything we write is read by everyone. Very much the opposite. We know how many followers we have – and some might assume, quite reasonably, that this is a fair representation of how many people might see our Tweet. It’s very different having 12 followers to having 12 million – and there are vastly more at the bottom end. Indeed, analysis at the end of 2013 suggested that 60% of active Twitter accounts have fewer than 100 followers, and 97% have fewer than 1000. That, to start with, suggests that most Twitter users might quite reasonably imagine that their tweets are only seen by a relatively small number of people – particularly as at any time only a fraction of those who follow you may be online and bother to read your tweet.

Further, not all tweets are equally visible – and experienced tweeters should know that. There are ways to make your tweets a little more intimate, and ways to make them more easily visible. If you tweet in response to someone, and leave their twitter tag at the start of the tweet, it will only appear on the timelines of people that follow both you and the person you are responding to. That’s why people sometimes put a ‘.’ in front of the tag.

A tweet like this, for example, would only be immediately visible to myself and the first tweeter named, and people who follow both of us, which is not likely to be a very large number.

[Screenshot of a tweet beginning with @ABeautifulMind1]

If I had put a ‘.’ (or indeed any other characters) in front of @ABeautifulMind1, it would have been visible to all of the 9,000+ people who follow me. I made the decision not to do that – choosing to limit the visibility of the tweet. Having a semi-private conversation in a very public forum. Of course other people could find the tweet, but it would be harder – just as other people could hear a conversation on a public street, but it would be harder.
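That timeline rule, as it worked at the time, can be modelled in a few lines. This is a toy illustration of the rule as described above, not Twitter’s actual code, and the handles and follower sets are invented for the example.

```python
def timeline_audience(tweet, sender_followers, mentioned_user_followers):
    """Return the set of followers who see the tweet on their timeline.

    Models the (2014-era) rule: a tweet *starting* with "@username"
    only reaches people who follow both the sender and that username;
    prefixing any character (such as ".") restores normal visibility.
    """
    if tweet.startswith("@"):
        # Reply-style tweet: only mutual followers see it.
        return sender_followers & mentioned_user_followers
    # Any other leading character (e.g. "."): all the sender's followers.
    return sender_followers

mine = {"alice", "bob", "carol"}        # people following me (invented)
theirs = {"bob", "dave"}                # people following @ABeautifulMind1 (invented)

print(timeline_audience("@ABeautifulMind1 hello", mine, theirs))   # only the mutual follower
print(timeline_audience(".@ABeautifulMind1 hello", mine, theirs))  # all my followers
```

The single leading character is doing all the work: it is the difference between a semi-private conversation and a broadcast.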

You can do the reverse, and try to make your tweet more rather than less visible. Adding a hashtag, for example, highlights the tweet to people following that hashtag – live tweeting my anger at BBC Question Time by adding the hashtag #bbcqt, for example. I could mention the name of a prominent tweeter, in the hope that they would read the tweet and choose to re-tweet it to their thousands or millions of followers. I could even ‘direct-message’ someone asking them to retweet my tweet as a special favour. All of these things can and do change the visibility – and, in effect, the publicness of the tweet.

Some people will understand all this. Some people won’t. Some people will have the two-valued idea about privacy that seems to underlie the Samaritans Radar logic – but, by both their thoughts and their actions, most people are unlikely to. We don’t all guard our thoughts on Twitter – indeed, that’s part of its attraction and part of its benefit for people with mental health issues, or indeed people potentially interested in the services of the Samaritans. Many people use Twitter for the equivalent of private conversations in the pub – and that’s great. Anyone who uses Twitter often, and anyone with any understanding of vulnerable people, should know that – and see beyond the technical question of whether a tweet is ‘public’ or not.

The Samaritans responded to some of these questions, after their initial and depressing ‘you can lock your account’ response, by suggesting that people could join a ‘white list’ that says their tweets should not be scanned by Samaritans Radar – but that doesn’t just fail to solve the real issues, it might even exacerbate them. First of all, you have to be aware that you’re being scanned in order to want to be on the white list. Secondly, you’re adding yourself to a list – and not only is that list potentially vulnerable (both to misuse and to being acquired, somehow, by people with less than honourable motives), but the very idea of being added to yet another list is off-putting in the extreme. Anyone with negative experiences of the mental health services, for example, would immediately worry that being on that list marks you out as ‘of interest’. We don’t like lists, and with good reason.

At the very least, the system should be the other way around – you should have to actively ‘opt-in’ to being scanned. Having an opt-in system would be closer to the Samaritans’ role: the person would say ‘please, watch me, look after me’, as though they were phoning Samaritans. Even then, it’s far from perfect, as a decision to let people watch you at one point may not be relevant later. People’s minds change, their sensitivity changes, their level of trust changes. They should be able to revoke that decision to be watched – but even making them do that could be a negative. Why should it be up to them to say ‘stop scanning me’? With sensitive, vulnerable people, that could be yet another straw on the camel’s back.

Personally, I’d like the Samaritans to withdraw the app and have a rethink. This isn’t just a theoretical exercise, or a bit of neat technology – these are real issues for real people. It needs sensitivity, it needs care, it needs a willingness to admit ‘Oh, we hadn’t realised that, and we were wrong.’ With Samaritans Radar, I think the Samaritans have really got it wrong, in many ways. The privacy and publicness issue is just one of them. It does, however, add weight to the feeling that this whole idea was misconceived.