Online privacy and identity – and Disney Princesses!

This afternoon I gave a presentation at Gikii, one of the most remarkable of conferences, where law meets technology meets science fiction and popular culture. It wasn’t exactly a serious presentation – the subject matter is Disney Princesses, after all – but there is a bit of a point to it too… I’ve put a video of the presentation below – as well as some notes to accompany it.

What does every princess need? A dress? A tiara? Glass slippers? A palace? A fairy Godmother? A handsome prince?

No. What every princess needs – what we all need – is autonomy. Every one of the Disney Princesses struggles to get the kind of life they want, pushed in ways they don’t want by powerful forces, by established norms, and by systems that seem designed to control them. How can they break free? How can they gain the autonomy they want? One key tool for all of them is to have more control over their privacy and their identity.

Snow White, the first of the Disney Princesses, had a vital need to protect her privacy. She was under surveillance by her stepmother, the Evil Queen. The Queen’s magic mirror was every bit as effective as PRISM in its ability to search out every face and perform a detailed analysis in order to determine who was the fairest of them all – and to give the Queen all the details she needed to locate Snow White and track her down.

Cinderella had a different problem – her identity was stolen by her stepmother, who forced her to take on a false identity under her stepmother’s control. This false identity had a huge impact on her autonomy, blocking her from finding what she wanted and needed. To solve her problems she had to create a new identity – using the magical assistance of her Fairy Godmother – in order to get the life she deserved and wanted.

Princess Aurora, the Sleeping Beauty, had problems with both privacy and identity. She needed privacy, and had to live under a pseudonym, Briar Rose, to protect herself from the wrath of Maleficent, the bad fairy. It wasn’t until her sixteenth birthday that Maleficent finally caught her – but then, Maleficent only had all the forces of evil working for her: if she’d had the NSA, she’d have caught Princess Aurora before she was out of swaddling clothes.

Belle – the heroine of Beauty and the Beast – is herself struggling with autonomy, wanting to find something different from the ‘provincial life’ that she leads, but privacy and identity really come into play in relation to the Beast. Belle wants to protect the Beast’s privacy, but inadvertently lets the evil Gaston see him. The Beast is profiled by Gaston, and given the full torches-and-pitchforks treatment. It’s not just evil witch-queens whose surveillance we need to worry about – sometimes privacy invasions by those we love can end up damaging us.

Jasmine, the daughter of the Sultan in Aladdin, finds her autonomy severely curtailed. In order to escape the marriage the norms require of her, she has to try to create a new identity for herself. The same, in a different way, is true for Aladdin – he needs the false identity of Prince Ali to get into a position to let his real identity shine through. Note that, like so many of those on the Dark Side, the evil Vizier Jafar uses surveillance as part of his machinations.

Ariel, the Little Mermaid, creates a new identity for herself (as a human) to get the kind of life she wants. She was always fascinated by humanity – magic allowed her to experience it. Online, the ability to create and use your own identity can give you some of that magic – while policies such as Facebook’s, which demand real names and ‘real’ identities, take away that magic.

The issues for Pocahontas were all about identity – but a different aspect of identity: the need to be able to assert your identity. Pocahontas needed to say ‘this is me’ – that she wasn’t the savage the English settlers thought she was, based on their inaccurate profiling and prejudicial assumptions. Assertion of identity – and overcoming these kinds of presumptions – is crucial to online autonomy.

Mulan’s problems were also about identity, but from a very different perspective. She had to create a new identity – as a boy rather than a girl – in order to protect her father from disgrace, and she needed privacy in order to protect this identity and not have her ‘real’ identity revealed. There can be good reasons to allow pseudonymity – and to protect privacy. It’s worth noting that Mulan’s eventual identity combines elements of both the original and the created identities.

Tiana, from The Princess and the Frog, had another identity problem. When she kissed her frog, it wasn’t the frog that became a prince but the ‘princess’ who became a frog. Just like Cinderella, she had a new identity thrust upon her, one that she didn’t like and didn’t want – and it took a great struggle and a good deal of magical assistance to find her way to becoming human again.

Rapunzel, the heroine of Tangled, had a different kind of privacy problem: she had a kind of privacy forced upon her when she was locked away in a tower. She needed to find a way to escape: it is important to understand that privacy is not just about hiding, but about having the ability to decide how and when information about yourself is made available and to whom.

Princess Leia – now a Disney Princess since Disney’s acquisition of the rights to Star Wars – has a desperate need for privacy and for the protection of her identity, though she herself is not even aware of it. She needs the kind of privacy that many children need – privacy from her parents, and in particular from her father, Darth Vader! The way that children need privacy from their parents – indeed, that they have a right to privacy from their parents – is often missed or misunderstood. Not all parents are benign, and even if they are, children often need privacy from them.

Last, but by no means least, is Merida, the heroine of Brave. Merida’s whole story is about autonomy and identity – she wants to forge her own identity, one very different from the identity chosen for her by her mother. She wants to claim her own kind of life, one in which she has autonomy. Merida is able to do this partly through the use of magic, partly through sheer force of will, and partly, in the end, through changing the law…

But they’re only fictional princesses…

Yes, they are – and yes, to a great extent they’re clichéd, they’re sexist and so on, though given the dates that many of them were created (Snow White dates back to 1937) some of that can be overstated. Even so, they still resonate with many people. They hint at things we all need – particularly those of us who don’t feel in control of our lives, which ultimately is most of us! We all need privacy, and we all need control over our identity – sometimes to conceal it temporarily, sometimes to forge new identities. We may not face evil magicians, dragons or worse, but even so we do face challenges that privacy, and protection and control over identity, can help to solve.

The analogies between what the princesses face and the online world are fairly direct. The way in which many of them face surveillance is one. The ways in which they often need magical assistance to gain control over their identities are analogous to the way that the technology of the internet can allow people to create and control their online identities. The forces that oppose the princesses are analogous to the forces that face many of us online – authorities wanting to get information about us and control our activities, commercial forces wanting to make us follow a particular line in order to buy their products or use their services, friends, relations (or even enemies) wanting to know more about us, and so forth. The risks are analogous too.

It’s worth noting that as shown above, there are privacy or identity issues with all of the Disney Princesses. Every single one. It’s also worth noting that not only do the films come from a great span of time, but the stories from which the films are taken come from all over the world, and from many different times. Mulan, for example, was inspired by what is believed to be a true story from as long ago as the 4th century AD. These are perennial issues, and perennial problems. The need for privacy and the protection of identity really is a tale as old as time.

‘Real names’ chill free speech….

The Huffington Post has recently announced that it is going to bring in a policy of allowing comments on its posts only from those who use their real names. The idea, as I understand it, is to cut down on abusive comments and trolling – but from my perspective the policy is not only highly unlikely to be effective but also short-sighted and ultimately counter-productive. Indeed, ‘real names’ policies like this are likely to be deeply detrimental to free speech in any really meaningful form. Real names policies work, to a great extent, to help the powerful against the vulnerable, to exacerbate existing power imbalances and to further marginalise and alienate those who are already on the fringes. It is a huge subject, far bigger than I can do full justice to here, but these are some of the reasons that I think the Huffington Post – and anyone else who instigates a real names policy – are fundamentally wrong in their approach.

Making links to the real world

Real names can help to make a link from the online world to the ‘real’ world – indeed, that’s really the point, if you want to make people ‘accountable’ for their comments. If you have any kind of vulnerability in the ‘real’ world that can be very bad news. Anything you do online can emphasise that vulnerability – and put you at real risk. The risks are different for different people, but they’re real. For many of those people, the risks may well be sufficient to silence them completely – and it’s not just the ‘obvious’ people who might be silenced. There are many different groups who might need some degree of anonymity – these are just a few of the possibilities.

1) Whistleblowers

The role of the whistleblower has come under huge scrutiny recently – Obama’s apparent ‘war on whistleblowers’ has been hugely criticised. Whistleblowers in most forms would be crushed by the need to provide real names. The organisations about which they are blowing the whistle would find it much easier to find them and silence them – or worse. It is already very risky to be a whistleblower: requiring real names makes it far more dangerous.

2) People in positions of responsibility

In some ways related to whistleblowers are those whose positions of responsibility would be compromised if their real names came out. Doctors, police officers, soldiers, teachers, social workers and so forth are just some of these – and they are people who can often give invaluable insight into important things in our society. Perhaps the best-known online example was the NightJack blogger – a police insider who provided a brilliant blog, winning the Orwell Prize for blogging in 2009. NightJack gave a warts-and-all portrayal of the life of a policeman, something he could not possibly have done if he had been forced to use his real name. Indeed, when his name was revealed – via illegal email hacking by a Times journalist, as it turned out (see David Allen Green’s piece here) – his blog was silenced. There are many others – yesterday one of the people I know on twitter reminded me that, when they were operating as a prison chaplain, they could not possibly have blogged or tweeted under their real name.

3) People with problematic pasts

It’s not immediately obvious, but some people like to – and need to – operate online to escape a problematic past. Something they have done, or something that has happened to them, whether it be merely embarrassing or far more serious, could ‘catch up with them’ if they operate using their real name. This isn’t about rewriting history – it’s about being able to make a fresh start. By enforcing a real names policy you deny them this opportunity.

4) People with enemies

This doesn’t just mean the kind of person you read about in thrillers or see on TV detective stories – it means people who have been stalked, it means people who have had arguments with former colleagues, it means people who have caused ‘trouble’ at work, or against whom someone has just taken a dislike. It might just be people with problematic neighbours. Forcing any of them to reveal their real names helps their enemies find them.

5) People with complex or delicate issues

The most obvious of these is sexuality – if we lived in a world where people did not get abused for their sexuality it would be great, but we don’t. For some people exploring whether or not they might be gay is a huge and delicate issue – and they wouldn’t even dare ask the questions they really need to ask if they were forced to reveal their real names. Sexuality is far from the only area where this kind of issue can raise its head – religion, politics, even such things as vegetarianism or liking particular kinds of music can be things that make people sensitive. Force real names and you stop them being explored.

6) People living under ‘oppressive’ regimes

This much should be obvious – and it’s far from surprising that the Chinese government is a staunch supporter of real names policies, and has gone so far as to legislate in that direction. Express a dissenting opinion and you will be hunted down. However, as recent events have suggested, it’s not just the obvious regimes that might be seen as oppressive – and regimes change, and not just for the better. Put a real names policy in place under a relatively benign government, and a subsequent, more dictatorial regime will be able to use it.

7) People who might be involved in protest or civil disobedience

With protests in the UK about the badger cull looming, this issue will no doubt come to the fore. Already an injunction has been brought in to try to block most of the protests, and the government has announced that it is going to scan social media to try to ‘head off’ protests – and if people involved in protest have to use their real names in their online activities it will be far easier for the authorities to find them and crack down. For me, protest is a fundamental part of democracy. Already it is much more limited than it should be – and real names policies can curtail it still further.

8) Young people

The position for young people is complex. One of the characteristics of the life of young people is that there are other people in positions of power over them, whether they be parents, teachers or others. That makes you more vulnerable – if your teachers or parents find out that you’re saying things or asking things that they don’t approve of, you can be in trouble, or worse. It also makes it easier for people to disregard or override your views – you’re only a kid, your views aren’t worth listening to. The internet allows a degree of this prejudice to be overcome – people can be judged by what they say, not by how old they are. Real names policies suppress young voices.

9) Women

It would be great if women were not likely to be targeted for abuse, but as recent events have shown this is far from the case. It would also be great if all women were ‘strong’ enough not to worry about the risk of being abused, but some aren’t, and none should need to be. For some women, a way out of this – temporary, many might hope – is to use pseudonyms that don’t necessarily reveal their sex. This kind of a tactic can really help in some situations – and preventing it can silence some crucial voices.

10) Victims of spousal abuse

A special and particularly nasty case of this is that of victims of spousal abuse – people whose partners or ex-partners are violent or abusive often want to track down their victims. If people are forced to reveal their real names, they can be tracked down far more easily – with devastating consequences.

11) People with religious or ethnic names

Forcing people to reveal their names can force them to reveal much more about themselves than might be immediately obvious – it can reveal, or at least suggest, your ethnicity or religion, amongst other things. That can make you a target – and it can also mean that what you say is viewed with prejudice, ignored or abused. Real names policies make it much harder for people in that kind of position to be heard.

12) People with a reputation

Sometimes you don’t want to be judged by who you are, but by what you say – and this can work in many different directions. ‘Give a dog a bad name’ is one part of it – but it can work the other way too. JK Rowling was recently justifiably angry when it was revealed that she was the author of a detective story under a pseudonym – she wanted the novel to be judged for what it was, not because she was the author. That’s a dramatic example, but the point is much deeper – when you want to test out your views it can really help to write them anonymously.

13) People needing an escape from difficult or stressful lives

In the current climate, this means a great many of us – we want to separate our online lives, at least to an extent, from our real lives. It is a liberating feeling, and can help provide relief from stress, and a chance to do something different. Again, this is crucial to real free speech.

14) Vulnerable people generally

There are so many kinds of vulnerability that it’s hard to even scratch the surface – and any kind of vulnerability can make you feel at risk of being ‘exposed’. That chills speech. It doesn’t have to be ‘serious’ to have the effect – even without an obvious vulnerability many people just feel more comfortable speaking out without fear of ‘showing’ themselves.

The risks and rewards of anonymity

Of course there are risks with allowing anonymity and pseudonymity – and there are some hideous anonymous trolls and abusers online – but there have to be other ways to deal with them. Better ways, with less of a chilling effect on free speech.

It’s easy from a position of power or privilege to think real names policies will work. I use my own real name online – but I’m in a position to do so. I’m safe, secure, privileged and lucky enough to have a job and an employer that supports me in this way. I have a feeling that many of those advocating real names policies are in similar positions – they lose nothing and risk nothing by revealing their real names. Not all people are in such fortunate positions. Indeed, many of those whose voices we most need to hear are not in that kind of a position – the categories I’ve outlined above are just some of the possibilities. Going for a ‘real names’ policy will silence those key voices.

On top of all of this, from my perspective, we have a right to create, assert and protect the identities we use online – and that, amongst other things, means we have a right to pseudonymity. The Internet offers us the opportunity to bring that right to bear. It’s what makes Twitter a livelier place to debate than Facebook – well, one of the things. If you want real debate, and real free speech, you need that liveliness. You need to let those who need pseudonymity find voices for themselves. Real names policies deny them this opportunity.

I hope the Huffington Post reconsider their position – and, more importantly, I hope that other groups don’t follow their lead.

Topical subjects….

I was struck by something yesterday: the things that I research and teach about seem to be becoming more and more topical. What drove it home was the Philip Schofield/David Cameron incident on ITV’s This Morning – happening in a week where on Monday and Tuesday I was lecturing about defamation defences and defamation reform, and on the very day of the incident was running a seminar on privacy and defamation in the press.

It was a bit strange – and I hope the students appreciated the strangeness – to be able to talk about an incident as it actually unfolded. We looked at the clip on the internet as it happened, and discussed all the potential issues – because there are big potential implications, in terms of both privacy and defamation, from the event. I’m not going to write much more about it here – I want to see how events unfold – save to say that for me it’s really important to understand that even people we intensely dislike or disapprove of have rights, and need to have those rights respected.

That means people accused of or suspected of paedophilia – even more than most, because if they’re innocent, the false accusation is one of the most hideously damaging accusations there is, and the torches and pitchforks seem to be out on twitter and elsewhere at the drop of a hat in the current climate. It also means David Cameron – as anyone who knows me will realise, I’m a strong opponent of Cameron and his government, but that doesn’t mean that he doesn’t have rights. Neither does it mean that he should be ambushed in the way that he was (though, as others have pointed out, he did choose the ‘safe’ sofa of This Morning over the tough chair of Newsnight, so to an extent he is his own worst enemy). It also means, in my opinion, that we should give him the benefit of at least some doubt – personally I don’t think he was deliberately trying to suggest that there’s some sort of link between paedophilia and homosexuality, but rather warning that others might try to make such a link. Anyone who ever reads the Daily Mail would know that this is entirely likely!

I shall be watching as events unfold with a lot of interest – and some trepidation, because there are distinct possibilities this could get very messy indeed. Time will tell. It’s not, however, the only thing that has made me feel that my subjects are becoming more and more topical. For some reason, this last week seems to have been one where my ‘stuff’ has come out. Four new things by or about me have appeared on the net:

First, my blog post for the UK Constitutional Law Group, about online anonymity rights – a very British dilemma: found here

I particularly liked doing this post because it allowed me to explore an aspect of privacy and anonymity that I don’t often look at – the fact that it’s part of our tradition in the UK, not just something new and trendy. We’ve always wanted privacy – and I suspect we always will.

Second, my post for Russell Webster’s ‘Why I Tweet’ series: found here

One of the things I really enjoy on twitter is that there’s a great community of people of all kinds – and in particular there’s a strong ‘privacy community’ of people from all over the world who have interesting things to say, interesting links and so on.

Third, the first clips from my interviews for the Orwell Upgraded series – me talking about Big Data: found here

I’m looking forward to the final edited version of the programmes – though I’m not at all sure that I can deal with the extremely high quality of the video… far too accurate a close up of my face!

Fourth, a Q&A I did on ‘Do Not Track’ for PcPro.com – another key, current internet privacy debate: found here

All in all I’ve felt like I was part of something that is topical – and important at the same time. It’s a good feeling, but a daunting one. People do seem to be interested in the subject, which is great: even just a few years ago most of what I talked about and researched into seemed to be very much a niche subject. Privacy – and most of my stuff is privacy related in some form or other – is becoming bigger and bigger news, and more people are understanding its importance. On Monday, I’m on a panel at the Internet Service Providers Association conference, talking about the Communications Data Bill – the snoopers’ charter – another big privacy issue, and one that I’ve talked and written a lot about before. The fact that the internet industry (and that’s what the ISPA represents) is making a privacy-related subject one of the key points of their annual conference is very important. Despite many signs in the other direction – not least the willingness of people like Philip Schofield to put people’s privacy at risk just because of rumours on the internet – I think the trend towards privacy may be a positive one. I hope so!

Will the government ‘get’ digital policy?

I had an interesting time at the ‘Seventh Annual Parliament and Internet Conference’ yesterday – and came away slightly less depressed than I expected to be. It seemed to me that there were chinks of light emerging amidst the usually stygian darkness that is UK government digital policy and practice – and signs that at least some of the parliamentarians are starting to ‘get it’. There were also some excellent people there from other areas – from industry, from civil society, from academia – and I learned as much from private conversations as I did in the main sessions.

The highlight of the conference, without a doubt, was Andy Smith, the PSTSA Security Manager at the Cabinet Office, recommending to everyone that they should use fake names on the internet everywhere except when dealing with the government – the faces of the delegation from Facebook, whose ‘real names’ policy I’ve blogged about before, were a sight to behold. Andy Smith’s suggestion was noted and reported on by Brian Wheeler of the BBC within minutes, and made Slashdot shortly after.

It was a moment of high comedy – Facebook’s Simon Milner, on a panel in the afternoon, said he had had a ‘chat’ with Andy Smith afterwards, a chat which I think a lot of us would have liked to listen in on. The comedic side, though, reveals exactly why this is such a thorny issue. Smith, to a great extent, is right that we should be deeply concerned by the extent to which our real information is being gathered, held and used by commercial providers for their own purposes – but he’s quite wrong to suggest that we can trust the government to hold our data any more securely or use it any more responsibly. The data disasters – when HMRC lost the Child Benefit details of 25 million families, or the numerous times the MoD has lost unencrypted laptops with all the details of both serving and retired members of the armed forces, and potential recruits – are not exceptions but symptoms of a much deeper problem. Trusting the government to look after our data is almost as dangerous as trusting the likes of Facebook and Google.

The worst aspect of the conference for me was that there seemed to still be a large number of people who believed that ‘complete’ security was not just possible but practical and just a few tweaks away. It’s a dangerous delusion – and means that bad decisions are being made, and likely to continue. A few other key points of the conference:

  • Chloe Smith, giving the morning keynote, demonstrated that she’d learned a little from her Newsnight mauling – she was better at evading questions, even if she was no better at actually answering them.
  • In Chi Onwurah, Labour have a real star – I hope she gets a key position in a future Labour government (should one come to pass)
  • We’ve got a long way to go with the Defamation Bill – without seeing the regulations that will accompany the bill, which apparently haven’t even been drafted yet, it’s all but impossible to know whether it will have any real effect (at least insofar as the internet is concerned)
  • In a private conversation, someone who really would know told me that one of the problems with sorting out the Defamation Bill has been an apparent obsession that Westminster insiders have with the ‘threat’ from anonymous bloggers – I suspect Guido Fawkes would be delighted by the amount of fear and loathing he seems to have generated in MPs, and how much it seems to have distracted them from doing what they should on defamation and libel reform.
  • After a few conversations, I’m quietly optimistic that we’ll be able to defeat the Communications Data Bill – it wasn’t on the agenda at the conference, but it was on many people’s minds and the whispers were generally more positive than I had feared they might be. Time will tell, of course.
  • Ed Vaizey is funny and interesting – but potentially deeply dangerous. His enthusiasm for the ‘iron fist’ side of copyright enforcement built into the Digital Economy Act was palpable and depressing. The way he spoke, it seemed as though the copyright lobby have him in the palm of their hand – and that neither they nor he have learned anything about the failure of the whole approach.
  • Vaizey’s words on porn-blocking – he seemed to suggest that we’ll go for an ‘opt-out’ blocking system, where child-free households would effectively have to ‘register’ for access to porn, something which has HUGE risks (see my blog here) – were worrying, but again, another insider assured me that this wasn’t what he meant to say, nor the proposal currently on the table. This will need very careful watching!!
  • The savaging of Vaizey by a questioner from the floor revealing how much better and cheaper broadband internet access was in Bucharest than in Westminster was enjoyed by most – but not Vaizey, nor the industry representatives who remained conspicuously quiet.
  • Julian Huppert – my MP, amongst other things – was again impressive, and seems to have understood the importance of privacy in all areas: the fact that Nick Pickles of Big Brother Watch was invited to the panel on the internet of things that Huppert chaired made that point.
  • On that subject – mentions of either privacy or free speech were conspicuous by their absence in the early sessions on cybersecurity, but they grew both in presence and importance during the day. I asked a couple of questions, and they were both taken seriously and answered reasonably well. There’s a huge way to go, of course, but I did feel that the issue is taken a touch more seriously than it used to be. Mind you, none of the government representatives mentioned either in their speeches at all – it was all ‘economy’ and ‘security’, without much space for human rights….
  • The revelation from the excellent Tom Scott that though the rest of us are blocked from accessing the Pirate Bay, it IS accessible from Parliament was particularly good – and when my neighbour accessed the site and saw the picture of Richard O’Dwyer on the front page, it was poignant…

I came away from the conference with distinctly mixed feelings – there are some very good signs and some very bad ones. The biggest problem is that the really good people are still not in the positions of power, or seemingly being listened to – and those at the top don’t seem to be changing as fast as the rest. If we could replace Ed Vaizey with Julian Huppert and Chloe Smith with Chi Onwurah, government digital policy would be vastly improved….

The myth of technological ‘solutions’

A story on the BBC webpages caught my eye this morning: ‘the parcel conundrum’. It described a scenario that must be familiar to almost everyone in the UK: you order something on the internet and then the delivery people mess up the delivery and all you end up with is a little note on the floor saying they tried to deliver it. Frustration, anger and disappointment ensue…

…so what is the ‘solution’? Well, if you read the article, we’re going to solve the problems with technology! The new, whizz-bang solutions are going to not just track the parcels, but track us, so they can find us and deliver the parcel direct to us, not to our unoccupied homes. They’re going to use information from social networking sites to discover where we are, and when they find us they’re going to use facial recognition software to ensure they deliver to the right person. Hurrah! No more problems! All our deliveries will be made on time, with no problems at all. All we have to do is let delivery companies know exactly where we are at all times, and give them our facial biometrics so they can be certain we are who we say we are.

Errr… could privacy be an issue here?

I was glad to see that the BBC did at least mention privacy in passing in their piece – even if they did gloss over it pretty quickly – but there are just one or two privacy problems here. I’ve blogged before about the issues relating to geo-location (here), but remember that delivery companies often give 12-hour ‘windows’ for a delivery – so you’d have to let yourself be tracked for a long time to get the delivery. And your facial biometrics – will they really hold the information securely? Delete it when you’re found? Delivery companies aren’t likely to be the most secure or even skilled of operators (!) and their employees won’t always be exactly au fait with data protection etc – let alone have been CRB checked. It would be bad enough to allow the police or other authorities to track us – but effectively unregulated businesses to do so? It doesn’t seem very sensible, to say the least…

…and of course under the terms of the Communications Data Bill (of which more below) putting all of this on the Internet will automatically mean it is gathered and retained for the use of the authorities, creating another dimension of vulnerability…

Technological solutions…

There is, however, a deeper problem here: a tendency to believe that a technological solution is available to a non-technological problem. In this case, the problem is that some delivery companies are just not very good – it may be commercial pressures, it may be bad management policies, it may be that they don’t train their employees well enough, it may be that they simply haven’t thought through the problems from the perspective of those of us waiting for deliveries. They can, however, ‘solve’ these problems just by doing their jobs better. A good delivery person is creative and intelligent, they know their ‘patch’ and find solutions when people aren’t in. They are organised enough to be able to predict their delivery times better. And so on. All the tracking technology and facial recognition software in the world won’t make up for poor organisation and incompetent management…

…and yet it’s far too easy just to say ‘here’s some great technology, all your problems will be solved’.

We do it again and again. We think the best new digital cameras will turn us into fantastic photographers without us even reading the manuals or learning to use our cameras (thanks to the excellent @legaltwo for the hint on that one!). We think ‘porn filters’ will sort out our parenting issues. We think web-blocking of the Pirate Bay will stop people downloading music and movies illegally. We think technology provides a shortcut without dealing with the underlying issue – and without thinking of the side effects or negative consequences. It’s not true. Technology very, very rarely ‘solves’ these kinds of problems – and the suggestion that it does is the worst kind of myth.

The Snoopers’ Charter

The Draft Communications Data Bill – the Snoopers’ Charter – perpetuates this myth in the worst kind of way. ‘If only we can track everyone’s communications data, we’ll be able to stop terrorism, catch all the paedos, root out organised crime’… It’s just not true – and the consequences to everyone’s privacy, just a little side issue to those pushing the bill, would be huge, potentially catastrophic. I’ve written about it many times before – see my submission to the Joint Committee on Human Rights for the latest example – and will probably end up writing a lot more.

The big point, though, is that the very idea of the bill is based on a myth – and that myth needs to be exposed.

That’s not to say, of course, that technology can’t help – as someone who loves technology, enjoys gadgets and spends a huge amount of his time online, I’d be the last to suggest that. Technology, however, is an adjunct to, not a substitute for, intelligent ‘real world’ solutions, and should be clever, targeted and appropriate. It should be a rapier rather than a bludgeon.

Safe…. or Savvy?

What kind of an internet do we want for our kids? And, perhaps more importantly, what kind of kids do we want to bring up?

These questions have been coming up a lot for me over the last week or so. The primary trigger has been the reemergence of the idea, seemingly backed by David Cameron (perhaps to distract us from the local elections!), of comprehensive, ‘opt-out’ porn blocking. The idea, apparently, is that ISPs would block porn by default, and that adults would have to ‘opt-out’ of the porn blocking in order to access pornographic websites. I’ve blogged on the subject before – there are lots of issues connected with it, from slippery slopes of censorship to the creation of databases of those who ‘opt-out’, akin to ‘potential sex-offender’ databases. That, though, is not the subject of this blog – what I’m interested in is the whole philosophy behind it, a philosophy that I believe is fundamentally flawed.

That philosophy, it seems to me, is based on two fallacies:

  1. That it’s possible to make a place – even virtual ‘places’ like areas of the internet – ‘safe’; and
  2. That the best way to help kids is to ‘protect’ them

For me, neither of these are true – ultimately, both are actually harmful. The first idea promotes complacency – because if you believe an environment is ‘safe’, you don’t have to take care, you don’t have to equip kids with the tools that they need, you can just leave them to it and forget about it. The second idea magnifies this problem, by encouraging a form of dependency – kids will ‘expect’ everything to be safe for them, and they won’t be as creative, as critical, as analytical as they should be, first of all because their sanitised and controlled environment won’t allow it, and secondly because they’ll just get used to being wrapped in cotton wool.

Related to this is the idea, which I’ve seen discussed a number of times recently, of electronic IDs for kids, to ‘prove’ that they’re young enough to enter into these ‘safe’ areas where the kids are ‘protected’ – another laudable idea, but one fraught with problems. There’s already anecdotal evidence of the sale of ‘youth IDs’ on the black market in Belgium, to allow paedophiles access to children’s areas on the net – a kind of reverse of the more familiar sale of ‘adult’ IDs to kids wanting to buy alcohol or visit nightclubs. With the growth of databases in schools (about which I’ve also blogged), the idea that a kid’s electronic ID would actually guarantee that a ‘kid’ is a kid is deeply flawed. ‘Safe’ areas may easily become stalking grounds…

There’s also the question of who would run these ‘safe’ areas, and for what purpose? A lovely Disney-run ‘safe’ area that is designed to get children to buy into the idea of Disney’s movies – and to buy (or persuade their parents to buy) Disney products? Politically or religiously run ‘safe’ areas which promote, directly or indirectly, particular political or ethical standpoints? Who decides what constitutes ‘unacceptable’ material for kids?

So what do we need to do?

First of all, to disabuse ourselves of these illusions. The internet isn’t ‘safe’ – any more than anywhere in the real world is ‘safe’. Kids can have accidents, meet ‘bad’ people and so on – just as they do in the real world. Remember, too, that the whole idea of ‘stranger danger’ is fundamentally misleading – most abuse that kids receive comes from people they know, people in their family or closely connected to it.

That doesn’t mean that kids should be kept away from the internet – the opposite. The internet offers massive opportunities to kids – and they should be encouraged to use it from a young age, but to use it with intelligence, with a critical and analytical outlook. Kids are far better at this than most people seem to give them credit for – they’re much more ‘savvy’ instinctively than we often think. That ‘savvy’ approach should be encouraged and supported.

What’s more, we have to understand our roles as parents, as teachers, as adults in relation to kids – we’re there to help, and to support, and to encourage. My daughter’s just coming up to six years old, and when she wants to know things, I tell her. If she’s doing something I think is too dangerous, I tell her – and sometimes I stop her. BUT, much of the time – most of the time – I know I need to help her rather than tell her what to do. She learns things best in her own way, in her own time, through her own experience. I watch her and help her – but not all the time. I encourage her to be independent, not to take what people say as guaranteed to be true, but to criticise and judge it for herself.

I don’t always get it right – indeed, I very often get it wrong – but I do at least know that this is how it is, and I try to learn. I know she’s learning – and I know she’ll make mistakes too. She’ll also encounter some bad stuff when she starts exploring the internet for real – I don’t want to stop her encountering it – I want to equip her with the skills she needs to deal with it, and to help her through problems that arise as a result.

I want a savvy kid – not the illusion of a safe internet. Isn’t that a better way?

Goo goo google’s tiny steps towards privacy…

Things seem to be hotting up in the battle for privacy on the internet. Over the last few days, Google have made three separate moves which look, on the surface at least, as though they’re heading, finally, in the right direction as far as privacy is concerned. Each of the moves could have some significance, and each has some notable drawbacks – but to me at least, it’s what lies behind them that really matters.

The first of the three moves was the announcement on October 19th that, for signed-in users, Google was now adding SSL encryption for search. I’ll leave the technical analysis of this to those much more technologically capable than me, but the essence of the move is that it adds a little security for users, making it harder to eavesdrop on a user’s search activity – and meaning that when someone arrives at a website after following a Google search, the webmaster of the site arrived at will know that the person arrived via Google, but not the search term used to find them. There are limitations, of course, and Google themselves still gather and store the information for their own purposes, but it is still a step forward, albeit small.

It does, however, only apply to ‘signed in’ users – which cynics might say is even more of a drawback, because by signing in a user is effectively consenting to the holding, use and aggregation of their data by Google. The Article 29 Working Party, the EU body responsible for overseeing the data protection regime, differentiates very clearly between signed-in and ‘anonymous’ (!) users of the service in terms of complying with consent requirements – Google would doubtless very much like more and more users to be signed in when they use the service, if only to head off any future legal conflicts. Nonetheless, the implementation of SSL should be seen as a positive step – the more that SSL is implemented in all aspects of the internet, the better. It’s a step forward – but a small one.
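For those who like to see the mechanics, the referrer side of this is easy to sketch. As an illustration only – the helper below is hypothetical, not anything Google or webmasters actually ship – with a plain HTTP search the destination page receives a Referer header containing the full search URL, query term included; with SSL search the browser passes on at most the bare origin, so the term can no longer be extracted:

```python
from urllib.parse import urlparse, parse_qs

def search_term_from_referrer(referrer):
    """Hypothetical analytics helper: pull the 'q' query parameter
    out of a Google referrer URL, if the referrer still carries it."""
    parsed = urlparse(referrer)
    if "google." not in parsed.netloc:
        return None  # not a Google referral at all
    # parse_qs returns {} for an empty query string, so a referrer
    # stripped to the bare origin yields None here
    return parse_qs(parsed.query).get("q", [None])[0]

# Plain HTTP search: the query leaks to the destination site
print(search_term_from_referrer("http://www.google.com/search?q=some+private+matter"))
# SSL search: only the origin survives; the term is gone
print(search_term_from_referrer("https://www.google.com/"))
```

The sketch simplifies real browser behaviour, of course, but it captures why webmasters (and the Search Engine Optimisers discussed below) lose visibility of search terms once search goes over SSL.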

There have also been suggestions (e.g. in this article in the Telegraph) that the move is motivated only by profit, and in particular to make Google’s AdWords more effective at the expense of techniques used by Search Engine Optimisers, who, with the new system, will be less able to analyse and hence optimise. There is something to this, no doubt – but it must also be remembered, first of all, that pretty much every move of Google’s is motivated by profit, that’s the nature of the beast, and secondly that a lot of the complaints (including the Telegraph article) come from those with a vested interest in the status quo – the Search Engine Optimisers themselves. Of course profit is the prime motivation – but if profit motives drive businesses to do more privacy-friendly things, so much the better. That, as will be discussed below, is one of the keys to improving things for privacy.

The second of the moves was the launch of Google’s ‘Good to know’, a ‘privacy resource centre’, intended to help guide users in how to find out what’s happening to their data, and to use tools to control that data use. Quite how effective it will be has yet to be seen – but it is an interesting move, particularly in terms of how Google is positioning itself in relation to privacy. It follows from the much quieter and less user-friendly Google Dashboard and Google AdPreferences, which technically gave users quite a lot of information and even some control, but were so hard to find that for most intents and purposes they appeared to exist only to satisfy the demands of privacy advocates, and not to do anything at all for ordinary users. ‘Good to know’ looks like a step forward, albeit a small and fairly insubstantial one.

The third move is the one that has sparked the most interest – the announcement by Google executive Vic Gundotra that social networking service Google+ will ‘begin supporting pseudonyms and other types of identity.’ The Electronic Frontier Foundation immediately claimed ‘victory in the nymwars’, suggesting that Google had ‘surrendered’. Others have taken a very different view – as we shall see. The ‘nymwars’, as they’ve been dubbed, concern the current policies of both Facebook and Google to require a ‘real’ identity in order to maintain an account with them – a practice which many (myself definitely included) think is pernicious and goes against the very things which have made the internet such a success, as well as potentially putting many people at real risk in the real world. The Mexican blogger who was killed and decapitated by drugs cartels after posting on an anti-drugs website is perhaps the most dramatic example of this, but the number of people at risk from criminals, authoritarian governments and others is significant. To many (again, myself firmly included), the issue of who controls links between ‘real’ and ‘online’ identities is one of the most important on the internet in its current state. The ‘nymwars’ are of fundamental importance – and so, to me, is Google’s announcement.

Some have greeted it with cynicism and anger. One blogger put it bluntly:
“Google’s statement is obvious bullshit, and here’s why. The way you “support” pseudonyms is as follows: Stop deleting peoples’ accounts when you suspect that the name they are using is not their legal name.

There is no step 2.”

The EFF’s claim of ‘victory’ in the nymwars is perhaps overstated – but Google’s move isn’t entirely meaningless, nor is it necessarily cynical. Time will tell exactly what Google means by ‘supporting pseudonyms’, and whether it will really start to deal with the problems brought about by a blanket requirement for ‘real’ identities – but this isn’t the first time that someone within Google has been thinking about these issues. Back in February, Google’s ‘Director of Privacy, Product and Engineering’, Alma Whitten, wrote a post for the Google Policy Blog called ‘The freedom to be who you want to be…’, in which she said that Google recognised three kinds of user: ‘unidentified’, pseudonymous and identified. It’s a good piece, and well worth a read, and it shows that within Google these debates must have been going on for a while, because the ‘real identity’ approach for Google+ has at least in the past been directly contrary to what Whitten was saying in the blog.

That’s one of the reasons I think Vic Gundotra’s announcement is important – it suggests that the ‘privacy friendly’ people within Google are having more say, and perhaps even winning the arguments. When you combine it with the other two moves mentioned above, that seems even more likely. Google may be starting to position itself more firmly on the ‘privacy’ side of the fence, and using privacy to differentiate itself from the others in the field – most notably Facebook. To many people, privacy has often seemed like the last thing that Google would think about – that may finally be changing.

4chan’s Chris Poole, in a brilliant speech to the Web 2.0 conference on Monday, challenged Facebook, Google and others to start thinking of identity in a more complex, nuanced way, and suggested that Facebook and Google, with their focus on real identities, had got it fundamentally wrong. I agreed with almost everything he said – and so, I suspect, did some of the people at Google. The tiny steps we’ve seen over the last few days may be the start of their finding a way to turn that understanding into something real. At the very least, Google seem to be making a point of saying so.

That, for me, is the final and most important point. While Google and Facebook, the two most important players in the field, stood side by side in agreement about the need for ‘real’ identities, it was hard to see a way to ‘defeat’ that concept, and it felt almost as though victory for the ‘real’ identities side was inevitable, regardless of all the problems that would entail, and regardless of the wailing and gnashing of teeth of the privacy advocates, hackers and so forth about how wrong it was. If the two monoliths no longer stand together, that victory seems far less assured. If we can persuade Google to make a point of privacy, and if that point becomes something that brings Google benefits, then we all could benefit in the end. The nymwars certainly aren’t over, but there are signs that the ‘good guys’ might not be doomed to defeat.

Google is still a bit of a baby as far as privacy is concerned, making tiny steps but not really walking yet, let alone running. In my opinion, we need to encourage it to keep on making those tiny steps, applaud those steps, and it might eventually grow up…

UPDATED TO INCLUDE REFERENCE TO SEOS…