10 reasons to leave Facebook…

If you’re looking for a New Year’s Resolution – have you considered leaving Facebook? There are many reasons to do so, and they are getting more compelling all the time – all it takes is a little resolution.

1) Privacy

Everyone should be aware that privacy is an issue with Facebook. So many people put so much ‘private’ information onto Facebook that the possibility that your private information, photos, stories and so on might become known to a wider public should be obvious. We shouldn’t be shocked when bad things happen – and yet even Randi Zuckerberg, sister of Facebook’s founder Mark Zuckerberg, still seemed surprised and upset when a ‘private’ family photograph she posted somehow made its way onto Twitter. It wasn’t hacked, scraped, leaked or anything nasty – it’s just that Facebook is designed that way. The private becomes public all too easily – ‘sharing’ means you lose control. If Randi had just emailed the pic to her family, or put it on a genuinely private site, none of this would have happened.

2) Real Names Policy

Facebook’s policy is that people should only ever use their real names – and this can have very bad consequences. There are many people for whom using real names is dangerous, from whistle-blowers to political dissidents, from victims of domestic abuse to people just wanting to harmlessly let off steam. And it’s not just in the extremes that it matters: forcing a real names policy can matter to almost anyone. It helps anchor your ‘online’ life to your ‘offline’ life – meaning that anyone wishing to take advantage of you, to manipulate you, or to take information out of context can link what they find out about you online to your offline existence. Real names policies are potentially deeply pernicious – and not only does Facebook have one, but it is ratcheting up its efforts to enforce it. Snitchgate, about which I blogged in September, was just one example, where they experimented with getting people to ‘snitch’ on their friends for not using their real names. For Facebook, a real names policy has value – it makes their data on you more valuable when they want to sell it to others – but for people, it is both limiting and risky.

3) Monetization

Facebook is a business, and in business to do just one thing: make money. What that means is that they want to make money from their assets – your data. The recent furore over Instagram’s altered terms of service was just one example – and in many ways it was typical. Instagram has access to a huge collection of photographs – and since Facebook acquired Instagram for $1 billion earlier in 2012, it has been looking for ways to make money out of those photographs. The internet community’s reaction to that change was dramatic – and Instagram quickly changed tack (or at least appeared to) but make no mistake, the issue will recur. Facebook will look to make money – since the far-from-stellar IPO, the pressure to make money has been growing. Facebook has to satisfy its shareholders first of all, its advertisers next, and its ‘users’ last of all. The users don’t provide money directly, after all – so Facebook has to make money from their data. That drive to make money means that what happens to you when your data is used is of very little consequence….

4) Profiling – and self-profiling

One of the best ways to describe Facebook is as a ‘self-profiling service’. Everything you put up on Facebook, every ‘like’ button you press, every silly game you play, every person you ‘friend’ (and every person that ‘friends’ you) helps build up that profile. The profiles are used primarily for advertising – but every piece of information also feeds Facebook’s ever-growing database. Profiling is something that is risky in two diametrically opposite ways: if profiling is accurate, it impinges on your privacy, whilst if it is inaccurate it can mean that bad decisions are made for you or about you. What’s more, profiling data is particularly vulnerable – allowing far more accurate and dangerous forms of identity fraud and similar scams.

5) Facial recognition

Facebook loves facial recognition – and it’s not just a coincidence of names. Facial recognition allows them to make more and more links, which helps them to profile better, and also to anchor information in the ‘real’ world, just like their ‘real names’ policy. Their practices with facial recognition – including ‘automatically’ tagging photographs – may have been rebuffed in Europe on the grounds of data protection, but just as with the Instagram issue (see (3) above), make no mistake, it’s coming back. The risks will still be there – they’re inherent in the concept – but they’ll find a way to get what at least purports to be consent from users in order to satisfy the letter of the law. Anyone who has put a photo of themselves on Facebook should be concerned.

6) You never know who’s watching

Most Facebook users imagine that the people who look at their pages are their ‘friends’, or perhaps their ‘potential friends’, and don’t consider who else might look at what they post – and there are vast numbers of other groups who will look. Those who are slightly less naïve might understand that their employers might look, or their potential employers – but what about insurance companies, looking to see if people are engaging in risky activities, or credit agencies wanting to make more ‘accurate’ assessments? Or the authorities, looking for people doing ‘bad’ things – or people who ‘might’ do bad things? Show some interest in anything political, and who knows who might be watching… again, the risks cut both ways: accurate watchers finding out things you don’t want them to find out, inaccurate watchers making bad decisions based on incorrect assumptions.

7) Facebook is forever

Many users of Facebook start off ‘young’ – perhaps in age, but perhaps in naïveté. They put material up that they think is funny, or cool, and don’t think how it might look in the future. This doesn’t just mean the odd drunken photo being seen by a potential employer – it means pretty much everything you put on Facebook. There was a big story in September 2012 when people thought their old ‘private messages’ were being posted onto their timelines, and they were hugely upset. It wasn’t true: what was actually happening was that some of their old public posts, posts from a few years ago, were reappearing – and people had forgotten the kind of things that they used to post. What you want to be public one year, you might well wish to forget in a few years’ time: with Facebook, that’s close to impossible! These days you can delete your account – but even if you do, that may not be enough. Services like profileengine.com keep old Facebook profiles even when they’ve been deleted….

8) Monopoly

Facebook is proud that it has now got more than 1 billion users – which makes it pretty close to the only game in town. Monopolies are very, very rarely a good thing – and if Facebook becomes (or perhaps has already become) the default, that puts a huge amount of extra power in their hands. Effectively, they can do whatever they want, and we’ll still have to be there. That can’t be good – and shouldn’t be good, particularly if you really CAN leave, and really DON’T need to be on Facebook. There are alternatives….

9) Concentration

…and those alternatives offer a solution to another risk involved in Facebook. Facebook wants to be all things to all people – and that means all your data, all your links, all aspects of your life concentrated in one place. That means much more accurate profiling, but also much greater vulnerability. If Facebook knows everything about you, they have much more power over you – and their profiles become much more powerful, so if compromised, sold, hacked, given to the authorities, or to some other ‘enemy’ of yours, they have much more potential for damage. What would be much better – though somewhat harder work – would be to use different services for different features. Use one provider for email, use Twitter for mass communication, set up your own blog on a different provider, put your photos on your own website, play games on yet another and so forth. Much less risk – and much more freedom to get better services. Also, much less dependency…

10) Dependency – and bad habits…

The last reason I’m going to mention here is dependency. Many people seem to be becoming deeply dependent on Facebook. They use it for everything – and seem totally lost if it goes down. They can’t contact their real friends and relations – they haven’t even kept a record of their email addresses. That means they end up spending far too much time on Facebook – and get into lots of bad habits, habits that Facebook encourage. Too much sharing (which to Facebook sounds like blasphemy), too many pictures posted online, too much information given out (e.g. geo-location data) without a real thought to the consequences. If you leave Facebook, and instead set up particular systems for particular functions, you’re far less likely to become dependent – and you’re far less lost if one or other of those services goes down for some reason or other.

And if that’s not enough…

…there are many other reasons. One that matters to people like me is that the only way that Facebook will ever change in any meaningful way, the only way it will start to take users’ privacy and other rights seriously, is if it starts to lose users. If enough people start leaving, it will have to do something differently, and start to take us more seriously rather than just treat us as cattle to be herded and milked….

So why not do it? Make it your New Year’s Resolution: leave Facebook!

Here is a link to instructions as to how to delete your Facebook account. If you have the strength, go for the real ‘deletion’ rather than the ‘deactivation’ method. If you just deactivate, you’re leaving your data there for Facebook and their partners to exploit…..

Taking a lead on privacy??

Two related stories about privacy and tracking are doing the rounds at the moment: both show the problems that companies are having in taking any sort of lead on privacy.

The first is about Apple, and the much discussed recent upgrade to their iOS, the operating system for the iPhone and iPad. There’s been a huge amount said about the problems with the mapping system (and geo-location is of course a huge privacy issue – as I’ve discussed before) but now there’s an increasing buzz about their newly introduced tracking controls. Apple, for the first time, have provided users with the option to ‘limit ad tracking’ – though as noted in a number of stories, including this one from Business Insider, that option is hidden away, not in the vaunted ‘Privacy’ tab, but under a convoluted set of menus (first ‘General’ settings, then ‘About’, then scroll down to the bottom to find ‘Advertising’, then click ‘Limit Ad Tracking’). Not easy to find, as even the techie and privacy geeks that I converse with on twitter have found.

This of course raises a lot of issues – it’s great to have the feature, but quite the opposite to have it hidden away where only the geeks and the paranoid will find it. It looks as though the people at Apple have been thinking hard about this, and working hard at this, and have come up with an interesting (and perhaps effective – but more on that below) solution, but then been told by someone, somewhere, that they should hide it for fear of upsetting the advertisers. I’d love to know the inside story on this – but Apple are rarely quite as open about their internal discussions as they could be.

There’s a conflict of motivations, of course. On the one hand, Apple wants to make customers happy, and there is increasing evidence that customers don’t want to be tracked – most recently this excellent paper from Hoofnagle, Urban and Li, appropriately entitled “Privacy and Modern Advertising: Most US Internet Users Want ‘Do Not Track’ to Stop Collection of Data about their Online Activities”. On the other hand, Apple don’t want to annoy the advertisers – particularly when the market for mobile is getting increasingly competitive. And the advertisers seem to be on a knife edge at the moment, very touchy indeed, as the latest spats over the ‘Do Not Track’ initiative have shown.

That’s the second story doing the rounds at the moment: the increasing acrimony and seemingly bitter conflict over Do Not Track. It’s a multi-dimensional spat, but seems to have been triggered by Microsoft’s plan to make do not track ‘on’ by default – something that the advertising industry are up in arms about. The ‘Digital Advertising Alliance’ issued a statement effectively saying they would simply ignore Microsoft’s system and track anyway – which led to privacy advocates suggesting that the advertisers wanted to kill the whole Do Not Track initiative. This is Jeff Chester of the Center for Digital Democracy:

“The DAA is trying to kill off Do Not Track.  Its announcement today to punish Microsoft for putting consumers first is an extreme measure designed to strong-arm companies that care about privacy.”

Chester and others saying similar things may be right – and it makes people like me wonder if the whole problem is that the ‘Do Not Track’ initiative was never really intended to work, but was just supposed to make people think that their privacy was protected. If it actually got some teeth – and setting it to a default ‘on’ position would be the first way to give it teeth – then the industry wouldn’t want it to exist. There are other huge issues with Do Not Track anyway. As the title of the Hoofnagle, Urban and Li report suggested, people think ‘Do not track’ means they won’t be tracked – that their data won’t be collected at all – while the industry seems to think what really matters to people is that they aren’t targeted – i.e. their data is still collected, and they’re still tracked and profiled, but that tracking isn’t used to send advertisements to them. For me, that at least is completely clear. Do Not Track should mean no tracking. Blocking data collection is more important than stopping targetting – because once the data is collected, once the profiles are made, they’re available for misuse later down the line.
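The gap between those two readings can be made concrete. Here is a minimal, purely hypothetical sketch of a server that honours the stricter interpretation – DNT as a bar on data *collection*, not merely on targeted advertising. The browser sends the preference as an HTTP header, `DNT: 1`; the function names and the in-memory profile store below are invented for illustration, and real ad-tech systems are of course vastly more complex.

```python
# Hypothetical sketch: a server honouring Do Not Track as "no collection".
# The DNT header is sent by the browser as "DNT: 1" when the user opts out.
# Function names and the profile store are invented for illustration.

def dnt_enabled(headers):
    """Return True if the request carries a Do Not Track opt-out."""
    return headers.get("DNT") == "1"

def handle_request(headers, user_id, page, profiles):
    """Serve the page; record the visit in the profile store ONLY if
    tracking is allowed. Under the stricter reading, a DNT request
    leaves no trace at all - nothing is collected for later profiling.
    """
    if not dnt_enabled(headers):
        profiles.setdefault(user_id, []).append(page)
    return "page content"

# A browser sending DNT: 1 leaves nothing behind; one without it is logged.
profiles = {}
handle_request({"DNT": "1"}, "user42", "/news", profiles)
handle_request({}, "user43", "/news", profiles)
```

The industry’s preferred reading would instead keep the `profiles` store fully populated in both cases and apply the DNT check only at the point where adverts are selected – which is exactly the difference that matters, because collected data remains available for misuse regardless of whether it is used for targetting.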

That far deeper point is still not being discussed sufficiently. The battle is at a more superficial level – but it’s still an important battle. Who matters more, the consumers or the advertisers? Advertisers would have us believe that by stopping behavioural targetting we will break the whole economic basis of the internet – but that is based on all kinds of assumptions and presumptions, as Sarah A Downey pointed out in this piece for TechCrunch “The Free Internet Will Be Just Fine With Do Not Track. Here’s Why.” At the recent Amsterdam Privacy Conference, Simon Davies, one of the founders of Privacy International, made the bold suggestion that the behavioural targetting industry should simply be banned – and there is something behind his argument. Right now, the industry is not doing much to improve its image: seeming to undermine the whole nature of Do Not Track does not make them look good.

There’s another spectre that the industry might have to face: the European Union is getting ready to act, and when they act, they tend to do things without a great deal of subtlety, as the fuss around the Cookie Directive has shown. If the advertisers want to avoid heavy-handed legislation, they should beware: ‘Steelie’ Neelie Kroes is getting impatient. As reported in The Register, if they don’t stop their squabbling tactics over Do Not Track, she’s going to call in the politicians….

Someone, somewhere, has to take a lead on privacy. Apple had the chance, and to a great extent blew it, by hiding their tracking controls where the sun doesn’t shine. Microsoft seems to be making an attempt too, but will they hold their nerve in the face of huge pressure from the advertising industry – and even if they do, will their lead be undermined by the tactics of the advertising industry? If no-one takes that lead, no-one takes that initiative, the EU will take their kid gloves off… and then we’re all likely to be losers, consumers and advertisers alike….

How personal is personal?

The Register is reporting that the ICO wants a clearer definition of what constitutes ‘personal data’ – and it is indeed a crucial question, particularly under the current data protection regime. The issue has come up in the ICO’s response to the Government consultation on the review of the Data Protection Directive – and one of the key points is that there is a difference between how personal data is defined in the directive and how it is defined in the UK Data Protection Act. That difference gives scope for lots of legal argument – and is one of many factors that help to turn the data protection regime from something that should be about rights and personal protection into something often hideously technical and legalistic. The ICO, fortunately, seems to recognise this. As quoted in The Register, ICO Deputy Director David Smith says:

“We need to ensure that people have real protection for their personal information, not just protection on paper and that we are not distracted by arguments over interpretations of the Data Protection Act,”

That’s the crux of it – right now, people don’t really have as much real protection as they should. Will any new version of the directive (and then the DPA) be any better? It would be excellent if it were, but right now it’s hard to imagine that it will be, unless there is a fundamental shift in attitudes.

There’s another area, however, that just makes it into the end of the Register’s article, that may be even more important – the question of what constitutes ‘sensitive personal data’. Here, again, the ICO is on the ball – this is again from the Register:

“The current distinction between sensitive and non-sensitive categories of personal data does not work well in practice,” said the submission. “The Directive’s special categories of data may not match what individuals themselves consider to be ‘sensitive’ – for example their financial status or geo-location data about them.”

The ICO go on to suggest not a broadening of the definition of sensitive personal data, but a more ‘flexible and contextual approach’ to it – and they’re right. Data can be sensitive in one context, not sensitive in another. However, I would suggest that they’re not going nearly far enough. The problem is that the idea of the ‘context’ of any particular data is so broad as to be unmanageable. What matters isn’t just who has got the data and what they might do with it, but a whole lot of other things concerning the data subject, the data holder, any other potential data user and so on.

For instance, consider data about someone’s membership of the Barbra Streisand fan club. Sensitive data? In most situations, people might consider it not to be sensitive at all – who cares what kind of music someone listens to? However, liking Barbra Streisand might mean a very different thing for a 22-year-old man than it does for a 56-year-old woman. Extra inferences might be drawn if the data gatherer has also learned that the data subject has been searching for holidays only in San Francisco and Sydney, or spends a lot of time looking at hairdressing websites. Add to that the real ‘geo-tag’ kind of information about where people actually go, and you can build up quite detailed profiles without ever touching what others might consider sensitive. When you have all that information, even supposedly trivial information like favourite colours or favourite items in your Tesco online shopping could end up being sensitive – as an extra item in a profile that ‘confirms’ or ‘denies’ (according to the kinds of probabilistic analyses that are used for behavioural profiling) that a person fits into a particular category.
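The mechanism described above can be sketched in a few lines. This is a toy illustration of how individually trivial signals, combined in a naive-Bayes fashion, can push the ‘confidence’ that someone fits a category far above the base rate – every number here is invented purely for illustration; real behavioural-profiling models are proprietary and far more elaborate.

```python
# Toy sketch of probabilistic profiling: combining 'harmless' signals.
# All probabilities are invented for illustration only.

def update_belief(prior, p_signal_if_in, p_signal_if_out):
    """One Bayesian update: revise P(person is in the category) after
    observing a signal that occurs with probability p_signal_if_in for
    people in the category and p_signal_if_out for everyone else."""
    numerator = prior * p_signal_if_in
    denominator = numerator + (1 - prior) * p_signal_if_out
    return numerator / denominator

# Start from a low base rate, then fold in three 'trivial' observations.
belief = 0.05
signals = [
    (0.60, 0.05),  # fan-club membership: common in category, rare outside
    (0.40, 0.10),  # holiday searches restricted to two particular cities
    (0.30, 0.08),  # heavy browsing of hairdressing websites
]
for p_in, p_out in signals:
    belief = update_belief(belief, p_in, p_out)

# Each signal alone reveals little, but together they lift the belief
# well above the 5% base rate - which is exactly why 'trivial' data
# can become sensitive once it sits inside a profile.
```

Note that no individually ‘sensitive’ datum in the directive’s sense ever enters the calculation – the sensitivity emerges only from the combination, which is the point the contextual approach struggles to capture.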

What does all this mean? Essentially that ANY data that can be linked to a person can become sensitive – and that judging the context is so difficult that it is almost impossible. Ultimately, if we believe that sensitive data needs particular protection, then we should apply that kind of protection to ALL personal data, regardless of how apparently sensitive it is….