Facebook, Google and the little people….

This last week has emphasised the sheer power and influence of the internet giants – Facebook and Google in particular.

The Facebook Experiment

First we had the furore over the so-called ‘Facebook Experiment’ – the revelation that Facebook had undertaken an exercise in ‘emotional contagion’, effectively trying to manipulate the emotions of nearly 700,000 of its users without their consent, knowledge or understanding. There were many issues surrounding it (some of which I’ve written about here), starting with the ethics of the study itself, but the most important thing to understand is that the experiment succeeded, albeit not very dramatically. That is, by manipulating people’s news feeds, Facebook found that they were able to influence people’s emotions. However you look at the ethics of this, that’s a significant amount of power.

Google and the Right to be Forgotten

Then we’ve had the excitement over Google’s ‘clumsy’ implementation of the ECJ ruling in the Google Spain case. I’ve speculated before about Google’s motivations in implementing the ruling so messily, but regardless of their motivations the story should have reminded us of the immense power that Google have over how we use the internet. This power is demonstrated in a number of ways. Firstly, in the importance we place on whether a story can be found through Google – those who talk about the Google Spain ruling being tantamount to censorship are implicitly recognising the critical role that Google plays and hence the immense power that they wield. Secondly, in the fact that how Google chooses to interpret and implement the court’s ruling is ultimately what determines whether we can or cannot find a story. Thirdly, in the way that Google seems able to drive the media agenda: it sometimes seems as though people in the media are dancing to Google’s tune.

Further, though the early figures for takedown requests under the right to be forgotten sound large – 240,000 since the Google Spain ruling – the number of requests Google deals with on copyright grounds is far higher: 42,324,954 over the same period. Right to be forgotten requests amount to little more than half a percent of the copyright figure. Google deals with those copyright requests without any of the fanfare surrounding the right to be forgotten – and apart from a few internet freedom advocates, very few people seem even to notice. Google has that much control, and their decisions have a huge impact upon us.
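For the record, here is a rough check of that comparison, using the two figures quoted above (and assuming both cover the same period):

\[
\frac{240{,}000}{42{,}324{,}954} \approx 0.0057 \approx 0.57\%
\]

In other words, for every right to be forgotten request there are something like 175 copyright removal requests.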

Giants vs. Little People

Though the two issues seem to have very little in common, they both reflect the huge power that the internet giants have over ordinary people. It is very hard for ordinary people to fight for their rights – for little people to be able to face up to giants. Little people, therefore, have to do two things: use every tool they can in the fight for their rights, and support each other when that support is needed. When the little people work together, they can punch above their weight. One of the best ways for this to happen is through civil society organisations. All around the world, civil society organisations make a real difference – from the Open Rights Group and Privacy International in the UK to EDRi in Europe and the EFF in the US. One of the very best of these groups, and one that punches furthest above its weight, has been Digital Rights Ireland. They played a critical role in one of the most important legal ‘wins’ for privacy in recent years: the effective defeat of the Data Retention Directive, one of the legal justifications for mass surveillance. They’re a small organisation, but one with expertise and a willingness to take on the giants. Given that so many of those giants – including Facebook – are officially based in Ireland, Digital Rights Ireland are especially important.

Europe vs. Facebook

There is one particular conflict between the little people and the giants that is currently in flux: the ongoing legal fight between campaigner Max Schrems and Facebook. Schrems, who is behind the ‘Europe vs. Facebook’ campaign, has done brilliantly so far, but his case appears to be at risk. After what looked like an excellent result – the referral by the Irish High Court to the ECJ of his case against Facebook (which relates to the vulnerability of Facebook data to US surveillance via the PRISM programme) – Schrems is reported to be considering abandoning his case, because the possible costs might bankrupt him if things go badly.

This would be a real disaster – and not just for Schrems. This case really matters, in a lot of ways. The internet giants need to know that we little people can take them on: if costs can put us off, the giants will be able to use their huge financial muscle to win every time. It’s a pivotal case – for all of us. For Europeans, it matters because it could protect our data from US surveillance. For non-Europeans it matters because it challenges the US giants at a critical point: we all need them to fight against US surveillance, and they’ll only really do that wholeheartedly if it matters to their bottom line. This case could seriously hit Facebook’s bottom line – so if they lost, they’d have to do something to protect the data they hold from US surveillance. And they wouldn’t just do that for European Facebook users; they’d do it for everyone.

Referral to the ECJ is critical, not just because it might give the case a chance of winning, but because (as I’ve blogged before) the ECJ has recently shown more engagement with technological issues and more willingness to rule in favour of privacy – as in the aforementioned invalidation of the Data Retention Directive and in the contentious ruling in Google Spain. We little people need to take advantage of those times when the momentum is on our side – and right now, at least in the eyes of the ECJ, the momentum seems in some ways to be with us.

So what can be done to help Schrems? Well, the first thing I would suggest to Max is to involve Digital Rights Ireland. They could really help him – and I understand that they have been seeking to file an amicus brief in the case. They’re good at this kind of thing, and they and other organisations in Europe have experience in raising funds for this type of case. Max has done brilliant work, but where ‘little people’ have to face up to giants, they’re much better off not fighting alone.

The Facebook Experiment: the ‘why’ questions…

A great deal has been written about the Facebook experiment – what did they actually do, how did they do it, what was the effect, was it ethical, was it legal, will it be challenged and so forth – but I think we need to step back a little and ask two further questions: why did they do the experiment, and why did they publish it in this ‘academic’ form?

What Facebook tell us about their motivations for the experiment should be taken with a distinct pinch of salt: we need to look further. What Facebook does, it generally does for one simple reason: to benefit Facebook’s bottom line. They do things to build their business and to make more money. That may involve getting more subscribers, making those subscribers stay online for longer, or, most crucially and most directly, getting more money from their advertisers. Subscribers are interesting, but the advertisers are the ones that pay.

So, first of all, why would Facebook want to research ‘emotional contagion’? Facebook isn’t a university psychology department – they’re a business. There are a few possible reasons – and I suspect the reality is a mixture of them. At the most basic level, they want to check whether emotions can be ‘spread’, and they want to look at the mechanisms through which this spreading happens. There have been conflicting theories – does seeing lots of happy pictures of your friends having exciting holidays make you happier, for example, or does it make you jealous and unhappy? – and Facebook would want to know which of these is true, and when. But then we need to ask why they would want to know all this – and there’s only one obvious answer: because they want to be able to tap into that ability to spread emotional effects. They don’t just want to know that emotional contagion works out of academic interest – they want to be able to use it to make money.

This is where the next level of creepiness comes in. If, as they seem to think, they can spread emotional effects, how will they use that ability? With Facebook, it’s generally all about money – so in this case, that means they will want to find ways to use emotional contagion as an advertising tool. The advertising possibilities are multiple. If you can make people associate happiness with your product, there’s a Pavlovian effect just waiting to make them salivate. If you can make people afraid, they’ll presumably be more willing to spend money on products or services to protect themselves – the lobbying efforts of the cybersecurity industry to make us afraid of imminent cyberwarfare or cyberterrorism are an example that springs to mind. So if Facebook can prove that emotional contagion works, and prove it in a convincing way, it opens up a new dimension of advertising opportunities.

That also gives part of the answer to the ‘why did they do this in this academic form’ question. An academic paper looks much more convincing than an internal, private research report. Academia provides credibility – though as an academic I’m all too aware of how limited, not to say flimsy, that credibility can be. Facebook can wave the academic paper in the faces of the advertisers – and the government agencies – and say ‘look, it’s not just us that are claiming this, it’s been proven, checked and reviewed, and by academics’.

So far, so obvious – isn’t emotional contagion just like ordinary advertising? Isn’t this all just making a mountain out of a molehill? Well, perhaps to an extent, so long as users of Facebook are aware that the whole of Facebook is, as far as Facebook is concerned, about ways to make money out of them. However, there are reasons that subliminal advertising is generally illegal – and this has some of the feel of subliminal advertising about it, with a distinct whiff of creepiness. We don’t know how we’re being manipulated. We don’t know when we’re being manipulated. We don’t know why we’re seeing what we’re seeing – and we don’t know what we’re not seeing. If people imagine their news feed is just a feed, tailored perhaps a little according to their interests and interactions – a way of finding out what is going on in the world, or rather in their friends’ worlds – then they are being directly and deliberately misled. I for one don’t like this, which is why I’m not on Facebook and suggest that others leave too, but I do understand that I’m very much in the minority in that.

That brings me to my last ‘why’ question (for now). Why didn’t they anticipate the furore that would come from this paper? Why didn’t they realise that privacy advocates would be up in arms about it? I think there’s a simple answer: they did, but they didn’t mind. I have a strong suspicion, which I mentioned in my interview on BBC World News, that they expected all of this, and decided that the price – bad publicity, a little loss of goodwill, a few potential investigations by data protection authorities and others, perhaps even a couple of lawsuits – was one worth paying. Perhaps a few people will spend less time on Facebook, or even leave. Perhaps Facebook will look a little bad for a little while. But the potential financial benefit of a new stream of advertising revenue – the ability to squeeze more money from a market that looks increasingly saturated and competitive – outweighs that cost.

Based on the past record, they’re quite likely to be right. People will probably complain about this for a while, and then when the hoo-haa dies down, Facebook will still have over a billion users, and new ways to make money from them. Mark Zuckerberg doesn’t mind looking like the bad guy (again) for a little while. Why should he? The money will continue to flow – and whether it impacts upon the privacy and autonomy of the people on Facebook doesn’t matter to Facebook one way or another. It has ever been thus….

 

Facebook’s updated terms and conditions…. ;)

In the light of the recently revealed ‘Facebook Experiment’, Facebook has issued new, simplified terms and conditions.*

Emotional Manipulation

  1. By using Facebook, you consent to having your emotions and feelings manipulated, and those of all your friends (as defined by Facebook) and relatives, and those people that Facebook deems to be connected to you in any way.
  2. The feelings to be manipulated may include happiness, sadness, depression, fear, anger, hatred, lust and any other feelings that Facebook finds itself able to manipulate.
  3. Facebook confirms that it will only manipulate those emotions in order to benefit Facebook, its commercial or governmental partners and others.

Research

  1. By using Facebook, you consent to being used for experiments and research.
  2. This includes the use of your data and any aspect of your activities and profile that Facebook deems appropriate.
  3. Facebook confirms that the research will be used only to improve Facebook’s service, to improve Facebook’s business model or to benefit Facebook or any of Facebook’s commercial or governmental partners in some other way.

Ethics and Privacy

  1. Facebook confirms that it has no ethics and that you have no privacy.

 

 

 

*Not actually Facebook’s terms and conditions…

PRISM: Share with the CIA – and Facebook!

[Image: a spoof Facebook privacy-options menu]

Going out for a pizza? Who wants to know?

There’s been a joke going around the net over the last couple of weeks, inspired by the PRISM revelations. The picture above is just one example – variants include replacing the CIA with the NSA, or adding the two together so that it says, effectively, ‘Share with Friends, the CIA and the NSA’, and so on. It’s a pretty good joke – and spot on about the nature of the PRISM programme (and indeed its equivalents elsewhere in the world, such as the UK’s Communications Data Bill, the ‘Snoopers’ Charter’) – but ultimately it leaves one key element out of the equation. It should also include ‘share with Facebook’…

Share with only me, the CIA, the NSA and Facebook!

Something that seems to be forgotten almost every time is that whenever you put something on Facebook, no matter how tightly and precisely you select your ‘privacy’ settings, Facebook themselves always get to see it. It’s never ‘just you’, or ‘just you and your close friends’: Facebook themselves are always there. That means a lot of different things – at the very least, that they will use that information to build up your profile and to choose who is going to target advertising at you. It might be used directly by Facebook themselves to target products and services at you. It might mean that they put you on various lists of people of a certain kind to receive mailings – lists that could then be used for other purposes, potentially sold (perhaps not now, but in the future?) or even hacked…

Data is vulnerable

…and that is the point that shouldn’t be forgotten. If you put something on Facebook, or if Facebook infers something from the information that you put up, that information is potentially vulnerable. Now it’s easy to worry about spies and spooks – and then to dismiss that worry because you’re not really the kind of person that spies and spooks would care about – but there are others to whom the kind of information you put on Facebook could be valuable. Criminals intent on identity theft. Other criminals looking for targets in other ways (if you’re going out for a pizza, that means you’re not at home… a burglary opportunity?). Insurers wanting to know whether they should put up your premiums (aha, they often go out for pizzas – doesn’t sound like a healthy diet to me! Up with the premiums!), potential employers checking you out (if you’re going out for a pizza at an unsuitable time of day, you might be an unsuitable employee), and so on.

Don’t imagine your ‘privacy’ settings really imply privacy…

This doesn’t mean that we shouldn’t ‘share’ anything on Facebook (or Google, or any other system online, because what happens with Facebook happens just as much with others), but that we should be a touch more aware of the situation. The PRISM saga has highlighted that what we share can be seen by the authorities – and has triggered off quite a lot of concern. That concern is, in my opinion, only a small part of the story. What the authorities do is only one aspect – and for most people a far less important one than the rest of the story. Having your insurance premiums raised, having credit refused, becoming a victim of identity-related crimes, being socially embarrassed or humiliated, becoming a victim of cyber-bullying etc are much more common for most of us. What we do online can contribute to all of these – and we should be a bit more aware of it.

Dear Larry and Mark….

Larry Page, Google

Mark Zuckerberg, Facebook

8th June, 2013

Dear Larry and Mark

The PRISM project

I know that you’ve been as deeply distressed as I have by the revelations and accusations released to the world about the PRISM project – and I am delighted by the vehemence and clarity with which you have denied the substance of the reports insofar as they relate to your services. The zeal with which you wish to protect your users’ privacy is highly commendable – and I’m looking forward to seeing how that zeal produces results in the future. To find that the two of you, the leaders of two of the biggest providers of services on the internet, are so clearly in favour of individual privacy on the internet is a wonderful thing for privacy advocates such as myself. There are, however, a few ways in which you could make a slightly more direct contribution to that individual privacy – and seeing the depth of feeling in your proclamations over PRISM, I feel sure that you will be happy to take them up.

Do Not Track

As I’m sure you’re aware, people are concerned not just about governments tracking their activities on the net, but about others tracking them too – not least since it appears clear from the PRISM project that if commercial organisations track people, governments might try to get access to that tracking, and perhaps even succeed. As you know, the Do Not Track initiative was designed with commercial tracking in mind – but it has become a little bogged down since it began, and looks as though it might be far less effective than it could be. You could change that: put your considerable power into making it strong and robust, very clearly ‘do not track’ rather than ‘do not target’, and, most importantly, ensure that do not track is on by default. As you clearly care about the surveillance of your users, I know that you’ll want them not to be tracked unless they actively choose to let advertisers track them. That’s the privacy-friendly way – and as supporters of privacy, I’m sure you’ll want to support it. Larry, in particular, I know this is something you’ll want to do: as perhaps the world leader in advertising – and now also in privacy – your support will be both welcome and immensely valuable.

Anonymity – no more ‘real names’ policies

As the UN Special Rapporteur on Freedom of Expression and Opinion, Frank La Rue, recently reported, privacy – and in particular anonymity – is a crucial underpinning of freedom of expression on the internet. I’m sure you will have read his report – and will have realised that your insistence on people using real names when they use your services is a mistake. I imagine, indeed, that you’re already preparing to reverse those policies and come out strongly for people’s right to use pseudonyms – particularly you, Mark, as Facebook is so noted for its ‘real names’ policy. As supporters of privacy, there can’t be any other way – and now that you’re both so clearly in the privacy-supporting camp, I feel confident that you’ll make that choice. I’m looking forward to the press releases already.

Data Protection Reform

As supporters of privacy, I know you’ll be aware of the reform programme currently under way for the European data protection regime – data protection law is strongly supportive of individual privacy, and may indeed be the most important legal protection for privacy in the world. You might be shocked to discover that there are people from both of your companies lobbying to weaken and undermine that reform – so I’m sure you’ll tell them at once to stop that lobbying, and instead to get solidly behind those looking for better protection for individual privacy and stronger rights for people to protect themselves from tracking and misuse of their data. As you are now the champions of individual privacy, I’m sure you’ll be delighted to do so – and I suspect memos have already been issued from your desks ordering those lobbying teams to change their stance and support, rather than undermine, individuals’ rights over their data. I know that those pushing for this reform will be delighted by your new-found support.

That support, I’m sure, will build on Eric Schmidt’s recent revelation that he thinks the internet needs a ‘delete’ button – so you’ll be backing Viviane Reding’s ‘right to be forgotten’ and doing everything you can to build in easy ways for people to delete their accounts with you, to remove all traces of their profiling and related data and so on.

Geo-location, Facial Recognition and Google Glass

Your new-found zeal for privacy will doubtless also be reflected in the way that you deal with geo-location and facial recognition – and in Larry’s case, with Google Glass. Of course you’ve probably had privacy very much in the forefront of your thoughts in all of these areas, but just haven’t yet chosen to talk about it. Moving away from products that gather location data by default, and cutting back on facial recognition except where people really need it and have given clear and properly informed consent, will doubtless be built into your new programmes – and, Larry, I’m sure you’ll find some radical way to cut down on the vast array of privacy issues associated with Google Glass. I can’t quite see how you will at the moment, but I’m sure you’ll find a way, and that you’re devoting huge resources to doing so.

Supporting privacy

We in the privacy advocacy field are delighted to have you on our side now – and look forward greatly to seeing that support reflected in your actions, and not just in relation to government surveillance. I’ve outlined some of the ways that this might be manifested in reality – I am waiting with bated breath to see it all come to fruition.

Kind regards

Paul Bernal

P.S. Tongue very firmly in cheek

Big Brother is watching you…. and so are his corporate partners

Privacy advocates are spoilt for choice these days when it comes to what to complain about: privacy invasions by business, or privacy invasions by the authorities? Over the last year or so, I’ve written regularly about both – whether it be my seemingly endless posts in recent weeks about Facebook, or the many times I wrote last year about the wonderful Snoopers’ Charter, our Communications Data Bill (which, after its humiliation, is due to re-emerge fairly shortly).

It’s a hard question to answer – and I tend to oscillate between the two in terms of which I think is more worrying, more of a threat. And then a new story comes along to remind me that it isn’t either of them on its own that we should really be worried about – it’s when the two work together. Another such story has just come to light, this time in The Guardian:

“Raytheon’s Riot program mines social network data like a ‘Google for spies’, drawing ire from civil rights groups”

The essence of the story is simple. Raytheon is reported to have developed software “capable of tracking people’s movements and predicting future behaviour by mining data from social networking websites”. Whether the details of the story are correct, and whether Raytheon’s software is particularly good at doing what it is supposed to do, isn’t really the main point: the emergence of software like this was always pretty close to inevitable. And it will get more effective – profiling will get sharper, targeting more precise, predictions more accurate.

Inevitable and automatic

What’s more, this isn’t just a matter of ‘friendly’ policemen or intelligence operatives looking over our Facebook posts or trawling through our tweets – this is software, software that will operate automatically and invisibly, and can look at everything. And it’s commercially produced software. Raytheon says that ‘it has not sold the software – named Riot, or Rapid Information Overlay Technology – to any clients’, but it will. It’s commercially motivated – and investigations by groups such as Privacy International have shown that surveillance technology is being sold to authoritarian regimes and others around the world in alarming ways.

If you build it, they will come

The real implication is that when software like this is developed, the uses will follow. Perhaps it will be used at first for genuinely helpful purposes – tracking real terrorists, finding paedophiles and so on (and you can bet that the fights against terrorism and child abuse will be amongst the first reasons wheeled out for allowing this kind of thing) – but those uses will multiply. Fighting terrorism will become fighting crime, which will become fighting disorder, which will become fighting potential disorder, which will become locating those who might have ‘unhelpful’ views. Planning a protest against the latest iniquitous taxation or benefits change? Trying to stop your local hospital being shut or your local school being privatised? Supporting the ‘wrong’ football team?

Just a quick change in the search parameters and this kind of software, labelled by the Guardian a ‘Google for spies’, will track you down and predict your next moves. Big Brother would absolutely love it.

A perfect storm for surveillance

This is why, in the end, we should worry about both corporate and government surveillance. The more data that private businesses gather, and the better they get at profiling – even for the most innocuous of purposes, or for that all too common one, making money – the more that data, and those techniques, can be used by others.

We should worry about all of this – and fight it on all fronts. We should encourage people to be less blasé about what they post on Facebook. I may be a bit extreme in regularly recommending that people leave Facebook (see my 10 reasons to leave Facebook post), because I know many people rely on it at the moment, but we should seriously advise people to rely on it less, to use it more carefully – and to avoid things like geo-location (see my what to do if you can’t leave Facebook post). We should oppose any and all universal government internet surveillance programmes – like the Snoopers’ Charter – and we should support campaigns like Privacy International’s against the international trade in surveillance technology.

Facebook and others create a platform. We put in all our data. Technology firms like Raytheon write the software. It all comes together in a perfect storm for surveillance.

If you “can’t” leave Facebook…

I’ve been posting a lot about Facebook recently. I gave ‘10 reasons to leave Facebook‘ a few weeks ago – but for many people leaving seems either impossible or very, very difficult. So, what can you do if you ‘can’t’ leave Facebook and you want to minimise your privacy risks? After the new material on Facebook’s Graph Search (see my blog posts here and here), and now the revelation that people on Facebook will no longer have the option to avoid being ‘searchable’, this is becoming more and more important.

So what can you do? Well, here are twelve suggestions from me – I’m sure there are many more…

  1. Check your privacy settings. Really check them. Lock them down as tight as you can – but remember that they only control what other users can see, not what Facebook can see or use for their profiling of you.
  2. Prune your ‘friends’ list down to an absolute minimum. With Graph Search this is particularly important – it seems as though Graph Search will assume that if you’re ‘friends’ with someone, then all your data is available for those friends to search and analyse in full. If they’re just people you met once, or were in the same year as at school or college, would you really trust them with your most intimate details?
  3. Never press the ‘like’ button ever, ever again. The ‘like’ button is another of the profiling keys – and could effectively give permission for those whom you like to access your data.
  4. Do a serious deletion job on your photographs – Graph Search will search them, and facial recognition may be applied, not just to you but to anyone in the photos. If you have a friend (a real one) who’s in one of your photos, it’s not just you who’s being exposed to privacy risks.
  5. Think before you post any more photos – same reasons, really. Do you really need to ‘share’ that picture? If you don’t, don’t! And if you ‘need’ to, is there a way to do it other than Facebook?
  6. Never use geolocation again – at least not on Facebook. If you’re given the option to allow any application to know your location, say no! Geolocation is a tool that’s immensely useful at times – when you’re using maps, or other transport apps (for train timetables etc) but most of the time it’s really not necessary at all.
  7. Check the apps you use to access Facebook on your phone or tablet – there are all kinds of risks associated with apps that people simply don’t think about. The settings may be very different from what you think – again, think geolocation, think photo tagging.
  8. Think about when you post, as well as what and why. Posting at night, for example, could profile you as a ‘night owl’, for whatever reason.
  9. Don’t play games on Facebook – play them somewhere else. Games are primarily used for profiling, and may have privacy risks attached that are not immediately obvious.
  10. Don’t sign into any other service ‘via Facebook’ if you have the option. All you’re doing is allowing the two services to share data, adding depth and strength to their profiles of you.
  11. Sign out of Facebook whenever you’re not using it – don’t leave it running in the background while you do other stuff. When you’re signed in, you can be giving Facebook permission to track or follow your other activities. Now, they might be doing that anyway, but you shouldn’t give them the legal excuse to!
  12. Keep ‘work’ and home separate on Facebook if you can. It may not be easy….

Finally, though, think again about whether you really do need to be on Facebook. You may need to – or you may want to – but if so, you should manage your risks and be as ‘savvy’ about it as you can.

Facebook Graph Search: Privacy issues….

I wrote yesterday about Facebook’s new ‘Graph Search’ system – in particular, about the way in which it is intended to convince people to put more and better data onto the system, and to lock them and businesses further into Facebook. What I didn’t talk about much was privacy… not because there aren’t privacy issues with the new system, but because there are so many privacy issues that it’s hard to know where to start.

One of the most interesting things is that, as part of the launch, Mark Zuckerberg has been very keen to stress that privacy is built into the system, even releasing information suggesting that the reason he went with Bing rather than Google for the web-search part of the service is that Google weren’t ‘privacy-friendly enough’ for him – see this piece in the Guardian. Why did he do that? Well, in one way I’m glad he did, because it shows that he knows people care about privacy, and that Facebook doesn’t exactly have the greatest reputation on privacy, to put it mildly. However, I’m far from convinced that what he’s been saying means very much – because the essence of Facebook Graph Search makes privacy very, very hard to achieve.

There are many things to mention – I can’t even get close to covering them all in one post. I’ll start with the very purpose of the system. Zuckerberg gave an example of a possible search: “people who like fencing and live in Palo Alto”. It doesn’t take much of a stretch to turn that into something distinctly creepy: “single women who live in Palo Alto, work in Menlo Park and ‘like’ public transportation”. You can take it a lot further than that – which is why many commentators suggest that the system could be a stalker’s dream. Facebook already allows things that point in that direction: the scrutiny of other people’s profiles is one of the points of the system. Graph Search takes that to another level…

Secondly, the idea behind the ‘built-in privacy’ that Zuckerberg talked about is that ‘stuff’ is only searchable if you’ve let friends see it anyway. There are big problems with that. Firstly, it relies on people understanding and using Facebook’s notoriously over-complex privacy settings – which is quite something to rely on. Secondly, it assumes that if you’re willing to let your friends see or know something, then you’re willing to let it be aggregated, analysed, searched, sorted and so forth… which is of course what Facebook does anyway, but I would be very surprised if many Facebook users realise this. For that reason, and others, I suppose we should welcome Graph Search – it demonstrates graphically what Facebook actually does with your data.

Thirdly, Zuckerberg made the point that photos and location information would be part of Graph Search – again, something that we should all have known, but I’m not sure people have fully understood. Combine this with facial recognition, and with the new smartphone Facebook apps that will automatically post photos you take with your camera onto Facebook, complete with location stamp, and you get a whole new scale of possible intrusion. Add this to the stalking capabilities noted above, and you’ve got quite a tool…

The point with a lot of this is that it’s all becoming the default – which is clearly the intention. As I noted in my previous post, Graph Search will work best if you ‘give’ Facebook all your information – and Facebook is providing the tools to let you give it all to them. Moreover, they’re making it easier to give that information than not to give it. They want all your data… and not just to give you a better service. They want it because they can use it to make more money…

…which brings me to the final privacy point. Zuckerberg makes the point again and again that in some ways you are in control of your privacy, through your privacy settings. You decide who sees what. However, that’s not really true at all. You may decide which other users get to see which bits of your data – but Facebook gets to see it all. Facebook gets to analyse it, to profile you through it, to effectively share it with its partners, to use it to categorise you for advertisers, or for others pretending to be advertisers. You may have more privacy from other people – but to Facebook, you are transparent, and have no privacy at all. Graph Search doesn’t really change that – but it should make it clearer that this is the case, and what some of the implications are.

Over the holiday season I wrote my ‘Ten Reasons to Leave Facebook’. For me, Graph Search adds an eleventh – and makes some of the other ten even clearer than before. It’s not going to convince me to re-join Facebook. Quite the opposite: it makes it crystal clear to me that I was right to leave when I did.