Dave Eggers’ The Circle: a book for our times…

I was introduced to Dave Eggers’ novel, The Circle, by Professor Andrew Murray – one of the pre-eminent scholars in IT Law in the UK, and also one of my PhD supervisors. I know I’m very late to this game – the book came out in 2013, and all the cool people will already have read it or reviewed it – but in this case I think it’s worth it. And the fact that someone like Andrew Murray would recommend it should give pause for thought: this isn’t just an entertaining piece of science fiction, it’s a book that really makes you think. It’s not just a dystopian vision of the future, it’s one that is far, far closer to reality than almost any I’ve read – and dystopian novels and films are pretty much my favourite genre.

It’s a book that reminded me why, unlike most of my schoolmates, I always preferred Brave New World to 1984 – and why, of the various privacy stories of the last few months, I suspect the Facebook Experiment and the ruling over the Right to be Forgotten will ultimately matter more than the passing of the deeply depressing DRIP. In the end, as The Circle demonstrates graphically, we have more to fear from corporate domination of the Internet than we do from all the spooks and law enforcement agencies.

The Circle, from which the novel gets its name, is a technology company that combines a great deal of Google and Facebook with a little dash of Apple and a touch of Twitter. It dominates search and social media, but also makes cool and functional hardware. Eggers’ triumph in The Circle is that he really gets not just the tech but the culture that surrounds it – little details like sending frowns to paramilitaries in Guatemala echo campaigns like #BringBackOurGirls in their futility, superficiality and ultimate inanity. The lives portrayed in the Circle should send shivers down the spines of any of us who spend much time on Twitter or Facebook: that I read the book whilst on holiday without much Internet access made the point to me most graphically.

Privacy is theft

Eggers echoes both 1984 and Brave New World in using slogans to encapsulate concepts – exaggerating to make the point. For the Circle, these are:

Secrets are lies
Sharing is caring
Privacy is theft

All three are linked together – and connected to the idea that there’s something almost mystical about data. We don’t just have no right to privacy, we have a duty to disclose, a duty to be transparent. A failure to disclose means we’re depriving others of the benefits of our information: by claiming privacy, we’re stealing opportunities and advantages that others have the right to. If we care about others, we should share with them. This is Facebook, this is Google Flu Trends – and it’s the philosophy that implies that those of us who oppose the care.data scheme, through which all our health data will be shared with researchers, pharmaceutical companies and many others, are selfish Luddites likely to be responsible for the deaths of thousands.

It is also the philosophy behind a lot of the opposition to the right to be forgotten. That opposition is based on the myth – one that Eggers exposes excellently – that the records on the Internet represent ‘the truth’ and that tampering with them, let alone deleting anything from them, is tantamount to criminality. Without spoiling the plot too much, one of the characters is psychologically and almost physically destroyed by the consequences of that. Eggers neatly leaves it unclear whether the key ‘facts’ that do the damage are actually real – he knows that this, ultimately, isn’t the point. Even if it all were true, the idea that maintaining it and exposing it would be a general good, something to be encouraged and fought for, is misguided at best.

It’s about power – and how it’s wielded

In the novel, The Circle has the power – and it wields it in many ways. Emotional manipulation – keeping people happy and at the same time keeping them within the Circle – is the key point, and the echoes of the Facebook Experiment, about which much has been written (though much of it misses the deeper points), are chilling here. One of the real functions of the experiment was for Facebook to find ways to keep people using Facebook…

Another of the key ways that the Circle wields power is through its influence over lawmakers – and the same is sadly evident of Google and Facebook, in the UK as much as in the US. In the UK in particular, the influence over things like opposition to data protection reform – and the right to be forgotten – is all too clear. It would be great if this could change, but as in the novel, the powers and common interests are far too strong for there to be much chance of that. More’s the pity.

As a novel, The Circle is not without fault. I guessed the main plot twist less than half-way through the book. There’s a good deal of hyperbole – but this is dystopian fiction, after all – and the tech itself is not exactly described convincingly. What’s more, the prose is far from beautiful, the characters are mostly rather two-dimensional, and they’re often used primarily to allow Eggers to make his points, frequently through what amount to set speeches – but Huxley was guilty of that from time to time too. Those speeches, however, are often worth reading. Here, one of the dissidents explains his objections:

“It’s the usual utopian vision. This time they were saying it’ll reduce waste. If stores know what their customers want, then they don’t overproduce, don’t overship, don’t have to throw stuff away when it’s not bought. I mean, like everything else you guys are pushing, it sounds perfect, sounds progressive, but it carries with it more control, more central tracking of everything we do.”

“Mercer, the Circle is a group of people like me. Are you saying that somehow we’re all in a room somewhere, watching you, planning world domination?”

“No. First of all, I know it’s all people like you. Individually you don’t know what you’re doing collectively. But secondly, don’t presume the benevolence of your leaders.”

In that brief exchange Eggers shows how well he gets the point. A little later he nails why we should care much more about this but don’t, focussing instead on the spooks of the NSA and GCHQ.

“Here, though, there are no oppressors. No one’s forcing you to do this. You willingly tie yourself to these leashes.”

That’s the problem. We don’t seem to see the risk – indeed, just as in the novel, we willingly seem to embrace the very things that damage us. Lawmakers, too, seem not to see the problem – and, as noted, all too often allow themselves to be lobbied into compliance. The success of Google’s lobbyists over the right to be forgotten is testimony to this. Even now, people who really should know better are being persuaded to support the Circle’s – sorry, I mean Google’s – business model rather than address a real, important privacy issue.

Coming to a society near you…

We’re taking more and more steps in the direction of the Circle. Not just the Facebook experiment and the reaction to the ‘right to be forgotten’ ruling – but even in the last week or two a House of Lords committee has recommended an end to online anonymity, effectively asking service providers to require real names from users before providing services. This is one of the central planks of the way the Circle takes control over people’s lives, and one which our lawmakers seem to be very happy to hand over to them. There are also stories going around about government plans to integrate various databases, from health and the DVLA to criminal records… another key tenet of the Circle’s plans… The ‘detailed’ reasons for doing so sound compelling – but the ultimate consequences could be disastrous…

Anyway, that’s enough from me. Read the book. I’ll be recommending it to my Internet Law and Privacy students, but I hope it’s read much more widely than that. It deserves to be.


Facebook, Google and the little people….

This last week has emphasised the sheer power and influence of the internet giants – Facebook and Google in particular.

The Facebook Experiment

First we had the furore over the so-called ‘Facebook Experiment’ – the revelation that Facebook had undertaken an exercise in ‘emotional contagion’, effectively trying to manipulate the emotions of nearly 700,000 of its users without their consent, knowledge or understanding. There were many issues surrounding it (some of which I’ve written about here), starting with the ethics of the study itself, but the most important thing to understand is that the experiment succeeded, albeit not very dramatically. That is, by manipulating people’s news feeds, Facebook found that they were able to manipulate people’s emotions. However you look at the ethics of this, that’s a significant amount of power.

Google and the Right to be Forgotten

Then we’ve had the excitement over Google’s ‘clumsy’ implementation of the ECJ ruling in the Google Spain case. I’ve speculated before about Google’s motivations in implementing the ruling so messily, but regardless of their motivations the story should have reminded us of the immense power that Google have over how we use the internet. This power is demonstrated in a number of ways. Firstly, in the importance we place on whether a story can be found through Google – those who talk about the Google Spain ruling being tantamount to censorship are implicitly recognising the critical role that Google plays and hence the immense power that they wield. Secondly, ultimately it is how Google decides to interpret and implement the ruling of the court that determines whether we can or cannot find a story. Thirdly, the way that Google seems to be able to drive the media agenda has been apparent: it sometimes seems as though people in the media are dancing to Google’s tune.

Further, though the early figures for takedown requests under the right to be forgotten sound large – 240,000 since the Google Spain ruling – the number of requests they deal with based on copyright is far higher: 42,324,954 since the decision. Right to be forgotten requests amount to little more than half a percent of the copyright figure. Google deals with these requests without the fanfare of the right to be forgotten – and apart from a few internet freedom advocates, very few people seem to even notice. Google has that much control, and their decisions have a huge impact upon us.
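For a sense of just how lopsided those two figures are, here is a minimal back-of-the-envelope sketch – it uses only the numbers quoted above, nothing else is assumed:

```python
# Back-of-the-envelope check of the proportion discussed above,
# using only the two figures quoted in this post.
rtbf_requests = 240_000          # right to be forgotten requests since Google Spain
copyright_requests = 42_324_954  # copyright takedown requests handled by Google

share = rtbf_requests / copyright_requests
print(f"Right to be forgotten requests: {share:.2%} of the copyright figure")
# -> Right to be forgotten requests: 0.57% of the copyright figure
```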

Giants vs. Little People

Though the two issues seem to have very little in common, they both reflect the huge power that the internet giants have over ordinary people. It is very hard for ordinary people to fight for their rights – for little people to be able to face up to giants. Little people, therefore, have to do two things: use every tool they can in the fight for their rights, and support each other when that support is needed. When the little people work together, they can punch above their weight. One of the best ways for this to happen is through civil society organisations. All around the world, civil society organisations make a real difference – from the Open Rights Group and Privacy International in the UK to EDRi in Europe and the EFF in the US. One of the very best of these groups – and one that punches furthest above its weight – has been Digital Rights Ireland. They played a critical role in one of the most important legal ‘wins’ for privacy in recent years: the effective defeat of the Data Retention Directive, one of the legal justifications for mass surveillance. They’re a small organisation, but one with expertise and a willingness to take on the giants. Given that so many of those giants – including Facebook – are officially based in Ireland, Digital Rights Ireland are especially important.

Europe vs. Facebook

There is one particular conflict between the little people and the giants that is currently in flux: the ongoing legal fight between campaigner Max Schrems and Facebook. Schrems, who is behind the ‘Europe vs. Facebook’ campaign, has done brilliantly so far, but his case appears to be at risk. After what looked like an excellent result – the referral by the Irish High Court to the ECJ of his case against Facebook (which relates to the vulnerability of Facebook data to US surveillance via the PRISM program) – Schrems is reported to be considering abandoning his case, as the possible costs might bankrupt him if things go badly.

This would be a real disaster – and not just for Schrems. This case really matters in a lot of ways. The internet giants need to know that we little people can take them on: if costs can put us off, the giants will be able to use their huge financial muscle to win every time. It’s a pivotal case – for all of us. For Europeans, it matters in protecting our data from US surveillance. For non-Europeans it matters because it challenges the US giants at a critical point – we all need them to fight against US surveillance, and they’ll only really do that wholeheartedly if it matters to their bottom line. This case could seriously hit Facebook’s bottom line – so if they lost, they’d have to do something to protect their users’ data from US surveillance. And they wouldn’t just do that for European Facebook users, they’d do it for everyone.

Referral to the ECJ is critical, not just because it might give a chance to win, but because (as I’ve blogged before) the ECJ has recently shown more engagement with technological issues and more willingness to rule in favour of privacy – as in the aforementioned invalidation of the Data Retention Directive and in the contentious ruling in Google Spain. We little people need to take advantage of those times when the momentum is on our side – and right now, at least in some ways, the momentum seems to be with us in the eyes of the ECJ.

So what can be done to help Schrems? Well, the first thing I would suggest to Max is to involve Digital Rights Ireland. They could really help him – and I understand that they’ve been seeking to file an amicus brief in the case. They’re good at this kind of thing, and they and other organisations in Europe have experience in raising the funds for this type of case. Max has done brilliant work, but where ‘little people’ have to face up to giants, they’re much better off not fighting alone.

The Facebook Experiment: the ‘why’ questions…

A great deal has been written about the Facebook experiment – what did they actually do, how did they do it, what was the effect, was it ethical, was it legal, will it be challenged and so forth – but I think we need to step back a little and ask two further questions. Why did they do the experiment, and why did they publish it in this ‘academic’ form?

What Facebook tell us about their motivations for the experiment should be taken with a distinct pinch of salt: we need to look further. What Facebook does, it generally does for one simple reason: to benefit Facebook’s bottom line. They do things to build their business, and to make more money. That may involve getting more subscribers, or making those subscribers stay online for longer, or, most crucially and most directly, getting more money from their advertisers. Subscribers are interesting, but the advertisers are the ones that pay.

So, first of all, why would Facebook want to research ‘emotional contagion’? Facebook isn’t a psychology department in a university – they’re a business. There are a few possible reasons – and I suspect the reality is a mixture of them. At the bottom level, they want to check whether emotions can be ‘spread’, and they want to look at the mechanisms through which this spreading happens. There have been conflicting theories – for example, does seeing lots of happy pictures of your friends having exciting holidays make you happier, or make you jealous and unhappy? – and Facebook would want to know which of these is true, and when. But then we need to ask ‘why’ they would want to know all this – and there’s only one obvious answer to that: because they want to be able to tap into that ability to spread emotional effects. They don’t just want to know that emotional contagion works out of academic interest – they want to be able to use it to make money.

This is where the next level of creepiness comes in. If, as they seem to think, they can spread emotional effects, how will they use that ability? With Facebook, it’s generally all about money – so in this case, that means that they will want to find ways to use emotional contagion as an advertising tool. The advertising possibilities are multiple. If you can make people associate happiness with your product, there’s a Pavlovian effect just waiting to make them salivate. If you can make people afraid, they’ll presumably be more willing to spend money on things or services to protect themselves – the lobbying efforts of those in the cybersecurity industry to make us afraid of imminent cyberwarfare or cyberterrorism are an example that springs to mind. So if Facebook can prove that emotional contagion works, and prove it in a convincing way, it opens up a new dimension of possible advertising opportunities.

That also gives part of the answer to the ‘why did they do this in this academic form’ question. An academic paper looks much more convincing than an internal, private research report. Academia provides credibility – though as an academic I’m all too aware of how limited, not to say flimsy, that credibility can be. Facebook can wave the academic paper in the faces of the advertisers – and the government agencies – and say ‘look, it’s not just us that are claiming this, it’s been proven, checked and reviewed, and by academics’.

So far, so obvious – isn’t emotional contagion just like ordinary advertising? Isn’t this all just making a mountain out of a molehill? Well, perhaps to an extent, so long as users of Facebook are aware that the whole of Facebook is, as far as Facebook is concerned, about ways to make money out of them. However, there are reasons that subliminal advertising is generally illegal – and this has some of the feeling of subliminal advertising to it, with a distinct ‘whiff of creepy’ about it. We don’t know how we’re being manipulated. We don’t know when we’re being manipulated. We don’t know why we’re seeing what we’re seeing – and we don’t know what we’re not seeing. If people imagine their news feed is just a feed, tailored perhaps a little according to their interests and interactions, a way of finding out what is going on in the world – or rather in their friends’ worlds – then they are being directly and deliberately misled. I for one don’t like this – which is why I’m not on Facebook and suggest that others leave it too – but I do understand that I’m very much in the minority in that.

That brings me to my last ‘why’ question (for now). Why didn’t they anticipate the furore that would come from this paper? Why didn’t they realise that privacy advocates would be up in arms about it? I think there’s a simple answer to that: they did, but they didn’t mind. I have a strong suspicion, which I mentioned in my interview on BBC World News, that they expected all of this, and thought that the price in terms of bad publicity, a little loss of goodwill, a few potential investigations by data protection authorities and others, and perhaps even a couple of lawsuits, was one that was worth paying. Perhaps a few people will spend less time on Facebook, or even leave Facebook. Perhaps Facebook will look a little bad for a little while – but the potential financial benefit from the new stream of advertising revenue, the ability to squeeze more money from a market that looks increasingly saturated and competitive, outweighs that cost.

Based on the past record, they’re quite likely to be right. People will probably complain about this for a while, and then when the hoo-haa dies down, Facebook will still have over a billion users, and new ways to make money from them. Mark Zuckerberg doesn’t mind looking like the bad guy (again) for a little while. Why should he? The money will continue to flow – and whether it impacts upon the privacy and autonomy of the people on Facebook doesn’t matter to Facebook one way or another. It has ever been thus….


Facebook’s updated terms and conditions…. ;)

In the light of the recently revealed ‘Facebook Experiment’, Facebook has issued new, simplified terms and conditions.*

Emotional Manipulation

  1. By using Facebook, you consent to having your emotions and feelings manipulated, and those of all your friends (as defined by Facebook) and relatives, and those people that Facebook deems to be connected to you in any way.
  2. The feelings to be manipulated may include happiness, sadness, depression, fear, anger, hatred, lust and any other feelings that Facebook finds itself able to manipulate.
  3. Facebook confirms that it will only manipulate those emotions in order to benefit Facebook, its commercial or governmental partners and others.

Research

  1. By using Facebook, you consent to being used for experiments and research.
  2. This includes the use of your data and any aspect of your activities and profile that Facebook deems appropriate.
  3. Facebook confirms that the research will be used only to improve Facebook’s service, to improve Facebook’s business model or to benefit Facebook or any of Facebook’s commercial or governmental partners in some other way.

Ethics and Privacy

  1. Facebook confirms that it has no ethics and that you have no privacy.


*Not actually Facebook’s terms and conditions…

PRISM: Share with the CIA – and Facebook!

[Image: ‘new Facebook privacy options’ – one version of the ‘Share with the CIA’ joke]

Going out for a pizza? Who wants to know?

There’s been a joke going around the net over the last couple of weeks, inspired by the PRISM revelations. The picture above is just one of the examples – variants include replacing the CIA with the NSA, or adding the two together so that it says, effectively, ‘Share with Friends, the CIA and the NSA’ and so on. It’s a pretty good joke – and spot on about the nature of the PRISM programme (and indeed the equivalents elsewhere in the world, such as the UK’s Communications Data Bill, the ‘Snoopers’ Charter’), but ultimately it leaves one key element out of the equation. It should also include ‘share with Facebook’…

Share with only me, the CIA, the NSA and Facebook!

Something that seems to be forgotten pretty much every time is that whenever you put something on Facebook, no matter how tightly and precisely you select your ‘privacy’ settings, Facebook themselves always get to see your stuff. It’s never ‘just you’, or ‘just you and your close friends’: Facebook themselves are always there. That means a lot of different things – at the very least that they will use that information to build up your profile and to choose who is going to target advertising at you. It might be used directly for Facebook themselves to target products and services at you. It might mean that they put you on various lists of people of a certain kind to receive mailings – lists that could then be used for other purposes, potentially sold (perhaps not now, but in the future?) or even hacked…

Data is vulnerable

…and that is the point that shouldn’t be forgotten. If you put something on Facebook, or if Facebook infers something from the information that you put up, that information is potentially vulnerable. Now it’s easy to worry about spies and spooks – and then to dismiss that worry because you’re not really the kind of person that spies and spooks would care about – but there are others to whom the kind of information you put on Facebook could be valuable. Criminals intent on identity theft. Other criminals looking for targets in other ways (if you’re going out for a pizza, that means you’re not at home… burglary opportunity?). Insurers wanting to know whether they should put up your premiums (aha, they often go out for pizzas – doesn’t sound like a healthy diet to me! Up with the premiums!), potential employers checking you out (if you’re going out for a pizza at an unsuitable time of day, you might be an unsuitable employee) and so on.

Don’t imagine your ‘privacy’ settings really imply privacy…

This doesn’t mean that we shouldn’t ‘share’ anything on Facebook (or Google, or any other system online – because what happens with Facebook happens just as much with others), but that we should be a touch more aware of the situation. The PRISM saga has highlighted that what we share can be seen by the authorities – and has triggered quite a lot of concern. That concern is, in my opinion, only a small part of the story. What the authorities do is only one aspect – and for most people a far less important one than the rest of the story. Having your insurance premiums raised, having credit refused, becoming a victim of identity-related crimes, being socially embarrassed or humiliated, becoming a victim of cyber-bullying and so on are much more common for most of us. What we do online can contribute to all of these – and we should be a bit more aware of it.