The Facebook Experiment: the ‘why’ questions…
A great deal has been written about the Facebook experiment – what did they actually do, how did they do it, what was the effect, was it ethical, was it legal, will it be challenged and so forth – but I think we need to step back a little and ask two further questions: why did they do the experiment, and why did they publish it in this ‘academic’ form?
What Facebook tell us about their motivations for the experiment should be taken with a distinct pinch of salt: we need to look further. What Facebook does, it generally does for one simple reason: to benefit Facebook’s bottom line. They do things to build their business, and to make more money. That may involve getting more subscribers, or making those subscribers stay online for longer, or, most crucially and most directly, getting more money from their advertisers. Subscribers are interesting, but the advertisers are the ones that pay.
So, first of all, why would Facebook want to research ‘emotional contagion’? Facebook isn’t a psychology department in a university – they’re a business. There are a few possible reasons – and I suspect the reality is a mixture of them. At the most basic level, they want to check whether emotions can be ‘spread’, and they want to look at the mechanisms through which this spreading happens. There have been conflicting theories – for example, does seeing lots of happy pictures of your friends having exciting holidays make you happier, or make you jealous and unhappy? – and Facebook would want to know which of these is true, and when. But then we need to ask why they would want to know all this – and there’s only one obvious answer to that: because they want to be able to tap into that ability to spread emotional effects. They don’t just want to know that emotional contagion works out of academic interest – they want to be able to use it to make money.
This is where the next level of creepiness comes in. If, as they seem to think, they can spread emotional effects, how will they use that ability? With Facebook, it’s generally all about money – so in this case, that means that they will want to find ways to use emotional contagion as an advertising tool. The advertising possibilities are multiple. If you can make people associate happiness with your product, there’s a Pavlovian effect just waiting to make them salivate. If you can make people afraid, they’ll presumably be more willing to spend money on goods or services to protect themselves – the lobbying efforts of those in the cybersecurity industry to make us afraid of imminent cyberwarfare or cyberterrorism are an example that springs to mind. So if Facebook can prove that emotional contagion works, and prove it in a convincing way, it opens up a new dimension of advertising opportunities.
That also gives part of the answer to the ‘why did they do this in this academic form’ question. An academic paper looks much more convincing than an internal, private research report. Academia provides credibility – though as an academic I’m all too aware of how limited, not to say flimsy, that credibility can be. Facebook can wave the academic paper in the faces of the advertisers – and the government agencies – and say ‘look, it’s not just us that are claiming this, it’s been proven, checked and reviewed, and by academics’.
So far, so obvious – isn’t emotional contagion just like ordinary advertising? Isn’t this all just making a mountain out of a molehill? Well, perhaps to an extent, so long as users of Facebook are aware that the whole of Facebook is, as far as Facebook is concerned, about ways to make money out of them. However, there are reasons that subliminal advertising is generally illegal – and this has some of the feel of subliminal advertising about it, a distinct ‘whiff of creepy’. We don’t know how we’re being manipulated. We don’t know when we’re being manipulated. We don’t know why we’re seeing what we’re seeing – and we don’t know what we’re not seeing. If people imagine their news feed is just a feed, tailored perhaps a little according to their interests and interactions, a way of finding out what is going on in the world – or rather in their friends’ worlds – then they are being directly and deliberately misled. I for one don’t like this – which is why I’m not on Facebook, and suggest that others leave it too – but I do understand that I’m very much in the minority in that.
That brings me to my last ‘why’ question (for now). Why didn’t they anticipate the furore that would come from this paper? Why didn’t they realise that privacy advocates would be up in arms about it? I think there’s a simple answer to that: they did, but they didn’t mind. I have a strong suspicion, which I mentioned in my interview on BBC World News, that they expected all of this, and thought that the price in terms of bad publicity, a little loss of goodwill, a few potential investigations by data protection authorities and others, and perhaps even a couple of lawsuits, was one that was worth paying. Perhaps a few people will spend less time on Facebook, or even leave Facebook. Perhaps Facebook will look a little bad for a little while – but the potential financial benefit from the new stream of advertising revenue, the ability to squeeze more money from a market that looks increasingly saturated and competitive, outweighs that cost.
Based on the past record, they’re quite likely to be right. People will probably complain about this for a while, and then when the hoo-haa dies down, Facebook will still have over a billion users, and new ways to make money from them. Mark Zuckerberg doesn’t mind looking like the bad guy (again) for a little while. Why should he? The money will continue to flow – and whether it impacts upon the privacy and autonomy of the people on Facebook doesn’t matter to Facebook one way or another. It has ever been thus….
Privacy isn’t selfish…
The importance of privacy is often downplayed. It sometimes seems as though privacy is viewed as something bad, something inherently selfish, something that ‘good’ people don’t need or really want – or at the very least are willing to sacrifice for the greater good. To me, that displays a fundamental misunderstanding of privacy and of the role it plays in society. Privacy isn’t selfish – though it is sometimes used for selfish means – it’s one of the crucial elements of a functioning society. We all need privacy – and not just for our personal, individualistic needs, but to be able to function properly in society. We need to have our privacy respected – and we need to respect others’ privacy.
Two current stories exemplify both the importance of privacy and the way that the arguments often get skewed: the debate over mass surveillance of the internet, and the issue of the ‘opening up’/’selling off’ of health data from the NHS (the ‘care.data’ story).
Mass surveillance and selfishness
The argument here (which I’ve discussed before, most recently here) is that the only reason to oppose mass surveillance is to protect your own, individual and selfish personal privacy. As ‘good’ people have ‘nothing to hide’, they’ve got ‘nothing to lose’ by sacrificing this individual, selfish concern for the greater good. As Sir Malcolm Rifkind put it:
“There is a balance to be found between our individual right to privacy and our collective right to security.”
The implication of Rifkind’s words is pretty clear – the collective right to security is more important. It’s ‘collective’ rather than ‘individual’ – and hence altruistic rather than selfish. There are many holes in the argument. Surveillance impacts upon rights that are far from individual or selfish – chilling free speech (and the right of others to hear your speech), limiting rights to assemble and associate both offline and online, and more. It creates a power imbalance between those who have the information (in this case the authorities) and those about whom the information is held (in this case each and every one of us) which ultimately undermines pretty much every element of how our society functions. That’s why police states are so keen on surveillance and information gathering – it gives them control. It’s not just selfish to want to avoid that kind of control – it’s for the good of society as a whole.
Furthermore, the idea that only those who have ‘something to hide’ should be concerned about privacy is in itself fundamentally flawed. People don’t just want to hide ‘bad’ or ‘discreditable’ information – and privacy isn’t just about ‘hiding’ things either. Privacy is about autonomy – about the ability to have at least an element of control over what information about you is made available to whom. Some information you might be happy to share with your friends but not with your parents, your employers or the government. What you might want to share can change over time – even the nicest things are often best held back for your own reasons. It’s not selfishness – it’s humanity.
Health data and selfishness
The parallels between the care.data debate and the debate over mass surveillance may not be immediately apparent, but the two issues are closer than they might seem. The argument goes broadly like this: those who are objecting to the sharing of health data are selfishly worrying about a minuscule risk to their own, individual privacy and should be sacrificing that selfishness to the collective good created by the sharing of health data. Just as for mass surveillance, the balance suggested is between an almost irrelevant selfishness and a general and bountiful good created by the sharing of health data.
Again, I think this argument is misstated – though perhaps not as clearly as with mass surveillance. The first thing to say is that the risk to individual privacy is not minuscule. The suggestion by the proponents of the system is that because the data is ‘anonymised’, privacy is protected. The problem is that anonymisation, both theoretically and practically, has generally been shown to be ineffective. ‘Anonymous’ records can be deanonymised – and individual, personal and deeply private information can be extracted.
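To make that concrete, here’s a minimal sketch of the classic ‘linkage attack’, in Python, with entirely invented data and field names. Nothing in the ‘anonymised’ records names anyone – but joining them to a public dataset (an electoral roll, say) on shared quasi-identifiers does the job:

    # A toy 'linkage attack' - all data and field names invented.
    # Nothing in the 'anonymised' records names anyone, but joining
    # them to a public dataset on shared quasi-identifiers does.

    anonymised_health = [
        {"postcode": "NR4 7TJ", "dob": "1976-03-01", "sex": "M",
         "diagnosis": "depression"},
        {"postcode": "NR2 1AA", "dob": "1980-11-23", "sex": "F",
         "diagnosis": "diabetes"},
    ]

    public_register = [  # e.g. an electoral roll, or a social media profile
        {"name": "John Smith", "postcode": "NR4 7TJ",
         "dob": "1976-03-01", "sex": "M"},
    ]

    QUASI_IDENTIFIERS = ("postcode", "dob", "sex")

    def reidentify(anon_rows, public_rows):
        """Yield (name, diagnosis) wherever the quasi-identifiers coincide."""
        for anon in anon_rows:
            for person in public_rows:
                if all(anon[k] == person[k] for k in QUASI_IDENTIFIERS):
                    yield person["name"], anon["diagnosis"]

    for name, diagnosis in reidentify(anonymised_health, public_register):
        print(name, "->", diagnosis)  # John Smith -> depression

Real attacks are far more sophisticated, but the principle – that combinations of individually innocuous attributes are close to unique – is precisely why ‘anonymised’ records are not as safe as they sound.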
The next question is whether making this data available will actually benefit the ‘collective good’. The assumption that seems to be made – the image being presented – is that the data will go to research laboratories developing cures for terrible diseases. If we don’t let this data be shared in this way, we’ll stop them developing cures for currently incurable cancers and so forth. The reality appears likely to be quite different – one key aim seems to be to sell the data to drug companies and insurance firms. Both of these uses need to be handled with a great deal of care.
As for mass surveillance, the ultimate result may well be more about a transfer of power from individuals to organisations that may well be far from benevolent. A society where those in control of health care – and in particular access to health care, whether to services or to drugs – have complete informational control over individuals is in some ways just as bad as (and remarkably similar to) one where the authorities have informational control over people. In a society like that in the UK, where there is increasing and creeping privatisation of health services, this is particularly worrying. Being concerned about this isn’t selfishness.
A more privacy friendly society?
In both cases, it is important to understand that we’re not left with just one choice. This isn’t black and white. It’s not ‘mass surveillance or anarchy’, or ‘complete health data sharing or a collapse in public health’. It really is about balance – but finding the balance should be based on a more appropriate and accurate analysis of the issues. For mass surveillance we need to look more carefully at the impact of that surveillance, and ask for more evidence of the collective benefit.
For health data we need to look more carefully – much more carefully – at the risks of deanonymisation. And, if we can ameliorate those risks appropriately, we need to set the terms of the ‘opening up’/’selling off’ of the data in a way that benefits society. Drug companies should only get access to that information if they make appropriate commitments to make the drugs they develop available in forms and at prices that benefit society – and the NHS in particular. For insurance companies the terms should be even tougher – if they should be allowed access at all.
Most of all, though, we need to have a proper debate about this, and the case needs to be made. Anyone who has received the care.data leaflet through their door (mine came last week) should be shown how it presents only one side of the argument. This is a critical moment – for both health data and surveillance. What we do now will be very hard to reverse, not just for us but for future generations. To care about them is the opposite of selfishness.
Surveillance and Consent
I was fortunate enough to speak at the Internet and Human Rights Conference at the Human Rights Law Centre at the University of Nottingham on Wednesday. My talk was on the topic of internet surveillance – as performed both by governments and by commercial entities. This is approximately what I said – I very rarely have fully written texts when I talk or lecture, and this was no exception. As you can see, I had one ‘official’ title, but the talk had a number of alternative titles…
Surveillance and Consent
Or
Big Brother is watching you – and so are his commercial partners
Or
What Edward Snowden can teach us about the commercial Internet
Or
To what do we consent, when we enter the Internet?
In particular, do we consent to surveillance? If we do, by whom? When? And on what terms? There are three parts to this talk:
1) Government surveillance and consent
2) Commercial surveillance and consent
3) Forging a (more) privacy-friendly future?
1: Government surveillance and consent
Big Brother is Watching You. He really is. Some of us have always thought so – even if we’ve sometimes been called conspiracy theorists when we’ve articulated those thoughts. Since the revelations of Edward Snowden this summer, we’ve been taken a bit more seriously – and quite rightly so.
The first and perhaps most important question to ask is why the authorities perform surveillance. Counter-terrorism? That’s the one most commonly mentioned. Detection and enforcement of criminal law? Crime prevention? Prevention of disorder? Dealing with child abuse images and tracking down paedophiles? Monitoring of social trends? There are different degrees to all these areas – and potentially some very slippery slopes. Some of the surveillance is clearly beneficial – but some is highly debatable. In the area of crime and disorder this is particularly true when one considers police tactics of the past, from dealing with the anti-nuclear movements in the sixties, seventies and eighties to the shocking revelations about the infiltration of environmental activist groups more recently. Even this summer, the government admitted that it monitored people’s social media activities in order to ‘head off’ the badger cull protests. Was that right? Are other forms of ‘social control’ through surveillance acceptable? They should at least raise questions.
When looking at government surveillance, we need to ask: what is acceptable? Where do we draw the line? Who draws that line? How much of this do we consent to? There are a number of different ways to look at this.
Societal consent?
Do we, as societies, consent to this kind of surveillance? It is not at all clear that we do, even in the UK, if the furore that led to the defeat of the Snoopers’ Charter is anything to go by, or if the reaction to Edward Snowden’s revelations in most of the world (though not so much in the UK) is any guide. Do we, as societies, understand the level of surveillance that our governments are performing? It doesn’t seem likely, given the surprise shown as more and more of the reality of the situation is revealed. Can we, as societies, understand all of this? Perhaps not fully, but certainly a lot more than we currently do.
Parliamentary consent?
Do we effectively consent by delegating our decisions to our political representatives? By electing them, are we consenting to their decision-making, both in general and in the particular area of internet surveillance? This is a big political question in any situation – but anyone who has observed MPs, even supposedly expert MPs, knows that the level of knowledge and understanding of either the internet or surveillance is appalling. Labour’s Helen Goodman, the Tories’ Claire Perry and the Lib Dems’ Tom Brake, all of whom have been (and still are) in positions of power and responsibility within their own parties in relation to the internet, have a level of understanding that would be disappointing in a secondary school pupil.
The Intelligence and Security Committee, who made their first public appearance in November, demonstrated that they were pretty much entirely incapable of providing the scrutiny necessary to represent us – and to hold Big Brother to account on our behalf. Most of the Home Affairs Committee – and the chair, Keith Vaz, in particular – demonstrated this even more dramatically this Tuesday, when questioning Guardian editor Alan Rusbridger. Keith Vaz’s McCarthy-esque question to Rusbridger – ‘do you love your country?’ – was sadly indicative of the general tone and level of much of the questioning.
There are some MPs who could understand this, but they are few and far between – Lib Dem Julian Huppert, Labour’s Tom Watson, the Tories’ David Davis are the best and perhaps only real examples, but they are mavericks. None are on the front benches, and none seem to have that much influence on their political bosses. Parliament, therefore, seems to offer little help. Whether it could ever offer that help – whether we could ever have politicians with enough understanding of the issues to act on our behalf in a meaningful way, is another question. I hope so – but I may well be pipe dreaming.
Automatic or assumed consent?
Perhaps none of this matters. Could this kind of government surveillance be something we automatically consent to when we use the Internet? Simply by using the net, do we automatically consent to being observed? Is this the price that we have to pay – and that we can be assumed to be willing to pay – in order to use the internet? Scott McNealy’s infamous quote – ‘you have zero privacy anyway, get over it’ – may be old enough to represent common knowledge. Can we assume that everyone knows they have no privacy? Would that be reasonable, even if it were true? It isn’t true of the public telephone system – wholesale wiretapping isn’t acceptable or accepted, not even of the metadata.
I don’t think any of these – societal, parliamentary or ‘assumed’ – really work, or would be sufficient even if they did, because, amongst other things, we simply haven’t known what was going on. Our consent, such as it existed, could not have been informed consent, in either sense in which that term can be understood. We did not have the information. We were deliberately kept in the dark. And experience suggests that when we do know more, we tend to object more – as events like the defeat of the Snoopers’ Charter demonstrate.
Do we know what we are consenting to?
Do we understand what the implications of this surveillance actually are? This isn’t just about privacy, no matter how much people like Malcolm Rifkind try to frame it that way. It isn’t just about the individual either – sometimes, through this kind of framing, it can seem as though asking for privacy is an act of selfishness, and that we should be ashamed of ourselves and sacrifice our privacy for the greater good – for security.
This is quite wrong – and in many ways framing it in this way is deliberately deceptive. There is a significant impact on many kinds of human rights, not just on privacy. Freedom of expression is chilled – both overtly, through the panopticon effect, and covertly, through the imbalance of power that allows control to be exerted. Freedom of association and assembly are deeply affected – both online, through the disruption and chilling of online communities, and offline, through the disruption of the organisation of ‘real world’ protest and so forth. There’s more too – profiling can allow for discrimination. Indeed, as we shall see, discrimination of a different form is fundamental to commercial surveillance – so it can easily be enabled in other ways. Ultimately, it can even impact upon freedom of thought – as profiling develops, it could allow the profiler to know what you want even before you do.
So even if we have given consent before, that consent is not really valid. The internet is not like old-fashioned communications. We do more online than we ever did through other forms of communication. The nature of the surveillance itself has changed – and so has its impact. Any old consent that did exist should be revoked. If Big Brother wants to keep watching us, He needs to ask again.
2: Commercial surveillance and consent
This is an issue much closer to the common legal understanding of consent – and one that has been much debated. It’s one of the key subjects of the current discussions over the reform of the data protection regime. Edward Snowden, however, has thrown a bit of a spanner into that debate, and those discussions.
To understand what this means, we need to understand commercial surveillance better. What do I mean by commercial surveillance? Surveillance where money is the motivation – or, to be more precise, where commercial benefit is the motivation. That means things like behavioural tracking – for various purposes – but it also means profiling, and it means analysis, all of which are done extensively by all the big players on the Internet, with little or no real idea of consent.
Does commercial surveillance matter?
Commercial surveillance does not often seem to be something people (other than a few privacy geeks like me) care about that much. It’s just about advertising, isn’t it? Doesn’t do anyone any harm? Opt-out’s OK, those paranoid privacy geeks can avoid it if they want, and for the rest of us it’s what pays for the net, right? For people like me, there are big concerns – and in some ways it might matter more for most people than surveillance by the NSA and GCHQ. The idea – the one that’s being sold to us – is that it’s about ‘tailoring’ or ‘personalisation’ of your web experience. We can get more relevant content and more appropriate advertising…
…but that also means that it can have a real impact on real people, from price and service discrimination to an influence on such things as credit ratings, insurance premiums and job prospects. Real things that matter to almost all of us. There’s even the possibility of political manipulation – from personalised political advertising to detailed targeting of key ‘swing’ voters, putting even more political influence into the hands of those with the deepest pockets – for it is the deepest pockets that allow access to the ‘biggest’ data, and the most sophisticated profiling and targeting systems.
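To illustrate what that ‘real impact’ might look like, here’s a deliberately crude sketch of profile-driven price discrimination. The profile fields and weightings are all invented – real systems are far more sophisticated and far more opaque – but the shape of the logic is the same:

    # A deliberately crude sketch of profile-driven price discrimination.
    # The profile fields and weightings are invented; real systems are
    # far more sophisticated (and opaque), but the logic has this shape.

    BASE_PRICE = 100.0

    def personalised_price(profile):
        price = BASE_PRICE
        if profile.get("device") == "high-end":           # crude wealth proxy
            price *= 1.15
        if profile.get("price_comparison_visits", 0) == 0:
            price *= 1.10                                 # doesn't shop around
        if profile.get("urgency_signals"):                # e.g. repeat visits
            price *= 1.20
        return round(price, 2)

    print(personalised_price({"device": "high-end",
                              "price_comparison_visits": 0,
                              "urgency_signals": True}))  # 151.8, not 100.0

Two people, same product, different prices – and neither of them ever knows.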
What Edward Snowden could teach us…
Some parts of the revelations from Edward Snowden should make us think again. PRISM, in particular, should change people’s attitudes to commercial surveillance. This is what Edward Snowden has to teach us. Look at the purported nature of the PRISM program. ‘Direct access’ to the servers of the big Internet companies – including Google and Facebook. Who does commercial surveillance more than Google and Facebook? What’s more, the interaction between governments and businesses is much closer than it might immediately seem. They share technology – and businesses have even let governments subvert their technology, building backdoors, undermining encryption systems and so forth. They share techniques – and even share data, whether willingly or otherwise.
Shared techniques…
Behavioural profiling is just what governments want to do. Behavioural analysis is just what governments want to do. Behavioural targeting is just what governments want to do. Is identifying potential customers any different from identifying potential suspects? Is identifying potential markets any different from identifying potential protest groups (such as those involved in the aforementioned badger cull protest)? Or potential dissidents? Is predicting political trends and political risks any different from predicting market trends? Is ‘nudging’ a market that different from manipulating politics? The Internet companies have built engines to do all the authorities’ work for them (well, OK, most of the authorities’ work for them). They just need to tap into those engines. Tailor them a bit. It’s perfect surveillance, and we’ve helped build it. We’ve ‘consented’ to it.
Who is undermining privacy?
So who is undermining privacy? The spooks with their secret surveillance… ….or the business leaders telling us to share everything and that, as Mark Zuckerberg put it, ‘privacy is no longer a social norm’? This ‘de-normalisation’ of privacy – apologies for the word, which I suspect doesn’t really exist – amounts to an attempt to normalise surveillance. The extent to which this desired and pushed-for ‘de-normalisation’ has contributed to the increasing levels of surveillance is essentially a matter for conjecture, but it’s hard not to see a connection.
Paranoid privacy geeks like me have been warning about this for a while – but just because we’re paranoid, it doesn’t mean we’re wrong. In this case, it’s looking increasingly as though we were right all along – and that the situation is even worse than we thought.
Is this what we consented to when we signed up for Facebook? Is this what we consent to each time we do a Google search? Is this what we expect when we watch a YouTube video or play a game of Words with Friends? I don’t think so. With new information there should come new understanding – and a reassessment of the situation. We need to decide.
3: A (more) privacy-friendly future?
A three-way consensus is needed. People, businesses and governments need to come to an agreement about what the parameters are, about what is acceptable. About what we consent to. All three groups have power – but at the moment only the authorities seem to be really wielding theirs.
Imagine what would happen if Facebook’s Mark Zuckerberg, Google’s Sergey Brin, Apple’s Tim Cook and their fellows from Microsoft, eBay, Twitter etc all came together and said to the US government ‘No’! Would they be locked up? Would their companies be viciously punished? It seems unlikely – they are much more powerful than they realise. We often talk about the power of the corporate lobbyists – this power could be wielded in a positive way, not just a negative way…
…but it only will if there’s a profit in it for the companies concerned. And that’s where we come in.
We have a key part to play. We need to keep making noises. We need to keep informing people, keep lobbying. Make sure that the companies know that we care about privacy – and not just in relation to governments. Then the companies might start to make a move that helps us.
There are some signs that this might be the case – from the noises from Zuckerberg and so on about how upset they are about the NSA to the current crop of ‘Outlook.com’ advertisements that proclaim loudly how they don’t scan your emails the way that Google do – though it is difficult to tell whether this is just lip service. They talk a lot about transparency, not so much about a reduction in actual surveillance by government – let alone by themselves. If they can wield this power in our favour it could help a lot – but it will only be wielded in this positive way if we make them. So we must be clear that we do not consent to the current situation. We do not consent to surveillance.
iPhones, fingerprints and privacy
The latest iPhone, the iPhone 5S, launched last night with the usual ceremony. Slick, clever, sexy technology at its best. One feature stood out from the rest: ‘Touch ID’. As the Apple website puts it:
“[Y]our iPhone reads your fingerprint and knows who you are.”
Sounds great, doesn’t it? Perhaps…. but to people who work in privacy, particularly people who have been paying attention to the revelations of Edward Snowden, it should be ringing a lot of alarm bells too. This is a big step, and associated with it are a lot of risks, not just with the technology itself, but more importantly with the implications of this kind of technology. This isn’t just a new generation of iPhone, it’s a new generation of risk. There’s a long way to go before we really understand these risks – but we need to start thinking now, right from the outset.
Keeping our fingerprint data secure?
Apple have said that the biometric information (presumably some kind of distillation or sampling of a print rather than an image of the print itself) is stored ‘securely’ on the phone itself rather than sent to Apple or stored in the cloud. That is certainly much better than the other way around, which would raise enormous and immediate security and privacy issues, but in the light of the Snowden revelations, and in particular the PRISM programme in which Apple was implicated, these assurances can only be taken with a pretty huge pinch of salt. The possibilities of backdoors into this data, or of hacking of this data, cannot be easily dismissed – and there are those within the hacker community who just love to crack iPhones. Some will be itching to get their hands on the new iPhone and see how quickly they can get this data out.
Apple have also said that they won’t give App developers access to this data – and they haven’t so far – but they didn’t add the crucial word ‘yet’. Once this system is in common use, won’t App developers be clamouring to use it? Apple themselves understand that this could lead to a whole new raft of possibilities: “Your fingerprint can also approve purchases from iTunes Store, the App Store and the iBooks Store, so you don’t have to enter your password.” Would that be the end of it? Hardly. As I shall explain below, this kind of system helps ‘normalise’ the use of fingerprints as an authentication system – of course it has already begun to be normalised, but building it into the iPhone takes that normalisation to a new level.
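For what it’s worth, here’s a toy sketch of what on-device biometric matching might look like – emphatically not Apple’s actual design, which is unpublished. The point it illustrates is that a derived feature template, rather than the fingerprint image itself, is stored locally, and that matching is a fuzzy comparison against a threshold rather than an exact lookup:

    # A toy sketch of on-device biometric matching - emphatically NOT
    # Apple's actual (unpublished) design. A derived feature template,
    # not the fingerprint image, is stored locally, and matching is a
    # fuzzy comparison against a threshold, never an exact lookup.

    from math import sqrt

    ENROLLED_TEMPLATE = [0.12, 0.87, 0.33, 0.54]  # lives on the device only
    MATCH_THRESHOLD = 0.95                        # invented figure

    def similarity(a, b):
        """Cosine similarity between two feature vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

    def unlock(candidate_scan):
        """No scan ever leaves the device; only a yes/no comes out."""
        return similarity(candidate_scan, ENROLLED_TEMPLATE) >= MATCH_THRESHOLD

    print(unlock([0.11, 0.88, 0.35, 0.52]))  # True - close enough to match
    print(unlock([0.90, 0.10, 0.80, 0.05]))  # False - a different finger

Kept genuinely on the device, that design limits the damage. The question is whether we can trust that it stays that way.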
Why would they want your fingerprints?
Fingerprints have been used as a way of identifying people for a very long time – since the 19th Century at least – and it is that ability to identify people that is the key to both the strengths and the weaknesses of the system. Ostensibly, the idea of ‘Touch ID’ is that it helps you, the user, to control who has access to your phone, by checking anyone who tries to use the phone against a list of authorised users – you and those you’ve said can use it. Others, however, can use your fingerprints for many other reasons – the well-known use of fingerprints for crime detection is just part of it. When dealing with data, though, the key point about a fingerprint is that it links the data to you in the real world. If someone gets your iPhone but doesn’t know that it’s yours, and they then check your print against that phone’s stored data, they can be ‘sure’ it’s yours, no matter how much you deny it. That in itself raises privacy issues (and no doubt begins the ‘if you’ve got nothing to hide’ argument again) but also raises possibilities of misuse.
Linking with other data
Once they know that a phone is yours, the possibilities to link to other information are immense, and growing all the time. Think how much data you have on your smartphone. You use it for your email. You use it to make calls, to send texts, to social network, to tweet – so all of your communications are opened up. You have your photos on it – so add in a little facial recognition and another vast number of connections are opened up. You keep your music on it – so you can be profiled in a detailed way in terms of preferences. You probably access your bank account, perhaps have travel tickets in your Passbook. You may well do work on your phone – keep notes or voice memos. The possibilities are endless – and the fingerprint can form an anchor point, linking all this information together and attaching it to the ‘real’ you.
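The mechanism is nothing more exotic than a database join. Here’s a minimal sketch, with entirely invented data, of how one hard identifier can anchor everything else:

    # A minimal sketch of the 'anchor point' - invented data, but the
    # mechanism is just a database join. Once one hard identifier (here
    # a fingerprint-derived ID) ties a device to a person, everything
    # on or about the device attaches to that person too.

    fingerprint_registry = {   # e.g. a border-control fingerprint database
        "fp-9f3a": "Jane Doe",
    }

    phone_data = {
        "device-42": {
            "fingerprint_id": "fp-9f3a",     # the anchor
            "locations": ["Norwich", "London"],
            "contacts": 214,
            "bank_app": True,
        },
    }

    def link(device_id):
        record = phone_data[device_id]
        person = fingerprint_registry.get(record["fingerprint_id"], "unknown")
        return person, record

    print(link("device-42"))  # ('Jane Doe', plus everything on the phone)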
That’s part of the rub. Many people have already said ‘but the government already have this data – haven’t you ever entered the US?’ Yes, the US government have a database of fingerprints of all those of us who’ve entered the US in recent years – but this creates a link between that government database and pretty much all the data there is out there about you. It’s true, the authorities may well have already made that link – but why make it easier, and, almost as importantly, why make it normal and acceptable for that link to be made?
Normalising fingerprinting
This, to me, is the most important issue of all. Even if Apple’s security system works, even if there is no ‘function creep’ into greater uses within the Apple system, even if the fears over the NSA and other intelligence agencies are overblown (and they might be), the ‘normalisation’ of using fingerprints as a standard method of authentication matters. In the UK there was a huge amount of resistance to the introduction of a compulsory, biometric ID card – resistance that ultimately killed the scheme, and that played at least a small part in the defeat of the Labour government in 2010. We don’t like the idea that the authorities can say ‘your papers please’ whenever they like, and demand that we prove who we are. It smacks of police states – and denies individual freedom. We shouldn’t need to ‘prove’ who we are unless that proof is absolutely necessary – and in the vast, vast majority of cases it isn’t.
And yet, with systems like this, we seem to be accepting something very similar without even thinking about it. The normalisation of fingerprinting is already happening – the border-check fingerprinting is just one part of it. In many UK schools, kids are required to give their fingerprints in order to get food from the canteen – essentially for convenience, so they don’t have to carry cash around – and there has been barely a murmur of complaint. Indeed, it may be too late to stop this normalisation – but we should at least be aware of what we’re sleepwalking into.
Each little step makes the idea of fingerprinting more acceptable – and brings on the next step. If Apple’s Touch ID is successful, we can pretty much guarantee that other smartphone developers will introduce their own systems, and the idea will become universal. The idea has been there for a few years already – on laptops and on other devices. As is often the case, Apple aren’t the first, but they may be the first to bring it full-scale to the mainstream.
Just because it’s cool…
As I’ve written before – most directly concerning Google Glass (see here) – there’s a strong tendency to develop and build technology ‘because it’s cool’, without fully thinking through the consequences. ‘Touch ID’ in some ways is very cool – but I have the same feelings of concern as I have about Google Glass. Do we really know what we’re opening up here? I’ve outlined some of my immediate concerns here – but these are just some of the possibilities. As Bruce Schneier said:
“It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state”
I’m concerned that what Apple are doing here is part of that bad civic hygiene. I hope I’m wrong. I am a fan of Apple – I have been since the 80s, when I bought my first Mac. I wrote this blog on an Apple computer, and have had iPhones since the first generation. My instinct is to like Apple, and to trust them. PRISM shook that trust – and this fingerprinting system is shaking that trust even more.
The biggest point, however, is the normalisation one. It may well be that we’re beyond the point of no return, and fingerprinting and other biometrics are now part of the environment. I hope not – but at the very least we should be talking about the risks and taking appropriate precautions. It may also be that this is just a storm in a teacup, and that I’m being overly concerned about something that really doesn’t matter much. I hope so. Time will tell.
PRISM: Share with the CIA – and Facebook!
[Image: ‘Going out for a pizza? Who wants to know?’]
There’s been a joke going around the net over the last couple of weeks, inspired by the PRISM revelations. The picture above is just one of the examples – variants include replacing the CIA with the NSA, or adding the two together so that it says, effectively ‘Share with Friends, the CIA and the NSA’ and so on. It’s a pretty good joke – and spot on about the nature of the PRISM programme (and indeed the equivalents elsewhere in the world, such as the UK’s Communications Data Bill, the ‘Snoopers’ Charter’), but ultimately it misses one key element from the equation. It should also include ‘share with Facebook’…
Share with only me, the CIA, the NSA and Facebook!
Something that seems to be forgotten pretty much every time is that whenever you put something on Facebook, no matter how tightly and precisely you select your ‘privacy’ settings, Facebook themselves always get to see your stuff. It’s never ‘just you’, or ‘just you and your close friends’: Facebook themselves are always there. That means a lot of different things – at the very least, that they will use that information to build up your profile and to decide which advertisers will target you. It might be used directly for Facebook themselves to target products and services at you. It might mean that they put you on various lists of people of a certain kind to receive mailings – lists that could then be used for other purposes, potentially sold (perhaps not now, but in the future?), or even hacked…
Data is vulnerable
…and that is a point that shouldn’t be forgotten. If you put something on Facebook, or if Facebook infers something from the information that you put up, that information is potentially vulnerable. Now it’s easy to worry about spies and spooks – and then to dismiss that worry because you’re not really the kind of person that spies and spooks would care about – but there are others to whom the kind of information you put on Facebook could be valuable. Criminals intent on identity theft. Other criminals looking for targets in other ways (if you’re going out for a pizza, that means you’re not at home… burglary opportunity?). Insurers wanting to know whether they should put up your premiums (aha, they often go out for pizzas – doesn’t sound like a healthy diet to me! Up with the premiums!), potential employers checking you out (if you’re going out for a pizza at an unsuitable time of day, you might be an unsuitable employee) and so on.
Don’t imagine your ‘privacy’ settings really imply privacy…
This doesn’t mean that we shouldn’t ‘share’ anything on Facebook – or on Google, or any other system online, because what happens with Facebook happens just as much with others – but that we should be a touch more aware of the situation. The PRISM saga has highlighted that what we share can be seen by the authorities – and has triggered off quite a lot of concern. That concern is, in my opinion, only a small part of the story. What the authorities do is only one aspect – and for most people a far less important one than the rest of the story. Having your insurance premiums raised, having credit refused, becoming a victim of identity-related crimes, being socially embarrassed or humiliated, becoming a victim of cyber-bullying and so on are much more common risks for most of us. What we do online can contribute to all of these – and we should be a bit more aware of it.
Food stamps and the database state…
The latest proposal for ‘food stamps’ has aroused a good deal of anger. It’s a policy that is divisive, depressing and hideous in many ways – Suzanne Moore’s article in the Guardian is one of the many excellent pieces written about it. She hits at the heart of the problem: ‘Repeat after me: austerity removes autonomy’.
That’s particularly true in this case, and in more ways than even Suzanne Moore brings in. This new programme has even more possibilities to remove autonomy than previous attempts at controlling what ‘the poor’ can do with their money – because it takes food stamps into the digital age…
The idea, as I understand it, is that people will be issued with food ‘cards’, rather than old fashioned food stamps. The precise details of these cards have yet to emerge, and quite how ‘smart’ they will be has yet to be seen, but the direction is clear. The cards will only work in certain shops, and only allow the purchase of certain goods. At the moment they’re talking about stopping ‘the poor’ from buying such evil goods as tobacco and alcohol, but as Suzanne Moore points out, equivalent schemes in the US have blocked the purchase of fizzy drinks. In a digital world, the control over what can or cannot be purchased can be exact and ever-changing. It allows complete control – we can determine an ‘acceptable’ list of things that people can and cannot buy.
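A toy sketch of what that control looks like in code – everything here is invented, but note where the ‘acceptable’ list lives: with the scheme operator, not the cardholder, changeable at any moment, per person, without notice:

    # A toy sketch of digital purchase control - everything invented.
    # The 'acceptable' list lives with the scheme operator, not the
    # cardholder, and can change at any moment, per person, unannounced.

    ALLOWED = {"FC-1017": {"fresh fruit", "vegetables", "bread", "milk"}}

    def authorise(card_id, item_category):
        """Called at the till for every item on the belt."""
        return item_category in ALLOWED.get(card_id, set())

    print(authorise("FC-1017", "vegetables"))    # True
    print(authorise("FC-1017", "fizzy drinks"))  # False - declined at the till
    ALLOWED["FC-1017"].discard("bread")          # policy change, effective now
    print(authorise("FC-1017", "bread"))         # False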
All well and good, so people might think. Let’s make sure people only eat fresh fruit and vegetables – improve the nation’s health, instil better eating habits, force people to learn to cook. All for the better! There are, however, one or two flaws in this plan.
Firstly, it seems almost certain that the plan will be effectively subcontracted out to private companies – and limited to specific shops. In Birmingham it has already been said that these cards will only work in ASDA. Doubtless there will be a tendering process, and the different supermarkets will be vying for the opportunity to stake their claims. Once they do, which products will they be directing people to buy? The most nutritious ones? The cheapest ones? The most practical ones? Or the ones that will make them the most money?
Secondly, these cards present a built-in opportunity for profiling. Just as existing supermarket loyalty cards are used primarily to profile the people who use them, monitoring shopping habits in order (amongst other things) to find ways to convince people to spend more money, these food cards can be used in exactly the same way. The difference is that with loyalty cards people at least have a choice as to which supermarket they use, and whether they want to be profiled. This kind of system effectively sells the profiles of people directly to the supermarkets, without any choice at all. Now of course privacy isn’t as important as food – but is it really right that we say that poor people aren’t allowed privacy?
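Again, a minimal sketch with invented data shows how little is needed – essentially the loyalty-card technique, minus the choice:

    # A minimal sketch of transaction profiling - invented data, but
    # the technique is the loyalty-card one, except that here the
    # cardholder never chose to be profiled. A purchase log becomes a
    # behavioural profile in a handful of lines.

    from collections import Counter

    transactions = [
        {"card_id": "FC-1017", "category": "bakery"},
        {"card_id": "FC-1017", "category": "tinned"},
        {"card_id": "FC-1017", "category": "fizzy drinks"},
        {"card_id": "FC-1017", "category": "fizzy drinks"},
    ]

    def build_profile(card_id, log):
        cats = Counter(t["category"] for t in log if t["card_id"] == card_id)
        return {
            "card_id": card_id,
            "category_counts": dict(cats),
            # ...and from here: inferred diet, household size, routines...
            "flags": ["frequent fizzy drinks"] if cats["fizzy drinks"] >= 2 else [],
        }

    print(build_profile("FC-1017", transactions))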
Thirdly, a database will be built up of those who have the cards – and it will be a database that is crying out to be used. If those selling ‘pay-day loans’ with interest rates in the thousands get access to those databases they’ve got a beautiful set of potential targets to exploit – almost certainly complete with addresses included, just in case the people need a little ‘visit’ to chivvy them along in terms of payment.
There are further implications of this kind of thing – logical extensions to the idea. Once the system is introduced, it’s almost bound to be abused. If you have a ‘food card’ but need cash – for example to pay off a loan – and someone offers to buy your card for cash at a 40% discount, many, many people may accept that offer. The chances of a black market growing are huge, and the implications even worse. It would make the poor poorer (by whatever discount they’re forced to accept for their cards) for starters, but there’s more. If the authorities see this kind of abuse of the system happening, they’ll try to do something about it – for example, by requiring biometrics for verification. Fingerprints are even a possibility…
…which may seem far fetched, but school canteens around the country are already using fingerprint verification to allow children access to school meals. The technology is there – and those who make it and sell it will be lobbying the government to let them have contracts to do this.
That, again, makes the situation worse – making the databases even more invasive, even more open to abuse, and so the cycle begins again.
Of course this is only a side issue compared to the main issues of divisiveness, demonisation and sheer vindictive dehumanisation that are the inevitable consequences of this kind of scheme. I’m sure, however, that these possibilities won’t have escaped the eagle eyes of those working with these kinds of schemes. It may sound like a conspiracy theory – and indeed, to an extent it is – but it isn’t nearly as far fetched as might be imagined. As well as removing autonomy, austerity provides opportunities for those unscrupulous enough to use them – and sadly, as the last few years have made far too clear, there are plenty of people and companies like that.
Big Brother is watching you…. and so are his corporate partners
Privacy advocates are spoilt for choice these days about what to complain about – privacy invasions by business, or privacy invasions by the authorities? Over the last year or so, I’ve written regularly about both – whether it be my seemingly endless posts in recent weeks about Facebook, or the many times I wrote last year about the wonderful Snoopers’ Charter – our Communications Data Bill (which is due to re-emerge after its humiliation fairly shortly).
It’s a hard question to answer – and I tend to oscillate between the two in terms of which I think is more worrying, more of a threat. And then a new story comes along to remind me that it isn’t either on its own that we should be really worried about – it’s when the two work together. Another such story has just come to light, this time in the Guardian.
The essence of the story is simple. Raytheon is reported to have developed software “capable of tracking people’s movements and predicting future behaviour by mining data from social networking websites”. Whether the details of the story are correct, and whether Raytheon’s software is particularly good at doing what it is supposed to do isn’t really the main point: the emergence of software like this was always pretty close to inevitable. And it will get more effective – profiling will get sharper, targeting more precise, predictions more accurate.
Inevitable and automatic
What’s more, this isn’t just some ‘friendly’ policemen or intelligence operatives looking over our Facebook posts or trawling through our tweets – this is software, software that will operate automatically and invisibly, and can look at everything. And it’s commercially produced software. Raytheon says that ‘it has not sold the software – named Riot, or Rapid Information Overlay Technology – to any clients’, but it will. It’s commercially motivated – and investigations by groups such as Privacy International have shown that surveillance technology is sold to authoritarian regimes and others around the world in an alarming way.
If you build it, they will come
The real implication is that when software like this is developed, the uses will follow. Perhaps it will be used at first for genuinely helpful purposes – tracking real terrorists, finding paedophiles etc (and you can bet that the fights against terrorism and child abuse will be amongst the first reasons wheeled out for allowing this kind of thing) – but those uses will multiply. Fighting terrorism will become fighting crime, which will become fighting disorder, which will become fighting potential disorder, which will become locating those who might have ‘unhelpful’ views. Planning a protest against the latest iniquitous taxation or benefits change? Trying to stop your local hospital being shut or your local school being privatised? Supporting the ‘wrong’ football team?
Just a quick change in the search parameters and this kind of software, labelled by the Guardian a ‘google for spies’, will track you down and predict your next moves. Big Brother would absolutely love it.
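To show how little stands between the two uses, here’s a toy ‘pattern of life’ analysis – obviously not Raytheon’s code, just invented posts and the simplest possible logic. Prediction is nothing cleverer than ‘where is this person usually found at this hour?’, and retargeting the engine from marketing to monitoring really is a one-line change to the search parameters:

    # Obviously not Raytheon's code - a toy 'pattern of life' analysis
    # with invented posts, to show the principle. Prediction is nothing
    # cleverer than 'where is this person usually found at this hour?',
    # and retargeting the engine is one change to the search parameters.

    from collections import Counter, defaultdict

    posts = [  # (user, hour_of_day, place, text) - all invented
        ("alice", 6,  "gym",       "early session again"),
        ("alice", 6,  "gym",       "leg day"),
        ("alice", 9,  "office",    "meetings all morning"),
        ("alice", 18, "town hall", "see you at the protest"),
    ]

    def movement_profile(user, log):
        by_hour = defaultdict(Counter)
        for who, hour, place, _ in log:
            if who == user:
                by_hour[hour][place] += 1
        return by_hour

    def predict_location(user, hour, log):
        places = movement_profile(user, log)[hour]
        return places.most_common(1)[0][0] if places else None

    # The 'search parameters': swap "sale" for "protest" and the same
    # engine finds dissidents instead of customers.
    KEYWORD = "protest"
    flagged = {who for who, _, _, text in posts if KEYWORD in text}

    print(predict_location("alice", 6, posts))  # 'gym'
    print(flagged)                              # {'alice'}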
A perfect storm for surveillance
This is why, in the end, we should worry about both corporate and government surveillance. The more data that private businesses gather, and the better they get at profiling – even for the most innocuous of purposes, or for that all too common one, making money – the more this kind of data, and these kinds of techniques, can be used by others.
We should worry about all of this – and fight it on all fronts. We should encourage people to be less blasé about what they post on Facebook. I may be a bit extreme in regularly recommending that people leave Facebook (see my 10 reasons to leave Facebook post) because I know many people rely on it at the moment, but we should seriously advise people to rely on it less, to use it more carefully – and to avoid things like geo-location etc (see my what to do if you can’t leave Facebook post). We should oppose any and all government universal internet surveillance programmes – like the Snoopers’ Charter – and we should support campaigns like that of Privacy International against the international trade in surveillance technology.
Facebook and others create a platform. We put in all our data. Technology firms like Raytheon write the software. It all comes together like a perfect storm for surveillance.