The iPhone crack’d from side to side….

The news that the new iPhone 5S’s fingerprinting system has been successfully cracked by German hacker group the Chaos Computer Club should come as no surprise. As I suggested in my initial response to the announcement, hackers would be itching to get their fingers on the technology to find a way around it. It took them about a week.

This is from the Chaos Computer Club’s blog post:

“The biometrics hacking team of the Chaos Computer Club (CCC) has successfully bypassed the biometric security of Apple’s TouchID using easy everyday means. A fingerprint of the phone user, photographed from a glass surface, was enough to create a fake finger that could unlock an iPhone 5s secured with TouchID. This demonstrates – again – that fingerprint biometrics is unsuitable as access control method and should be avoided.”

The Chaos Computer Club are what I would call ‘white hat’ hackers: they’re the good guys, working generally to bring into the open things that are of benefit to us all. They’re very good at what they do – but they’re not the only hackers out there. What the Chaos Computer Club could do in about a week will be possible for those others – and that includes those working for the authorities, for organised crime, for the other tech companies and so forth.

The precise details of how they did it are interesting but not really that important: the key is to understand the implications. Any technology, no matter how advanced, will have vulnerabilities. Any data gathered, no matter by whom or how held, will be vulnerable.  That needs to be taken on board when we look at how and whether to embrace that technology – and it needs to be understood when considering how to balance the risks and rewards of that technology. Many people – not least in the technology press when covering the launch of products like the iPhone 5S – tend to gloss over the risks. They take the assurances of the manufacturers that the technology is ‘secure’ at close to face value – and treat the concerns of the odd privacy advocate as tinfoil-hat-wearing paranoia.

Now there IS a good deal of paranoia out there – but to paraphrase Joseph Heller, just because they’re paranoid it doesn’t mean they’re not right. What we’ve learned about the activities of the NSA, GCHQ and others over the summer has gone far beyond many of the nightmares of the most rabid conspiracy theorist. That doesn’t mean that we should all be moving to small islands in the Outer Hebrides – but it should mean that we are a little more cautious, a little more open-minded, and a little less trusting of everything we’re told.

There are a lot of jokes circulating on the internet at the moment. One goes like this:

[Screenshot of the joke – image not available]

There’s a point there. By moving from a system of passwords (a deeply flawed system) to one based on biometrics we’re taking on a new level of risk. Is this a risk that we really want to take? What are the benefits? As the Chaos Computer Club have demonstrated, it’s not really for security. Fingerprinting is a deeply insecure system. If someone gets hold of your phone, it will be covered with your fingerprints – getting the data out of it won’t be a major problem for any of the people who might want to use that data.

So it’s not really about security – it’s about convenience. It’s about saving the seconds that it takes to put in a few numbers to unlock your screen. That’s not something to be ignored – we give away huge numbers of things just for a little convenience – but we should at least be aware that this is the bargain being made. For many people it may be worth it. I’m not one of them.

The other risks associated with the use of fingerprinting as an identification and authentication method – some of which I outlined here – are too much for me. Worst of all, for me, is the way that it helps establish the idea of asking for fingerprints as ‘normal’. It’s not normal to me. It still smacks of authoritarianism – it’s worse than the image of the policeman asking ‘your papers please’, as you’ll have no choice. That’s the thing about biometrics. You become your papers…..

No thank you.

iPhones, fingerprints and privacy

The latest iPhone, the iPhone 5S, launched last night with the usual ceremony. Slick, clever, sexy technology at its best. One feature stood out from the rest: ‘Touch ID’. As the Apple website puts it:

“[Y]our iPhone reads your fingerprint and knows who you are.”

Sounds great, doesn’t it? Perhaps…. but to people who work in privacy, particularly people who have been paying attention to the revelations of Edward Snowden, it should be ringing a lot of alarm bells too. This is a big step, and associated with it are a lot of risks, not just with the technology itself, but more importantly with the implications of this kind of technology. This isn’t just a new generation of iPhone, it’s a new generation of risk. There’s a long way to go before we really understand these risks – but we need to start thinking now, right from the outset.

Keeping our fingerprint data secure?

Apple have said that the biometric information (presumably some kind of distillation or sampling of a print rather than an image of the print itself) is stored ‘securely’ on the phone itself rather than sent to Apple or even stored in the cloud. That is certainly much better than the other way around, which would raise enormous and immediate security and privacy issues, but in the light of the Snowden revelations, and in particular the PRISM programme in which Apple was implicated, these assurances can only be taken with a pretty huge pinch of salt. The possibility of backdoors into this data, or of this data being hacked, cannot be easily dismissed – and there are those within the hacker community who just love to crack iPhones. Some will be itching to get their hands on the new iPhone and see how quickly they can get this data out.
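The on-device model Apple describe can be pictured with a toy sketch. To be clear about what is assumed here: real fingerprint matching is fuzzy, performed by dedicated hardware, and nothing like an exact hash comparison – this sketch is my own simplification, showing only the architectural principle that the template is derived, stored and matched locally, and never transmitted anywhere.

```python
import hashlib

# Toy illustration of the on-device principle, NOT Apple's implementation:
# a template is derived from the enrolment reading, kept in local storage,
# and a fresh reading is matched against it on the device itself. Nothing
# is ever sent to a server. (Real biometric templates are fuzzy feature
# sets matched approximately, not exact hashes.)

DEVICE_STORE = {}  # stands in for the phone's secure local storage


def enrol(user_id: str, fingerprint_features: bytes) -> None:
    """Derive a template from the enrolment reading and keep it on-device."""
    DEVICE_STORE[user_id] = hashlib.sha256(fingerprint_features).digest()


def verify_locally(user_id: str, candidate_features: bytes) -> bool:
    """Compare a fresh reading against the stored template, locally."""
    template = DEVICE_STORE.get(user_id)
    if template is None:
        return False
    return hashlib.sha256(candidate_features).digest() == template
```

The privacy-relevant design choice is simply that `DEVICE_STORE` lives on the phone: the moment a template (or anything derived from it) leaves the device, all the concerns above about backdoors and interception multiply.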

Apple have also said that they won’t give App developers access to this data – and they haven’t so far – but they didn’t add the crucial word ‘yet’. Once this system is in common use, won’t App developers be clamouring to use it? Apple themselves understand that this could lead to a whole new raft of possibilities: “Your fingerprint can also approve purchases from iTunes Store, the App Store and the iBooks Store, so you don’t have to enter your password.” Would that be the end of it? Hardly. As I shall explain below, this kind of system helps ‘normalise’ the use of fingerprints as an authentication system – of course it has already begun to be normalised, but building it into the iPhone takes that normalisation to a new level.

Why would they want your fingerprints?

Fingerprints have been used as a way of identifying people for a very long time – since the 19th Century at least – and it is that ability to identify people that is the key to both the strengths and the weaknesses of the system. Ostensibly, the idea of ‘Touch ID’ is that it helps you, the user, to control who has access to your phone, by checking anyone who tries to use the phone against a list of authorised users – you and those you’ve said can use it. Others, however, can use your fingerprints for many other reasons – the well known use of fingerprints for crime detection is just part of it. When dealing with data, though, the key point about a fingerprint is that it links the data to you in the real world. If someone gets your iPhone but doesn’t know that it’s yours, and they then check your print on that phone’s database, they can be ‘sure’ it’s yours, no matter how much you deny it. That in itself raises privacy issues (and no doubt begins the ‘if you’ve got nothing to hide’ argument again) but also raises possibilities of misuse.

Linking with other data

Once they know that a phone is yours, the possibilities to link to other information are immense, and growing all the time. Think how much data you have on your smartphone. You use it for your email. You use it to make calls, to send texts, to social network, to tweet – so all of your communications are opened up. You have your photos on it – so add in a little facial recognition and another vast number of connections are opened up. You keep your music on it – so you can be profiled in a detailed way in terms of preferences. You probably access your bank account, perhaps have travel tickets in your Passbook. You may well do work on your phone – keep notes or voice memos. The possibilities are endless – and the fingerprint can form an anchor point, linking all this information together and attaching it to the ‘real’ you.

That’s part of the rub. Many people have already said ‘but the government already have this data, haven’t you ever entered the US?’ Yes, the US government have a database of fingerprints of all those of us who’ve entered the US in recent years – but this creates a link between that government database and pretty much all the data there is out there about you. It’s true, the authorities may well have already made that link – but why make it easier, and, almost as importantly, why make it normal and acceptable for that link to be made?

Normalising fingerprinting

This, to me, is the most important issue of all. Even if Apple’s security system works, even if there is no ‘function creep’ into greater uses within the Apple system, even if the fears over the NSA and other intelligence agencies are overblown (and they might be), the ‘normalisation’ of using fingerprints as a standard method of authentication matters. In the UK there was a huge amount of resistance to the introduction of a compulsory, biometric ID card – resistance that ultimately defeated the bill intended to introduce the card, and that played at least a small part in the defeat of the Labour government in 2010. We don’t like the idea that the authorities can say ‘your papers please’ whenever they like, and demand that we prove who we are. It smacks of police states – and denies individual freedom. We shouldn’t need to ‘prove’ who we are unless that proof is absolutely necessary – and in the vast, vast majority of cases it isn’t.

And yet, with systems like this, we seem to be accepting something very similar without even thinking about it. The normalisation of fingerprinting is already happening – the border-check fingerprinting is just one part of it. In many UK schools, kids are required to give their fingerprints in order to get food from the canteen – essentially for convenience, so they don’t have to carry cash around – and there has been barely a murmur of complaint. Indeed, it may be too late to stop this normalisation – but we should at least be aware of what we’re sleepwalking into.

Each little step makes the idea of fingerprinting more acceptable – and brings on the next step. If Apple’s Touch ID is successful, we can pretty much guarantee that other smartphone developers will introduce their own systems, and the idea will become universal. The idea has been there for a few years already – on laptops and on other devices. As is often the case, Apple aren’t the first, but they may be the first to bring it full-scale to the mainstream.

Just because it’s cool…

As I’ve written before – most directly concerning Google Glass (see here) – there’s a strong tendency to develop and build technology ‘because it’s cool’, without fully thinking through the consequences. ‘Touch ID’ in some ways is very cool – but I do have the same feelings of concern as I have about Google Glass. Do we really know what we’re opening up here? I’ve outlined some of my immediate concerns here – but these are just part of the possibilities. As Bruce Schneier said:

“It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state”

I’m concerned that what Apple are doing here is part of that bad civic hygiene.  I hope I’m wrong. I am a fan of Apple – I have been since the 80s, when I bought my first Mac. I wrote this blog on an Apple computer, and have had iPhones since the first generation. My instinct is to like Apple, and to trust them. PRISM shook that trust – and this fingerprinting system is shaking that trust even more.

The biggest point, however, is the normalisation one. It may well be that we’re beyond the point of no return, and fingerprinting and other biometrics are now part of the environment. I hope not – but at the very least we should be talking about the risks and taking appropriate precautions. It may also be that this is just a storm in a teacup, and that I’m being overly concerned about something that really doesn’t matter much. I hope so. Time will tell.

Google, privacy and a new kind of lawsuit

Today is Data Privacy Day – and a new lawsuit has been launched against Google in the UK – one which highlights a number of key issues. It could be very important – a ‘landmark case’ according to a report on Reuters. The most notable thing about the case, for me, is that it is consumer-led: UK consumers are no longer relying on the authorities, and the Information Commissioner’s Office in particular, to safeguard their privacy. They’re taking it into their own hands.

The case concerns the way that Google exploited a bug in Apple’s Safari browser to enable it to bypass customers’ privacy settings. As reported on Reuters:

“Through its DoubleClick adverts, Google designed a code to circumvent privacy settings in order to deposit the cookies on computers in order to provide user-targeted advertising. The claimants thought that cookies were being blocked on their devices because of Safari’s strict default privacy settings and separate assurances being given by Google at the time. This was not the case.”
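The mechanism described in that quote can be pictured with a simplified model. The function names and the rule’s details below are my own sketch, not Safari’s actual code: Safari blocked third-party cookies by default, but made an exception when the user appeared to interact with the third party (for example by submitting a form) – and it was that exception that the circumvention exploited, by faking the interaction.

```python
# Simplified model of the Safari cookie policy at issue (my own sketch,
# not actual browser code). Third-party cookies are rejected by default;
# the exception for apparent user interaction is what the DoubleClick
# code triggered with an invisible form submission.

def is_first_party(cookie_domain: str, page_domain: str) -> bool:
    """True if the cookie comes from the site the user is actually visiting."""
    return cookie_domain == page_domain or cookie_domain.endswith("." + page_domain)


def should_accept_cookie(cookie_domain: str, page_domain: str,
                         user_interacted: bool = False) -> bool:
    """Apply the simplified policy: first-party cookies are allowed;
    third-party cookies only pass if the user has 'interacted'."""
    if is_first_party(cookie_domain, page_domain):
        return True
    # The loophole: a faked interaction makes a tracking cookie look consented-to.
    return user_interacted
```

Seen this way, the case isn’t about clever code at all – it’s about a deliberately manufactured `user_interacted = True`.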

The group of consumers have engaged noted media and telecoms lawyers Olswang for the case. Dan Tench, the partner at Olswang responsible for the case, told Reuters:

“Google has a responsibility to consumers and should be accountable for the trust placed in them. We hope that they will take this opportunity to give Safari users a proper explanation about what happened, to apologise and, where appropriate, compensate the victims of their intrusion.”

For further information – and if you want to join the action – Tench can be contacted by email at

There’s also a Facebook page for the suit:

What’s important here?

The case highlights several crucial aspects of privacy on the net. The first is the extent to which we can – or should be able to – rely on the settings we make on our browsers. What was happening here is that those settings were being overridden. Now it’s a moot point quite how many people use their privacy settings – or indeed even know that they exist – but if those settings are being overridden by anyone, let alone a company as big and respected as Google, it’s something that we need to know about and to fight. Browser settings – and privacy settings in general – are the key control, perhaps the only control, that individuals have over their online privacy, so we need to know that they work if we are to have any trust. A lack of trust is something that damages everyone.

The second is that the case highlights that users aren’t going to take things lying down – and neither are they going to rely on what often seem to be supine regulators, regulators unwilling to take on the ‘big boys’ of the internet, regulators who seem to take their role as supporters of business much more seriously than their role as protectors of the public. Alexander Hanff, a privacy advocate who is assisting Olswang on this case, said that:

“This group action is not about getting rich by suing Google, this lawsuit is about sending a very clear message to corporations that circumventing privacy controls will result in significant consequences. The lawsuit has the potential of costing Google £10s of millions, perhaps even breaking £100m in damages given the potential number of claimants – making it the biggest group action ever launched in the UK. It should also be seen as a message to the Information Commissioner’s Office that they are in contempt of the British public and are not doing their job.”

This last point is crucial – and it may suggest not that the Information Commissioner’s Office are not doing their job but that their job is one that needs redefining. The ICO sometimes appears to be caught between two stools – their role is more complex than just as protectors of the public. They’re not a Privacy Commissioner’s Office – and perhaps that is what we need. An office with teeth whose prime task is to protect individuals’ privacy.

What happens next?

This lawsuit will be watched very carefully by everyone in the field of online privacy. The number of people who join the case is one question – there are plenty who could, as Safari, though a somewhat niche browser on computers, is the default browser on iPhones, so is used by many millions in the UK. How it progresses has yet to be seen – there are many different possibilities. If nothing else, I hope it acts as a wake-up call for all involved: Google, the ICO, and the public.

Taking a lead on privacy??

Two related stories about privacy and tracking are doing the rounds at the moment: both show the problems that companies are having in taking any sort of lead on privacy.

The first is about Apple, and the much discussed recent upgrade to their iOS, the operating system for the iPhone and iPad. There’s been a huge amount said about the problems with the mapping system (and geo-location is of course a huge privacy issue – as I’ve discussed before) but now there’s an increasing buzz about their newly introduced tracking controls. Apple, for the first time, have provided users with the option to ‘limit ad tracking’ – though as noted in a number of stories, including this one from Business Insider, that option is hidden away, not in the vaunted ‘Privacy’ tab, but under a convoluted set of menus (first ‘General’ settings, then ‘About’, then scroll down to the bottom to find ‘Advertising’, then click ‘Limit Ad Tracking’). Not easy to find, as even the techie and privacy geeks that I converse with on Twitter have found.

This of course raises a lot of issues – it’s great to have the feature, but the opposite to have it hidden away where only the geeks and the paranoid will find it. It looks as though the people at Apple have been thinking hard about this, and working hard at this, and have come up with an interesting (and perhaps effective – but more on that below) solution, but then been told by someone, somewhere, that they should hide it for fear of upsetting the advertisers. I’d love to know the inside story on this – but Apple are rarely quite as open about their internal discussions as they could be.

There’s a conflict of motivations, of course. On the one hand, Apple wants to make customers happy, and there is increasing evidence that customers don’t want to be tracked – most recently this excellent paper from Hoofnagle, Urban and Li, appropriately entitled “Privacy and Modern Advertising: Most US Internet Users Want ‘Do Not Track’ to Stop Collection of Data about their Online Activities”. On the other hand, Apple don’t want to annoy the advertisers – particularly when the market for mobile is getting increasingly competitive. And the advertisers seem to be on a knife edge at the moment, very touchy indeed, as the latest spats over the ‘Do Not Track’ initiative have shown.

That’s the second story doing the rounds at the moment: the increasing acrimony and seemingly bitter conflict over Do Not Track. It’s a multi-dimensional spat, but seems to have been triggered by Microsoft’s plan to make Do Not Track ‘on’ by default – something that the advertising industry are up in arms about. The ‘Digital Advertising Alliance’ issued a statement effectively saying they would simply ignore Microsoft’s system and track anyway – which led to privacy advocates suggesting that the advertisers wanted to kill the whole Do Not Track initiative. This is Jeff Chester of the Center for Digital Democracy:

“The DAA is trying to kill off Do Not Track.  Its announcement today to punish Microsoft for putting consumers first is an extreme measure designed to strong-arm companies that care about privacy.”

Chester and others saying similar things may be right – and it makes people like me wonder if the whole problem is that the ‘Do Not Track’ initiative was never really intended to work, but was just supposed to make people think that their privacy was protected. If it actually got some teeth – and setting it to a default ‘on’ position would be the first way to give it teeth – then the industry wouldn’t want it to exist. There are other huge issues with Do Not Track anyway. As the title of the Hoofnagle, Urban and Li report suggested, people think ‘Do Not Track’ means they won’t be tracked – that their data won’t be collected at all – while the industry seems to think what really matters to people is that they aren’t targeted – i.e. their data is still collected, and they’re still tracked and profiled, but that tracking isn’t used to send advertisements to them. For me, that at least is completely clear. Do Not Track should mean no tracking. Blocking data collection is more important than stopping targeting – because once the data is collected, once the profiles are made, they’re available for misuse later down the line.
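The gap between the two readings can be made concrete in code. The Do Not Track signal itself is real – it is sent as the HTTP request header `DNT: 1` – but the handler functions and profile store below are hypothetical, purely to illustrate the distinction between blocking collection and merely blocking targeting.

```python
# Sketch of the two competing readings of Do Not Track. The DNT signal
# is a real HTTP request header ("DNT: 1"); everything else here is a
# hypothetical illustration, not any real ad server's code.

def wants_no_tracking(headers: dict) -> bool:
    """True if the request carries the Do Not Track signal."""
    return headers.get("DNT") == "1"


def handle_request_strict(headers: dict, profiles: list) -> str:
    """The reading users expect: DNT means no data collection at all."""
    if wants_no_tracking(headers):
        return "served, nothing collected"
    profiles.append(headers.get("User-Agent", "unknown"))
    return "served and profiled"


def handle_request_industry(headers: dict, profiles: list) -> str:
    """The industry reading: the profile is still built regardless;
    DNT only suppresses the targeted adverts at the end."""
    profiles.append(headers.get("User-Agent", "unknown"))
    if wants_no_tracking(headers):
        return "served, profiled, but not targeted"
    return "served, profiled and targeted"
```

In the strict version the profile store stays empty when DNT is set; in the industry version it grows either way – which is exactly why “no targeting” is such a weak substitute for “no tracking”.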

That far deeper point is still not being discussed sufficiently. The battle is at a more superficial level – but it’s still an important battle. Who matters more, the consumers or the advertisers? Advertisers would have us believe that by stopping behavioural targeting we will break the whole economic basis of the internet – but that is based on all kinds of assumptions and presumptions, as Sarah A Downey pointed out in this piece for TechCrunch, “The Free Internet Will Be Just Fine With Do Not Track. Here’s Why.” At the recent Amsterdam Privacy Conference, Simon Davies, one of the founders of Privacy International, made the bold suggestion that the behavioural targeting industry should simply be banned – and there is something behind his argument. Right now, the industry is not doing much to improve its image: seeming to undermine the whole nature of Do Not Track does not make them look good.

There’s another spectre that the industry might have to face: the European Union is getting ready to act, and when they act, they tend to do things without a great deal of subtlety, as the fuss around the Cookie Directive has shown. If the advertisers want to avoid heavy-handed legislation, they should beware: ‘Steelie’ Neelie Kroes is getting impatient. As reported in The Register, if they don’t stop their squabbling tactics over Do Not Track, she’s going to call in the politicians….

Someone, somewhere, has to take a lead on privacy. Apple had the chance, and to a great extent blew it, by hiding their tracking controls where the sun doesn’t shine. Microsoft seems to be making an attempt too, but will they hold their nerve in the face of huge pressure from the advertising industry – and even if they do, will their lead be undermined by the tactics of the advertising industry? If no-one takes that lead, no-one takes that initiative, the EU will take their kid gloves off… and then we’re all likely to be losers, consumers and advertisers alike….

The privacy race to the bottom

I tend to be a ‘glass-half-full’ sort of person, seeing the positive side of any problem. In terms of privacy, however, this has been very hard over the last few weeks. For some reason, most of the ‘big guns’ of the internet world have chosen the last few weeks to try to out-do each other in their privacy-intrusiveness. One after the other, Google, Facebook and Amazon have made moves that have had such huge implications for privacy that it’s hard to keep positive. It feels like a massive privacy ‘race to the bottom’.

Taking Google first, it wasn’t exactly that any particular new service or product hit privacy, but more the sense of what lies ahead that was chilling, with Google’s VP of Products, Bradley Horowitz, talking about how ‘Google+ was Google itself’. As Horowitz put it in an interview for Wired last week:

“But Google+ is Google itself. We’re extending it across all that we do — search, ads, Chrome, Android, Maps, YouTube — so that each of those services contributes to our understanding of who you are.”

Our understanding of who you are. Hmmm. The privacy alarm bells are ringing, and ringing loud. Lots of questions arise, most directly to do with consent, understanding and choice. Do people using Google Maps, or browsing with Chrome, or even using search, know, understand and accept that their actions are being used to build up profiles so that Google can understand ‘who they are’? Do they have any choice about whether their data is gathered or used, or how or whether their profile is being generated? The assumption seems to be that they just ‘want’ it, and will appreciate it when it happens.

Mind you, Facebook are doing their very best to beat Google in the anti-privacy race. The recent upgrade announced by Facebook has had massive coverage, not least for its privacy intrusiveness, from Timeline to Open Graph. Once again it appears that Mark Zuckerberg is making his old assumption that privacy is no longer a social norm, and that we all want to be more open and share everything. Effectively, he seems to be saying that privacy is dead – and if it isn’t quite yet, he’ll apply the coup de grâce.

That, however, is only part of the story. The other side is a bit less expected, and a bit more sinister. Thanks to the work of Australian hacker/blogger Nik Cubrilovic, it was revealed that Facebook’s cookies ‘might’ be continuing to track us after we log out of Facebook. At first Facebook denied this, then they claimed it was a glitch and did something to change it. All the time, Facebook tried to portray themselves as innocent – even as the ‘good guys’ in the story. A Facebook engineer – identifying himself as staffer Gregg Stefancik – said that “our cookies aren’t used for tracking”, and that “most of the cookies you highlight have benign names and values”. He went on to make what seemed to be a very reassuring suggestion, quoted in The Register:

“Generally, unlike other major internet companies, we have no interest in tracking people.”

How, then, does this square with the discovery that a couple of weeks ago Facebook appears to have applied for a patent to do precisely that? The patent itself is chilling reading. Amongst the gems in the abstract is the following:

“The method additionally includes receiving one or more communications from a third-party website having a different domain than the social network system, each message communicating an action taken by a user of the social networking system on the third-party website”

Not only do they want to track us, but they don’t want us to know about it, telling us they have no interest in tracking.

OK, so that’s Google and Facebook, with Facebook probably edging slightly ahead in their privacy-intrusiveness. But who is this coming fast on the outside? Another big gun, but a somewhat unexpected one: Amazon. The new Kindle Fire, a very sexy bit of kit, takes the Kindle and transforms the screen into something beautiful and colourful. It also adds a web-browsing capability, using a new browser Amazon calls Silk. All fine, so far, but the kicker is that Silk appears to track your every action on the web and pass it on to Amazon. Take that, Google; take that, Facebook! Could Amazon beat both of them in the race to the bottom? They’re certainly giving it a go.

All pretty depressing reading for those of us interested in privacy. And the trio could easily be joined by another of the big guns when Apple launches its new ‘iCloud’ service, due this week. I can’t say I’m expecting something very positive from a service which might put all your content in the cloud….

…and yet, somehow, I DO remain positive. Though the big guns all seem to be racing the same way, there has at least been a serious outcry about most of it, and it’s making headline news not just in what might loosely be described as the ‘geek press’. Facebook seemed alarmed enough by Nik Cubrilovic’s discoveries to react swiftly, even if a touch disingenuously. We all need to keep talking about this, we all need to keep challenging the assumption that privacy doesn’t matter. We need to somehow start to shift the debate, to move things so that companies compete to be the most privacy-friendly rather than the most privacy-intrusive. If we don’t, there’s only one outcome. The only people who really lose in the privacy race-to-the-bottom are us….

Dogs will be dogs…

The growing furore over the gathering and retention of location data by smartphones reminds me very strongly of a joke that I heard first in the school playground many years ago. ‘Why does a dog lick his balls? Because he can.’

The same is true about smartphone operators. Why do they gather location data? Because they can. Technically, they can, because of the very nature of smartphones. Legally they can, because our laws over this kind of thing are obtuse and opaque – and because they understand the way they can get ‘consent’ through the small print of terms and conditions that no-one ever reads, let alone understands.

A lot of the discussion about the current furore has centred around the individual companies concerned, and brought out all the usual views of the merits or otherwise of Apple, Google and Microsoft – but whether you consider each of them to be fancified show-bred French poodles, friendly and loveable Labradors or ageing but far from toothless Rottweilers, they’re all dogs, and dogs will be dogs. Even the best behaved and most presentable show dog will lick his balls if he’s allowed to.

Three questions arise for me. Firstly, why are people surprised? Many people seem to be genuinely shocked by what has been revealed – even people who know a great deal about the subject. Is it really such a surprise? We’ve known about the capabilities of smartphones since they first emerged, and about the behaviour of all the companies involved for even longer. Dogs will be dogs.

The second question is whether any of it matters – and for me the answer is clear. Of course it matters, and matters a lot. That doesn’t mean that we need to panic, or need to throw our iPhones, Blackberries and HTCs in the nearest river – just that we need to be aware of what is going on, and do what we can to ameliorate or manage the situation.

That brings me to the last question – what, if anything, can be done about it? Well, if we were talking about dogs, the answer would be simple: make sure they’re well trained, and well managed. If badly looked after, dogs behave badly. If they’re well trained, they can be very useful, helpful and excellent pets. They can help us in our personal lives, in our work and in many social situations – but you still need to train them and manage them. We need to do the same for the likes of Apple, Google and Microsoft. Show them who’s boss – using all the tools we can to do so. That means putting the right laws in place, but also using our powers as consumers, as advocates, and as lobbyists.

If dogs know what they can do and what they can’t, they’ll behave much better. It’s very hard to train a dog not to lick his balls – and probably just as hard to train companies like Apple, Google and Microsoft not to push the limits of privacy – but it can be done. We need to tell them that this kind of thing is not acceptable – and back up what we say with the law and with our money. If we don’t want our location data gathered, we need to be clear about it.

My personal view is that we have the right not to have this kind of thing happen to us – and that we need to proclaim that right (and other rights) loud and clear.