Privacy and Security together…

I just spent a very interesting day at ‘Project Breach’ – an initiative of Norfolk and Suffolk police that aims to encourage businesses and others to understand and protect themselves from cybercrime. It was informative in many ways, and primarily (as far as I could tell) intended both to be a pragmatic workshop, giving real advice, and to ‘change the narrative’ over cybercrime. On both counts, I think it worked – the advice, in particular, seemed eminently sensible.

What was particularly interesting, however, was how that advice was in most ways in direct tension with the government’s approach to surveillance, as manifested most directly in the Investigatory Powers Act 2016 – often labelled the ‘Snooper’s Charter’.

The speaker – Paul Maskall – spent much of the first session outlining the risks associated with your ‘digital footprint’. How your search history could reveal things about you. How your metadata could say more about you than the content of your postings. How your browsing history could put you at risk of all kinds of scams. And yet all of this is made more vulnerable by the Investigatory Powers Act. Service providers can be compelled to retain search histories and metadata. ‘Internet Connection Records’ can be used to create a record of your browsing – and all of this is then vulnerable to the many forms of hacking that Maskall went on to detail. The Investigatory Powers Act makes you more vulnerable to scams and other crimes.

The next two sessions focused on how to protect yourself – and two central pillars were encryption and VPNs. Maskall emphasised again and again the importance of encryption – and yet this is exactly what Amber Rudd railed against only a few weeks ago, trying to link it to the Westminster attack, though subsequent evidence proved yet again that this was a red herring at best. The Investigatory Powers Act builds on the old Regulation of Investigatory Powers Act (RIPA) in the ways it could allow encryption to be undermined – which again puts us all at risk. When I raised this issue, first on Twitter and then in the room, Maskall agreed with me – encryption is critical to all of us, and attempts to undermine it put us all at risk – but I was challenged, privately, by another delegate after the session was over. Amber Rudd, this delegate told me, wasn’t talking about undermining encryption for us, but only for ISIS and Al Qaeda. I was very wrong, he told me, to put the speaker on the spot about the subject. All that showed me was how sadly effective the narrative presented by Amber Rudd – and Theresa May before her, and others in what might loosely be called the ‘security lobby’ – has been. You can’t undermine encryption for ISIS without undermining it for all of us. You can’t allow backdoors for the security services without providing backdoors for criminals, enemy states and terrorists.
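
Here is a minimal sketch in Python of that last point – my own illustration, using the ‘cryptography’ library’s Fernet recipe, not anything actually proposed by the Home Office. Decryption depends only on possession of the key, never on the identity or intentions of whoever holds it:

    # A toy illustration, not anyone's actual proposal: encryption with the
    # Python 'cryptography' library's Fernet recipe. The point is simple --
    # decryption depends only on possession of the key, not on who holds it.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                    # the user's secret key
    token = Fernet(key).encrypt(b"a private message")

    # A 'backdoor' amounts to keeping a copy of this key (or a master key)
    # somewhere else. Whoever obtains that copy -- investigator, criminal,
    # hostile state -- decrypts with exactly the same call:
    escrowed_copy = key                            # leaked, stolen or seized
    print(Fernet(escrowed_copy).decrypt(token))    # b'a private message'

The mathematics has no way of checking a warrant: whoever holds the key can read the message, which is why a backdoor ‘only for terrorists’ is not technically possible.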

VPNs were the other key tool mentioned by the speaker – and quite rightly. Though they have not been directly acted against by the Investigatory Powers Act, they do (or might) work against the main new concept introduced by the Act, the Internet Connection Record. Further, VPN operators might themselves be subjected to the attention of the authorities and asked to provide browsing histories – though the good ones don’t even retain those histories, which will cause a conflict in itself. Quite how the authorities will deal with the extensive use of VPNs has yet to be seen – but if VPNs frustrate the intentions of the Act, we can expect something to be done. The overall point, however, remains: for good security – and privacy – we need to go against the intentions of the Act.
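
To see why, a toy model of an ICR-style log may help. The field names below are my own invention, not the Act’s actual schema – broadly speaking, an ICR records which service you connected to, and when:

    # A toy model (invented field names, not the Act's schema) of why a VPN
    # frustrates an Internet Connection Record: the ISP can only log the
    # endpoint it actually sees.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ConnectionRecord:
        timestamp: datetime
        destination: str        # the service the ISP sees you connect to

    # Without a VPN, the ISP-level log reads like a browsing diary:
    without_vpn = [
        ConnectionRecord(datetime(2017, 5, 4, 9, 0), "news-site.example"),
        ConnectionRecord(datetime(2017, 5, 4, 9, 5), "health-forum.example"),
        ConnectionRecord(datetime(2017, 5, 4, 9, 10), "bank.example"),
    ]

    # With a VPN, every record collapses to the tunnel endpoint -- the ISP
    # can record that you used a VPN, and when, but not where you went:
    with_vpn = [ConnectionRecord(r.timestamp, "vpn.example") for r in without_vpn]

The browsing record does not disappear – it simply moves from the ISP to the VPN provider, which is why what the authorities can demand of those providers matters so much.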

The other way to put that is that the Act goes directly against good practice in security and privacy. It undermines, rather than supports, security. This is something that many within the field understand – including, judging from his comments to me after the event, the speaker at Project Breach. It is sad that this should be the case. A robust, secure and privacy-friendly internet helps us all. Even though it might go against their instincts, governments really should recognise that.

The iPhone crack’d from side to side…

The news that the fingerprinting system on the new iPhone 5s has been successfully cracked by the German hacker group the Chaos Computer Club should come as no surprise. As I suggested in my initial response to the announcement, hackers would be itching to get their fingers on the technology and find a way around it. It took them about a week.

This is from the Chaos Computer Club’s blog post:

“The biometrics hacking team of the Chaos Computer Club (CCC) has successfully bypassed the biometric security of Apple’s TouchID using easy everyday means. A fingerprint of the phone user, photographed from a glass surface, was enough to create a fake finger that could unlock an iPhone 5s secured with TouchID. This demonstrates – again – that fingerprint biometrics is unsuitable as access control method and should be avoided.”

The Chaos Computer Club are what I would call ‘white hat’ hackers: they’re the good guys, working generally to bring into the open things that are of benefit to us all. They’re very good at what they do – but they’re not the only hackers out there. What the Chaos Computer Club could do in about a week will be possible for those others – and that includes those working for the authorities, for organised crime, for the other tech companies and so forth.

The precise details of how they did it are interesting but not really that important: the key is to understand the implications. Any technology, no matter how advanced, will have vulnerabilities. Any data gathered, no matter by whom or how it is held, will be vulnerable. That needs to be taken on board when we look at how and whether to embrace a technology – and it needs to be understood when weighing that technology’s risks and rewards. Many people – not least in the technology press when covering the launch of products like the iPhone 5s – tend to gloss over the risks. They take the manufacturers’ assurances that the technology is ‘secure’ at close to face value – and treat the concerns of the odd privacy advocate as tinfoil-hat-wearing paranoia.

Now there IS a good deal of paranoia out there – but to paraphrase Joseph Heller, just because they’re paranoid it doesn’t mean they’re not right. What we’ve learned about the activities of the NSA, GCHQ and others over the summer has gone far beyond many of the nightmares of the most rabid conspiracy theorist. That doesn’t mean that we should all be moving to small islands in the Outer Hebrides – but it should mean that we are a little more cautious, a little more open-minded, and a little less trusting of everything we’re told.

There are a lot of jokes circulating on the internet at the moment. One goes like this:

[screenshot of a joke about fingerprints and passwords]

There’s a point there. By moving from a system of passwords (a deeply flawed system) to one based on biometrics, we’re taking on a new level of risk. Is this a risk that we really want to take? What are the benefits? As the Chaos Computer Club have demonstrated, it’s not really for security. Fingerprinting is a deeply insecure system. If someone gets hold of your phone, it will be covered with your fingerprints – getting the data out of it won’t be a major problem for any of the people who might want to use that data.

So it’s not really about security – it’s about convenience. It’s about saving the seconds that it takes to put in a few numbers to unlock your screen. That’s not something to be ignored – we give away huge numbers of things just for a little convenience – but we should at least be aware that this is the bargain being made. For many people it may be worth it. I’m not one of them.

The other risks associated with the use of fingerprinting as an identification and authentication method – some of which I outlined here – are too much for me. Worst of all, for me, is the way it helps to establish the idea of being asked for your fingerprints as ‘normal’. It’s not normal to me. It still smacks of authoritarianism – and it’s worse than the image of the policeman asking ‘your papers please’, because you’ll have no choice. That’s the thing about biometrics. You become your papers…

No thank you.

The Snoopers’ Charter: we need a new consultation

The Communications Data Bill – more commonly (and fairly accurately) known as the ‘Snoopers’ Charter’ – is due to re-emerge at any moment. We have been expecting it for some time – and yet we have seen nothing official, and there has been no sign of a proper public consultation on the subject. That, to me, is wholly inadequate – so I have written to the Home Office, copying in my MP, Dr Julian Huppert. The contents of the letter are below. If you feel strongly about this matter – and I hope you do – you can add to the pressure for a proper public consultation by using the Open Rights Group’s system, which can be found at:

http://www.openrightsgroup.org/campaigns/snoopers-charter-consultation

Here’s what I wrote – it is fairly long, but still only scratches at the surface of what is wrong with the overall approach to surveillance put forward in this bill:

————————————————————

Dear Home Office

Re: Draft Communications Data Bill

I write to you as a legal academic, specialising in data privacy, and as a member of the public with significant concerns over the progress of the Communications Data Bill. In my opinion we need a consultation – and a public and open consultation – on the Bill for many reasons.

The media storm – and the eventual and significant criticism levelled at the bill by the Parliamentary Committee hastily convened to scrutinise it the first time around – should have made it clear to the proponents of the bill that there is huge public interest in this area. That has a number of implications:

  1. That the criticisms levelled at the bill need to be taken seriously.
  2. That all the interested groups, including advocacy groups and academics – and indeed the public – need to be talked to up front, not after all the work has been done.
  3. That a ‘fait accompli’ is not an acceptable solution.
  4. That the level of ‘proof’ provided by those putting the bill forward needs to be much more convincing – and much more open – than what has been provided to date. It is simply not sufficient to say ‘it’s dangerous and something must be done’, or ‘we can’t tell you why, but we need to do this’.

Those of us interested in the Bill have been waiting for the consultation to begin – there have been leaks to the media at intervals suggesting that it would start soon, but so far nothing has been made official or public. That is both unfortunate and ultimately unacceptable. We need that consultation to begin soon, and in an open and public way.

A targeted rather than universal approach to surveillance

Though in my view the Parliamentary Committee did a very good job in scrutinising the bill and in reviewing the huge amount of evidence submitted to it, there are a number of areas that I do not believe were sufficiently considered.

These areas strike at the very essence of the approach adopted by the bill. The whole idea of a ‘gather everything for later scrutiny’ approach misses many of the fundamental risks attached to this kind of surveillance: the risks of data and system vulnerability, of function creep, and of system misuse. Much of the evidence submitted to the committee that scrutinised the Communications Data Bill examined these risks – but the committee did not, in my opinion, see quite how fundamentally they undermine the overall approach of the Bill. Nor, in my opinion, did it look sufficiently into a genuine alternative approach.

That alternative is to go for targeted rather than universal surveillance. This kind of approach can significantly reduce all of these risks. Putting both the warranting and the filtration systems before the gathering stage, rather than the accessing stage, would reduce the amount of data that is vulnerable, make the systems harder to misuse, and reduce the likelihood – or the impact – of function creep. It is also closer to the concept at the heart of British justice: that people are innocent until proven guilty.
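
To make the structural difference concrete, here is a deliberately crude sketch, in Python, with invented names throughout – an illustration of the two architectures, not of any real system:

    # Two architectures for communications surveillance, sketched with
    # invented names. The output is the same; what differs is whether an
    # all-encompassing store ever exists.
    from dataclasses import dataclass

    @dataclass
    class Warrant:
        target: str
        def covers(self, record: dict) -> bool:
            return record.get("subscriber") == self.target

    def universal_approach(all_traffic, warrant):
        database = list(all_traffic)   # everything is retained up front: this
                                       # store is what can be hacked, repurposed
                                       # ('function creep') or misused
        return [r for r in database if warrant.covers(r)]

    def targeted_approach(all_traffic, warrant):
        # the warrant gates the gathering itself; data on everyone else
        # is never collected, so it cannot leak, creep or be misused
        return [r for r in all_traffic if warrant.covers(r)]

    traffic = [{"subscriber": "suspect", "site": "x.example"},
               {"subscriber": "everyone-else", "site": "y.example"}]
    w = Warrant(target="suspect")
    assert universal_approach(traffic, w) == targeted_approach(traffic, w)

Both approaches return the same warranted records; the difference is that under the first, data on the entire population sits in a retained store, exposed to every one of the risks set out below.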

1. Risks of Data and System Vulnerability

It is a fundamental truth of computer data that wherever data exists, and however it is held, it can be vulnerable – to hacking, to accidental loss, to corruption, to misinterpretation, to inappropriate transfers and to many other things. By gathering all communications data, this approach sets itself up for disaster – it is like painting metaphorical ‘hack me’ signs on the databases involved. If you build it, it will be hacked.

What is more, it is not only data that is vulnerable but systems – if ‘black boxes’ are installed at ISPs, those black boxes will bear the same metaphorical ‘hack me’ signs. If you make a back door, the bad people as well as the good people can come in. It doesn’t matter how secure you think your system is: it can be broken into and hacked.

The government does not have an inspiring record in keeping data secure – from the Child Benefit data discs and the MoD laptops to the numerous NHS data breaches – but this is not so much a reflection of government inadequacy as of an underlying truth about data and systems. Stories of hacks and data losses are in the news almost every day – and even those with the greatest technical ability and the greatest incentives to keep data secure have been victims, from Swiss banks to pretty much every major technology company. Facebook, Apple, Microsoft and Google have all fallen victim in recent months.

Ultimately, the only data that is not vulnerable is data that doesn’t exist at all. Furthermore, the only systems that can’t be hacked are systems that don’t exist. If targeted rather than universal surveillance is used, then the vulnerability is enormously reduced.

2. Risks of Function Creep

When data is gathered, or systems are built, for a specific purpose, that purpose can very easily end up being changed. This is a phenomenon particularly common in the field of anti-terror legislation and systems. Most people are aware of RIPA having been used for such things as dog fouling and fly-tipping, and of CCTV cameras ostensibly installed for crime prevention actually being used to check children’s addresses for school catchment areas – these are not myths, nor are they particularly atypical. Function creep has become something close to the ‘norm’ in this field.

There often appear to be good reasons for this function creep – not many people argued when the CCTV system for the London Congestion Charge started being used for crime prevention, for example – but it is a phenomenon that needs to be acknowledged. There isn’t really a legislative way to deal with it – caveats in laws can be sidestepped, laws can be amended in moments of ‘need’ and so forth. The only way to prevent it, as for data vulnerability, is to not build the systems or gather the data in the first place.

Again, this is a strong argument against universal data gathering – data gathered specifically, in a targeted and warranted way, presents less of a risk of function creep. Similarly, specifically designed and targeted systems are less susceptible to function creep than huge, universal surveillance systems.

3. Risks of System and Data Misuse

Another phenomenon familiar to those who study this field is that systems can be and are misused – whether it is databases searched for information about people against whom the searcher has a grudge, or collusion between the authorities and the press. The Leveson Inquiry should have made it entirely clear that such risks are not mythical – and if anyone believes that either the police and other authorities or the press have completely changed as a result of the exposure of the phone and email hacking then they are being extremely naïve.

The systems and data envisaged in this plan are particularly susceptible to this kind of misuse. The description of this kind of system as a ‘Google for communications data’ is entirely apt, and anyone who uses Google regularly should understand how easily the process of searching morphs from one thing to another. Human beings will use these systems – and human beings have human weaknesses, and those weaknesses lead almost inevitably to this kind of misuse. With universal data gathering built into the system, the database would be ripe for very serious misuse indeed.

Again, there is only one real way to deal with the possibility of system and data misuse – minimise the size and scope of the system and the amount of data involved. That, again, suggests that we need targeted rather than universal surveillance.

The Normalisation of Surveillance – and the Panopticon Chill

These are just a few of the problems that a system like this could bring about. There are many more – and I have written before about how this kind of surveillance impacts not only on privacy but on a whole array of human rights. By suggesting that universal surveillance is something we should consider normal and acceptable, we are ‘normalising’ surveillance – and that has a whole set of implications.

Jeremy Bentham’s concept of the Panopticon, one with which I am sure you are familiar, is based on the idea that if people know they may be being observed, they modify their behaviour. He envisaged it for a prison – to help control the behaviour of potentially violent and dangerous prisoners by putting them in a position where they know that at any time they might be observed. In effect, this is what this kind of system does – it tells everyone that whatever they do on the internet can and will be observed.

What will that mean? Well, it creates a kind of ‘chilling effect’ – what I would call the ‘Panopticon Chill’. It means people will be less free with their speech – and will feel less free about what they do, where they go and whom they associate with online. That impacts upon many of our recognised human rights: freedom of expression, freedom of association and freedom of assembly, to start with.

There are some who would welcome this kind of Panopticon Chill – some who think that we need to control the internet more. That, however, is to treat us as though we were prisoners in an enormous online jail. Bentham’s idea was not for ordinary people, but for potentially violent and dangerous prisoners. Is that how we all want to be treated? Is that the kind of society that we want to live in?

What kind of society do we want to live in?

That is the bottom line for the Communications Data Bill. Putting this kind of bill into place would be to set up precisely the kind of surveillance society that Orwell warned us of in 1984. Is that what we want to do?

There are other huge questions – not least the question of whether it will work at all. As the huge quantity of evidence submitted in the initial consultation revealed, few real experts believe that it will – anyone with expertise will be able to sidestep the system, leaving the rest of us to suffer the downsides of this kind of surveillance without the upsides even existing.

What is more, by promoting such a system in this country we not only give more oppressive regimes permission to do the same (leaders from Putin in Russia to Assad in Syria will be watching us with eagle eyes) but we would also kick-start the surveillance technology industry. Whichever companies win the contracts to supply the technology enabling the Bill will be looking to exploit that technology – who will they sell their systems to? What will be the next systems to which they apply their learning and experience? The £1.8 billion (and probably more) that the UK government spends on this will reap benefits for dictators and oppressors worldwide in the coming decades.

A new draft of the Communications Data Bill?

As I understand it, only people close to the Home Office have so far seen how the Communications Data Bill will look in its new draft. I have been told that I won’t be upset when I see it – but without more information it is hard for me to be optimistic. Unless there is a fundamental change – most importantly a shift from the universal to the targeted approach, and the application of warrants and filters at the gathering rather than the accessing stage – it is hard to imagine that it will be something I can welcome, or something that will inspire public trust.

A bill based on targeted rather than universal surveillance is possible. If it is brought about I believe it could not only be more compatible with human rights, send better messages to the rest of the world and be more cost effective – but it could also be more effective and less wasteful of the scarce resources of the police and intelligence services.

It does, however, require a big shift in attitudes. I hope that shift in attitudes is possible – and, at the very least, that we can have this debate in public, on reasonable and sensible terms, and without the feeling that we are being railroaded into a particular solution without any real options being presented.

A proper consultation

That, ultimately, is why we need a proper consultation. I have a perspective to present – and very particular views to put forward. I believe they are worthy of consideration – and I am very much open to discuss them. That discussion has not yet taken place. We need it to happen – and we have a right to have it happen. A proper consultation should happen now – at the drafting stage – not after everything has been already set in stone.

One real key here is that the public has not been properly informed in this debate. Initially, it appeared as though the government wanted to pass this bill so quickly that there wouldn’t even be time for Parliamentary scrutiny – it was only when the issue caused outrage that the committee was set up, and even then the consultation period was very brief and came at a time when many academics in particular were not in a position to submit. We need more time – this is a crucial issue, and the public needs to have confidence that an appropriate decision is being made. The debate was also characterised at times by language that should have no place in serious discussion, with opponents of the bill accused of having blood on their hands.

This time around, we need proper consultation, with sufficient time and sufficient opportunity for all stakeholders to have their say. All options need to be considered, and in a way that both encourages and supports public participation, and develops a greater level of trust.

I am copying this letter to my MP, Dr Julian Huppert, who was on the Parliamentary Committee that scrutinised the bill, and to the Open Rights Group as well as making it public on my blog.

Kind regards

Dr Paul Bernal

Lecturer in Information Technology, Intellectual Property and Media Law
UEA Law School
University of East Anglia
Norwich NR4 7TJ
Email: paul.bernal@uea.ac.uk

Leveson: don’t believe the hype…

With Monday’s debate and vote looming, the hype over Leveson seems to be ratcheting up a few notches. Nick Cohen’s acerbic piece in the Observer, headlined ‘Leveson’s liberal friends bring shame upon the left’ is just one example. Given that those most closely involved in the debate on both sides are journalists, politicians and ‘media folk’ it should not come as a surprise that the contributions (again on both sides) are well-written, in prominent places in the media, and tending towards the hyperbolic.

If you believe Cohen and those on his side, the ‘pro-Leveson lobby’ are risking centuries of precious free speech just to make a political point; if you believe Cathcart, Hugh Grant and the Hacked Off team, failing to implement Leveson means missing a historic opportunity to rein in the evils of the press barons and their abominable practices. Who’s right? The points made by both sides are well put and seductive. Cohen is right that we shouldn’t allow an opportunity to humiliate David Cameron and give the likes of Murdoch and Dacre a bloody nose to blind us to the risks to free speech of giving politicians control over the press. Hacked Off are quite right that what the press have done – and indeed continue to do – is often hideous and hugely reprehensible, and that simply allowing it to go on without any action would be ridiculous. And yet I find it hard to get wholly enthused by either side of the debate.

Leveson wouldn’t be the end of free speech…

I don’t believe the ‘anti-Leveson’ argument for a number of reasons. First of all, because as I’ve argued before I don’t think the mainstream press that we have now bears much resemblance to a ‘free press’ – it’s just a question of who or what controls it, rather than whether it’s free. Secondly, I don’t think that what’s being proposed by either side will actually do much to fetter the press. It may control one or two excesses, but it won’t do anything that’s not already being done. We already have defamation and privacy law that impacts upon free speech, we already have huge editorial control that prevents some of the really important debates ever reaching the public eye – what’s proposed by Leveson won’t make as much difference as his opponents might think.

Leveson wouldn’t do much to control press excesses…

Similarly, I don’t believe the ‘pro-Leveson’ group either. Firstly, as noted above I suspect they’re deeply naïve if they believe that even the full implementation of Leveson would really do that much to curb the practices of the press – regulation rarely has the effects that people might desire, either way. What’s more, if they imagine that implementation of Leveson would turn the likes of the Sun, Mail and Express into responsible papers, they’re really living in cloud cuckoo-land. Regardless of Leveson, the Sun will still be full of rampant misogyny, the Mail full of vile anti-immigrant and anti-European rants and the Express will still billow out homophobia and Islamophobia. They’ll continue to demonise the disabled and those on benefits, twist the debate on Europe and shift the blame for all our problems onto the vulnerable and the innocent. They may not hack our phones, but they’ll still find a way to dig out secrets and private information – and ways that are technically legal, too. The data is out there – and they’ll find a way to dig it out and to use it in all kinds of horrible ways. If we think statutory press regulation will stop this, we’re deluding ourselves.

This debate is about politics…

The reality, it seems to me, is that this debate is primarily a political one – and almost nothing to do with free speech. It’s a chance for David Cameron to put clear blue water between himself and the Lib Dems – and a chance for Ed Miliband to give Cameron a good hiding. It’s Nick Clegg staking a claim to a liberalism that his behaviour over the last two years in coalition has vigorously denied. It’s a chance for all three to position themselves in preparation for the long run-up to the 2015 election. Nothing to do with free speech at all. But then, to a great extent, free speech is moving on from the ‘press’…

Free speech matters…

All this is happening while the real ‘cutting edge’ of free speech is somewhere other than the papers – and is under threat in ways that Leveson doesn’t get close to. Free speech is in the hands of the bloggers and tweeters – and the question of how to ‘regulate’ them is still up in the air. Social media prosecutions are still happening – and though the DPP has issued new guidance that might liberalise things a little, the proof of the pudding will be in the eating. We don’t know what will happen – but none of the political parties has taken a strong free speech stance, obsessed as they are with Leveson.

Free speech is also in the hands of the protestors – and there are few signs that any politicians are coming out properly in support of the right to protest. Instead, there are prosecutions and crackdowns. If politicians on any side of the debate were really in favour of free speech, they’d be talking about this a lot more. Are they? Not really – and certainly not at anywhere near the level at which they talk about Leveson.

For me, Leveson is to a great extent a distraction. However the vote goes on Monday, it won’t be disastrous for either side. There will be much more hype over the next few days – but we should take it all with a huge pinch of salt. We shouldn’t believe the hype – we should focus more on the real threats to free speech that are out there.

The good, the bad and the ugly side of privacy in Germany

Privacy advocates in the UK sometimes look across at Germany in wistful admiration – but is the story quite as rosy for privacy in Germany as it sometimes appears? Perhaps not, for though one recent event has shown Germany in its best light, as a beacon for privacy rights across Europe, another has demonstrated the opposite. Even Germany has an ugly side to how it deals with privacy.

First for the good. As widely reported (in this case by out-law.com), this last week Germany’s highest court suspended the country’s implementation of the EU Data Retention Directive, ruling that it violates citizens’ rights to privacy. The suspension comes after a class action brought by 35,000 German citizens – a level of citizen activism that would be close to miraculous in the UK, particularly for an issue such as privacy. The law by which the German government implemented the Data Retention Directive was found unconstitutional, failing to include the safeguards for individual privacy required under Germany’s constitution. A victory for privacy, albeit neither a complete nor a permanent one, since the court did not say that it would be impossible to implement the Data Retention Directive in a constitutionally acceptable way, just that this particular implementation was unconstitutional. Nonetheless, it is something of which German privacy advocates can feel justifiably proud – and many in other countries in Europe will hope it signals changes elsewhere. It is hard to imagine, however, that a similar result could be achieved in the UK.

Then for the bad – or at least the ugly. A story reported far less widely, at least in the UK, concerns the German government’s use of data about German citizens’ use of Swiss banks for tax evasion. This data has been acquired through various methods, most of which would probably be considered illegal – certainly from the perspective of the Swiss banks. Reuters has reported on the subject – it is a somewhat complex story, but the essence of it is that private data detailing the banking activities of German citizens has been offered for sale to a number of German states. Some of that data may have come from insider whistle-blowers, but some has also come from hackers – and earlier this year the German Federal Government gave states the go-ahead to buy the data if they wish, whether or not it was obtained illegally. At least one state, North Rhine-Westphalia, has bought the data and is using it to flush out tax evaders. As Reuters reports, nearly 6,000 German tax evaders have ‘owned up’ as a result of this evidence – and more could still come out of the woodwork. “If we get a signal from the politicians that it’ll only be possible for people to come clean this year, then we could have another 5,000 doing so with corresponding additional revenues,” DSTG head Dieter Ondracek told Reuters. “Then a billion euros could be possible.”

This is not the first time that Germany has bought illegally acquired private data. Two years ago something similar happened with bank data from Liechtenstein, effectively forcing the principality to relax its previously stringent bank secrecy laws. The current affair over Swiss banking data might have a similar effect on banking rules in Switzerland, though that could be a long way off – already the Swiss have complied with a US request over tax evasion, and, as reported by Reuters, Switzerland’s justice minister questioned on Sunday whether tax evasion should continue to be treated as a misdemeanour rather than a crime.

It is hard, of course, to generate much sympathy for people evading tax through the use of bank accounts in Switzerland – but that should not blind us to the significance of the events taking place. It is not so much the nature of the data that is significant as the way in which it has been acquired. Getting data through official requests from one government to another, as in the case of the US, is one thing; paying money for data acquired illegally, quite possibly through hacking, is quite another, and sets a very uncomfortable precedent. Moreover, it provides a new and potentially large incentive for hackers to go after this kind of data. And if this kind of data, why not other data? Aside from the obvious problem of Germany’s potential obligations as a signatory of the Cybercrime Convention, there is an awkward parallel here with another recent event – the enormously publicised hacking of the Gmail accounts of Chinese dissident groups. The Chinese government of course vigorously denies any involvement in the hack, but if it were offered data on illegal groups acquired by hacking, how different would its buying of that data be from the German government’s buying of this Swiss banking data?

From the perspectives of the two governments, they’re just seeking to root out people involved in illegal activities – for the Germans, tax evaders; for the Chinese, people involved in subversive (and illegal) activities. And in both cases, the fact that it might be possible to make money from selling this kind of data cannot help but be an incentive to try to acquire it. People in the West may have much more sympathy for Chinese dissidents than they do for German tax evaders, but in some ways the principles are very much the same. Do we really want to set that kind of precedent?