A progressive digital policy?

Yesterday I read a call for submissions to Labour Left’s ‘Red Book II’, by Dr Éoin Clarke – to develop a way forward for the Labour Party. It started me thinking about what would really constitute a progressive digital policy – because for me, any progressive party should be looking at how to deal with the digital world. It is becoming increasingly important – and policies of governments seem to be wholly unable to deal with or even understand the digital world.

It must be said from the outset that I am not a Labour Party member, but that I was for many years. I left in 1999, partly because I was leaving the country and partly because I was already becoming disillusioned as to the direction that Labour was taking – a stance that the invasion of Iraq only confirmed. I have not rejoined the party since, though I have been tempted at times. One of the reasons I have not been able to bring myself to join has been the incoherence and oppressiveness of Labour’s digital policies, which are not those of a progressive, positive and modern party, of one that represents the ordinary people, and in particular the young people, of Britain today.

That seems to me to be very wrong. Labour should be a progressive party. It should be one that both represents and learns from young people. It should be one that looks forward rather than back – and one that is brave enough to be radical. Right now it isn’t: and the last government presided over some appalling, oppressive and regressive digital policies.

I’ve written in the past about why governments always get digital policy wrong – but it’s much easier to snipe from the sidelines than it is to try to build real policy. Here, therefore, is my first attempt at putting together a coherent, progressive policy for digital government. It is of course very much a skeleton – just the barest of bones – and very much a first attempt. There is probably a lot missing, and it needs a lot more thought. It would take a lot of work to put flesh on the bones – but for me, the debate needs to be had.

The starting point for such a policy would be a series of nine commitments.

  1. A commitment to the right to access the net – and to supporting human rights online as well as in the real world. This is the easiest part of the policy, and one where Labour, at least theoretically, has not been bad. Gordon Brown spoke of such a right. However, supporting such a right has implications, implications which the Labour Party seems to have neither understood nor followed. The most important such implication is that it should not be possible to arbitrarily prevent people accessing the net – and that the barrier for removal of that right should be very high. Any policy which relies on the idea of blocking access should be vigorously resisted – the Digital Economy Act is the most obvious example. Cutting people’s access on what is essentially suspicion is wholly inconsistent with a commitment to the right to access the internet.
  2. A commitment against internet surveillance – internet surveillance is very much in the news right now, with the Coalition pushing the Communications Data Bill, accurately labelled the ‘Snoopers’ Charter’, about which I have written a number of times. Labour should very much oppose this kind of surveillance, but doesn’t. Indeed, rather the opposite – the current bill is in many ways a successor to Labour’s ‘Interception Modernisation Programme’. Surveillance of this kind goes very much against what should be Labour values: it can be and has been used to monitor those organising protests and similar, going directly against the kinds of civil rights that should be central to the programme of any progressive, left wing party: the rights to assembly and association. Labour should not only say, right now, that it opposes the Snoopers’ Charter, but that it would not seek to bring in other similar regulation. Indeed, it should go further, and suggest that it would work within the European Union to repeal the Data Retention Directive (which was pushed through by Tony Blair) and to reform RIPA – restricting the powers that it grants rather than increasing them.
  3. A commitment to privacy and data protection – rather than just paying lip service to them. I have written many times before about the problems with the Information Commissioner’s Office. First of all it needs focus: it (or any replacement body) should be primarily in charge of protecting privacy. Secondly, it needs more real teeth – but also more willingness to use them and against more appropriate targets. There has been far too little enforcement on corporate bodies, and too much on public authorities. If companies are to treat individuals’ private information better, they need the incentive to do so – at the moment even if they are detected, the enforcement tends to be feeble: a slap on the wrist at best. The current law punishes each group inappropriately: public authorities with big fines, which ultimately punish the public, corporates barely at all. Financial penalties would provide an incentive for businesses, while more direct personal punishments for those in charge of public authorities would work better as an incentive for them, as well as not punishing the public!
  4. A commitment to oppose the excessive enforcement of copyright – and instead to encourage the content industry to work for more positive ways forward. This would include the repeal of the Digital Economy Act, one of the worst pieces of legislation in the digital field, and one about which the Labour Party should be thoroughly ashamed. Labour needs to think more radically and positively – and understand that the old ways don’t work, and merely manage to alienate (and even criminalise) a generation of young people. Labour has a real opportunity to do something very important here – and to understand the tide that is sweeping across the world, at least in the minds of the people. In the US, SOPA and PIPA have been roundly beaten. ACTA suffered a humiliating defeat in the European Parliament and is probably effectively dead. In France, the new government is looking to abolish HADOPI – the body that enforces their equivalent of the Digital Economy Act. A truly progressive, radical party would not resist this movement – it would seek to lead it. Let the creative minds of the creative industries be put to finding a creative, constructive and positive way forward. Carrots rather than just big sticks.
  5. A commitment to free speech on the internet. This has a number of strands. First of all, to develop positive and modern rules governing defamation on the internet. Reform of defamation is a big programme – and I am not convinced that the current reform package does what it really should, focussing too much on reforming what happens in the ‘old media’ (where I suspect there is less wrong than some might suggest) without dealing properly with the ‘new media’ (which has been dealt with fairly crudely in the current reforms). There needs to be clarity about protection for intermediaries, for example.
  6. A commitment against censorship – this is the second part of the free speech strand. In the current climate, there are regular calls to deal with such things as pornography and ‘trolling’ on the internet – but most of what is actually suggested amounts to little more than censorship. We need to be very careful about this indeed – the risks of censorship are highly significant. Rather than strengthening our powers to censor and control, via web-blocking and so forth, we need to make them more transparent and accountable. A key starting point would be the reform of the Internet Watch Foundation, which plays a key role in dealing with child abuse images and related websites, but falls down badly in terms of transparency and accountability. It needs much more transparency about how it works – a proper appeals procedure, better governance structures and so forth. The Labour Party must not be seduced by the populism of anti-pornography campaigners into believing in web-blocking as a simple, positive tool. There are huge downsides to that kind of approach, downsides that often greatly outweigh the benefits.
  7. A radical new approach to social media – the third strand of the free speech agenda. We need to rethink the laws and their enforcement that have led to tragic absurdities like the Twitter Joke Trial, and the imprisonment of people for Facebook posts about rioting. The use of social media is now a fundamental part of many people’s lives – pretty much all young people’s lives – and at present it often looks as though politicians and the courts have barely a clue how it works. Labour should be taking the lead on this – and it isn’t. The touch needs to be lighter, more intelligent and more sensitive – and led by people who understand and use social media. There are plenty of them about – why aren’t they listened to?
  8. A commitment to transparency – including a full commitment to eGovernment, continuing the good aspects of what the current government is doing in relation to Open Data. Transparency, however, should mean much more – starting with full and unequivocal support for Freedom of Information. There has been too much said over recent months to denigrate the idea of freedom of information, and to suggest that it has ‘gone too far’. The opposite is much more likely to be the case: and a new approach needs to be formulated. If it takes too much time, money and effort to comply with FOI requests, that indicates that the information hasn’t been properly organised or classified, not that the requests should be curbed. The positive, progressive approach would be to start to build systems that make it easier to provide the information, not complain about the requests.
  9. A commitment to talk to the experts – and a willingness to really engage with and listen to them. We have some of the best – from people like Tim Berners-Lee to Professor Ross Anderson at the Cambridge University Computer Lab, Andrew Murray at the LSE, the Oxford Internet Institute and various other university departments, civil society groups and so forth – and yet the government consistently fails to listen to what they say, and prefers instead to listen to industry lobby groups and Whitehall insiders. That is foolish, short-sighted and inappropriate – as well as being supremely ineffective. It is one of the reasons that policies formulated are not just misguided in their aims but also generally fail to achieve those aims. There is real expertise out there – it should be used!

Much more is needed of course – this just sets out a direction. I’ve probably missed out some crucial aspects. Some of this may seem more about reversing and cancelling existing policies rather than formulating new ones – but that is both natural and appropriate, as the internet, much more than most fields, generally needs a light touch. The internet is not ‘ungovernable’, but most attempts to govern it have been clumsy and counter-productive.

A forward-looking, radical and positive digital policy would mark the Labour Party out as no longer being in the hands of the lobbyists, but instead being willing to fight for the rights of real, ordinary people. It would mark out the Labour Party as being a party that understands young people better – and supports them rather than demonises and criminalises them. Of course I do not expect the Labour Party to take this kind of agenda on. It would take a level of political courage that has not been demonstrated often by any political party, let alone the current Labour Party, to admit that they have got things so wrong in the past. Admission of past faults is something that seems close to political blasphemy these days – for me, that is one of the biggest problems in politics.

As I said at the start, this is very much a first stab at an approach for the future – I would welcome comments, thoughts and even criticism(!). We need debate on this – and not just for the Labour Party. Currently, though my history has been with the Labour Party, I find myself without anyone that I think can represent me. If any party were to take on an agenda for the digital world that would make more sense, I would be ready to listen.

Snoopers’ Charter Consultation

The draft Communications Data Bill – the ‘Snoopers’ Charter’ – is currently up for consultation before a specially put together Joint Parliamentary Committee. The consultation period has been relatively short – it ends on 23rd August – and at a time when many people are away on holiday and while many others have been enjoying (and being somewhat distracted by) the Olympic Games.

Even so, it’s very important – not just because what is being proposed is potentially highly damaging, but because it’s a field in which the government has been, in my opinion, very poorly advised and significantly misled. There is a great deal of expertise around – particularly on the internet – but in general, as in so many areas of policy, the government seems to be very unwilling to listen to the right people. I’ve blogged on the general area a number of times before – most directly on ‘Why does the government always get it wrong?’.

All this means that it would be great if people made submissions – for details see here.

Here is the main part of my submission, reformatted for this blog.

————————————————-

Submission to the Joint Committee on the draft Communications Data Bill

The draft Communications Data Bill raises significant issues – issues connected with human rights, with privacy, with security and with the nature of the society in which we wish to live. These issues are raised not by the detail of the bill but by its fundamental approach. Addressing them would, in my opinion, require such a significant re-drafting of the bill that the better approach would be to withdraw the bill in its entirety and rethink the way that security and surveillance on the Internet is addressed.

As noted, there are many issues brought up by the draft bill: this submission does not intend to deal with all of them. It focusses primarily on three key issues:

1) The nature of internet surveillance. In particular, that internet surveillance means much more than ‘communications’, partly because of the nature of the technology involved and partly because of the many different ways in which the internet is used. Internet surveillance means surveilling not just correspondence but social life, personal life, finances, health and much more. Gathering ‘basic’ data can make the most intimate, personal and private information available and vulnerable.

2) The vulnerability of both data and systems. It is a fallacy to assume that data or systems can ever be made truly ‘secure’. The evidence of the past few years suggests precisely the opposite: those who should be most able and trusted with the security of data have proved vulnerable. The approach of the draft Communications Data Bill – essentially a ‘gather all then look later’ approach – is one that not only fails to take proper account of that vulnerability, but actually sets up new and more significant vulnerabilities, effectively creating targets for hackers and others who might wish to take advantage of or misuse data.

3) The risks of ‘function creep’. The kind of systems and approach envisaged by the draft Bill makes function creep a real and significant risk. Data, once gathered, is a ‘resource’ that is almost inevitably tempting to use for purposes other than those for which its gathering was envisaged. These risks seem to be insufficiently considered both in the overall conception and in the detail of the Bill.

I am making this submission in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research in internet law and specialise in internet privacy from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy, and set out a possible rights-based approach to internet privacy. The Draft Communications Data Bill therefore lies precisely within my academic field. I would be happy to provide more detailed evidence, either written or oral, if that would be of assistance to the committee.

1 The nature of internet surveillance

As set out in Part 1 of the draft bill, the approach adopted is that all communications data should be captured and made available to the police and other relevant public authorities. The regulatory regime set out in Part 2 concerns accessing the data, not gathering it: gathering is intended to be automatic and universal. Communications data is defined in Part 3 Clause 28 very broadly, via the categories of ‘traffic data’, ‘use data’ and ‘subscriber data’, each of which is defined in such a way as to attempt to ensure that all internet and other communications activity is covered, with the sole exception of the ‘content’ of a communication.

The all-encompassing nature of these definitions is necessary if the broad aims of the bill are to be supported: if the definitions do not cover any particular form of internet activity (whether existent or under development), then the assumption would be that those who the bill would intend to ‘catch’ would use that form. That the ‘content’ of communications is not captured (though it is important in relation to more conventional forms of communication such as telephone calls, letters and even emails) is of far less significance in relation to internet activity, as shall be set out below.

1.1 ‘Communications Data’ and the separation of ‘content’

As noted above, the definition of ‘communications data’ is deliberately broad in the bill. On the surface, it might appear that ‘communications data’ relates primarily to ‘correspondence’ – bringing in the ECHR Article 8 right to respect for privacy of correspondence – and indeed communications like telephone calls, emails, text messages, tweets and so forth do fit into this category – but internet browsing data has a much broader impact. A person’s browsing can reveal far more intimate, important and personal information about them than might be immediately obvious. It would tell which websites are visited, which links are followed, which files are downloaded – and also when, and how long sites are perused and so forth. This kind of data can reveal habits, preferences and tastes and can uncover, to a reasonable probability, religious persuasion, sexual preferences, political leanings and so forth, even without what might reasonably be called the ‘content’ of any communications being examined – though what constitutes ‘content’ is contentious.

Considering a Google search, for example, if RIPA’s requirements are to be followed, the search term would be considered ‘content’ – but would links followed as a result of a search count as content or communications data? Who is the ‘recipient’ of a clicked link? If the data is to be of any use, it would need to reveal something of the nature of the site visited – and that would make it possible to ‘reverse engineer’ back to something close enough to the search term used to be able to get back to the ‘content’. The content of a visited site may be determined just by following a link – without any further ‘invasion’ of privacy. When slightly more complex forms of communication on the internet are considered – e.g. messaging or chatting on social networking sites – the separation between content and communications data becomes even less clear. In practice, as systems have developed, the separation is for many intents and purposes a false one. The issue of whether or not ‘content’ data is gathered is of far less significance: focussing on it is an old-fashioned argument, based on a world of pen and paper that is to a great extent one of the past.

What is more, analytical methods through which more personal and private data can be derived from browsing habits have already been developed, and are continuing to be refined and extended, most directly by those involved in the behavioural advertising industry. Significant amounts of money and effort are being spent in this direction by those in the internet industry: it is a key part of the business models of Google, Facebook and others. It is already advanced but we can expect the profiling and predictive capabilities to develop further.

What this means is that by gathering, automatically and for all people, ‘communications data’, we would be gathering the most personal and intimate information about everyone. When considering this Bill, that must be clearly understood. This is not about gathering a small amount of technical data that might help in combating terrorism or other crime – it is about universal surveillance and profiling.

1.2 The broad impact of internet surveillance

The kind of profiling discussed above has a very broad effect, one with a huge impact on much more than just an individual’s correspondence. It is possible to determine (to a reasonable probability) individuals’ religions and philosophies, their languages used and even their ethnic origins, and then use that information to monitor them both online and offline. When communications (and in particular the internet) are used to organise meetings, to communicate as groups, to assemble both offline and online, this can become significant. Meetings can be monitored or even prevented from occurring, groups can be targeted and so forth. Oppressive regimes throughout the world have recognised and indeed used this ability – recently, for example, the former regime in Tunisia hacked into both Facebook and Twitter to attempt to monitor the activities of potential rebels.

It is of course this kind of profiling that can make internet monitoring potentially useful in counterterrorism – but making it universal rather than targeted will impact directly on the rights of the innocent, rights that, according to the principles of human rights, deserve protection. In the terms set out in the European Convention on Human Rights, there is a potential impact on Article 8 (right to private and family life, home and correspondence), Article 9 (freedom of thought, conscience and religion), Article 10 (freedom of expression) and Article 11 (freedom of assembly and association). Internet surveillance can enable discrimination (contrary to ECHR Article 14 (prohibition of discrimination)) and even potentially automate it – a website could automatically reject visitors whose profile doesn’t match key factors, or change services available or prices based on those profiles.

2 The vulnerability of data

The essential approach taken by the bill is to gather all data, then to put ‘controls’ over access to that data. That approach is fundamentally flawed – and appears to be based upon false assumptions. Most importantly, it is a fallacy to assume that data can ever be truly securely held. There are many ways in which data can be vulnerable, both from a theoretical perspective and in practice. Technological weaknesses – vulnerability to ‘hackers’ etc. – may be the most ‘newsworthy’ in a time when hacker groups like ‘Anonymous’ have been gathering publicity, but they are far from the most significant. Human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to ‘monetise’ data) may be more significant – and the ways that all these vulnerabilities can combine makes the risk even more significant.

In practice, those groups, companies and individuals that might be most expected to be able to look after personal data have been subject to significant data losses. The HMRC loss of child benefit data discs, the MOD losses of armed forces personnel and pension data and the numerous and seemingly regular data losses in the NHS highlight problems within those parts of the public sector which hold the most sensitive personal data. Swiss banks’ losses of account data to hacks and data theft demonstrate that even those with the highest reputation and need for secrecy – as well as the greatest financial resources – are vulnerable to human intervention. The high profile hacks of Sony’s online gaming systems show that even those that have access to the highest level of technological expertise can have their security breached. These are just a few examples, and whilst in each case different issues lay behind the breach, the underlying issue is the same: where data exists, it is vulnerable.

Designing and building systems to implement legislation like the Bill exacerbates the problem. The bill is not prescriptive as to the methods that would be used to gather and store the data, but whatever method is used would present a ‘target’ for potential hackers and others: where there are data stores, they can be hacked, where there are ‘black boxes’ to feed real-time data to the authorities, those black boxes can be compromised and the feeds intercepted. Concentrating data in this way increases vulnerability – and creating what are colloquially known as ‘back doors’ for trusted public authorities to use can also allow those who are not trusted – of whatever kind – to find a route of access.

Once others have access to data – or to data monitoring – the rights of those being monitored are even further compromised, particularly given the nature of the internet. Information, once released, can and does spread without control.

3 Function Creep

Perhaps even more important than the vulnerabilities discussed above is the risk of ‘function creep’ – that when a system is built for one purpose, that purpose will shift and grow, beyond the original intention of the designers and commissioners of the system. It is a familiar pattern, particularly in relation to legislation and technology intended to deal with serious crime, terrorism and so forth. CCTV cameras that are built to prevent crime are then used to deal with dog fouling or to check whether children live in the catchment area for a particular school. Legislation designed to counter terrorism has been used to deal with people such as anti-arms trade protestors – and even to stop train-spotters photographing trains.

In relation to the Communications Data Bill this is a very significant risk – if a universal surveillance infrastructure is put into place, the ways that it could be inappropriately used are vast and multi-faceted. What is built to deal with terrorism, child pornography and organised crime might creep towards less serious crimes, then anti-social behaviour, then the organisation of protests and so forth. Further to that, there are many commercial lobbies that might push for access to this surveillance data – those attempting to combat breaches of copyright, for example, would like to monitor for suspected examples of ‘piracy’. In each individual case, the use might seem reasonable – but the function of the original surveillance, the justification for its initial imposition, and the balance between benefits and risks, can be lost. An invasion of privacy deemed proportionate for the prevention of terrorism might well be wholly disproportionate for the prevention of copyright infringement, for example.

The risks associated with function creep in relation to the surveillance systems envisaged in the Bill have a number of different dimensions. There can be creep in terms of the types of data gathered: as noted above, the split between ‘communications data’ and ‘content’ is already one that is contentious, and as time and usage develops is likely to become more so, making the restrictions as to what is ‘content’ likely to shrink. There can be creep in terms of the uses to which the data can be put: from the prevention of terrorism downwards. There can be creep in terms of the authorities able to access and use the data: from those engaged in the prevention of the most serious crime to local authorities and others. All these different dimensions represent important risks: all have happened in the recent past to legislation (e.g. RIPA) and systems (e.g. the London Congestion charge CCTV system).

Prevention of function creep through legislation is inherently difficult. Though it is important to be appropriately prescriptive and definitive in terms of the functions of the legislation (and any systems put in place to bring the legislation into action), function creep can and does occur through the development of different interpretations of legislation, amendments to legislation and so forth. The only real way to guard against function creep is not to build the systems in the first place: a key reason to reject this proposed legislation in its entirety rather than to look for ways to refine or restrict it.

4 Conclusions

The premise of the Communications Data Bill is fundamentally flawed. By its very design, innocent people’s data will be gathered (and hence become vulnerable) and their activities will be monitored. Universal data gathering or monitoring is almost certain to be disproportionate at best, highly counterproductive at worst.

This Bill is not just a modernisation of existing powers, nor a way for the police to ‘catch up’. It is something on a wholly different scale. We as citizens are being asked to put a huge trust in the authorities not to misuse the kind of powers made possible by this Bill. Trust is of course important – but what characterises a liberal democracy is not trust of authorities but their accountability, the existence of checks and balances, and the limitation of their powers to interfere with individuals’ lives. This bill, as currently envisaged, does not provide that accountability and does not sufficiently limit those powers: precisely the reverse.

Even without considering the issues discussed above, there is a potentially even bigger flaw with the bill: it appears very unlikely to be effective. The people that it might wish to catch are the least likely to be caught – those expert with the technology will be able to find ways around the surveillance, or ways to ‘piggy back’ on other people’s connections and draw more innocent people into the net. As David Davis MP put it, only the incompetent and the innocent will get caught.

The entire project needs a thorough rethink. Warrants (or similar processes) should be put in place before the gathering of the data or the monitoring of the activity, not before the accessing of data that has already been gathered, or the ‘viewing’ of a feed that is already in place. A more intelligent, targeted rather than universal approach should be developed. No evidence has been made public to support the suggestion that a universal approach like this would be effective – it should not be sufficient to just suggest that it is ‘needed’ without that evidence, nor to provide ‘private’ evidence that cannot at least qualitatively be revealed to the public.

That brings a bigger question into the spotlight, one that the Committee might think is the most important of all: what kind of a society do we want to build – one where everyone’s most intimate activities are monitored at all times just in case they might be doing something wrong? That, ultimately, is what the draft Communications Data Bill would build. The proposals run counter to some of the basic principles of a liberal, democratic society – a society where there should be a presumption of innocence rather than of suspicion, and where privacy is the norm rather than the exception. Is that what the Committee would really like to support?

Dr Paul Bernal

Lecturer in Information Technology, Intellectual Property and Media Law, UEA Law School