No Grants, No Fair? – Guest post by Super Cyan…

 


No Grants:

I finally have something worthwhile to say in 2016, and unfortunately it's in response to a Conservative measure. An article by @JBeattieMirror highlighted that the Tories have blocked a debate concerning the end of maintenance grants. On Thursday the 14th of January this year, the Third Delegated Legislation Committee discussed the Education (Student Support) (Amendment) Regulations 2015 (the Regulations), which were approved by a ten to eight majority. The explanatory memorandum to these Regulations maintains that 2016 cohort students will no longer qualify for the maintenance grant or special support grant, but will instead qualify for an increased loan for living costs in 2016/17 (para 4.2). A 2016 cohort student, according to Regulation 4(iv), is a full-time student who begins their academic course on or after August 2016. Regulation 19 inserts the following into Regulation 56 of its predecessor:

A current system student who is not a 2016 cohort student qualifies in accordance with this regulation for a maintenance grant in connection with the student's attendance on a designated course (other than a distance learning course) (emphasis added).

This means precisely what has been said above: grants and special support are to be made obsolete for students starting courses this year.

Human Rights:

From a human rights perspective, what exactly are the implications of these Regulations? The starting point is that university courses fall within the realm of higher education (Leyla Şahin v. Turkey – (Application no. 44774/98) para 141), and the corresponding right in the European Convention on Human Rights (ECHR) is Article 2 of Protocol 1 (A2P1), which stipulates that:

[i] No person shall be denied the right to education.

[ii] In the exercise of any functions which it assumes in relation to education and to teaching, the State shall respect the right of parents to ensure such education and teaching in conformity with their own religious and philosophical convictions.

This Protocol is incorporated into UK law through Schedule 1 of the Human Rights Act 1998 (HRA 1998). Section 15(1)(a) of the HRA 1998 sets out a reservation, in Part II of Schedule 3, to the effect that the principle affirmed in the second sentence of A2P1 is accepted only so far as it is compatible with the provision of efficient instruction and training and the avoidance of unreasonable public expenditure. By implication, as noted, the UK accepts unreservedly the principle that "no person shall be denied the right to education" set out in the first sentence of A2P1.

The basis of the argument would be that removing the maintenance grant and special support will indirectly discriminate against those from poorer backgrounds, making them less likely to go into higher education. The amount of grant was relative to household income: under the old Regulation 57(3)(a), for example, a student whose household income was below £25,000 would receive £2,984. This is where Article 14, the Convention's discrimination provision, would take effect. It states that:

The enjoyment of the rights and freedoms set forth in this Convention shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status.

Article 14 is not a standalone right and can only be used in conjunction with another substantive right (Sommerfeld v Germany – (Application no. 31871/96) para 84), in this instance A2P1. The poorest students would likely fall under the 'social origin' category, which the Committee on Economic, Social and Cultural Rights (CESCR) takes to refer to a person's inherited social status (para 24). Similarly, according to a handbook produced jointly by the European Court of Human Rights (ECtHR) and the European Union Agency for Fundamental Rights (EUAFR), social origin may relate to a position acquired through birth into a particular social class or community (such as those based on ethnicity, religion, or ideology), or to one's social situation such as poverty and homelessness. In the unlikely event that poorer students fall outside the ambit of 'social origin', they would fall under 'other status': the Grand Chamber (GC) of the ECtHR in Carson and Others v United Kingdom (Application no. 42184/05) noted that only differences in treatment based on a personal characteristic (or 'status') by which persons or groups of persons are distinguishable from each other are capable of amounting to discrimination within the meaning of Article 14. Economic status based on household income would, and should, quite easily fall into this, as noted in Hurley and Moore, R (on the application of) v Secretary of State for Business Innovation & Skills [2012] EWHC 201 (Admin) (para 29).

The jurisprudence of A2P1 has grown. It once guaranteed no access to any particular educational institution that the domestic system provides, and a breach required evidence of a systemic failure of the national educational system as a whole resulting in the individual not having access to a minimum level of education within it (Simpson v United Kingdom (1989) 64 DR 188). Now, for example, A2P1 must be read in light of Articles 8-10 (Leyla Şahin v. Turkey – (Application no. 44774/98) para 155). In the Belgian linguistic case the ECtHR held that although Article 8 does not grant a right to education, as it mainly concerns protecting the individual against arbitrary interference by the public authorities in his private family life, that does not mean that measures taken in the field of education cannot affect those rights (B para 7). Similarly, A2P1 must be read in light of Article 10, which pertains to the freedom '… to receive and impart information and ideas' (Kjeldsen, Busk Madsen and Pedersen – (Application no. 5095/71; 5920/72; 5926/72) para 52). Thus the argument would be that the removal of maintenance grants and other forms of support creates a restriction (Leyla Şahin v. Turkey – (Application no. 44774/98) para 157) on the right to education based on social origin/other status, and also interferes with Articles 8 and 10 in an indirectly discriminatory manner. But for the sake of shortening this blog post, only A2P1 in conjunction with Article 14 will be considered.

The trebling of tuition fees:

The starting point is the High Court decision in Hurley and Moore, R (on the application of) v Secretary of State for Business Innovation & Skills [2012] EWHC 201 (Admin). This case concerned the trebling of tuition fees and its potential for (indirect) discrimination against those from poorer backgrounds (para 4); the Secretary of State contested this (para 5). On what is needed to demonstrate indirect discrimination, the GC in D.H. and Others v. the Czech Republic – (Application no. 57325/00) held that it adopts conclusions that are:

[S]upported by the free evaluation of all evidence, including such inferences as may flow from the facts and the parties’ submissions. According to its established case-law, proof may follow from the coexistence of sufficiently strong, clear and concordant inferences or of similar unrebutted presumptions of fact. Moreover, the level of persuasion necessary for reaching a particular conclusion and, in this connection, the distribution of the burden of proof are intrinsically linked to the specificity of the facts, the nature of the allegation made and the Convention right at stake. (para 178).

The GC also accepted that statistics (although not in the past) can be relied upon to demonstrate a difference in treatment between two groups (para 180). Once a rebuttable presumption has been established, the onus shifts to the respondent State/Government (para 189); nor is discriminatory intent required (paras 184 and 194).

In Hurley, evidence took the form of the Browne Report, which looked at higher education funding. It drew from research regarding participation rates among more socially deprived students. One such research paper, titled Assessing the Impact of the New Student Support Arrangements and carried out by the Institute for Employment Studies, maintained that since the reintroduction of grants and other support arrangements there had been no significant change in participation, but acknowledged that any potentially negative impact on the propensity to enter HE amongst those from lower socio-economic backgrounds may have been masked by the counter pressures arising from the recession. It concluded that the introduction of grants and bursaries did not encourage greater participation (p60-61). Other research also supported this assertion (para 17). However, the court incorrectly noted that research on the Impact of Tuition Fees and Support on University Participation in the UK stipulated that a £1,000 increase in loans increased participation by 3.2% (para 17). In actual fact, the research demonstrated that a £1,000 increase in fees resulted in a decrease in participation of 3.9 percentage points (not the 4.4% stated by the court), that a £1,000 increase in maintenance grants produced an increase in participation of 2.6 percentage points (not the 2.1% stated by the court), and that the effect of an increase in loans on participation was never measured. Thus it seems the court itself made an error of fact (which I will come to later).

In the case, Elias LJ accepted that the case law of the ECtHR regarded tuition fees as a restriction on A2P1 (para 40), but did not agree that the restriction impaired the essence of the right (para 42). When it came to discrimination because of the hike in fees, Elias LJ accepted that an increase in fees alone would discourage many from going to university and would in particular be likely to have a disproportionate impact on the poorer sections of the community, but held that an increase in fees cannot be looked at in isolation (para 51). Furthermore, the increases in fees were mitigated by loans and various measures (i.e. maintenance grants) targeted at increasing university access for the poorest students (para 52). Elias LJ did not think that at this stage it was sufficiently clear that, as a group, they would be disadvantaged under the new scheme (para 52). He did not find the evidence, whether statistical or by way of rebuttable presumption, satisfactory enough to rule in the claimants' favour, but accepted that in time the facts may prove them right. Overall, with Mr Justice King agreeing (paras 101-102), the High Court did not find in the claimants' favour on the question of a violation of A2P1 in conjunction with Article 14 (notwithstanding a declaration that there had been a failure to comply with the Public Sector Equality Duty).

Applying Human Rights and Hurley to the present facts:

Before going further into the arguments, it is important to note a certain obstacle: the ECtHR has noted that a Member State's margin of appreciation (discretion) when it comes to university education (the particular case regarded tuition fees) is much wider than it is for primary and secondary education (Ponomaryovi v Bulgaria – (Application no. 5335/05) para 56). This is why the trebling of fees was ruled Convention-compliant.

But the present situation is different. Firstly, when Elias LJ referred to a £1,000 increase in loans increasing participation, it was noted above that the study did not consider this (unless I'm reading the wrong study); that finding should therefore be rejected, and that particular study cannot be used to justify an argument that an increase in loans will increase participation.

Secondly, Elias LJ noted the importance (para 52) of measures directly targeted at increasing university access for poorer students. In the report titled Urgent reforms to higher education funding and student finance it was maintained that an increase in maintenance grants for the most socially deprived was aimed at ensuring that the 2010 Regulations (i.e. the trebling of tuition fees) did not affect individuals from lower socio-economic backgrounds disproportionately (p5). This, however, would no longer be the case if grants are removed.

Thirdly, Elias LJ did not buy into the assertion that the motivation for the measures was to save money (see paras 59 and 62). However, one of the objectives announced by George Osborne last year was to make savings in the higher education and further education budgets. Andrew McGettigan maintained back then (in 2015) that the cuts would likely affect grants (see here, and here), which was later confirmed by Osborne himself, noting that it was unfair on the taxpayer to subsidise people who are more likely to earn more than them (divide and conquer much?). McGettigan also questioned whether, if the obligation is to make savings on the public sector net debt rather than the deficit, a switch from grants to loans would be sufficient, as the loans would still contribute to the debt. Therefore the argument of saving money would need to be taken into consideration.

One of the criticisms rejected by Elias LJ in Hurley was the contention that the decision was made without proper consultation and analysis. For the present measures, to my knowledge, there has been no consultation, and thus no responses, so these measures would already be on the back foot.

When it comes to analysis, pointing back to the research which stipulated that an increase in tuition fees decreased participation whilst an increase in maintenance grants increased participation, further research stipulated that a £1,000 increase in grants led to a 3.9% increase in participation, and concluded that '[t]hese results underlie the importance of government commitment to non-repayable forms of upfront support such as maintenance grants for undergraduate degree participation.' Moreover, the analysis from the Institute for Fiscal Studies in their executive summary (p5) noted the possible effects of the measures as a whole. They said that any reduction in participation of those from the poorest backgrounds depended upon how debt averse students are and how credit constrained they are, as well as on how responsive participation decisions are to expected increases in the long-run cost of higher education. Furthermore, although participation did not decrease after the previous price hikes, the situations are not analogous, as grants then went up for the poorest and the net present value of loan repayments went down. They contend that under a system that abolishes grants the net present value of repayments is likely to increase substantially for those from the poorest backgrounds, and they would therefore expect 'both of those changes to have negative effects on participation for the poorest students.' However, the up-front support would be increased, which may have an offsetting effect if these individuals are not very forward looking and/or are very credit constrained and/or expect to have low lifetime income. They concluded that the potential negative effects on participation would be stronger if all of the proposed reforms are introduced.

With regards to debt aversion, research by the University of Edinburgh concluded that interviewees from Scotland and England were concerned that tuition fees may deter young people from poorer backgrounds from going to university (p13). Back in 2005, it was noted that students from poorer backgrounds were more debt averse than those from other social classes (p15). In a research briefing paper, the National Union of Students, the Sutton Trust and the University and Colleges Union were not in favour of abolishing grants, whilst University Alliance, though it would have preferred an increase in grants, understood that the government had hard decisions to make. Million+ noted the importance of grants and urged the government to assess the impact of this switch on university access. Universities UK noted that financially the situation is no different bar the increased debt, and that changes to the funding systems do not deter students from the poorest backgrounds (p14-15). Therefore, some were totally against the idea, and others were concerned that assessments needed to be made to determine whether the measures acted as a deterrent to higher education.

According to the Higher education: (student support) regulations 2015 – equality analysis, the switch to loans will have a positive impact on students from low income backgrounds by potentially easing financial worries, reducing the need to work excessive hours during term time and supporting students in their studies. At the margin, for some students, it might make the difference between attending university or not (p52). This increase of £766 seems like a lot to a student (because it is), but earning it would not actually require excessive hours of work: on the minimum wage it works out at well under 12 hours a week when spread out over term time. This £766 may well have been superseded anyway by the bursaries that universities offer in combination with the £4k loan and £3k grant. Not that I'm assuming all universities offer them, but they are means tested like the maintenance grant, and the poorest receive the most. All in all it would seem that switching to loans, as the analysis points out, equals more debt (p52), which may indeed never be paid back; but for those who do pay it back, perhaps another post on the loan freeze will be necessary.
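To put rough numbers on that (my own back-of-the-envelope arithmetic; the wage rate and term length below are my assumptions, not figures from the equality analysis):

```python
# Back-of-the-envelope only: the GBP 766 figure is from the equality analysis;
# the wage rate and term length are my assumptions, not from that document.
extra_loan = 766.00   # increased annual up-front support (GBP)
min_wage = 6.70       # assumed adult National Minimum Wage in 2016 (GBP/hour)
term_weeks = 30       # assumed teaching weeks in an academic year

hours_total = extra_loan / min_wage        # roughly 114 hours in total
hours_per_week = hours_total / term_weeks  # roughly 3.8 hours per week

print(f"{hours_total:.0f} hours in total, about {hours_per_week:.1f} per week")
```

On those assumptions the shortfall works out at nearer four hours a week than twelve.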

Furthermore, the impact assessment specifically highlights that women, mature students, those from ethnic minority backgrounds, those with disabilities, and certain groups of Muslim students are likely to feel the disproportionate effect of these measures (all within the ambit of Article 14) (p82-83). It also acknowledged that single mothers and mature students could be negatively impacted without any resolution; the other groups were regarded either as not being at significant risk (disability and religious belief) or as having proposals put in place for them (ethnic minority background) (p84-85). Annex 2 points to various factors behind increased participation from the groups above, but it would be unwise to ignore the fact that grants etc. were available then. Either way, the onus would be on the government to disprove all this, as I would contend there are ample inferences to create a rebuttable presumption.

Although there was a Parliamentary discussion which favoured the Regulations, there is also a debate going on as I type, so the Parliamentary angle is still up in the air.

What makes the argument different from that advocated in Hurley is obviously that an increase in fees plus the removal of grants adds a further restriction on accessing university. But what wasn't used in the claimants' arguments was that discrimination should be seen in light of Thlimmenos v Greece – 34369/97 [2000] ECHR 162, where the ECtHR held that Article 14 can also be violated when States without an objective and reasonable justification fail to treat differently persons whose situations are significantly different (para 44). And this is the crucial point where it concerns the poorest, the disabled, single parents etc. For those eligible for Special Support Grants (SSG) the human rights argument may be stronger: in Burnip v Birmingham City Council (Rev 1) [2012] EWCA Civ 629 (a bedroom tax case) the Court of Appeal found a violation in line with Thlimmenos for failing to treat different circumstances differently without objective and reasonable justification. The Court of Appeal further held that the Thlimmenos principle was not barred from imposing a positive obligation to allocate resources (para 18). Thus this reasoning could be used to suggest that Thlimmenos creates a positive obligation to cater for those who are disadvantaged. However, in a similar case, MA & Ors, R (on the application of) v The Secretary of State for Work and Pensions [2014] EWCA Civ 13, the Court of Appeal felt that the tax was justified on the basis that discretionary payments were available. Unlike in that case, there will be no discretionary payments replacing maintenance grants and SSGs; this would work against the abolition of grants.

Conclusion:

Although the UK has a wider discretion when it comes to universities and how they are financed, it is not barred from treating different groups differently to correct factual inequalities (Stec and Others v United Kingdom – 65731/01 [2006] ECHR 1162 para 51), and consideration must also be given to those whom a general rule will affect the most; thus applying Thlimmenos may oblige the government to retain grants for the most disadvantaged. The jury is out on whether a court would actually buy into my points (THIS IS NOT LEGAL ADVICE, I'm looking at you NUS ;)), but whatever the matter, the argument is now stronger than it was in 2012 because of the further restriction of access to education on the grounds of A2P1 in conjunction with Article 14. The government, if taken to court, would have to use stronger justifications than rhetoric such as 'why should taxpayers subsidise X?', which could be used to justify essentially anything ever. I couldn't go into a full ECHR analysis of all the Convention rights at stake or even all the measures (loan freeze etc.) because those require just as much consideration as this one post.

Does the UK engage in ‘mass surveillance’?


When giving evidence to the Parliamentary Committee on the Draft Investigatory Powers Bill Home Secretary Theresa May stated categorically that the UK does not engage in mass surveillance. The reaction from privacy advocates and many in the media was something to see – words like ‘delusional’ have been mentioned – but it isn’t actually as clear cut as it might seem.

Both the words ‘mass’ and ‘surveillance’ are at issue here. The Investigatory Powers Bill uses the word ‘bulk’ rather than ‘mass’ – and Theresa May and her officials still refuse to give examples or evidence to identify how ‘bulky’ these ‘bulk’ powers really are. While they refuse, the question of whether ‘bulk’ powers count as ‘mass’ surveillance is very hard to determine. As a consequence, Theresa May will claim that they don’t, while skeptics will understandably assume that they do. Without more information, neither side can ‘prove’ they’re right.

The bigger difference, though, is with the word 'surveillance'. Precisely what constitutes surveillance is far from agreed. In the context of the internet (and other digital data surveillance) there are, very broadly speaking, three stages: the gathering or collecting of data, the automated analysis of the data (including algorithmic filtering), and then the 'human' examination of the results of that analysis or filtering. This is where the difference lies: privacy advocates and others might argue that the 'surveillance' happens at the first stage – when the data is gathered or collected – while Theresa May, David Omand and those who work for them would be more likely to argue that it happens at the third stage – when human beings are involved.
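To make the three stages concrete, here is a purely illustrative Python sketch – every function, field and record in it is invented, and it implements no real system – of where the disagreement sits:

```python
# Purely illustrative: hypothetical functions and records, not any real system.
from typing import Iterable

def gather(packets: Iterable[dict]) -> list[dict]:
    """Stage 1: collect and retain everything that passes through."""
    return list(packets)

def automated_filter(records: list[dict], watchlist: set[str]) -> list[dict]:
    """Stage 2: algorithmic analysis - keep only records matching selectors."""
    return [r for r in records if r.get("destination") in watchlist]

def human_examination(flagged: list[dict]) -> None:
    """Stage 3: a person actually looks at what the filter produced."""
    for record in flagged:
        print(f"analyst reviews: {record}")

traffic = [{"source": "a", "destination": "example.com"},
           {"source": "b", "destination": "suspicious.example"}]

retained = gather(traffic)                                    # 'surveillance' here?
flagged = automated_filter(retained, {"suspicious.example"})  # or here?
human_examination(flagged)                                    # or only here?
```

On the first reading, surveillance has happened as soon as `gather` runs over everyone's traffic; on the second, only when `human_examination` is reached – and the volume at that final stage may indeed be small.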

If the surveillance occurs when the data is gathered, there is little doubt that the powers envisaged by the Investigatory Powers Bill would constitute mass surveillance. The Internet Connection Records, which appear to apply to pretty much everyone (so clearly 'mass'), would certainly count, as would the data gathered through 'bulk' powers, whether by interception, through ICRs, or through the mysterious 'bulk personal datasets' about which we are still being told very little.

If, however, the surveillance only occurs when human beings are involved in the process, then Theresa May can argue her point: the amount of information looked at by humans may well not be ‘massive’, regardless of how much data is gathered. That, I suspect, is her point here. The UK doesn’t engage in ‘mass surveillance’ on her terms.

Who is right? Analogies are always dangerous in this area, but it would be like installing a camera in every room of every house in the UK, turning that camera on, having the footage recorded and stored for a year – but having police officers only look at limited amounts of the footage and only when they feel they really need to.

Does the surveillance happen when the cameras are installed? When they're turned on? When the footage is stored? When it's filtered? Or when the police officers actually look at it? That is the issue here. Theresa May can say, and be right, that the UK does not engage in mass surveillance, if and only if it is accepted that surveillance only occurs at the later stages of the process.

In the end, however, it is largely a semantic point. Privacy invasion occurs when the camera is installed and the capability of looking at the footage is enabled. That has been consistently shown by recent rulings of both the Court of Justice of the European Union and the European Court of Human Rights. Whether it is called 'surveillance' or something else, it invades privacy – which is a fundamental right. That doesn't mean that it is automatically wrong – but it does mean that the balancing act between the rights of privacy (and the freedom of expression, of assembly and association etc. that are protected by that privacy) and the need for 'security' needs to be considered at the gathering stage, and not just at the stage when people look at the data.

In practice, too, the middle of the three stages – the automated analysis, filtering or equivalent – may be more important than the last one. Decisions are already made at that stage, and this is likely to increase. Surveillance by algorithm is likely to be (and may already be) more important than surveillance by human eyes, ears and minds. That means that we need to change our mindset about which part of the surveillance process matters. Whether we call it ‘mass surveillance’ or something else is rather beside the point.

Global letter on Encryption – why it matters.

I am one of the signatories on an open letter to the governments of the world that has been released today. The letter has been organised by Access Now and there are 195 signatories – companies, organisations and individuals from around the world.

The letter itself can be found here. The key demands are the following:

[Screenshot: the key demands of the open letter]

It's an important letter, and one that should be shared as widely as possible. Encryption matters, and not just for technical reasons and not just for 'technical' people. Even more than that, the arguments over encryption are a manifestation of a bigger argument – and, I would argue, a massive misunderstanding that needs to be addressed: the idea that privacy and security are somehow 'alternatives' or at the very least that privacy is something that needs to be 'sacrificed' for security. The opposite is the case: privacy and security are not alternatives, they're critical partners. Privacy needs security and security needs privacy.

The famous (and much misused) saying often attributed (probably erroneously) to Benjamin Franklin, “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety” is not, in this context at least, strong enough. In relation to the internet, those who would give up essential privacy to purchase a little temporary security will get neither. It isn’t a question of what they ‘deserve’ – we all deserve both security and privacy – but that by weakening privacy on the internet we weaken security.

The conflict over encryption exemplifies this. Build in backdoors, weaken encryption, or prevent or limit the ways in which people can use it, and you reduce both their privacy and their security. The backdoors, the weaknesses, the vulnerabilities that are provided for the 'good guys' can and will be used by the 'bad guys'. Ordinary people will be more vulnerable to criminals and scammers, oppressive regimes will be able to use them against dissidents, overreaching authorities against whistleblowers, abusive spouses against their targets and so forth. People may think they have 'nothing to hide' from the police and intelligence agencies – but that is to fundamentally miss the point. Apart from everything else, it is never just the police and the intelligence agencies that our information needs protection from.

What is just as important is that there is no reason (nor evidence) to suggest that building backdoors or undermining encryption helps even in the terms suggested by those advocating it. No examples have been provided – and whenever they are suggested (as in the aftermath of the Paris terrorist attacks) they quickly dissolve when examined. From a practical perspective this makes sense. 'Tech-savvy' terrorists will find their own way around these approaches – DIY encryption at their own end, for example – while non-tech-savvy terrorists (the Paris attackers seem to have used unencrypted SMSs) can be caught in different ways, if we use a different and more intelligent approach. Undermining or 'back-dooring' encryption puts us all at risk without even helping. The superficial attractiveness of the idea is just that: superficial.

The best protection for us all is a strong, secure, robust and ‘privacy-friendly’ infrastructure, and those who see the bigger picture understand this. This is why companies such as Apple, Google, Microsoft, Yahoo, Facebook and Twitter have all submitted evidence to the UK Parliament’s Committee investigating the draft Investigatory Powers Bill – which includes provisions concerning encryption that are ambiguous at best. It is not because they’re allies of terrorists or because they make money from paedophiles, nor because they’re putty in the hands of the ‘privacy lobby’. Very much the opposite. It is because they know how critical encryption is to the way that the internet works.

That matters to all of us. The internet is fundamental to the way that we live our lives these days. Almost every element of our lives has an online aspect. We need the internet for our work, for our finances, for our personal and social lives, for our dealings with governments, corporations and more. It isn’t a luxury any more – and neither is our privacy. Privacy isn’t an indulgence – and neither is security. Encryption supports both. We should support it, and tell our governments so.

Read the letter here – and please pass it on.

Investigatory Powers Bill – my written submission

As well as providing oral evidence to the Draft Investigatory Powers Bill Joint Committee (which I have written about here, can be watched here, and a transcript can be found here) I submitted written evidence on the 15th December 2015.


The contents of the written submission are set out below. It is a lot more detailed than the oral evidence, and a long read (around 7,000 words), but even so, given the timescale involved, it is not as comprehensive as I would have liked – and I didn't have as much time to proofread it as I would have liked. There are a number of areas I would have liked to cover that I did not, but I hope it helps.

As it is published, the written evidence is becoming available on the IP Bill Committee website here – my own evidence is part of what has been published so far.


 

Submission to the Joint Committee on the draft Investigatory Powers Bill by Dr Paul Bernal

I am making this submission in my capacity as Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research in internet law and specialise in internet privacy from both a theoretical and a practical perspective. My PhD thesis, completed at the LSE, looked into the impact that deficiencies in data privacy can have on our individual autonomy, and set out a possible rights-based approach to internet privacy. My book, Internet Privacy Rights – Rights to Protect Autonomy, was published by Cambridge University Press in 2014. I am a member of the National Police Chiefs’ Council’s Independent Digital Ethics Panel. The draft Investigatory Powers Bill therefore lies precisely within my academic field.

I gave oral evidence to the Committee on 7th December 2015: this written evidence is intended to expand on and explain some of the evidence that I gave on that date. If any further explanation is required, I would be happy to provide it.


 

One page summary of the submission

The submission looks specifically at the nature of internet surveillance, as set out in the Bill, at its impact on broad areas of our lives – not just what is conventionally called 'communications' – and on a broad range of human rights – not just privacy but freedom of expression, of association and assembly, and of protection from discrimination. It looks very specifically at the idea of 'Internet Connection Records', briefly at data definitions and at encryption, as well as looking at how the Bill might be 'future proofed' more effectively.

The submission will suggest that in its current form, in terms of the overarching/thematic questions set out in the Committee’s Call for Written Evidence, it is hard to conclude that all of the powers sought are necessary, uncertain that they are legal, likely that many of them are neither workable nor carefully defined, and unclear whether they are sufficiently supervised. In some particular areas – Internet Connection Records is the example that I focus on in this submission – the supervision envisaged does not seem sufficient or appropriate. Moreover, there are critical issues – for example the vulnerability of gathered data – that are not addressed at all. These problems potentially leave the Bill open to successful legal challenge and rather than ‘future-proofing’ the Bill, they provide what might be described as hostages to fortune.

Many of the problems, in my opinion, could be avoided by taking a number of key steps. Firstly, rethinking (and possibly abandoning) the Internet Connection Records plans. Secondly, being more precise and open about the Bulk Powers, including a proper setting out of examples so that the Committee can make an appropriate judgment as to their proportionality and to reduce the likelihood of their being subject to legal challenge. Thirdly, taking a new look at encryption and being clear about the approach to end-to-end encryption. Fourthly, strengthening and broadening the scope of oversight. Fifthly, through the use of some form of renewal or sunset clauses to ensure that the powers are subject to full review and reflection on a regular basis.


1          Introductory remarks

1.1       Before dealing with the substance of the Bill, there is an overriding question that needs to be answered: why is the Committee being asked to follow such a tight timetable? This is a critically important piece of legislation – laws concerning surveillance and interception are not put forward often, particularly as they are long and complex and deal with highly technical issues. That makes detailed and careful scrutiny absolutely crucial. Andrew Parker of MI5 called for ‘mature debate’ on surveillance immediately prior to the introduction of the Bill: the timescale set out for the scrutiny of the Bill does not appear to give an adequate opportunity for that mature debate.

1.2       Moreover, it is equally important that the debate be an accurate one, and engaged upon with understanding and clarity. In the few weeks since the Bill was introduced the public debate has been far from this. As shall be discussed below, for example, the analogies chosen for some of the powers envisaged in the Bill have been very misleading. In particular, to suggest that the proposed ‘Internet Connection Records’ (‘ICRs’) are like an ‘itemised phone bill’, as the Home Secretary described it, is wholly inappropriate. As I set out below (in section 5) the reality is very different. There are two possible interpretations for the use of such inappropriate analogies: either the people using them don’t understand the implications of the powers, which means more discussion is needed to disabuse them of their illusions, or they are intentionally oversimplifying and misleading, which raises even more concerns.

1.3       For this reason, the first and most important point that I believe the Committee should be making in relation to the scrutiny of the Bill is that more time is needed. As I set out below (in 8.4 below) the case for the urgency of the Bill, particularly in the light of the recent attacks in Paris, has not been made: in many ways the attacks in Paris should make Parliament pause and reflect more carefully about the best approach to investigatory powers in relation to terrorism.

1.4       In its current form, in terms of the overarching/thematic questions set out in the Committee’s Call for Written Evidence, it is hard to conclude that all of the powers sought are necessary, uncertain that they are legal, likely that many of them are neither workable nor carefully defined, and unclear whether they are sufficiently supervised. In some particular areas – Internet Connection Records is the example that I focus on in this submission – the supervision envisaged does not seem sufficient or appropriate. Moreover, there are critical issues – for example the vulnerability of gathered data – that are not addressed at all. These problems potentially leave the Bill open to successful legal challenge and rather than ‘future-proofing’ the Bill, they provide what might be described as hostages to fortune.

1.5       Many of the problems, in my opinion, could be avoided by taking a number of key steps. Firstly, rethinking (and possibly abandoning) the Internet Connection Records plans. Secondly, being more precise and open about the Bulk Powers, including a proper setting out of examples so that the Committee can make an appropriate judgment as to their proportionality and to reduce the likelihood of their being subject to legal challenge. Thirdly, taking a new look at encryption and being clear about the approach to end-to-end encryption. Fourthly, strengthening and broadening the scope of oversight. Fifthly, through the use of some form of renewal or sunset clauses to ensure that the powers are subject to full review and reflection on a regular basis.

2          The scope and nature of this submission

2.1       This submission deals specifically with the gathering, use and retention of communications data, and of Internet Connection Records in particular. It deals more closely with the internet rather than other forms of communication – this is my particular area of expertise, and it is becoming more and more important as a form of communications. The submission does not address areas such as Equipment Interference, and deals only briefly with other issues such as interception and oversight. Many of the issues identified with the gathering, use and retention of communications data, however, have a broader application to the approach adopted by the Bill.

2.2       It should be noted, in particular, that this submission does not suggest that it is unnecessary for either the security and intelligence services or law enforcement to have investigatory powers such as those contained in the draft Bill. Many of the powers in the draft Bill are clearly critical for both security and intelligence services and law enforcement to do their jobs. Rather, this submission suggests that as it is currently drafted the bill includes some powers that are poorly defined, poorly suited to the stated function, have more serious repercussions than seem to have been understood, and could represent a distraction, a waste of resources and add an unnecessary set of additional risks to an already risky environment for the very people that the security and intelligence services and law enforcement are charged with protecting.

3          The Internet, Internet Surveillance and Communications Data

3.1       The internet has changed the way that people communicate in many radical ways. More than that, however, it has changed the way people live their lives. This is perhaps the single most important thing to understand about the internet: we do not just use it for what we have traditionally thought of as 'communications', but in almost every aspect of our lives. We don't just talk to our friends online, or just do our professional work online, we do almost everything online. We bank online. We shop online. We research online. We find relationships online. We listen to music and watch TV and movies online. We plan our holidays online. We try to find out about our health problems online. We look at our finances online. For most people in our modern society, it is hard to find a single aspect of our lives that does not have a significant online element.

3.2       This means that internet interception and surveillance has a far bigger potential impact than traditional communications interception and surveillance might have had. Intercepting internet communications is not the equivalent of tapping a telephone line or examining the outside of letters sent and received, primarily because we use the internet for far more than we ever used telephones or letters. This point cannot be overemphasised: the uses of the internet are growing all the time and show no signs of slowing down. Indeed, more dimensions of internet use are emerging all the time: the so-called ‘internet of things’ which integrates ‘real world’ items (from cars and fridges to Barbie dolls[1]) into the internet is just one example.

3.3       This is also one of the reasons that likening Internet Connection Records to an itemised phone bill is particularly misleading. Another equally important reason to challenge that metaphor is the nature and potential uses of the data itself. What is labelled Communications Data (and in particular ‘relevant communications data’, as set out in clause 71(9) of the draft Bill) is by nature of its digital form ideal for analysis and profiling. Indeed, using this kind of data for profiling is the heart of the business models of Google, Facebook and the entire internet advertising industry.

3.4       The inferences that can be – and are – drawn from this kind of data, through automated, algorithmic analysis rather than through informed, human scrutiny, are enormous, and are central to the kind of 'behavioural targeting' that is the current mode of choice for internet advertisers. Academic studies have shown that very detailed inferences can be drawn: analysis of Facebook 'Likes', for example, has been used to indicate the most personal of data including sexuality, intelligence and so forth. A recent study at Cambridge University concluded that 'by mining Facebook Likes, the computer model was able to predict a person's personality more accurately than most of their friends and family.'[2]
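As a purely illustrative sketch of how this kind of inference works – synthetic data below, not the Cambridge model or any real Facebook data – a handful of 'Like' columns can be enough to predict a hidden trait:

```python
# Illustrative only: a synthetic stand-in for 'Likes'-based profiling studies.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is a user; each column records whether they 'Liked' one of 50 pages.
likes = rng.integers(0, 2, size=(200, 50))

# Pretend an unstated personal trait correlates with five of the pages.
trait = (likes[:, :5].sum(axis=1) + rng.normal(0, 0.5, 200) > 2.5).astype(int)

# A basic classifier recovers the trait from the 'Likes' alone.
model = LogisticRegression().fit(likes[:150], trait[:150])
print("held-out accuracy:", model.score(likes[150:], trait[150:]))
```

The point is not the particular classifier but that structured data of this kind makes such predictions trivial at scale.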

3.5       This means that the kind of 'communications' data discussed in the Bill is vastly more significant than what is traditionally considered to be communications. It also means that from a human rights perspective more rights are engaged by its gathering, holding and use. Internet 'communications' data does not just engage Article 8 in its 'correspondence' aspect, but in its 'private and family life' aspect. It engages Article 10 – the impact of internet surveillance on freedom of speech has become a bigger and bigger issue in recent years, as noted in depth by the UN Special Rapporteur on Freedom of Expression, most recently in his report on encryption and anonymity.[3]

3.6       Article 11, which governs Freedom of Association and Assembly, is also critically engaged: not only do people now associate and assemble online, but they use online tools to organise and coordinate 'real world' association and assembly. Indeed, using surveillance to perform what might loosely be called the chilling of association and assembly has become one of the key tools of more authoritarian governments to stifle dissent. Monitoring and even shutting off access to social media systems, for example, was used by many of the repressive regimes in the Arab Spring. Even in the UK, the government communications plan for 2013/14 included the monitoring of social media in order to 'head off badger cull protests', as the BBC reported.[4] This kind of monitoring does not necessarily engage Article 8, as Tweets (the most obvious example to monitor) are public, but it would engage both aspects of Article 11, and indeed of Article 10.

3.7       Article 14, the prohibition of discrimination, is also engaged: the kind of profiling discussed in paragraph 3.4 above can be used to attempt to determine a person’s race, gender, possible disability, religion, political views, even direct information like membership of a trade union. It should be noted, as is the case for all these profiling systems, that accuracy is far from guaranteed, giving rise to a bigger range of risks. Where derived or profiling data is accurate, it can involve invasions of privacy, chilling of speech and discrimination: where it is inaccurate it can generate injustice, inappropriate decisions and further chills and discrimination.

3.8       This broad range of human rights engaged means that the ‘proportionality bar’ for any gathering of this data, interception and so forth is higher than it would be if only the correspondence aspect of Article 8 were engaged. It is important to understand that the underlying reason for this is that privacy is not an individual, ‘selfish’, right, but one that underpins the way that our communities function. We need privacy to communicate, to express ourselves, to associate with those we choose, to assemble when and where we wish – indeed to do all those things that humans, as social creatures, need to do. Privacy is a collective right that needs to be considered in those terms.

3.9       It is also critical to note that communications data is not 'less' intrusive than content: it is 'differently' intrusive. In some ways, as has been historically evident, it is less intrusive – which is why historically it has been granted lower levels of protection – but increasingly the intrusion possible through the gathering of communications data is in other ways greater than that possible through examination of content. There are a number of connected reasons for this. Firstly, it is more suitable for aggregation and analysis – communications data is in a structured form, and the volumes gathered make it possible to use 'big data' analysis, as noted above. Secondly, content can be disguised more easily – either by technical encryption or by using 'coded' language. Thirdly, there are many kinds of subjects that are often avoided deliberately when writing content – things like sexuality, health and religion – that can be determined by analysis of communications data. That means that the intrusive nature of communications data can often be greater than that of content. Moreover, as the levels and nature of data gathered grows, the possible intrusions are themselves growing. This means that the idea that communications data needs a lower level of control, and less scrutiny, than content data is not really appropriate – and in the future will become even less appropriate.
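A minimal sketch of the first of those reasons – the records and domains below are invented for illustration – shows how structured connection records aggregate into sensitive patterns without any content at all:

```python
# Invented records for illustration: structured fields alone reveal a pattern.
from collections import Counter

records = [
    {"domain": "www.samaritans.org", "hour": 3},
    {"domain": "www.samaritans.org", "hour": 2},
    {"domain": "hiv-info.example",   "hour": 23},
    {"domain": "news.example",       "hour": 12},
]

# Group late-night connections by destination: no content is ever read.
late_night = Counter(r["domain"] for r in records
                     if r["hour"] < 6 or r["hour"] > 22)
print(late_night.most_common())
```

Nothing here required reading a single message: the structure of the data did the work.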

4          When rights are engaged

4.1       A key issue in relation to the gathering and retention of communications data is when the relevant rights are engaged: is it when data is gathered and retained, when it is subject to algorithmic analysis or automated filtering, or when it is subject to human examination? When looked at from what might be viewed as an 'old fashioned' communications perspective, it is only when humans examine the data that 'surveillance' occurs and privacy is engaged. In relation to internet communications data this is to fundamentally miss the nature of the data and the nature of the risks. In practice, many of the most important risks occur at the gathering stage, and more at what might loosely be described as the 'automated analysis' stage.

4.2       It is fundamental to the nature of data that when it is gathered it becomes vulnerable. This vulnerability has a number of angles. There is vulnerability to loss – from human error to human malice, from insiders and whistle-blowers to hackers of various forms. The recent hacks of Talk Talk and Ashley Madison in particular should have focussed the minds of anyone envisaging asking communications providers to hold more and more sensitive data. There is vulnerability to what is variously called 'function creep' or 'mission creep': data gathered for one reason may end up being used for another reason. Indeed, where the business models of companies such as Facebook and Google are concerned this is one of the key features: they gather data with the knowledge that this data is useful and that the uses will develop and grow with time.

4.3       It is also at the gathering stage that the chilling effects come in. The Panopticon, devised by Bentham and further theorised about by Foucault, was intended to work by encouraging ‘good’ behaviour in prisoners through the possibility of their being observed, not by the actual observation. Similarly it is the knowledge that data is being gathered that chills freedom of expression, freedom of association and assembly and so forth, not the specific human examination of that data. This is not only a theoretical analysis but one borne out in practice, which is one of the reasons that the UN Special Rapporteur on Freedom of Expression and many others have made the link between privacy and freedom of expression.[5]

4.4       Further vulnerabilities arise at the automated analysis stage: decisions are made by the algorithms, particularly in regard to filtering based on automated profiling. In the business context, services are tailored to individuals automatically based on this kind of filtering – Google, for example, has been providing automatically and personally tailored search results to all individuals since 2009, without the involvement of humans at any stage. Whether security and intelligence services or law enforcement use this kind of method is not clear, but it would be rational for them to do so: this does mean, however, that more risks are involved and that more controls and oversight are needed at this level as well as at the point that human examination takes place.

4.5       Different kinds of risks arise at each stage. It is not necessarily true that the risks are greater at the final, human examination stage. They are qualitatively different, and engage different rights and involve different issues. If anything, however, it is likely that as technology advances the risks at the earlier stages – the gathering and then the automated analysis stages – will become more important than the human examination stage. It is critical, therefore, that the Bill ensures that appropriate oversight and controls are put in place at these earlier stages. At present, this does not appear to be the case. Indeed, the essence of the data retention provisions appears to be that no real risk is considered by the ‘mere’ retention of data. That is to fundamentally misunderstand the impact of the gathering of internet communications data.

5          Internet Connection Records

5.1       Internet Connection Records (‘ICRs’) have been described as the only really new power in the Bill, and yet they are deeply problematic in a number of ways. The first is the question of definition. The ‘Context’ section of the Guide to Powers and Safeguards (the Guide) in the introduction to the Bill says that:

“The draft Bill will make provision for the retention of internet connection records (ICRs) in order for law enforcement to identify the communications service to which a device has connected. This will restore capabilities that have been lost as a result of changes in the way people communicate.” (paragraph 3)

This is further explained in paragraphs 44 and 45 of the Guide as follows:

“44. A kind of communications data, an ICR is a record of the internet services a specific device has connected to, such as a website or instant messaging application. It is captured by the company providing access to the internet. Where available, this data may be acquired from CSPs by law enforcement and the security and intelligence agencies.

45. An ICR is not a person’s full internet browsing history. It is a record of the services that they have connected to, which can provide vital investigative leads. It would not reveal every web page that they visit or anything that they do on that web page.”

Various briefings to the press have suggested that in the context of web browsing this would mean that the URL up to the first slash would be gathered (e.g. www.bbc.co.uk and nothing further, rather than e.g. http://www.bbc.co.uk/sport/live/football/34706510). On this basis it seems reasonable to assume that in relation to app-based access to the internet via smartphones or tablets the ICR would include the activation of the app, but nothing further.
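On that reading – and it is only the press-briefing reading, so it should be treated as an assumption rather than something the Bill itself specifies – the transformation an ICR performs on a web address is trivially simple:

```python
# Sketch of the 'URL up to the first slash' interpretation of an ICR; this
# interpretation comes from press briefings, not from the Bill itself.
from urllib.parse import urlparse

def icr_record(url: str) -> str:
    """Keep only the host, which is apparently all an ICR would retain."""
    return urlparse(url).netloc

full = "http://www.bbc.co.uk/sport/live/football/34706510"
print(icr_record(full))  # -> www.bbc.co.uk (everything after the host is dropped)
```

As the paragraphs below argue, even this host-only record is at once highly revealing and oddly uninformative for investigative purposes.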

5.2       The 'definition' of ICRs in the Bill is set out in 47(6) as follows:

"In this section "internet connection record" means data which—

(a) may be used to identify a telecommunications service to which a communication is transmitted through a telecommunication system for the purpose of obtaining access to, or running, a computer file or computer program, and

(b) is generated or processed by a telecommunications operator in the process of supplying the telecommunications service to the sender of the communication (whether or not a person)."

This definition is vague, and press briefings have suggested that the details would be in some ways negotiated directly with the communications services. This does not seem satisfactory at all, particularly for something considered to be such a major part of the Bill: indeed, the only really new power according to the Guide. More precision should be provided within the Bill itself – and specific examples spelled out in Codes of Practice that accompany the Bill, covering the major categories of communications envisaged. Initial versions of these Codes of Practice should be available to Parliament at the same time as the Bill makes its passage through the Houses.

5.3       The Bill describes the functions to which ICRs may be put. In 47(4) it is set out that ICRs (and data obtained through the processing of ICRs) can only be used to identify:

“(a) which person or apparatus is using an internet service where—

(i) the service and time of use are already known, but

(ii) the identity of the person or apparatus using the service is not known,

(b) which internet communications service is being used, and when and how it is being used, by a person or apparatus whose identity is already known, or

(c) where or when a person or apparatus whose identity is already known is obtaining access to, or running, a computer file or computer program which wholly or mainly involves making available, or acquiring, material whose possession is a crime.”

The problem is that in all three cases ICRs, insofar as they are currently defined, are very poorly suited to performing any of these three functions – and better methods either already exist for them or could be devised to do so. ICRs provide at the same time much more information (and more intrusion) than is necessary and less information than is adequate to perform the function. In part this is because of the way that the internet is used, and in part because of the way that ICRs are set out. Examples in the following paragraphs illustrate some (but not all) of the problems.

5.4       The intrusion issue arises from the nature of internet use, as described in Section 3 of this submission. ICRs cannot be accurately likened to 'itemised telephone bills'. They do not record the details of who a person is communicating with (as an itemised telephone bill would) but they do include vastly more information, and more sensitive and personal information, than an itemised telephone bill could possibly contain. A record of websites visited, even at the basic level, can reveal some of the most intimate information about an individual – and not in terms of what might traditionally be called 'communications'. This intrusion could be direct – such as accessing a website such as www.samaritans.org at 3am or accessing information services about HIV – or could come from profiling possibilities. The commercial profilers, using what is often described as 'big data' analysis (as explained briefly in section 3 above), are able to draw inferences from very few pieces of information. Tastes, politics, sexuality, and so forth can be inferred from this data, with a relatively good chance of success.

5.5       This makes ICRs ideal for profiling and potentially subject to function-creep/mission-creep. It also makes them ideally suited for crimes such as identity theft and personalised scamming, and the databases of ICRs created by communications service providers a perfect target for hackers and malicious insiders. By gathering ICRs, a new range of vulnerabilities are created. Data, however held and whoever it is held by, is vulnerable in a wide range of ways.[6] Recent events have highlighted this very directly: the hacking of Talk Talk, precisely the sort of provider who would be expected to gather and store ICRs, should be taken very seriously. Currently it appears as though this hack was not done by the kind of 'cyber-terrorists' that were originally suggested, but by disparate teenagers around the UK. Databases of ICRs would seem highly likely to attract the interest of hackers of many different kinds. In practice, too, precisely those organisations who should have the greatest expertise and the greatest motivations to keep data secure – from the MOD and HMRC and the US DoD to Swiss banks, and technology companies including Sony and Apple – have all proved vulnerable to hacking or other forms of data loss in recent years. Hacking is the most dramatic, but human error, human malice, collusion and corruption, and commercial pressures (both to reduce costs and to 'monetise' data) may be more significant – and the ways that all these vulnerabilities can combine makes the risk even more significant.

5.6       ICRs are also unlikely to provide the information that law enforcement and the intelligence and security services need in order to perform the three functions noted above. The first example of this is Facebook. Facebook messages and more open communications would seem on the surface to be exactly the kind of information that law enforcement might need to locate missing children – the kind of example referred to in the introduction and guide to the Bill. ICRs, however, would give almost no relevant information in respect of Facebook. In practice, Facebook is used in many different ways by many different people – but the general approach is to remain connected to Facebook all the time. Often this will literally be 24 hours a day, as devices are rarely turned off at night – the ‘connection’ event has little relationship to the use of the service. If Facebook is accessed by smartphone or tablet, it will generally be via an app that runs in the background at all times – this is crucial for the user to be able to receive notifications of events, of messages, of all kinds of things. If Facebook is accessed by PC, it may be by an app (with the same issues) or through the web – but if via the web, this will often be using ‘tabbed browsing’, with one tab on the browser keeping the connection to Facebook available without the need to reconnect.
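
The consequence for ICRs can be shown with a minimal sketch (the record is invented for the illustration): a service kept permanently connected generates a single ‘connection’ event, which says nothing about when – or whether – the service was actually used to communicate:

```python
# Illustrative sketch only: the record is invented. An app that stays
# connected in the background produces one connection event, however much
# or little the user actually does with the service in that time.

from datetime import datetime

# ICR-style view: (service, connection opened, connection closed)
icr_style_records = [
    ("facebook.com", datetime(2015, 11, 1, 8, 0), datetime(2015, 11, 8, 8, 0)),
]

for service, opened, closed in icr_style_records:
    print(f"{service}: one connection event covering {closed - opened}")
    # facebook.com: one connection event covering 7 days, 0:00:00
    # Every message sent or received during that week sits behind this
    # single record: the 'connection' reveals nothing about the use.
```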

5.7       Facebook and others encourage and support this kind of long-term and even permanent connection to their services – it supports their business model and, in a legal sense, provides a form of consent to the tracking and information gathering about their users that is the key to their success. ICRs would not help in relation to Facebook except in very, very rare circumstances. Further, most information remains available on Facebook in other ways. Much of it is public and searchable anyway. Facebook does not delete information except in extraordinary circumstances – the requirement for communications providers to maintain ICRs would add nothing to what Facebook retains.

5.8       The story is similar in relation to Twitter and similar services. A 24/7 connection is possible and indeed encouraged. Tweets are ‘public’ and available at all times, as well as being searchable and subject to possible data mining. Again, ICRs would add nothing to the ways that law enforcement and the intelligence and security services can already use Twitter data. Almost all current and developing communications services – from WhatsApp and Snapchat to Pinterest and more – take similar approaches, and ICRs would be similarly unhelpful.

5.9       Further, the information gathered through ICRs would fail to capture a significant amount of the ‘communications’ that can and do happen on the internet – because the interactive nature of the internet now means that almost any form of website can be used for communication without that communication being the primary purpose of the website. Detailed conversations, for example, can and do happen in the comments sections of newspaper websites: if an analysis of ICRs showed access to www.telegraph.co.uk, would the immediate thought be that communications are going on? Similarly, coded (rather than encrypted) messages can be posted as product reviews on www.amazon.co.uk. I have had detailed political conversations on the message-boards of the ‘Internet Movie Database’ (www.imdb.com), but an ICR would neither reveal nor suggest the possibility of this.
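
A sketch of this (with invented users and activities) makes the point: at the domain level that an ICR would retain, radically different activities on the same site are indistinguishable:

```python
# Illustrative sketch only: invented users and activities. At ICR level,
# shopping and covert communication on the same site look identical.

user_activity = [
    ("alice", "www.amazon.co.uk", "browsed for a kettle"),
    ("bob",   "www.amazon.co.uk", "left a coded message in a product review"),
]

def to_icr(event):
    """Reduce an activity to what an ICR-style record would retain."""
    user, domain, _what_actually_happened = event
    return (user, domain)

print([to_icr(e) for e in user_activity])
# [('alice', 'www.amazon.co.uk'), ('bob', 'www.amazon.co.uk')]
# The retained records are identical: the ICR neither reveals nor even
# suggests the communication hidden in the review.
```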

5.10     This means that neither the innocent missing child nor the even slightly careful criminal or terrorist can be found or tracked through ICRs of Facebook or its equivalents. Not enough information is revealed to find either – whilst extra information is gathered that adds to intrusion and vulnerability. The third function stated for ICRs refers to people whose identity is already known; for these people too, ICRs provide insufficient information to help. This is one of the areas where more targeted powers would help – and such powers are already envisaged elsewhere in the Bill.

5.11     The conclusion from all of this is that ICRs are unlikely to be a useful tool in terms of the functions presented. The closest equivalent form of surveillance used around the world has been in Denmark, with very poor results. In their evaluation of five years’ experience, the Danish Justice Ministry concluded that ‘session logging’, their equivalent of Internet Connection Records, had been of almost no use to the police.[7] It should be noted that when the Danish ‘session logging’ suggestion was first made, the Danish ISPs repeatedly warned that the system would not work and that the data would be of little use. Their warnings were not heeded. Similar warnings from ISPs in the UK have already begun to emerge. The argument has been made that the Danish failure was a result of the specific technical implementation – I would urge the Committee to examine that experience in depth before coming to a conclusion. However, the fundamental issues noted above are only likely to grow as the technology becomes more complex, the data more dense and interlinked, and the use of it more nuanced. All these trends are likely only to accelerate.

5.12     The gathering and holding of ICRs is also likely to add vulnerabilities for all those about whom they are collected, as well as requiring massive amounts of data storage at considerable cost. At a time when resources are naturally very tight, devoting money, expertise and focus to something like this appears inappropriate.

 

6          Other brief observations about communications data, definitions and encryption

6.1       There is still confusion between ‘content’ and ‘communications’ data. The references to ‘meaning’ in 82(4), 82(8), 106(8) and 136(4), emphasised in 193(6), seem to add to rather than reduce the confusion – particularly when considered in relation to the kinds of profiling possible from the analysis of basic communications data. It is possible to derive ‘meaning’ from almost any data – this is one of the fundamental problems with the idea that content and communications can be simply and meaningfully separated. In practice, this is far from the case.[8] Further, Internet Connection Records are just one of many examples of ‘communications’ data that can be used to derive deeply personal information – sometimes more directly (through analysis) than often confusing and coded (rather than encrypted) content.
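
By way of illustration (the domain below is hypothetical and the records invented), ‘meaning’ emerges from pure communications data without any content being inspected at all:

```python
# Illustrative sketch only: the domain is hypothetical and the records
# invented. 'Meaning' is derived here from pure communications data -
# no content is examined at any point.

from collections import Counter

records = [
    ("2015-10-04", "www.aa-meeting-finder.example"),
    ("2015-10-11", "www.aa-meeting-finder.example"),
    ("2015-10-18", "www.aa-meeting-finder.example"),
    ("2015-10-25", "www.aa-meeting-finder.example"),
]

visits = Counter(domain for _, domain in records)
for domain, count in visits.items():
    if count >= 4:
        # A steady weekly pattern of connections carries an obvious
        # 'meaning', though nothing resembling 'content' has been read.
        print(f"Regular weekly use of {domain}: {count} visits")
```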

6.2       There are other issues with the definitions of data – experts have been attempting to analyse them in detail in the short time since the Bill was published, and the fact that these experts have been unable to agree or at times even ascertain the meaning of some of the definitions is something that should be taken seriously. Again it emphasises the importance of having sufficient time to scrutinise the Bill. Graham Smith of Bird & Bird, in his submission to the Commons Science and Technology Committee,[9] notes that the terms ‘internet service’ and ‘internet communications service’ used in 47(4) are neither defined nor differentiated, as well as a number of other areas in which there appears to be significant doubt as to what does and does not count as ‘relevant communications data’ for retention purposes. One definition in the Bill particularly stands out: in 195(1) it is stated that ‘”data” includes any information which is not data’. Quite what is intended by this definition remains unclear.

6.3       In his report, ‘A Question of Trust’, David Anderson QC called for a law that would be ‘comprehensive and comprehensible’: the problems surrounding definitions and the lack of clarity about the separation of content and communications data mean that the Bill, as drafted, does not yet meet either of these targets. There are other issues that make this failure even more apparent. The lack of clarity over encryption – effectively leaving the coverage of encryption to RIPA rather than drafting new terms – has already caused a significant reaction in the internet industry. Whether or not the law would allow end-to-end encryption services such as Apple’s iMessage to continue in their current form, where Apple would not be able to decrypt messages themselves, needs to be spelled out clearly, directly and comprehensibly. In the current draft of the Bill it is not.

6.4       This could be solved relatively simply by modifying 189 (‘Maintenance of technical capability’), and in particular 189(4)(c), to make it clear that the Secretary of State cannot impose an obligation to remove electronic protection that is a basic part of the service operated, and that the Bill does not require telecommunications services to be designed in such a way as to allow for the removal of electronic protection.
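
To illustrate why this matters technically – and this is a minimal sketch using the PyNaCl library with invented keys and messages, the general shape of end-to-end encryption rather than Apple’s actual protocol – a genuinely end-to-end provider has no ‘electronic protection’ of its own to remove:

```python
# Illustrative sketch only, using the PyNaCl library (pip install pynacl).
# This is not Apple's actual protocol, merely the general shape of
# end-to-end encryption: keys live on the endpoints, and the provider
# relays ciphertext it has no key to decrypt.

from nacl.public import PrivateKey, Box

# Keys generated on the users' own devices - never sent to the provider.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"see you at 8")

# The provider stores and forwards only the ciphertext. Without Alice's or
# Bob's private key there is no protection it is able to remove.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"see you at 8"
```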

7          Future Proofing the Bill

7.1       One of the most important things for the Committee to consider is how well shaped the Bill is for future developments, and how the Bill might be protected from potential legal challenges. At present, there are a number of barriers to this, but there are ways forward that could provide this kind of protection.

7.2       The first of these relates to ICRs, as noted in section 5 above. The idea behind gathering ICRs appears on the face of it to be based upon an already out-dated understanding both of the technology of the internet and of the way that people use it. In its current form, the idea of requiring communications providers to retain ICRs is also a hostage to fortune. The kind of data required is likely to become more complex, of vastly greater volume and increasingly difficult to use. What is already an unconvincing case will become even less convincing as time passes. The best approach would seem to be to abandon the idea of requiring the collection of ICRs entirely, and to look for a different way forward.

7.3       Further, ICRs represent one of the two main ways in which the Bill appears to be vulnerable to legal challenge. It is important to understand that in recent cases at both the CJEU (in particular the Digital Rights Ireland case[10] and the Schrems case[11]) and the European Court of Human Rights (in particular the Zakharov case[12]), it is not just the examination of data that is considered to bring Article 8 privacy rights into play, but the gathering and holding of data. This is not a perverse trend, but rather a demonstration that the European courts are recognising some of the issues discussed above about the potential intrusion of gathering and holding data. It is a trend that is likely to continue. Holding data about innocent people on an indiscriminate basis is likely to be considered disproportionate. That means that the idea of ICRs – where this kind of data would be required to be held – is very likely to be challenged in either of these courts, and indeed is likely to be overturned at some point.

7.4       The same is likely to be true of the ‘Bulk’ powers, unless those bulk powers are more tightly and clearly defined, including the giving of examples. At the moment quite what these bulk powers consist of – and how ‘bulky’ they are – is largely a matter of speculation, and while that speculation continues, so does legal uncertainty. If the powers involve the gathering and holding of the data of innocent people on a significant scale, a legal challenge either now or in the future seems to be highly likely.

7.5       It is hard to predict future developments either in communications technology or in the way that people use it. This, too, is something that seems certain to continue – and it means that being prepared for those changes needs to be built into the Bill. At present, this is done at least in part by having relatively broad definitions in a number of places, to try to ensure that future technological changes can be ‘covered’ by the law. This approach has a number of weaknesses – most notably that it gives less certainty than is helpful, and that it makes ‘function creep’ or ‘mission creep’ more of a possibility. Nonetheless, it is probably inevitable to a degree. It can, however, be ameliorated in a number of ways.

7.6       The first of these ways is to have a regular review process built in. This could take the form of a ‘sunset clause’, or perhaps a ‘renewal clause’ that requires a new, full, debate by Parliament on a regular basis. The precise form of this could be determined by the drafters of the Bill, but the intention should be clear: to avoid the situation that we find ourselves in today with the complex and almost incomprehensible regime so actively criticised by David Anderson QC, RUSI and to an extent the ISC in their reviews.

7.7       Accompanying this, it is important to consider not only the changes in technology, but the changes in people’s behaviour. One way to do this would be to charge those responsible for the oversight of communications with a specific remit to review how the powers are being used in relation to the current and developing uses of the internet. They should report on this aspect specifically.

8          Overall conclusions

8.1       I have outlined above a number of ways in which the Bill, in its current form, does not seem to be workable, proportionate, future-proofed or protected from potential legal challenges. I have made five specific recommendations:

8.1.1    I do not believe the case has been made for retaining ICRs. They appear unlikely to be of any real use to law enforcement in performing the functions that are set out, they add a significant range of risks and vulnerabilities, and they are likely to prove extremely expensive. This expense is likely to fall upon either the government – in which case it would be a waste of resources that could be put to more productive use in achieving the aims of the Bill – or ordinary internet users, through increased connection costs.

8.1.2    The Bill needs to be more precise and open about the Bulk Powers, including a proper setting out of examples so that the Committee can make an appropriate judgment as to their proportionality and to reduce the likelihood of their being subject to legal challenge.

8.1.3    The Bill needs to be more precise about encryption and to be clear about the approach to end-to-end encryption. This is critical to building trust in the industry, and in particular with overseas companies such as those in Silicon Valley. It is also a way to future-proof the Bill: though some within the security and intelligence services may not like it, strong encryption is fundamental to the internet now and will become even more significant in the future. This should be embraced rather than fought against.

8.1.4    Oversight needs strengthening and broadening – including oversight of how the powers have been used in relation to changes in behaviour as well as changes in technology.

8.1.5    The use of some form of renewal or sunset clause should be considered, to ensure that the powers are subject to full review and reflection by Parliament on a regular basis.

8.2       The question of resource allocation is a critical one. For example, have alternatives to the idea of retaining ICRs been properly considered for both effectiveness and cost? The level of intrusion of internet surveillance (as discussed in section 3 above) adds to the imperative to consider other options. Where a practice is so intrusive, and impacts upon such a wide range of human rights (Articles 8, 10, 11 and 14 of the ECHR – and possibly Article 6), a very high bar has to be set to make it acceptable. It is not at all clear either that the height of that bar has been appropriately set or that the benefits of the Bill mean that it has been met. In particular, the likely ineffectiveness of ICRs means that it is very hard to argue that this part of the Bill would meet even a far lower requirement. The risks and vulnerabilities that the retention of ICRs adds will in all probability exceed the possible benefits, even without considering the intrusiveness of their collection, retention and use.

8.3       The most important overall conclusion at this stage, however, is that more debate and analysis is needed. The time made available for analysis is too short for any kind of certainty, and that means that the debate is being held without sufficient information or understanding. Time is also needed to enable MPs and Lords to gain a better understanding of how the internet works, how people use it in practice, and how this law and the surveillance envisaged under its auspices could impact upon that use. This is not a criticism of MPs or Lords so much as a recognition that people in general do not have that much understanding of how the internet works – one of the best things about the internet is that we can use it quickly and easily without having to understand much of what is actually happening ‘underneath the bonnet’ as it were. In passing laws with significant effects – and the Investigatory Powers Bill is a very significant Bill – much more understanding is needed.

8.4       It is important for the Committee not to be persuaded that an event like the recent one in Paris should be considered a reason to ‘fast-track’ the Bill, or to extend the powers provided by the Bill. In Paris, as in all the notable terrorism cases in recent years, from the murder of Lee Rigby and the Boston Bombings to the Sydney Café Siege and the Charlie Hebdo shootings, the perpetrators (or at the very least a significant number of the perpetrators) were already known to the authorities. The problem was not a lack of data or a lack of intelligence, but the use of that data and that intelligence. The issue of resources noted above applies very directly here: if more resources had been applied to ‘conventional’ intelligence it seems, on the surface at least, as though there would have been more chance of the events being avoided. Indeed, examples like Paris, if anything, argue against extending large-scale surveillance powers. If the data being gathered is already too great for it to be properly followed up, why would gathering more data help?

8.5       As a consequence of this, in my opinion the Committee should look not just at the detailed powers outlined in the Bill and their justification, but also more directly at the alternatives to the overall approach of the Bill. There are significant costs and consequences, and the benefits of the approach as opposed to a different, more human-led approach, have not, at least in public, been proven. The question should be asked – and sufficient evidence provided to convince not just the Committee but the public and the critics in academia and elsewhere. David Anderson QC made ‘A Question of Trust’ the title of his review for a reason: gaining the trust of the public is a critical element here.


Dr Paul Bernal

Lecturer in Information Technology, Intellectual Property and Media Law

UEA Law School

University of East Anglia

Norwich NR4 7TJ

Email: paul.bernal@uea.ac.uk


 

[1] The new ‘Hello Barbie’ doll, through which a Barbie doll can converse and communicate with a child, has caused some controversy recently (see for example http://www.theguardian.com/technology/2015/nov/26/hackers-can-hijack-wi-fi-hello-barbie-to-spy-on-your-children) but is only one example of a growing trend.

[2] See http://www.cam.ac.uk/research/news/computers-using-digital-footprints-are-better-judges-of-personality-than-friends-and-family#sthash.OSQ8dqdr.dpuf

[3] Available online at http://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/CallForSubmission.aspx

[4] http://www.bbc.co.uk/news/uk-politics-22984367

[5] See for example the 2015 report of the UN Special Rapporteur on Freedom of Expression, where amongst other things he makes particular reference to encryption and anonymity. http://daccess-dds-ny.un.org/doc/UNDOC/GEN/G15/095/85/PDF/G1509585.pdf?OpenElement

[6] Some of the potential range of vulnerabilities are discussed in Chapter 6 of my book Internet Privacy Rights – Rights to Protect Autonomy, Cambridge University Press, 2014.

[7] See http://www.ft.dk/samling/20121/almdel/reu/bilag/125/1200765.pdf – in Danish

[8] This has been a major discussion point amongst legal academics for a long time. See for example the work of Daniel Solove, e.g. Reconstructing Electronic Surveillance Law, Geo. Wash. L. Review, vol 72, 2003-2004

[9] Published on the Committee website at http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee/investigatory-powers-bill-technology-issues/written/25119.pdf

[10] Joined Cases C‑293/12 and C‑594/12, Digital Rights Ireland and Seitlinger and Others, April 2014, which resulted in the invalidation of the Data Retention Directive

[11] Case C-362/14, Maximillian Schrems v Data Protection Commissioner, October 2015, which resulted in the declaration of invalidity of the Safe Harbour agreement.

[12] Roman Zakharov v. Russia (application no. 47143/06), ECtHR, December 2015