Privacy and Security together…

I just spent a very interesting day at ‘Project Breach’ – an initiative of Norfolk and Suffolk police, trying to encourage businesses and others to understand and protect themselves from cybercrime. It was informative in many ways, and primarily (as far as I could tell) intended to be both a pragmatic workshop, giving real advice, and to ‘change the narrative’ over cybercrime. In both ways, I think it worked – the advice, in particular, seemed eminently sensible.

What was particularly interesting, however, was how that advice was in most ways in direct tension with the government’s approach to surveillance, as manifested most directly in the Investigatory Powers Act 2016 – often labelled the ‘Snooper’s Charter’.

The speaker – Paul Maskall – spent much of the first session outlining the risks associated with your ‘digital footprint’. How your search history could reveal things about you. How your metadata could say more about you than the content of your postings. How your browsing history could put you at risk of all kinds of scams and so forth. And yet all of this is made more vulnerable by the Investigatory Powers Act. Service providers can be compelled to retain search histories and metadata. ‘Internet Connection Records’ could be used to create a record of your browsing – and all of this could then be vulnerable to the many forms of hacking and data breach that Maskall went on to detail. The Investigatory Powers Act makes you more vulnerable to scams and other crimes.

The next two sessions focused on how to protect yourself – and two central pillars were encryption and VPNs. Maskall emphasised again and again the importance of encryption – and yet this is what Amber Rudd railed against only a few weeks ago, trying to link it to the Westminster attack, though subsequent evidence proved yet again that this was a red herring at best. The Investigatory Powers Act adds to the old Regulation of Investigatory Powers Act (RIPA) in the ways it could allow encryption to be undermined – which again puts us all at risk. When I raised this issue, first on Twitter and then in the room, Maskall agreed with me – encryption is critical to all of us, and attempts to undermine it put us all at risk – but I was challenged, privately, by another delegate after the session was over. Amber Rudd, this delegate told me, wasn’t talking about undermining encryption for us, but only for ISIS and Al Qaeda. I was very wrong, he told me, to put the speaker on the spot about the subject. All that showed me was how sadly effective the narrative presented by Amber Rudd – and Theresa May before her, as well as others in what might loosely be called the ‘security lobby’ – has been. You can’t undermine encryption for ISIS without undermining it for all of us. You can’t allow backdoors for the security services without providing backdoors for criminals, enemy states and terrorists.

VPNs were the other key tool mentioned by the speaker – and quite rightly. Though they have not been directly acted against by the Investigatory Powers Act, they do (or might) frustrate the main new concept introduced by the Act, the Internet Connection Record. Further, VPN operators might also be subjected to the attention of the authorities, and asked to provide browsing histories themselves – though the good ones don’t even retain those histories, which will cause a conflict in itself. Quite how the authorities will deal with the extensive use of VPNs has yet to be seen – but if they frustrate the intentions of the Act, we can expect something to be done. The overall point, however, remains. For good security – and privacy – we need to go against the intentions of the Act.

The other way to put that is that the Act goes directly against good practice in security and privacy. It undermines, rather than supports, security. This is something that many within the field understand – including, judging from his comments to me after the event, the speaker at Project Breach. It is sad that this should be the case. A robust, secure and privacy-friendly internet helps us all. Even though it might go against their instincts, governments really should recognise that.

The internet, privacy and terrorism…

As is sadly all too common after an act of terrorism, freedom on the internet is also under attack – and almost entirely for spurious reasons. This is not, of course, anything new. As the late and much lamented Douglas Adams, who died back in 2001, put it:

“I don’t think anybody would argue now that the Internet isn’t becoming a major factor in our lives. However, it’s very new to us. Newsreaders still feel it is worth a special and rather worrying mention if, for instance, a crime was planned by people ‘over the Internet’.”

The headlines in the aftermath of the Westminster attack were therefore far from unpredictable – though a little more extreme than most. The Daily Mail had:

“Google, the terrorists’ friend”


…and the Times noted that:

“Police search secret texts of terrorist”


…while the Telegraph suggested that:

“Google threatened with web terror law”


The implications are direct: the net is a tool for terrorists, and we need to bring in tough laws to get it under control.

And yet this all misses the key point – the implication of Douglas Adams’ quote. Terrorists use the internet to communicate and to plan because we all use the internet to communicate and plan. Terrorists use the internet to access information because we all use the internet to access information. The internet is a communicative tool, so of course they’ll use it – and as it develops and becomes better at all these things, we’ll all be able to use it in this way. And this applies to all the tools on the net. Yes, terrorists will use Google. Yes, they’ll use Facebook too. And Twitter. And WhatsApp. Why? Because they’re useful tools, systems, platforms, whatever you want to call them – and because they’re what we all use. Just as we use hire cars and kitchen knives.

Useful tools…

That’s the real point. The internet is something we all use – and it’s immensely useful. Yes, Google is a really good way to find out information – that’s why we all use it. The Mail seems shocked by this – though it’s no more shocking than knowing how a car might be used to drive somewhere and to crash into people. Google is not specifically the ‘terrorists’ friend’, but a useful tool for all of us.


The same is true about WhatsApp – and indeed other forms of communication. Yes, they can be used by ‘bad guys’, and in ways that are bad – but they are also excellent tools for the rest of us. If you do something to ban ‘secret texts’ (effectively by undermining encryption), then actually you’re banning private and confidential communications – both of which are crucial for pretty much all of us.

The same is true of privacy itself. We all need it. Undermining it – for example by building in backdoors to services like WhatsApp – undermines us all. Further, calls for mass surveillance damage us all – and attacks like that at Westminster absolutely do not help build the case for more of it. Precisely the opposite. To the surprise of no-one who works in privacy, it turns out that the attacker was already known to the authorities – so did not need to be found by mass surveillance. The same has been true of the perpetrators of all the major terrorist attacks in the West in recent years. The murderers of Lee Rigby. The Boston Bombers. The Charlie Hebdo shooters. The Sydney siege perpetrators. The Bataclan killers. None of these attackers needed to be identified through mass surveillance. At a time when resources are short, to spend time, money, effort and expertise on mass surveillance rather than improving targeted intelligence, putting more human intelligence into place – more police, more investigators, rather than more millions into the hands of IT contractors – is hard to defend.

More responsible journalism…

What is also hard to defend is the kind of journalism that produces headlines like those in the Mail, or indeed in the Times. Journalists should know better. They should know all too well the importance of privacy and confidentiality – they know when they need to protect their own sources, and get rightfully up in arms when the police monitor their communications and endanger their sources. They should know that ‘blocking terror websites’ is a short step away from political censorship, and potentially highly damaging to freedom of expression – and freedom of the press in particular.

They should know that they’re scaremongering or distracting with their stories, their headlines and their ‘angles’. At a time when good, responsible journalism is needed more than ever – to counter the ‘fake news’ phenomenon amongst other things, and to keep people informed at a time of political turmoil all over the world – this kind of an approach is deeply disappointing.

Blinded… a short poem for World Poetry Day

We let ourselves be blinded

By ignorance and hate

That keeps us narrow-minded

And leaves others to their fate

We let the hatred-mongers

Weave fairy tales of fear

“There are those amongst us

Who just don’t belong here”

They mix half-truth with anger

And take real people’s pain

And twist it with their stories

For their own hateful gain

And by the time we see it

It’s sadly far too late

They’ve taken back control

And we’re left to our fate.



The Investigatory Powers Act: still a question of trust…

I read the short review of the Investigatory Powers Act by David Anderson QC, Independent Reviewer of Terrorism Legislation, with a great deal of interest. Anderson has been exemplary in his role, and has played a very significant part in ensuring that the Investigatory Powers Act has the safeguards that it does, and the chance to be something other than the ‘Snooper’s Charter’ that it is so often labelled.

I find myself agreeing with a great deal of what he says – though coming to rather different conclusions. As one of those who followed the process of the Act from beginning to end – and who participated in a number of the reviews, including appearing before the Joint Bill Committee and being consulted by David in his Bulk Powers Review – I agree with him entirely that the bill was one of the most carefully scrutinised in recent times. That, however, also reveals the weaknesses of our scrutiny system. Some of these weaknesses are unavoidable – it would be impossible to expect parliamentarians to understand many of the issues, or even to read all the fairly massive reports that the various reviews produced. Others are not: parliamentarians should be able to see their own weaknesses, and be willing to listen a bit more carefully to those who do understand the issues. As a legal academic, for example, I try to recognise my own weaknesses in understanding the technology, and defer to those who do understand it.

Where I find myself disagreeing most with the Independent Reviewer is in the weight that he appears to give to the bad features and weaknesses of the Investigatory Powers Act. Many of the problems seem to hit at the heart of the Act, and undermine its claim to be something positive overall.

  1. Internet Connection Records, which he notes that he had no opportunity to evaluate, were the one area noted as being entirely new in the bill – and in the view of many (including myself) they are both unproven and represent a huge risk and a huge waste of resources. They should, in my view, have been included in David Anderson’s Bulk Powers Review – though not ‘Bulk Powers’ in the technical terms of the bill, they are in a real sense every bit as ‘bulky’ and ‘powerful’. They are likely (in my view) to be highly difficult to implement and highly unlikely to be effective – and they could have been excluded from the Act, or introduced and tested on a pilot basis, with scope for a proper review.
  2. I share David Anderson’s concern over the dual lock system – and agree with him that this could and should have been done better. As another key element of the bill – and considered to be one of the key safeguards – this really matters. If the dual lock ends up being little more than a rubber stamp, its existence may do more harm than good, providing false assurance and complacency. The test of this will be in the implementation – something that needs to be watched very carefully.
  3. I also share David Anderson’s note that it is “legitimate to ask whether there are adequate advance safeguards on the exercise of some of the very extensive powers now spelled out for the first time”. This, it seems to me, is very important indeed – and hits at the heart of the problems that many of us have with the bill. The powers are extensive, and it is not at all clear that the safeguards are adequate.
  4. Finally, as David Anderson notes, the failure to recognise in statute the idea of an ‘Investigatory Powers Commission’ could be significant. The question is why it was omitted: was it, as those suspicious of the authorities might suggest, because they don’t want to put proper, independent oversight on a statutory basis for fear of it restricting their actions?

That, I think, reflects my overall difference with David Anderson – the same question that he highlighted in his review of investigatory powers in 2015. A question of trust. The biggest weakness of the Investigatory Powers Act, for me, is that it still relies on a great deal of trust, without the authorities having yet proved themselves worthy of that trust. We have to trust that the dual lock system will work. We have to trust that an investigatory powers commission will be put in place and have appropriate powers – they’re not set down in statute. We have to trust that the Technology Advisory Panel will be filled with the right kind of people, and will be able to perform its functions. We have to trust that everything is ‘OK’ with Internet Connection Records.

We have to trust (as David Anderson also notes) that the government interprets the various grey areas and ambiguities in the Act appropriately – when we really shouldn’t have to rely on trust nearly as much as we do. Things like how to deal with encryption (whether the Act allows the government to mandate ‘back doors’ etc) and extraterritoriality (how the Act will be enforced on service providers outside the UK) remain subject to a great deal of doubt – and are potentially deeply dangerous.

Whether it is possible for me to agree with David Anderson that this is a ‘victory for democracy and the rule of law’ remains to be seen. Right now, I can’t give it a round of applause. I don’t condemn it completely – but there are sufficient problems at the heart of many of the most important parts of the Act to make it impossible to applaud. A chance missed, is the best I can say at this stage.

The real test is in the implementation. On that, I wholeheartedly agree with David Anderson that the new Investigatory Powers Commission (or whatever name is given to it) is the key. It will make or break the trust that people can have in the Act, and indeed in those engaged in surveillance. As he puts it:

“the new supervisory body needs to develop a culture of high-level technical understanding, intellectual enquiry, openness and challenge.”

If it does that, I will be delighted – and, with my cynical hat on, very surprised. I hope that I am.

Guest post: A rebuttal of what constitutes discrimination

Guest post by Super__Cyan


On 11 November 2016, Jamie Foster, a solicitor, had an opinion piece posted on countrysquire titled Trump, Brexit and a new Freedom. Foster begins with a critique of the left-wing intelligentsia and their political correctness, which was shattered by Brexit and the election of Donald Trump. Foster remarks that free speech is breaking bounds, much to the anxiety of its guardians. Of course, Foster continues with his critique of ‘experts’ and the like, but it’s not relevant to this discussion, so let’s just skip it, right?

Foster asks: do Brexit and Trump bring a new Dark Age upon us? Foster quite rightly suggests that it is more complicated than that. He then remarks on what he perceives as the overzealous use of phrases like ‘racist, sexist, homophobe’ against anyone inadvertently stepping on a taboo, which he argues has bred contempt. Of course, ‘taboo’ is not defined in this regard, which makes it difficult to assess what Foster may have meant. And sure, blindly saying anything and everything is racist, sexist and homophobic devalues the meaning of important phrases – phrases that should never be lost or forgotten – but that all depends upon context. Foster is also right to highlight that ‘[d]iscriminating against individuals on the basis of a prejudicial reaction to a characteristic common to a group is wrong.’

This is, however, where opinions sharply diverge. Foster argues that ‘labelling people you have never met as ‘racist, sexist or homophobic’ on the basis of words that you don’t like’ also amounts to prejudicial discrimination. First of all, that depends on the words in question, which Foster does not elaborate upon. They may not be liked because they are racist, sexist or homophobic. Secondly, it may not be the person per se that is labelled a racist, sexist or homophobe, but the choice of words used. Thirdly, if it required actually meeting someone to establish whether they are racist, sexist or homophobic, then what is even the point of the internet? Fourthly, context is key. Foster follows that ‘[i]t is a prejudicial discrimination where a human being is branded as unworthy because they have dared to say something wrong.’ Here, Foster conflates calling someone a racist, sexist or homophobe with branding them unworthy, when that may not be the case, depending upon the meaning attached by the person making the accusation. One could argue that such an ideology is dangerous – one does not need to document the many horrors of intolerance to hammer this point home. Foster implicitly admits that an accusation of racism, sexism or homophobia may stem from something wrongly said. And of course, Foster does not define what ‘wrong’ means in this context: saying something factually incorrect could constitute racism, sexism or homophobia, and saying something based on a characteristic generalised to a group could also be wrong – e.g. ‘all black people are criminals’, or ‘all women should stay in the kitchen’. The presumption is based on a clear characteristic, i.e. race or gender. This, of course, also accords with Foster’s own reliance on ‘prejudicial reactions.’ What we have here is Foster trying to equate a fundamental characteristic of a person with a possible opinion of another – and they are not analogous. To do so would devalue the importance of that characteristic whilst simultaneously elevating a possible opinion.

Foster further argues that what matters is the prejudice, not the target. This ignores the fact that the target is fundamental to determining whether or not discrimination has occurred. Foster continues that it is no worse to prejudice a black person than a white person. This is correct, but Foster himself identifies the target in both instances – the black and the white person – and is therefore betrayed by his own logic. If one targets a person because they are white or black, this highlights the importance of considering the target. Not considering the target would defeat the purpose of non-discrimination laws, for by what criteria would it be assessed that discrimination has in fact occurred?

Foster then, ironically, states that terms like ‘racist’ and ‘sexist’ exist only ‘to allow the user their own prejudices while condemning those of others’, thereby implying that those who use the terms are projecting their own prejudices. Ironic, because prejudice can be inferred from such a statement: Foster himself earlier notes that ‘[a]ny chance of persuading them to a different view is lost.’ If one has already formed the view that words such as ‘racist’ and ‘sexist’ are used for projection, then any chance of persuading them to a different view is equally lost. Furthermore, these prejudices are, of course, not defined. There is no attempt to distinguish genuinely calling out racism, sexism and homophobia from the potential for the terms to be used overzealously and carelessly. Foster calls for the challenging of prejudice, but warns against dehumanising those guilty of it. This sounds a lot like suggesting that one should not call someone racist, sexist or homophobic even when they are, whilst ignoring the fact that calling people racist, sexist or homophobic can be the beginning of the challenge. Sometimes this can be followed by an explanation of why what was said was racist, sexist or homophobic – ‘this is x because…’ – and sometimes that may not be necessary.

Foster argues that tolerance is the willingness to put up with things we do not like. Sure, British weather can be unpredictably awful at times, and I deal with it because there is nothing I can do about it, bar moving. But putting up with things one does not like is not the same as expecting one to tolerate discrimination, because discrimination is discrimination irrespective of whether it is liked or not. Foster argues that discrimination ‘is a valuable tool that allows us to distinguish between that which is good and useful and that which isn’t.’ But that entirely depends upon the discriminatory measure at hand, and upon what is defined as ‘good’ and ‘useful.’ Foster highlights that being indiscriminate used to be frowned upon. But guess what? Not only can this still be frowned upon, in some instances it can be illegal. Foster continues that we should not confuse discrimination with prejudice, whilst also maintaining that prejudicial discrimination is wrong. This fails to acknowledge that discrimination need not be prejudicial to be wrong: all that is required is a difference in treatment of those in an analogous situation without objective justification.

Foster notes that we should tolerate what is lawful and refuse to tolerate what is not. Then I suggest it is important to consider the law on this matter. There are various forms of non-discrimination law set forth by the European Union (EU) and the Council of Europe (CoE). But because we are supposed to be leaving the EU, it is useful to consider discrimination from the perspective of the CoE alone, namely Article 14 of the European Convention on Human Rights (ECHR), which states that:

The enjoyment of the rights and freedoms set forth in this European Convention on Human Rights shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status.

Article 14 is not a standalone right, and can only be utilised in conjunction with another Convention right. But it does create a non-exhaustive list of grounds on which discrimination is prohibited; in particular, it notes ‘political or other opinion’ among them. The Handbook on European non-discrimination law highlights that this may apply ‘where a particular conviction is held by an individual but it does not satisfy the requirements of being a ‘religion or belief’’ (p.117). This seems to place political opinion, for the purposes of Article 14, on a similar level to religion or belief – not just an ‘I like coffee’ opinion. It is further suggested that:

As with other areas of the ECHR, ‘political or other opinion’ is protected in its own right through the right to freedom of expression under Article 10, and from the case-law in this area it is possible to gain an appreciation of what may be covered by this ground. In practice it would seem that where an alleged victim feels that there has been differential treatment on this basis, it is more likely that the ECtHR would simply examine the claim under Article 10. (p.117).

And so it begins to unravel that this may not even be a discrimination issue at all, but one of freedom of expression. The European Court of Human Rights (ECtHR) in Handyside v. United Kingdom noted that:

Freedom of expression constitutes one of the essential foundations of such a society, one of the basic conditions for its progress and for the development of every man. Subject to paragraph 2 of Article 10 (art. 10-2), it is applicable not only to “information” or “ideas” that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no “democratic society”. This means, amongst other things, that every “formality”, “condition”, “restriction” or “penalty” imposed in this sphere must be proportionate to the legitimate aim pursued. (para 49).

And so from the ECtHR’s case law, it is clear that freedom of expression can allow us to be utter shits – but this can be subject to limitations, depending upon the manner in which we are utter shits. In Erbakan v. Turkey (in French) the ECtHR held that:

…[T]olerance and respect for the equal dignity of all human beings constitute the foundations of a democratic, pluralistic society. That being so, as a matter of principle it may be considered necessary in certain democratic societies to sanction or even prevent all forms of expression which spread, incite, promote or justify hatred based on intolerance…(para 56).

In the ECtHR’s admissibility decision in Seurot v. France (in French) it was maintained that:

[T]here is no doubt that any remark directed against the Convention’s underlying values would be removed from the protection of Article 10 [freedom of expression] by Article 17 [prohibition of abuse of rights].

Such intolerance, and exclusion from Article 10, includes anti-Semitism, racial hatred, homophobia and so on. In essence, what the ECtHR is saying is that you cannot be a racist or say racist things and then cry about the consequences afterwards, provided those consequences are proportionate.

What about calling out racism? Is this problematic? It wouldn’t seem so. Jersild v. Denmark was an ECtHR Grand Chamber (GC) case concerning a journalist who had made a documentary containing abusive opinions towards immigrants and ethnic groups, voiced by young people calling themselves the ‘Greenjackets.’ The journalist was convicted of aiding and abetting the dissemination of racist remarks, and alleged a breach of Article 10. The GC emphasised that it was ‘particularly conscious of the vital importance of combating racial discrimination in all its forms and manifestations’ (para 30). The GC noted that the feature sought to ‘expose, analyse and explain this particular group of youths, limited and frustrated by their social situation, with criminal records and violent attitudes’, ‘thus dealing with specific aspects of a matter that already then was of great public concern’ (para 33). The GC also noted that the journalist rebutted some of the racist statements, although without explicitly ‘recall[ing] the immorality, dangers and unlawfulness of the promotion of racial hatred and of ideas of superiority of one race’ (para 34). In the end, the GC found a violation of Article 10 (para 37). This clearly demonstrates that challenging racism is protected by the same freedom of expression that Foster was so adamantly advocating at the beginning of his article. Foster then, ironically, gradually moves on to attacking the very thing he sought to defend. It’s OK to say things that are wrong or to comment on the taboo, but you shouldn’t be called a racist, sexist or homophobe even if those sentiments ring true, because that dehumanises the person making the comment?


Under the Equality Act 2010 in UK law, there are non-discrimination provisions; one notable protected characteristic is ‘philosophical belief.’ In Olivier v Department for Work and Pensions ET/1701407/2013 it was noted by the tribunal that a philosophical belief must be a ‘belief, not an opinion or viewpoint’ which ‘must be worthy of respect in a democratic society, not incompatible with human dignity and not conflict with the fundamental rights of others.’ This poses two problems for Foster’s analysis. First, if it is an opinion – e.g. vote Trump, vote Brexit – then it is not a characteristic protected against discrimination. Secondly, if that opinion is a racist, sexist or homophobic one, it cannot be regarded as worthy of respect, or as compatible with the fundamental rights of others, and therefore, again, is not protected against discrimination.

Of course, calling out racism is subject to the law of defamation if, for example, such a calling-out does not ring true, as Frankie Boyle demonstrated in 2012. However, across the Channel in France, Marine Le Pen, leader of the Front National, on two occasions had no such success when she was called a ‘fascist.’

Racism, sexism and homophobia are the objects of discrimination, never the subjects of it. And when one of the prominent figures of the campaign to leave the EU feels that race discrimination laws should be scrapped, refused to support same-sex marriage and supported discrimination against it, it creates an association based on intolerance. This is not to quantify how many ‘racist, sexist and homophobic’ votes Trump or leaving the EU gained, but to highlight the futility of ignoring that they pandered to those ideologies. Calling someone a racist, sexist or homophobe can be correct at best, or ignorant, offensive and defamatory at worst – but never discriminatory.

Finally, Foster notes that people should put down their labels and sanctimony, and talk, because ‘It’s good to talk.’ In response, it must be stressed that these labels exist for a reason: a good talk cannot begin by controlling the narrative so as to deny their existence and importance. These labels are important – it is in the proper exercise of those labels that a good talk can begin.

Five reasons NOT to use Facebook posts for insurance premiums…

The announcement by Admiral that it was going to use an algorithmic analysis of Facebook usage to ‘analyse the personalities of car owners and set the price of their insurance’, as reported in the Guardian, should not be a surprise. In many ways it was inevitable that this kind of approach would be taken – the idea that ‘useful’ data can be derived from social media behaviour is fundamental to the huge success of Facebook – but it should not be greeted with much pleasure.

Rather, it should be met with a serious level of scepticism and a good deal of alarm. Facebook themselves seem to have understood this, and have revoked Admiral’s right to use the system – possibly because they understand the problems with the system, and possibly because they can foresee some of the tactics that could be developed in response to it (more of which below). This, however, does not mean that Admiral and other insurers and indeed those in other sectors will not continue with this approach – which is something about which we should be very careful. There are distinct dangers here, and reasons to be concerned that it marks the beginning of a very slippery slope.

1     Inappropriateness

The first reason – perhaps too obvious for many to notice – is that the whole idea can be seen as inappropriate. This is not what social media was designed for, and neither is it what users of social media are likely to expect it to be used for. Comparisons between the ‘offline’ and ‘online’ worlds need a lot of care – but here they are apt. Would we expect insurers to go through our address books and calendars, record and listen to our conversations in cafés and pubs, check our taste in music and what films we watch at the cinema? Most people would find that kind of thing distinctly creepy – and yet this is what using an algorithm to analyse our social media activity amounts to. Even if it is consensual, the chances of the consent being truly informed – that the person consenting to the insurer analysing their social media activity actually understands the essence of what they’re allowing – are very small indeed.

2     Discrimination

The chances that this sort of a system would be discriminatory in a wide range of ways are also very significant – indeed, it would be surprising if it were not. Algorithmic analysis, despite the best intentions of those creating the algorithms, is not neutral, but embeds the biases and prejudices of those creating and using it. A very graphic example of this was unearthed recently, when the first international beauty contest judged by algorithms produced remarkably prejudiced results – almost all of the winners were white, despite there being no conscious mention of skin colour in the algorithms.

This kind of discrimination is likely to be made even worse by the kind of linguistic analysis that seems to be hinted at by Admiral. As reported by the Guardian,

“evidence that the Facebook user might be overconfident – such as the use of exclamation marks and the frequent use of “always” or “never” rather than “maybe” – will count against them”

In practice, this kind of analysis is very likely to favour what might be seen as ‘educated’ language – putting any regional, ethnic or otherwise ‘non-standard’ use of language at a disadvantage. The biases concerned could be based on race, ethnicity, culture, region, sex, sexual orientation or class – but they will be present, and they will almost certainly be unfair.
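To see how crude such a heuristic could be, here is a deliberately naive sketch of a scorer of the kind the Guardian report hints at – the word lists, weights and function name are all invented for illustration, and this is emphatically not Admiral’s actual algorithm:

```python
import re

# Hypothetical word lists - invented for illustration only.
ABSOLUTE_WORDS = {"always", "never"}
HEDGE_WORDS = {"maybe", "perhaps", "possibly"}

def overconfidence_score(post: str) -> int:
    """Naive 'overconfidence' heuristic: exclamation marks and
    absolute words count against you; hedging counts in your favour."""
    words = re.findall(r"[a-z']+", post.lower())
    score = post.count("!")
    score += sum(w in ABSOLUTE_WORDS for w in words)
    score -= sum(w in HEDGE_WORDS for w in words)
    return score

# Two posts with essentially the same meaning score very differently,
# purely on surface style - which is exactly where dialect and
# register bias creeps in.
print(overconfidence_score("I always drive carefully, never had a crash!!"))   # prints 4
print(overconfidence_score("I think I drive fairly carefully, maybe too carefully."))  # prints -1
```

Even this toy version shows the problem: the score measures how you write, not how you drive, and punctuation habits or a dialect’s preference for emphatic language translate directly into a worse premium.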

3     Digital divides

The problem of digital divides – where people who have good access to or experience with digital technology have an advantage over those who have less – is likely to be exacerbated by this kind of a system. A ‘savvy’ user will be able to game the system – taking more care over their language use (and avoiding exclamation marks!!!), for a start – while those with less experience and more naïveté will lose out.

Those who don’t use Facebook are also likely to lose out – making the ‘normalisation’ of Facebook use even stronger. This is, perhaps, one of the slipperiest of the slopes involved here: the more we make Facebook ‘part of the infrastructure’, the more we hand over our rights, our freedoms, to a commercial entity that really does not have our best interests at heart.

4     The perpetuation of illusions about algorithms and surveillance

The illusion that algorithms are somehow ‘neutral’ or ‘objective’ is one that seems to be growing, despite the increasing understanding amongst those who study ‘big data’ and social media that they are anything but neutral. This illusion combines with the idea that surveillance – including algorithmic analysis of ‘public’ data – is harmless, unless you have ‘something to hide’. Admiral’s new idea is described only in terms of providing a ‘service’ – and the suggestion is made that it can only ‘reduce’ premiums, not increase them. As well as showing a deep misunderstanding of how insurance works (and even how mathematics works – a discount for the ‘low-risk’ is, in effect, a penalty for everyone else), that idea reinforces the impression that the surveillance itself (and this is surveillance) is harmless, unless you’re a risk taker trying to hide some dark secret.

These twin illusions combine into the facade of neutral, objective algorithms undertaking analysis that can only harm you if you’ve got something to hide. This is a doubly pernicious illusion.

5     The chilling effect

The idea that surveillance has a chilling effect – that being watched makes you alter your behaviour, reducing freedom of speech and of action – is one that has been understood for a long time. The concept of the Panopticon, theorised by Bentham and taken much further by Foucault, was initially about a prison where the prisoners knew that at any time they might be watched, and hence they would behave better. It is based, however, on the presumption that we want to control people’s behaviour – and that those being watched are already known to be aberrant and in need of ‘correction’. For a prison, this might be appropriate – for citizens of a democracy, it is another matter.

It might seem a little extreme to mention this in the context of car insurance – but the point is clear. If we think that it is generally ‘OK’ to monitor people’s behaviour for something like this, then we are pushing open the doors (which are in many ways already at least ajar) to things that take this a lot further. Monitoring social media usage these days means monitoring almost every aspect of someone’s life.

Survival tactics

What can be done about this? The first thing is to develop survival tactics. Keep the more creative sides of your social life off Facebook – and in particular, never mention things like extreme sports, motor racing, admiring Lewis Hamilton or anything even slightly like that on Facebook. Don’t mention or organise parties on social media – or find ways to describe them that the initial algorithms won’t be able to decode immediately.

This should already be becoming the norm for young people – and there is evidence to suggest that it is. Insurers are just another in the long line of potential snoops to be wary of, from parents and potential employers onwards. Thinking about your reputation can have a direct impact on your finances as well as other prospects, and young people need to understand this. Some even maintain several profiles – a ‘safe’ one that the parents can ‘friend’, that employers can scrutinise and insurers can analyse, and a ‘real’ one where real social activity goes on. I would advise all young people to consider using these kinds of tactics – this idea from Admiral is just one example of the kind of thing that is likely to become the norm.

It is, perhaps, an understanding of this that lies behind Facebook’s revocation of Admiral’s permission to use their data. Facebook needs young people (and the rest of us) to post more, more accurately and more ‘openly’ – and if surveillance like this means that they post less – a trend which has already been observed and about which Facebook is already wary – then Facebook will care. That does not mean they will stop this kind of thing, but rather that they are likely to make it less obvious, less overt, and less clearly risky.

The future: algorithmic accountability and regulation 

Ultimately, though, all of this is just scratching the surface – and may even produce a degree of overconfidence. If you think you can ‘fool’ the system, you’re quite likely to end up being the fool yourself. What is needed is for this to be taken seriously by the authorities. ‘Algorithmic accountability’ is one of the new phrases going around – and it matters. Angela Merkel recently spoke out against the way that Google are ‘distorting our perception’ through the way their algorithm controls what we see when we search. What she was asking for was algorithmic accountability – not for Google to reveal its trade secrets, but for Google to be responsible, and for the way that algorithms control so much to be taken seriously. Algorithms need to be monitored and tested, their impact assessed, and those who create and use them held accountable for that impact.

Insurance is just one example – but it is a pertinent one, where the impact is obvious. We need to be very careful here, and not walk blindly into something that has distinct problems.

Brexit and consequences…

Yesterday morning I tweeted about Brexit (as I’ve done a fair number of times), and it went just a little bit viral. Here’s the tweet:


It was an off-the-cuff Tweet, and I had no idea that people would RT it so much, nor that it would provoke quite as many reactions as it has. I’ve replied to a few, but, frankly, it’s not possible to reply to all. The responses, however, have been quite revealing in many ways. As usual, people read Tweets in different ways, and of course this particular Tweet is far from unambiguous. I was asked many times what the ‘this’ was that I was saying was the fault of the ‘Brexit people’ – and who I meant by ‘Brexit people’. I was told I was wrong to lump all Brexit people together. And that we should be looking for unity, not stoking the fires of division.

Some thought I was specifically talking about the dramatic fall of the pound. I wasn’t, but I might have been. Others thought I was blaming Brexit voters for ‘anything and everything’. I wasn’t. Actually, what I was doing was getting angry with those people who voted for Brexit but are now saying ‘we didn’t vote for this’ when they see Theresa May’s increasingly nasty and xenophobic government threatening to use EU citizens in the UK as ‘bargaining chips’, promising to send foreign doctors home as soon as we’ve trained enough ‘home grown’ doctors, and proposing to ‘name and shame’ companies that employ foreigners.

The thing is, if you voted Brexit you may not have wanted that to happen, but that’s the effect of your vote. And you were warned, many times, that by voting for Brexit you were helping the far right. By voting for Brexit you were ‘sending a message’ that immigrants weren’t welcome. By voting for Brexit you were likely to give more power to the worst kind of Tory. This is what I said on my blog in February, when the campaign was just beginning:

“What’s far more likely with Brexit is that an even more right-wing Tory government will come in, and with even fewer restrictions on their actions will destroy even more of what is left of our welfare state, our NHS, all those things about Britain that those on the left like. It shouldn’t be a surprise that Iain Duncan Smith and Chris Grayling are amongst the most enthusiastic Brexiters. Win the vote and you’re giving them what they want.”

That’s what happened – and I was far from alone in predicting it, and warning people that if they voted for Brexit they’d get more nastiness and a more right-wing government. Now we’ve got it, and if you voted for Brexit, that’s the result.

I’m not, as I’ve also been accused, ‘lumping all Brexit voters together’, suggesting that they’re all racists and xenophobes. Of course they’re not. They have all, however, helped the racists and xenophobes. That’s what the vote did. That’s cause and effect. Some people I know and respect have strong and detailed analytical economic reasons behind their vote – and some expounded them in response to my tweet – but, frankly, that’s by-the-by. Even if their economic arguments are sound (and I remain unconvinced), they still unleashed the xenophobia.

Others try to suggest that what’s happened is all for the good. We should be making lists of foreigners, we should be replacing foreign doctors with Brits and so forth. That’s also all well and good – but in that case, why be angry with my Tweet? You should be proud of the consequences, if you like them.

I am, of course, one of the out-of-touch metropolitan elite, and I know it. I don’t expect to be listened to. I don’t expect to have any effect – but I still have the right to be angry. And I am. I only wish I’d been angrier earlier.