Guest post: A rebuttal of what constitutes discrimination

Guest post by Super__Cyan


On 11 November 2016, Jamie Foster, a solicitor, had an opinion piece posted on countrysquire titled Trump, Brexit and a new Freedom. Foster begins with a critique of the left-wing intelligentsia and their political correctness, which was shattered by Brexit and the election of Donald Trump. Foster remarks that free speech is breaking its bounds, much to the anxiety of its guardians. Of course, Foster continues with his critique of ‘experts’ and the like, but as that is not relevant to this discussion, let’s just skip it, right?

Foster asks: do Brexit and Trump bring a new Dark Age upon us? Foster quite rightly alludes to the fact that it is more complicated than that. He then remarks upon what he perceives as the overzealous application of labels like ‘racist, sexist, homophobe’ to anyone inadvertently stepping on a taboo, which he argues has bred contempt. Of course, ‘taboo’ is not defined in this regard, which makes it difficult to assess what Foster may have meant. And sure, blindly saying anything and everything is racist, sexist and homophobic devalues the meaning of important phrases, phrases that should never be lost or forgotten, but that all depends upon context. Foster is also right to highlight that ‘[d]iscriminating against individuals on the basis of a prejudicial reaction to a characteristic common to a group is wrong.’

This is, however, where opinions sharply diverge. Foster argues that ‘labelling people you have never met as ‘racist, sexist or homophobic’ on the basis of words that you don’t like’ also amounts to prejudicial discrimination. First of all, that depends on the words in question, which Foster does not elaborate upon. They may not be liked because they are racist, sexist or homophobic. Secondly, it may not be the person per se that is labelled racist, sexist or homophobic, but the choice of words used. Thirdly, if it required actually meeting someone to establish whether they are racist, sexist or homophobic, then what is even the point of the internet? Fourthly, context is key. Foster follows that ‘[i]t is a prejudicial discrimination where a human being is branded as unworthy because they have dared to say something wrong.’ Here, Foster conflates calling someone a racist, sexist or homophobe with branding them unworthy, when that may not be the case, depending upon the meaning attached by the person making the accusation. One could argue that such an ideology is dangerous; one does not need to document the many horrors of intolerance to hammer this point home. Foster implicitly admits that an accusation of racism, sexism or homophobia may stem from something wrongly said. And of course, Foster does not define what ‘wrong’ means in this context: saying something factually incorrect could constitute racism, sexism or homophobia, and saying something that generalises a characteristic to a whole group, e.g. ‘all black people are criminals’ or ‘all women should stay in the kitchen’, could also be wrong. The presumption is based on a clear characteristic, i.e. race or gender. This, of course, also accords with Foster’s own reliance on ‘prejudicial reactions.’ What we have here is Foster trying to equate a fundamental characteristic of a person with a possible opinion of another; they are not analogous. To do so would devalue the importance of said characteristic whilst simultaneously elevating a possible opinion.

Foster further argues that it is the prejudice that is important, not the target. This ignores that the target is fundamental to determining whether or not discrimination has occurred. Foster continues that it is no worse to prejudice a black person than a white person. This is correct, but Foster himself identifies the target in both instances, the black and the white person, and is therefore betrayed by his own logic. If one targets a person because they are white or black, this highlights the importance of considering the target. Not considering the target would defeat the purpose of non-discrimination laws, for by what criteria would it be assessed that discrimination has in fact occurred?

Foster then ironically states that terms like ‘racist’ and ‘sexist’ exist only ‘to allow the user their own prejudices while condemning those of others’, thereby implying that those who use the terms are projecting their own prejudices. Ironic, because prejudice can be inferred from such a statement, where Foster himself earlier notes that ‘[a]ny chance of persuading them to a different view is lost.’ If one has already formed the view that words such as ‘racist’ and ‘sexist’ are used for projection, then any chance of persuading them to a different view is equally lost. Furthermore, these prejudices are, of course, not defined. There is no attempt to distinguish genuinely calling out racism, sexism and homophobia from the overzealous and careless use of such labels. Foster calls for prejudice to be challenged, but without falling prey to dehumanising those guilty of it. This sounds a lot like suggesting that one should not call someone racist, sexist or homophobic even when they are, whilst ignoring the fact that calling people racist, sexist or homophobic can be the beginning of the challenge. Sometimes this can be followed by an explanation of why what was said is believed to be racist, sexist or homophobic (‘this is x because…’); sometimes this may not be necessary.

Foster argues that tolerance is the willingness to put up with things we do not like. Sure, British weather can be unpredictably awful at times, and I deal with it because there is nothing I can do about it, bar moving. But putting up with things one does not like is not the same as being expected to tolerate discrimination, because discrimination is discrimination irrespective of whether it is liked or not. Foster argues that discrimination ‘is a valuable tool that allows us to distinguish between that which is good and useful and that which isn’t.’ But that entirely depends upon the discriminatory measure at hand, and upon what is defined as ‘good’ and ‘useful.’ Foster highlights that being indiscriminate used to be frowned upon. But guess what? Not only can discrimination still be frowned upon, in some instances it is illegal. Foster continues that we should not confuse discrimination with prejudice, whilst also maintaining that prejudicial discrimination is wrong. This fails to acknowledge that discrimination need not be prejudicial to be wrong; all that is required is a difference in treatment of those in an analogous situation without objective justification.

Foster notes that we should tolerate what is lawful and refuse to tolerate what is not. Then I suggest it is important to consider the law on this matter. There are various forms of non-discrimination law set forth by the European Union (EU) and the Council of Europe (CoE). But because we are supposed to be leaving the EU, it is useful to consider discrimination from the perspective of the CoE alone, namely Article 14 of the European Convention on Human Rights (ECHR), which states that:

The enjoyment of the rights and freedoms set forth in this European Convention on Human Rights shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status.

Article 14 is not a standalone right, and can only be utilised in conjunction with another Convention right. But it does create a non-exhaustive list of protected characteristics; in particular, it is noted that ‘political or other opinion’ can indeed be a ground of discrimination. The Handbook on European non-discrimination law highlights that this may be ‘where a particular conviction is held by an individual but it does not satisfy the requirements of being a ‘religion or belief’’ (p. 117). This seems to place political opinion, for the purposes of Article 14, on a similar level to religion or belief, not just an ‘I like coffee’ opinion. It was further suggested that:

As with other areas of the ECHR, ‘political or other opinion’ is protected in its own right through the right to freedom of expression under Article 10, and from the case-law in this area it is possible to gain an appreciation of what may be covered by this ground. In practice it would seem that where an alleged victim feels that there has been differential treatment on this basis, it is more likely that the ECtHR would simply examine the claim under Article 10. (p.117).

And so it begins to unravel that this may not even be a discrimination issue at all, but one of freedom of expression. The European Court of Human Rights (ECtHR) in Handyside v. United Kingdom noted that:

Freedom of expression constitutes one of the essential foundations of such a society, one of the basic conditions for its progress and for the development of every man. Subject to paragraph 2 of Article 10 (art. 10-2), it is applicable not only to “information” or “ideas” that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no “democratic society”. This means, amongst other things, that every “formality”, “condition”, “restriction” or “penalty” imposed in this sphere must be proportionate to the legitimate aim pursued. (para 49).

And so from the ECtHR’s case law, it is clear that freedom of expression can allow us to be utter shits, but this can be subject to limitations, depending upon the manner in which we are utter shits. In Erbakan v. Turkey (in French) the ECtHR held that:

…[T]olerance and respect for the equal dignity of all human beings constitute the foundations of a democratic, pluralistic society. That being so, as a matter of principle it may be considered necessary in certain democratic societies to sanction or even prevent all forms of expression which spread, incite, promote or justify hatred based on intolerance…(para 56).

In the ECtHR’s admissibility decision in Seurot v. France (in French) it was maintained that:

[T]here is no doubt that any remark directed against the Convention’s underlying values would be removed from the protection of Article 10 [freedom of expression] by Article 17 [prohibition of abuse of rights].

Such intolerance excluded from Article 10 includes anti-Semitism, racial hatred, homophobia, etc. In essence, what the ECtHR is saying is that you cannot be a racist or say racist things and then cry about it afterwards, provided that the consequences are proportionate.

What about calling out racism? Is this problematic? It would not seem so. The case of Jersild v. Denmark was an ECtHR Grand Chamber (GC) case concerning a journalist who had made a documentary containing abusive opinions towards immigrants and ethnic groups from young people calling themselves the ‘Greenjackets.’ The journalist was convicted of aiding and abetting the dissemination of racist remarks, and alleged a breach of Article 10. The GC emphasised that it was ‘particularly conscious of the vital importance of combating racial discrimination in all its forms and manifestations’ (para 30). The GC noted that the feature sought to ‘expose, analyse and explain this particular group of youths, limited and frustrated by their social situation, with criminal records and violent attitudes’, thereby ‘dealing with specific aspects of a matter that already then was of great public concern’ (para 33). The GC also noted that the journalist rebutted some of the racist statements, although without explicitly ‘recall[ing] the immorality, dangers and unlawfulness of the promotion of racial hatred and of ideas of superiority of one race’ (para 34). In the end, the GC found a violation of Article 10 (para 37). This clearly demonstrates that challenging racism is protected by the same freedom of expression Foster was so adamantly advocating at the beginning of his article. Foster, ironically, gradually moves on to attacking the very thing he sought to defend. It is ok to say things that are wrong or to comment on the taboo, but you should not be called a racist, sexist or homophobe, even when those labels ring true, because doing so dehumanises the person making the comment?


Under UK law, the Equality Act 2010 contains non-discrimination provisions; one protected characteristic of note is ‘philosophical belief.’ In Olivier v Department for Work and Pensions ET/1701407/2013, the Employment Tribunal noted that a philosophical belief must be a ‘belief, not an opinion or viewpoint’ which ‘must be worthy of respect in a democratic society, not incompatible with human dignity and not conflict with the fundamental rights of others.’ This poses two problems for Foster’s analysis. First, if it is an opinion, e.g. vote Trump or vote Brexit, then it is not a characteristic that can be discriminated against. Secondly, if that opinion is a racist, sexist or homophobic one, it cannot be regarded as worthy of respect, or as compatible with the fundamental rights of others, and therefore, again, cannot be discriminated against.

Of course, calling out racism is subject to the laws of defamation and libel, for example where such calling out does not ring true, as Frankie Boyle demonstrated in 2012. However, across the Channel in France, Marine Le Pen, leader of the Front National, on two occasions did not have similar success when called a ‘fascist.’

Racism, sexism and homophobia are the objects of discrimination, never the subjects of it. And when one of the prominent figures for leaving the EU feels that race discrimination laws should be scrapped, refused to support same-sex marriage and supported discrimination against it, it creates an association based on intolerance. This is not to quantify how many ‘racist, sexist, and homophobic’ votes Trump or leaving the EU gained, but to highlight the futility of ignoring the fact that both pandered to those ideologies. Calling someone a racist, sexist or homophobe can be correct at best, or ignorant, offensive and defamatory at worst, but never discriminatory.

Finally, Foster notes that people should put down their labels and sanctimony, and talk, because ‘It’s good to talk.’ In response, it is stressed that these labels exist for a reason; a good talk cannot begin by controlling the narrative so as to deny their existence and importance. These labels are important, and it is only in their proper exercise that a good talk can begin.

Five reasons NOT to use Facebook posts for insurance premiums…

The announcement by Admiral that it was going to use an algorithmic analysis of Facebook usage to ‘analyse the personalities of car owners and set the price of their insurance’, as reported in the Guardian, should not be a surprise. In many ways it was inevitable that this kind of approach would be taken – the idea that ‘useful’ data can be derived from social media behaviour is fundamental to the huge success of Facebook – but it should not be greeted with much pleasure.

Rather, it should be met with a serious level of scepticism and a good deal of alarm. Facebook themselves seem to have understood this, and have revoked Admiral’s right to use the system – possibly because they understand the problems with the system, and possibly because they can foresee some of the tactics that could be developed in response to it (more of which below). This, however, does not mean that Admiral, other insurers, and indeed those in other sectors will not continue with this approach – which is something about which we should be very careful. There are distinct dangers here, and reasons to be concerned that it marks the beginning of a very slippery slope.

1     Inappropriateness

The first reason, perhaps too obvious for many to notice, is that the whole idea can be seen as inappropriate. This is not what social media was designed for – and neither is it what users of social media are likely to expect it to be used for. Comparisons between the ‘offline’ and ‘online’ worlds need a lot of care – but here they are apt. Would we expect insurers to go through our address books and calendars, record and listen to our conversations in cafés and pubs, check our taste in music and what films we watch at the cinema? Most people would find that kind of thing distinctly creepy – and yet this is what using an algorithm to analyse our social media activity amounts to. Even if it is consensual, the chances of the consent being truly informed – that the person consenting to the insurer analysing their social media activity actually understands the essence of what they’re allowing – are very small indeed.

2     Discrimination

The chances that this sort of system would be discriminatory in a wide range of ways are also very significant – indeed, it would be surprising if it were not. Algorithmic analysis, despite the best intentions of those creating the algorithms, is not neutral, but embeds the biases and prejudices of those creating and using it. A very graphic example of this was unearthed recently, when the first international beauty contest judged by algorithms managed to produce remarkably prejudiced results – almost all of the winners were white, despite there being no conscious mention of skin colour in the algorithms.
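To make the point concrete, here is a minimal sketch – with entirely hypothetical data, feature names and threshold, not Admiral’s or anyone else’s actual system – of how a model that never sees a protected attribute can still reproduce discrimination: the historical labels it learns from reflect past biased decisions, and an innocuous-looking feature acts as a proxy.

```python
from collections import defaultdict

# Hypothetical historical decisions: (postcode_band, approved).
# Nothing about ethnicity appears anywhere in this code.
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),   # band A: mostly approved
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),   # band B: mostly rejected
]

# "Training": learn the historical approval rate per postcode band.
totals, approvals = defaultdict(int), defaultdict(int)
for band, approved in history:
    totals[band] += 1
    approvals[band] += approved

def predict(band: str) -> bool:
    """Approve if the historical approval rate for this band is >= 0.5."""
    return approvals[band] / totals[band] >= 0.5

# If postcode band correlates with ethnicity, the bias baked into the
# labels is faithfully reproduced by the "neutral" model.
print(predict("A"))  # True
print(predict("B"))  # False
```

The model is ‘neutral’ in the sense that a protected characteristic never appears in the code – just as skin colour never appeared in the beauty-contest algorithms – yet the output is anything but.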

This kind of discrimination is likely to be made even worse by the kind of linguistic analysis that seems to be hinted at by Admiral. As reported by the Guardian,

“evidence that the Facebook user might be overconfident – such as the use of exclamation marks and the frequent use of “always” or “never” rather than “maybe” – will count against them”

In practice, this kind of analysis is very likely to favour what might be seen as ‘educated’ language – and to put any regional, ethnic or otherwise ‘non-standard’ use of language at a disadvantage. The biases concerned could be racial, ethnic, cultural, regional, class-based, or related to sex or sexual orientation – but they will be present, and they will almost certainly be unfair, as the toy sketch below suggests.
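As an illustration only – a toy sketch of the kind of scoring the Guardian’s description hints at, with made-up rules and word lists rather than anything Admiral has published – consider how crude such ‘overconfidence’ detection would be, and how easily informal or dialect usage inflates the score:

```python
import re

# Hypothetical word lists inferred from the Guardian's description.
ABSOLUTE_WORDS = {"always", "never"}
HEDGING_WORDS = {"maybe", "perhaps", "possibly"}

def overconfidence_score(post: str) -> int:
    """Crude per-post score: +1 per exclamation mark or absolute word,
    -1 per hedging word. Higher scores would 'count against' the user."""
    words = re.findall(r"[a-z']+", post.lower())
    score = post.count("!")
    score += sum(1 for w in words if w in ABSOLUTE_WORDS)
    score -= sum(1 for w in words if w in HEDGING_WORDS)
    return score

# Informal register inflates the score without saying anything
# whatsoever about driving ability:
print(overconfidence_score("I always drive carefully, me!!!"))  # 4
print(overconfidence_score("Maybe I will drive up tomorrow."))  # -1
```

Nothing in such a score measures driving; it measures writing style – which is exactly where the regional and class biases creep in.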

3     Digital divides

The problem of digital divides – where people who have good access to or experience with digital technology have an advantage over those who have less – is likely to be exacerbated by this kind of system. A ‘savvy’ user will be able to game the system – for example by taking more care over their language use (and avoiding exclamation marks!!!) – while those with less experience and more naïveté will lose out.

Those who don’t use Facebook are also likely to lose out – making the ‘normalisation’ of Facebook use even stronger. This is, perhaps, one of the slipperiest of the slopes involved here: the more we make Facebook ‘part of the infrastructure’, the more we hand over our rights, our freedoms, to a commercial entity that really does not have our best interests at heart.

4     The perpetuation of illusions about algorithms and surveillance

The illusion that algorithms are somehow ‘neutral’ or ‘objective’ is one that seems to be growing, despite the increasing understanding amongst those who study ‘big data’ and social media that they are anything but neutral. This illusion combines with the idea that surveillance – including algorithmic analysis of ‘public’ data – is harmless, unless you have ‘something to hide’. Admiral’s new idea is described only in terms of providing a ‘service’ – and the suggestion is made that it can only ‘reduce’ premiums, not increase them. As well as showing a deep misunderstanding of how insurance works (and even how mathematics works) – if the pool of premiums must still cover the same expected claims, discounts for some necessarily mean relatively higher prices for the rest, as the worked example below shows – that idea reinforces the notion that the surveillance itself (and this is surveillance) is harmless, unless you’re trying to hide some dark secret as a risk-taker.
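Here is a minimal worked example of the mathematical problem with ‘discount-only’ pricing, using entirely hypothetical numbers: an insurer must still collect enough to cover expected claims, so a discount for some is an unadvertised increase for everyone else.

```python
# Entirely hypothetical numbers: a pool that must collect 100,000
# to cover expected claims, shared across 100 drivers.
expected_claims = 100_000.0
drivers = 100
flat_premium = expected_claims / drivers  # 1000.0 each

# Suppose the algorithm grants half the drivers a 20% "discount".
discounted = drivers // 2
collected_from_discounted = discounted * flat_premium * 0.8

# The shortfall has to come from everyone else.
remaining = drivers - discounted
premium_for_the_rest = (expected_claims - collected_from_discounted) / remaining
print(premium_for_the_rest)  # 1200.0 – a 20% increase, never advertised
```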

These twin illusions combine into the facade of neutral, objective algorithms undertaking analysis that can only harm you if you’ve got something to hide. This is a doubly pernicious illusion.

5     The chilling effect

The idea that surveillance has a chilling effect – that being watched makes you alter your behaviour, reducing freedom of speech and of action – is one that has been understood for a long time. The concept of the Panopticon, theorised by Bentham and taken much further by Foucault, was initially about a prison where the prisoners knew that at any time they might be watched, and hence would behave better. It is based, however, on the presumption that we want to control people’s behaviour – and on people whose behaviour is already known to be aberrant and in need of ‘correction’. For a prison, this might be appropriate – for citizens of a democracy, it is another matter.

It might seem a little extreme to mention this in the context of car insurance – but the point is clear. If we think that it is generally ‘OK’ to monitor people’s behaviour for something like this, then we are pushing open the doors (which are in many ways already at least ajar) to things that take this a lot further. Monitoring social media usage these days means monitoring almost every aspect of someone’s life.

Survival tactics

What can be done about this? The first thing is to develop survival tactics. Keep the more creative sides of your social life off Facebook – and in particular, never mention things like extreme sports, motor racing, admiring Lewis Hamilton, or anything even slightly like that. Don’t mention or organise parties on social media – or find ways to describe them that the initial algorithms won’t be able to decode immediately.

This should already be becoming the norm for young people – and there is evidence to suggest that it is. Insurers are just another in the long line of potential snoops to be wary of, from parents and potential employers onwards. Your online reputation can have a direct impact on your finances as well as your other prospects, and young people need to understand this. Some even maintain several profiles – a ‘safe’ one that parents can ‘friend’, that employers can scrutinise and insurers can analyse, and a ‘real’ one where real social activity goes on. I would advise all young people to consider using these kinds of tactics – this idea from Admiral is just one example of the kind of thing that is likely to become the norm.

It is, perhaps, an understanding of this that lies behind Facebook’s revocation of Admiral’s permission to use their data. Facebook needs young people (and the rest of us) to post more things – more accurate, more ‘sexy’ things – and if surveillance like this means that they post less – a trend which has already been observed and about which Facebook is already wary – then Facebook will care. That does not mean they will stop this kind of thing, but rather that they are likely to make it less obvious, less overt, and less clearly risky.

The future: algorithmic accountability and regulation 

Ultimately, though, all of this is just scratching the surface – and may even produce a degree of overconfidence. If you think you can ‘fool’ the system, you’re quite likely to end up being the fool yourself. What is needed is for this to be taken seriously by the authorities. ‘Algorithmic accountability’ is one of the new phrases going around – and it matters. Angela Merkel recently spoke out against the way that Google are ‘distorting our perception’ through the way their algorithm controls what we see when we search. What she was asking for was algorithmic accountability – not for Google to reveal its trade secrets, but for Google to be responsible, and for the way that algorithms control so much to be taken seriously. Algorithms need to be monitored and tested, their impact assessed, and those who create and use them held accountable for that impact. One very simple form such testing could take is sketched below.
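As a sketch of what the most basic form of such testing might look like – with hypothetical audit data and group labels, and bearing in mind that real accountability would require far more than this – one could compare the rate of favourable outcomes across groups:

```python
from collections import defaultdict

# Hypothetical audit log of an algorithm's decisions:
# (group, premium_discount_granted)
decisions = [
    ("group_1", True), ("group_1", True), ("group_1", False),
    ("group_2", False), ("group_2", False), ("group_2", True),
]

def outcome_rates(log):
    """Return the share of favourable outcomes per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, granted in log:
        totals[group] += 1
        favourable[group] += granted
    return {g: favourable[g] / totals[g] for g in totals}

rates = outcome_rates(decisions)
print(rates)  # e.g. {'group_1': 0.666..., 'group_2': 0.333...}

# A large gap does not itself prove unlawful discrimination, but it is
# exactly the kind of disparity a regulator should require the
# algorithm's operator to explain.
gap = max(rates.values()) - min(rates.values())
print(f"parity gap: {gap:.2f}")
```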

Insurance is just one example – but it is a pertinent one, where the impact is obvious. We need to be very careful here, and not walk blindly into something that has distinct problems.