Five reasons NOT to use Facebook posts for insurance premiums…

The announcement by Admiral that it was going to use an algorithmic analysis of Facebook usage to ‘analyse the personalities of car owners and set the price of their insurance’, as reported in the Guardian, should not come as a surprise. In many ways it was inevitable that this kind of approach would be taken – the idea that ‘useful’ data can be derived from social media behaviour is fundamental to the huge success of Facebook – but it should not be greeted with much pleasure.

Rather, it should be met with serious scepticism and a good deal of alarm. Facebook themselves seem to have understood this, and have revoked Admiral’s right to use the system – possibly because they understand the problems with it, and possibly because they can foresee some of the tactics that could be developed in response to it (more on which below). This does not mean, however, that Admiral, other insurers, and indeed those in other sectors will not continue with this approach – and that is something about which we should be very careful. There are distinct dangers here, and reasons to be concerned that it marks the beginning of a very slippery slope.

1     Inappropriateness

The first reason – perhaps too obvious for many to notice – is that the whole idea can be seen as inappropriate. This is not what social media was designed for, and neither is it what users of social media are likely to expect it to be used for. Comparisons between the ‘offline’ and ‘online’ worlds need a lot of care – but here they are apt. Would we expect insurers to go through our address books and calendars, record and listen to our conversations in cafés and pubs, check our taste in music and what films we watch at the cinema? Most people would find that kind of thing distinctly creepy – and yet this is what using an algorithm to analyse our social media activity amounts to. Even if it is consensual, the chances of the consent being truly informed – that the person consenting to the insurer analysing their social media activity actually understands the essence of what they are allowing – are very small indeed.

2     Discrimination

The chances that this sort of system would be discriminatory in a wide range of ways are also very significant – indeed, it would be surprising if it were not discriminatory. Algorithmic analyses, despite the best intentions of those creating the algorithms, are not neutral: they embed the biases and prejudices of those who create and use them, and of the data on which they are trained. A very graphic example of this was unearthed recently, when the first international beauty contest judged by algorithms managed to produce remarkably prejudiced results – almost all of the winners were white, despite there being no conscious mention of skin colour in the algorithms.
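
How that happens is easy to sketch. The toy example below is hypothetical, with invented data, and makes no claim about the actual contest’s method – but it shows how a scorer that never mentions skin colour at all can still reproduce the skew of its training set through a correlated feature:

```python
from collections import Counter

# Hypothetical past 'winners': the model only ever sees an
# image-brightness bucket, never skin colour itself -- but brightness
# correlates with skin tone, and the training set is heavily skewed.
training_winners = ["high"] * 95 + ["low"] * 5

freq = Counter(training_winners)

def beauty_score(brightness_bucket: str) -> float:
    """Score a candidate by how often similar images won before."""
    return freq[brightness_bucket] / len(training_winners)

print(beauty_score("high"))  # 0.95 -- the skew in the data becomes the model
print(beauty_score("low"))   # 0.05
```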

This kind of discrimination is likely to be made even worse by the kind of linguistic analysis that seems to be hinted at by Admiral. As reported by the Guardian,

“evidence that the Facebook user might be overconfident – such as the use of exclamation marks and the frequent use of “always” or “never” rather than “maybe” – will count against them”

In practice, this kind of analysis is very likely to favour what might be seen as ‘educated’ language – and to put users of any regional, ethnic or otherwise ‘non-standard’ variety of language at a disadvantage. The biases concerned could relate to race, ethnicity, culture, region, sex, sexual orientation or class – but they will be present, and they will almost certainly be unfair.
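
To see how crude this kind of analysis can be, here is a toy version of the sort of scorer the Guardian report hints at. Admiral has not published its actual method, so the features and weights below are invented purely for illustration:

```python
import re

# Hypothetical 'overconfidence' scorer along the lines the Guardian
# report describes; the word lists and weights are invented.
ABSOLUTES = {"always", "never"}
HEDGES = {"maybe", "perhaps", "possibly"}

def overconfidence_score(post: str) -> float:
    words = re.findall(r"[a-z']+", post.lower())
    if not words:
        return 0.0
    absolutes = sum(w in ABSOLUTES for w in words)
    hedges = sum(w in HEDGES for w in words)
    exclamations = post.count("!")
    # Arbitrary illustrative weighting: absolute words and exclamation
    # marks push the score up; hedging words pull it down.
    return (2 * absolutes + exclamations - hedges) / len(words)

print(overconfidence_score("I always drive safely!!!"))  # scores high
print(overconfidence_score("Maybe I'll go out later."))  # scores low
```

Even this toy makes the problem visible: the score tracks writing style, not driving ability, so anyone whose way of expressing themselves is more emphatic – or simply less ‘standard’ – is penalised regardless of actual risk.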

3     Digital divides

The problem of digital divides – where people who have good access to or experience with digital technology have an advantage over those who have less – is likely to be exacerbated by this kind of system. A ‘savvy’ user will be able to game it – taking more care over their language use (and avoiding exclamation marks!!!) – while those with less experience and more naïveté will lose out.

Those who don’t use Facebook are also likely to lose out – making the ‘normalisation’ of Facebook use even stronger. This is, perhaps, one of the slipperiest of the slopes involved here: the more we make Facebook ‘part of the infrastructure’, the more we hand over our rights, our freedoms, to a commercial entity that really does not have our best interests at heart.

4     The perpetuation of illusions about algorithms and surveillance

The illusion that algorithms are somehow ‘neutral’ or ‘objective’ is one that seems to be growing, despite the increasing understanding amongst those who study ‘big data’ and social media that they are anything but neutral. This illusion combines with the idea that surveillance – including algorithmic analysis of ‘public’ data – is harmless unless you have ‘something to hide’. Admiral’s new idea is described only in terms of providing a ‘service’ – and the suggestion is made that it can only ‘reduce’ premiums, never increase them. That suggestion shows a deep misunderstanding of how insurance works (and even of how mathematics works): in a risk pool that must collect the same overall income, a discount for some necessarily means a higher baseline for everyone else. It also reinforces the notion that the surveillance itself (and this is surveillance) is harmless, unless you are trying to hide some dark secret as a risk-taker.
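
The arithmetic is simple enough to sketch. The numbers below are entirely invented, but the structure is general: if the pool must collect the same total, ‘discounts’ for some simply push the baseline up for the rest:

```python
# Toy illustration (invented numbers): revenue-neutral 'discounts'
# in a risk pool raise the baseline premium for everyone else.
drivers = 1000
required_income = 500_000                  # total the pool must collect
flat_premium = required_income / drivers   # 500.00 if everyone pays alike

discounted = 300   # drivers the algorithm 'rewards'
discount = 0.20    # 20% off the baseline

# Solve for the baseline b such that income is unchanged:
#   discounted * b * (1 - discount) + (drivers - discounted) * b == required_income
baseline = required_income / (discounted * (1 - discount) + drivers - discounted)
print(round(flat_premium, 2))               # 500.0
print(round(baseline, 2))                   # 531.91 -- everyone else pays more
print(round(baseline * (1 - discount), 2))  # 425.53 for the 'rewarded'
```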

These twin illusions combine into a facade of neutral, objective algorithms undertaking analysis that can only harm you if you have something to hide – and that facade is doubly pernicious.

5     The chilling effect

The idea that surveillance has a chilling effect – that being watched makes you alter your behaviour, reducing freedom of speech and of action – is one that has been understood for a long time. The concept of the Panopticon, theorised by Bentham and taken much further by Foucault, was initially about a prison where the prisoners knew that at any time they might be watched, and hence would behave better. It is based, however, on the presumption that we want to control people’s behaviour – and it was designed for people whose behaviour was already deemed aberrant and in need of ‘correction’. For a prison, this might be appropriate; for citizens of a democracy, it is another matter.

It might seem a little extreme to mention this in the context of car insurance – but the point is clear. If we think that it is generally ‘OK’ to monitor people’s behaviour for something like this, then we are pushing open the doors (which are in many ways already at least ajar) to things that take this a lot further. Monitoring social media usage these days means monitoring almost every aspect of someone’s life.

Survival tactics

What can be done about this? The first thing is to develop survival tactics. Keep the more creative sides of your social life off Facebook – and in particular, never mention things like extreme sports, motor racing, admiring Lewis Hamilton or anything even slightly like that on Facebook. Don’t mention or organise parties on social media – or find ways to describe them that the initial algorithms won’t be able to decode immediately.

This should already be becoming the norm for young people – and there is evidence to suggest that it is. Insurers are just another in the long line of potential snoops to be wary of, from parents and potential employers onwards. Thinking about your reputation can have a direct impact on your finances as well as other prospects, and young people need to understand this. Some even maintain several profiles – a ‘safe’ one that the parents can ‘friend’, that employers can scrutinise and insurers can analyse, and a ‘real’ one where real social activity goes on. I would advise all young people to consider using these kinds of tactics – this idea from Admiral is just one example of the kind of thing that is likely to become the norm.

It is, perhaps, an understanding of this that lies behind Facebook’s revocation of Admiral’s permission to use their data. Facebook needs young people (and the rest of us) to keep posting – more content, more accurate, more ‘sexy’ – and if surveillance like this means that people post less (a trend that has already been observed, and about which Facebook is already wary) then Facebook will care. That does not mean they will stop this kind of thing; rather, they are likely to make it less obvious, less overt, and less clearly risky.

The future: algorithmic accountability and regulation 

Ultimately, though, all of this is just scratching the surface – and may even produce a degree of overconfidence. If you think you can ‘fool’ the system, you are quite likely to end up being the fool yourself. What is needed is for this to be taken seriously by the authorities. ‘Algorithmic accountability’ is one of the new phrases going around – and it matters. Angela Merkel recently spoke out against the way that Google is ‘distorting our perception’ through the way its algorithm controls what we see when we search. What she was asking for was algorithmic accountability – not for Google to reveal its trade secrets, but for Google to be responsible, and for the way that algorithms control so much to be taken seriously. Algorithms need to be monitored and tested, their impact assessed, and those who create and use them held accountable for that impact.
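
What ‘monitoring and testing’ might look like, at its simplest, is something like the check sketched below – a hypothetical application of the ‘four-fifths’ disparate-impact test borrowed from US employment practice, run on invented quote data. Real accountability would go far beyond this, but even so crude a ratio can flag a skewed system:

```python
# Minimal sketch of a disparate-impact audit (invented data).
def impact_ratio(outcomes: dict[str, tuple[int, int]]) -> float:
    """outcomes maps group -> (favourable_decisions, total_decisions)."""
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical: how often each group was offered the discounted premium.
audit = {"group_a": (480, 600), "group_b": (210, 400)}
ratio = impact_ratio(audit)
print(f"impact ratio: {ratio:.2f}")  # 0.66
print("flag for review" if ratio < 0.8 else "within the four-fifths rule")
```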

Insurance is just one example – but it is a pertinent one, where the impact is obvious. We need to be very careful here, and not walk blindly into something that has distinct problems.

82 thoughts on “Five reasons NOT to use Facebook posts for insurance premiums…”

  1. The use of algorithms by companies is, as you say, worrying, and I fear for other uses of algorithms in the way you describe.

    As an example, I remember reading an article about a university professor in the US whose name, when run through an algorithm predicting which individuals were most likely to commit crime, came back with a very high chance of her being a serious offender. This was despite the fact that her only offence was a speeding ticket.

  2. This is not about insurance companies; it’s about what a program/programmer can and cannot do. It’s the usual overestimation of the power of a computer, which is totally reliant on the programmer who wrote its program. Do we want a pseudo-utopian world run by human-programmed machines – is that what passes for progress these days? Is the quasi-thinking machine going to clean up the mess we see every night on the news? I don’t think so.
    cadxx
