Facebook’s updated terms and conditions…. ;)

In the light of the recently revealed ‘Facebook Experiment’, Facebook has issued new, simplified terms and conditions.*

Emotional Manipulation

  1. By using Facebook, you consent to having your emotions and feelings manipulated, and those of all your friends (as defined by Facebook) and relatives, and those people that Facebook deems to be connected to you in any way.
  2. The feelings to be manipulated may include happiness, sadness, depression, fear, anger, hatred, lust and any other feelings that Facebook finds itself able to manipulate.
  3. Facebook confirms that it will only manipulate those emotions in order to benefit Facebook, its commercial or governmental partners and others.

Research

  1. By using Facebook, you consent to being used for experiments and research.
  2. This includes the use of your data and any aspect of your activities and profile that Facebook deems appropriate.
  3. Facebook confirms that the research will be used only to improve Facebook’s service, to improve Facebook’s business model or to benefit Facebook or any of Facebook’s commercial or governmental partners in some other way.

Ethics and Privacy

  1. Facebook confirms that it has no ethics and that you have no privacy.

*Not actually Facebook’s terms and conditions…

Tim Kelsey discovers care.data is in trouble…

For the avoidance of doubt, I created this video – or rather, I wrote the subtitles that were added to the original. It’s a parody based on the great movie Downfall, and follows a whole series of parodies made over the years…

Paul Bernal

UPDATE: I wrote a blog for OpenDemocracy about why I created this parody – and why parodies are a good idea. It can be found here.

Care.data and the community…

The latest piece of health data news – that, according to the Telegraph, the hospital records of all NHS patients have been sold to insurers – is a body-blow to the care.data scheme, but make no mistake about it: the scheme was already in deep trouble. Last week’s news that the scheme had been delayed for six months was greeted by a lot of people as good news – and quite rightly. The whole project has been mismanaged, particularly in terms of communication, and it is so important a project that it really needs to be done right. Less haste and much more care are needed – and after this latest blow to public confidence it may well be that even with that care the scheme is doomed, and with it a key part of the UK’s whole open data strategy.

The most recent news relates to hospital data – and the details, such as we know them so far, are depressingly predictable to those who have been following the story for a while. The care.data scheme relates to data currently held by GPs; the new scandal relates to data held by hospitals, and suggests, as the Telegraph puts it, that:

“a report by a major UK insurance society discloses that it was able to obtain 13 years of hospital data – covering 47 million patients – in order to help companies “refine” their premiums.”

That is, the hospital data was given or sold to insurers not in order to benefit public health or to help research efforts, but to help businesses make more money – potentially to the detriment of many thousands of individuals, and entirely without those individuals’ consent or understanding. This exemplifies some of the key risks that privacy campaigners have been highlighting over the past weeks and months in relation to care.data – and it adds fuel to their already partially successful efforts. Those efforts lay behind the recently announced six-month delay – and unless the backers of care.data change their approach, this latest story may well be enough to kill the project entirely.

Underestimating the community

One of the key features of the farrago so far has been the way that those behind the project have drastically underestimated the strength, desire, expertise and flexibility of the community – and in particular the online community. That community includes many real experts, in many different fields, whose expertise strikes at the heart of the care.data story. As well as many involved in health care, there are academics and lawyers whose studies cover privacy, consent and so forth, and who have a direct interest in the subject. There are data protection professionals with real-life knowledge of data vulnerability and of the numerous ways in which the health services in particular have lost data over the years – even before this latest scandal. There are computer scientists, programmers and hackers who understand in detail the risks and weaknesses of the systems proposed to ‘anonymise’ and protect our data. And there are advocates and campaigners such as Privacy International, the Open Rights Group and Big Brother Watch, who have experience of fighting – and winning – fights against privacy-invasive projects from the ID card plan to the Snoopers’ Charter.

All of these groups have been roused into action – and they know how to use the tools of a modern campaign, from tweeting and blogging to making their presence felt in the mainstream media. They’ve been good at it – and have to a great degree caught the proponents of care.data on the hop. Tim Kelsey, the NHS National Director for Patients and Information and leader of the care.data project, has often come across as flustered, impatient and surprised at the resistance and criticism. How he reacts to this latest story will be telling.

Critical issues

Two specific issues have been particularly important: the ‘anonymisation’ of the data, and the way that the data will be sold or made available, and to whom. Underlying both of these is a more general issue – that people DO care about privacy, no matter what some may think.

“Anonymisation”?

On the anonymisation issue, academics and IT professionals know that the kind of ‘de-identification’ that care.data talks about is relatively easily reversed. Academics from the fields of computer science and law have demonstrated this again and again – from Latanya Sweeney as far back as 1997, to Arvind Narayanan and Vitaly Shmatikov’s “Robust De-anonymization of Large Sparse Datasets” in 2008, to Paul Ohm’s seminal 2009 piece “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization”. Given this, to be told blithely by NHS England that their anonymisation system ‘works’ – and to hear the public being told that it works, without question or doubt – naturally raises suspicion. There are very serious risks – both theoretical and practical – that must be acknowledged and taken into account. Right now, they seem either to be denied or glossed over – or characterised as scaremongering.
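To see why such claims deserve scepticism, consider a minimal sketch of a ‘linkage attack’ – entirely invented data and column names, not any real dataset or the actual care.data schema, but the same mechanism Sweeney used. Stripping names achieves little if quasi-identifiers such as date of birth, sex and postcode remain, because those fields can simply be joined against a public dataset that still carries names:

```python
# Illustrative linkage attack - invented records and column names, not any
# real dataset or the actual care.data schema.

deidentified_records = [
    {"dob": "1961-07-31", "sex": "M", "postcode": "NR4", "diagnosis": "depression"},
    {"dob": "1975-02-14", "sex": "F", "postcode": "OX1", "diagnosis": "asthma"},
]

public_register = [  # e.g. an electoral roll: names plus the same quasi-identifiers
    {"name": "J. Smith", "dob": "1961-07-31", "sex": "M", "postcode": "NR4"},
    {"name": "A. Jones", "dob": "1980-03-02", "sex": "F", "postcode": "CB2"},
]

QUASI_IDENTIFIERS = ("dob", "sex", "postcode")

def reidentify(records, register):
    """Join 'anonymised' records to named ones on quasi-identifiers alone."""
    index = {tuple(p[q] for q in QUASI_IDENTIFIERS): p["name"] for p in register}
    for r in records:
        key = tuple(r[q] for q in QUASI_IDENTIFIERS)
        if key in index:
            yield index[key], r["diagnosis"]

for name, diagnosis in reidentify(deidentified_records, public_register):
    print(name, "->", diagnosis)   # J. Smith -> depression
```

Sweeney’s famous result – that date of birth, sex and ZIP code alone uniquely identify the large majority of the US population – is what makes a join this crude so effective in practice.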

The sale or misuse of data

The second key issue is that of the possible sale and misuse of data – one made particularly pertinent by the most recent revelations, which have confirmed some of the worst fears of privacy campaigners. Two factors particularly come into play. The first is that the experience of the last few years, with the increasing sense of privatisation of our health services, makes many people suspicious that here is just another asset to be sold off to the highest bidder, with the profits mysteriously finding their way into the pockets of those already rich and well-connected. That, together with the way that exactly who might or might not be able to access the data has remained – apparently deliberately – obscure, makes it very hard to trust those involved. And trust is really crucial here, particularly now.

Many of us – myself included – would be happy, delighted even, for our health data to be used for the benefit of public health and better knowledge and understanding, but far less happy for our data to be used primarily to increase the profits of Big Pharma and the insurance industry, with no real benefit for the rest of us at all. The latest leak seems to suggest that this is a distinct possibility.

The second factor here, and one that seems to be missed (either deliberately or through naïveté), is the number of other, less obvious and potentially far less desirable uses that this kind of data can be put to. Things like raising insurance premiums or health-care costs for those with particular conditions, as demonstrated by the most recent story, are potentially deeply damaging – but they are only the start of the possibilities. Health data can also be used in establishing credit ratings, in screening by potential employers, and in other related areas – and without any transparency or hope of appeal, since such things may well be calculated by algorithm, with the algorithms protected as trade secrets and the decisions made automatically. For some particularly vulnerable groups this could be absolutely critical – people with HIV, for example, who might face all kinds of discrimination. Or, to pick a seemingly less extreme and far more numerous group, people with mental health issues. Algorithms could be set up to find anyone with any kind of history of mental health issues – prescriptions for anti-depressants, for example – and filter them out of a pool of job applicants, seeing them as potential ‘trouble’, as the sketch below illustrates. Discriminatory? Absolutely. Illegal? Absolutely. Impossible? Absolutely not – and the experience over recent years of the use of blacklists for people connected with union activity (see for example here) shows that unscrupulous employers might well not just use but encourage the kind of filtering that would ensure that anyone seen as ‘risky’ was avoided. In a climate where there are many more applicants than places for any job, discovering that you have been discriminated against is very, very hard.
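How little code such discrimination would take is worth seeing. Here is a deliberately bare-bones sketch – hypothetical field names and data, not any real system – of an automated filter over leaked prescription histories:

```python
# Hypothetical sketch of discriminatory filtering - invented data and fields,
# shown only to demonstrate how trivially such a filter could be built.

FLAGGED_PRESCRIPTIONS = {"fluoxetine", "sertraline", "citalopram"}  # anti-depressants

applicants = [
    {"name": "Applicant A", "prescriptions": {"sertraline"}},
    {"name": "Applicant B", "prescriptions": set()},
]

def shortlist(applicants):
    """Silently drop anyone whose history touches the 'risky' list."""
    return [a for a in applicants
            if not (a["prescriptions"] & FLAGGED_PRESCRIPTIONS)]

print([a["name"] for a in shortlist(applicants)])
# ['Applicant B'] - Applicant A is rejected and never learns why
```

The filter itself is a one-line set intersection; the opacity is the point. Because the decision is automatic and the criteria are hidden, the rejected applicant has no way of discovering that the filter was ever applied.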

This last part is a larger privacy issue – health data is just a part of the equation, and can be added to an already potent mix of data, from the self-profiling of social networks like Facebook to the behavioural targeting of the advertising industry to search-history analytics from Google. Why, then, does care.data matter, if all the rest of it is ‘out there’? Partly because it can confirm and enrich the data gathered in other ways – as the Telegraph story seems to confirm – and partly because it makes it easy for the profilers, and that’s something we really should avoid. They already have too much power over people – we should be reducing that power, not adding to it.

People care about privacy

That leads to the bigger, more general point. The reaction to the care.data saga so far has been confirmation that, despite what some people have been suggesting, particularly over the last few years, people really do care about privacy. They don’t want their most intimate information to be made publicly available – to be bought and sold to all and sundry, and potentially to be used against them. They have a strong sense that this data is theirs – and that they should be consulted, informed, and given some degree of control over what happens to it. They particularly don’t like the feeling that they’re being lied to. It happens far too often in far too many different parts of their lives. It makes them angry – and can stir them into action. That has already happened in relation to care.data – and if those behind the project don’t want the reaction to be even stronger, even angrier, and even more likely to finish off a project that is already teetering on the brink, they need to change their whole approach.

A new approach?

  1. The first and most important step is more honesty. When people discover that they’re not being told the truth, they don’t like it. There has been a distinct level of misinformation in the public discussion of care.data – particularly on the anonymisation issue – and those of us who have understood the issues have been deeply unimpressed by the responses from the proponents of the scheme. How they react to this latest revelation will be crucial.
  2. The second is a genuine assessment of the risks – working with those who are critical – rather than a denial that those risks even exist. There are potentially huge benefits to this kind of project – but these benefits need to be weighed properly and publicly against the risks if people are to make an appropriate decision. Again, the response to the latest story is critical here – if the authorities attempt to gloss over it, minimise it or suggest that the care.data situation is totally different, they’ll be rightly attacked.
  3. The idea that such a scheme should be ‘opt-out’ rather than ‘opt-in’ is itself questionable, for a start, though the real ‘value’ of the data is in its scale, so it is understandable that an opt-out system is proposed. For that to be acceptable, however, we as a society have to be the clear beneficiaries of the project – and so far, that has not been demonstrated. Indeed, with this latest story the reverse seems far more easily shown.
  4. To begin to demonstrate this, particularly after this latest story, a clear and public set of proposals about who can and cannot get access to the data, and under what terms, needs to be put together and debated. Will insurance companies be able to access this information? Is the access for ‘researchers’ about profits for the drugs companies or for research whose results will be made available to all? Will any drugs developed be made available at cheap prices to the NHS – or to those in countries less rich than ours? We need to know – and we need to have our say about what is or is not acceptable.
  5. Those pushing the care.data project need to stand well clear of those who might be profiting from the project – in particular the lobby groups of the insurance and drug companies and others. Vested interests need to be declared if we are to entrust the people involved with our most intimate information. That trust is already rapidly evaporating.

Finding a way?

Will they be able to do this? I am not overly optimistic, particularly as my only direct interaction with Tim Kelsey has been on Twitter, where he first accused me of poor journalism after reading my piece ‘Privacy isn’t selfish’ (I am not, and have never presented myself as, a journalist – as a brief look at my blog would have confirmed) and then complained that a brief set of suggestions I made on Twitter was a ‘rant’. I do rant, from time to time, particularly about politics, but that conversation was quite the opposite. I hope I caught him on a bad day – and that he’s more willing to listen to criticism now than he was then. If those behind this project try to gloss over the latest scandal, and think that this six-month delay is just a chance for them to explain to us that we are all wrong, are scaremongering, don’t understand or are being ‘selfish’, I’m afraid this project will be finished before it has even started. Things need to change – or they may well find that care.data never sees the light of day at all.

The community needs to be taken seriously – to be listened to as well as talked to – and its expertise and campaigning ability respected. It is more powerful than it might appear – and if it’s dismissed as a rag-tag mob of bloggers and tweeters, scaremongers, Luddites and conspiracy theorists, care.data could go the way of the ID card and the Snoopers’ Charter. Given the potential benefits, to me at least that would be a real shame – and an opportunity lost.

Privacy for all?

The big ‘privacy’ story this week has been that surrounding the Duchess of Cambridge’s breasts. The coverage it’s been given (and will doubtless continue to be given) has been immense – but the issues that it should raise are far more complex than those that have appeared in the media. A shortish blog post isn’t enough to cover even a fraction of them – but there are a few points that a privacy advocate like me should be highlighting.

This particular intrusion is in many ways a ‘classical’ intrusion: the kind of long-lens photography of a celebrity that has existed pretty much since photography was invented. Indeed, the kind of intrusion that inspired Warren and Brandeis’ seminal piece The Right to Privacy in the Harvard Law Review as long ago as 1890. We can rant and rage about it, put laws in place and try to establish press standards and ethical guidelines as much as we want, but it almost certainly won’t go away – not so long as we’re interested in celebrities, and much though people like me might hope that our celebrity-obsessed culture disappears, I can’t see it happening. However, it does raise some very serious points.

Firstly, from my perspective, it reminds me of an overriding principle: rights, if they are to mean anything, should apply to all. That works both ways in this case:

Even people you dislike, or disapprove of, have rights!

Even people that we don’t like, people we disapprove of, should have the right to privacy. In fact, this is one of the biggest tests of any commitment to rights: do we grant those rights to people we don’t like? I’m no royalist – indeed, in most ways I’m a fairly ardent republican – but I do think the Duchess of Cambridge has a right to privacy. Similarly, though I detest his politics, I believe Max Mosley has a right to privacy. They’re human beings – even if they’re ultra-privileged and ultra-rich, even if they ‘represent’ aspects or elements of society that I thoroughly dislike, and institutions that I would much rather didn’t exist.

But so do the rest of us!

Just as importantly, it shouldn’t be JUST the Royals and other celebrities that have the right to privacy, and the kinds of protection that this right demands, but all of us. We shouldn’t save our outrage at invasions of privacy for those like the Duchess of Cambridge for whom the privacy invasions are obvious and well publicised – we should be aware of, and oppose, invasions of privacy wherever and however they occur. The threats we face are very different from those faced by the Duchess – I doubt anyone wants to point a telephoto lens at my window – but they’re there, and they’re growing all the time. If we care about privacy – and we should care about privacy – we should care about the way the government is planning to invade our privacy on a systematic and devastating scale with the Communications Data Bill (the snoopers’ charter), and we should care about the way businesses are monitoring our behaviour online on an equally systematic basis.

Privacy is about control

It may not seem so, but there are real similarities between these two kinds of invasion of privacy. They’re both about control – the Duchess wants to have some control over what images of her are used, and by whom. Invasions of privacy like this destroy that control, and allow the most intimate of information to be spread without her consent or any chance of control. The kinds of invasions of privacy that we ‘ordinary’ people face also allow the most intimate of information to be gathered about us – whether it’s discovering our sexual preferences by monitoring the websites we visit, our political views from the kind of music we listen to, or even our body shape and size from the products we browse – and allow that information to be spread without our consent or control.

Of course the information is spread to different people and for different reasons. The Duchess’s breasts may be shared over the internet for the purpose of titillation or just gossip – our personal details are spread so that businesses can make money from us, insurance companies can raise our premiums, employers can learn about our personal habits – or the authorities can learn when and what we might want to protest about, in order to stifle that protest.

What is grotesque?

Where is the greater harm? At a personal, immediate and obvious level, the invasion of the Duchess’s privacy is grotesque, and it should be thoroughly rejected. At a societal level – and at a personal level for each and every one of us, the other, systematic, silent, hardly noticed invasions of privacy may be far more dangerous. They have the potential to be truly grotesque – and we should make that very clear.

Annoyed by those cookie warnings?

…spread your anger!

I’m sure you know the warnings I’m talking about – at least you do if you’re in the European Union. Warnings that appear almost every time you look at a new page on the web, telling you that the site uses cookies, and generally telling you that if you continue into the site, you’re accepting that they’re going to put cookies on your computer.

Annoying, aren’t they? Patronising, perhaps? Pedantic? Pointless?

Yes, all of the above. The whole thing’s a bit silly, really. As many people who visit this blog probably realise, the warnings are appearing as a result of a bit of European law – often referred to as the ‘cookies directive’, but more accurately an update to the e-privacy directive (the Directive on Privacy and Electronic Communications). It’s an annoying piece of legislation, one which even before it was passed in 2009 had been subject to pretty intense criticism – and rightly so. The drafters of the legislation deserve a great deal of criticism and a good deal of anger – it’s a bit of a pig’s ear, to be frank. So do the politicians and bureaucrats who brought it into action. Typical European busybodies, I’ve heard it said. They want to control everything we do…

…and yet, deserving though they are of a lot of criticism, they’re not the only ones who should bear the brunt of the anger, of the annoyance. Legislation, even poorly drafted and misguided legislation, doesn’t emerge in a vacuum. That’s particularly true in the case of the cookies directive – it emerged, as most law does, because there was a problem. In this case, the problem was that our privacy was being invaded, persistently and on a large scale, particularly by those involved in the online advertising industry.

Those who follow my blog may have seen me write before about Phorm, perhaps the most invasive and offensive of the behavioural advertisers, whose systems were designed to intercept your entire internet activity, track you and profile you, so as to be able to target advertisements at you. Their activities were hugely invasive of privacy – so much so that the outrage that grew around them played a key part in forcing Phorm to abandon its business – and yet the online advertising industry bodies supported them throughout, and did their very best to discourage any kind of investigation into their activities.
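To see what made interception-based profiling so much more invasive than ordinary cookies, here is a deliberately simplified sketch – invented categories and keyword lists, and in no way Phorm’s actual technology – of profiling a clickstream. The key point is that the profile is built from everything a user browses, across every site:

```python
# Simplified clickstream profiling - invented categories and keywords,
# illustrative only, not Phorm's actual system.
from collections import Counter

CATEGORY_KEYWORDS = {
    "travel": ("flight", "hotel", "holiday"),
    "finance": ("loan", "mortgage", "insurance"),
}

def build_profile(visited_urls):
    """Accumulate an interest profile from every intercepted URL."""
    interests = Counter()
    for url in visited_urls:
        for category, keywords in CATEGORY_KEYWORDS.items():
            if any(k in url for k in keywords):
                interests[category] += 1
    return interests

clickstream = [   # with interception at the ISP, *all* browsing feeds the profile
    "http://example.com/cheap-flight-deals",
    "http://example.org/compare-mortgage-rates",
    "http://example.net/holiday-packages",
]
print(build_profile(clickstream).most_common())   # [('travel', 2), ('finance', 1)]
```

A single site’s cookie sees only that site; an interceptor sitting at the ISP level sees the lot – which is why the outrage was of a different order.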

The cookies directive – and all those annoying warnings – has its origins in that story. Whilst privacy advocates investigated, and European politicians and bureaucrats tried first to find out what was happening and then to work out some kind of solution, what they got from the industry was characterised by denial, obfuscation and obstruction. Either there wasn’t a problem at all, or it would be best solved by self-regulation. Neither of those claims was true – and the people, politicians and bureaucrats knew it. Their equivalents in the US know it too, which is why they’re still trying to get the ‘Do Not Track’ initiative off the ground – and in the US they’re receiving the same kind of resistance as they got in Europe.

Regulators don’t like being fobbed off. They don’t like being treated without respect, or told they’re being foolish – it’s not the best way to get useful, helpful and productive regulation. Instead, it’s likely to bring about bad law – stuff like the cookies directive. Yes, it’s a stupid law – but it would never have been brought into action if the online advertisers had admitted that there was a problem, and at least tried to do something about it. If they’d shown some degree of understanding first of all that people were upset, secondly that they had a reason to be upset, and thirdly that they should do something about it, then they might have been able to head off the legislative mess that has resulted. They didn’t.

It’s not an unusual story – there are parallels with the way the newspaper industry’s far-from-effective self-regulation led to the Leveson Inquiry, and may end up in over-the-top regulation of the press. If you behave badly, and continue to behave badly even when people complain, things like that happen…. and you can’t just blame the regulators.

In the case of the cookies directive – and all those annoying warnings – the online advertising industry should take their share of your annoyance and anger…..

Privacy is not the enemy…

I attended the Oxford Institute event ‘Anonymity, Privacy and Open Data’ yesterday, notable amongst other things for Professor Ross Anderson’s systematic and incredibly powerful destruction of the argument in favour of ‘anonymisation’ as a protection for privacy. It was a remarkable event, with excellent speakers talking on the most pertinent subjects of the day in terms of data privacy: compelling stuff, and good to see so many interesting people working in the privacy and related fields.

And yet, at one point, one of the audience asked whether a group like this was not too narrow, and whether by focussing on privacy we were losing sight of other ‘goods’ – he was thinking particularly of medical goods, as ‘privacy’ was seen as threatening the possibility of sharing medical data. I understood his point – and I understood his difficulty, as he was in a room largely full of people interested in privacy (hardly surprising, given the title of the event). Privacy advocates are often used to the reverse position – trying to ‘shout out’ about privacy to a room full of avid data-sharers or supporters of business innovation above all things. A lot of antagonism. A lot of feeling ‘threatened’. And yet I believe that many of those who feel threatened are missing the point about privacy. Just as Guido Fawkes is wrong to characterise privacy as just a ‘euphemism for censorship’ (as I’ve written about before) and Paul McMullan is wrong to suggest that ‘privacy is for paedos’, the idea that privacy is the ‘enemy’ of so many things is fundamentally misconceived. To a great extent, the opposite is true.

Privacy is not the enemy of free expression – indeed, as Jo Glanville of Index on Censorship has argued, privacy is essential for free expression. Without the protection provided by privacy, people are shackled by the risk that their enemies – those who would censor them, arrest them or worse – can uncover their identities, find them and do their worst. Without privacy, there is no free expression.

Privacy is not the enemy of ‘publicness’ – in a similar way, to be truly ‘public’, people need to be able to protect what is private. They need to be able to have at least some control over what they share, what they put into the public. If they have no privacy, no control at all, how can they know what to share?

Privacy is not the enemy of law enforcement – privacy is sometimes suggested to be a tool for criminals, something behind which they can hide. The old argument that ‘if you’ve got nothing to hide, you’ve got nothing to fear’ has been exposed as a fallacy many times – perhaps most notably by Daniel Solove (e.g. here) – but there is another side to the argument. Criminals will use whatever tools you present them with. If you provide an internet with privacy and anonymity, they’ll use that privacy and anonymity – but if you provide an internet without privacy, they’ll exploit that lack of privacy. Many scams related to identity theft are based around taking advantage of precisely that lack. It would perhaps be stretching a point to suggest that privacy is a friend to law enforcement – but it is as much of an enemy to criminals as it is to law enforcement agencies. Properly implemented privacy can protect us from crime.

Privacy is not the enemy of security – in a similar way, terrorists and those behind what’s loosely described as cyberwarfare will exploit whatever environment they are provided with. If Western law enforcement agencies demand that social networks install ‘back doors’ to allow them to pursue terrorists and criminals, you can be sure that those back doors will be used by their enemies – terrorists, criminals, agents of enemy states and so forth. This last week has seen Privacy International launch their ‘Big Brother Inc’ database, revealing the extent to which surveillance products developed in the West are being sold to despotic and oppressive regimes. It’s systematic, and understandable. Surveillance is a double-edged sword – and privacy is a shield which faces many ways (to stretch a metaphor beyond its limits!). Proper privacy protection works against the ‘bad guys’ as well as the ‘good’. It’s a supporter of security, not an enemy.

Privacy is not the enemy of business – though it is the enemy of certain particular business models, just as ‘health’ is the enemy of the tobacco industry. Ultimately, privacy is a supporter of business, because better privacy increases trust, and trust helps business. Governments need to start to be clear that this is the case – and that by undermining privacy (for example through the oppressive and disproportionate attempts to control copyright infringement) they undermine trust, both in businesses and in themselves as governments. Privacy is certainly a challenge to business – but that’s merely reflective of the challenges that all businesses face (and should face) in developing businesses that people want to use and are willing to pay money for.

Privacy is not the enemy of open data – indeed, precisely the opposite. First of all, privacy should make it clear which data should be shared, and how. ‘Public’ data doesn’t infringe privacy – from bus timetables to meteorological records, from public accounts to parliamentary voting records. Personal data is just that – personal – and sharing it should happen with real consent. When is that consent likely to be given? When people trust that their data will be used appropriately. When will they trust? When privacy is generally in place. Better privacy means better data sharing.

All this is without addressing the question of whether (and to what extent) privacy is a fundamental right. I won’t get into that here – it’s a philosophical question and one of great interest to me, but the arguments in favour of privacy are highly practical as well as philosophical. Privacy shouldn’t be the enemy – it should be seen as something positive, something that can assist and support. Privacy builds trust, and trust helps everyone.

Opting out of Street View….

Nearly 250,000 Germans have ‘opted out’ of having their homes visible when Google’s Street View comes online – which will be some time in the near future – though Andreas Türk, Product Manager for Street View in Germany, has admitted that some of those homes will still be visible at launch, as the process is complex and not all the instructions were clear. His blog here provides the explanations.

It’s an interesting figure – is 250,000 (or, to be more precise, 244,237) a large number? As Andreas Türk says, it amounts to 2.89% of those who could have objected – implying a pool of roughly 8.5 million eligible households – and the argument can be made both ways. Google might argue that it means that the vast, vast majority don’t object to Street View, so their service has some kind of overall ‘acceptance’ or even ‘support’ by the populace. Privacy advocates might say the converse – in absolute terms, 250,000 is a LOT of people. If you had 250,000 people marching on the streets with banners saying ‘NO TO STREET VIEW’ it would make headline news, certainly in Germany, and probably throughout Europe.

Both sides have a point: 2.89% isn’t a very large proportion, but 250,000 is a lot of people, and when you look closer at the process I suspect that the privacy advocates have a stronger position. Given that the opt-out required an active process (and Google say that 2/3 of those who objected used their own online tool to do so) it does suggest that quite a lot of people care about this. If the reverse system had been in place – and you had to actively choose to HAVE your home ‘unblurred’ on Street View, what kind of figures would you get? Would more than 250,000 have gone through a process to make their houses visible? I doubt it….

…and what of the rest of us? Germans got a choice because their government made a point about it, and demanded that Google give them the choice before the service went active. As the BBC reports, other governments have made other kinds of objections, but none has been given the choice that the Germans have had. As I’ve blogged before, Germany has a pretty active privacy lobby, so it’s not surprising that it is the country that has taken this step – what would the result have been if the option had been given in the UK? Or the US? Probably not as dramatic as the German result – which makes me wonder whether Google has missed a trick by not providing the option elsewhere. If they did so, and an even tinier fraction than the 2.89% in privacy-aware Germany objected, they might be able to be even bolder about proclaiming that people love Street View…..

Consent: a red herring?

I asked Peter Fleischer, Google’s Global Privacy Counsel, a question about ‘opt-in’ or ‘opt-out’, in a panel session at the Computers, Privacy and Data Protection Conference in Brussels in January, to which he gave an interesting answer, but one that was greeted with more than a little dismay. In essence, his answer was that the whole question of ‘opt-in/opt-out’, and by implication the whole issue of consent, was a bit of a red herring. Unsurprisingly, that was not a popular view at a conference where many of the delegates were privacy advocates – but he did and does have a very good point. He went on to explain, quite reasonably, that if someone wants something online, they’ll just consent to anything – scrolling down through whatever legalese is put in the consent form without reading it, then clicking OK without a second thought, just to get at the service or website they want. And he’s right, isn’t he? That IS what we all do, except in the most exceptional circumstances.

The question, then, is what can or should be done about it. Peter Fleischer’s implication – one shared, it appears, by most in the industry – is that we should recognise the emptiness and unhelpfulness of consent, and not bang on so much about ‘opt-in’ or ‘opt-out’. We’re missing the point, and barking up the wrong tree. And, to a certain extent, I’m sure he’s right: as things stand, consent and opt-in are not really very helpful. However, it seems to me that he’s also missing the point – whether deliberately, as it suits the interests of his employers to have opt-out systems and to allow such things as browse-wrap consent on the net, or because he thinks there’s no alternative, I wouldn’t like to say – in the conclusions that he draws, and in his suggestions as to what we do next.

If consent, in its current form on the net, is next to meaningless, rather than abandoning the concept as useless wouldn’t it be better to find a way to make it more meaningful? This is something that many people are wrestling with – including the EnCoRe (Ensuring Consent & Revocation) group – and something I shall be presenting a paper about at the BILETA conference in Vienna next week. The way I see it, the internet offers unprecedented opportunities for real-time communication and interaction, for supplying information and for giving users choices and options – shouldn’t there be a way to harness these opportunities to make the consent process more communicative, more interactive and more ‘real-time’, and to give users more choice and more options? A toy sketch of what that might look like follows below.
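As a toy illustration of that direction of travel – my own sketch, not EnCoRe’s design nor any deployed system – consent could be recorded per purpose, checked at the moment of each use, and revoked as easily as it was granted:

```python
# Toy model of granular, revocable consent - an illustrative sketch only,
# not EnCoRe's architecture or any real consent-management system.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user: str
    grants: dict = field(default_factory=dict)  # purpose -> time of grant

    def grant(self, purpose: str) -> None:
        self.grants[purpose] = datetime.now(timezone.utc)

    def revoke(self, purpose: str) -> None:
        self.grants.pop(purpose, None)          # revoking is as easy as granting

    def allows(self, purpose: str) -> bool:
        return purpose in self.grants           # checked at each use, in real time

record = ConsentRecord(user="alice")
record.grant("service_improvement")
print(record.allows("ad_targeting"))    # False - never granted
record.grant("ad_targeting")
record.revoke("ad_targeting")
print(record.allows("ad_targeting"))    # False again - consent withdrawn
```

The code itself is trivial; the substance is the design choice it embodies – a data controller that checks allows() at every use, rather than relying on a single click-through, gives users exactly the kind of ongoing, revocable choice that current consent forms fail to deliver.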

Peter Fleischer’s employers, Google, actually do some really interesting and positive things in this field – the Google Dashboard and Google’s AdPreferences both provide information and allow options and choices for people whose data is being gathered and used. The next stage is for these to be given more prominence, for right now they’re pretty well hidden away, and it’s mostly just the hackers and privacy advocates who even know they exist, let alone use them well. If that changes, perhaps Google can help consent to become much more than a red herring, and instead part of the basic process of the internet.