My submission to the Online Harms White Paper consultation is set out below. This has been one of the hardest government consultations for me to respond to. In part this is because the White Paper covers so much ground that there is far more to say than can fit into a reasonably sized response – one that stands a chance of being read properly – but in part it is because the consultation looks very much as though it has already assumed the main answers. The questions as set out in the consultation are very much at the detail level – how to do what they’ve already decided to do – although a great deal of what they’ve decided to do is at best questionable, at worst extremely likely to be not just ineffective but actually counterproductive, as well as restricting crucial internet freedom for many of the people who need it the most.
That means my response is somewhat ‘bitty’, covering only a few select areas as well as giving general comments. Fortunately there are some other really excellent responses out there.
Response to the Online Harms White Paper consultation
I am making this submission in my capacity as Senior Lecturer in Information Technology, Intellectual Property and Media Law at the UEA Law School. I research into internet law and specialise in internet regulation from both a theoretical and a practical perspective. My first book, Internet Privacy Rights – Rights to Protect Autonomy, was published by Cambridge University Press in 2014. My second book, The Internet, Warts and All: Free Speech, Privacy and Truth, was also published by Cambridge University Press in August 2018, and has the question of regulation of the Internet as one of its central themes. ‘Online harms’, as set out in the White Paper, are central to that book, from the chapters covering freedom of speech and fake news to the chapter on the nature and practice of trolling. There are direct recommendations about regulation of all of this contained within that book.
I have previously responded to a series of government consultations in this and related fields, including the House of Lords Internet Regulation Inquiry and the DCMS Fake News Inquiry in 2018, and was involved in the Law Commission Abusive and Offensive Online Communications Project that same year. This area falls squarely within my field of expertise and I have written extensively about it in forms other than the two academic books already mentioned. I would be happy to contribute further if that would be of assistance.
Introduction to this submission
Whilst the problem of online harms is a significant one, there are serious dangers associated with inappropriate and excessive regulation. As well as potentially putting freedom of speech, freedom of association and assembly and other human rights at risk, many of the methods suggested could end up being counterproductive, actually causing more harm than they address. They can encourage countermeasures that mean the real ‘villains’ avoid being held to account, they can create tools that are used by what might loosely be called ‘trolls’ against their victims, and they can produce more arbitrary punishments that make it harder to respect the laws and those attempting to enforce them.
It is important not to be persuaded by inaccurate characterisations of the internet as a ‘wild west’ that is ungoverned and needs ‘reining in’. For the vast majority of people, the vast majority of the time, the internet is a place that provides great benefits and an essentially safe and secure environment in which to socialise, do business, find information and much more. Moreover, the internet is already regulated by a wide range of laws, from those governing speech (such as S127 of the Communications Act 2003 and the Malicious Communications Act 1988) and public order law to data protection, copyright, fraud and ‘revenge porn’, as well as civil law such as defamation, misuse of private information and much more. Regulators such as Ofcom and the Information Commissioner’s Office already have extensive powers to operate online. This is not in any real sense an unregulated area – indeed, in many ways, speech online is subject to tighter control and more regulation than speech ‘offline’.
Further, the nature of the online world means that rather than being a place where anonymity provides excessive ‘protection’, it is an environment where records are more precise, more persistent and more easily analysed than in the ‘offline’ world, often making people more accountable for their speech than in the past. There are technological and legal mechanisms to both locate and potentially prosecute perpetrators of online harms – and many have indeed been prosecuted in ways that might well have been seen as disproportionate if their speech had been offline rather than on the internet.
All this means that much more care needs to be taken about how – and indeed whether – to regulate speech any more harshly than it is currently regulated. There are some specific areas where it might be appropriate, but a heavy-handed approach to the regulation of online speech, though politically attractive, will almost certainly cause much more harm than good. Moreover, it can create a sense of complacency about dealing with much more important problems in the online environment, as well as providing inappropriate reassurance that distracts from the critical need to encourage people to be self-supportive and ‘savvy’ online – something far more important than any regulator could be.
This is perhaps the most important point, and it is good to see that a section of the White Paper is devoted to awareness and in particular to empowering users. This should be emphasised in all communications – and the idea that we can somehow create an internet that is completely ‘safe’ should not be promoted so positively. A ‘safe’ internet can become a sterile internet, losing the creativity and dynamism that is the lifeblood of the environment. We should neither overplay the dangers – as the portrayal of the internet as a lawless ‘wild west’ suggests – nor exaggerate the capability to remove those dangers entirely, as the idea of making the internet ‘safe’ implies.
Similarly, if the government does try to regulate along the lines set out in the White Paper, it is important not to expect too much. This kind of regulation is highly unlikely to have a significant impact on the level of ‘online harms’ that are encountered. The risks associated with this kind of regulation, as well as the significant costs involved in setting it up, make it hard to justify pursuing it as it stands.
1 Focussing on illegal content and activity
1.1 That the White Paper starts by referring to illegal ‘and unacceptable’ content and activity should be a concern from the start. If something is really ‘unacceptable’, it should be made illegal – and unless it is illegal, it should not be deemed unacceptable. If acceptability can be determined by policy or politics rather than law, the scope for abuse, uncertainty and bias is enormous. Setting what amounts to a ‘moral’ or ‘ethical’ view of acceptability is a very slippery slope.
1.2 Further, setting one set of standards of ‘acceptability’ for the whole of the internet is not only doomed to failure but likely to destroy some online communities that are in most ways positive and supportive for people who spend time in them – something that should be strenuously avoided. One of the key strengths of the internet is that it allows space for the existence of very different communities and very different platforms – this has been true since the beginning of the ‘social’ internet in particular. Imposing a set of standards ‘from above’ that do not meet either the needs or the expectations of those communities is not only unlikely to succeed in any meaningful way but is likely to cause anger and resentment.
1.3 Where content and behaviour is illegal, the law should apply across all platforms and communities. Deciding ‘acceptability’ should be left to the platforms and communities themselves. This way the different platforms and communities can develop in ways that suit them. Encouraging a diversity of platforms and communities has the additional potential benefit of dispersing the power currently wielded by the internet giants – and of reducing vulnerability to things like fake news and political manipulation, as part of the reason for the effectiveness of both has been the concentration of data and audience on particular platforms, Facebook and YouTube in particular.[1]
2 Online harms
2.1 The online harms discussed in the White Paper need to be considered in this light. The first main types discussed in the White Paper fit clearly into the illegal category: CSEA, terrorist content, content uploaded from prisons, the sale of illegal opioids. Law already exists to address all of these, and to a significant degree this law is already effective, insofar as it can be given the nature of the problem. A new online regulator along the lines discussed in the White Paper is unlikely to have a significant effect on any of these areas – more resources for law enforcement, for prisons (to take more control over the supply of mobile phones, for example) and so forth are much more likely to be effective.
2.2 The other harms discussed, ‘[b]eyond illegal activity’, from section 1.15 of the paper onwards, are another matter. Cyber bullying, misogyny and other forms of online abuse can cross the threshold into illegality, and many perpetrators have been successfully prosecuted (e.g. under the Malicious Communications Act 1988 and S127 of the Communications Act 2003). This does not mean that further law or regulation is required, but that more consistency, better training and clarity from those enforcing the law, and more resources for them, could improve matters, particularly where the application of these laws has appeared arbitrary and out of touch. The notorious ‘Twitter Joke Trial’ of Paul Chambers, which eventually saw the conviction quashed in 2012 after a series of appeals, left those enforcing the law looking more than foolish. This was not a result of too little law or too little regulation but of authorities that did not understand the online world.
3 Anonymity online
3.1 The White Paper notes that ‘tackling online anonymous abuse’ is a key concern. This has been a subject of discussion for those studying the internet for many years – and it is important to raise a strong, cautionary note against the idea that requiring ‘real names’ would be an effective tool against online abuse. In practice, there is little evidence to suggest that it might be, and significant evidence that it would not – and that it would put vulnerable people in particular situations at risk.[2]
3.2 It may seem counterintuitive, but empirical evidence has shown that ‘trolls’ required to use their real names online actually become more rather than less aggressive. Trolls often ‘waive their anonymity’ online, becoming even more aggressive when posting with their real names.[3] As I note in my 2018 book, The Internet, Warts and All, it may be that having a real name displayed emboldens trolls, adding credibility and kudos to their trolling activities. It may also be that they feel they have less to lose and less to protect when their names are revealed – or that it creates a ‘badge of honour’. Whatever the reason, the evidence does not suggest that requiring real names deters trolls or trolling.
3.3 Further, forcing people to use real names puts some people at risk – from whistle-blowers to victims of spousal abuse, to people with religious or ethnically identifying names and many more groups. It also makes the victims of online abuse more vulnerable, as their attackers can learn more about them and use that knowledge to abuse or threaten them – finding out their personal details and using them against them, threatening to report them or tell lies about them to their families and friends, employers and so forth. The classical troll tactic of ‘doxxing’ – releasing documents about a victim – is made much easier by a real names policy.
3.4 There are already legal and technical methods for revealing who lies behind an anonymous account – anonymity online is never more than a basic protection – which can and should be used when required. There are also platforms where real names are already required – Facebook for one – but there is little evidence that they provide more protection from abuse. What could help, as noted above, is a greater diversity of platforms and communities online, so that people can find places that are safer for them. The rise of group-based private social media systems like WhatsApp may be in part a response to this problem: groups kept private and secure are less open to external abusers.
4 Young people online
4.1 The note in the paper that most children have a positive experience online is very welcome: it is really important not to portray the online world as somewhere fundamentally dangerous for children and young people. An overly protective approach to children online would reflect a mischaracterisation and misunderstanding of how the internet works for children, and any regulation that restricts rather than supports children online should be avoided.
4.2 In understanding this, it is critical not to put too much emphasis on the worries of parents about their children’s online activities, particularly when those worries are actively encouraged by the ways that parents are questioned about them. Such worry can be a reflection of the way that parents misunderstand what their children are doing, and feel out of touch. A greater emphasis on educating parents so that they don’t worry would be very welcome.
4.3 Recent studies also show that concerns about the impact of ‘screen time’ on adolescents’ mental health are likely to be unfounded.[4] This fits into a common pattern of misplaced fears and concerns based on misunderstanding of both technology and the lives of young people. It is important not to overreact to ideas and fears spread through ignorance, and an overly onerous regulatory approach towards young people online should be avoided. This is not to underplay the importance of dealing with key issues such as self-harm and suicide, sexting and revenge porn, but to place them in context. It is also important to understand the causality here: where there are correlations between online activity and self-harm, for example, it should not simply be assumed that the online activity is the cause rather than a symptom.
4.4 An area where the government is already attempting to regulate in relation to children – age verification for access to pornographic and other ‘adult’ content – is another example where regulation is highly unlikely to be effective, and a prime example of another classical failure of regulation: the failure to listen to experts. Almost everyone in the technology industry has advised against the path that the government has taken: it won’t in practice help protect children from harm, will encourage complacency, has already encouraged countermeasures both technical (including the rise in usage of VPNs) and tactical (using privacy groups and so forth), and does not address the real issue of harm. Moreover, it is likely to be very expensive and technologically almost impossible to make work well. It was and remains a regulatory trap – the government should do its best not to fall into similar traps in other areas. Caution, care, and a willingness to listen to experts even when they go against what might seem ‘obvious’ are very much needed in the area of internet regulation.
4.5 One area where regulation in relation to children could, however, be useful is privacy – in common with other areas mentioned in this submission, privacy underpins protection in other ways. A requirement for real names, as noted above, would be likely to harm rather than help children at risk of cyber bullying and other online abuse. The ability for children to protect their privacy is critical – and restricting the gathering of data about children by social media platforms, advertisers and so forth should be encouraged.
5 Privacy, fake news and political manipulation
5.1 That leads to the more general point about privacy and personal data: the gathering and use of personal information underpins many of the worst problems on the internet at present. Privacy invasion and profiling lie behind the current manifestation of the fake news phenomenon and the broader issue of political manipulation (as graphically illustrated by the Cambridge Analytica saga) discussed in the White Paper, as well as providing tools for scammers and other criminals, creating vulnerabilities that can be exploited and much more.
5.2 Indeed, rather than focus on the symptoms of fake news and related harms, as the White Paper seems to do in paragraphs 7.25 onwards, focus should be placed on privacy, on data gathering, profiling and targeting. It is these techniques (again, as graphically illustrated by the Cambridge Analytica saga) that make misinformation and political manipulation so particularly effective on the current internet. The White Paper notes that it will be looking at online advertising – but does not make the connection between the techniques used by online advertisers and those used by people spreading fake news and misinformation. They are, in practice, the same methods, the same techniques (data analysis, profiling and targeting), and whilst these are seen as essentially harmless, normal business practices, any attempts to ‘deal with’ fake news, political manipulation and electoral interference are bound to fail. ‘Fact checking’ and labelling of fake news or unreliable sources has been empirically demonstrated to be counterproductive, making people more likely to believe the fake news – one of the reasons Facebook abandoned the practice in 2017.[5] Making this kind of labelling part of any ‘duty of care’ would be directly counterproductive to combatting this kind of online harm.
5.3 Privacy and personal data is also an area where extensive law already exists. Data protection law, and in particular the new General Data Protection Regulation, has the potential to provide a good deal of support for individual privacy – but only if it is enforced with sufficient rigour and support. The Information Commissioner’s Office (‘ICO’) needs to be given more resources both in terms of finance and expertise, and perhaps more responsibilities.
6 The role of a regulator
6.1 As noted in various sections above, there are many areas discussed in the White Paper for which either regulation already exists or further regulation is likely to be counterproductive. The idea of imposing a ‘duty of care’ on internet platforms for some of the subjects discussed in the White Paper should therefore be viewed with great caution. There are further areas which internet platforms are already working extensively to address, and where the question of whether a regulator is really needed should be asked. These include the online abuse of public figures – much of what is suggested is already being done, particularly by Facebook and Twitter, and it is easy to fall into the trap of saying ‘it’s all the fault of the social media companies’ when there is a much bigger underlying issue at a societal level. The online abuse of public figures is closely connected with racism and misogyny – female and ethnic minority public figures are subjected to more, and more virulent, abuse than others – and whilst these are still tightly embedded in our society, blaming the social media companies for the existence of such abuse can easily become a form of deflection or avoidance.
6.2 Codes of practice could be welcome in these areas, but as noted above, imposing one set of standards on all (or most) platforms is likely to be ineffective and to have significant side effects. Enforcing such a code of practice is likely to be difficult, and hard to make consistent, fair or appropriate.
6.3 As noted above, privacy is of critical importance, and yet some of the suggestions for the ‘duty of care’ involve actually invading or weakening privacy for precisely the people who need it the most. In paragraph 7.35, for example, it is suggested that ‘vulnerable users and users who actively search for…’ certain content should be monitored – how is this to be done without extensive invasions of privacy, and how are those invasions of privacy to be carried out in ways that do not put these specifically vulnerable users at further risk? Again, the likelihood that people will take countermeasures and develop tools and techniques to avoid this kind of monitoring should not be underestimated. Much of this kind of content will be driven to areas where it is less easy to provide support and help for the people who really need it.
6.4 These are just some of the examples that indicate quite how difficult effective regulation of this kind is likely to be. It is vital that this regulatory exercise be understood to be highly challenging, very likely to be ineffective, and extremely expensive. Expectations as to its effectiveness, in particular, should be kept in check, and the potential damage to internet freedom – at precisely the time when it is most needed – should not be forgotten.
7 Internet Freedom
7.1 It is easy to blame the internet for problems that have other causes, and easy to see it as something that needs to be ‘reined in’ or controlled. As noted in various sections above, this is a mischaracterisation of the current situation: for the vast majority of people, the vast majority of the time, the internet is something immensely positive, productive and supportive, providing most ordinary people with forms of communication and access to information that were previously the province only of the extremely privileged. Part of the reason for this huge positive is the amount of freedom that we currently have – and that freedom underpins many of our human rights, from freedom of expression to assembly and association, both online and off, freedom from discrimination, the right to a fair trial and more.
7.2 This freedom is something that should not be lightly sacrificed, particularly on the basis of myths and misunderstandings, or from an intention to assuage particular sections of the media. Almost all of the measures suggested in the White Paper have an impact on both freedom of speech and access to information, and many have a significant impact on privacy and the other vital human rights already mentioned. That is not to say that they should not be considered, but those impacts need to be taken very seriously, and regulation not undertaken lightly. Excessive regulation can end up arbitrary and unfair, it can exacerbate existing problems, and it can be gamed by people to the detriment of their enemies – and internet trolls and others wishing harm can be experts in such gaming, using tools created to protect people to actively harm them.
7.3 It should also be borne in mind that tools created now, with authorities that we deem to be benign, can be used by successor authorities that are less benign – we need to learn the lessons from history about this, and to avoid setting things up that can end up being used to oppress rather than protect. This is another key reason for caution in regulating too harshly.
8 Responses to specific questions in the consultation
This response has focussed on the overall effect of the White Paper, and on some particular areas where problems might arise, rather than on the specific consultation questions. Some of the questions are beyond the scope of this response but some do warrant a specific answer. In particular:
Q1: The first and most important thing that the government should do is demonstrate more transparency, trust and accountability itself. The government should lead by example – and a code of conduct for ministers in relation to things like misinformation would be a good start. In practice, ministers not only spread misinformation themselves but contribute to an environment in which information is not trusted. Proper accountability should begin with the government.
Q4 Any regulator needs to be fully accountable to Parliament, through a parliamentary committee, rather than through the DCMS itself. It should be responsible to Parliament rather than to the government, particularly as it needs at times to hold the government to account (see response to Q1).
Q5 As noted throughout this submission, great care needs to be taken to avoid excessive regulation.
Q6-7 These are crucial questions, but I am afraid they betray a misunderstanding of the nature of privacy, something discussed in depth in Chapter 6 of my book The Internet, Warts and All. Privacy is not ‘two-valued’, with some communications being private and others public. It is much more nuanced than that, and sometimes ‘public’ forums include extremely private conversations and communications. The infamous Samaritans Radar failed precisely because it misunderstood this – and the ICO confirmed at the time that private and personal information can exist on ‘public’ social media platforms.[6] Much more care and thought is needed here, rather than assuming that private and public can be easily separated. Moreover, if the criteria for what counts as private become known, this can (a) drive people to more private forms of communication, meaning they are less easily helped, and (b) create an opportunity for ‘gaming’ the regulations.
Q8 As noted throughout this submission, this is the big question for the whole plan. Much more time and thought is needed to avoid the regulation being both heavy-handed and ineffective.
Q10 The bigger question is whether the regulator should exist at all in the form proposed. The government should be asking that bigger question before looking at the precise legal form. If a regulator is definitely decided upon, a new public body would seem more appropriate than an existing one: the ICO has too much to do already, broadcast and related areas are too dissimilar for Ofcom to have much chance of succeeding, the BBFC is struggling over the contentious issue of age verification.
Q11 Making a regulator ‘cost neutral’ is laudable but brings the risk of even more potent lobbying than already exists – and the lobbies of Google, Facebook et al are already remarkably powerful. Whatever funding mechanism is determined needs to be clear, simple and not gameable – and that is very difficult to achieve given the expertise of those likely to be required to pay.
Q12 i) Unless any regulator has the power to disrupt business activities it is unlikely to have any impact at all. ii) ISP blocking already exists in relation to copyright, CSEA (via the IWF) and other areas. Extending it to further areas should be very much resisted, as the impact on freedom of expression is direct and significant – though given that blocking already exists for those areas, there is little logical reason not to extend it. iii) Senior management liability, though attractive, is unlikely to be sustainable.
Q13 Under terms similar to the GDPR.
Q14 Yes, but the details would depend very much on precisely how the regulations are set out.
Q15 The risks associated with Brexit, and the excessive nature of our surveillance regime – in particular things like demands for backdoors to encryption – are the biggest barriers to innovation in the UK technology industry. Both are understandably beyond the remit of this consultation, but those involved should be aware of how damaging both are to the technology industry.
Q17-18 See section 4 above. Children need empowerment more than protection, and parents need to learn more than the children do. The regulator could play an informative role – but it should be recognised that this role is very limited, and too heavy an expectation should not be placed on its success.
I hope this response is helpful. If you need any further information, or links to the research that underpins any of the answers, please let me know.
Dr Paul Bernal
Senior Lecturer in Information Technology, Intellectual Property and Media Law
UEA Law School
University of East Anglia
Norwich NR4 7TJ
Email: paul.bernal@uea.ac.uk
[1] See my article in the Northern Ireland Legal Quarterly in December 2018, Fakebook: why Facebook makes the fake news problem inevitable, online at https://nilq.qub.ac.uk/index.php/nilq/article/view/189
[2] This area is covered in depth in Chapter 8 of my book The Internet, Warts and All: Free Speech, Privacy and Truth, published by Cambridge University Press, 2018.
[3] Most notably the 2016 study from the University of Zurich, reported in Rost, Stahel and Frey, Digital Social Norm Enforcement: Online Firestorms in Social Media, PLoS ONE 11(6).
[4] See Orben and Przybylski, Screens, Teens, and Psychological Well-Being: Evidence From Three Time-Use-Diary Studies, 2019, https://journals.sagepub.com/doi/10.1177/0956797619830329
[5] See https://www.newsweek.com/facebook-label-fake-news-believe-hoaxes-756426
[6] The Samaritans Radar story is the central case study of Chapter 6 of The Internet, Warts and All. It involved analysing social media postings in order to identify when vulnerable people might be contemplating suicide, and failed within ten days of its launch: its privacy invasions were found to be deeply intrusive to exactly the online community it was intended to support, and were seen as putting them at intense risk.