It’s not just the porn that will be over-blocked….

Newsnight last night included a feature on how the recently introduced internet ‘porn-filters’ were actually blocking more than just porn. Specifically, they noted that sex-education websites, sexual health websites and so forth were being blocked by the filters. This comes as no surprise to anyone working in the area – indeed, my own blog post asking questions about porn-filters was itself blocked – but it is still good to see that the mainstream media is now taking it on board, albeit very late in the day.

It wasn’t a bad feature, but it only began to scratch the surface of the issue. It left a lot of questions unanswered and a lot of crucial issues untouched. The first of these was the suggestion, insufficiently challenged, that this over-blocking was just some sort of ‘teething trouble’ – that once the systems are properly up and running, the problems will be ironed out and everything will work perfectly. As anyone who understands the systems would tell you, this is far from being the case: over-blocking is an inherent problem, one that will never go away. The nature of these filters – the fact that they work essentially algorithmically – means that they will always (and automatically) pick out sites that deal with the same subject matter as the ones you are trying to block. If you want to block sites that deal with sex, you will block sites that deal with sex education, with sexuality and so forth. They are also almost certain to block sites connected with LGBT issues, leaving a lot of young and vulnerable people without access to key information. Here, as in so many cases, ignorance is not bliss. Far from it. You can clean things up bit by bit in some cases, as site owners complain – but this is a slow and painstaking process, and in any case it only works when site owners discover that they have been blocked, which is far from certain to happen. Very often, filtering/censorship will happen without anyone even noticing.
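To see why the over-blocking is inherent rather than a teething problem, here is a minimal sketch of a naive keyword-based filter – the keyword list and example pages are entirely hypothetical, and real filters are more sophisticated, but any system keyed to subject matter shares the same failure mode: it matches words, not meaning.

```python
# A minimal sketch of keyword-based filtering, using an entirely
# hypothetical blocklist. Any filter keyed to subject matter will
# catch discussion OF a topic along with the topic itself.

BLOCKED_KEYWORDS = {"sex", "porn", "explicit"}

def is_blocked(page_text: str) -> bool:
    """Block any page containing a listed keyword, regardless of context."""
    words = {word.strip(".,!?\"'").lower() for word in page_text.split()}
    return not BLOCKED_KEYWORDS.isdisjoint(words)

print(is_blocked("Free explicit porn here"))               # True: the intended target
print(is_blocked("NHS advice on safe sex for teenagers"))  # True: over-blocked
print(is_blocked("Sex education for LGBT teenagers"))      # True: over-blocked
print(is_blocked("Cake recipes"))                          # False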

The second key absence from the Newsnight feature was the fact that these filters are not planned just to filter out pornography. They are planned to deal with a whole lot of other material, from ‘extremism’ and ‘esoterica’ to gambling, violence and so forth. Quite apart from the immense difficulty of defining things such as extremism and esoterica – let alone whether it is appropriate to block such sites in the first place – it needs to be remembered that the over-blocking issue with porn sites will apply equally to these other categories. Blocking ‘extremist’ sites will end up blocking sites that discuss extremism, for example – and sites that might help people find their way out of extremist groups. This won’t just fail to stop the growth in extremism – it will hinder attempts to prevent that growth. It won’t just fail to be effective – it will be actively counterproductive.

This is no accident. Censorship in general does not work – as students of its history should be aware. Though freedom of speech should not be viewed as an absolute, it should be viewed as an ideal, and it should only be curtailed with sufficient reason, with great care and with great understanding. The blunt instrument of internet filtering has neither the care nor the understanding. It will do far more harm than good.

Porn filters, politics and liberalism….

This afternoon the Lib Dem conference rejected a call for ‘default-on’ porn filters – an idea being pushed strongly by David Cameron – and rejected it decisively. A great deal has been written about this before – including by me (e.g. ’10 questions for David Cameron’ and the ‘porn-filter Venn diagram’) – so I won’t rehash all of the old discussions, except to say that the decision to reject the plan seems to me a victory for intelligence, understanding and liberalism. The plan would not do what it is intended to do, it would produce deeply counterproductive censorship, and it could both encourage complacency and discourage the crucial kind of engagement between parents and children that could really help in this area.

A revealing debate….

What does interest me, however, is the nature of the discussions that happened at the conference. The divide between those pushing the motion and those opposing it was very stark: it was primarily a divide of understanding, but it was also one of liberalism. The difference in understanding was the most direct: those in favour of the filters seemed to have very little comprehension of how the internet works – or even of how young people think and behave. In technical terms, their arguments were close to meaningless and the solutions they proposed were unworkable – while in terms of understanding the needs of young people, they seemed to be even further from the mark. The contributors on the other side, however, were remarkably good. I’m not a Lib Dem, but I couldn’t help being very impressed by many of them. Three stood out: Julian Huppert, who is my MP and may well be the MP who best understands the internet; Sarah Brown (@auntysarah on Twitter, a councillor and noted LGBT activist); and Jezz Palmer (@LoyaulteMeLie on Twitter).

Jezz Palmer was particularly impressive, setting out exactly why this kind of thing would be disastrous for kids.

“Implementing these filters punishes children who are otherwise completely alone,” she said. “I know there are still kids growing up who feel how I did and there will be for generations to come. Don’t take away their only research, don’t leave them alone in the dark.”

Ultimately that’s the point. This kind of filter over-blocks – indeed, the blog post I first wrote on the subject a few months ago was itself blocked by a porn filter – and over-blocks precisely the sort of material that young people need to be able to find if they are to grow up in the kind of healthy and positive way that anyone with any sense of liberalism should promote. They need to explore, to learn, to find ways to understand – much more than they need to be controlled and nannied, or even ‘protected’.

The role of liberalism…

Is it a coincidence that those who understood the issues – both the technical issues and those concerning young people – were also those with the most liberal ideas about filtering and censorship? I don’t know, but I suspect that there is often a connection. The people I know who work on the internet as it relates to privacy and free expression come from a wide variety of political backgrounds – from the extremes of both right and left – but the one thing they tend to have in common is a sense of liberalism (with a very small ‘l’), in that they believe in individual freedom and individual rights. An understanding of, or belief in, the importance of autonomy at a personal level doesn’t fit neatly on the ‘left-right’ spectrum…

Whether the decision of the Lib Dem conference really matters remains to be seen. The Lib Dem conference often makes fairly radical decisions – supporting the legalisation of cannabis is one of the more notable ones – but its leadership (and Nick Clegg in particular) doesn’t always follow them. They (and he) have, however, taken the advice of Julian Huppert seriously before, most notably in the crucial decision not to back the Communications Data Bill (the Snoopers’ Charter). I hope they listen to Dr Huppert again, and come out as firmly against these filters as their conference has – because it could be a significant and positive move politically.

How will the other parties react?

The other parties will have been watching the debate – and both the Tories and the Labour Party are in many ways deeply untrustworthy where the internet is concerned, leaning in distinctly authoritarian directions on surveillance, hard-line enforcement of copyright and the idea of censorship. I hope they noticed the key aspect of today’s debate: that the people who knew and understood the internet were all firmly against filters. I suspect that fear of the likely headlines in the tabloids will stop them being as strongly against porn filters as the Lib Dem conference has shown itself to be, but they might at least decide not to push so strongly in favour of those filters, and perhaps let the idea melt away like so many other unworkable political ideas. I hope so – but I will be watching the other conferences very closely to find out.

Syria and the myth of simple solutions

Last night’s parliamentary defeat for the government over the proposed military intervention in Syria was both dramatic and potentially hugely significant. A great deal will be (indeed has already been) written about it by many, many people – from the impact it might have on the people of Syria to the possibly massive political ramifications in the UK. I don’t propose to add to those – I was hugely surprised by the events of last night, and fully expected, right until the moment the result of the vote was announced, that the government would win and that military action would be given the green light. That shows how much I know… and means I’ll await events rather than pretend that I know what’s going to happen next.

There is, however, one aspect that I want to say a few words about – one aspect of the debate in the House of Commons that really impressed me. That’s the way that a large number of MPs, on all sides of the House, were unwilling to accept the idea that there was a simple solution to a highly complex problem: that there was one ‘obvious’ way to deal with it. It is a highly seductive way to look at things – but it very rarely turns out to be true. The simple solution suggested here – effectively ‘surgical strikes’, limited in scope and in impact – didn’t convince me, and didn’t convince the MPs. Speech after speech asked questions to which no answers were forthcoming: most directly, ‘how do you know this will actually work?’

There’s no question that the Assad regime is nightmarish. The atrocities whose results we have seen should sicken everyone. They certainly sicken me – and I would love to be able to find a way to stop them. That, however, does not mean that I would do ‘anything’ that is suggested – however bad things are, they can get worse, and they very often do when a seemingly simple solution is tried.

From a political perspective simple solutions are very attractive – they can be ‘sold’ to the public, they can make good headlines in the newspapers – but when you look closer, they rarely provide the answers that are needed. In my specialist field, we’ve seen this again and again in recent months. The idea that you can ‘solve’ the complex challenge of pornography on the internet with a ‘simple’ filter system (about which I’ve blogged many times, starting with these 10 questions) is attractive but pretty much unworkable and has highly damaging side effects. The ideas that you can ‘solve’ the problem of abusive tweeting with a ‘simple’ report abuse button (see here) or deal with trolling comments on blogs with a ‘simple’ real names policy (see here) are similarly seductive – and similarly counterproductive. These are complex problems – and in reality they need more complex, more nuanced, more thoughtful solutions.

The MPs in last night’s debate seemed to understand that – and not to like it. They were being asked to support something essentially on trust – with minimal legal evidence and even flimsier intelligence support – and enough of them could see that the ‘simple’ solution being suggested was not as simple as it seemed. They wanted to know more, to think more, and to be given more time. Most of all, they wanted more evidence that it would work – that it would make the situation better for the people of Syria. Personally, I don’t think we know that it would. It might – but it might well not.

I was challenged last night on Twitter by a few people to offer an alternative – and I couldn’t provide one, certainly not in 140 characters. I want to see more diplomacy, more imagination, more humanitarian support, more engagement with the people not only of Syria but of Iran – but do I know that this will work? Of course I don’t. I can’t offer a simple solution. I strongly suspect there isn’t one. This will be messy, this will be bloody, this will be hideous – it already is. ‘Standing by and doing nothing’ is horrible – but military intervention is horrible too. There’s no easy way out – and we should stop pretending, particularly to ourselves, that there is.

My porn-blocking blog post got porn-blocked!

[Screenshot: Strathmore University’s block page, 26 July 2013]

Just to make the point about porn-blocking filters even more concrete, I’ve discovered that my blog post on porn-blocking has been automatically blocked by Strathmore University’s system (thanks to @LucyPurdon for pointing it out). Strathmore University is in Kenya, and I don’t know much about it, but the implication of the message is clear: the blog post was blocked because the system saw too many mentions of the word pornography – I’m still not clear about the proxies issue, though.

What does all this imply? Well, it shows the limitations of an automated system: analysing my blog post would indeed find that I mention the word ‘pornography’ rather a lot – appropriately, as I’m discussing how we deal with pornography on the net – but that certainly doesn’t make the post pornographic. Any automated system will have that kind of limitation… and will therefore block a whole swathe of material that is educational, informative and directly relevant to important issues. Automatically block things this way and you will drastically reduce access to information about crucial subjects – sex is just one of them. Cutting down access to information, quite apart from all the freedom of speech issues, will leave kids less well informed, and less able to deal with these issues. Education is the key – and filters will (and do!) reduce that.
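A hedged sketch of the kind of frequency-based rule that Strathmore’s block message implies – the threshold and the matching logic here are entirely assumed for illustration, not taken from any real product:

```python
import re

def blocked_by_frequency(page_text: str, term: str = "pornography",
                         threshold: int = 5) -> bool:
    """Block a page if it mentions the term 'too often'.
    The threshold is an arbitrary assumption for illustration."""
    mentions = len(re.findall(rf"\b{term}\b", page_text, re.IGNORECASE))
    return mentions >= threshold

# An essay critiquing porn filters mentions 'pornography' constantly...
essay = "How should we handle pornography online? Blocking pornography by filter cannot work. " * 3
print(blocked_by_frequency(essay))   # True – the critique gets blocked

# ...while an actual porn site that avoids the word sails through.
print(blocked_by_frequency("XXX hot pics – click here!"))  # False
```

The rule counts the word, not the content – so discussion of pornography is blocked while pornography itself, suitably worded, is not.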

One key thing to note: the Strathmore University system is at least transparent – it tells you why a site is blocked, which might at least give you some way to get around it. Many systems (for example the way that many ISPs implement the IWF’s blacklist) are not transparent: you don’t know why you can’t get access to a site, either getting a ‘site not found’ message or even nothing. With those systems, there’s even more of a problem – and I have a feeling that those are the systems that David Cameron is likely to push….

Porn-blocking filters not only don’t work in their own terms, they’re actually damaging!

There’s no ‘silver bullet’ for porn….

I was lucky enough to be on Radio 4’s ‘The Media Show’ yesterday, to talk about Cameron’s porn-blocking plans: I think I was invited as a result of my blog post from Monday, asking 10 questions about the plan. I didn’t have much time – and I’m still very much an amateur on the radio – and though I think I managed to get across some of what I wanted to say, I didn’t get close to persuading the other person talking about the subject – Eleanor Mills, of the Sunday Times. I think I know why: she’s in many ways correctly identified the monster that she wants to slay, and she thinks that she’s found the silver bullet. The problem is, for porn, there IS no silver bullet. It’s not that simple.

The solution that she suggested – and she said that ‘the man from Google’ told her it was possible – was a simple ‘switch’ to turn a ‘porn filter’ on or off. If you wanted to see ‘restricted’ material for some justified reason (e.g. to look at material for sex education purposes) you could turn the filter off, and you’d be asked a question in a pop-up, something like ‘Do you want to look at this for research purposes?’. You’d click OK, look at the stuff, then turn the filter back on. Simple. Why not do it?

It doesn’t really take a technical expert to see the flaws in that plan, even if it were possible to create such a switch – how it wouldn’t stop viewing for bad reasons (who’s going to be honest when asked why they want to look?), how it avoids the fundamental question of how you define ‘porn’, and all the other crucial issues that I mentioned in my other blog. That’s not to mention the technical difficulties, the problems of over-censorship and under-censorship, the way that the really bad stuff will avoid the filters anyway – let alone the even more fundamental issues of free speech and the need to be able to access information free of fetters or limitations… There are so many flaws in the plan that it’s hard to know where to start – but it’s easy to see the attraction of the solution.

We all want to find easy solutions – and computerised, technical solutions often promise those kinds of easy solutions. Porn, however, is not amenable to easy solutions. It’s a complex subject – and sadly for those looking for silver bullets, it needs complex, multifaceted solutions that take time, effort and attention.

We do, however, know what a lot of those solutions are – but they’re not really politically acceptable at the moment, it seems. We know, for example, that really good sex and relationships education helps – but the government recently voted down a bill that would have made that kind of education compulsory in schools. The ‘traditional’ education favoured by Michael Gove and the Daily Mail has no truck with new-fangled trendy things like that, and the puritanical religious approach still claims, despite all the evidence, that ignorance of sexual matters is bliss. It isn’t. Better education is the key starting point to helping kids find their way with sex and relationships – and to countering the ‘poisonous’ influence of ‘bad’ porn (which, it must be remembered, is generally NOT illegal), the kind of thing that Eleanor Mills justifiably wants to deal with. If she really wants to help, she should be fighting the government on that, not pushing technical, magical solutions that really won’t work.

The next stage is putting more resources – and yes, that means money – into the solutions that we know work well: the IWF in dealing with child abuse images, CEOP in dealing with sex offenders’ online activities. Work at a targeted, intelligent level. The experts know it works – but it’s hard work, it’s not headline-grabbing, and it’s not ‘instant’. What’s more, it’s not cheap.

The other part of the jigsaw, for me, is to start having a more intelligent, more mature and more honest debate about this. If politicians didn’t go for soundbite solutions without talking to experts, but actually listened to what people said, this might be possible. Sadly, with the current lot of politicians on pretty much every side, that seems impossible. This isn’t a party-political issue: Labour are every bit as bad as the Tories on this, with Helen Goodman a notable offender. It’s an issue of politicians being unwilling to admit they don’t understand, and unwilling to take advice that doesn’t fit with their ‘world view’. It’s an issue of the corrosive influence of hypocritical and puritanical newspapers like the Daily Mail, on the one hand calling for internet porn bans and on the other parading their ‘sidebar of shame’, complete with images and stories that objectify women and girls to an extreme.

The one saving grace here is that the solution they suggest simply won’t work – and eventually they’ll realise that. In Australia, a similarly facile solution was tried, only to be ignominiously abandoned a few years later. If only that were the lesson from Australia that Lynton Crosby managed to get across to David Cameron….

10 questions about Cameron’s ‘new’ porn-blocking

There’s been a bit of a media onslaught from David Cameron about his ‘war on porn’ over the weekend. Some of the messages given out have been very welcome – but some are contradictory and others make very little sense when examined closely. The latest pronouncement, as presented to/by the BBC, says

“Online pornography to be blocked automatically, PM announces”

The overall thrust seems to be that, as Cameron is going to put it in a speech:

“Every household in the UK is to have pornography blocked by their internet provider unless they choose to receive it.”

So is this the ‘opt-in to porn’ idea that the government has been pushing for the last couple of years? The BBC page seems to suggest so. It suggests that all new customers to ISPs will have their ‘porn-filters’ turned on by default, so will have to actively choose to turn them off – and that ‘millions of existing computer users will be contacted by their internet providers and told they must decide whether to activate filters’.

Some of this is welcome – the statement about making it a criminal offence to possess images depicting rape sounds like a good idea on the face of it, for example, for such material is deeply offensive – though quite where it would leave anyone who owns a DVD of The Accused, in which Jodie Foster’s character is raped, is far from clear. Indeed, that is the first of my ten questions for David Cameron.

1     Who will decide what counts as ‘pornography’, and how?

And not just pornography, but images depicting rape? Will this be done automatically, or will there be some kind of ‘porn board’ of people who will scour the internet for images and decide what is ‘OK’ and what isn’t? Automatic systems already exist to do this for child abuse images, and by most accounts they work reasonably well, but they haven’t eradicated the problem of child abuse images. Far from it. If it’s going to be a ‘human’ system – perhaps an extension of the Child Exploitation and Online Protection Centre (CEOP) – how are you planning to fund it, and do you have any idea how much this is going to cost?
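For what it’s worth, the automatic systems that already exist for child abuse images work on a quite different principle from content filters: they match hashes of known, human-verified images against a blocklist. A minimal sketch of the idea – the hash value below is entirely hypothetical, and real systems such as PhotoDNA use perceptual rather than exact hashes:

```python
import hashlib

# Hypothetical blocklist of hashes of known, human-verified illegal images.
KNOWN_IMAGE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_illegal_image(path: str) -> bool:
    """Flags only exact copies of images a human has already reviewed."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_IMAGE_HASHES
```

That design is why such systems produce few false positives – they can only match specific, already-reviewed images – and also why nothing similar can exist for ‘pornography’ in general: there can be no definitive list for a category that nobody can precisely define.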

2     Do you understand and acknowledge the difference between pornography, child abuse images and images depicting rape? 

One of the greatest sources of confusion over the various messages given out over the weekend has been the mismatch between headlines, sound bites and actual proposals (such as they exist) over what you’re actually talking about. Child abuse images are already illegal pretty much everywhere on the planet – and are hunted down and policed as such. As Google’s spokespeople say, Google already has a zero-tolerance policy for those images, and has done for a while. Images depicting rape are another category, and the idea of making it illegal to possess them would be a significant step – but what about ‘pornography’? Pornography is currently legal, it comes in many forms, and most of it has very little to do with either of the first two categories… which brings me to the third question

3     Are you planning to make all pornography illegal?

…because that seems to be the logical extension of the idea that the default position should be that ‘pornography’ is blocked as standard. That, of course, brings up the first two questions again: who’s going to make the decisions, and on what basis? Further to that, who’s going to ‘watch the watchmen’? The Internet Watch Foundation, which currently ‘polices’ child abuse images, though an admirable body in many ways, is far from a model of transparency (see this excellent article by my colleague Emily Laidlaw). If a body is to have sweeping powers to control what content is available – powers above and beyond those set out in law – that body needs to be accountable and its operations transparent. How are you planning to do that?

4     What about Page 3?

I assume you’re not considering banning this. If you want to be logically consistent – and, indeed, if you want to stop the ‘corrosion of childhood’ then doing something about Page 3 would seem to make much more sense. Given the new seriousness of your attitude, I assume you don’t subscribe to the view that Page 3 is just ‘harmless fun’…. but perhaps you do. Where is your line drawn? What would Mr Murdoch say?

5     What else do you want to censor?

…and I use the word ‘censor’ advisedly, because this is censorship, unless you confine it to material that is illegal. As I have said, child abuse images are already illegal, and the extension to images depicting rape is a welcome idea, so long as the definitions can be made to work (which may be very difficult). Deciding to censor pornography is one step – but what next? Censoring material depicting violence? Material ‘glorifying’ terrorism? Anything linking to ‘illegal content’, like material in breach of copyright? It’s a very slippery slope towards censoring pretty much anything you don’t like, whether for political purposes or otherwise. ‘Function creep’ is a recognised phenomenon in this area, and one that’s very difficult to guard against. What you design and build for one purpose can easily end up being used for quite another – which brings me to another question…

6     What happens when people ‘opt-in’?

In particular, what kind of records will be kept? Will there be a ‘list’ of those people who have ‘opted in to porn’? Actually, scratch that part of the question – because there will, automatically, be a list of those people who have opted in. That’s how the digital world works – perhaps not a single list, but a set of lists that can be compiled into a complete list. The real question is what you are planning to do with that list. Will it be considered a list of people who are ‘untrustworthy’? Will the police have immediate access to it at all times? How will the list be kept secure? Will it become available to others? How about GCHQ? The NSA? Have the opportunities for the misuse of such a list been considered? Function creep applies here as well – and it’s equally difficult to guard against!

7     What was that letter to the ISPs about?

You know, the letter that got leaked, asking the ISPs to keep doing what they were already doing, but allow you to say that this was a great new initiative? Are you really ‘at war’ with the ISPs? Or does the letter reveal that this initiative of yours is essentially a PR exercise, aimed at saying that you’re doing something when in reality you’re not? Conversely, have you been talking to the ISPs in any detail? Do you have their agreement over much of this? Or are you going to try to ‘strong-arm’ them into cooperating with you in a plan that they think won’t work and will cost a great deal of money, time and effort? For a plan like this to work you need to work closely with them, not fight against them.

8     Are you going to get the ISPs to block Facebook?

I have been wondering about this for a while – because Facebook regularly includes images and pages that would fit within your apparent definitions, particularly as regards violence against women, and Facebook show no signs of removing them. The most they’ve done is remove advertisements from these kinds of pages – so anyone who accesses Facebook will have access to this material. Will the default be for Facebook to be blocked? Or do you imagine you’re going to convince Facebook to change their policy? If you do, I fear you don’t understand the strength of the ‘First Amendment’ lobby in the US… which brings me to another question

9     How do you think your plans will go down with US internet companies?

All I’ve seen from Google so far have been some pretty stony-faced comments – but for your plan to work you need to be able to get US companies to comply. Few will do so easily and willingly, partly on principle (the First Amendment really matters to most Americans), partly because it will cost them money, and partly because it will thoroughly piss off many of their American customers. So how do you plan to get them to comply? I assume you do have a plan…

10     Do you really think these plans will stop the ‘corrosion’ of childhood?

That’s my biggest question. As I’ve blogged before, I suspect this whole thing misses the point. It perpetuates a myth that you can make the internet a ‘safe’ place, and absolves parents of the real responsibility they have for helping their kids to grow up as savvy, wary and discerning internet users. It creates a straw man – the corrosion of childhood, such as it exists, comes from a much broader societal problem than internet porn, and if you focus only on internet porn, you can miss all the rest.

Plans like these, worthy though they may appear, do not, to me, seem likely to be in any way effective – the real ‘bad guys’ will find ways around them, the material will still exist, will keep being created, and we’ll pretend to have solved the problem – and at the same time put in a structure to allow censorship, create a deeply vulnerable database of ‘untrustworthy people’, and potentially alienate many of the most important companies on the internet. I’m not convinced it’s a good idea. To say the least.

Safe…. or Savvy?

What kind of an internet do we want for our kids? And, perhaps more importantly, what kind of kids do we want to bring up?

These questions have been coming up a lot for me over the last week or so. The primary trigger has been the re-emergence of the idea, seemingly backed by David Cameron (perhaps to distract us from the local elections!), of comprehensive, ‘opt-out’ porn blocking. The idea, apparently, is that ISPs would block porn by default, and that adults would have to ‘opt out’ of the porn blocking in order to access pornographic websites. I’ve blogged on the subject before – there are lots of issues connected with it, from slippery slopes of censorship to the creation of databases of those who ‘opt out’, akin to ‘potential sex-offender’ databases. That, though, is not the subject of this blog – what I’m interested in is the whole philosophy behind it, a philosophy that I believe is fundamentally flawed.

That philosophy, it seems to me, is based on two fallacies:

  1. That it’s possible to make a place – even virtual ‘places’ like areas of the internet – ‘safe’; and
  2. That the best way to help kids is to ‘protect’ them

For me, neither of these is true – ultimately, both are actually harmful. The first idea promotes complacency – because if you believe an environment is ‘safe’, you don’t have to take care, you don’t have to equip kids with the tools they need; you can just leave them to it and forget about it. The second idea magnifies this problem by encouraging a form of dependency – kids will ‘expect’ everything to be safe for them, and they won’t be as creative, as critical, or as analytical as they should be: first because their sanitised and controlled environment won’t allow it, and second because they’ll just get used to being wrapped in cotton wool.

Related to this is the idea, which I’ve seen discussed a number of times recently, of electronic IDs for kids, to ‘prove’ that they’re young enough to enter these ‘safe’ areas where kids are ‘protected’ – another laudable idea, but one fraught with problems. There’s already anecdotal evidence of the sale of ‘youth IDs’ on the black market in Belgium, to allow paedophiles access to children’s areas on the net – a kind of reverse of the more familiar sale of ‘adult’ IDs to kids wanting to buy alcohol or visit nightclubs. With the growth of databases in schools (about which I’ve also blogged), the idea that a kid’s electronic ID would actually guarantee that a ‘kid’ is a kid is deeply flawed. ‘Safe’ areas may easily become stalking grounds…

There’s also the question of who would run these ‘safe’ areas, and for what purpose? A lovely Disney-run ‘safe’ area that is designed to get children to buy into the idea of Disney’s movies – and to buy (or persuade their parents to buy) Disney products? Politically or religiously run ‘safe’ areas which promote, directly or indirectly, particular political or ethical standpoints? Who decides what constitutes ‘unacceptable’ material for kids?

So what do we need to do?

First of all, to disabuse ourselves of these illusions. The internet isn’t ‘safe’ – any more than anywhere in the real world is ‘safe’. Kids can have accidents, meet ‘bad’ people and so on – just as they do in the real world. Remember, too, that the whole idea of ‘stranger danger’ is fundamentally misleading – most abuse that kids receive comes from people they know, people in their family or closely connected to it.

That doesn’t mean that kids should be kept away from the internet – quite the opposite. The internet offers massive opportunities to kids – and they should be encouraged to use it from a young age, but to use it with intelligence, with a critical and analytical outlook. Kids are far better at this than most people seem to give them credit for – they’re much more ‘savvy’ instinctively than we often think. That ‘savvy’ approach should be encouraged and supported.

What’s more, we have to understand our roles as parents, as teachers, as adults in relation to kids – we’re there to help, and to support, and to encourage. My daughter’s just coming up to six years old, and when she wants to know things, I tell her. If she’s doing something I think is too dangerous, I tell her – and sometimes I stop her. BUT, much of the time – most of the time – I know I need to help her rather than tell her what to do. She learns things best in her own way, in her own time, through her own experience. I watch her and help her – but not all the time. I encourage her to be independent, not to take what people say as guaranteed to be true, but to criticise and judge it for herself.

I don’t always get it right – indeed, I very often get it wrong – but I do at least know that this is how it is, and I try to learn. I know she’s learning – and I know she’ll make mistakes too. She’ll also encounter some bad stuff when she starts exploring the internet for real – I don’t want to stop her encountering it – I want to equip her with the skills she needs to deal with it, and to help her through problems that arise as a result.

I want a savvy kid – not the illusion of a safe internet. Isn’t that a better way?

Privacy, Parenting and Porn

One of the stories doing the media rounds today surrounded the latest pronouncements from the Prime Minister concerning porn on the internet. Two of my most commonly used news sources, the BBC and the Guardian, had very different takes on it. The BBC suggested that internet providers were offering parents an opportunity to block porn (and ‘opt in’ to website blocking), while the Guardian took it exactly the other way – suggesting that users would have to opt out of the blocking, or, to be more direct, to ‘opt in’ to being able to receive porn.

Fool that I am, I fell for the Guardian’s version of the story (as did a lot of people, judging from the buzz on Twitter), which seems now to have been thoroughly debunked, with the main ISPs saying that the new system would make no difference, and bloggers like the excellent David Meyer of ZDNet making it clear that the BBC was a lot closer to the truth. The idea would be that parents would be given the choice as to whether to accept the filtering/blocking system – which, on the face of it, seems much more sensible.

Even so, the whole thing sets off a series of alarm bells. Why does this sort of thing seem worrying? The first angle that bothers me is the censorship one – who decides what is filtered and what is not? Where do the boundaries lie? One person’s porn is another person’s art – and standards are constantly changing; cultural and religious attitudes all come into play. Now I’m not an expert in this area – and there are plenty of people who have written and said a great deal about it, far more eloquently than me – but at the very least it appears clear that there are no universal standards, and that decisions as to what should or should not be put on ‘block lists’ need to be made very carefully, with transparency about the process and accountability from those who make the decisions. There needs to be a proper notification and appeals process – because the decisions made can have a huge impact. None of that appears to be true of most ‘porn-blocking’ systems, including the UK’s Internet Watch Foundation, often very misleadingly portrayed as an example of how this kind of thing should be done.

The censorship side of things, however, is not the angle that interests me the most. Two others are of far more interest: the parenting angle, and the privacy angle. As a father myself, of course I want to protect my child – but children need independence and privacy, and need to learn how to protect themselves. The more we try to wrap them in cotton wool, to make their world risk-free, the less able they are to learn how to judge for themselves and to protect themselves. If I expect technology, the Prime Minister or the Internet Watch Foundation to do all the work for me, not only am I abdicating responsibility as a parent but I’m denying my child the opportunity to learn and to develop. The existence of schemes like the one planned could work both ways at once: it could make parents think that their parenting job is done for them, and it could also reduce children’s chances to learn to discriminate, to decide, and to develop their moral judgment…

….but that is, of course, a very personal view. Other parents might view it very differently – what we need is some kind of balance, and, as noted above, proper transparency and accountability.

The other angle is that of privacy. Systems like this have huge potential impacts on privacy, in many different ways. One, however, is of particular concern to me. First of all, suppose the Guardian were right, and you had to ‘opt in’ to be able to view the ‘uncensored internet’. That would create a database of people who might be considered ‘people who want to watch porn’. How long before that becomes something that can be searched when looking for potential sex offenders? If I want an uncensored internet, does that make me a potential paedophile? Now the Guardian appears to be wrong, and instead we’re going to have to opt in to accept the filtering system – so there won’t be a list of people who want to watch porn, but instead a list of people who want to block porn. It wouldn’t take much work, however, on the customer database of a participating ISP to select all those users who had the option to choose the blocking system and didn’t take it. Again, you have a database of people who, looked at from this perspective, want to watch porn…
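To make the point concrete, here is a minimal sketch of how trivially such a list falls out of routine customer records – the record structure and customer IDs are entirely hypothetical:

```python
# Hypothetical ISP records: (customer_id, offered_filter, accepted_filter)
customers = [
    ("c1001", True, True),    # offered the filter, turned it on
    ("c1002", True, False),   # offered the filter, declined it
    ("c1003", False, False),  # never offered the filter at all
]

# One line turns ordinary billing data into a 'wants to watch porn' list.
declined_filter = [cid for cid, offered, accepted in customers
                   if offered and not accepted]
print(declined_filter)  # ['c1002']
```

No sinister new database has to be built – the ‘list’ is implicit in records the ISP keeps anyway.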

Now maybe I’m overreacting – maybe I’m thinking too much about what might happen rather than what will happen – but slippery slopes and function creep are far from rare in this kind of field. I always think of the words of Bruce Schneier, on a related subject:

“It’s bad civic hygiene to build technologies that could someday be used to facilitate a police state”

Now I’m not suggesting that this kind of thing would work like this – but the more ‘lists’ and ‘databases’ we have of people who don’t do what’s ‘expected’ of them, or what society deems ‘normal’, the more opportunities we create for potential abuse. We should be very careful…