IT Law in Ireland
Information Technology law issues with a focus on freedom of expression, privacy and other fundamental rights.
Monday, December 08, 2008
Internet Watch Foundation blocks Wikipedia
The Internet Watch Foundation's decision to block Wikipedia presents all sorts of interesting problems for the law and civil liberties. There is no legislation underpinning the IWF, which is a purely private body. There is no judicial control of its activities, and the process by which it blocks sites is particularly opaque (it does not notify site owners either before or after sites are blocked, nor does it offer a right to be heard). It does claim to offer a right of appeal against blocking, but the appeal lies not to an independent body but to a division of the Metropolitan Police. In short, it has (with government backing) implemented a remarkable system of censorship which departs from almost every traditional understanding of freedom of expression in the UK.
I've been following the development of this system for some time now, and I spoke about some of these issues in this paper at the 2008 BILETA Conference in Glasgow. Here are some excerpts (references and hyperlinks removed):
Regulation in the UK: “Cleanfeed”
Background
Cleanfeed has its genesis in the Internet Watch Foundation (IWF), which was founded in 1996 by way of agreement between the government, police and the Internet Service Providers' Association (ISPA). Originally termed the Safety Net Foundation, the body was established in response to warnings by the Metropolitan Police that ISPs faced liability for hosting illicit content – particularly child pornography – unless they established procedures for removing this material. The IWF provided a voluntary mechanism to monitor and report illegal material on the UK internet (including a hotline for members of the public to report material), and to coordinate a notice and takedown procedure in respect of this material.
Although a private, charitable body with no formal state representation on its board, the IWF nevertheless enjoys public funding (by way of the EU Safer Internet Initiative and government grants for specific projects) as well as funding from the internet industry in the UK. It also enjoys public recognition as the only private body recognised as a relevant authority under section 46 of the Sexual Offences Act 2003 – granting it immunity in the carrying out of its functions.
In carrying out this hotline function the IWF drew up a database (formally known as the Child Sexual Abuse Content URL List) of particular URLs which its staff had determined hosted child pornography. That list is said by the IWF to typically contain between 800 and 1,200 URLs at any time...
In November 2002 the IWF received legal advice indicating that it could make this list available to members for the purpose of blocking these images, to prevent their customers from being inadvertently exposed to them. In late 2004 this advice was ultimately acted on via a system whereby members and others can access the database under licence (for a size-related fee of up to £5,000 per year). This is not limited to ISPs – Google, for example, has chosen to implement the system in order to filter search results.
It would appear from media reports that this took place following pressure from the children's charity NCH and intervention from the Home Office. BT then appears to have taken the initiative, both in seeking to have the list made available and in developing a technical system whereby subscriber access to the web could effectively be filtered against it. BT did this in 2004 via what was internally termed “Cleanfeed” – a two-stage filtering process intended to minimise both false positives and the slowdown of connections. The promise of this system was that it appeared to offer a cost-effective means of implementing filtering and to limit overblocking by working at the more granular level of individual URLs rather than domain names. BT also agreed to make its solution available to other UK ISPs.
(It should be noted that even though the term Cleanfeed appears to have stuck as a generic description for this blocking, the correct name for this technology is the BT Anti Child Abuse Initiative – “Cleanfeed” is a trade mark of the THUS group of companies and is used by them to describe voluntary filtering at an end-user level.)
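To make the two-stage design concrete, here is a minimal sketch of how a hybrid blocking system of this general kind might work. It is purely illustrative: the addresses, URLs and data structures below are invented for exposition, and nothing here is drawn from BT's actual implementation.

```python
# Illustrative sketch of a two-stage hybrid filter (not BT's actual code).
# Stage 1: a fast IP-level check routes only "suspect" traffic to a proxy.
# Stage 2: the proxy checks the full URL against the blacklist.

from urllib.parse import urlparse

# Hypothetical data: IPs hosting at least one blacklisted URL, and the
# blacklist of exact URLs (the real list is the IWF's and is not public).
SUSPECT_IPS = {"192.0.2.10"}
BLACKLISTED_URLS = {"http://192.0.2.10/banned/page.html"}

def resolve(host: str) -> str:
    """Stand-in for DNS resolution; hard-coded for this example."""
    return "192.0.2.10" if host == "192.0.2.10" else "198.51.100.1"

def handle_request(url: str) -> str:
    host = urlparse(url).hostname
    # Stage 1: most traffic never touches the proxy, keeping the system
    # cheap and avoiding a general slowdown of connections.
    if resolve(host) not in SUSPECT_IPS:
        return "PASS (routed normally, never inspected)"
    # Stage 2: only suspect IPs are inspected at URL granularity, so
    # innocent pages on the same host are not blocked.
    if url in BLACKLISTED_URLS:
        # Critics note the user is shown a generic error page here,
        # rather than being told the page was deliberately blocked.
        return "404 Not Found"
    return "PASS (inspected by proxy, not on the blacklist)"

print(handle_request("http://192.0.2.10/banned/page.html"))  # 404 Not Found
print(handle_request("http://192.0.2.10/innocent.html"))     # PASS (inspected)
print(handle_request("http://example.org/"))                 # PASS (never inspected)
```

The design choice worth noting is that stage one keeps almost all traffic away from the proxy, which is what made the approach cheap enough for ISPs to adopt; it is also what creates an observable difference between suspect and ordinary traffic, a point returned to below.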
This technical implementation of the IWF blacklist has attracted some criticism. In particular, some have complained that by tackling only port 80 (http://) traffic it fails to deal with IRC, instant messaging or peer-to-peer access to child pornography. Moreover, the way in which it is implemented enables an “oracle” attack, whereby users with a moderate degree of technical knowledge can identify which sites are on the blacklist. More fundamentally, the complaint has been made that it misleads the end user by presenting an error message rather than indicating that a site has been blocked.
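To see the scope limitation in outline, the filter amounts to a dispatch rule of roughly the following shape. This is a simplified sketch: real deployments work at the network layer rather than in application code, and the port numbers for the bypassing protocols are merely conventional defaults.

```python
# Illustrative only: filtering applies to port 80 (HTTP) traffic, so
# other channels carrying the same material pass through untouched.

FILTERED_PORTS = {80}

def is_inspected(port: int) -> bool:
    """True if traffic on this port is checked against the blacklist."""
    return port in FILTERED_PORTS

print(is_inspected(80))    # True  - ordinary web traffic is checked
print(is_inspected(443))   # False - encrypted (https) traffic bypasses it
print(is_inspected(6667))  # False - IRC bypasses it
print(is_inspected(6881))  # False - a common peer-to-peer port bypasses it
```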
Nevertheless, once BT provided a voluntary “proof of concept” there soon followed calls for other ISPs to follow suit – by compulsion if necessary. For the most part, ISPs agreed to do so; where they did express reluctance, they cited commercial and practical rather than constitutional and principled concerns. For the rest, the matter was soon put effectively beyond debate when the Government in 2006 signalled its intention to introduce legislation unless 100% coverage was achieved “voluntarily”. As of 31 December 2007, it appears that all UK broadband providers have adopted filtering against the IWF blacklist – whether or not they use the particular BT Cleanfeed hybrid blocking solution.
Issues arising
To what extent, then, does the “Cleanfeed” system adopted throughout the UK implicate the concerns we raised earlier?
Transparency in introducing regulation
Critics have expressed concern about the way in which the Cleanfeed system has been introduced. Even though at an individual level the actions of ISPs might be described as voluntary self-regulation, taken together the effect is to subject internet users to a system of state-directed censorship of the internet, with no legislative basis of any description. Indeed, the system adopted again appears to conflict with existing legislation, such as the prohibition in the E-Commerce Directive on imposing general duties on ISPs to monitor activity on their networks. Edwards, for example, has questioned the rule of law implications:
“This censorship needs no laws to be passed, no court to rule, with the publicity that entails. It only needs the collaboration, forced or otherwise, of ISPs. ISPs are not public bodies; their acts are not subject to judicial review. Nor are they traditional news organisations; their first concern (quite properly) is for their shareholders and their own legal and PR risks, not for values like freedom of expression. Research has shown that most ISPs, asked to remove or block objectionable, but not illegal, content, or face legal pressure, tend to take down first, and worry about human rights afterwards. And even those ISPs who might have fought against censorship will have no choice after 2007.
Does this all sound familiar? It should, because it’s exactly what Google recently faced media outrage over, when they agreed to censor their own search engine to fit the requirements of the Chinese government. Here in the UK, the state itself, not a private company, proposing China-style censorship tools as part of a compulsory package for all ISPs, doesn't seem to have raised many eyebrows.”
Evading public law norms
Suppose that a site owner suffers damage from finding their site wrongfully blacklisted. What remedy might they have? ISPs as private actors do not appear to be subject to judicial review, and in the absence of any contractual relationship it is difficult to see what claim a site owner might have against an ISP.
A stronger case might be made against the IWF itself. Despite its nominally private status, the IWF has accepted that it is “a public body” for the purposes of the European Convention on Human Rights and has undertaken to be governed subject to the Human Rights Act 1998. Although it is not clear whether this concession would be binding if a judicial review were brought, it might provide the basis for such an action notwithstanding the lack of “any visible means of legal support” for the IWF (compare R. v. Panel on Takeovers and Mergers, ex p. Datafin). Having said that, however, the remedies offered by judicial review are relatively limited, and other governance norms which would apply to public bodies (such as the Freedom of Information Act 2000) appear to be lacking. This case would appear to prove the truth of the point made by Akdeniz that:
“When censorship is implemented by government threat in the background, but run by private parties, legal action is nearly impossible, accountability difficult, and the system is not open or democratic.”
Fair procedures
The IWF does not notify site owners of an intention to blacklist a URL or even an entire domain, nor does it notify them once a decision has been made. While it does offer an internal review (to those who happen to discover that their sites have been blocked), that mechanism does not provide for any appeal to a court – instead, the IWF makes a final determination on the legality of material in consultation with a specialist unit of the Metropolitan Police. There is no provision for any further or judicial appeal.
Transparency in application
The IWF is in many regards a transparent organisation, with, for example, policy documents and minutes of all its board meetings available online.
It does not publish the list of blacklisted URLs, arguing (not unreasonably) that to do so would be to provide a roadmap for paedophiles. There appears to be less justification, however, for the practice adopted by BT and other ISPs of serving error pages instead of indicating that a site has been blocked. This deceptive approach has not been followed in other systems blocking child pornography...
Some such notification would appear to be the minimum necessary in order to ensure that there is a feedback mechanism to identify sites which have been wrongly blocked (particularly as there is no other provision for notification).
The lack of transparency in the administration of the Cleanfeed system by BT is also reflected in a dispute as to the effect the system has had. Soon after the introduction of Cleanfeed, BT publicly claimed that it had succeeded in thwarting over 230,000 attempts to access child pornography over a three-week period – a figure which unsurprisingly caught the attention of the media. The ISPA was, however, sceptical of these figures, suggesting that at a very basic level they appeared to confuse visits with hits and pages with images. Moreover, as users might try to reload pages, and some hits might be due to images in spam, the numbers may have little validity. Notwithstanding this confusion, the BT figure still appeared to be instrumental in shaping public opinion as to the need for this system.
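The scale of the possible inflation is easy to illustrate with purely hypothetical numbers. The figures below are invented for exposition and bear no relation to BT's actual traffic data.

```python
# Hypothetical arithmetic only: how raw "hits" can overstate distinct
# access attempts. Every number below is invented for illustration.

hits = 230000         # the headline figure of blocked requests
images_per_page = 20  # one page fetch can trigger many image requests
reloads = 2           # retries and reloads repeat every request

requests_per_visit = (1 + images_per_page) * reloads
print(round(hits / requests_per_visit))  # about 5,500 visits, not 230,000
```

And since some hits may come from images embedded in spam, even the smaller figure would overstate the number of deliberate attempts.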
Intermediaries and incentives; Proportionality
It has emerged that one ISP – Be Unlimited – apparently unwilling or unable to incur the cost associated with a BT-style hybrid blocking system, has complied with government demands to implement filtering by simply adopting crude IP address blocking, resulting in collateral damage to what may have been many thousands of innocent sites which happened to share a host with a blacklisted site. This would appear to confirm the fear expressed earlier that intermediaries, faced with difficulties in complying with regulatory demands, will systematically overblock.
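The difference in granularity can be sketched briefly. In the illustration below (all hostnames and addresses are invented), IP-level blocking sweeps in every site on a shared host, where URL-level blocking would touch only the single listed page.

```python
# Illustrative contrast between URL-level and IP-level blocking on a
# shared host. All names and addresses here are invented examples.

SHARED_HOST_IP = "192.0.2.10"

# Thousands of unrelated sites can sit behind one IP on shared hosting;
# three stand in for them here.
hosted_sites = {
    "blacklisted.example":   SHARED_HOST_IP,
    "innocent-blog.example": SHARED_HOST_IP,
    "small-shop.example":    SHARED_HOST_IP,
}

BLACKLISTED_URLS = {"http://blacklisted.example/bad/page.html"}
BLOCKED_IPS = {SHARED_HOST_IP}  # the crude approach: block the whole address

def blocked_by_url(url: str) -> bool:
    return url in BLACKLISTED_URLS            # only the listed page

def blocked_by_ip(site: str) -> bool:
    return hosted_sites[site] in BLOCKED_IPS  # everything on the host

for site in hosted_sites:
    print(site, "blocked?", blocked_by_ip(site))
# All three print True: the two innocent sites are collateral damage,
# whereas blocked_by_url() would have caught only the single listed URL.
```

On ordinary shared hosting one address can serve thousands of unrelated domains, so the choice of blocking granularity is effectively a choice of how much collateral damage to accept.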
Function creep
Fears of function creep appear to have been borne out recently when the Home Secretary, Jacqui Smith, announced her intention to extend the Cleanfeed system to cover extremist Islamic websites: “On the threat from the internet, Smith said the government was already working closely with the communications industry to take action against paedophiles, and planned to target extremist websites in the same way. ‘Where there is illegal material on the net, I want it removed,’ she said.
The move comes after details were revealed of an extremist website containing threats against the prime minister and calling for the creation of a ‘British al-Qaida’.
‘If we are ready and willing to take action to stop the grooming of vulnerable young [people] on social networking sites, then I believe we should also take action against those who groom vulnerable people for the purposes of violent extremism,’ she said.”
Indeed, even before this there was an interesting comment in the IWF board minutes from January 2007 which appeared to indicate that the blacklist was already being used by some IWF members as an intelligence gathering tool, apparently intended to identify users attempting to access child pornography. Would there be public support for such an intelligence-led use of the system? In a recent survey on behalf of a children's charity, it was said that 89% of the public would support ISP measures to track access to “paedophile” sites, while in Canada child-safety advocates have similarly expressed support for systems which would require ISPs to identify and report to police users who view child pornography.
Conclusion
The system of filtering of child pornography which has now been adopted by UK ISPs raises some difficult issues of transparency, process and legitimacy, particularly insofar as it appears to blur the public/private boundary and to leave unclear the process by which law is made and enforced. This appears to present an ongoing threat to civil liberties insofar as the system – with its reference to a central blacklist – appears to be readily extensible to cover, for example, “extreme” political content.