
The Intermediary Rules, Takedowns and Free Speech

The Information Technology (Intermediaries Guidelines) Rules, 2011 (the “Rules”) have been the subject of much derision, having been understood to be an attack on free speech online. However, it isn’t entirely clear to what extent the Rules themselves are impediments to free speech: free speech in India has never been an absolute right, and the circumstances in which the Rules say that user-generated content online should be taken down are circumstances in which it would, in any case, have been possible to have content taken down.

If one were to look at the Rules, Rule 3(1) requires intermediaries to publish ‘rules and regulations, privacy policy and user agreement for access or usage of the intermediary’s computer resource by any person’. This Rule mandates what is, in any case, almost standard industry practice: publishing Terms of Service (a ‘ToS’) applicable to a website and publishing a privacy policy.

Rule 3(2) states that an intermediary’s ToS must ‘inform’ users ‘not to host, display, upload, modify, publish, transmit, update or share’ certain kinds of information specified later in the Rule. The operative word is ‘inform’, and there is a difference between a mandatory requirement and a prescriptive guideline. Rule 3(2) by itself does not impose a mandatory requirement on users, although, read with other provisions of a ToS that make the document a binding contract, it may amount to one, particularly since, under Rule 3(5), intermediaries are required to inform users that non-compliance with the ToS gives the intermediary the right to immediately terminate their access or usage rights to the computer resource. Termination is, however, at the discretion of the intermediary; the intermediary is not required to terminate users’ access or usage rights for ToS non-compliance. Thus, the ‘requirement’ that users not upload, publish or otherwise share content which contravenes Rule 3(2) is not a direct result of the contents of Rule 3(2) alone.

The specific kinds of information referred to in Rule 3(2) are information which:
(a) belongs to another person and to which the user does not have any right to;
(b) is grossly harmful, harassing, blasphemous, defamatory, obscene, pornographic, paedophilic, libellous, invasive of another's privacy, hateful, or racially, ethnically objectionable, disparaging, relating or encouraging money laundering or gambling, or otherwise unlawful in any manner whatever;
(c) harm minors in any way;
(d) infringes any patent, trademark, copyright or other proprietary rights;
(e) violates any law for the time being in force;
(f) deceives or misleads the addressee about the origin of such messages or communicates any information which is grossly offensive or menacing in nature;
(g) impersonate another person;
(h) contains software viruses or any other computer code, files or programs designed to interrupt, destroy or limit the functionality of any computer resource;
(i) threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign states, or public order or causes incitement to the commission of any cognisable offence or prevents investigation of any offence or is insulting any other nation.

The terms used in the list are broad and have not been specifically defined. However, lack of clarity within the Rules aside, the content contemplated by Rule 3(2) is, as argued earlier, for the most part illegal beyond the Internet as well as on it, as even a superficial study of Indian content laws would show.

In fact, the 2011 IT Act Rules do not appear to have specifically ‘targeted’ Internet content. From a content-law point of view, the Rules appear to have been drafted to virtually mirror the regulations applicable to other forms of content: TV programmes and films. (See Rule 6 of the Cable Television Networks Rules, 1994, and the Guidelines for Certification of Films for Public Exhibition under the Cinematograph Act, 1952.)

Such mirroring, obviously, isn’t especially practical given the dynamics of the Internet and of online interaction, especially on social media sites. However, the promulgation of the 2011 IT Act Rules could well be an attempt to reuse a tried-and-tested (albeit imperfect) formula from the realms of broadcasting and cinematography.

There is a good chance that the 2011 IT Act Rules are simply the product of assuming that the dynamics which apply to regulating limited content for broadcast (including film content) can also reasonably be applied to virtually unlimited online content (including user-generated content). That assumption, if indeed it was made by those who drafted the Rules, appears challengeable, to say the least, and, as such, there appears to be good reason to redraft the language of Rule 3(2).

It could be argued that the Rule would have been just as effective if it had merely said that user-generated content should not be content that is ‘unlawful in any manner’. This would have covered what appears to be the intention behind the Rule 3(2) list of objectionable content without having the Rule suffer (quite as much) from the lack of clarity for which it has been heavily criticised.

After listing the kinds of content which should not be published by users, the Rules go on to state, in Rule 3(3): ‘The intermediary shall not knowingly host or publish any information or shall not initiate the transmission, select the receiver of transmission, and select or modify the information contained in the transmission as specified in sub-rule (2)’, although unmanned ‘temporary or transient or intermediate storage of information automatically within the computer resource as an intrinsic feature of such computer resource for onward transmission or communication to another computer resource’ is an exception to this general prohibition. Further, if an intermediary were to remove access to illegal content (as contemplated by Rule 3(2)) after it came to the intermediary’s actual knowledge, the intermediary would not be deemed to have flouted Rule 3(3).

All that Rule 3(3) therefore appears to intend to say is: ‘Intermediaries shall not host or publish illegal content, although the automated temporary or transient or intermediate storage of such content would not be considered to fall within the scope of this prohibition. Also, if intermediaries remove illegal content once they become aware of it, they are in the clear.’ Unfortunately, the terms in which illegal content has been contemplated by Rule 3(2) make this relatively simple provision ‘cloudy’, and, to stay in the clear of the Rules with reference to Rule 3(3), there is a possibility that intermediaries would take down any content which is complained of.
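Since this reading of Rule 3(3) turns on a small number of conditions, it may help to set it out as a simple decision procedure. The sketch below is purely illustrative: the function and parameter names are invented labels for the concepts in the Rule, not terms drawn from the Rules themselves, and the booleans stand in for what would, in practice, be contested questions of fact and law.

```python
# Illustrative sketch of the safe-harbour logic of Rule 3(3) as read above.
# All names are hypothetical; this reflects the reading offered here,
# not any authoritative construction of the Rules.

def complies_with_rule_3_3(knowingly_hosted: bool,
                           is_transient_automated_storage: bool,
                           removed_on_actual_knowledge: bool) -> bool:
    """Return True if, on this reading, the intermediary stays within Rule 3(3)."""
    # Automated temporary/transient/intermediate storage for onward
    # transmission falls outside the prohibition altogether.
    if is_transient_automated_storage:
        return True
    # Content hosted without knowledge, or removed once the intermediary
    # obtains actual knowledge of it, keeps the intermediary in the clear.
    if not knowingly_hosted or removed_on_actual_knowledge:
        return True
    # Knowingly hosting prohibited content and leaving it up does not.
    return False
```

As the sketch suggests, the provision itself is straightforward; the difficulty lies entirely in determining whether a given piece of content falls within the broad terms of Rule 3(2) in the first place.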

Unsurprisingly, the takedown of content has become extremely contentious. Rule 3(11) requires intermediaries to designate a Grievance Officer to whom complaints may be made. And under Rule 3(4), an intermediary is required to take down content which is in contravention of Rule 3(2):
  • upon obtaining knowledge by itself, or upon being given actual knowledge in writing, etc. by an affected person, of any prohibited content as mentioned in Rule 3(2); and
  • within thirty-six hours and, where applicable, work with the user or owner of such information to disable such content as is in contravention of Rule 3(2). [Update: The 36-hour takedown requirement was modified by a clarification issued on March 18, 2013, which states that an intermediary is only required to respond to or acknowledge the complainant within thirty-six hours; it then has a month to act on the complaint.]
Pertinently, Rule 3(4) does not state that a takedown must be effected every time a complaint that content violates Rule 3(2) is received. It states that content must be taken down whenever an intermediary becomes aware that content violates Rule 3(2).
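To make the amended timeline concrete, the sketch below models a complaint under Rule 3(4) as a record carrying the two deadlines that appear to apply after the March 2013 clarification: acknowledgment within thirty-six hours of receipt, and action within one month. The class and method names are invented for illustration, and one ‘month’ is approximated as thirty days.

```python
# Illustrative model of the Rule 3(4) timeline after the March 2013
# clarification: acknowledge a complaint within 36 hours, act on it
# within a month. All names are hypothetical; a "month" is approximated
# here as 30 days.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class TakedownComplaint:
    received_at: datetime
    acknowledged_at: Optional[datetime] = None
    acted_on_at: Optional[datetime] = None

    @property
    def acknowledgment_deadline(self) -> datetime:
        return self.received_at + timedelta(hours=36)

    @property
    def action_deadline(self) -> datetime:
        return self.received_at + timedelta(days=30)

    def is_compliant(self, now: datetime) -> bool:
        """True while neither deadline has been missed on the facts recorded."""
        if self.acknowledged_at is None and now > self.acknowledgment_deadline:
            return False
        if self.acted_on_at is None and now > self.action_deadline:
            return False
        return True

# Example: a complaint received at noon on 1 April must be acknowledged
# by midnight going into 3 April and acted on by 1 May.
complaint = TakedownComplaint(received_at=datetime(2013, 4, 1, 12, 0))
print(complaint.acknowledgment_deadline)            # 2013-04-03 00:00:00
print(complaint.is_compliant(datetime(2013, 4, 2)))  # True
```

Notably, nothing in this timeline forces the ‘action’ to be a takedown: as discussed above, the obligation attaches to content the intermediary knows to be in contravention of Rule 3(2), not to every complaint received.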

There is a critical difference between saying that content must be taken down when a complaint relating to it is received, and saying that illegal content must be taken down. The Rules say the latter: that illegal content must be taken down. Admittedly, the manner in which illegal content has been defined in Rule 3(2) is very susceptible to interpretation, and it should have been far clearer. Nonetheless, to state that all content must be taken down merely upon a complaint being received does not appear to be what the Rules intend or what they state.

The problem for users is likely to be that intermediaries will act over-enthusiastically and take down any and all content in relation to which they receive a complaint. Should an intermediary refuse to take down content, it would be making a determination that the content does not violate Rule 3(2), and the intermediary may — depending on the enthusiasm of the complainant — be required to defend its position in court. And an intermediary may have no desire to stick its own neck out on behalf of a user, so to speak.

This situation is, however, no different from the situation which would exist even without the Rules coming into play: the likelihood of an intermediary keeping content up after receiving a complaint or legal notice in relation to it is low. And there is no item in the list of ‘prohibited’ information in Rule 3(2) in relation to which a legal notice could not legitimately be sent to an intermediary. If content is illegal under any law, an intermediary has an obligation not to publish it, independent of the provisions of Rules 3(2) and 3(4). In fact, Rule 3(6) clearly states that an ‘intermediary shall strictly follow the provisions of the Act or any other laws for the time being in force’.

What is different about the Rules is that Rule 3(4) contains a specific time period within which an intermediary must take down prohibited content. Considering that the time period does not appear to be unduly short, this may not make much of a difference to either an intermediary or a user in practical terms.

Another apparent shortcoming of the Rules is that there is no counter-notice procedure within their framework. However, given that there appears to be no mandatory takedown of content upon the receipt of a complaint either, it could be argued that there is nothing to prevent an intermediary from declining to take down content which it does not consider to be in violation of Rule 3(2) and replying to the complaint instead. This, too, would require an intermediary to stick its neck out on behalf of a user and in favour of the freedom of speech. Whether an intermediary would do this routinely (or ever!) is debatable.

The problem with the Rules therefore primarily seems to be that they first define illegal content in extremely broad terms in Rule 3(2), and then put the onus of determining whether or not content is illegal on intermediaries under Rule 3(4), in circumstances where an intermediary may not want to fight against an allegation that content is illegal given the all-inclusive language used in Rule 3(2). Simply stating that illegal content is prohibited in Rule 3(2), instead of drafting an ill-defined list, and explicitly limiting intermediary liability (possibly to nominal damages) would probably have made the Rules far less problematic.