Hate Speech on Social Media: Are Self-Regulatory Measures Enough?



October 31, 2019

Sticks and stones could break our bones, but the internet was only ever seen as an invention for progress. Spurred by the commercialisation of the internet and its mass accessibility to users worldwide, seemingly innocent web-based interfaces, initially designed with the motive of "making friends and meeting new people", were born. Labelled 'social media', in a mere matter of decades these transformed into "super-connected" (read: intrusive) platforms meticulously recording and adapting to every single interaction users have with them – gaining notoriety as untamed beasts capable of "influencing" how people think, to the extent of manipulating elections in "free and democratic" nation-states.

This invention – once heralded as a decentralised, free-for-all space facilitating the exchange of ideas and knowledge, with the potential to pave the way for future innovations – also became host to negative elements, including fringe groups, racial supremacists and religious bigots. These groups duly recognised the outreach of these platforms and manifested their beliefs by spreading hate and propaganda targeting certain racial, religious, regional or linguistic groups – often using social media as a tool of oppression against minorities and indigenous communities by frightening, humiliating or silencing them. The example of the Christchurch gunman – who, after spending years on social media advancing the cause of white power, decided that mere posts were not enough and that it was "time to make a real-life effort post" – is often cited as 'hate speech' transpiring into 'hate crime'.

This paper shall attempt to conceptualise 'hate speech' in the 21st-century "digital world" and shall impress upon the ramifications of such 'hate speech' on its victims. The focus of the paper shall then shift to a domestic case study – the Rohingya crisis in Myanmar and the role of the social media platform Facebook in the propagation of 'hate speech', whilst discussing its culpability and the prospective liabilities that may be imposed upon it.

The implications of inflicting harm upon an individual or community, either verbally or through written publications, are well established, and legislation and case law stand testament to the same. Aristotle's postulation that "Man is, by nature, a social animal" is well reflected in the Indian Supreme Court's understanding of the issue, as reputation itself was likened to life in the landmark case of Om Prakash Chautala v. Kanwar Bhan and Ors,[1] which held that "[w]hen reputation is hurt, a man is half-dead. It is dear to life and on some occasions, it is dearer than life" – reputation thereby becoming an inseparable facet of the 'Right to Life' envisaged under Article 21 of the Constitution of India.[2] An intentional defamatory attack on an individual's reputation has thus been equated to an attack on the individual's life itself.

In such a context, it is important to discuss the ramifications of an attack on an individual or a group on an online 'social media' platform. An argument often advanced by absolutist free speech thinkers with regard to dealing with 'hate speech' online is that one should merely "log out" of such platforms if one is offended by another's words. However, this rudimentary "solution" fails to account for how intrinsically these platforms have, in so short a span of time, embedded themselves in the day-to-day rigmarole of people's lives – everything from news to interactions with other humans is now extensively interconnected with these services. Notwithstanding the severity of the mental and emotional repercussions, the "solution" of merely "logging out" upon receiving 'hate speech' is nothing more than the enforcement of a form of social exclusion – a denial of the services of an increasingly inextricable platform, intertwined with modern life, whereby ideas and stories are shared among people. The silence of the State and of people in general towards such acts is at best tantamount to a form of implicit ostracisation, stemming either from ignorance or from indifference.

Before delving any further, it is imperative that the complex and contested term 'hate speech' be defined under a set of general parameters. One of the biggest roadblocks in doing so is striking a balance between the non-absolute right to freedom of speech and the protection of certain social groups from its rampant misuse in the form of 'hate speech', in such manner that the restrictions imposed are reasonable and neither broad nor vague, so as not to curtail genuine 'speech and expression'. This concern was well expressed by the Committee on the Elimination of Racial Discrimination (hereinafter, CERD), keeping in mind the detrimental effects of hate speech on groups protected under the International Convention on the Elimination of All Forms of Racial Discrimination (hereinafter, ICERD).[3]

Upon thorough examination of multiple human rights instruments, viz. the ICERD, the ICCPR and the UDHR, it may be argued that, albeit the term 'hate speech' is absent from these treaties and conventions and no definition within set parameters exists, they have more or less conceptualised it as offensive, inciting or discriminatory speech. Article 7 of the UDHR postulates that everyone is entitled to protection against discrimination and against "any incitement to such discrimination".[4] Despite providing that "any advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law", Article 20(2) of the International Covenant on Civil and Political Rights (ICCPR) remains ambiguous inasmuch as it fails to qualify the distinctions between 'advocacy' and 'incitement', and between 'hatred' and 'hostility'. Additionally, Articles 4, 5 and 7 of the ICERD lay out parameters for the identification of 'hate speech'; in particular, Article 4(a) conceptualises 'hate speech' as having four distinguishable variants: (i) the dissemination of ideas promoting racial superiority; (ii) the dissemination of ideas fostering hatred for a race; (iii) incitement of others to indulge in racially aggravated acts of discrimination; and (iv) violent acts motivated by racial hatred. This, however, was limited to 'racially aggravated' crimes only. It was not until as recently as 2013 that the CERD expressly defined 'hate speech', in General Recommendation No. 35, as "a form of other-directed speech which rejects the core human rights principles of human dignity and equality and seeks to degrade the standing of individuals and groups in the estimation of society".[5] It may be gathered from these treaties, conventions and other sources that 'hate speech' is an expression of hatred that is likely to incite or produce violence targeting a specific group on the basis of protected attributes such as race, religion, descent, nationality, ethnicity,[6] sex, disability or sexual orientation, whilst seeking to denigrate, humiliate, discriminate against or incite violence against the targeted group.[7][8]

As per the terms of these treaties, signatory State parties are obliged to prohibit the propagation of 'hate speech' under their respective domestic legislation. It is imperative to note that these obligations do not extend to non-state actors, viz. 'social media' platforms, despite their role in the implicit facilitation of such 'hate speech' on their forums. Interestingly, as recently as 2011, senior executives of leading 'social media' platforms such as Twitter took immense pride in hosting absolutely all forms of free speech and even referred to themselves as "the free-speech wing of the free-speech party." However, a few years of 'internet time' later, pressure groups and governmental pressure culminated in this absolutist approach to freedom of speech being put under scrutiny.

At this juncture, the author shall seek to address the issue at hand whilst citing and analysing the case study pertaining to the 2018 report of the OHCHR's Independent International Fact-Finding Mission (FFM) regarding the infliction of 'hate speech' in Myanmar on the online 'social media' platform Facebook, which was allegedly a vital cog in the ethnic cleansing of the Rohingya Muslims.[9]

The Rohingya – an ethnic group – were targeted via Facebook. In this context, the 'hate speech' was not merely discriminatory in nature but also culminated in incitement to violence. Notwithstanding the fact that such 'hate speech' is prohibited under human rights law, the obligation under the same extends only to States and not to a non-state actor like Facebook. States are required to prohibit and even criminalise severe forms of hate speech because the harm caused by such speech is irreversible. Furthermore, speech conveying a direct message inciting genocide is prohibited under the Genocide Convention and the Rome Statute of the ICC.[10] With due consideration to the fact that Facebook moderates and curates the content on its platform, it should be accountable for the propagation of 'hate speech'. The impediment in holding it accountable is that these provisions do not extend to a non-state actor like Facebook.

As stated earlier, whilst the substance of 'hate speech' propagated online is akin to its offline counterpart, the modus operandi of propagating 'hate speech' on the internet is problematic inasmuch as there exist lacunae vis-à-vis its moderation by States, its cross-jurisdictional character, the permanent nature of data storage, the instantaneous outreach of these platforms and its consequent impact, and the anonymity of users (or, in this situation, offenders).

In addition, there is the threat of over-moderation – the removal of genuine expression or content in order to evade liability under 'hate speech' laws – coupled with the problem posed by the itinerant nature of these platforms, whereby removed content may be re-uploaded by the same individual or group operating under a new user account created with merely a few clicks, bringing to light the drawbacks associated with the anonymity of the internet.[11] Moreover, every second that 'hate speech' exists online makes it all the more difficult to completely delete or erase.

The question of the extent to which legal liability may be imposed upon 'social media' intermediaries such as Facebook for contributing to such 'crimes against humanity' – the moral and ethical arguments notwithstanding – remains an unresolved grey area. The issues include the lack of a definitive jurisdiction, given that the United States of America, where most 'social media' platforms are based, has never ratified the Rome Statute, thereby preventing the ICC from taking cognizance; the uncertainty as to what exactly the "crime" is that these companies may be tried for; and the prospective counterproductive effect in countries with oppressive and corrupt judiciaries, where there is reasonable cause to believe that a 'free and fair trial' may not be granted to the victims.

It is also a known fact that Facebook does not merely display whatever data is uploaded by its users. The sheer expanse of data on its servers, and the transmission of that data to each user, steered the development of an artificial intelligence (AI) driven algorithm which, upon recording the past "likes" and "interactions" of a user, alters the presentation according to relevancy – viz. 'what' is seen first, and 'whose' posts and ads are displayed.

Therefore, with regard to the liability of Facebook and other 'social media' platforms, it is imperative to consider the following arguments. Notwithstanding the stipulation under the Alien Tort Statute of 1789, whereby non-citizens could file suits against American companies acting in contravention of various international law responsibilities and obligations, the Supreme Court of the United States (SCOTUS) in Kiobel v. Royal Dutch Petroleum held that "the statute does not apply to actions which occurred outside the jurisdiction of the United States".[12] This, coupled with the fact that the United States is not bound by the Rome Statute, means that the ICC is prevented from taking cognizance of such matters, consequently deterring the filing of suits against these companies in the first place.

Moreover, although Article 20(2) of the ICCPR pertains to 'hate speech', its scope is limited to establishing obligations owed only by State actors and does not encompass non-State actors and organisations such as Facebook – as is the case with other international human rights treaties. In light of such circumstances, certain strides have been made towards bringing 'social media' companies within the ambit of human rights law standards. The UN Guiding Principles on Business and Human Rights envision that companies respect human rights in their activities. These principles call upon companies to carry out 'due diligence' practices to identify potential human rights violations that may be committed through their services, so as to prevent contributions to adverse human rights impacts.[13] Regardless, these responsibilities are merely recommendatory in nature, and there continues to be no treaty in force legally obligating 'social media' companies to suppress or prohibit 'hate speech' across their channels.

The European approach looks promising: the European Commission took the initiative at the continental level by introducing to social media companies a 'Code of Conduct' intended to deter 'hate speech' on their respective platforms.[14] The code calls upon companies to review user-flagged content for 'hate speech', to erase such content from their platforms, and subsequently to report and suspend the user posting it. The European Commission also proposed that member States comply with the European Union's legal obligations by criminalising such online 'hate speech' through changes to their municipal law. Again, this code of conduct is a "mere commitment" undertaken by these 'social media' companies and is not legally binding upon them. Nonetheless, Germany has taken a significant step in transposing this code into domestic law binding on companies, and others are expected to follow suit in the foreseeable future. Although still in its nascent stages, the Network Enforcement Act of 2017 obligates companies to take down 'hate speech' and other impermissible content hosted on their websites within a stipulated timeframe after due review.

Returning to the events in Myanmar, wherein Facebook was employed as a communicatory platform in the course of the ethnic cleansing, it must be said that had the company complied with the aforementioned UN Guiding Principles, the situation could perhaps have been averted and multiple lives saved. The 'hate speech'-fuelled campaign orchestrated on Facebook in Myanmar proved decisive in the creation of a negative perception of the Rohingya Muslims. Facebook, however, was absolved of any liability, since amendments to Victorian-era legislation deem 'social media' companies mere intermediaries and facilitators of an open-access platform, thereby attaching no culpability to them. Even among more recent treaties and regulations under international law, companies are at best subject to non-binding self-regulatory mechanisms. Is this enough, though? Can the world place its best interests in the ethics and morality of corporate conglomerates? This remains the question that needs to be answered.


[1] Om Prakash Chautala v. Kanwar Bhan and Ors (2014) 5 SCC 417.

[2] ibid.

[3] UN Committee on the Elimination of Racial Discrimination, General Recommendation No. 35 on Combating Racist Hate Speech, CERD/C/GC/35 (26 September 2013).

[4] Universal Declaration of Human Rights, Article 7.

[5] Supra n 3.

[6] Article 1, ICERD.

[7] Nockleby, John T. (2000), “Hate Speech” in Encyclopaedia of the American Constitution, ed. Leonard W. Levy and Kenneth L. Karst, vol. 3. (2nd ed.), Detroit: Macmillan Reference US, pp. 1277–79. 

[8] Black's Law Dictionary, 'The Legalities of Hate Speech' <https://thelawdictionary.org/article/the-legalities-of-hate-speech/> accessed 8 September 2019.

[9] Human Rights Council, Report of the Independent International Fact-Finding Mission on Myanmar, UN Doc. A/HRC/39/64, para. 74 (24 August 2018).

[10] Article 19, ''Hate Speech' Explained: A Toolkit' <https://www.article19.org/data/files/medialibrary/38231/Hate_speech_report-ID-files–final.pdf> accessed 26 October 2019.

[11] Brown, Alexander, 'What Is So Special about Online (as Compared to Offline) Hate Speech?' (University of East Anglia) <https://ueaeprints.uea.ac.uk/64133/1/Accepted_manuscript.pdf> accessed 26 October 2019.

[12] Kiobel v. Royal Dutch Petroleum Co., 569 U.S. 108 (2013).

[13] Irving, E., 'Suppressing Atrocity Speech on Social Media' (2019) 113 AJIL Unbound 256–261 <https://doi.org/10.1017/aju.2019.46> accessed 24 October 2019.

[14] ibid.

