Facebook India - Towards a Tipping Point of Violence
Caste and Religious Hate Speech

India has rapidly outpaced the United States as Facebook’s largest global market, and it is growing larger each day. The influence that social media platforms like Facebook exert on everything from elections to civil society is unlike anything seen before in human history. This report, released in 2019 by Equality Labs, presents a crucial analysis and summary of one of the gravest forms of that influence found throughout the Facebook India platform: hate speech and disinformation. It examines the particularly alarming pitfalls and failures of Facebook India’s content moderation policies and their implementation, and it presents numerous disturbing examples of real hate speech and calls to violence against minorities. With an estimated 350 million or more Indian caste, religious, gender, and queer minorities currently at risk from this hate speech, the report offers timely, expert analysis and solutions. Informed in part by affected users themselves, its insights provide a road map for stakeholders from multiple vantage points to help counteract a looming human rights disaster. The authors warn that without urgent intervention, such hate speech is likely to be weaponized as a trigger for large-scale communal violence in India.


Facebook staff lacks the cultural competency needed to recognize, respect, and serve caste, religious, gender, and queer minorities. Hiring of Indian staff alone does not ensure cultural competence across India’s multitude of marginalized communities. Minorities require meaningful representation across Facebook’s staff and contractor relationships. Collaboration with civil society and greater transparency of staffing diversity strengthens hate speech mitigation mechanisms like content moderation.

There is widespread doxxing occurring on the platform, threatening activists, journalists, and others who speak on behalf of the vulnerable.


Procedures for reporting these activities to Facebook are opaque, increasing the vulnerability of, and safety concerns for, the persons affected.

Most hate speech violations on Facebook India are Islamophobic. 6% of Islamophobic posts were specifically anti-Rohingya, with calls to violence similar to content that led to the Rohingya genocide in Myanmar.


While hate speech almost completely remains online or is reinstated by moderators on Facebook, an increasing number of minority user accounts are being banned or removed entirely.



Content moderation of hate speech in India is complex, requiring relevant context and collaborative expertise from civil society and advocates. Facebook cannot solve this problem in a vacuum. Transparency, accountability, and resource allocation are the keystones required for an effective and durable solution to the exponential expansion of hate speech enabled by social media platforms like Facebook. The result will be a safer and more welcoming platform for users in a growing market. We believe the following steps are necessary in order to achieve this outcome.


• The human rights impact of Facebook policy and programs and how the platform has been used by hate
groups, political entities, and other public figures to stoke casteist and religious animosity or violence

• What risk assessments, if any, were conducted to improve understanding of the threats faced by Indian
minorities on the platform

• Facebook’s hiring practices, especially with respect to safety, policy, and content moderation teams.

• Content Moderation, which should include hiring practices, contractor demographics, and slur lists. These
lists should be open and transparent to the public.

• User Privacy

• Targeted Advertisements

• Security Policies

• Facebook’s elections and government unit’s work from the 2014 elections through the 2019 elections.


• Empowering an independent audit team that is approved and monitored by both civil society and Internet
Freedom advocates as well as by Facebook.

• This audit team must have clear competencies regarding caste, religious, and gender/queer minorities and
include members of Indian minorities in its composition.

• Research comparing impacts across India’s discrete language markets for analysis of implementations.

• Determinations regarding risk prevention, mitigation, and remediation plans for vulnerable communities.

• Revision of policies and practices to address the human rights risks identified.


• We recommend a regularly convened working group of Indian Internet freedom and civil society groups
that work on the issues of caste, religious, and gender/queer minorities. This group could work actively to
counter casteist and religious bigotry while also helping provide input into Facebook’s policies and practices.


Many of the problems of hate speech are amplified by the lack of transparency and engagement between Facebook and organizations that could contribute frontline experience.


Hate Speech and Content Moderation Failure Examples

Below are examples of hate speech presented and addressed in the report. For more examples, analysis, and solutions, you can read the full report here:

Example Post 6 - Anti-Rohingya Content: This horrifying post features screenshots from a since-debunked, staged video claiming that Rohingya in India were slaughtering and then cannibalizing Hindus. The post, translated from Hindi, says, “This is Rohingya Muslims. Watch how they are killing and eating Hindus and yet you keep crying about [the price of] Petrol!”


What makes this post so shocking is not only that the video was staged, as reported by BOOM Live in 2018, but that it was disseminated in coordinated posts across both WhatsApp and Facebook. Additionally, even when the videos were taken down, images from them were recirculated over and over again, with some shares occurring during the 2019 Indian Lok Sabha elections. Had there been no fact checks, these alarming and clearly inciting videos would have had no counter on the platform, where they linger a year after being widely debunked. Clearly something is wrong with Facebook’s moderation of Rohingya-centered hate speech, and given the precarious conditions Rohingya face in India and across South Asia, this issue must be dealt with immediately.

Example Post 8 - “Love Jihad” Conspiracy Theory: “Love Jihad” posts repeatedly emphasize the dangerous and fictitious conspiracy theory of a Muslim plot to target and entrap Hindu girls into marriage with Muslim men, with the intention of defiling their (Hindu) honor and growing the Muslim community.

Much work has been devoted to debunking “Love Jihad,” but the conspiracy persists, abetted in part by a large “Love Jihad” disinformation echo chamber on Facebook and other social media platforms. Meanwhile, Muslim men in the real world face the consequences of being portrayed as bogeymen in such interfaith relationships, and in many cases they are subjected to extreme violence as a result. Ultimately, the fear of “Love Jihad” is an externalization of anxiety about interfaith relationships and the supposed assault on family, religious, and caste honor that these relationships represent.

The below Facebook post directed at Hindu girls says, “This is not love dear sister, they pretend to love you and convert you and use you for terrorist activities. The girl called Deepa Cherian became a terrorist and went to jail through this kind of love jihad.” It ends with an advertisement for a “Hindu Helpline.”


Example Post 9 - Calls to Violence: In this post an image of live bullets is accompanied by this alarming caption: “Either by words or guns, no matter who comes in between, the temple will be built in Ayodhya where Lord Ram was born, and it will be built by the Bajrang Dal of Hindustan (ed.note: India is being identified here as the Hindu homeland). My workers and my office bearers and all my respected members are all ready, we will not listen to the courts. You can build your Taj Mahal where you want we will build our Ram Temple Jai Shri Ram.”


Example Post 12 - Dehumanizing Speech: Here a primitive hominid or monkey’s skull is used to represent a Muslim. This is offensive, dehumanizing, and racist. It represents nearly everything about hate speech that Facebook says it discourages on its platform.

Example Post 13 - Anti-Caste Hate Speech: This is from a group called Anti-Chamaar Group. Chamaar is a Dalit caste found throughout North India whose members traditionally work with leather. Both the group and the image are explicitly directed against this caste. The group’s existence was reported as early as 2016 by Soibal Dasgupta, and yet it continues to exist despite numerous attempts to report it to Facebook.


The main form of hate for Dalits and Adivasi people that we observed came through rhetoric of anti-reservationism. Additional casteist posts included caste-based slurs, derogatory references to caste-based occupations such as manual scavenging, anti-Ambedkar posts (such as Photoshopping Ambedkar’s face onto memes as an echo of real-world vandalism), and anti-inter-caste love unions posts.

Example Post 19 - Gender or Sexuality: While a vast majority of posts that were classified under Hate Speech Based on Gender or Sexuality displayed outright misogyny, over 25% of posts in this category were transphobic or queerphobic in content. 12% of the posts in this category made direct reference to violent rape, either calling for rape or glorifying or trivializing rape.

This post is hosted by a casteist meme group called Normien’t Memes for Chamaar Teens. “Chamaar” is used as a slur here and is meant to provoke Dalit communities. The meme itself is a call for domestic violence, implying that a baseball bat is a tool for “educating” wives.


Example Post 24 - Against Religious Minorities: There is an insidious attempt by Hindu nationalists to depict Indian Christians as foreigners. This is exemplified by the frequency of “No conversion” memes that showcase the harassment of Indian Christians at the hands of Hindu extremists. These posts often depict verbal and physical bullying, burning of Bibles, and exhortations to violence. This post is one example of a common anti-Christian meme. It features an image of four Christians arrested for spreading their religion. The text accompanying the image appears non-defamatory: “Two priests and two nuns arrested in Gorakhpur for religious conversion acts. All belong to the Church of Kerala.”


However, the totality of the post is meant to incite violence: the entire ecosystem of the post is the mechanism by which hate speech is delivered, in this case baiting commenters into hate speech. Some users make horrific calls to violence, like, “Shave their heads, blacken their face, garland these pigs with old sleepers [“slippers” - a slur], strip them, get them paraded and make them apologize.”

Another says, “If they are let off with a slight slap on the wrist. They’ll keep repeating these malicious acts. Better is to kill them in encounter. They are the unwanted scum of the society and are pollution in the country.”

The above summary cites only some of the examples, data, analysis, and solutions presented in Equality Labs’ report Facebook India - Towards a Tipping Point of Violence: Caste and Religious Hate Speech.

To read the report in full online, click the link below.