While social media companies pride themselves on having strict guidelines against online abuse, in reality they aren't doing much to address this deep-rooted problem. Even death and rape threats are not taken seriously by many social media giants.
Social media platforms were once known for funny memes and hilarious videos, but today they have become notorious for threats and harassment. They are toxic environments where people are singled out and threatened for their beliefs.
Adding to the problem is the lack of interest shown by social media giants in handling harassment. When you report a threat on a platform like Instagram, Twitter or Facebook, you often receive the polite message, 'Thank you for reporting. We carefully review reports of threats and consider many things when determining whether a threat is credible'. However, they soon revert with, 'We reviewed your report carefully and found no violation of our rules'.
On Sunday, journalist Nidhi Razdan got a death threat in a private message saying, "I will hang you, I will execute you". When she reported it, Facebook initially responded that it didn't violate their rules. According to them, "In determining whether a threat is credible, we may also consider additional information such as a targeted person's public visibility and vulnerability".
On the face of it, the threat did seem serious. Naming and shaming the platform on another social network helped in this case. No sooner had the journalist tweeted about it than the issue garnered a lot of attention; eventually Facebook admitted its mistake and suspended the detractor's account.
Actress Richa Chadha, who reported a rape and death threat, was at the receiving end of a similar attitude. Recalling Twitter's response, she said, "They said 'no violation within context'."
Many people wonder why threats get a free pass, like actor Farhan Akhtar, who asked Instagram: "How is a death threat not a violation of your guidelines?"
The problem seems to lie with the "algorithmic and human reviewers" who label comments or posts as offensive or non-offensive without considering the context.
Careful consideration is given only when a user protests for the second or third time. Threats go unnoticed because these platforms don't have enough adequately trained moderators to act on them. Hiring people for content moderation incurs a huge cost, which is why these platforms are slow to react to abuse.
While all of them say they have strict norms, it’s the implementation that often suffers. A Twitter spokesperson said, “We start from a position of assuming that people do not intend to violate our rules. Unless a violation is so egregious that we must immediately suspend an account, we first try to educate people about our rules and give them a chance to correct their behavior. We show the violator the offensive Tweet(s), explain which rule was broken, and require them to delete the content before they can Tweet again.”
But when it comes to death threats, the logic of waiting for repeated violations is bizarre. While the European Union has made it mandatory for social media platforms to act on all kinds of content, Section 79 of the Indian IT Act makes it optional for platforms to take down content. As a result, reacting to threats isn't a top priority for them, and the only way a victim can get redressal is by approaching the police and invoking the provisions of the IPC. Since the way interactions take place differs across the world, activists are now asking Facebook to develop guidelines specific to each region, instead of following a universal policy.
Nidhi Razdan (@Nidhi)
I got a death threat on @instagram via a pvt msg: "I will hang you, I will execute you". I reported the account to @instagram. They replied that it does not violate their guidelines. Shame on you @instagram. Am deleting my account. And yes, I'm filing an FIR.
Must have stringent laws
For abusive content posted online, which everybody can view, platforms should have some amount of immunity from the law. But Indian law has gone too far post the Shreya Singhal judgment. Even after you report abusive content, platforms are not obligated to take it down; they are obligated to do so only when they get a court order.
You can't expect a common woman to go to court every time she gets a death threat. As for threats made in private messages, platforms are under no obligation under the law to reveal the sender's identity or block the person unless the police launch an investigation; the obligation to take down abusive content applies only to public content. If you want Facebook or Instagram to react faster to threats, the law should be amended to give users more rights against social media companies.
— Prashant Reddy T., assistant professor at the National Academy for Legal Studies and Research (NALSAR), Hyderabad