
Twitter’s New Policy to Moderate Hate Speech on Its Platform

Wesam Abo Marq
25th April 2023
If a tweet is flagged, the user will not be blocked (Getty)

Twitter recently unveiled a new policy intended to clarify which hateful tweets on its platform are subject to enforcement action.
Twitter said in a blog post that it wants to make the platform safe under Twitter 2.0 without turning it into a haven for censorship. Some content will instead be made less discoverable, since the company says it supports "freedom of speech" but not "freedom of reach."

Twitter Unveils 'Hate Speech' Shadowban Policy

Twitter declared on April 17 that it would conceal "hate speech" that breaks its guidelines behind warning screens.

The company stated in a blog post that its mission is "to promote and protect the public conversation. We believe Twitter users have the right to express their opinions and ideas without fear of censorship. We also believe it is our responsibility to keep users on our platform safe from content violating our Rules."

The new policy, according to Twitter, is intended to give moderators more options than the standard "leave up versus take down" approach to content moderation.

Tweets containing hate speech that violates Twitter's rules will be placed behind a label visible to anyone who views them, restricting their reach. The account that posted the hate speech will not be banned or otherwise penalized, but ads will no longer appear adjacent to the labeled tweets. The new policy is to take effect soon, according to the blog post.

[Image: A screenshot from Twitter Safety’s blog]

Will Twitter’s Users Get Banned If Labeled?

The company states that if a tweet is flagged, the user will not be blocked or removed from the platform, because enforcement actions take place "at the tweet level and won't affect the user's account."

Additionally, Twitter clarified that users whose tweets have been labeled will be able to submit feedback if they believe the label was applied in error, but warned that they might not receive a response and that submitting feedback does not guarantee the tweet's reach will be restored or increased.

This Policy Will Be Extended to Additional Areas

Twitter says the label is currently applied only to violations of its anti-hate speech policy, but that visibility filtering will be extended to "other applicable policy areas in the coming months" to make "enforcement actions" "more proportional and transparent for everyone on our platform."

[Image: A screenshot from Twitter Safety’s blog]

Elon Musk’s Acquisition of Twitter

Since Musk's acquisition, the network has been in turmoil with ever-changing policies and features, including Twitter Blue.

In addition, Twitter's recent labeling of major news outlets as "government-funded media" created chaos and led several outlets, including CBC, NPR, and PBS, to leave the platform.

After Musk's takeover of Twitter, the platform faced serious problems due to the difficulty of controlling harmful and false content, leading to changes in some of its policies.

In previous blogs, Misbar's team discussed how Elon Musk's inner circle receives Twitter boosts, how the Twitter Blue service fuels fake accounts, how Twitter reinstated political ads, and how the platform spreads false information about climate change.

Elon Musk's Response to Twitter's Hate Speech Question

Twitter's CEO, Elon Musk, agreed to a last-minute interview with the BBC's James Clayton to discuss his takeover of the social media platform. 

During the interview, Musk was asked about the prevalence of hate speech and misinformation on Twitter, to which he responded, "I don't see a rise of hate speech." 

He then asked Clayton for specific examples of hateful content, and when Clayton could not provide them, Musk accused him of lying, saying, "You don't know what you're talking about...you just lied." 

[Image: A screenshot of the transcribed BBC interview]

The BBC reported evidence, based on several studies, that misinformation and hate speech have grown since Elon Musk's acquisition.

[Image: A screenshot of the BBC’s article]

Under the current leadership, some individuals previously banned from the platform have been reinstated, including Andrew Anglin, founder of the neo-Nazi website the Daily Stormer, and Liz Crokin, a prominent proponent of the QAnon conspiracy theory.

Critics have also highlighted the lack of clarity in Elon Musk's personal definition of hate speech.

Research conducted by the Institute of Strategic Dialogue (ISD) revealed that anti-Semitic tweets doubled from June 2022 to February 2023. While takedowns of such content have also increased, they have not kept up with the surge in hate speech.

After Elon Musk's takeover of Twitter, the Center for Countering Digital Hate, a campaign group based in London, discovered a substantial increase in slurs on the platform. 

The BBC also conducted its own investigation, analyzing over 1,100 Twitter accounts that were reinstated under Musk's leadership after being previously banned. It was found that a third of these reinstated accounts appeared to violate Twitter's own guidelines, with some depicting rape and child sexual abuse in extreme cases. Such content had been a longstanding issue on Twitter even before Musk acquired the platform.

Furthermore, the BBC investigation revealed concerns raised by Twitter insiders who expressed doubts about the company's ability to protect users from trolling, state-coordinated disinformation, and child sexual exploitation. 

According to a study conducted by NewsGuard, a company that monitors online misinformation, engagement with popular accounts known for spreading misinformation significantly increased after Elon Musk's takeover of Twitter. 

Additionally, Science Feedback, a fact-checking organization with a left-leaning perspective, found that accounts referred to as "super spreaders" of misinformation, which consistently share tweets containing links to known misinformation, have experienced a noticeable increase in engagement since Musk assumed control of the platform.

Keith Burghardt, a computer scientist at the Information Sciences Institute (ISI) affiliated with the USC Viterbi School of Engineering, has conducted research on social media for five years, with a particular focus on online hate in the last year. 

Burghardt's team analyzed the timelines of a sample of users who posted hateful tweets in the month before and the month after Elon Musk's purchase of Twitter, measuring their daily rates of hate speech in both periods. This allowed the team to assess how much these users' levels of hate speech changed.

The findings revealed that the proportion of hate words in the tweets of users who were already posting hateful content increased after Musk's acquisition of Twitter. Additionally, the average daily use of hate speech by these users nearly doubled during the same period.

[Image: A screenshot from Science Blog showing the average proportion of hate speech]

Misbar’s Sources:

Twitter Safety

Silicon Angle

Talk TV

Science Blog

Unusual Whales