Twitter recently announced the testing of a new feature called “Safety Mode,” which aims to help users avoid being overwhelmed by harmful tweets and unwanted replies and mentions. The feature temporarily blocks accounts that direct harmful language or unwanted replies and mentions at other users.
Twitter said in a statement that the “feature… may help you to feel more comfortable and in control of your experience, and we want to do more to reduce the burden on people dealing with unwelcome interactions.”
Here’s how it works:
According to the Twitter help center, Safety Mode is a “feature that temporarily blocks accounts for seven days for using potentially harmful language — such as insults or hateful remarks — or sending repetitive and uninvited replies or mentions.”
Twitter added that when the feature is “turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering the Tweet’s content and the relationship between the Tweet author and replier. Our technology considers existing relationships, so accounts you follow or frequently interact with will not be auto-blocked.”
Twitter acknowledged difficulties in implementing the new rules, telling AFP: “We have become aware of a large number of malicious and coordinated reports. Unfortunately, our teams have made several mistakes.”
“We have corrected these errors and are conducting an internal investigation to ensure that these rules are being used in the appropriate manner,” the company added.
This kind of problem was anticipated by many anti-racism activists when the new policy was announced.
Twitter’s new photo permission policy was intended to combat online abuse. Still, according to Vice World News, US activists and researchers said Friday that far-right supporters are using it “to shield themselves from scrutiny and harass opponents.”
Even the social network has acknowledged problems with the rollout of the rules, which say anyone can ask Twitter to remove images posted without their consent, after malicious reports and its teams’ own errors.
This was confirmed when researcher Christopher Goldsmith shared a screenshot of a message from the far-right Proud Boys group on Telegram, which stated that “things are unexpectedly working in our favor” due to Twitter’s new privacy policy.
Far-right activists and white supremacists urged their followers to use the new rule to target anti-extremism researchers and journalists, filing reports against accounts used to identify neo-Nazis, monitor extremists, and document the attendees of hate rallies.
The company said the new rule is an attempt to extend “right to privacy” protections, as some countries have, to accounts around the world. But the change does not apply to “public figures or individuals who are a part of public conversations and discourse online or offline.”
It was unclear whether all of the erroneous suspensions had been resolved. Some Twitter users who regularly track or identify far-right activists said their accounts remained locked.
For now, Safety Mode is just a limited test, rolling out Wednesday to “a small feedback group” of English-language users on iOS, Android, and Twitter.com, including “people from marginalized communities and female journalists.”
“Twitter has given extremists a new weapon to bring harm to those in the greatest need of protection and those shining a light on danger,” said Michael Breen, president of Human Rights First.