The social media company on Wednesday announced the test of a new feature called “Safety Mode,” which aims to help users avoid being overwhelmed by harmful tweets and unwanted replies and mentions. The feature will temporarily block accounts that have sent a user harmful language or repeated, uninvited replies or mentions from interacting with that user.
Once a user turns Safety Mode on in settings, Twitter’s systems will assess incoming tweets’ “content and the relationship between the tweet author and replier.” If Twitter’s automated system finds an account to have repeated, harmful engagement with the user, it will block that account for seven days from following the user’s account, viewing their tweets, or sending them direct messages.
Twitter spokesperson Tatiana Britt said the platform does not proactively notify people that they’ve been blocked. However, if a violator navigates to the user’s page, they’ll see that “Twitter autoblocked them” and that the user is in Safety Mode, she said.
The company says its technology takes existing relationships into account to avoid blocking accounts a user frequently interacts with, and that users can review and change blocking decisions at any time.
For now, Safety Mode is just a limited test, rolling out Wednesday to “a small feedback group” of English-language users on iOS, Android, and Twitter.com, including “people from marginalized communities and female journalists.”