As part of its continuing effort to restore civility to online discussions, Twitter has started testing a new moderation tool that will warn users before they post replies containing language the company deems ‘harmful’. The micro-blogging platform describes the feature as a ‘limited experiment’ that is available to select users on iOS, but one would expect it to be rolled out more widely and to more platforms in the days to come.
Announcing the new feature via a tweet on Tuesday, the company said: “When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful”.
— Twitter Support (@TwitterSupport) May 5, 2020
Twitter’s new feature is the latest in a series of moves by social media networks to curb cyber-bullying, hate speech and abusive conduct. It follows a similar initiative from Instagram late last year, when the platform began rolling out its Caption Warning feature globally in a bid to clamp down on abusive and hateful messages. The Facebook-owned company had earlier announced that it would use AI to curb bullying on its site.
It will be interesting to see how users and free-speech advocates react to the new development, but if the prompt does eventually become a standard feature on Twitter across all available platforms, it could go some way toward reducing the offensive language, bullying and open hostility that have become increasingly common on social media over the years.