Already live on Twitter, the tool currently detects 12 behaviour profiles that contribute to a bad experience on social networks. Moderate analyses your Twitter mentions, identifies tweets and accounts that exhibit these types of behaviour, and places them in a moderation queue that makes it easy to mute or block them. These behaviour profiles range from actor-based AI models, such as bot detection, to content-based AI models, such as racism or sexism detection.
Designed by disinformation detection experts Astroscreen, the online tool puts the power of social media moderation into the hands of the individual user.
Ali Tehrani of Astroscreen explains: ‘Most of the content that users find unbearable on social networks actually falls short of platform moderation, which involves banning or suspending offending accounts. So we built Moderate to empower users to become power moderators and take control of their social media account. Users can mute, block and unfollow other accounts, so tweets that fall short of platform moderation can still be moderated by users.’
And this is important, because right now, the platform-led approach to toxic content is failing.
43% of Twitter users say they have faced abuse on the platform.
In the fourth quarter of 2020, Facebook removed 6.3 million pieces of violating content – a tiny percentage of all those reported.
41% of U.S. adults have personally experienced online harassment, and 25% have experienced more severe harassment.
75% of online harassment goes unreported.
But these figures don’t reflect a lack of interest or care on the part of the social media platforms. Rather, they reflect an inability to act. Take Facebook, for example. The platform has 15,000 moderators and will spend $3bn on moderation this year. The reason it isn’t solving the problem is that its hands are tied. It has a duty to protect its users while respecting freedom of speech, and with user suspension and account banning being the only courses of action available in the event of reported content, only the very worst cases are acted upon. 90% of bad behaviour falls just short of this threshold.
Juan Echeverria, CTO of Astroscreen, comments: ‘Since most of the content in question is subjective, involving the user in the moderation process is the right approach. Our content-based AI models detect toxic language and conspiracy theories, and our actor-based models detect bots and trolling behaviour, but ultimately we let the user decide.’
Astroscreen, founded in 2018, first caught the media’s attention with its disinformation detection platform. With Moderate, the brand is now moving into the consumer market, giving individuals the opportunity to enjoy their online experience without the threat of unacceptable abuse.
The executive team is expanding too, with Mark Coatney, former director at Tumblr, joining as co-founder and COO of Moderate. Coatney comments: ‘The future of social media is in giving people control over what they see in their feeds. I’m so excited to be part of a team building a tool that everyone can use to reclaim their social media space.’
How do you take control of a problem that is too big to handle? You put the power in the hands of the user. That’s Moderate.
Created by Astroscreen, Moderate makes your social media experience better. Moderate is used by journalists, celebrities and influencers to protect themselves from online trolls. Moderate uses AI to detect social media accounts that are bots or use toxic language and makes it easy to mute or block them.
Astroscreen is a London-based startup that uses artificial intelligence to detect social media disinformation, helping prevent election interference and protecting companies from brand attacks.