The report found that most automated content-based tools rely on one of two approaches: matching images and videos against a database of known content, or using machine learning to classify content. However, both approaches have shortcomings, including the difficulty of compiling suitable training data and a lack of cultural sensitivity in the algorithms.
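To make the distinction concrete, below is a minimal sketch of the first approach, database matching, using perceptual hashing of the kind popularised by proprietary tools such as PhotoDNA. It assumes the open-source `imagehash` and `Pillow` libraries; the hash database, file path, and threshold are illustrative placeholders, not details from the report.

```python
# Minimal sketch of database matching: compare an incoming image's
# perceptual hash against a database of hashes of known flagged content.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of previously flagged images.
KNOWN_HASHES = [
    imagehash.hex_to_hash("d5d5a9a9a5a5d4d4"),
]

# Hamming-distance threshold: 0 demands an exact hash match; small
# positive values tolerate re-encoding, resizing, and minor edits.
MATCH_THRESHOLD = 5

def is_known_content(image_path: str) -> bool:
    """Return True if the image's hash is close to any known hash."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(is_known_content("upload.jpg"))
```

Matching of this kind can only catch content already in the database; detecting novel material requires the second approach, a trained classifier, which is where the training-data and cultural-sensitivity shortcomings noted above come in.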
To address these shortcomings, the report recommends “developing minimum standards for content moderators, promoting AI tools to safeguard moderator wellbeing, and enabling collaboration across the industry.”
The report’s recommendations come as platforms adapt to the EU’s 2021 Terrorist Content Online Regulation, which requires hosting services to remove terrorist content within one hour of receiving a removal order. While many platforms are expanding automated detection to meet these legal requirements, the report cautions that exclusively automated enforcement risks disproportionate impacts on marginalised groups and activists, and it calls for human oversight and appropriate accountability mechanisms.
Access the full resource here.