In the last two decades, online platforms that allow users to interact and upload content for others to view have become integral to the lives of many people. However, there is growing awareness among the public, businesses and policy makers of the potential damage caused by harmful online material. The user-generated content (UGC) posted by users contributes to the richness and variety of content on the internet, but it is not subject to the editorial controls associated with traditional media. This enables some users to post content that could harm others, particularly children or vulnerable people.
As the volume of UGC that platform users upload continues to grow, it has become impossible to identify and remove harmful content at the necessary speed and scale using traditional human-led moderation approaches alone.
This paper, commissioned by Ofcom, the UK’s converged communications regulator, examines the capabilities of artificial intelligence (AI) technologies in meeting the challenges of moderating online content, and how those capabilities are likely to improve over approximately the next five years.
The paper will be released in July 2019. Register now to receive the paper on the day of release.