In recent years a wide-ranging, global debate has emerged around the risks faced by internet users, with a particular focus on protecting them from harmful content. A key element of this debate has centred on the role and capabilities of automated approaches (driven by Artificial Intelligence and Machine Learning techniques) to enhance the effectiveness of online content moderation and offer users greater protection from potentially harmful material.

These approaches may have implications for people's future use of, and attitudes towards, online communications services. They may also apply more broadly: to new techniques for moderating and cataloguing content in the broadcast and audiovisual media industries, and to back-office support functions in the telecoms, media and postal sectors.

Ofcom commissioned Cambridge Consultants to produce this report as a contribution to the evidence base on people's use of, and attitudes towards, online services, helping to inform the wider debate on the risks faced by internet users.

Download the whitepaper: Use of AI in online content moderation (PDF).
Tim Winchcomb
Head of Technology Strategy

Tim leads the Technology Strategy team in our Wireless & Digital Services division. His 20 years' experience in the high-tech sector ranges from product development to commercial strategy, including applications of connectivity and digital services across the telecoms, financial services, hospitality and healthcare sectors.