Research published on Friday reveals that children in Britain encounter violent and harmful content online, including material promoting self-harm, from a young age, and often consider it an unavoidable part of using the internet.
The findings underscore the pressing challenge facing governments worldwide, and tech companies such as Meta, Google, Snap Inc and ByteDance, in implementing effective safeguards, particularly for minors.
Last October, Britain enacted the Online Safety Act, which imposes stricter regulations on social media platforms, requiring them to prevent children from accessing harmful and age-inappropriate content through measures such as age limits and age verification.
However, penalties for non-compliance cannot yet be imposed, as the regulator, Ofcom, must first draw up codes of practice for enforcing the law.
Messaging platforms, notably WhatsApp, have opposed certain provisions in the legislation, citing concerns that they could compel companies to compromise end-to-end encryption.
According to the research, commissioned by Ofcom and carried out by the agency Family Kids & Youth, every one of the 247 children aged 8-17 who took part had encountered violent content online, primarily through social media, video-sharing platforms, and messaging apps.
The content included violent gaming material, verbal discrimination, and footage of street fights. Many children said they felt powerless over what was suggested to them and had little understanding of recommender systems, often referred to as “the algorithm,” which use data about users to predict the content they are most likely to engage with.
Gill Whitehead, Ofcom’s Online Safety Group Director, emphasized the urgent need for action by tech companies, stating that the research sends a clear message to firms to prepare for their child protection responsibilities under the new online safety laws.
Ofcom said it intends to consult and collaborate with the industry to ensure that children have a safer, age-appropriate experience online.