The number of child sexual abuse image crimes recorded by Gwent Police increased by 18 per cent last year, new data obtained by the NSPCC has revealed.
A total of 475 offences in which child abuse images were collected and distributed were logged in Gwent in 2022/23, with 33,000 recorded across the UK, according to Freedom of Information data.
This is a rise of 78 per cent in Gwent since 2017/18, when the NSPCC first called for social media regulation; more than 2,000 crimes have been recorded in the force area while children and families waited for online safety laws. Across the UK, offences rose by 79 per cent over the same period.
The charity said the figures show the need for swift and ambitious action by tech companies to address what is happening on their platforms, and for Ofcom to significantly strengthen its approach to tackling child sexual abuse through effective enforcement of the Online Safety Act.
The new data shows the widespread use of social media and messaging apps in child sexual abuse image crimes, which the NSPCC says results largely from a failure to design child safety into these products.
Social media and messaging apps used in abuse crimes
Where police forces disclosed the site involved, Snapchat was flagged in almost half (44%) of instances – more than 4,000 times. Meta-owned products (Facebook, Instagram and WhatsApp) were flagged more than 2,500 times, making up a quarter (26%) of known instances.
It comes as insight from Childline shows young people being targeted by adults to share child abuse images via social media, and adults making calculated use of end-to-end encrypted private messaging apps to find and share this material.
A 14-year-old girl told the NSPCC-run service: “One night I got chatting with this guy online who I’d never met and he made me feel so good about myself. He told me he was 15, even though deep down I didn’t believe him. I sent him a couple of semi-nudes on Snap(chat), but then instantly regretted it. I asked him to delete the pics, but he just kept on making me do stuff for him not to post them – like getting me to strip live on camera. I just want to block him, but if I block him he will just post the pictures.”
A 15-year-old boy told Childline: “A while ago I saw a video on YouTube about how a guy was busting paedophiles and creeps on the internet by pretending to be a kid, and I kind of wanted to do a similar thing. I looked around Instagram for the creepiest accounts about kids my age and younger. In the end, I came across this link on one of their stories. It’s a link to a WhatsApp group chat in which [child sexual abuse material] is sent daily! There are literally hundreds of members in this group chat and they’re always calling the kids ‘hot’ and just being disgusting.”
Online Safety Act implementation
The NSPCC said that disrupting online child sexual abuse, which is taking place at increasing levels, will require regulated tech platforms to make systemic changes to their products to stop them being used to organise, commit and share child abuse.
A consultation on Ofcom's first codes, which companies will be expected to adopt to disrupt child sexual abuse on their platforms, closed last week.
The NSPCC wants these measures introduced without delay but has urged Ofcom to begin work on a second version of the codes that will require companies to go much further.
The charity said companies should be required to use technology that can help identify and tackle grooming, sextortion and new child abuse images.
The NSPCC also wants tougher measures for private messaging services to make child protection a priority, including in end-to-end encrypted environments.
Facebook and Instagram were used in more than a fifth of abuse image instances where a platform was recorded by police forces. The NSPCC warned that Meta's roll-out of end-to-end encryption on these sites will prevent authorities from identifying offenders and safeguarding victims.
The charity wants the plans paused until Meta can prove child safety will not be compromised, and has urged all parties to find a balance between the safety and privacy of all users, including children. The NSPCC said further roll-out should be delayed until Ofcom can study Meta's risk assessment as part of the new regulatory regime.
Sir Peter Wanless, NSPCC Chief Executive, said: “It’s alarming to see online child abuse continue to rise, especially when tech companies should be acting to make their sites safe by design ahead of incoming regulation.
“Behind these crimes are children who have been targeted by adults who are able to organise and share sexual abuse with other offenders seamlessly across social media and messaging apps.
“The Online Safety Act sets out robust measures to make children fundamentally safer on the sites and apps they use so they can enjoy the benefits of a healthy online experience.
“Ofcom has been quick off the blocks but must act with greater ambition to ensure companies prioritise child safety in the comprehensive way that is so desperately needed.”
Susie Hargreaves OBE, Chief Executive of the Internet Watch Foundation, the UK’s front line against child sexual abuse imagery online, said: “This is a truly disturbing picture, and a reflection of the growing scale of the availability, and demand, for images and videos of children suffering sexual abuse.
“The people viewing and sharing and distributing this material need to know it is not a victimless crime. They are real children, suffering real abuse and sexual torture, the effects of which can linger a lifetime.
“That more and more people are trying to share and spread this material shows we should all be doing everything we can to stop this, building more, and innovative solutions to keep children safe. The IWF is ready to support technology companies and Ofcom in implementing the Online Safety Act to help make the UK the safest place in the world to be online.”