The Children’s Commissioner for England is urging the government to ban apps that use artificial intelligence (AI) to create explicit images of children. Dame Rachel de Souza emphasized the need for a complete ban on “nudification” apps, which manipulate real photos to make individuals appear naked.
She criticized the government for allowing these apps to operate unchecked, warning that they could lead to severe real-world consequences. A government spokesperson reaffirmed that child sexual abuse material is illegal and mentioned plans to introduce further penalties for creating, possessing, or distributing AI tools designed to generate such content. Deepfakes, which are AI-generated videos, images, or audio that mimic reality, are also a growing concern.
In a report published on Monday, Dame Rachel highlighted that the technology disproportionately targets girls and young women, with many apps seemingly designed to alter only female bodies. The report revealed that girls are increasingly avoiding sharing images or engaging online to protect themselves, much like how they follow offline safety rules, such as not walking home alone at night.
The report also noted that children fear being targeted by strangers, classmates, or even friends using technologies that can be accessed via popular search engines and social media platforms.
Dame Rachel warned that the rapid evolution of these tools is overwhelming and stressed that society cannot stand by while these AI apps negatively impact children. Under the Online Safety Act, sharing or threatening to share explicit deepfake images is illegal.
While the government announced measures in February to tackle AI-generated child sexual abuse material, Dame Rachel believes they do not go far enough. Her spokesperson told the BBC that there should be an outright ban on nudification apps, not just on those classed as generators of child sexual abuse material.
Increase in reported incidents
In February, the Internet Watch Foundation (IWF), a UK-based charity partially funded by tech companies, reported 245 confirmed cases of AI-generated child sexual abuse imagery in 2024, up from 51 in 2023, a 380% increase.
IWF Interim Chief Executive Derek Ray-Hill stated on Monday, “We know these apps are being misused in schools, and the images quickly spiral out of control.”
A government spokesperson emphasized that creating, possessing, or distributing child sexual abuse material, including AI-generated images, is “abhorrent and illegal.”
“Under the Online Safety Act, platforms of all sizes must remove such content, or face substantial fines,” they added.
“The UK is the first country to introduce new AI child sexual abuse offences, making it illegal to possess, create, or distribute AI tools designed to generate child sexual abuse material.”
Dame Rachel also urged the government to:
- Impose legal obligations on developers of generative AI tools to identify the risks their products pose to children and take action to mitigate them.
- Establish a systematic process to remove sexually explicit deepfake images of children from the internet.
- Recognize deepfake sexual abuse as a form of violence against women and girls.
Paul Whiteman, general secretary of the NAHT, which represents school leaders, shared the commissioner’s concerns, stating, “This is an area that urgently requires review, as technology is outpacing both the law and education on the matter.”
On Friday, media regulator Ofcom published the final version of its Children’s Code, which places legal requirements on platforms hosting pornography, or content relating to self-harm, suicide, or eating disorders, to prevent children from accessing it. Websites must introduce stronger age verification checks or face large fines.
Dame Rachel criticized the code for prioritizing “business interests of technology companies over children’s safety.”