AI-generated photos of child sex abuse ‘threaten to overwhelm internet’

According to the Internet Watch Foundation, 3,000 abuse photos created by AI violate UK law.

A safety watchdog has warned that its “worst nightmares” regarding artificial intelligence-generated images of child sexual abuse are becoming reality and threaten to overwhelm the internet.

The Internet Watch Foundation (IWF) said it had discovered nearly 3,000 AI-generated abuse images that violated UK law.

According to the UK-based organization, AI models are being trained on existing photos of real abuse victims to generate fresh images of them.

It went on to say that the technology was also being used to produce pictures of famous people who had been “de-aged” to depict them as children in sexual abuse scenarios. In other cases, AI tools were used to “nudify” clothed images of children found online, creating further child sexual abuse material (CSAM).

The IWF said it had warned in the summer that evidence of AI-generated abuse imagery was beginning to surface, and that its most recent study showed the use of the technology had since accelerated. The IWF’s chief executive, Susie Hargreaves, said the watchdog’s “worst nightmares have come true.”

“Earlier this year, we issued a warning that AI imagery would soon blend in with authentic images of abused children and that we might witness a significant increase in the amount of this imagery. We have moved past that point now,” she said.

“It’s horrifying to observe that criminals are purposefully using photos of real victims who have already experienced abuse to train their AI. Children who have been raped in the past are now being placed into fresh scenarios because someone, somewhere, wants to see it.”

The IWF said it had also seen evidence of AI-generated images being sold online.

Its latest research was based on a month-long investigation of a child abuse forum on the dark web, a portion of the internet that requires a specialized browser to access.

Of the 11,108 images on the forum that were examined, 2,978 violated UK law by depicting child sexual abuse.

The Protection of Children Act 1978 forbids the taking, distribution, and possession of any “indecent photograph or pseudo-photograph” of a child, and this includes AI-generated CSAM. More than half of the images were classified as category A, the most serious category, which can depict rape and sexual abuse. According to the IWF, the vast bulk of the illegal material it identified violated the Protection of Children Act.

The Coroners and Justice Act 2009 also outlaws non-photographic depictions of child sexual abuse, such as cartoons or drawings.

The IWF fears that a flood of AI-generated CSAM could divert law enforcement resources from identifying real abuse and helping victims.

“This material threatens to overwhelm the internet if we don’t get a grip on this threat,” Hargreaves stated.

According to Dan Sexton, the IWF’s chief technology officer, the only AI product being discussed on the forum is the image-generating tool Stable Diffusion, a publicly accessible AI model that can be modified to produce CSAM.

“We have seen discussions about using the freely available program Stable Diffusion to create this content,” he said.

Stability AI, the UK company behind Stable Diffusion, said it “prohibits any misuse for illegal or immoral purposes across our platforms, and our policies are clear that this includes CSAM.”

According to the government, AI-generated CSAM will be covered by the forthcoming online safety legislation, which is expected to become law soon, and social media companies will be required to stop it from appearing on their platforms.
