Generative AI is exacerbating the problem of online child sexual abuse material (CSAM), as watchdogs report a proliferation of deepfake content featuring real victims' imagery.
Published by the UK's Internet Watch Foundation (IWF), the report documents a significant increase in digitally altered or completely synthetic images featuring children in explicit scenarios, with one forum sharing 3,512 images and videos over a 30-day period. The majority depicted young girls. Offenders were also documented sharing advice with one another, and even exchanging AI models trained on real images.
"Without proper controls, generative AI tools provide a playground for online predators to realize their most perverse and sickening fantasies," wrote IWF CEO Susie Hargreaves OBE. "Even now, the IWF is starting to see more of this type of material being shared and sold on commercial child sexual abuse websites on the internet."
According to the snapshot study, there has been a 17 percent increase in online AI-altered CSAM since the fall of 2023, as well as a startling increase in materials depicting extreme and explicit sex acts. Materials include adult pornography altered to show a child's face, as well as existing child sexual abuse content digitally edited to superimpose another child's likeness.
"The report also underscores how fast the technology is improving in its ability to generate fully synthetic AI videos of CSAM," the IWF writes. "While these types of videos are not yet sophisticated enough to pass for real videos of child sexual abuse, analysts say this is the ‘worst’ that fully synthetic video will ever be. Advances in AI will soon render more lifelike videos in the same way that still images have become photo-realistic."
In a review of 12,000 new AI-generated images posted to a dark web forum over a one-month period, 90 percent were realistic enough to be assessed under existing laws for real CSAM, according to IWF analysts.
Another UK watchdog report, published in the Guardian today, alleges that Apple is vastly underreporting the amount of child sexual abuse material shared via its products, prompting concern over how the company will manage content made with generative AI. In its investigation, the National Society for the Prevention of Cruelty to Children (NSPCC) compared official numbers published by Apple to numbers gathered through freedom of information requests.
While Apple made 267 worldwide reports of CSAM to the National Center for Missing and Exploited Children (NCMEC) in 2023, the NSPCC alleges that the company was implicated in 337 offenses involving child abuse images in England and Wales alone — and those numbers covered just the period between April 2022 and March 2023.
Apple declined the Guardian's request for comment, pointing the publication to a previous company decision not to scan iCloud photo libraries for CSAM, in an effort to prioritize user security and privacy. Mashable reached out to Apple as well, and will update this article if the company responds.
Under U.S. law, U.S.-based tech companies are required to report cases of CSAM to the NCMEC. Google reported more than 1.47 million cases to the NCMEC in 2023. Facebook, in another example, removed 14.4 million pieces of content for child sexual exploitation between January and March of this year. Over the last five years, the company has also reported a significant decline in the number of posts reported for child nudity and abuse, but watchdogs remain wary.
Online child exploitation is notoriously hard to fight, with child predators frequently exploiting social media platforms and loopholes in their conduct policies to continue engaging with minors online. Now, with the added power of generative AI in the hands of bad actors, the battle is only intensifying.
Read more of Mashable's reporting on the effects of nonconsensual synthetic imagery:
What to do if someone makes a deepfake of you
Explicit deepfakes are traumatic. How to deal with the pain.
The consequences of making a nonconsensual deepfake
Victims of nonconsensual deepfakes arm themselves with copyright law to fight the content's spread
How to stop students from making explicit deepfakes of each other
If you have had intimate images shared without your consent, call the Cyber Civil Rights Initiative’s 24/7 hotline at 844-878-2274 for free, confidential support. The CCRI website also includes helpful information as well as a list of international resources.