US authorities are employing AI to identify child abuse images generated by AI

Generative AI has driven a dramatic increase in the creation of child sexual abuse images. According to a recent government filing, the leading US investigator of child exploitation is now testing AI to distinguish AI-generated images from those depicting real victims.

The Department of Homeland Security's Cyber Crimes Center, which investigates child exploitation across borders, has awarded a $150,000 contract to San Francisco-based Hive AI for software that can determine whether a piece of content was AI-generated.

The filing, released on September 19, is heavily redacted. Kevin Guo, Hive's co-founder and CEO, told MIT Technology Review that he could not discuss the details of the contract but confirmed that it involves applying the company's AI detection algorithms to child sexual abuse material (CSAM).

The document cites data from the National Center for Missing and Exploited Children, indicating a 1,325% rise in incidents involving generative AI in 2024. “The vast amount of digital content circulating online requires automated tools for efficient data processing and analysis,” the filing states.

Child exploitation investigators prioritize finding and stopping any abuse happening now, but the flood of AI-generated CSAM makes it harder to determine whether an image depicts a real victim currently at risk. A tool that could reliably flag real victims would be a major help in triaging cases.

Identifying AI-generated images “ensures investigative resources focus on cases involving actual victims, enhancing the program’s effectiveness and protecting at-risk individuals,” the document notes.
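
In practice, that kind of prioritization could be as simple as ranking flagged material by a detector's confidence that it is synthetic. The following is a minimal sketch of the idea; the filing does not describe the actual DHS or Hive workflow, and the data structure and field names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FlaggedImage:
    case_id: str
    p_ai_generated: float  # detector's score in [0, 1]

def triage(images: list[FlaggedImage]) -> list[FlaggedImage]:
    """Order the review queue so images least likely to be AI-generated,
    and therefore most likely to show a real victim at risk, come first."""
    return sorted(images, key=lambda img: img.p_ai_generated)

# Example: the case scored 0.05 (probably a real photo) jumps ahead
# of the case scored 0.98 (almost certainly synthetic).
queue = triage([
    FlaggedImage("case-17", p_ai_generated=0.98),
    FlaggedImage("case-42", p_ai_generated=0.05),
])
print([img.case_id for img in queue])  # ['case-42', 'case-17']
```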

Hive AI provides AI tools for creating videos and images, in addition to various content moderation tools that can flag violence, spam, and sexual content, and even recognize celebrities. In December, MIT Technology Review reported on the company’s sale of its deepfake-detection technology to the US military. 

To detect CSAM, Hive has developed a tool in collaboration with Thorn, a child safety nonprofit, which companies can incorporate into their platforms. The tool uses a “hashing” system: it assigns unique IDs to content that investigators have already identified as CSAM so that copies of that material can be blocked at upload. Such tools have become a standard line of defense for tech companies.
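
Hash matching of this kind generally works by computing a fingerprint of each upload and checking it against a database of fingerprints of already-confirmed material. Below is a minimal sketch of that pattern; it uses an exact SHA-256 hash for simplicity, whereas production systems such as the Hive/Thorn tool typically rely on perceptual hashes that still match after resizing or re-encoding. The names here are illustrative, not Hive's API.

```python
import hashlib

# In a real deployment this set would be populated from a database of
# fingerprints curated by investigators; it is empty here for illustration.
KNOWN_HASHES: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Return a hex fingerprint of an uploaded file.

    SHA-256 only matches byte-identical copies; production systems use
    perceptual hashes so that resized or re-encoded copies still match.
    """
    return hashlib.sha256(data).hexdigest()

def should_block_upload(data: bytes) -> bool:
    """True if the upload's fingerprint matches known material."""
    return fingerprint(data) in KNOWN_HASHES
```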

These tools, however, only identify a piece of content as known CSAM; they do not determine whether it was AI-generated. Hive has built a separate tool to judge whether images are AI-generated. It isn't designed specifically for CSAM, but Guo says it doesn't need to be.

“There’s a certain fundamental combination of pixels in these images that we can recognize” as AI-generated, he explains. “It can be generalized.” 
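
Under the hood, detectors like this are essentially binary image classifiers trained to pick up on the statistical pixel-level artifacts that generative models leave behind. The toy model below shows that general shape; it is an untrained stand-in, not Hive's architecture, and every name in it is hypothetical.

```python
import torch
from torch import nn

class PixelArtifactDetector(nn.Module):
    """Toy binary classifier: real photo vs. AI-generated.

    A stand-in for a production detector; real models are far larger
    and trained on millions of labeled real and synthetic images.
    """
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output is a logit; sigmoid maps it to P(AI-generated).
        return self.classifier(self.features(x).flatten(1))

detector = PixelArtifactDetector()

def probability_ai_generated(image: torch.Tensor) -> float:
    """Score one image tensor of shape (3, H, W); returns a value in [0, 1]."""
    with torch.no_grad():
        logit = detector(image.unsqueeze(0))
    return torch.sigmoid(logit).item()

# Example: score a random "image" (the weights here are untrained,
# so the output is meaningless beyond demonstrating the interface).
print(probability_ai_generated(torch.rand(3, 224, 224)))
```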

Guo says the Cyber Crimes Center will use this tool to evaluate CSAM, and adds that Hive benchmarks its detection tools against each customer's specific use case.

The National Center for Missing and Exploited Children, which works to stop the spread of CSAM, did not respond in time for publication to questions about the effectiveness of such detection models.

In its filing, the government justifies awarding the contract to Hive without a competitive bidding process. Parts of the justification are redacted, but it mainly cites two points, which also appear in a Hive slide deck: a 2024 University of Chicago study that found Hive's AI detection tool outperformed four other detectors at identifying AI-generated art, and the company's deepfake-detection contract with the Pentagon. The trial will run for three months.
