

This article first appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, subscribe here.
What would it take to convince you that the long-predicted age of truth decay, in which AI-generated content deceives us, sways our views even when we spot the fakes, and erodes trust across society, has arrived? A story I published last week tipped the scales for me. It also made clear that the remedies we were promised for this problem are falling woefully short.
On Thursday, I reported the first confirmation that the US Department of Homeland Security, which oversees the country’s immigration agencies, is using AI video generators from Google and Adobe to create content for public distribution. The news fits a pattern of immigration agencies flooding social media with content promoting President Trump’s mass deportation campaign, some of which appears to be AI-generated (such as a video about “Christmas post mass deportations”).
But I heard two distinct kinds of reactions from readers, and together they reveal a lot about the epistemic crisis we find ourselves in.
One came from people who weren’t surprised, since on January 22 the White House had posted a digitally manipulated image of a woman being detained at an ICE protest, depicting her as distraught and tearful. Kaelan Dorr, deputy communications director at the White House, did not answer questions about whether the White House altered the image but tweeted, “The memes will persist.”
The second came from readers who saw no point in reporting that DHS was using AI to alter content it shares with the public, since news organizations were supposedly doing the same thing. As evidence, they pointed to the news network MS Now (formerly MSNBC), which aired an AI-altered image of Alex Pretti that made him appear more attractive, spawning countless viral clips this week, including one from Joe Rogan’s podcast. A case of both sides playing the same game? A spokesperson for MS Now told Snopes that the network aired the image without knowing it had been edited.
There is no good reason to lump these two cases of altered imagery into the same category, or to read them as proof that the truth no longer matters. In one, the US government circulated a plainly altered image to the public while dodging questions about whether the manipulation was deliberate; in the other, a news organization aired a photo it should have recognized as altered but then took steps to disclose the mistake.
What these reactions actually reveal is a flaw in how we collectively prepared for this moment. Warnings about the AI truth crisis rested on one central premise: that our inability to tell what is real will be our undoing, so we need tools to independently verify the truth. My two grim conclusions are that those tools are falling short, and that while verifying the truth still matters, it is no longer enough on its own to build the societal trust we were promised.
Take, for example, the excitement in 2024 around the Content Authenticity Initiative, co-founded by Adobe and backed by major tech companies, which aimed to attach labels to content disclosing where it came from, who made it, and whether AI was involved. Yet Adobe applies labels automatically only to content that is entirely AI-generated; otherwise, labeling is left to the creator’s discretion.
Even then, platforms like X, where the manipulated arrest photo was shared, can strip such labels anyway (a note that the image was altered was added by users). Platforms can also simply choose not to display the label: when Adobe launched the program, it said the Pentagon’s site for sharing official images, DVIDS, would display the labels as proof of authenticity, but a review of the site today turns up no such labels.
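The fragility here is structural: these credentials travel as signed metadata embedded in the file itself, so any pipeline that re-encodes an image can silently discard them. The Python sketch below illustrates that failure mode under stated assumptions: Pillow is installed, “credentialed.jpg” is a hypothetical local JPEG carrying C2PA Content Credentials, and the byte scan is a crude heuristic rather than real C2PA validation (the official c2pa tools parse and cryptographically verify the manifest).

```python
# Crude illustration of why provenance labels often don't survive platform pipelines.
from io import BytesIO

from PIL import Image  # pip install Pillow

def has_c2pa_marker(data: bytes) -> bool:
    """Heuristic check only: C2PA manifests are embedded in JUMBF boxes
    labeled "c2pa". A real verifier parses and cryptographically
    validates the manifest instead of scanning raw bytes."""
    return b"c2pa" in data

# "credentialed.jpg" is a hypothetical local JPEG carrying Content Credentials.
original = open("credentialed.jpg", "rb").read()
print("original carries credentials:", has_c2pa_marker(original))

# Simulate what many platforms do on upload: decode the pixels and re-encode.
# Re-encoding drops the metadata segments that held the manifest.
recompressed = BytesIO()
Image.open(BytesIO(original)).save(recompressed, format="JPEG", quality=85)
print("after re-encode:", has_c2pa_marker(recompressed.getvalue()))
```

Which is why the scheme only works if every hop in the chain, from camera or editing tool to platform, preserves and displays the credential; one lossy re-encode and the label is gone.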
Given how much engagement the White House’s photo received even after it was revealed to be AI-manipulated, I was struck by the findings of a timely new study published in the journal Communications Psychology. In it, participants watched a deepfake “confession” to a crime, and the researchers found that even when participants were told explicitly that the evidence was fabricated, they still relied on it when judging a person’s guilt. In other words, even when people learn that what they are seeing is completely fake, it continues to sway them emotionally.
“Transparency helps, but it isn’t sufficient on its own,” disinformation expert Christopher Nehring said recently of the study’s findings. “We need to formulate a new master plan for addressing deepfakes.”
AI tools for creating and editing content are getting more capable, easier to use, and cheaper, which is exactly why the US government is investing more in them. We were warned this was coming, but we prepared for a world in which the main threat was confusion. Instead, we are entering one where influence persists even after exposure, where skepticism is easily weaponized, and where establishing the truth no longer works as a reset button. And the defenders of truth are already far behind.
Update: This article was updated on February 2 with details about how Adobe applies its content authenticity labels.