Microsoft has devised a new strategy to distinguish between what is genuine and what is AI-generated on the internet.

Deceptive AI is an increasingly routine part of our digital lives. Some incidents are conspicuous, such as when White House representatives recently published an altered photograph of a protester in Minnesota and then ridiculed inquiries about it. At other times it slips quietly into social media feeds and racks up views, like the videos Russian propaganda campaigns are currently spreading to discourage Ukrainians from joining the military.

In light of this chaos, Microsoft has proposed a framework, shared with MIT Technology Review, outlining how to validate authenticity online. 

The company's AI safety research team recently assessed how well methods for documenting digital alterations hold up against some of today's most alarming AI advances, such as interactive deepfakes and widely available hyperrealistic models. It then proposed technical standards that AI developers and social media platforms could adopt.

To understand the standard Microsoft is advocating, imagine you own a Rembrandt painting and want to prove it is authentic. You might document its provenance with an extensive record of its origins and every sale. You could add a watermark that is imperceptible to the human eye yet detectable by a machine. And with a digital scan of the painting, you could compute a mathematical signature, akin to a fingerprint, based on the brushwork. If you displayed the painting at a museum, a skeptical viewer could check each of these pieces of evidence to confirm it as an original.
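The same three signals translate to digital content: a signed provenance record, a machine-detectable watermark, and a perceptual fingerprint of the content itself. Below is a minimal sketch of the first and third in Python, assuming Pillow is installed; the helper names, the HMAC-based signing, and the 64-bit average-hash fingerprint are simplified illustrations for this article, not Microsoft's methods or the C2PA manifest format, and watermark embedding is omitted because real schemes are vendor-specific.

```python
import hashlib
import hmac
import json

from PIL import Image  # pip install pillow

SIGNING_KEY = b"publisher-demo-key"  # stand-in for a real publisher signing key


def signed_provenance_record(image_path: str, creator: str, history: list[str]) -> dict:
    """Build a provenance manifest and sign it so later tampering is detectable."""
    with open(image_path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    manifest = {"creator": creator, "history": history, "content_sha256": content_hash}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def perceptual_fingerprint(image_path: str) -> int:
    """64-bit average-hash fingerprint: robust to re-encoding, sensitive to repainted regions."""
    pixels = list(Image.open(image_path).convert("L").resize((8, 8)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits


def fingerprints_match(a: int, b: int, max_hamming_distance: int = 5) -> bool:
    """Treat two fingerprints as matching if only a few of their 64 bits differ."""
    return bin(a ^ b).count("1") <= max_hamming_distance
```

The division of labor mirrors the Rembrandt analogy: the signed record travels with the file and breaks loudly if anything changes, while the fingerprint can still tie a stripped or re-encoded copy back to a known original.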

All these techniques are presently being applied to varying extents in the quest to verify online content. Microsoft scrutinized 60 different permutations of these approaches, simulating how each arrangement would endure under diverse failure scenarios—from metadata being erased to content being slightly modified or intentionally altered. The team then evaluated which combinations yield reliable results that platforms can confidently present to users, and which ones are so questionable that they risk creating more confusion than clarity. 
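To make that exercise concrete, here is a toy version of such a robustness matrix in Python. The failure scenarios, the survival assumptions, and the labeling logic are illustrative guesses for the sketch, not rules or results from Microsoft's report.

```python
# Toy robustness matrix: which verification signals survive which failure
# scenario, and what label a platform could still justify. The entries are
# illustrative assumptions, not findings from the Microsoft study.
SURVIVES = {
    "metadata_stripped":   {"provenance": False, "watermark": True,  "fingerprint": True},
    "benign_reencode":     {"provenance": False, "watermark": True,  "fingerprint": True},
    "targeted_pixel_edit": {"provenance": False, "watermark": False, "fingerprint": True},
}


def label(signals_used: set[str], scenario: str) -> str:
    """Return the strongest claim a platform can make with the surviving signals."""
    alive = {s for s in signals_used if SURVIVES[scenario][s]}
    if "provenance" in alive:
        return "verified provenance record"
    if alive:
        return "matched to a known original (weaker claim)"
    return "no reliable signal: better to show nothing than to guess"


if __name__ == "__main__":
    combination = {"provenance", "fingerprint"}  # one of the many possible pairings
    for scenario in SURVIVES:
        print(f"{scenario:>20}: {label(combination, scenario)}")
```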

The company's chief scientific officer, Eric Horvitz, says the initiative was prompted both by legislation, such as California's AI Transparency Act, which is set to take effect in August, and by rapid advances in AI that blend video and audio with remarkable realism.

“You might label this as self-regulatory,” Horvitz informed MIT Technology Review. However, it’s apparent he views this undertaking as enhancing Microsoft’s reputation: “We’re also aiming to be a preferred, trusted provider for those who seek to understand current world events.”

Nevertheless, Horvitz did not pledge to implement Microsoft's recommendations across its platforms. The company sits at the center of a vast AI content ecosystem: it runs Copilot, which can generate images and text; it operates Azure, the cloud service through which customers access OpenAI's and other leading AI models; it owns LinkedIn, one of the largest professional networks in the world; and it holds a substantial investment in OpenAI. Yet when asked about internal deployment, Horvitz stated, "Product teams and leaders throughout the organization participated in this study to guide product trajectories and frameworks, and our engineering units are responding to the findings of the report."

It is crucial to recognize that these tools have inherent limitations; just as they wouldn't tell you the meaning of your Rembrandt, they are not designed to determine whether content is accurate. They simply indicate whether it has been altered. That is a distinction Horvitz says he has to explain to lawmakers and others who question whether Big Tech can serve as a reliable fact-checker.

“It’s not about reaching any conclusions about what is true or false,” he stated. “It’s centered on developing labels that simply inform people about the origin of the content.”

Hany Farid, a professor at UC Berkeley specializing in digital forensics who did not participate in the Microsoft study, believes that if the industry were to adopt the company’s framework, it would significantly hinder the capability to mislead the public with manipulated content. Though skilled individuals or governments could potentially evade these tools, he argues that the new standard could eliminate a substantial portion of misleading information.

“I don’t think it resolves the issue, but I believe it considerably diminishes it,” he asserts.

Still, one could view Microsoft’s strategy as a manifestation of somewhat optimistic technological idealism. There is mounting evidence that individuals can be influenced by AI-generated content even when they recognize it as false. In a recent study on pro-Russian AI-generated footage regarding the war in Ukraine, comments identifying the videos as AI-produced garnered significantly less engagement than those treating them as authentic. 

“Are there individuals who, regardless of what you inform them, will hold onto their beliefs?” Farid questions. “Indeed.” However, he adds, “there exists a substantial majority of Americans and citizens globally who genuinely wish to ascertain the truth.”

That desire has not exactly spurred urgency among technology companies. Google began adding watermarks to content created by its AI tools in 2023, which Farid says has been useful in his investigations. Some platforms employ C2PA, a provenance standard Microsoft helped launch in 2021. But the full set of changes Microsoft proposes, however impactful they might be, could remain mere suggestions if they threaten the business models of AI developers or social media companies.

“If the likes of Mark Zuckerberg and Elon Musk believe that labeling something as ‘AI generated’ will diminish engagement, then they’re certainly disincentivized to adopt it,” Farid states. Platforms such as Meta and Google have previously indicated they would label AI-generated content, but an audit by Indicator last year revealed that only 30% of the test posts on Instagram, LinkedIn, Pinterest, TikTok, and YouTube were accurately recognized as AI-generated.

More substantial measures toward verification of content could arise from the various AI regulations currently in discussion worldwide. The European Union’s AI Act, along with proposed regulations in India and other regions, would mandate AI corporations to implement some form of disclosure for AI-generated materials. 

One priority for Microsoft, unsurprisingly, is to influence the development of these regulations. The company engaged in advocacy during the creation of California’s AI Transparency Act, which Horvitz mentioned made the requirements on tech companies for disclosing AI-generated content “more practical.”

Yet another pressing concern is what happens if the deployment of such content-verification technology is mishandled. Legislators are demanding verification tools, but these mechanisms are delicate. Labeling systems that are rushed, inconsistently applied, or frequently wrong could breed widespread distrust and undermine the entire effort. The researchers argue that in certain situations it may be better to present no information at all than to risk delivering an inaccurate judgment.

Subpar tools could also open new avenues for what the researchers call sociotechnical attacks. Picture someone taking an authentic image from a tense political event and using an AI tool to alter only a trivial portion of its pixels. When the image goes viral, platforms could label it simply as AI-altered, misleading viewers about the authentic event it captures. By combining provenance and watermarking techniques, however, platforms could clarify that the content was only partially modified with AI and indicate where the adjustments were made.
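As a rough illustration of that last point, the sketch below compares a suspect image against the original recovered via its provenance record and reports how much of it was changed. It uses Pillow and NumPy; the threshold and the label wording are arbitrary choices for the example, not part of Microsoft's framework.

```python
import numpy as np
from PIL import Image  # pip install pillow numpy


def changed_pixel_mask(original_path: str, suspect_path: str, threshold: int = 12) -> np.ndarray:
    """Boolean mask marking pixels that differ noticeably between the two images."""
    original = np.asarray(Image.open(original_path).convert("RGB"), dtype=np.int16)
    suspect = np.asarray(Image.open(suspect_path).convert("RGB"), dtype=np.int16)
    if original.shape != suspect.shape:
        raise ValueError("images must have identical dimensions to compare pixel by pixel")
    return np.abs(original - suspect).max(axis=2) > threshold


def describe(mask: np.ndarray) -> str:
    """Turn the mask into the kind of nuanced label the paragraph above describes."""
    fraction = float(mask.mean())
    if fraction == 0.0:
        return "no pixel-level changes detected"
    if fraction < 0.05:
        return f"partially modified ({fraction:.1%} of pixels changed)"
    return f"substantially modified ({fraction:.1%} of pixels changed)"
```

The mask itself could also be rendered as an overlay, so viewers see exactly which region was touched rather than a blanket "AI-altered" stamp.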

California's AI Transparency Act will be the first significant test of these tools in the United States, but enforcement may face obstacles: President Trump's executive order from late last year aims to restrict state AI regulations deemed "burdensome" for the industry. The administration has also generally set itself against efforts to counter disinformation, and last year, via DOGE, it canceled funding related to misinformation. Official government channels during the Trump era have also disseminated AI-modified content (MIT Technology Review reported that the Department of Homeland Security, for instance, uses video generation tools from Google and Adobe to create content it shares with the public).

I asked whether Horvitz is as concerned about fake content from this source as he is about content from the broader social media landscape. At first he declined to comment, but he later remarked, "Governments have not been excluded from the sectors responsible for various forms of manipulative disinformation, and this issue is global."
