For two weeks, a Discord group has been able to access the Mythos model.


Anthropic's Mythos AI model, a powerful cybersecurity tool that the company warned could be dangerous if misused, has reportedly been accessed by a "limited group of unauthorized users," according to Bloomberg. An anonymous member of the group, described only as "a third-party contractor for Anthropic," told the outlet that members of a private online forum gained access to Mythos through a combination of strategies, leveraging the contractor's access along with "well-known internet sleuthing tools."
The Claude Mythos Preview is a new general-purpose model capable of identifying and exploiting vulnerabilities "in every major operating system and every major web browser when prompted by the user," according to Anthropic. Official access to the model is limited to a select group of companies as part of the Project Glasswing initiative, including Nvidia, Google, Amazon Web Services, Apple, and Microsoft; governments are also evaluating the technology. Anthropic currently has no plans to release the model publicly, citing concerns that it could be weaponized.
"We are looking into a claim of unauthorized access to Claude Mythos Preview via one of our third-party vendor networks," an Anthropic spokesperson said in a statement to Bloomberg. The company says it has found no evidence that the unauthorized access is affecting its systems or extends beyond the third-party vendor's network.
The model was allegedly accessed without permission on April 7th, the same day Anthropic announced it was providing Mythos to a small number of companies for evaluation. The group responsible has not been publicly identified, but Bloomberg reports that its members belong to a Discord channel dedicated to tracking down information about unreleased AI models.
The group accessed Mythos by using insights about Anthropic's other model formats, obtained from a recent Mercor data breach, to make "an informed guess" about where the model was hosted. Since gaining access, members have been using Mythos regularly, supplying screenshots and a live demo of the model to Bloomberg as proof, though they reportedly avoided cybersecurity-related tasks to evade detection by Anthropic. The group has also accessed other unreleased Anthropic AI models, according to Bloomberg.