The AI model that Anthropic billed as too dangerous to release has reportedly been accessed by an unauthorized third party, and the incident raises concerns about the future of cybersecurity.
The Mythos model was reportedly accessed by a handful of users in a private Discord chat on the day it was announced publicly, Bloomberg reported. Earlier this month, the group gained access to the program in part because one of its members is a third-party contractor for Anthropic, according to Bloomberg. Using this access, the group was able to guess where the model was located, drawing on knowledge of Anthropic's past practices that a separate group of hackers had previously obtained from AI training startup Mercor.
Although the group has not been using the model for cyberattacks, it has been using the program continuously since gaining access and can still reach it, the outlet reported.
Anthropic did not immediately respond to Fortune’s request for comment. A spokesperson from Anthropic told Bloomberg the company was “investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments.”
The fact that the model was leaked so quickly doesn't surprise David Lindner, the chief information security officer at Contrast Security and a 25-year industry veteran. Even though Anthropic intentionally limited the model to a small group of 40 companies—including Microsoft, Apple, and Google—to beef up their security ahead of a wider release, thousands of people likely had access to the program across those companies, which made a leak nearly inevitable, he said.
“It was bound to happen,” Lindner said. “The more they add to this elite group, the more likely it was to get released to someone who shouldn’t probably have access to it.”
Anthropic claims its Mythos model is more adept at finding cybersecurity vulnerabilities than previous versions. The company was able to use the program, which has not been widely released, to find a 27-year-old security vulnerability in OpenBSD, an operating system known for its security. Mozilla on Tuesday also said it used a preview of the model to identify and patch 271 vulnerabilities in its Firefox web browser.
And yet, Mythos’ release has been plagued by security breaches from the start. Fortune was the first to report on the model’s existence thanks to a security lapse that exposed details about the large language model in a publicly accessible database.
For Lindner, this most recent unauthorized access suggests it's likely that U.S. adversaries already have access to the technology, which could put U.S. companies and other systems at risk of attack.
“If some group—some random Discord online forum—got access to it, it’s already been breached by China,” Lindner told Fortune.
Although Lindner is still unsure how much of Mythos’ supposed danger is real and how much is marketing hype—OpenAI’s Sam Altman this week called Anthropic’s promotion of Mythos “fear-based marketing”—he says it’s clear that cybersecurity professionals, or defenders, need to be ready for a new world of AI attacks.
“The real thing is there’s a real compression of timelines here for defenders,” he said.
AI is unique in its ability to execute cyberattacks because it never gets tired, said Lindner. It can relentlessly probe a weak spot in a company’s security system, whereas a human may eventually give up. It also empowers less experienced developers to carry out cyberattacks, in part by drawing on the extensive documentation about previous exploits available on the web and using it to adapt an AI model’s attacks to specific situations.
“It’s the folks that have some sort of [developer] background or some sort of technical background that may have had some limitations in the past of getting over things or taking too long to do stuff, it makes this stuff way easier now,” he said.
Lindner said the fact that the program was reportedly accessed via a third-party contractor means that, even more than before, companies need to limit who has access to their most vital systems. The rapid rise of AI as a tool for cyberattacks could disproportionately affect smaller companies, which may not be able to keep up with the increasing complexity of AI-fueled attacks, said Lindner. Those that refuse to even touch AI and carry on as before are at even greater risk, he said.
“AI is not a golden ticket, but if you’re not taking advantage of it on the defender side, there is no chance, none, that you are going to be able to keep up with the offensive side,” he said.
Disclaimer: This story is auto-aggregated by a computer programme and has not been created or edited by DOWNTHENEWS. Publisher: fortune.com