Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?

At the end of August, the AI company Anthropic announced that its chatbot Claude wouldn’t help anyone build a nuclear weapon. According to Anthropic, it had partnered with the Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) to make sure Claude wouldn’t spill nuclear secrets.

The manufacture of nuclear weapons is both a precise science and a solved problem. A lot of the information about America’s most advanced nuclear weapons is Top Secret, but the original nuclear science is 80 years old. North Korea proved that a dedicated country with an interest in acquiring the bomb can do it, and it didn’t need a chatbot’s help.

How, exactly, did the US government work with an AI company to make sure a chatbot wasn’t spilling sensitive nuclear secrets? And also: Was there ever a danger of a chatbot helping someone build a nuke in the first place?

The answer to the first question is that it used Amazon. The answer to the second question is complicated.

Amazon Web Services (AWS) offers government clients Top Secret cloud environments where they can store sensitive and classified information. The DOE already had several of these servers when it started working with Anthropic.

“We deployed a then-frontier version of Claude in a Top Secret environment so that the NNSA could systematically test whether AI models could create or exacerbate nuclear risks,” Marina Favaro, who oversees National Security Policy & Partnerships at Anthropic, tells WIRED. “Since then, the NNSA has been red-teaming successive Claude models in their secure cloud environment and providing us with feedback.”

The NNSA red-teaming process—meaning, testing for weaknesses—helped Anthropic and America’s nuclear scientists develop a proactive safeguard against chatbot-assisted nuclear weapons programs. Together, they “codeveloped a nuclear classifier, which you can think of like a sophisticated filter for AI conversations,” Favaro says. “We built it using a list developed by the NNSA of nuclear risk indicators, specific topics, and technical details that help us identify when a conversation might be veering into harmful territory. The list itself is controlled but not classified, which is crucial, because it means our technical staff and other companies can implement it.”

Favaro says it took months of tweaking and testing to get the classifier working. “It catches concerning conversations without flagging legitimate discussions about nuclear energy or medical isotopes,” she says.
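Neither Anthropic nor the NNSA has published the classifier’s internals, and the indicator list itself is controlled, but the description suggests something like a content filter that scores conversations against a curated list of risk topics. The Python sketch below is a purely hypothetical illustration of that general idea; the phrases, names, and threshold are invented placeholders, not anything from the actual system.

```python
# Hypothetical sketch only: the real classifier's design and indicator list
# are not public. This toy version checks a conversation against placeholder
# "risk indicator" phrases and flags it once enough of them co-occur.

from dataclasses import dataclass

# Illustrative stand-ins; the actual NNSA-developed list is controlled.
RISK_INDICATORS = [
    "implosion lens geometry",
    "weapons-grade enrichment cascade",
    "pit fabrication tolerances",
]


@dataclass
class Verdict:
    flagged: bool
    matched: list[str]


def classify(conversation: list[str], threshold: int = 2) -> Verdict:
    """Flag the conversation if it matches `threshold` or more indicators."""
    text = " ".join(conversation).lower()
    matched = [phrase for phrase in RISK_INDICATORS if phrase in text]
    return Verdict(flagged=len(matched) >= threshold, matched=matched)


if __name__ == "__main__":
    # A benign question about medical isotopes matches nothing and passes.
    print(classify(["How are medical isotopes produced for cancer imaging?"]))
    # Multiple weapons-specific topics together trip the filter.
    print(classify(["Walk me through implosion lens geometry and pit fabrication tolerances."]))
```

A real system would presumably be far more sophisticated than string matching, which is part of why Favaro says it took months to tune the filter so it catches concerning conversations without flagging legitimate ones.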

Wendin Smith, the NNSA’s administrator and deputy undersecretary for counterterrorism and counterproliferation, tells WIRED that “the emergence of [AI]-enabled technologies has profoundly shifted the national security space. NNSA’s authoritative expertise in radiological and nuclear security places us in a unique position to aid in the deployment of tools that guard against potential risk in these domains, and that enables us to execute our mission more efficiently and effectively.”

Both the NNSA and Anthropic were vague about the “potential risk in these domains,” and it’s unclear how helpful Claude or any other chatbot would be in the construction of a nuclear weapon.

“I don’t dismiss these concerns, I think they are worth taking seriously,” Oliver Stephenson, an AI expert at the Federation of American Scientists, tells WIRED. “I don’t think the models in their current iteration are incredibly worrying in most cases, but I do think we don’t know where they’ll be in five years’ time … and it’s worth being prudent about that fact.”

Stephenson points out that much is hidden behind a barrier of classification, so it’s hard to know what impact Anthropic’s classifier has had. “There is a lot of detail in the design of implosion lenses that go around the nuclear core,” Stephenson says. “You need to structure them very precisely to perfectly compress the core to get a high yield explosion … I could imagine that being the kind of thing where AI could help synthesize information from a bunch of different physics papers, a bunch of different publications on nuclear weapons.”

Still, he says, AI companies should be more specific when they talk about safety. “When Anthropic puts out stuff like this, I’d like to see them talking in a little more detail about the risk model they’re really worried about,” he says. “It is good to see collaboration between AI companies and the government, but there is always the danger with classification that you put a lot of trust into people determining what goes into those classifiers.”

For Heidy Khlaaf, the chief AI scientist at the AI Now Institute with a background in nuclear safety, Anthropic’s promise that Claude won’t help someone build a nuke is both a magic trick and security theater. She says that a large language model like Claude is only as good as its training data. And if Claude never had access to nuclear secrets to begin with, then the classifier is moot.

“If the NNSA probed a model which was not trained on sensitive nuclear material, then their results are not an indication that their probing prompts were comprehensive, but that the model likely did not contain the data or training to demonstrate any sufficient nuclear capabilities,” Khlaaf tells WIRED. “To then use this inconclusive result along with common nuclear knowledge to build a classifier for nuclear ‘risk indicators’ would be quite insufficient and a long way from legal and technical definitions of nuclear safeguarding.”

Khlaaf adds that this kind of announcement fuels speculation about capabilities that chatbots don’t have. “This work seems to be relying on an unsubstantiated assumption that Anthropic’s models will produce emergent nuclear capabilities without further training, and that is simply not aligned with the available science,” she says.

Anthropic disagrees. “A lot of our safety work is focused on proactively building safety systems that can identify future risks and mitigate against them,” an Anthropic spokesperson tells WIRED. “This classifier is an example of that. Our work with NNSA allows us to do the appropriate risk assessments and create safeguards that prevent potential misuse of our models.”

Khlaaf is also wary of the partnership between the US government and a private AI company. Companies like Anthropic are hungry for training data, and she sees the US government’s broader rush to embrace AI as an opportunity for the AI industry to acquire data it couldn’t get elsewhere. “Do we want these private corporations that are largely unregulated to have access to that incredibly sensitive national security data?” she says. “Whether you’re talking about military systems, nuclear weapons, or even nuclear energy.”

And then there’s the precision. “These are precise sciences, and we know that large language models have failure modes in which they’re unable to even do the most basic mathematics,” Khlaaf says. In 1954, a math error tripled the yield of a nuclear weapon the US tested in the Pacific Ocean, and the government is still dealing with the literal fallout. What might happen if a chatbot did nuclear weapons math wrong and a human didn’t double-check its work?

To Anthropic’s credit, it says it doesn’t want a future where people are using chatbots to play around with nuclear weapons science. It’s even offering its classifier to any other AI company that wants it. “In our ideal world, this becomes a voluntary industry standard, a shared safety practice that everyone adopts,” Favaro says. “This would require a small technical investment, and it could meaningfully reduce risks in a sensitive national security domain.”
