What AI Models for War Actually Look Like

Anthropic might have misgivings about giving the US military unfettered access to its AI models, but some startups are building advanced AI specifically for military applications.

Smack Technologies, which announced a $32 million funding round this week, is developing models that it says will soon surpass Claude’s capabilities when it comes to planning and executing military operations. And unlike Anthropic, the startup appears disinclined to ban specific types of military use.

“When you serve in the military, you take an oath you’re going to serve honorably, lawfully, in accordance with the rules of war,” says CEO Andy Markoff. “To me, the people who deploy the technology and make sure it is used ethically need to be in a uniform.”

Markoff is hardly a typical AI executive. A former commander in the US Marine Forces Special Operations Command, he helped execute high-stakes special forces operations in Iraq and Afghanistan. He cofounded Smack with Clint Alanis, another ex-Marine, and Dan Gould, a computer scientist who previously worked as the VP of technology at Tinder.

Smack’s models learn to identify optimal mission plans through a process of trial and error, similar to how Google DeepMind trained AlphaGo, the program that defeated a Go world champion in 2016. In Smack’s case, training involves running the model through a variety of war game scenarios while expert analysts provide a reward signal telling the model whether its chosen strategy will pay off. The startup may not have the budget of a conventional frontier AI lab, but it’s spending millions to train its first AI models, Markoff says.
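In reinforcement-learning terms, the analysts’ signal plays the role of a reward. Below is a deliberately minimal sketch of that kind of trial-and-error loop, not a description of Smack’s actual system: the scenarios, the candidate plans, and the analyst_reward stub are all invented for illustration, and the update rule is a simplified bandit-style preference adjustment rather than the large-scale training a real lab would run.

```python
import math
import random

# Hypothetical scenarios and candidate mission plans, for illustration only.
SCENARIOS = ["urban", "open_terrain"]
PLANS = ["night_insertion", "daylight_raid", "standoff_strike"]

# One learned preference (logit) per scenario/plan pair, starting uniform.
logits = {(s, p): 0.0 for s in SCENARIOS for p in PLANS}
LEARNING_RATE = 0.1


def sample_plan(scenario):
    """Pick a plan with probability proportional to exp(logit) for this scenario."""
    weights = [(p, math.exp(logits[(scenario, p)])) for p in PLANS]
    total = sum(w for _, w in weights)
    r = random.uniform(0.0, total)
    for plan, w in weights:
        r -= w
        if r <= 0.0:
            return plan
    return weights[-1][0]  # guard against floating-point rounding


def analyst_reward(scenario, plan):
    """Stand-in for the expert analyst's signal: +1 if the chosen strategy
    would pay off in this scenario, a small penalty otherwise. In practice
    this judgment would come from human reviewers (or a reward model trained
    on their ratings), not a hardcoded table."""
    best = {"urban": "night_insertion", "open_terrain": "standoff_strike"}
    return 1.0 if best[scenario] == plan else -0.2


# Trial-and-error loop: play out a scenario, collect the analyst's reward,
# and nudge the chosen plan's preference up or down (a simplified
# bandit-style update, not full policy-gradient RL).
for _ in range(2000):
    scenario = random.choice(SCENARIOS)
    plan = sample_plan(scenario)
    logits[(scenario, plan)] += LEARNING_RATE * analyst_reward(scenario, plan)

for scenario in SCENARIOS:
    ranked = sorted(PLANS, key=lambda p: -logits[(scenario, p)])
    print(f"{scenario}: best learned plan -> {ranked[0]}")
```

Scaled up, the same idea replaces the lookup table with human analysts or a learned reward model and the handful of named plans with plans the model generates itself.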

Battle Lines

Military use of AI has become a hot topic in Silicon Valley after officials at the Department of Defense went head-to-head with Anthropic executives over the terms of a roughly $200 million contract.

One of the issues that led to the breakdown, which ended with Defense Secretary Pete Hegseth declaring Anthropic a supply chain risk, was Anthropic’s desire to limit the use of its models in autonomous weapons.

Markoff says the furor obscures the fact that today’s large language models are not optimized for military use. General-purpose models like Claude are good at summarizing reports, he says, but they’re not trained on military data and lack a human-level understanding of the physical world, making them ill-suited to controlling physical hardware. “I can tell you they are absolutely not capable of target identification,” Markoff claims.

“No one that I’m aware of in the Department of War is talking about fully automating the kill chain,” he claims, referring to the steps involved in making decisions on the use of deadly force.

Mission Scope

The US and other militaries already use autonomous weapons in certain situations, including in missile defense systems that need to react at superhuman speeds.

“The US and over 30 other states are already deploying weapon systems with varying degrees of autonomy, including some I would define as fully autonomous,” claims Rebecca Crootof, an authority on the legal issues surrounding autonomous weapons at the University of Richmond School of Law.

In the future, specialized models like the one Smack is working on could be used for mission planning, too, according to Markoff. The company’s models are meant to help commanders automate much of the drudgery of sketching out mission plans, work that is still typically done by hand with whiteboards and notepads, he says.

If the US went to war with a “near peer” such as Russia or China, Markoff says, automated decision-making could give the US much-needed “decision dominance.”

But it’s still an open question whether AI can be relied on in such circumstances. In one recent experiment, run by a researcher at King’s College London, LLMs showed an alarming tendency to escalate nuclear conflicts in war games.

Recent conflicts, particularly the war between Russia and Ukraine, have highlighted the importance of low-cost semi-autonomous systems built with commercial hardware and software. In 2023, I wrote about the US Navy testing new kinds of autonomous systems in the Persian Gulf as a way to identify drones operated by Iranian-backed insurgents, among other things.

Some experts say there need to be clearer red lines around how the military deploys AI, especially given the open-ended terms of the contracts currently being signed.

Anna Hehir, head of military AI governance at the Future of Life Institute, a nonprofit opposed to the development of AI-controlled autonomous weapons, says that even if Anthropic’s models are not used to control fully autonomous systems now, they could be integrated into the kill chain in problematic ways.

“AI is too unreliable, unpredictable and unexplainable to be used in such high-stakes scenarios,” Hehir claims. “These systems cannot recognize who is a combatant and who is a child, let alone the act of surrender.”

Markoff says that limits on autonomy might be necessary because of the chaos that comes with conflict. “I have never executed an operation in the real world that even went 50 percent according to plan, and that’s not going to change,” he says.

