Trump declares war on one of his weapons

Even as Donald Trump directed government agencies to “immediately cease” all use of Anthropic’s Claude artificial intelligence tools last week, those tools were being used to launch the assault on Iran.

Deeply embedded within the US military – it was the only AI technology approved for classified use – Claude is used for operational planning, to collect and collate intelligence, to identify targets, simulate battle scenarios and undertake cyber operations.

Also used in the kidnapping of Venezuela’s president, Nicolas Maduro, Claude is so entrenched within the US Defence Department – Pete Hegseth’s “Department of War” – that the administration has, despite the immediacy of Trump’s directive, given the department six months to phase it out and replace it with Anthropic rivals’ models.

The confrontation between the Pentagon and Anthropic relates to an attempt by the department to rewrite a contract it has with the AI company.

The company, whose co-founder, Dario Amodei, has been outspoken about the need for regulation of AI, has insisted on two red lines: that its model isn’t used to conduct mass surveillance of Americans or to control autonomous weapons without human involvement in the “kill chain.”

Given that both those applications would appear to be illegal, and that the Pentagon has insisted that it wouldn’t use AI unlawfully, what’s the problem?

The Pentagon is presenting its stance as an issue of principle. It doesn’t want private contractors constraining the use of products that it has bought, arguing that it doesn’t allow companies that supply it with missiles to tell it where it can and can’t deploy them.

Anthropic believes that AI is too immature and dangerous to be deployed without the restrictions it is insisting on and argues that current laws aren’t keeping pace with the rate at which AI capabilities are developing.

Amodei says, for instance, that AI makes it possible to lawfully collect individually innocuous pieces of data that, combined, would provide a comprehensive picture of anyone’s life, automatically and at massive scale.

There’s another element at play.

Amodei has been critical of Trump, supported Kamala Harris at last year’s election and has been vocal in his calls for regulation of AI even as Trump’s executive orders have torn up the relatively light safety and privacy obligations that Joe Biden had imposed on the sector.

Trump, in announcing the ban on Anthropic’s technology, labelled it a “radical left” company, its executives “leftwing nutjobs” who had no idea of what the real world is about and who were out of control. This administration declares war on its opponents and critics.

Hegseth has directed his department to establish benchmarks for “model objectivity” as a procurement criterion, saying “out with Utopian idealism, in with hard-nosed realism,” and that “diversity, equity and inclusion and social ideology” have no place in the Department of War.

The action taken against Anthropic is unprecedented.

Not only will the Pentagon cease dealing with the company, but it has declared it a “supply chain risk.”

Hegseth, in a social media post, wrote that no contractor, supplier or partner with the US military can conduct any commercial activity with Anthropic, which could prevent the company from working with partners within the incestuous AI ecosystem and force companies like Nvidia, Amazon and Google to divest shareholdings and end any commercial relationship they have with Anthropic if they want to do deals with the Pentagon.

It could, some AI analysts have said, be a death blow for a company regarded as being at the forefront of AI development. Anthropic’s release of its Claude Code tools last month wiped more than $US1 trillion off the value of software companies.

The administration has gone beyond ending the relationship with Anthropic and trying to intimidate those it does business with. Trump has also threatened to use the Cold War-era Defence Production Act to force it to supply Claude without any caveats.

In other words, by declaring it a supply chain risk and ordering other companies not to deal with it on pain of losing their own defence contracts, the administration is saying it is a risk to national security. At the same time, however, it is signalling that Claude is so vital to national security that it must be made available without any restrictions.

Amodei has said that the company cannot, in good conscience, accede to the administration’s demands and will fight it in the courts, although so far the only public information on the stand-off has come from Trump’s and Hegseth’s social media posts.

He sees, at this stage of AI’s development, potential risks to humanity if Claude were deployed in a fully autonomous mode, and a potential undermining of Fourth Amendment privacy protections if it were used to collect information on Americans without changes to existing privacy laws.

An obvious solution to the conflict would be for Anthropic to simply cease dealing with the Pentagon and for the Pentagon to enlist other AI companies to replace it – which it has begun doing. It has already signed a deal with OpenAI and is close to one with Elon Musk’s xAI.

A $US200 million ($281 million) contract is immaterial for Anthropic, which just raised $US30 billion at a $US380 billion ($535 billion) valuation. It would happily walk away, but the administration is not prepared to allow that to happen – which signals that Claude is the only product currently available that can deliver what the Pentagon wants.

To his credit, Amodei is standing his ground, because the issues at stake couldn’t be more critical and he clearly mistrusts this administration, with its record of simply ignoring US laws and the Constitution, to use its AI tools responsibly.

He clearly doesn’t believe Claude has been developed to the point where it is reliable enough to trust to control autonomous weapons without human oversight.

Claude has a reputation for experiencing fewer “hallucinations” – inaccurate or fabricated responses – than its rivals, but it nevertheless does generate them. AI is, at this moment, an imperfect technology.

Anthropic’s stance highlights the inadequacy of current laws to protect against those imperfections, against abuses of the technology’s capabilities, or against the risks of deploying AI applications before those risks are understood.

Trump, having revoked Biden’s executive orders that provided some regulation of AI (directed more at transparency about what the companies were doing than at restricting their development), has also issued orders that purport to override state laws the administration sees as restricting AI’s development. It wants a “minimally burdensome” national framework for AI.

AI is the most consequential technology of our time. It threatens to change the way we work and live, the relationships between governments and their citizens and the way wars are conducted. There ought to be some guardrails.

Instead, there is a global race occurring (even the Europeans, who have some of the more stringent regulations, have been talking about winding them back), despite an awareness that the technology isn’t fully developed and carries significant risks.

There’s significant support for Anthropic’s stance from those at the coalface of AI development – those building the models rather than the billionaires profiting from them – which suggests that those who know the technology best are very mindful of those risks.

Rather than leave it to people like Trump and Hegseth – or Amodei – to develop the moral framework for how AI might be deployed, it’s a situation crying out for lawmakers to get involved and create a 21st-century framework of laws for a 21st-century technology which, while promising so much opportunity, carries so much latent risk.

Stephen Bartholomeusz is one of Australia’s most respected business journalists. He was most recently co-founder and associate editor of the Business Spectator website and an associate editor and senior columnist at The Australian.

Publisher: www.smh.com.au