By Vitaly Ryumshin, journalist and political analyst
While Russia closely follows the negotiations over Ukraine and the ongoing Telegram saga, a different drama is unfolding across the Atlantic, one that feels less like geopolitics and more like a science fiction thriller. Except this time, it isn’t fiction.
At the center of the story is Claude, an AI system developed by the American company Anthropic. According to media reports, the US military used it to plan an operation aimed at capturing Venezuelan President Nicolas Maduro. The use of AI in serious military planning is striking in itself. But the scandal that followed is far more revealing.
Anthropic, it turns out, holds a strict ideological position: Its AI systems are not supposed to be used for warfare or mass surveillance. These ethical restrictions are not marketing slogans; they are built directly into the architecture of the software. The company applies these limits internally and expects its clients to do the same.
The Pentagon, unsurprisingly, sees things differently.
The US Department of War reportedly used Claude without informing Anthropic of its intended purpose. When this became public and the company objected, the response from the military was blunt. Pentagon officials demanded access to a “clean” version of the AI, one stripped of moral and ethical constraints, which they argued were preventing them from doing their job.
Anthropic refused. In response, US Secretary of War Pete Hegseth publicly complained that the Pentagon does not need neural networks “that can’t fight” and threatened to label the company a “supply chain threat.” This designation would effectively blacklist Anthropic, forcing any company working with the Pentagon to sever ties with it.