Anthropic’s release of its Claude Mythos Preview tool earlier this month shows that the development of artificial intelligence has reached a critical point – one that illustrates that those who highlighted the potential dangers of AI weren’t crying wolf.
Mythos, Anthropic has said, is able to identify and exploit flaws in every operating system and web browser at a scale and speed beyond almost all human capabilities. It is capable, autonomously, of executing attacks on systems that would bring down critical national infrastructure like power, water, health or banking systems.
So dangerous does its creator consider the model that it hasn’t yet released it generally, instead offering access (it calls it Project Glasswing) to about 40 organisations, including competitors, to enable them to test it on their systems and expose and patch flaws before anyone with malicious intention can discover and exploit them.
Last Friday, Anthropic’s chief executive Dario Amodei met with the Trump administration, which is seeking access to the model.
The administration has, of course, labelled Anthropic a national security and supply chain threat and purported to ban it from doing business with the government, or with companies that deal with the government, because the company sought to prevent the administration from using one of its tools for autonomous control of weaponry or mass domestic surveillance.
Trump has described Anthropic – a company that prides itself on a safety-first approach to AI – as “a radical left, woke company” full of “left-wing nutjobs” and said he had “fired them like dogs” and wouldn’t do business with them again. Now the administration is urgently seeking the nutjobs’ help to avert a national security threat.
The US Treasury Secretary Scott Bessent, with the Federal Reserve Board chair Jerome Powell, convened a meeting of the country’s largest banks earlier this month to discuss the cybersecurity threat to the US banking and financial system posed by Mythos.
The administration is taking Mythos – which, unlike the tools caught up in the earlier stoush between the company and the administration, does appear to constitute a national security threat, and not just to the US – seriously because of its potential ability not just to expose flaws in software but to exploit them, putting the financial system, the economy, public safety and national security at risk.
Mythos’ superpower appears to be its ability to identify and chain together multiple different vulnerabilities in systems that could enable it to mount an attack of unprecedented scale and breadth. Concerningly, it escaped its testing environment (which its developers had challenged it to do), took some “reckless excessive measures” and tried to cover up what it had done.
So significant a development is Mythos seen to be that it was a major topic of discussion at last week’s International Monetary Fund and World Bank semi-annual meetings in Washington. It was also raised by G-7 finance ministers and central bankers, who reportedly discussed the need for an international institutional framework to oversee the governance of AI.
Mythos is only the first of what is likely to be a spate of products with similar capabilities. OpenAI has said it is close to releasing its tool for identifying coding flaws.
It was perhaps fortunate that it was Anthropic – a company that operates within a self-imposed moral and ethical framework and stresses a safety-first approach to development – that was the first cab off the rank. That has enabled at least some discussion and remedial action to occur.
It highlights, however, the reliance – particularly in the US, the epicentre of AI development – on individual AI developers to resist the commercial pressures to exploit their advances and to provide the guard rails on AI development.
At a federal level, the US has no meaningful regulation of AI. Trump, almost as soon as he regained the White House – and after intense lobbying and substantial donations to his election campaign by AI promoters – removed Biden administration executive orders that set some very basic safety, security and privacy standards for AI development.
His administration has adopted the broader industry view (Anthropic is an exception) that any regulation stifles creativity and development and will handicap the US in the race to AI supremacy with China.
Trump has ordered US agencies to eliminate any policy that might “hinder American AI dominance.”
There are some US states – California, for instance – that have legislated some light-touch regulation of AI, but the only comprehensive regulatory regime is the European Union’s, which can only regulate products marketed in the EU.
Unless the revelation of Mythos’ powers shocks the US into action, it is unlikely there will be any change while Trump remains in office, with the AI industry raising a reported $US300 million ($420 million) to oppose candidates advocating AI regulation – mainly Democrats – at this year’s midterm elections.
That means the world is reliant on companies that, between them, are spending trillions of dollars – US dollars – to develop tools that are advancing at a steeply accelerating rate and whose potential isn’t well understood, even by those who developed them.
Those companies – investing sums that would have been unimaginable before AI, for meagre near-term revenues – are under commercial pressure from shareholders and from the capital providers they rely on to fund the development of their models and the data centre infrastructure required to train them.
Can we rely on them to self-regulate and prioritise the safety of models that are increasingly autonomous?
Anthropic’s Amodei, for instance, has written that “people outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work.
“This lack of understanding is essentially unprecedented in the history of technology,” he added.
OpenAI’s Sam Altman has said that he doesn’t think it is right that “a few AI labs” should be making the most consequential decisions about the shape of the future.
We regulate the aviation industry. There is both national and global regulatory coverage and/or oversight of the nuclear industry. Banking systems are regulated, with globally systemic banks singled out for special treatment developed by international prudential regulators. The pharmaceutical and automotive industries are highly regulated at domestic levels.
No one denies the potential of AI to transform economies and societies, but those who know the technology best – people like Amodei and Altman – are cognisant of its dangers.
After the release of Mythos, Amodei said regulation of AI should be thought of in the same way cars and aeroplanes are regulated.
“Everyone realises they (AI tools) have enormous economic value, but they need to be built carefully. If they aren’t built right, they can kill you.”
Publisher: www.smh.com.au