When Treasury Secretary Scott Bessent and Federal Reserve Chair Jay Powell convened the chief executives of leading U.S. banks earlier this month to discuss Anthropic’s latest model, Mythos, they signalled a shift in how artificial intelligence is being understood in finance. This was not a meeting about innovation but a warning: that models capable of identifying and exploiting vulnerabilities could pose a material risk to core financial infrastructure.
That concern is justified. But the focus remains too narrow.
In recent years, in discussions with leading financial institutions, I have seen how quickly concern rises once the adversarial uses of AI are understood. Yet the translation into action remains slow and uneven. Much of the current attention is focused on cyber risk. That is a serious threat, but it is neither the only one nor the most immediate.
Alongside the risks highlighted by Mythos, a parallel threat is already unfolding at scale. It does not depend on new frontier models, but on AI capabilities that are already widely available. And unlike cyber attacks, which require access to systems, this threat operates by targeting people.
What Has Changed Is Not Just Sophistication — It’s Economics
Artificial intelligence has made fraud dramatically cheaper, easier to execute, and far more scalable. What once required time and coordination can now be automated and deployed at industrial scale. AI systems can generate thousands of convincing messages, voices and videos in seconds, each tailored to a specific individual. This is not incremental. It is structural.
Fraud has shifted from a manual activity to a machine-driven one. Hyper-personalised social engineering campaigns, often powered by AI agents, now operate across multiple channels, jurisdictions, and identities. They impersonate executives, advisers, or family members with increasing credibility, creating urgency and inducing authorised transfers.
In these scenarios, the system is not breached. It is bypassed.
The System Isn’t Hacked. The Customer Is Convinced.
Customers are not necessarily hacked. They are convinced. And because transactions are authorised, existing safeguards are often ineffective. Biometric checks can be defeated by deepfakes. Rule-based monitoring is calibrated to detect human fraudsters, not coordinated networks of AI agents operating at machine speed.
This creates a fundamentally different type of risk.
Unlike cyber attacks, which tend to be episodic and visible, AI-enabled fraud operates as a continuous and distributed leakage of funds across millions of transactions. It is a creeping threat: easier to execute, faster to scale, and often invisible until losses become material. The trajectory points toward trillions of dollars in losses in the coming years.
The Risk Is Not Only Financial
If the public comes to believe that financial institutions cannot protect customers from manipulation and fraud, trust in the system will erode. The consequences will extend beyond losses. Friction will rise, customers will hesitate, and confidence in banks’ ability to safeguard money may weaken in ways no less damaging than those caused by cyber threats.
This is not a greater threat than cyber risk. It is a parallel one. And it deserves similar attention.
A Defense Redesign, Not an Incremental Fix
Most institutions still rely on fragmented data, legacy monitoring and human-led analysis that cannot keep pace with adaptive, AI-driven threats. A meaningful response requires architectural redesign: real-time, AI-native detection; integration of fraud, AML and behavioural signals; and the ability to intervene at the point of transaction, including in authorised payments.
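To make the idea concrete, here is a minimal sketch, in Python, of what intervening at the point of transaction might look like. It is purely illustrative: the signal names, weights, and thresholds are assumptions invented for exposition, not any institution’s actual architecture, and a production system would replace the fixed weighted sum with trained models.

```python
# Illustrative only: a toy scorer that fuses fraud, AML, and behavioural
# signals for a single in-flight payment. All names, weights, and
# thresholds are hypothetical assumptions, not a real bank's design.
from dataclasses import dataclass

@dataclass
class TransactionSignals:
    fraud_model_score: float   # e.g. deepfake / social-engineering indicators, 0..1
    aml_model_score: float     # e.g. mule-network or layering indicators, 0..1
    behaviour_anomaly: float   # deviation from the customer's usual patterns, 0..1
    customer_authorised: bool  # True for authorised push payments

def assess(tx: TransactionSignals) -> str:
    """Return an intervention decision for one in-flight payment."""
    # Fuse the three signal families into one score; in practice this
    # would be a trained model, not a fixed weighted sum.
    risk = (0.4 * tx.fraud_model_score
            + 0.3 * tx.aml_model_score
            + 0.3 * tx.behaviour_anomaly)

    # Key design point from the text: intervene even when the customer
    # has authorised the transfer, because social engineering works by
    # obtaining consent, not by breaching systems.
    if risk > 0.8:
        return "block_and_escalate"  # immediate human analyst review
    if risk > 0.5 and tx.customer_authorised:
        return "hold_and_verify"     # out-of-band check before release
    return "allow"

# Example: an authorised transfer with strong social-engineering signals.
# Risk = 0.4*0.9 + 0.3*0.2 + 0.3*0.7 = 0.63, so the payment is held for
# verification despite being customer-authorised.
print(assess(TransactionSignals(0.9, 0.2, 0.7, customer_authorised=True)))
```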
It also requires moving from isolated to coordinated defence. Fraud campaigns target customers across institutions simultaneously, while controls remain siloed. Effective response depends on identifying patterns and campaigns in real time. Privacy and competition considerations remain important, but they can no longer justify structural blind spots. Privacy-preserving technologies offer a path forward, enabling institutions to share signals without exposing sensitive data.
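The principle of sharing signals without exposing sensitive data can be illustrated with a deliberately simple sketch: institutions exchange only keyed hashes of flagged identifiers, so raw customer data never leaves the bank. The shared key and account identifiers below are made up, and real deployments would use stronger techniques such as private set intersection or secure enclaves; this conveys the idea, nothing more.

```python
# Purely illustrative: two institutions compare fraud indicators without
# exchanging raw customer data, by sharing only keyed hashes.
import hmac
import hashlib

SHARED_KEY = b"consortium-demo-key"  # in practice, managed by the consortium

def fingerprint(identifier: str) -> str:
    """Keyed hash of an identifier; the raw value never leaves the bank."""
    return hmac.new(SHARED_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Bank A publishes fingerprints of accounts flagged in an active campaign.
bank_a_flags = {fingerprint(acct) for acct in ["IBAN-111", "IBAN-222"]}

# Bank B checks its own outgoing payees against the shared fingerprints,
# learning only whether there is a match, not Bank A's full watch list.
for payee in ["IBAN-333", "IBAN-222"]:
    if fingerprint(payee) in bank_a_flags:
        print(f"match on {payee}: route for enhanced review")
```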
In parallel, institutions need to adopt a “Defence AI” approach: using AI to defend against AI-driven threats. Human-only first lines of defence cannot scale. AI-native systems must support faster detection and response under human oversight.
Regulators Must Convene on This Too — Before the Catastrophe Arrives
The lesson from the Mythos moment is not only that AI can break systems. It is that the financial system is already being exploited in another way, one that is less visible, more scalable and potentially just as corrosive.
If the financial system does not respond quickly, the consequences will be severe: rising losses, rising friction, and a significant erosion of public trust.
Regulators should convene senior financial leaders on this issue too, treating it as a parallel AI risk, before a catastrophe already within reach of bad actors fully materialises. The financial system, the technology sector and policymakers must now recognise the scale of this vulnerability and act with far greater urgency.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.