Why Claude Is Beating ChatGPT at Work

TOKYO, Feb 20 (News On Japan) –
Anthropic’s latest Claude rollout is reigniting a familiar fear across Silicon Valley: that AI “agents” will hollow out the software-as-a-service business by replacing subscription tools with a single model that can handle office workflows end to end.


The announcement sparked a selloff in software stocks and revived talk of “the death of SaaS,” a narrative that resurfaced in a discussion on the program “AI QUEST” featuring AI researcher Shota Imai.

Imai described Claude as a model long favored by “serious” developers because it performs strongly on economically meaningful tasks such as coding and routine business work, rather than casual chat or personal advice. He pointed to “Claude Code,” a developer-focused interface designed to work directly with files and folders, as a reason many engineers have shifted their daily workflow away from standard chat windows.

The conversation highlighted a recent upgrade that expanded Claude’s context window to 1 million tokens. Imai framed the change as more than a headline number: it makes it practical to feed the model massive materials such as long financial reports, research papers, and large codebases while still keeping performance high on real-world tasks, a combination he argued competitors have struggled to match consistently.
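For readers who want a sense of what a 1-million-token window means in practice, the sketch below shows one way a long report could be passed to the model through the Anthropic Python SDK. It is a minimal illustration, not something shown on the program: the model name and file path are placeholders, and the availability of the full 1-million-token window on the chosen model is an assumption.

# Minimal sketch (not from the program): passing a long document to Claude
# via the Anthropic Python SDK. The model name and file path are placeholders,
# and the 1M-token window is assumed to be available on the chosen model.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Load an entire annual report; at roughly 3-4 characters per token,
# a 1M-token window leaves room for a document of a few million characters.
with open("annual_report.txt", "r", encoding="utf-8") as f:
    report_text = f.read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": f"Summarize the key risks in this report:\n\n{report_text}",
    }],
)

print(response.content[0].text)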

Imai said the market impact intensified after the release of an agent-style product he referred to as “Claude Co-work,” which can execute multi-step tasks across software tools. The product suggests two disruptive paths for customers: replacing existing SaaS subscriptions outright, or using Claude to generate in-house equivalents that replicate paid tools. It also reduces the “stacking” effect of multiple per-seat subscriptions when a single agent can operate across systems.

In contrasting corporate strategies, the program depicted OpenAI as pursuing an all-directions bid for artificial general intelligence, while portraying Anthropic as concentrating resources on “work AI,” especially coding and enterprise use. Imai argued that this focus can translate into higher revenue per token, because professional users pay directly for useful output rather than for open-ended conversation, even as it forces Anthropic into a brutal race in which demand can shift quickly if a rival model overtakes it.

The segment also focused on Anthropic CEO Dario Amodei, describing him as a central figure in modern large-model development through his involvement in the “scaling laws” research that helped define how performance improves with more data and compute. It framed his departure from OpenAI as rooted in a strong safety-first outlook that later shaped Anthropic’s identity, including its “constitutional AI” approach, which codifies model behavior and constraints at a high level.

Imai said Amodei’s influence shows up not only in research but in the company’s culture and public writing. He cited essays discussed on the program, such as “Machines of Loving Grace,” which the program described as outlining an unusually fast path for AI-driven advances, as well as a more risk-focused follow-up essay that the program said lists five major risk categories, including loss of control, malicious misuse such as biological weapons, authoritarian capture, and economic shock through job displacement.

The discussion also acknowledged criticisms of Anthropic’s closed approach. Imai argued that, despite public mockery of “closed AI,” Anthropic is among the least transparent major labs when it comes to releasing model weights or detailed technical papers, a stance he linked to safety concerns. He also noted controversy around training-data practices that have triggered legal challenges across the wider industry.

On funding, the program said Anthropic formally announced a financing round on February 13 that it described as larger than earlier expectations, putting the amount at $30 billion (roughly 4.6 trillion yen) and valuing the company at about $380 billion (roughly 58 trillion yen). It also argued that the firm is seeking to diversify its compute strategy through options beyond Nvidia, including Google TPUs and Amazon infrastructure.

The final part of the episode turned to Japan’s policy direction, pointing to a recent Cabinet Office request for public input on regulations that may hinder AI adoption and arguing that the effort extends earlier “analog regulation reform” aimed at removing rules that block digital transformation. Imai offered examples ranging from physical-world frictions, such as stair-heavy infrastructure, to legal frameworks, such as copyright rules that can affect how machines handle data compared with humans.

The program ended with an announcement that a public recording is scheduled at a TBS studio in Akasaka on Thursday, March 26, with doors opening at 5:30 p.m. and the recording starting after 6 p.m. Attendees will have an opportunity to ask questions during a dedicated segment.

Source: TBS
