Hello and welcome to Eye on AI…In this edition: Sparks fly as Musk and Brockman testify in battle over OpenAI’s restructuring…the White House does a U-turn on AI regulation and may begin reviewing AI models prior to release…OpenAI and Anthropic both target PE-backed companies with new joint ventures…a breakthrough in a foundation model for robotics…AI scientists may still be a ways off.
People in Silicon Valley and far beyond have been enthralled by the drama playing out in a courtroom in Oakland, California, where a jury is currently hearing testimony in Elon Musk’s lawsuit against OpenAI cofounders Sam Altman and Greg Brockman. The judge and jurors in the case (the jury’s verdict is merely advisory) will need to decide whether Altman’s and Brockman’s communications with Musk around the formation of OpenAI established a formal “charitable trust” and whether Altman and Brockman subsequently violated that trust when they restructured OpenAI so that its non-profit board no longer had sole control over its for-profit arm. They will also have to decide on Musk’s allegations that Altman and Brockman unjustly enriched themselves as OpenAI shifted from a research lab to a primarily commercial entity.
Most legal analysts say Musk’s case is weak and that he’s likely to lose. In fact, I’m surprised the case has even come to trial. I thought that Musk would opt to settle at the last minute. I had long assumed that this was one of those legal cases where the lawsuit itself was the whole point, not whether Musk ultimately prevailed. I thought his intention was two-fold: 1) to sow enough investor doubt about the viability of OpenAI’s new for-profit company structure to make it harder for OpenAI to raise further investment or pursue an IPO and 2) to use the discovery process to surface lots of embarrassing emails, internal documents, and details about Altman, Brockman, and the constant drama at OpenAI that would tarnish the reputation of his former cofounders.
Has Musk’s lawsuit already accomplished what he wanted?
So far, it’s not clear the litigation has had much impact on OpenAI’s ability to continue to raise money. It has held several successful funding rounds since Musk filed his suit, including an additional $122 billion fundraise at an $852 billion valuation that closed in March. An IPO still appears to be on the cards—and to the extent that it is looking shaky, it has nothing to do with Musk’s lawsuit.
But plenty of documents have emerged that paint Altman and Brockman in a less than flattering light, and those documents have helped feed lots of media coverage about internal strife at OpenAI. So you might think Musk would say: blows landed, mission accomplished, time to cut bait. Yet Musk apparently thought more damage could be done by going to trial. We know this because Musk said so explicitly in an email to Brockman on the eve of the trial—an email that OpenAI’s lawyers made public on Sunday and tried, unsuccessfully, to have admitted into evidence.
According to OpenAI’s lawyers, Musk reached out to Brockman about discussing a settlement of the case in the week before the trial. Brockman suggested that both sides drop their respective claims (OpenAI has counter-sued Musk, claiming harassment). Musk wrote back that “By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be.”
The email was a spectacular moment in a trial that has, so far, resulted in few bombshell revelations on the witness stand. That’s because much of the sensational stuff has already been disclosed in the documents that surfaced through the pre-trial discovery process. Hearing those details repeated on the stand doesn’t change the public narrative much.
A few fireworks from both Musk and Brockman
There have been a couple of wowzer moments though: One was Musk’s admission that his AI company, xAI, had trained its Grok model in part by ‘distilling’ OpenAI’s GPT models. Distillation is the process of training one model on the answers produced by another model. The tactic violates OpenAI’s terms of service, so it is likely that this was done using fake or fraudulent OpenAI accounts, and Musk’s admission to this conduct was something of a bombshell. Musk’s excuse was essentially “everyone does it.”
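For readers unfamiliar with the technique, here is a minimal, purely illustrative sketch of what distillation means in practice: a “student” model’s training set is built from a “teacher” model’s answers rather than from human-labeled data. The function and prompt names below are invented for illustration; no real API is used.

```python
# Toy illustration of model distillation: training data for a "student"
# model is generated by querying a "teacher" model.
# All names here are hypothetical stand-ins, not a real API.

def teacher_model(prompt: str) -> str:
    # Stand-in for a large proprietary model (in practice, an API call).
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return canned.get(prompt, "I don't know")

def build_distillation_set(prompts):
    # Each training example pairs a prompt with the teacher's answer;
    # the student is then fine-tuned on these pairs.
    return [(p, teacher_model(p)) for p in prompts]

training_set = build_distillation_set(["capital of France?", "2 + 2?"])
print(training_set)
```

In real distillation the teacher is queried at scale (which is why terms-of-service restrictions on automated output harvesting matter), and the student is trained to reproduce the teacher’s responses or output distributions.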
The other startling moments so far came in Monday’s testimony from Brockman, which included several potentially damaging admissions. Brockman acknowledged he never followed through on his own initial pledge to donate $100,000 to OpenAI’s non-profit when it was set up, but now has a stake in the for-profit company worth $30 billion.
Musk’s lawyers also questioned Brockman about his own journal entries from November 2017 in which he wrote about being “warm to steal the nonprofit from [Musk] to convert to b corp without him.” He also wrote, “[Musk’s] story will correctly be that we weren’t honest with him in the end about still wanting to do for profit just without him.” Brockman’s words may prove damning, since they seem to confirm some of the key allegations Musk makes in his suit. So too may Brockman’s admission that he was an investor in the AI chip startup Cerebras at the time OpenAI was discussing a potential acquisition of the company and that he never disclosed his investment to Musk. Altman was also a Cerebras investor. That may help Musk’s attorneys make the case for unjust enrichment, although the merger proposal did not go ahead. (OpenAI did later sign a major partnership with Cerebras that significantly boosted the chip startup’s valuation.)
Still, it’s far from certain Musk will prevail, either legally or in shifting public opinion against his one-time cofounders turned bitter rivals, Brockman and Altman. In many ways, the trial is a distraction, generating far more heat than light on the bigger questions about who controls AI and the risks the technology presents. While the Musk-OpenAI courtroom showdown has been billed as the first great technology trial of the AI era, a legal showdown that matters far more will take place two weeks from now in a courtroom in Washington, D.C. That’s when a federal appeals court panel will hear arguments in Anthropic’s challenge to the ‘supply chain risk’ designation the Trump Administration slapped on it for refusing to agree to its specified contract terms for providing its AI models to the U.S. military. That’s a case with huge implications not just for Anthropic and the fate of the AI industry, but also for the balance of power between the state and industry more generally.
Even as that case moves forward, the ground is shifting in D.C. Anthropic’s Mythos model, with its powerful cyber capabilities, combined with growing public fears about AI technology, seems to have convinced the Trump administration to perform a head-spinning U-turn: moving from a laissez-faire approach to AI to a mandate that the government receive early access to AI models and essentially license their release to the wider public. (More on that in the news section below.) This policy reversal may not have the drama of a trial, but it matters far more for the shape of AI development.
Ok, with that, here’s this week’s AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
But before we get to the news: Do you want to learn more about how AI is likely to reshape your industry? Do you want to hear insights from some of tech’s savviest executives and mingle with some of the best investors, thinkers, and builders in Silicon Valley and beyond? Do you like fly fishing or hiking? Well, then come join me and my fellow Fortune Tech co-chairs in Aspen, Colo., for Fortune Brainstorm Tech, the year’s best technology conference. And this year will be even more special because we are celebrating the 25th anniversary of the conference’s founding. We will hear from CEOs such as Carol Tomé from UPS, Snowflake CEO Sridhar Ramaswamy, Anduril CEO Brian Schimpf, Yahoo! CEO Jim Lanzone, and many more. There are AI aces like Boris Cherny, who heads Claude Code at Anthropic, and Sara Hooker, who is cofounder and CEO of Adaption Labs. And there are tech luminaries such as Steve Case and Meg Whitman. And you, of course! Apply to attend here.
FORTUNE ON AI
UK-based Google DeepMind workers vote to unionize over military AI contracts amid internal backlash over its Pentagon deal—by Beatrice Nolan
Employee revolt once forced Google to back off on military contracts. But, in the wake of a new Pentagon AI contract, their leverage appears limited—by Beatrice Nolan
A decade after the ‘Godfather of AI’ said radiologists were obsolete, their salaries are up to $571K and demand is growing fast—by Marco Quiroz-Gutierrez
AI IN THE NEWS
White House looks to control access to advanced AI models. The Trump administration—which spent the past year tearing up the Biden-era AI rulebook—is now weighing an executive order to convene a working group of tech executives and officials to design frontier-model oversight, with a formal pre-release review process reportedly among the options on the table, the New York Times reports, citing sources familiar with the deliberations. White House officials briefed Anthropic, Google and OpenAI on the plans last week, and some inside the administration are pushing for a system that would give the government first access to new models but without the ability to block their release. The abrupt policy shift has been driven in part by Anthropic’s Mythos model, whose cyber-vulnerability discovery capabilities prompted the company to withhold a public release, and by mounting bipartisan public concern about AI’s impact on jobs, energy, education and mental health. It also tracks a leadership change in the West Wing: AI czar David Sacks departed in March, and Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent—who recently held a “productive” meeting with Dario Amodei aimed at thawing the Pentagon-Anthropic standoff—have stepped in to shape policy. Meanwhile, the Wall Street Journal reports that Google, Microsoft, and xAI have already agreed to give the U.S. government early access to their advanced models. It also reported previously that the White House has opposed Anthropic sharing Mythos with more companies to help them safeguard their systems—although it is unclear whether this is because it fears that sharing the model more widely will increase the chance it winds up in the hands of bad actors, or because it wants to hoard Mythos’ potential offensive cyber capabilities for itself and doesn’t want more companies using the model to harden their defenses.
OpenAI and Anthropic both set up companies to push AI into private equity-backed companies. The two AI rivals unveiled competing joint ventures within minutes of each other on Monday, both designed to push their AI tools deep into the operations of private equity-backed companies. OpenAI’s “Deployment Company” drew more than $4 billion from 19 investors—led by TPG, Brookfield Asset Management, Advent and Bain Capital, with Dragoneer and SoftBank also participating—at a $10 billion valuation, with OpenAI itself contributing capital and retaining majority control. The PE backers were, according to press reports citing leaked documents, offered a 17.5% guaranteed annual return floor over five years. Anthropic’s $1.5 billion vehicle, by contrast, is anchored by Blackstone, Hellman & Friedman and Goldman Sachs—with General Atlantic, Leonard Green, Apollo, GIC and Sequoia also backing it. It is targeting mid-sized businesses, and will see “forward-deployed engineers” sent to implement Anthropic’s AI models inside those companies. You can read more from the Wall Street Journal here and Bloomberg here.
Anthropic announces new financial services agents. The company debuted 10 new AI agents built for banks and financial services firms—handling tasks like building pitchbooks, closing the books, and drafting credit memos—as it deepens its push into a sector that’s central to its enterprise strategy ahead of an anticipated IPO this year. Anthropic’s arch rival OpenAI has also been targeting financial services use cases, but the new rollout also puts Anthropic in more direct competition with vendors like Microsoft and Salesforce, as well as specialist financial data providers such as Bloomberg and AlphaSense. Read more from the Wall Street Journal here.
SAP moves to stop OpenClaw and other third-party agents from using its software. SAP last month told customers it could throttle, suspend or terminate access for those using unauthorized external AI agents to pull data from its apps—an escalation in the brewing data wars between incumbent enterprise software vendors and AI tool makers, the Information reports. SAP has its own AI agent called Joule, but many customers prefer third-party agents for their ability to handle workflows across many different software applications. SAP CEO Christian Klein framed the move as protection against “mass data requests” that strain performance and as a defense of SAP’s proprietary semantic models, but the policy lands amid clear signs of pressure: SAP shares are down roughly 28% this year and longtime customer Mercedes-Benz has cut its SAP instances by 40% in recent months while leaning on its own and frontier-lab AI models to clean and analyze data. SAP says it already permits agents from some other companies, including Microsoft, Google, Amazon and IBM, and hinted at “agentic integration architectures” with Anthropic—suggesting Claude Code or Cowork access may be close—while singling out open-source harnesses like OpenClaw as a security risk. SAP’s stance mirrors that of Workday, Salesforce and ServiceNow, which have all made moves to erect some form of tollgates around their data.
OpenAI changes privacy policy to share user data with advertisers. OpenAI updated its U.S. privacy policy on April 30 to allow the use of cookies and limited identifiers (like email addresses or cookie IDs) to promote its products on third-party websites and measure ad effectiveness, Wired reported. The company has said, however, that ChatGPT conversations remain private and aren’t shared with marketing partners. Wired found that this marketing tracking was enabled by default for free accounts but off by default for Plus and Enterprise subscribers, with users able to opt out by changing a toggle in account settings. The change comes as OpenAI expands its own in-product advertising (rolling out ads beneath ChatGPT outputs in February) and prepares for a potential IPO later this year, with the off-platform ads aimed largely at converting free users into paying subscribers.
EYE ON AI RESEARCH
Foundation models for robotics make a big leap forward. Physical Intelligence, a San Francisco-based company with some pedigreed cofounders (ex-Google DeepMind and both Stanford and UC Berkeley robotics profs) that builds foundation models for robotics, achieved a breakthrough with a new foundation model called π0.7. The model can recombine learned skills to handle new situations, something large language models can do, but which has proved elusive in physical AI. A single π0.7 model can fold laundry, operate an espresso machine, peel vegetables, and take out the trash without any task-specific fine-tuning, matching the performance of specialized models trained for each individual task. More striking, π0.7 showed that it could transfer those skills between different brands and types of robots without additional training—although here the performance only matched that of a human operator who had never done the task before, operating the robot by remote control. The team also showed it can be “coached” through entirely new multi-stage tasks, such as loading a sweet potato into an air fryer, using only verbal step-by-step instructions.
All of this is a pretty big deal that will make it far easier for more companies to begin to deploy robots in more settings far faster than before. One of the big breakthroughs that Physical Intelligence made was in what they call “diverse context conditioning”—training the model not just on what to do but on rich metadata describing how each demonstration went, including quality scores, speed, mistakes, and AI-generated images of intermediate subgoals. The metadata labels seem to be key, helping the model learn which intermediate actions were most likely to result in success. You can read the research paper here on arxiv.org and see the company’s blog on π0.7 here.
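To make the idea concrete, here is a minimal sketch of what attaching that kind of metadata to a demonstration might look like. This is not Physical Intelligence’s actual data format—all field names are invented for illustration—but it shows the core move: the metadata travels with each demonstration as extra conditioning input, rather than being discarded as in plain behavior cloning.

```python
# Hypothetical sketch of metadata-conditioned demonstration data.
# Field names are invented; this is not Physical Intelligence's schema.
from dataclasses import dataclass, field

@dataclass
class Demonstration:
    task: str
    actions: list            # raw action trajectory
    quality_score: float     # how well the demo went, 0.0-1.0
    duration_s: float        # how fast it was executed
    mistakes: int            # errors made during the demo
    subgoal_images: list = field(default_factory=list)  # intermediate subgoal frames

def conditioning_context(demo: Demonstration) -> dict:
    # Bundle the "how it went" metadata that would be fed to the model
    # alongside the trajectory itself during training.
    return {
        "task": demo.task,
        "quality": demo.quality_score,
        "speed": demo.duration_s,
        "mistakes": demo.mistakes,
    }

demo = Demonstration("fold laundry", actions=["grasp", "fold"],
                     quality_score=0.9, duration_s=42.0, mistakes=0)
print(conditioning_context(demo))
```

The intuition is that by seeing quality and error annotations at training time, the model can learn to associate certain intermediate actions with successful outcomes, rather than treating every demonstration as equally good.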
AI CALENDAR
June 8-10: Fortune Brainstorm Tech, Aspen, Colo. Apply to attend here.
June 17-20: VivaTech, Paris.
July 6-11: International Conference on Machine Learning (ICML), Seoul, South Korea.
July 7-10: AI for Good Summit, Geneva, Switzerland.
Aug. 4-6: Ai4 2026, Las Vegas.
BRAIN FOOD
Maybe AI scientists aren’t so close after all. There’s been a lot of hype recently about how fast AI scientists are coming along and how AI models will soon be able to automate scientific research. AI research itself certainly seems on the cusp of being automated, and there have been promising experiments in other fields, such as drug discovery and materials discovery.
But researchers from Germany’s Friedrich Schiller University Jena and the Indian Institute of Technology Delhi found that large language models that have not been specifically trained to act as AI scientists (they tested OpenAI’s GPT-4o and GPT-OSS, as well as Anthropic’s Claude Sonnet 4.5) can produce scientific results that seem superficially valid but actually lack key evidence and reasoning steps.
The results are actually pretty abysmal. Hypotheses were stated but left untested by experiments in 63% of cases. In 68% of cases, the models failed to incorporate available experimental evidence into their process. In 71% of reasoning traces, the models’ hypotheses were not updated in the face of counter-evidence. Only 26% of reasoning traces showed any belief revision based on new evidence from experiments. Bringing multiple experiments and independent lines of evidence to bear on a single hypothesis occurred in less than 10% of cases. Results like these make it seem like scientists’ jobs will be safe for quite a while longer than some AI boosters claim. You can read the research here.
AI Playbook: Keeping up with AI’s rapid evolution
AI is becoming an even more useful—and dangerous—tool as it gets smarter. Fortune AI Editor Jeremy Kahn breaks down best practices for deploying AI agents, how to protect your data from AI-powered cyberattacks, and just how smart AI can really get. Watch the playbook.