Attorneys delivered closing arguments in the Musk v. Altman trial on Thursday in a final attempt to convince a judge and jury that their respective clients, Elon Musk and Sam Altman, are the most well-intentioned, truth-telling stewards of OpenAI's founding nonprofit mission. A judgment could be delivered as soon as next week, ending a decade-long battle between two of the technology industry's most influential entrepreneurs.
But regardless of the outcome, this case has a wide set of losers. Based on ample evidence, the people worst off appear to be the employees, policymakers, and members of the public who believed in the mission of a nonprofit research lab—and supported OpenAI because of it. What seemed to take precedence for Musk and OpenAI's other cofounders at almost every turn was building the world's leading AI lab—even if that meant creating a multibillion-dollar for-profit company in the process.
“It’s hard to see how the public interest is being protected by either of these parties, and that is really what is ultimately at stake in a case about a nonprofit,” says Jill Horwitz, a Northwestern University law professor with expertise in nonprofits and innovation, who listened to the closing arguments. “The public interest in the nonprofit is at risk no matter who wins.”
OpenAI's stated mission is to ensure that artificial general intelligence (AGI) benefits humanity, but humanity is not a party in this case. In practice, OpenAI has spent the last decade attempting to rival multitrillion-dollar companies like Google and build AGI first. Meanwhile, Musk and Altman have fought tooth and nail to be the ones who control OpenAI.
“Musk and Altman are basically locked in a race to be the first to build superintelligence, and they both rightly fear what the other will do if they win. The rest of us should fear them both,” says Daniel Kokotajlo, a former OpenAI researcher who joined in 2022 and has raised concerns over the company’s safety culture. He was part of a group of former OpenAI researchers that filed an amicus brief in this case against OpenAI’s for-profit conversion, arguing that the nonprofit structure was critical in their decision to join the company.
At trial, OpenAI’s nonprofit was discussed as if it were yet another corporate investor. OpenAI’s lawyers argued that giving the nonprofit a $200 billion stake in the for-profit company is proof that OpenAI is fulfilling its mission. Public advocacy groups disagree that funding alone is sufficient.
“I am among the many people who are glad to see how many philanthropic resources the OpenAI foundation has at its disposal to do good work,” says Nathan Calvin, VP of state affairs for the AI safety nonprofit Encode, which filed an amicus brief opposing OpenAI’s restructuring earlier in this case. “But it’s worth remembering that the nonprofit also has a governance role, and that the mission of the nonprofit is not that of a typical foundation, it is specifically to ensure that AGI benefits all of humanity. Money is important for that goal and is useful all else equal, but it is not the goal in and of itself.”
Origin Story
Evidence revealed in this case suggests Altman and Musk were in agreement about OpenAI launching as a nonprofit and operating much like a typical startup. They shared the goal of beating Google DeepMind in the race to AGI. But creating OpenAI as a nonprofit turned out to be a horribly inconvenient means to winning that race.
Musk has accused Altman, OpenAI’s CEO, and Greg Brockman, its cofounder and president, of straying from the nonprofit’s founding mission. He claims the founders used his $38 million investment to turn OpenAI into an $850 billion company and make several of its cofounders billionaires.
To win this case, Musk has to convince a judge and jury that he attached certain conditions to his investment—specifically, that OpenAI could only use the money for a charitable purpose—and that he filed the case in a timely manner. In response, OpenAI has argued that Musk has failed to prove either of these claims, and that he is simply acting out of sour grapes after losing control of the AI lab.
In one of the first emails Altman sent to Musk about setting up “some sort of nonprofit” that ultimately became OpenAI, in May 2015, he wrote that the people working on it would get “startup-like compensation.” Musk said it was “worth a conversation.”
Virtually nothing presented at trial has explained what the business partners planned to do if the nonprofit ended up with more money than it needed. There were some discussions about open sourcing technology, but OpenAI’s lawyers have argued there was never any agreement about doing so. In practice, the focus appeared to be on buying expensive servers to generate more powerful AI models, albeit with significant research into developing safeguards around them.
In her closing argument, OpenAI lawyer Sarah Eddy said it was essentially “uncontested” among the cofounders that they would eventually need more money than they could ever hope to raise through donations alone. She cited Ilya Sutskever’s testimony that “the mission of OpenAI is larger than a structure.” Eddy went on to say that if OpenAI hadn’t obtained the funds it needed, the mission would have collapsed.
OpenAI’s cofounders have repeatedly said, in emails and testimonies, that they have benefited from the nonprofit structure and mission. They argued it gave OpenAI “moral high ground,” which would prove strategically valuable in its quest to overtake Google DeepMind. The nonprofit mission was used to attract research talent, as well as garner goodwill among policymakers and the public.
But throughout OpenAI’s history, the nonprofit structure was apparently seen as a roadblock to building OpenAI into a massive business. In December 2016, Musk wrote an email to OpenAI’s cofounders saying that setting up OpenAI “as a non-profit might, in hindsight, have been the wrong move,” adding that the “sense of urgency is not as high.” The following year, Musk and the cofounders tried to create a for-profit arm, and even considered scrapping the nonprofit entirely. However, the talks broke down after Musk requested control of the company and Brockman and Sutskever asked for large equity stakes. Around this time, Brockman wrote in his diary about how OpenAI could make him a billionaire.
Shortly after these talks, in February 2018, Musk suggested folding OpenAI into Tesla—his for-profit car company—and even tried to recruit Altman to run the AI unit, offering him a Tesla board seat to entice him. Shivon Zilis, Musk’s deputy and the mother of four of his children, wrote in text messages at the time that Altman and Brockman had not “internalized the advantages of burying this in Tesla for stealth advantage.” In an FAQ Zilis wrote for the proposed Tesla AI group, she said that its strategy hadn’t been determined but that it “may be deeply proprietary.”
Kevin Scott, Microsoft’s chief technology officer, wondered around that time whether early OpenAI donors such as tech investor Reid Hoffman were okay with OpenAI essentially becoming a for-profit company. “I can’t imagine that they funded an open effort to concentrate [machine learning] talent so that they could then go build a closed, for profit thing on its back,” he wrote in an email to his boss. Hoffman relayed that he didn’t mind, and Microsoft later agreed to deepen its financial and technical support of OpenAI after it launched a for-profit arm.
During OpenAI's brief ouster of Altman in November 2023, which has been rehashed ad nauseam in this trial, text messages show that Altman and Microsoft CEO Satya Nadella handpicked new nonprofit board members. Altman presented these to the old board members, who had fired him, as conditions under which he would return to the company. "I was willing to run back into a burning building," Altman said.
William Savitt, an attorney for OpenAI, emphasized on Thursday that no other AI company in the world sits under a nonprofit. "OpenAI remains a charity … stronger and more powerful than ever," he said.
Despite OpenAI’s unique structure, it’s plagued by the pitfalls of any tech giant. In several lawsuits from ChatGPT users and their families, OpenAI has been accused of negligence and wrongful death for allegedly contributing to a suicide, a drug overdose, a mass shooting, and other deadly incidents. Last month, OpenAI supported an Illinois bill that would help AI labs dodge liability if their models contribute to societal disasters (a rival, Anthropic, opposed it). Media companies have sued OpenAI for copyright infringement. Current and former employees allege that OpenAI’s economic research unit has morphed into an advocacy arm for the company.
OpenAI has defended its work, launching new initiatives to address AI’s societal impacts, and introducing safeguards to mitigate the dangers of AI models. Google DeepMind, Meta, and other competitors are facing many of the same allegations. In fact, OpenAI is increasingly indistinguishable from those profitable, publicly traded companies as it continues to pursue ever-loftier valuations. The nonprofit once burnished OpenAI’s public image, but Musk v. Altman appears to have removed all but the last of the shine.
Publisher: wired.com