Elon Musk’s lawsuit is putting OpenAI’s safety record under the microscope
- by TechCrunch
- May 07, 2026
OpenAI releases evaluations of its models and shares a safety framework publicly, but the company declined to comment on its current approach to AGI alignment. Dylan Scandinaro, its current head of preparedness, was hired from Anthropic in February. Altman said the hire would let him “sleep better tonight.”
The deployment of GPT-4 in India, however, was one of the red flags that led OpenAI’s non-profit board to briefly fire CEO Sam Altman in 2023. That incident took place after employees, including then-chief scientist Ilya Sutskever and then-CTO Mira Murati, complained about Altman’s conflict-averse management style. Tasha McCauley, a member of the board at the time, testified about concerns that Altman was not forthcoming enough with the board for its unusual structure to function.
McCauley also discussed a widely reported pattern of Altman misleading the board. Notably, Altman misrepresented McCauley's views to another board member, falsely claiming she wanted to remove Helen Toner, a third board member who had published a white paper that included some implied criticism of OpenAI's safety policy. Altman also failed to inform the board about the decision to launch ChatGPT publicly, and members were concerned about his lack of disclosure of potential conflicts of interest.
“We are a non-profit board and our mandate was to be able to oversee the for-profit underneath us,” McCauley told the court. “Our primary way to do that was being called into question. We did not have a high degree of confidence at all to trust that the information being conveyed to us allowed us to make decisions in an informed way.”
However, the decision to remove Altman came at the same time as a tender offer to the company's employees. McCauley said that once OpenAI's staff began siding with Altman and Microsoft worked to restore the status quo, the board ultimately reversed course, with the members opposed to Altman stepping down.
The apparent failure of the non-profit board to influence the for-profit organization goes directly to Musk’s case that the transformation of OpenAI from research organization into one of the largest private companies in the world broke the implicit agreement of the organization’s founders.
David Schizer, a former dean of Columbia Law School who is being paid by Musk’s team to act as an expert witness, echoed McCauley’s concerns.
“OpenAI has emphasized that a key part of its mission is safety and they are going to prioritize safety over profits,” Schizer said. “Part of that is taking safety rules seriously; if something needs to be subject to safety review, it needs to happen. What matters is the process issue.”
With AI already deeply embedded in for-profit companies, the issue goes far beyond a single lab. McCauley said the failures of internal governance at OpenAI should be a reason to embrace stronger government regulation of advanced AI — “[if] it all comes down to one CEO making those decisions, and we have the public good at stake, that’s very suboptimal.”