Nathan Calvin, the 29-year-old general counsel of Encode—a small AI policy nonprofit with just three full-time employees—published a viral thread on X Friday accusing OpenAI of using intimidation tactics to undermine California’s SB 53, the California Transparency in Frontier Artificial Intelligence Act, while it was still being debated. He also alleged that OpenAI used its ongoing legal battle with Elon Musk as a pretext to target and intimidate critics, including Encode, which OpenAI implied was secretly funded by Musk.
Calvin’s thread quickly drew widespread attention, including from inside OpenAI itself. Joshua Achiam, the company’s head of mission alignment, weighed in on X with his own thread, written in a personal capacity, which opened: “At what is possibly a risk to my whole career I will say: this doesn’t seem great.”
Former OpenAI employees and prominent AI safety researchers also joined the conversation, many expressing concern over the company’s alleged tactics. Helen Toner, the former OpenAI board member who resigned after a failed 2023 effort to oust CEO Sam Altman, wrote that some things the company does are great, but “the dishonesty & intimidation tactics in their policy work are really not.”
And at least one other nonprofit founder also weighed in: Tyler Johnston, founder of AI watchdog group the Midas Project, responded to Calvin’s thread with his own, saying: “[I] got a knock at my door in Oklahoma with a demand for every text/email/document that, in the ‘broadest sense permitted,’ relates to OpenAI’s governance and investors.” Like Calvin, he added, he was served with a subpoena personally, and the Midas Project was served as well.
“Had they just asked if I’m funded by Musk, I would have been happy to give them a simple ‘man I wish’ and call it a day,” he wrote. “Instead, they asked for what was, practically speaking, a list of every journalist, congressional office, partner organization, former employee, and member of the public we’d spoken to about their restructuring.”
OpenAI referred Fortune to a post by chief strategy officer Jason Kwon on Friday in which Kwon said Encode’s decision to support Musk in the lawsuit, and the organization’s not “fully disclosed” funding, “raises legitimate questions about what is going on.”
“We wanted to know, and still are curious to know, whether Encode is working in collaboration with third parties who have a commercial competitive interest adverse to OpenAI,” Kwon wrote, noting that subpoenas are a standard method of gathering information in any litigation. “The stated narrative makes it sound like something it wasn’t.” Kwon included an excerpt of the subpoena that he said showed all the requests for documents OpenAI made.
As reported by the San Francisco Standard in September, Calvin was served with a subpoena from OpenAI in August, delivered by a sheriff’s deputy as he and his wife were sitting down to dinner. Encode, the organization he works for, was also served. The article reported that OpenAI appeared concerned that some of its most vocal critics were being funded by Elon Musk and other billionaire competitors—and was targeting those nonprofit groups despite offering little evidence to support the claim.
Calvin wrote Friday that Encode—which he emphasized is not funded by Musk—had criticized OpenAI’s restructuring and worked on AI regulations, including SB 53. In the subpoena, OpenAI asked for all of Calvin’s private communications on SB 53.
“I believe OpenAI used the pretext of their lawsuit against Elon Musk to intimidate their critics and imply that Elon is behind all of them,” he said, referring to the ongoing legal battle between OpenAI and Musk over the company’s original nonprofit mission and governance. Encode had filed an amicus brief in the case supporting some of Musk’s arguments.
In a conversation with Fortune, Calvin emphasized that what has not been sufficiently covered is how inappropriate OpenAI’s actions were in connection with SB 53, which was signed into law by Gov. Gavin Newsom at the end of September. The law requires certain developers of “frontier” AI models to publish a public frontier AI framework and a transparency report when deploying or substantially modifying a model, report critical safety incidents to the state, and share assessments of catastrophic risks under the state’s oversight.
Calvin alleges that OpenAI sought to weaken those requirements. In a letter to Governor Newsom’s office while the bill was still under negotiation, which was shared on X in early September by a former AI policy researcher, the company urged California to treat companies as compliant with the state’s rules if they had already signed a safety agreement with a U.S. federal agency or joined international frameworks such as the EU’s AI Code of Practice. Calvin argues that such a provision could have significantly narrowed the law’s reach—potentially exempting OpenAI and other major AI developers from key safety and transparency requirements.
“I didn’t want to go into a ton of detail about it while SB 53 negotiations were still ongoing and we were trying to get it through,” he said. “I didn’t want it to become a story about Encode and OpenAI fighting, rather than about the merits of the bill, which I think are really important. So I wanted to wait until the bill was signed.”
He added that another reason he decided to speak out now was a recent LinkedIn post from Chris Lehane, OpenAI’s head of global affairs, describing the company as having “worked to improve” SB 53—a characterization Calvin said felt deeply at odds with his experience over the past few months.
Encode was founded by Sneha Revanur, who launched the organization in 2020 when she was 15 years old. “She is not a full-time employee yet because she’s still in college,” said Sunny Gandhi, Encode’s vice president of political affairs, adding: “It’s terrifying to have a half-a-trillion-dollar company come after you.”
Encode formally responded to OpenAI’s subpoena, Calvin said, stating that it would not be turning over any documents because the organization is not funded by Elon Musk. “They have not said anything since,” he added.
Writing on X, OpenAI’s Achiam publicly urged his company to engage more constructively with its critics. “Elon is certainly out to get us, and the man has got an extensive reach,” he wrote. “But there is so much that is public that we can fight him on. And for something like SB 53, there are so many ways to engage productively.” He added, “We can’t be doing things that make us into a frightening power instead of a virtuous one. We have a duty and a mission to all of humanity, and the bar to pursue that duty is remarkably high.”
Calvin described the episode as the “most stressful period of my professional life.” He added that he uses and gets value from OpenAI products and that the company conducts and publishes AI safety research that is “worthy of genuine praise.” Many OpenAI employees, he said, care a lot about OpenAI being a force for good in the world.
“I want to see that side of OAI, but instead I see them trying to intimidate critics into silence,” he wrote. “Does anyone believe these actions are consistent with OpenAI’s nonprofit mission to ensure that AGI benefits humanity?”