A series of high-profile departures at OpenAI has raised questions as to whether the team responsible for AI safety is gradually being hollowed out.
Immediately after chief scientist Ilya Sutskever announced he was leaving the company after almost a decade, Jan Leike, his co-lead on the Superalignment team and one of Time's 100 most influential people in AI, also announced he was quitting.
“I resigned,” Leike posted on Tuesday.
The duo follow Leopold Aschenbrenner, reportedly fired for leaking information, as well as Daniel Kokotajlo, who left in April, and William Saunders, who departed earlier this year.
Really nothing to see here. Just an exodus of safety researchers at one of the most powerful companies in the world. What could possibly go wrong? https://t.co/uxK2owlOku
— Rutger Bregman (@rcbregman) May 15, 2024
Several staffers at OpenAI, which did not respond to Fortune's request for comment, posted messages expressing their disappointment at the news.
“It was an honor to work with Jan the past two and a half years at OpenAI. No one pushed harder than he did to make AGI safe and beneficial,” wrote OpenAI researcher Carroll Wainwright. “The company will be poorer without him.”
High-level envoys from China and the USA are meeting in Geneva this week to discuss what must be done now that mankind is on the cusp of developing artificial general intelligence (AGI), the point at which AI can compete with humans across a wide variety of tasks.
Superintelligence alignment
But scientists have already turned their attention to the next stage of evolution: artificial superintelligence (ASI).
Sutskever and Leike jointly headed up a team created in July 2023 and tasked with solving the core technical challenges of ASI alignment, a euphemism for ensuring humans retain control over machines far more intelligent and capable than they are.
OpenAI pledged to commit 20% of its existing computing resources towards that goal with the aim of achieving superalignment in the next four years.
But the costs associated with developing cutting-edge AI are prohibitive.
Earlier this month, Altman said that while he’s prepared to burn billions every year in the pursuit of AGI, he still needs to ensure that OpenAI can continually secure enough funding to keep the lights on.
That money needs to come from deep-pocketed backers like Microsoft and its CEO, Satya Nadella.
Four prominent safety-focused members of OpenAI (@ilyasut, @janleike, @DanPKoko, William Saunders) departed over the last week or so.
So many questions.
• Should the public be worried?
• Will the board at OpenAI take note? Will they do anything to address the situation?
— Gary Marcus (@GaryMarcus) May 15, 2024
This means constantly delivering results ahead of rivals like Google.
This includes OpenAI’s newest flagship product, GPT-4o, which the company claims can actually “reason”—a verb laden with controversy in GenAI circles—across text, audio and video in real time.
The female voice assistant it demonstrated this week is so lifelike that people are remarking it seems to have been lifted straight out of Spike Jonze's AI science fiction film "Her".
‘What did Ilya see?’
A few months after the Superalignment team was formed, Sutskever, together with other non-executive directors on the board of the non-profit arm that controls the company, ousted Altman, claiming they no longer had faith in their CEO.
Nadella quickly negotiated his return amid fears the company could split, and days later a rueful Sutskever apologized for his role in the mutiny.
At the time, Reuters reported that the ouster may have been linked to a secret project aimed at developing an AI capable of higher reasoning.
Since then, Sutskever has barely been visible. The spectacular nature of the coup, along with the manner in which it was subsequently swept under the carpet, prompted widespread speculation on social media.
“What did Ilya see?” became a common refrain within the broader AI community.
Seriously though — what did Ilya see?
— Marc Andreessen 🇺🇸 (@pmarca) November 24, 2023
Kokotajlo added to these concerns recently, remarking that he had resigned in protest after losing confidence in the company.
In a statement on Tuesday, however, Sutskever seemed to suggest that he was not leaving OpenAI over safety concerns but to pursue a personally meaningful project that he would reveal at a later date.
“The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial,” he wrote, endorsing OpenAI’s trio of top leaders, Sam Altman, Greg Brockman and Mira Murati, as well as his successor, Jakub Pachocki.