
Hello and welcome to Eye on AI, with Sharon Goldman filling in for Jeremy Kahn. In this edition: What OpenAI’s OpenClaw hire really means…The Pentagon threatens Anthropic punishment…Why an AI video of Tom Cruise battling Brad Pitt spooked Hollywood…The anxiety driving AI’s brutal work culture.
It wouldn’t be a weekend without a big AI news drop. This time, OpenAI dominated the cycle after CEO Sam Altman revealed that the company had hired Peter Steinberger, the Austrian developer behind OpenClaw—open-source software for building autonomous AI agents that has gone wildly viral over the past three months. In a post on his personal site, Steinberger said joining OpenAI would allow him to pursue his goal of bringing AI agents to the masses, without the added burden of running a company.
OpenClaw was presented as a way to build the ultimate personal assistant, automating complex, multi-step tasks by connecting LLMs like ChatGPT and Claude to messaging platforms and everyday applications to manage email, schedule calendars, book flights, make restaurant reservations, and the like. But Steinberger demonstrated that it could go further: In one example, when he accidentally sent OpenClaw a voice message it wasn’t designed to handle, the system didn’t fail. Instead, it inferred the file format, identified the tools it needed, and responded normally, without being explicitly instructed to do any of that.
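For readers who like to peek under the hood, the behavior Steinberger described maps onto a familiar agent pattern: the model is shown a registry of tools, asked to decide which one fits an unfamiliar input, and then allowed to act on its own choice. The minimal Python sketch below is a hypothetical illustration of that pattern, not OpenClaw's actual code; the tool names and the stubbed call_llm function are assumptions standing in for whatever LLM API a real agent framework would wire in.

```python
# Hypothetical sketch of the "agent picks its own tool" pattern described
# above. This is NOT OpenClaw's actual code; call_llm is a placeholder for
# whatever LLM API a real agent framework would use.

import json

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call; returns the model's raw text reply."""
    raise NotImplementedError("connect this to an LLM provider of your choice")

# Registry of tools the agent is allowed to invoke (illustrative only).
TOOLS = {
    "transcribe_audio": lambda path: f"[transcript of {path}]",
    "read_text_file": lambda path: open(path, encoding="utf-8").read(),
}

def handle_attachment(path: str) -> str:
    # Ask the model to classify the unfamiliar input and pick a registered tool.
    decision = call_llm(
        "A user sent a file this assistant was not designed to handle: "
        f"{path}\n"
        f"Available tools: {list(TOOLS)}\n"
        'Reply with JSON only: {"tool": "<name>", "reason": "<why>"}'
    )
    choice = json.loads(decision)
    tool = TOOLS.get(choice.get("tool"))
    if tool is None:
        return "Sorry, I don't have a tool that can handle that."
    # Run the chosen tool, then let the model compose a normal reply from its output.
    return call_llm(f"Write a brief reply to the user based on: {tool(path)}")
```

The key design choice is that the routing decision lives in the model's reply rather than in hard-coded branching, which is what lets an agent handle inputs its author never anticipated, and also part of what makes such systems so hard to secure.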
That kind of autonomous behavior is precisely what made OpenClaw exciting to developers, getting them closer to their dream of a real J.A.R.V.I.S., the always-on helper from the Iron Man movies. But it quickly triggered alarms among security experts. Just last week, I described OpenClaw as the “bad boy” of AI agents, because an assistant that is persistent, autonomous, and deeply connected across systems is also far harder to secure.
Some say the OpenAI hire is the ‘best outcome’
That tension helps explain why some see OpenAI’s intervention as a necessary step. “I think it’s probably the best outcome for everyone,” said Gavriel Cohen, a software engineer who built NanoClaw, which he calls a “secure alternative” to OpenClaw. “Peter has great product sense, but the project got way too big, way too fast, without enough attention to architecture and security. OpenClaw is fundamentally insecure and flawed. They can’t just patch their way out of it.”
Others see the move as equally strategic for OpenAI. “It’s a great move on their part,” said William Falcon, CEO of the developer-focused AI cloud company Lightning AI, noting that Anthropic’s Claude products, including Claude Code, have dominated the developer segment. OpenAI, he explained, wants “to win all developers, that’s where the majority of spending in AI is.” OpenClaw, which is in many ways an open-source alternative to Claude Code and became a favorite of developers overnight, gives OpenAI a “get out of jail free card,” he said.
Altman, for his part, has framed the hire as a bet on what comes next. He said Steinberger brings “a lot of amazing ideas” about how AI agents could interact with one another, adding that “the future is going to be extremely multi-agent” and that such capabilities will “quickly become core to our product offerings.” OpenAI has said it plans to keep OpenClaw running as an independent, open-source project through a foundation rather than folding it into its own products—a pledge Steinberger has said was central to his decision to choose OpenAI over rivals like Anthropic and Meta. (In an interview with Lex Fridman, Steinberger said Mark Zuckerberg even reached out to him personally on WhatsApp.)
Next phase is winning developer trust for AI agents
Beyond the weekend buzz, OpenAI’s OpenClaw hire offers a window into how the AI agent race is evolving. As models become more interchangeable, the competition is shifting toward the less visible infrastructure that determines whether agents can run reliably, securely, and at scale. By bringing in the creator of a viral—but controversial—autonomous agent while pledging to keep the project open source, OpenAI is signaling that the next phase of AI won’t be defined solely by smarter models, but by winning the trust of developers tasked with turning experimental agents into dependable systems.
That could lead to a wave of new products, said Yohei Nakajima, a partner at Untapped Capital whose 2023 open source experiment called BabyAGI helped demonstrate how LLMs could autonomously generate and execute tasks—which helped kick off the modern AI agent movement. Both BabyAGI and OpenClaw, he said, inspired developers to see what more they could build with the latest technologies. “Shortly after BabyAGI, we saw the first wave of agentic companies launch: gpt-engineer (became Lovable), Crew AI, Manus, Genspark,” he said. “I hope we’ll see similar new inspired products after this recent wave.”
With that, here’s more AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman
FORTUNE ON AI
AI investments surge in India as tech leaders convene for Delhi summit – by Beatrice Nolan
Big tech approaches ‘red flag’ moment: AI capex is so great hyperscalers could go cash-flow negative, Evercore warns – by Jim Edwards
Anthropic CEO Dario Amodei explains his spending caution, warning if AI growth forecasts are off by just a year, ‘then you go bankrupt’ – by Jason Ma
AI IN THE NEWS
Pentagon threatens Anthropic punishment. The Pentagon is threatening to designate Anthropic a “supply chain risk,” a rare and punitive move that would effectively force any company doing business with the U.S. military to cut ties with the AI startup, according to Axios. Defense officials say they are frustrated with Anthropic’s refusal to fully relax safeguards on how its Claude model can be used—particularly limits meant to prevent mass surveillance of Americans or the development of fully autonomous weapons—arguing the military must be able to use AI for “all lawful purposes.” The standoff is especially fraught because Claude is currently the only AI model approved for use in the Pentagon’s classified systems and is deeply embedded in military workflows, meaning an abrupt break would be costly and disruptive. The dispute underscores a growing tension between AI labs that want to impose ethical boundaries and a U.S. military establishment increasingly willing to play hardball as it seeks broader control over powerful AI tools.
Why an AI video of Tom Cruise battling Brad Pitt spooked Hollywood. I’ve been following this eye-opening story, which the New York Times explained very well: Essentially, a hyper-realistic AI video showing Tom Cruise and Brad Pitt fighting on a rooftop has sent shockwaves through Hollywood, underscoring how quickly generative video technology is advancing—and how unprepared existing guardrails may be. The clip was created with Seedance 2.0, a new AI video model from Chinese company ByteDance, whose dramatic leap in realism has prompted fierce backlash from studios, unions, and industry groups over copyright, likeness rights, and job losses. Hollywood organizations accused ByteDance of training on copyrighted material at massive scale, while Disney sent a cease-and-desist letter and unions warned that such tools threaten performers’ control over their images and voices. ByteDance says it is strengthening safeguards, but the episode highlights a growing fault line: as AI video moves from novelty to near-cinematic quality, the fight over who controls creative labor, intellectual property, and digital identity is entering a far more urgent phase.
The anxiety driving AI’s brutal work culture. If you’ve ever worried about your own work-life balance, I think you’ll feel better after reading this piece. According to the Guardian, in San Francisco’s booming AI economy, the tech sector’s long-standing perks and flexible culture are being replaced by relentless “grind” expectations as startups push employees into long hours, little time off, and extreme productivity pressures in the name of keeping up with rapid advances and intense competition. Workers describe 12-hour days, six-day weeks, and environments where skipping weekends or social life feels like the price of staying relevant, even as anxiety about job security and AI’s impact on future roles grows. The shift reflects a broader transformation in how AI labor is valued—one that is reshaping workplace norms and could foreshadow similar pressures in other sectors as automation and innovation accelerate. I’ll definitely have to check out how this looks on the ground the next time I head to the Bay.
EYE ON AI RESEARCH
DEF CON, the world’s largest and longest-running hacker conference, released its latest Hackers’ Almanack, an annual report distilling the research presented at the most recent edition in August 2025. The report focused on how researchers showed that AI systems are no longer just helping humans hack faster—they can sometimes outperform them. In several cybersecurity competitions, teams using AI agents beat human-only teams, and in one case an AI was allowed to run on its own, successfully breaking into a target system without further human input. Researchers also demonstrated AI tools that can find software flaws at scale, imitate human voices, and manipulate machine-learning systems, highlighting how quickly offensive uses of AI are advancing.
The problem, the researchers argue, is that most policymakers have little visibility into these capabilities, raising the risk of poorly informed AI rules. Their proposal: allow AI systems to openly compete in public hacking contests, record the results in a shared, open database, and use that real-world evidence to help governments develop smarter, more realistic AI security policies.
AI CALENDAR
Feb. 16-20: India AI Impact Summit 2026, Delhi.
Feb. 24-26: International Association for Safe & Ethical AI (IASEAI), UNESCO, Paris, France.
March 2-5: Mobile World Congress, Barcelona, Spain.
March 12-18: South by Southwest, Austin, Texas.
March 16-19: Nvidia GTC, San Jose, Calif.
April 6-9: HumanX, San Francisco.
BRAIN FOOD
The trust dilemma when AI enters the exam room. I was fascinated by this new article in Scientific American, which points out that as AI seeps deeper into clinical care, nurses are finding themselves on the front lines of a new trust dilemma: should they follow algorithm-generated orders when real-world judgment says otherwise? For example, a sepsis alert pushed an ER team to push fluids on a patient with compromised kidneys—until a nurse refused and a doctor overrode the AI. Across U.S. hospitals, the article found, predictive models are now embedded in everything from risk scoring and documentation to logistics and even autonomous prescription renewals, but frontline staff increasingly complain that these tools misfire, lack transparency, and sometimes undermine clinical judgment. That friction has sparked demonstrations and strikes, with advocates insisting that nurses must be at the table for AI decisions—because it’s ultimately humans who bear the outcomes.