Thursday, February 26, 2026
World Tribune
Anthropic weakens its safety pledge in the wake of the Pentagon’s pressure campaign

February 25, 2026
in Technology
Reading Time: 6 mins read

Two stories about the Claude maker Anthropic broke on Tuesday that, taken together, arguably paint a chilling picture. First, US Defense Secretary Pete Hegseth is reportedly pressuring Anthropic to abandon its AI safeguards and give the military unrestrained access to its Claude AI chatbot. Then the company chose the same day the Hegseth news broke to drop its centerpiece safety pledge.

On Tuesday, Anthropic said it was modifying its Responsible Scaling Policy (RSP) to lower safety guardrails. Up until now, the company’s core pledge has been to stop training new AI models unless specific safety guidelines can be guaranteed in advance. This policy, which set hard tripwires to halt development, was a big part of Anthropic’s pitch to businesses and consumers.

“Two and a half years later, our honest assessment is that some parts of this theory of change have played out as we hoped, but others have not,” Anthropic wrote. Now, its updated policy approaches safety relatively, rather than with strict red lines.

Anthropic’s quotes in an interview with Time sound reasonable enough in a vacuum. “We felt that it wouldn’t actually help anyone for us to stop training AI models,” Jared Kaplan, Anthropic’s chief science officer, told Time. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments… if competitors are blazing ahead.”

Anthropic CEO Dario Amodei (Photo by David Dee Delgado/Getty Images for The New York Times)

But you could also read those quotes as the latest example of a hot startup’s ethics becoming grayer as its valuation rises. (Remember Google’s old “Don’t be evil” mantra that it later removed from its code of conduct?) The latest versions of Claude have drawn widespread praise, especially in coding. In February, Anthropic raised $30 billion in new investments. It now has a valuation of $380 billion. (Speaking of the competition Kaplan referred to, rival OpenAI is currently valued at over $850 billion.)

In place of Anthropic’s previous tripwires, it will implement new “Risk Reports” and “Frontier Safety Roadmaps.” These disclosure mechanisms are designed to provide transparency to the public in place of those hard lines in the sand.

Anthropic says the change was motivated by a “collective action problem” stemming from the competitive AI landscape and the US’s anti-regulatory approach. “If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe,” the new RSP reads. “The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit.”

Defense Secretary Pete Hegseth speaks during a visit to Sierra Space in Louisville, Colorado, on Monday, February 23, 2026. (Photo by AAron Ontiveroz/The Denver Post via Getty Images)

Neither Anthropic’s announcement nor the Time exclusive mentions the elephant in the room: the Pentagon’s pressure campaign. On Tuesday, Axios reported that Hegseth told Anthropic CEO Dario Amodei that the company has until Friday to give the military unfettered access to its AI model or face penalties. The company has reportedly offered to adapt its usage policies for the Pentagon, but it won’t allow its model to be used for mass surveillance of Americans or for weapons that fire without human involvement.

If Anthropic doesn’t relent, experts say its best bet would be legal action. But will the Pentagon’s proposed penalties be enough to scare a profit-driven startup into compliance? Hegseth’s threats reportedly include invoking the Defense Production Act, which gives the president authority to direct private companies to prioritize certain contracts in the name of national defense. The military could also sever its contract with Anthropic and designate the company as a supply chain risk, which would force other companies working with the Pentagon to certify that Claude isn’t included in their workflows.

Claude is the only AI model currently used for the military’s most sensitive work. “The only reason we’re still talking to these people is we need them and we need them now,” a defense official told Axios. “The problem for these guys is they are that good.” Claude was reportedly used in the Maduro raid in Venezuela, a topic Amodei is said to have raised with its partner Palantir.

Time‘s story about the new RSP included reactions from a nonprofit director focused on AI risks. Chris Painter, director of METR, described the changes as both understandable and perhaps an ill omen. “I like the emphasis on transparent risk reporting and publicly verifiable safety roadmaps,” he said. However, he also raised concerns that the more flexible RSP could lead to a “frog-boiling” effect. In other words, when safety becomes a gray area, a seemingly never-ending series of rationalizations could take the company down the very dark path it once condemned.

Painter said the new RSP shows that Anthropic “believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities. This is more evidence that society is not prepared for the potential catastrophic risks posed by AI.”

© 2024 World Tribune - All Rights Reserved!
