Shrewd'm.com 
A merry & shrewd investing community
Best Of Macro | Best Of | Favourites & Replies | All Boards | Post of the Week! | How To Invest


Personal Finance Topics / Macroeconomic Trends and Risks
Author: OrmontUS   😊 😞
Subject: Re: Pentagon blackmails Anthropic
Date: 02/26/26 12:51 AM
No. of Recommendations: 1
https://edition.cnn.com/2026/02/25/tech/anthropic-...

Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.

Instead of self-imposed guardrails constraining its development of AI models, Anthropic is adopting a nonbinding safety framework that it says can and will change.

In a blog post Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market.

The announcement is surprising, because Anthropic has described itself as the AI company with a “soul.” It also comes the same week that Anthropic is fighting a significant battle with the Pentagon over AI red lines.

The policy change is separate from, and unrelated to, Anthropic’s discussions with the Pentagon, according to a source familiar with the matter. Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei an ultimatum on Tuesday: roll back the company’s AI safeguards or risk losing a $200 million Pentagon contract. The Pentagon also threatened to put Anthropic on what is effectively a government blacklist.

But the company said in its blog post that its previous safety policy was designed to build industry consensus around mitigating AI risks – guardrails that the industry blew through. Anthropic also noted its safety policy was out of step with Washington’s current anti-regulatory political climate.

Anthropic’s previous policy stipulated that it should pause training more powerful models if their capabilities outstripped the company’s ability to control them and ensure their safety — a measure removed in the new policy. Anthropic argued that responsible AI developers pausing growth while less careful actors plowed ahead could “result in a world that is less safe.”

Jeff

