Shrewd'm.com 
A merry & shrewd investing community


Personal Finance Topics / Macroeconomic Trends and Risks
Author: OrmontUS
Number: of 3852 
Subject: Pentagon blackmails Anthropic
Date: 02/25/26 6:18 AM
No. of Recommendations: 12
https://www.cnn.com/2026/02/24/tech/hegseth-anthro...

Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to comply with demands to peel back safeguards on its AI model or risk losing a Pentagon contract.

He also threatened to put the AI company on what could amount to a government blacklist.

At issue are the guardrails Anthropic placed on its AI model Claude. The Pentagon, which has a $200 million contract with Anthropic, wants the company to lift its restrictions so the military can use the model for “all lawful use,” according to two sources familiar with the discussions.

But Anthropic has concerns over two issues that it isn’t willing to drop, the sources said: AI-controlled weapons and mass domestic surveillance of American citizens. According to one source familiar with the matter, Anthropic believes AI is not reliable enough to operate weapons, and there are no laws or regulations yet that cover how AI could be used in mass surveillance.

“You can’t lead tactical ops by exception,” a Pentagon official said. “Legality is the Pentagon’s responsibility as the end user.”

A source familiar with the Tuesday meeting says the Pentagon plans to terminate Anthropic’s contract by Friday if the company does not agree to its terms. The Pentagon official told CNN the company has until 5:01pm on Friday to “get on board or not.” And if it doesn’t, Hegseth will ensure “the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not.” Hegseth will also label Anthropic a supply chain risk, the official said.

The DPA is a law that gives the government the ability to influence businesses in the interest of national defense, recently invoked by the Trump administration during the pandemic. The supply chain risk designation would prohibit companies with military contracts from using Anthropic’s products in any of their military work. It could deal a major blow to the AI firm at a time when it’s trying to expand its reach in the enterprise space, considering many large corporations have military contracts. The designation is usually reserved for companies seen as extensions of foreign adversaries like Russia or China.

Jeff


Author: OrmontUS
Number: of 3852 
Subject: Re: Pentagon blackmails Anthropic
Date: 02/26/26 12:51 AM
No. of Recommendations: 1
https://edition.cnn.com/2026/02/25/tech/anthropic-...

Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.

Instead of self-imposed guardrails constraining its development of AI models, Anthropic is adopting a nonbinding safety framework that it says can and will change.

In a blog post Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market.

The announcement is surprising, because Anthropic has described itself as the AI company with a “soul.” It also comes the same week that Anthropic is fighting a significant battle with the Pentagon over AI red lines.

The policy change is separate and unrelated to Anthropic’s discussions with the Pentagon, according to a source familiar with the matter. Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei an ultimatum on Tuesday to roll back the company’s AI safeguards or risk losing a $200 million Pentagon contract. The Pentagon threatened to put Anthropic on what is effectively a government blacklist.

But the company said in its blog post that its previous safety policy was designed to build industry consensus around mitigating AI risks – guardrails that the industry blew through. Anthropic also noted its safety policy was out of step with Washington’s current anti-regulatory political climate.

Anthropic’s previous policy stipulated that it should pause training more powerful models if their capabilities outstripped the company’s ability to control them and ensure their safety — a measure that’s been removed in the new policy. Anthropic argued that responsible AI developers pausing growth while less careful actors plowed ahead could “result in a world that is less safe.”

Jeff


Author: Steve203 🐝
Number: of 3852 
Subject: Re: Pentagon blackmails Anthropic
Date: 02/26/26 12:43 PM
No. of Recommendations: 5


At issue are the guardrails Anthropic placed on its AI model Claude. The Pentagon, which has a $200 million contract with Anthropic, wants the company to lift its restrictions so the military can use the model for “all lawful use,” according to two sources familiar with the discussions.

Reports I have seen indicate the issue is larger than that, with words to the effect of: "China has no privacy protections, which allows it to develop its AI systems faster. For the US to keep up with China, privacy protections need to be eliminated here too." This framing provides the "national security" excuse to override all other concerns.

The DOD is reportedly insisting “that all AI labs make their models available for ‘all lawful uses'” while “Anthropic is willing to loosen its usage restrictions” except for “the mass surveillance of Americans” and “the development of weapons that fire without human involvement,” which, it should be noted, are only a small fraction of Anthropic’s existing usage policy. Critically, the DOD has not said why it objects to the restriction against using Claude to develop “the mass surveillance of Americans.”

https://www.americanprogress.org/article/the-trump...

US diplomats urged to oppose data sovereignty laws impacting AI services

The dominance of U.S. artificial intelligence companies – many of which draw on massive stores of personal data to power their models – has underlined European concerns around privacy and surveillance. Officials across the continent have increased pressure on American social media giants, too.


https://journalrecord.com/2026/02/26/us-diplomats-...

From the Google net sifter:

The assertion that the United States must eliminate privacy laws to compete with China in artificial intelligence (AI) is a central, debated theme in current U.S. technology policy, particularly highlighted by the 2025-2026 Trump administration’s push to remove "barriers to American leadership" in AI.

Proponents of this view argue that state-level privacy and AI regulations create a "patchwork" of rules that hinders innovation, and that unrestricted data access is necessary to keep pace with China's centralized, less-restricted AI development. Conversely, opponents and some experts argue that lowering privacy standards risks national security and civil liberties, and that focusing on trust and security can serve as a "secret weapon" for the U.S.


Steve




