Author: OrmontUS
Subject: DoD terms: OpenAI vs Anthropic
Date: 03/01/26 12:58 AM
No. of Recommendations: 4
In my opinion, this boils down to whether a firm should trust the Department of Defense (War) to interpret whether an action it takes with an AI is legal, or whether the AI vendor should have veto power (per contract) if the application were to be used for either fully autonomous weapon systems or for surveillance of the US population. I wonder what the US population, if surveyed, would say its preference was.

Jeff

According to ChatGPT (OpenAI):

Core Differences in the Agreements

1. “All lawful purposes” clause

OpenAI agreed to contractual language that explicitly allows the DoD to use its AI for “all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.” This language was the central point of disagreement in negotiations with Anthropic.

Anthropic refused to accept this clause, arguing that because current law may not explicitly prohibit some ethically concerning uses of AI (e.g., AI-enabled surveillance or autonomous targeting), an “all lawful purposes” formulation was too permissive.

2. Enforcement of safety “red lines”

OpenAI’s agreement includes three contractually framed “red lines,” which OpenAI says are backed by:

its own safety stack (technical restrictions it controls),

cloud-only deployment,

cleared OpenAI personnel oversight,

strong contractual protections.

Those red lines are:

No use for mass domestic surveillance.

No use to direct autonomous weapons systems.

No use for high-stakes automated decisions (e.g., “social credit”-like systems).

Anthropic’s refusal stemmed from how those red lines were framed and enforced in the contract language: it maintained that the DoD’s proposal did not meaningfully bind the department to those limitations, and that they could be overridden by policy changes, so it would not agree.

3. Contractual vs. policy reliance

OpenAI accepted a structure in which the DoD’s use of its technology — including prohibitions on certain applications — depends on existing law and policy as referenced in the contract. That means the restrictions are tied to what law and DoD policy currently provide, rather than to independent, standalone prohibitions imposed by the company.

Anthropic’s position was that simply aligning with existing law/policy was insufficient, because laws may not yet comprehensively regulate AI use, and the company wanted clearer, explicit contractual protections that could not be bypassed or altered by future policy changes.

What Anthropic Did Not Agree To

Anthropic declined DoD terms that would have:

Embedded an “all lawful purposes” clause enabling broader use of Claude by the military without company-controlled guardrails.

Tied safety restrictions primarily to existing statute and DoD policy rather than to independent, explicit contractual prohibitions.

As a result, the Pentagon called Anthropic a supply-chain risk and ordered U.S. agencies to cease use of its technology in government systems.

Summary (Clear Contrast)

Term / Condition                                | OpenAI                                  | Anthropic
------------------------------------------------|-----------------------------------------|------------------------------------------------
Allow DoD use for “all lawful purposes”         | ✔︎ Agreed (contract language)            | ✘ Rejected
Limitations on domestic surveillance            | ✔︎ Agreed (contract tied to law/policy)  | Wanted stronger, explicit protections
Limitations on autonomous weapons use           | ✔︎ Agreed (contract tied to law/policy)  | Wanted stronger, explicit protections
Ban on high-stakes automated decisions          | ✔︎ Agreed (OpenAI’s stated “red lines”)  | Not known to have been part of its negotiation position

In short: OpenAI accepted terms that tie safety commitments to existing law and broad “lawful use” authority for the Pentagon, whereas Anthropic insisted on more concrete, independent contractual guardrails and did not agree to the DoD’s framing. The key divergence was not primarily which outcomes to avoid but how the contractual language binds the parties to those outcomes.