No. of Recommendations: 1
Because Anthropic wouldn't drop safeguards in its AI: that it not be used to automatically kill people, and that it not be used for mass surveillance of citizens.
Trump Orders Government to Stop Using Anthropic After Pentagon Standoff
The company had clashed with the military over how officials wanted to use its A.I. model. The order could vastly complicate intelligence analysis and defense work.
Writing on Truth Social, Mr. Trump used harsh words for Anthropic, describing it as a “radical Left AI company run by people who have no idea what the real World is all about.”
Shortly after Mr. Trump’s announcement, and 13 minutes after a Pentagon deadline, Defense Secretary Pete Hegseth designated the company a “supply-chain risk to national security.” The label means that no contractor or supplier that works with the military can do business with Anthropic.
The move is all but unheard-of, legal experts said. It strips an American company of its government work by using a process previously deployed only with foreign companies the United States considered security risks.
No. of Recommendations: 0
that it not be used to automatically kill people
Show that it (AI) COULD (without safeguards) be used to target the current US leadership and watch Spankee (et al) ban ALL AI.
No. of Recommendations: 2
just the nudge grok needed to rekindle the musk/trump $bromance.
on a related note, musk has now checked war crimes off his bucket list, first denying starlink was used to guide russian missiles at civilian targets.
in such a use, vendor-provided guidance for munitions has long been subject to sanctions.
musk now trying to compensate:
https://www.theatlantic.com/national-security/2026...