No. of Recommendations: 4
> She is wrong. A US with those capabilities is no longer a US. Anthropic has taken a proper moral stand. If only other companies were so nobly inclined (not just AI companies).
> Dangerous times.
I try to post when I have information to add, or a novel viewpoint. Sometimes I fail, but I try.
This is an intentional exception, though.
I wholeheartedly agree with you and felt it was important to speak up. Wielding these informational doomsday weapons as glibly as the current leadership does will drive leading researchers, both foreign and domestic, to leave the US and work elsewhere. China, India, and Europe are hiring, paying well, and are far different from what the average mass media consumer in the US has been led to believe. This would be a substantially larger, national-scale version of how PLTR has recruitment problems even with its notoriously high compensation, because of the stink of the name on a resume.
The DoD's stance was morally and - almost worse - strategically wrong. Stupid, self-defeating, and *uncharacteristically and embarrassingly public* behavior. Clearly DoD leadership has changed in quality and character.
What I can offer is that OpenAI in some areas seems better connected to Republican-corporate power brokers. This seemed contrived and a giveaway to political allies. One can follow model performance closely, and OpenAI has fallen behind almost all its rivals. The scaled-back data center plans with Oracle are just one example. If you know the name Sam Altman and followed the OpenAI Foundation PBC politics over the last 5 or so years, you'll know about as much as you need to know about the company. They have hemorrhaged top research talent since then, as those people can write their own offers at any frontier research lab or startup and can vote on ethics with their feet.