No. of Recommendations: 2
Maybe I'm missing something. I didn't find it that scary. Mostly because the article doesn't identify any plausible vector by which an AI could end the world - like, say, engineering a deadly virus.
All the examples in the article are "data in, data out." The outputs of malicious prompts are just malicious (or potentially malicious) information: a horrific image, horrific text, a malicious piece of code. But those are all things that humans can (and do) generate today...and information by itself can only affect the world in limited, indirect ways.
That's not nothing, of course - it's easy to spin up scenarios where an expertly doctored fake video might have geopolitical implications. But it's a far cry from an AI ever having the ability, on its own, to manufacture a physical virus that can Kill All Humans. AI could help enormously in assisting a group of humans to do that - but there's no vector described in the article by which AIs could do it on their own.