Subject: Re: ELON! With Sorkin
If you want to test whether Twitter's algorithms actually prevent major brands' ads from appearing next to pictures of Hitler, you need to set up an account that feeds pictures of Hitler into it. They're not testing whether typical users will frequently - or even ever - see pictures of Hitler. They're testing whether the Twitter "speech not reach" algorithms will prevent the ads from being placed next to Hitler. It's not really relevant whether there were 5 or 5 billion non-Hitler posts on the site at the time; what they're testing is what the software does with the Hitler images.

Media Matters wrote some code with the express intent of finding a corner case on Twitter where you could see Hitler or something next to an iPhone ad or whatever. After running through a billion tries they got something like 3 hits and then launched their PR campaign.

The PR campaign had the specific intent of smearing X by making the claim: "Look, advertisers, your images are going to appear right alongside Hitler!!" That's not illegal, although it is 100% sleazy.

Musk's Twitter changed that policy, allowing the Hateful Conduct (and the posters) to remain on the site - but claimed that it had precautions in place to protect the major brands from being exposed to the Hateful Conduct.

Correction: Old Twitter's "Hateful Conduct" policy stomped on speech from the right side of the aisle. New Twitter's Hateful Conduct policy is intended to actually target hateful content.