Subject: Re: ELON! With Sorkin
Media Matters wrote some code with the express intent of finding a corner case on Twitter where you could see Hitler or something next to an iPhone ad or whatever. After running through a billion tries they got something like 3 hits and then launched their PR campaign.

No, they didn't. Or at least, not according to the complaint.

The complaint doesn't allege that MM wrote any code. They took an account, set it to follow a bunch of folks who post "hateful content," and then just scrolled and refreshed the feed. Not a billion tries, either - per the complaint, their activity generated about 13-15 times more ads per hour than a "typical" user sees. More, but not a crazy amount more.

They didn't "hack" the system. That's actually... a pretty reasonable way to test whether Twitter's algorithms really do keep major-brand ads away from toxic content? Make an account with a lot of toxic content in the feed and see whether major-brand ads show up? And just speed-run through the ads, so that in a few hours you see what someone sampling a lot of toxic content might see over the course of a week?
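For what it's worth, the audit is simple enough to sketch. The complaint says MM did it by hand - follow, scroll, refresh - but an automated version of the same adjacency check might look something like this. Everything below (`FeedItem`, `audit_ad_adjacency`, the page structure) is hypothetical; none of it is a real Twitter/X API.

```python
from dataclasses import dataclass

# Hypothetical feed model - the complaint describes a manual process,
# so this is an illustrative analogue, not anyone's actual code.
@dataclass
class FeedItem:
    kind: str             # "post" or "ad"
    author: str           # posting account or advertiser handle
    advertiser: str = ""  # brand name (ads only)
    toxic: bool = False   # whether the post is flagged as hateful content

def audit_ad_adjacency(pages, major_brands):
    """Scan refreshed feed pages for major-brand ads rendered next to toxic posts."""
    hits = []
    for page in pages:  # one page of items per scroll/refresh
        for i, item in enumerate(page):
            if item.kind != "ad" or item.advertiser not in major_brands:
                continue
            # "Adjacent" = the items rendered immediately above and below the ad.
            neighbors = page[max(i - 1, 0):i] + page[i + 1:i + 2]
            if any(n.kind == "post" and n.toxic for n in neighbors):
                hits.append(item.advertiser)
    return hits

# One refresh where a brand ad lands directly under a toxic post:
page = [FeedItem("post", "@toxic_poster", toxic=True),
        FeedItem("ad", "BigBrand", advertiser="BigBrand")]
print(audit_ad_adjacency([page], {"BigBrand"}))  # ['BigBrand']
```

Run at 13-15 times a typical user's ad volume, even a rare adjacency like that surfaces in hours rather than a week - which is the whole point of speed-running the feed.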

Correction: Old Twitter's "Hateful conduct" stomped on speech from the right side of the aisle. New Twitter's Hateful Content is intended to actually target hateful content.

Even if true, it's not really relevant. Twitter's old policy on Hateful Conduct - which certainly covered antisemitic content - was to remove it from the site. Twitter's new policy - which still covers antisemitic content - is to leave it up, let the posters keep posting and readers keep reading, but "quarantine" it away from major brands and normie users. Clearly, that quarantine doesn't guarantee results - the handful of hits MM found is proof enough.