No. of Recommendations: 2
There are thousands of examples of how AI screws up, some hilarious, some frightening - and yes, it will get better. But absent some industry standard to watermark or otherwise flag the falsehoods (so prevalent today), I don't see how this makes things better. Easier, perhaps, but that is a different thing.
I'd expect this to happen fairly organically over time. There will be self-correcting mechanisms in place.
While the focus of this discussion is on the large number of errors in AI content, human-generated content has had no shortage of errors either, and society has worked through them.
Publishers will lose confidence in AI content generation options that produce a larger number of errors, especially when those errors are identified and called out publicly by consumers or competitors.
This will lead publishers to seek out more accurate AI content generation options, which in turn will drive continuous improvement.
And the more exposure consumers have to error-ridden AI content, the more the market for context and fact-checking services will grow.
We already see a form of this today on Twitter with the Community Notes feature, which adds context or fact checks to tweets, reducing the need for undue censoring.