No. of Recommendations: 10
At Manlobbi's request, I am responding to a post/thread on the Berkshire board here, where it is more appropriate.
The original thread:
https://www.shrewdm.com/MB?pid=765143986&wholeThre...

"It seems the wrong and ugly ones will likely be short lived and the overall size would be tiny compared to human generated and the good AI generated ones (which come from human generated)."

Couldn't disagree more strongly. We are barely at the narrow end of the timeline funnel. As AI output grows, it will multiply on itself again and again in geometric progression, even as human-generated content remains relatively constant or diminishes. Absent some sort of 'marking', we will never know what is original and what is mixmastered AI repeating mistakes and falsehoods.
The examples are already numerous: CNET found errors in more than half of the dozens of AI-generated stories it published. Sports Illustrated tried AI-generated content and bungled its very first published story. IOP Publishing retracted five papers from its Earth and Environmental Science journals. None were known by the general public to be machine generated, and all lingered in the public sphere for some time. Without some sort of watermark this will continue, and AI, further trained on these false articles, will amplify them again. And again.