Halls of Shrewd'm / US Policy
No. of Recommendations: 10
At Manlobbi's request, I am responding to a post/thread on the Berkshire board here, where it is more appropriate.
The original thread:
https://www.shrewdm.com/MB?pid=765143986&wholeThre...

"It seems the wrong and ugly ones will likely be short lived and the overall size would be tiny compared to human generated and the good AI generated ones (which come from human generated)."

Couldn't disagree more strongly. We are barely at the narrow end of the timeline funnel. As AI output grows it will multiply on itself in geometric progression, even as human-generated content remains relatively constant or even diminishes. Absent some sort of 'marking', we will never know what is original and what is mixmastered AI repeating mistakes and falsehoods.
The examples are already numerous: CNET reviewed its AI-generated stories and found that more than half contained errors. Sports Illustrated tried, and bungled its very first published story. IOP Publishing retracted five papers from its Earth and environmental science journals. None were known by the general public to be machine generated, and all lingered in the public sphere for some time. Without some sort of watermark this will continue, and AI, trained further on these false articles, will amplify them again. And again.
No. of Recommendations: 1
It's the engineers behind ChatGPT who determine what data to use for building or improving ChatGPT. It can reasonably be assumed they will find a way to discard content generated by bots that adds no information, so future bots won't be trained on wrong data; further, they will correct the bots' mistakes. ChatGPT should get better over time.
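If they do go that route, a first pass would presumably look like ordinary data-pipeline hygiene: deduplicate, score each document for how bot-like it is, and drop whatever falls below some quality bar. A toy Python sketch, purely illustrative - the "classifier" here is a crude repetition heuristic standing in for a real trained model:

    def looks_machine_generated(doc: str) -> float:
        # Stand-in heuristic: score repetitiveness. A real pipeline
        # would use a trained classifier, not a word-repetition count.
        words = doc.lower().split()
        if not words:
            return 1.0
        return 1.0 - len(set(words)) / len(words)

    def filter_corpus(docs, threshold=0.5):
        seen, kept = set(), []
        for doc in docs:
            key = doc.strip().lower()
            if key in seen:  # drop exact duplicates
                continue
            seen.add(key)
            if looks_machine_generated(doc) < threshold:
                kept.append(doc)
        return kept

    docs = [
        "The quarterly results beat analyst expectations.",
        "buy now buy now buy now buy now",
        "The quarterly results beat analyst expectations.",
    ]
    print(filter_corpus(docs))  # keeps only the first document

Of course, the hard part is that good AI-generated text doesn't look repetitive or duplicated at all - which is the crux of the disagreement below.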
No. of Recommendations: 6
"It's the engineers behind the ChatGPT who determine what data to use for building or improving the ChatGPT. It can be reasonably assumed they..."
"Whose bread I eat, his song I sing" (Charlie Munger, quoting a German idiom)
It won't be the engineers selecting the filter sets. It will be the engineers' bosses' diktat... whether that be Mark Zuckerberg, Sundar Pichai, the Chinese Communist Party, or the Koch/Fox empire, as the case may be.
--sutton
No. of Recommendations: 1
<It won't be the engineers selecting the filter sets. It will be the engineers' bosses' diktat... whether that be Mark Zuckerberg, Sundar Pichai, the Chinese Communist Party, or the Koch/Fox empire, as the case may be.>
You raise a good question. There should be a discussion at the national level about the rules for selecting data used to build AI.
No. of Recommendations: 7
You raise a good question. There should be a discussion at the national level about the rules for selecting data used to build AI.
Well, that's fine, but: how do they know whether a particular piece - or source - is AI-generated in the first place? Or, for that matter, just wrong?
The Lancet and the New England Journal of Medicine each retracted an early published paper on the effectiveness of hydroxychloroquine for treating or preventing Covid, saying that the underlying data set was insufficient or manipulated (respectively). Nevertheless, the study was cited more than 200 times in other scientific papers and journals before the retraction - and here's the interesting part - more than 100 of those citations have not been annotated, retracted, or otherwise changed in any way since then.
Presumably a 'smart enough' AI could figure that out and work backwards - or will the sheer volume of citations be enough to keep the misinformation spreading, as recent experiments by Microsoft, Google, and others show is all too easy?
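For what it's worth, the "work backwards" part is mechanically straightforward if you have a citation graph and a list of known retractions. A toy Python sketch, with invented paper names:

    from collections import defaultdict, deque

    # Illustrative citation graph: paper -> papers it cites.
    citations = {
        "paper_A": ["retracted_hcq_study"],
        "paper_B": ["paper_A"],
        "paper_C": ["paper_B", "paper_A"],
    }
    retracted = {"retracted_hcq_study"}

    # Invert the graph: paper -> papers that cite it.
    cited_by = defaultdict(list)
    for paper, refs in citations.items():
        for ref in refs:
            cited_by[ref].append(paper)

    # Breadth-first walk outward from each retracted paper, flagging
    # everything downstream that may have inherited the bad data.
    tainted = set()
    queue = deque(retracted)
    while queue:
        current = queue.popleft()
        for citing in cited_by[current]:
            if citing not in tainted:
                tainted.add(citing)
                queue.append(citing)

    print(sorted(tainted))  # ['paper_A', 'paper_B', 'paper_C']

The hard part isn't the graph walk; it's that a citation isn't always an endorsement of the retracted finding, so every flagged paper still needs human review - exactly the annotation work that apparently hasn't happened.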
And in other instances, how will it know whether it is merely regurgitating misinformation if it doesn't know that the AI-generated information it's repeating was wrong in the first place? It's turtles all the way down!
There are thousands of examples of how AI screws up, some hilarious, some frightening - and yes, it will get better. But absent some industry standard to watermark machine-generated text, or to otherwise flag and filter the falsehoods (so prevalent today), I don't see how this makes things better. Easier, perhaps, but that is a different thing.
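For reference, the watermarking schemes researchers have floated don't put a visible tag on the text. One published idea - the "green list" approach from academic work on LLM watermarking - has the generator secretly favor a pseudorandom subset of the vocabulary at each step, seeded by the previous word; a detector that knows the scheme just counts how often the text lands in that subset. A toy sketch, with a tiny invented vocabulary:

    import hashlib

    VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "rug", "fast"]

    def green_list(prev_word):
        # Pseudorandomly split the vocabulary in half, seeded by the
        # previous word. A watermarking generator would favor this half;
        # the detector recomputes the very same split.
        seed = hashlib.sha256(prev_word.encode()).hexdigest()
        ranked = sorted(
            VOCAB,
            key=lambda w: hashlib.sha256((seed + w).encode()).hexdigest(),
        )
        return set(ranked[: len(VOCAB) // 2])

    def green_fraction(words):
        # Fraction of tokens drawn from the 'green' half. Unmarked human
        # text should hover near 0.5; watermarked text scores much higher.
        hits = sum(1 for prev, cur in zip(words, words[1:])
                   if cur in green_list(prev))
        return hits / max(1, len(words) - 1)

    text = "the cat sat on the mat".split()
    print(f"green fraction: {green_fraction(text):.2f}")

The catch, as the researchers themselves note, is that this only works if the generator cooperates and the text isn't heavily paraphrased - which is why an industry standard (or a mandate) keeps coming up.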
No. of Recommendations: 0
<Well that's fine, but. How do they know whether a particular piece - or source - is AI in the first place? Or, for that matter, just wrong?>
Whether the source is AI or not is less important than whether it's correct or not. I think there are many possible hints, and the smart AI folks will figure them out: cross-examination of sources, plus cooperation from journalists and websites that have signed a pledge that any AI-generated information they publish is verified by humans.
No. of Recommendations: 2
There are thousands of examples of how AI screws up, some hilarious, some frightening - and yes, it will get better. But absent some industry standard to watermark machine-generated text, or to otherwise flag and filter the falsehoods (so prevalent today), I don't see how this makes things better. Easier, perhaps, but that is a different thing.
I'd expect this to happen fairly organically over time. There will be self-correcting mechanisms in place.
While the focus of this discussion is on the large number of errors in AI content, human-generated content has had no shortage of errors either, and society has worked through it.
Publishers will lose confidence in AI content-generation options that produce more errors, especially when those errors get identified and called out publicly by consumers or competitors.
This will lead to publishers seeking out more discerning (accurate) options for AI content generation, which will drive further continuous improvement efforts.
And the more exposure consumers have to error-ridden AI content, the greater the market will grow for context/fact checking services.
We already see a form of this today on Twitter, where the Community Notes feature adds context or fact checks to tweets, reducing the need for undue censoring.
No. of Recommendations: 2
The genie is out of the bottle; it cannot self-correct in time, and effective legislation is never going to happen. I'm not sure where this leaves us, but probably not in a good place.
No. of Recommendations: 0
I actually think ChatGPT (or another AI system) has the potential to solve the misinformation problem on the Internet. ChatGPT has an ability that no human has: it can analyze all the data and come to the most likely truth.