Shrewd'm.com
A merry & shrewd investing community
Halls of Shrewd'm / US Policy
Author: mungofitch 🐝🐝 SILVER SHREWD

Number: of 75974 
Subject: Re: Brazil the cheapest?
Date: 04/14/26 9:31 AM
No. of Recommendations: 13
Can anyone explain to me how AI hallucinations work?

I can share my personal view/overview.

The main principle underlying the way they work is to pick the most likely next word in a sentence. To almost everyone's surprise, given some local context and enough probability-table entries, this ends up sounding like a rational person.
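To make that "pick the next most likely word" loop concrete, here is a toy sketch. The bigram table, the words, and the probabilities are all invented for illustration; a real model learns billions of such statistics over sub-word tokens rather than a small hand-written table.

```python
import random

# Toy "language model": for each word, the candidate next words and
# their probabilities. Everything here is made up for illustration.
BIGRAMS = {
    "the":     [("market", 0.5), ("horse", 0.3), ("zebra", 0.2)],
    "market":  [("rallied", 0.6), ("fell", 0.4)],
    "horse":   [("ran", 1.0)],
    "zebra":   [("ran", 1.0)],
    "ran":     [("away", 1.0)],
    "rallied": [("today", 1.0)],
    "fell":    [("today", 1.0)],
}

def generate(start, max_words, rng=None):
    """Repeatedly sample the next word from the probability table."""
    rng = rng or random.Random(0)
    words = [start]
    for _ in range(max_words):
        choices = BIGRAMS.get(words[-1])
        if not choices:  # no known continuation: stop
            break
        nxt, = rng.choices([w for w, _ in choices],
                           weights=[p for _, p in choices])
        words.append(nxt)
    return " ".join(words)

print(generate("the", 3))
```

Note that nothing in the loop checks whether the sentence is *true*; it only checks that each word is a statistically plausible continuation of the one before it. That is the whole mechanism.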

But it isn't. The end result is more like a very well-spoken and confident person who (a) has read everything on the internet, but (b) unfortunately believed it all, (c) has an imperfect memory, and (d) is very good at plausible bullshitting.

The reason for the last one is that, since it's just picking probable word combinations, it inevitably picks whatever sounds most plausible, not necessarily anything with a connection to a fact. That's what hallucinations are: strings of words that go together in a way that is statistically likely given the context, but don't actually have a connection to the real world because...why would they? And since by construction it is so darned plausible-sounding, after a few useful answers we all have a HUGE temptation to start skipping the necessary task of validating whatever it says.

Some fancy models being developed then take what they've written and actually try to go and validate it, but that requires good, locatable sources for the validation of any possible answer. They tend to search the web.

To get an idea of the scale of the bullshit problem, a variety of state-of-the-art models were tested on examples from a medical diagnosis exam. Each round starts with a few symptoms, and one has to work through a series of test results to do the differential diagnosis correctly. The AIs were wrong, and confident, at the early stages in 80-90% of the test cases. They jumped to plausible-sounding conclusions too early with high confidence, and couldn't even give complete lists of the conditions consistent with the facts known so far. If you hear hoofbeats you should suspect a horse before you suspect a zebra, but you should actually have a look before asserting that it's definitely a horse.

Jim