Subject: Re: Scary AI experiment
but assuming it did a decent job
...
There has been an argument that, while SaaS was vulnerable to AI, custom-built software designed for a specific industry was better protected.
I guess for me the key word there is "assuming".
For some industries, that's almost certainly about all there is to it. For others, there might easily be 100 person-years of work to validate the solution. I wouldn't run my business on a black box, so for certain classes of problem the box has to be disassembled and checked, almost as if the coder were an adversary trying to slip something past you.
The achievements are very impressive, but I imagine almost all of us are now at the stage of having seen some LLM produce wonderfully plausible WRONG results, with carefully constructed incorrect reasoning and sources to justify them. The most pernicious thing, and the reason I said "as if the coder were an adversary", is that LLMs are trained specifically to produce the most plausible-sounding results. That can be rephrased as "the results that are most likely to fool you about their validity."
So, from the point of view of (say) an investor in a SaaS company, you have to ponder which vertical industry segment's software is in the "whatever, close enough" category--a movie recommendation service?--and which is in the "too expensive to validate all over again" category.
Jim