Subject: Scary AI experiment
My electrical contracting business depended on an estimating package I wrote myself over the years, spending endless hours creating ever more powerful versions - continually giving our firm a significant edge over our competitors.
Last night, having nothing better to do, I pasted its GW-BASIC code into ChatGPT and asked whether it had any bugs (I had it pretty tight after years of use, but figured there might still be one or two cockroaches hiding in the corners). It found one, but then, after complimenting me (flattery will get you anything), asked if I wanted it to enhance the program "a bit". "Sure," I said.
After around 2 1/2 hours and 29 version updates (and a road map to version 40, but even I had to sleep sometime), it had a version in which the AI would take in an AutoCAD drawing or PDF, count all the items, measure the conduit and wire, check whether the design complied with the National Electrical Code, match "build-groups" of items needed, apply labor, etc., and then track whether jobs based on the estimates were profitable - and so on and so on. It wrote it in Python, but took a little under two minutes to rewrite it in Visual Basic when I asked.
Now, to be honest, I have no way to test its work, but assuming it did a decent job, I now have a piece of business-specific software which, I'm guessing, competes favorably with packages costing tens of thousands of dollars (and possibly provides functionality not yet in common use).
The guy at the helm (me) has barely written a line of code in a couple of decades, and the code is being written in languages I have little experience with. Yet, in my prime, if I could have done something at this level (over 8,000 lines of code) at all, it would likely have taken over a year of dedicated work.
There has been an argument that, while SaaS was vulnerable to AI, custom-built software designed for a specific industry was better protected. I am now convinced that the only thing protecting the unique is force of habit (which is admittedly strong), not reality.
Jeff