@tecmo
I agree that there is great potential for AI in areas other than hard science.
It allows doctors, with patient consent, to record their visits, and AI automatically transforms the recordings into clinical notes and summaries.
Health-care organizations will test and validate these tools before the company rolls them out more broadly.
I sure hope they validate the hell out of it before they go summarizing my medical records!
An AI-generated summary of one of my scientific papers didn't go so well.
Regulation:
A possible approach: Microsoft (and other AI providers) formalize a validation protocol; the health-care organizations and their providers follow the defined protocol; Microsoft reviews and signs off on the validation; and only then does Microsoft allow the AI application for enterprise-wide use (toy sketch below).
Similar to how doctors sign their names to clinical notes, the medical personnel involved in validation would need to sign their names attesting that the AI summaries agreed with their notes.
Everyone needs to be on the hook: Microsoft, the health care organization, and the individual providers involved in validation.
Validation might need to be done annually, because Microsoft would undoubtedly be rolling out new, updated AI versions that would need revalidation.
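To make that accountability chain concrete, here is a toy sketch in Python of how the sign-offs might compose. Everything in it (the class, the method names, the one-year window) is hypothetical and for illustration only; it is not any real Microsoft or health-care system API.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date


@dataclass
class Validation:
    """Toy model of the sign-off chain described above. All names are
    hypothetical; this illustrates the accountability idea, not any
    real vendor or health-care API."""
    ai_version: str                        # e.g. "note-summarizer-2025.1"
    organization: str                      # the health-care org doing the testing
    provider_signatures: list[str] = field(default_factory=list)
    vendor_signed_on: date | None = None   # date of the vendor's sign-off, if any

    def sign_by_provider(self, name: str) -> None:
        # A clinician attests that the AI summaries agreed with their notes.
        self.provider_signatures.append(name)

    def sign_by_vendor(self, today: date) -> None:
        # The vendor reviews the organization's validation and signs off;
        # it cannot sign off on a validation nobody performed.
        if not self.provider_signatures:
            raise ValueError("no provider signatures to review")
        self.vendor_signed_on = today

    def approved_for_enterprise_use(self, as_of: date) -> bool:
        # Allowed only once everyone has signed, and only for a year,
        # since new AI versions would need revalidation.
        return (
            self.vendor_signed_on is not None
            and (as_of - self.vendor_signed_on).days <= 365
        )


v = Validation(ai_version="note-summarizer-2025.1", organization="Example Clinic")
v.sign_by_provider("Dr. A. Smith")                        # provider on the hook
v.sign_by_vendor(today=date(2025, 1, 15))                 # vendor on the hook
print(v.approved_for_enterprise_use(date(2025, 6, 1)))    # True
print(v.approved_for_enterprise_use(date(2026, 6, 1)))    # False: annual revalidation due
```

The guard in the vendor sign-off is the "everyone on the hook" part: no approval exists unless both the providers and the vendor have put their names on it, and it expires after a year.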
LLMs are currently error-prone.
Stupidity by isolated people can be fixed (if necessary, isolated people can be fired).
But systemized stupidity is insidious; if allowed to take root, it will be almost impossible to fix.
Everyone needs to be on the hook in a validation process for AIs to actually achieve their potential in enhancing productivity (IMO).