No. of Recommendations: 28
I am just leaving my 25th business school reunion, where I got a chance to sit in on some really great talks about the future of things. Generative AI was a dominant theme at this year's reunion, presented in ways that were humorous and optimistic but also worrisome. As one quote had it, like many technical innovations AI is neither good nor bad, but it's also not neutral. A few thoughts from the weekend.
* Outside of those deeply invested in the cutting edge of development, most of the world doesn't appreciate how advanced AI is becoming and how much impact it is going to have in a short period of time. Much of the underestimation is not because people can't imagine how capable general AI can "eventually" be; it's because people have a hard time appreciating the time scale at which the advances are happening. The exponential nature of the development means that going from "not as good" to "as good" to "better than human" capability in an area now happens in the span of months, not years as it did just five years ago.
* AI systems have gotten massively larger recently and encode a lot of context about the world. That context allows them to provide very sophisticated responses and gives them far greater capacity to infer from limited data the way humans do. They pick up new areas of skill far faster than humans, and at much higher starting levels of performance. We aren't prepared for what's coming in the very near future.
* The AI ecosystem has recently shifted (and continues to shift) from a research-collaboration paradigm to an economic-returns paradigm. The amount of capital required for the largest AI development efforts means money is coming from many sources that want returns on that capital. This is what is happening with OpenAI. With this change in regime come some troubling predictions. The focus has changed to figuring out who the competitive winners and losers are going to be, and to a push to "be first" to claim the economic rewards. In that kind of incentive system, you tend to see a lot more willingness to cheat, break rules, and focus on the upside while ignoring the downsides. There is a lot at stake for the world in how things evolve over the next little while, and you have to ask who should be looking after safeguards.
* The capacity for efficiency gains is massive. As a company, it's not just that you can do the same with fewer people/resources by using AI; in most cases, you will be able to do more with less. It will be a powerful restructuring of how companies organize areas like software engineering, customer service, planning and analysis, etc.
* Expect there to be a lot of displacement. There has been a saying in radiology, a field that I am familiar with, that AI won't replace radiologists, but radiologists who know how to use AI will replace radiologists who don't. At this reunion, the professors challenged that conventional wisdom. If you could see how AI is starting to function, it's hard to conclude that AI won't be coming after a lot of jobs straight up. When cars started taking over from horses as the primary powered transportation, the population of horses plummeted in a handful of years. The horses didn't become drivers for cars (OK, not a great analogy, but it was a funny slide they put up to make the point). The speaker noted that a lot of people today will say we are so much better off because of the industrial revolution, but they forget that history is biased by the victors/beneficiaries of a revolution, and we are mostly the descendants of groups that benefited. For 80 years, the industrial revolution caused enormous misery for many of the people who were essentially trapped as cogs in the machinery with no way to escape. There are entire classes of people that didn't make it through the revolution. Something similar is likely to be the case with AI.
* Back to those who know how to use AI vs. those who don't. There are scenarios where the distribution of power in society, or amongst countries, or amongst companies as large or larger than countries, will shift dramatically. The use of AI by state actors in elections was discussed, and detailed work done by an internet security research institute was highlighted as the only reason we have been able to trace that these things happened. (BTW, that institute's funding was recently gutted by the efforts of a congressperson who wanted to investigate them because they highlighted Russian efforts to tilt the race toward Republicans.) It was fascinating to see how AI allows really rapid, scaled, inexpensive A/B testing to find out what does and doesn't work on social media to rouse us as a group and create an us-vs.-them narrative. Some of the examples seem nonsensical to us as humans, but they are effective if you remove our human biases. Can you run a credible news service that serves 50M people in rural India from a Chinese editorial HQ where no one speaks any Indian language? Apparently you can, if you can build the AI tools to do all of the gathering, contextualizing, and distribution for you. Can a group in Russia figure out the inner dynamics of a particular "identity group", like Southern Christians or, alternatively, liberal Bay Area techies, and become really effective at creating viral, influential content tailored for those groups? No problem. Some of the examples were eye-opening for sure.
How these changes coming down the pipe will affect BRK, I don't know. But I've been through the hype cycle with both the dot-com era and now AI. This one actually feels far more substantial, grounded in demonstrated examples of new capabilities, than the hype around dot-com. And no one can deny how much the things that came out of the dot-com era (separate from investment returns) have radically changed our world.
No. of Recommendations: 2
I 100% agree. Well written. Did you use AI?😃
Several months back Musk invited WEB to invest in Tesla. I got two responses with over 40 recs to my post suggesting we accept: 1) Musk, not interested. 2) Tesla doesn't have any good businesses to invest in.
I was lucky enough to get an invite to the 10/10 Robotaxi reveal in LA. (I'll be the old geezer in tan Carhartts screaming his head off!) I view it as a pivotal point in the history of the world: the dawn of the age of autonomous, electric and intelligent robots.
Is robotics and AI investible? Hell yes. The obvious vehicle is TSLA.
Is the board interested? No. Doesn't fit WEB's methodology. Lots of seniors here. First rule is don't lose money. Another is: stay within your circle of competence. It's just not realistic to expect a retiree to spend a thousand hours learning about AI. Someone replied a while back to one of my posts that they just want to sleep well at night. A reasonable objective.
No. of Recommendations: 0
" Several months back Musk invited WEB to invest in Tesla. I got two responses with over 40 rec's to my post suggesting we accept: "
There is little chance Buffett would do material business with Musk. Musk is a compliance nightmare and way too political.
No. of Recommendations: 14
/Start of rant
I continue to be amazed about how bad the current AI is, even Perplexity.ai
If you ask questions about topics for which there is layperson confusion, expecting expert clarification, it confidently replies with layperson confusion.
I asked about how adjusted prices are calculated, and why they're calculated that way. It confidently gave a wrong answer.
I mentioned in another post that when I asked it to explain Buffett's "five groves", it confidently and incorrectly replied that it's five fundamental ratios, P/E, P/B, etc.
I've mentally noted about four other cases.
The point in common is that these are topics that non-experts find confusing, and so there are lots of wrong answers to train on.
Bottom line: until things are fixed, the natural language processing implementation of "AI" is basically wandering down to the village to ask random villagers for answers, and there are more village idiots than experts.
The danger is that popular misconceptions will only get amplified by current AI.
Fixes:
Some fixes are cheap and require no retraining. For example, even if reddit is used in current training, just filter the output so that perplexity.ai doesn't cite reddit and similar sites. A more expensive fix is to retrain using only reputable input sources. It is not hard to come up with a list of reputable sources for either filtering current output or for retraining; I asked perplexity.ai to do exactly that, and its list was perfectly fine.
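The "cheap fix" described above, filtering cited sources against an allowlist, is simple to sketch. Here is a minimal Python illustration; the domain list, the data shape, and the `filter_citations` helper are all hypothetical, not Perplexity's actual machinery:

```python
# Hypothetical sketch of output-side source filtering: keep only citations
# whose host appears on an allowlist of reputable domains.
from urllib.parse import urlparse

REPUTABLE_DOMAINS = {"sec.gov", "federalreserve.gov", "investopedia.com"}

def filter_citations(citations):
    """Return only the URLs whose host matches the allowlist."""
    kept = []
    for url in citations:
        host = urlparse(url).netloc.lower()
        if host.startswith("www."):   # so www.sec.gov matches sec.gov
            host = host[4:]
        if host in REPUTABLE_DOMAINS:
            kept.append(url)
    return kept

sources = [
    "https://www.sec.gov/files/form10-k.pdf",
    "https://www.reddit.com/r/stocks/comments/abc",
    "https://www.investopedia.com/terms/a/adjusted_closing_price.asp",
]
print(filter_citations(sources))  # the reddit link is dropped
```

The more expensive retraining fix would apply the same kind of allowlist to the training corpus instead of the output.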
Where AI will shine:
In contrast to NLP, which has the 'village idiot' problem, AIs that use real-world data for input (video, sound, radar, touch) for self-learning autonomous navigation and similar tasks are not subject to it. Learning to read X-rays, CT scans, etc. is not subject to the village idiot problem. Learning to fold proteins and solving other scientific tasks is not subject to the village idiot problem.
Running and debugging code generated by an AI is not subject to the village idiot problem either. The loop where you ask the AI for code, run it, watch it fail, then ask the AI to debug it until it finally runs is quite frustrating. Better to have the AI run the code itself and hand you already-debugged code. There are concerns about having AI actually run code, but you could "air gap" the sandbox it uses to test it. That may not be perfect, but neither are, e.g., BSL-4 labs; you make it as close to perfect as you can.
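The run-it-and-fix-it loop described above can be sketched in a few lines. This toy version executes candidate code in a subprocess with a timeout and feeds the traceback back for another attempt; `ask_model_to_fix` is a stand-in for a real LLM call, and a real air-gapped sandbox would need container or VM isolation, not just a subprocess:

```python
# Toy run-and-repair loop: execute candidate code in a separate interpreter,
# and on failure hand the traceback to a (stubbed) model for a new attempt.
import os
import subprocess
import sys
import tempfile

def run_candidate(code, timeout=5):
    """Run the code string in its own interpreter; return (ok, output_or_traceback)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=timeout)
        ok = proc.returncode == 0
        return ok, proc.stdout if ok else proc.stderr
    finally:
        os.unlink(path)

def ask_model_to_fix(code, error):
    """Placeholder for an LLM call; here it just repairs one known typo."""
    return code.replace("pritn", "print")

code = 'pritn("hello")'     # deliberately broken candidate
for _ in range(3):          # bounded repair loop
    ok, out = run_candidate(code)
    if ok:
        break
    code = ask_model_to_fix(code, out)
print(ok, out.strip())
```

The timeout and the bounded loop are the minimal safety rails; the "air gap" the post mentions would sit around `run_candidate`.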
Here's a simple one: have AI replace RTFM.
Example: I'm learning to use Lightroom, a commercial photo-processing program, and when I need to know how to do something I ask perplexity.ai instead of reading the manual or watching YouTube tutorials. (The latter are actually helpful for a global overview, but often you just want to know "how do I do X?" and get the right answer immediately; perplexity.ai does that.) I'd kill for a voice interface (with the response in text as well as voice)! Perplexity.ai is working with SoundHound to develop an advanced voice recognition system for mobile devices that works in noisy situations, cars, etc. "Give me four good medium-priced restaurants near me" might be nice in a car, but the market for replacing RTFM with voice-driven AI in the quiet environment of your desktop or laptop could be huge.
/End of rant
No. of Recommendations: 0
I continue to be amazed about how bad the current AI is, even Perplexity.ai
Appreciating the pace of change was the important takeaway for me. Our experiences today or yesterday won't be the experience a year from now. Even during one of the sessions, where I got access to the enterprise-grade AI that the business school pays for and has secured for its own use, I saw a difference in capability compared to what I typically access, which is the unpaid versions that are generally a generation behind.
No. of Recommendations: 0
AI is really good at doing classification over very large populations. For example, it's peculiarly good at identifying people through social media or facial recognition, and at things like pulling together knowledge. A big threat to Google, imo. Not much impact on BRK, except that maybe GEICO is being hurt because Progressive is able to identify better drivers.
The whole AI thing is resulting in huge capex spending at the big tech firms, which is actually not a good thing for them. In a few years, all that equipment will need to be replaced. Good for BRK, because the data centers use huge amounts of electricity.
No. of Recommendations: 4
I continue to be amazed about how bad the current AI is, even Perplexity.ai
Estimating the potential of AI in 2-5 years by asking it questions now is like estimating the competence of a board-certified anesthesiologist by getting her score on the board exams after her freshman year of college.
I'd kill for a voice interface (but have the response be in text as well as voice)!
My $20/month OpenAI Chat subscription has had that feature on my smartphone for ~ 1 year. It does NOT have that capability on a laptop. I expect it would have that capability on a smartphone even if you had the free version.
AI is currently improving FASTER than humans in college improve. Be careful not to get run over.
R:)
No. of Recommendations: 4
There has been a saying in radiology, a field that I am familiar with, that AI won't replace radiologists, but radiologists who know how to use AI will replace radiologists who don't.
I heard this almost verbatim from a successful radiologist - "Radiologists that use AI effectively will replace 10, or perhaps even 100, radiologists that don't".
No. of Recommendations: 3
My friend, an architect in his 70's, talks about when CAD software transformed architecture and those who adapted continued to do well. He feels the same about AI - not that it will eliminate the need for architects, but that architects who learn to wield AI will replace those who do not.
No. of Recommendations: 6
> "I heard this almost verbatim from a successful radiologist - "Radiologists that use AI effectively will replace 10, or perhaps even 100, radiologists that don't"."
I worked for a software medical device company that utilized NNs and other algorithms to extract features from physiological sensor data. I was in a technical role, but not directly in research. I wouldn't bet on a widespread replacement of physicians with AI models happening any time soon.
The capability exists today to send you an inexpensive wearable device capable of sending a clinical-quality ECG signal to the cloud in close to real time. I suspect anyone looking at the raw data in a time-by-voltage graph might be able to notice atrial fibrillation, or at least be aware something is wrong. They might not know what a P wave is, but if they compared it to a normal cardiac reading it wouldn't look the same, and if they saw a reference a-fib reading they might be able to identify it by sight without any knowledge of the underlying physiological cause. And computationally, for automated detection, you don't need AI for this (e.g., a discrete wavelet transform will do). There are certainly products that augment or replace this with ML models, which might be even more accurate, but it isn't required.
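As a concrete illustration of automated detection without AI: atrial fibrillation produces an "irregularly irregular" rhythm, so one classic rule-based screen simply flags high variability in the intervals between successive R peaks. A minimal sketch; the threshold and the sample beat timings are illustrative only, not clinically validated:

```python
# Rule-based a-fib screen from R-peak timestamps alone: flag recordings
# whose beat-to-beat (RR) intervals vary too much relative to their mean.
import statistics

def rr_intervals(r_peak_times):
    """Differences between successive R-peak timestamps, in seconds."""
    return [b - a for a, b in zip(r_peak_times, r_peak_times[1:])]

def afib_suspected(r_peak_times, cv_threshold=0.15):
    """Flag if the coefficient of variation of the RR intervals is high."""
    rr = rr_intervals(r_peak_times)
    cv = statistics.stdev(rr) / statistics.mean(rr)
    return cv > cv_threshold

regular = [0.0, 0.8, 1.6, 2.4, 3.2, 4.0]    # steady rhythm, ~75 bpm
irregular = [0.0, 0.6, 1.7, 2.1, 3.4, 3.9]  # erratic beat spacing
print(afib_suspected(regular), afib_suspected(irregular))  # False True
```

This is the kind of logic that fits comfortably in an embedded device, which is the post's point: no model training, no large compute, just signal statistics.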
Another more practical example might be automated external defibrillators. In order to not kill an otherwise healthy person who has fainted, AEDs do ventricular fibrillation (v-fib) detection, in the device, prior to administering the defibrillation shock. This isn't AI, either, it's closer to signal processing in embedded systems (to my indirect understanding).
The point is that - unlike x-rays or MRIs in radiology - tests for automated diagnosis of serious cardiac conditions can be done without AI, without excessive computational cost, without expensive and scarce equipment, with low patient risk, and without a trained technician. And this is *not being done*. So why not? And why would AI change this?
I don't believe AI will change this in the near future, for three reasons. First are problems around medical record access and portability, and resistance there is well deserved: given the chance, insurers have not earned much trust for secure or ethical use of data. Second is liability, because automated (mis)diagnosis is a fear to overcome and a challenge to insure. Third is what I'd call the "so what?" problem, and it applies equally to radiology and cardiology. Whether a doctor, a classical algorithm, or an AI accurately diagnosed you with a condition, somebody would still have to treat it and pay for the treatment, and the US has a problem delivering and allocating medical treatment. An AI diagnosing a metastatic tumor is not going to materialize the money for chemotherapy, or a nearby hospital where it can be administered.
On one hand, I'm still excited by the technical possibilities of AI in medicine. An assistant with a perfect memory of every test result in your charts, offering a quick nudge to your human physician about several sub-acute trends in your lab results, could catch problems earlier and with better outcomes. On the other hand, I'm worried that AI without legal safeguards will be used for little other than making a few rich people richer and more poor people poorer, faster.
At any rate, I'm sure BRK is already widely using AI at all levels, as others suggested. Their insurance and investing competition have been doing so for a long time.
> "But it was fascinating to see how AI allows really rapid, scaled, inexpensive AB testing to find out what works and doesn't on social media to rouse us a group and create an us vs. them narrative"
We've had such a long run of computer technology improving life (arguably) that it can feel unnatural to consider it as a net negative for humanity. It's worth considering how destabilizing AI and programmatic manipulation of social media has been in the past decade, and how much worse it can get. It might be better to consider the threat of disruption it poses to BRK.
No. of Recommendations: 3
<<"I heard this almost verbatim from a successful radiologist - "Radiologists that use AI effectively will replace 10, or perhaps even 100, radiologists that don't".">>
I worked for a software medical device company that utilized NNs and other algorithms to extract features from physiological sensor data. It was in a technical role but not directly in research. I wouldn't bet on a widespread replacement of physicians with AI models happening any time soon.
That was the exact discussion I had with that radiologist. He didn't say that AI will ever replace radiologists (and he thinks that will never happen), what he DID say is that a good radiologist USING AI AS A TOOL could replace 10 (or 100) radiologists. And that stands to reason. If the tool can quickly go through 10 (or 100) scans and overlay a mark where it sees something, then the radiologist can more quickly scan those determinations manually (let's say each suspicious shadow has a red circle overlaid on it, or similar) and submit their findings.
No. of Recommendations: 12
He didn't say that AI will ever replace radiologists (and he thinks that will never happen), what he DID say is that a good radiologist USING AI AS A TOOL could replace 10 (or 100) radiologists.
Well that does logically mean the AI WILL replace a whole bunch of radiologists, it just won't replace ALL radiologists. If one radiologist skilfully using AI can do the job of 10 radiologists, then you could say AI will not replace that first guy, but it will replace the others.
I actually think the impact will be more modest, for several reasons. One, it will take a lot of time, so a lot of it will happen by attrition, with fewer new radiologists recruited into the pool. Two, a lot of radiologists are doing 'interventional radiology': not just reading films, but doing things like inserting stents in bile ducts and stopping brain haemorrhages and biopsying breast masses. Some day, perhaps, AI will do some of that as well, but in the early stages it will be mostly reading and interpreting images.
But third, and most importantly, there will be plenty of work for the nine people no longer needed for reading films. For instance, double reading of films has been shown to vastly improve performance, but a lack of available resources has usually meant that radiologic images are read by a single, fallible person. Double reading of mammographies is standard in many countries, but not in America; there is nothing unique about mammography in benefiting from double reading, and it could become the norm of practice for many other routine diagnostic procedures. Screening for lung cancer is a huge new field where much-lower-risk people could be screened, if resources were available. Annual mammographies (the norm in most countries is every two years). Digital colonoscopies. Low-dose whole-body CT scans - probably useful, but where would you find the radiologists to read them?
More generally, productivity gains usually mean more output, not less workers, in any industry.
dtb
No. of Recommendations: 2
Appreciating the pace of change was the important takeaway for me. Our experiences today or yesterday won't be the experience a year from now. Even during one of the sessions where I got access to the enterprise grade AI that the business school pays for and has secured for its own usage, I saw a difference in capability to what I typically access which is the unpaid versions that are generally a generation behind.
As I understand it, a current obstacle is that the LLMs have either run out or are about to run out of quality data. They have scoured the Internet and there is not much left.
No. of Recommendations: 8
The point is that - unlike x-rays or MRIs in radiology - tests for automated diagnosis of serious cardiac conditions can be done without AI, without excessive computational cost, without expensive and scarce equipment, with low patient risk, and without a trained technician. And this is *not being done*. So why not? And why would AI change this?
As you correctly point out, many things can be done with regular programming OR AI. In some theoretical sense, anything that can be done by AI can be done by regular programming, except that regular programming might require VASTLY (read: exponentially) more programming resources than AI will. For all intents and purposes, AI automates an extremely large and complex part of the programs that deal with gigantically complex systems with many interacting moving parts. In effect, AI automates the production of programs, which is to say it automates the production of software.
So in some sense, the question is when would you use AI versus regular programming? I think the answer is, paradoxically, twofold:
1) You will use regular programming for incredibly simple stuff: a small number of interacting variables with reasonably well understood interactions.
2) You will ultimately use AI to write the regular programs when you need them!
In some sense, your question is similar to a question that might have been asked 100 years ago. Horses can bring the milkman's milk from house to house perfectly well with no new expensive motorized trucks. So why not? Why would we switch to trucks? And certainly for some while, horses did continue to be used for many hauling and transportation tasks, but ultimately motorized vehicles became the preferred technology for 99.99999% of transportation needs.
And it will be like that with AI. A detailed study could attempt to predict how long it will take AI to spread through the economy, taking over the jobs it will take over. But without doing that study, I feel pretty comfortable saying it will be years to decades in the US, and possibly 100 years for the planet as a whole. The extremely large computing centers needed to train AI will limit early deployment to only the most valuable tasks. I don't know what those will be, but it's safe to say "AI assistants" and "AI driving" will be early applications, given that both are already well along the path of development. General-purpose, probably humanoid, robots will likely come relatively early too, but we are probably at least a year away from any seriously commercial deployment. Then realize that literally billions of these robots would have to be built before they are used for the lowest-valued jobs done by humans. How many robot factories have to be built to accomplish that, and how long does that buildout take? For some idea of the answer: Tesla has been mass-producing cars since 2013 and has currently reached only 3 million cars on the road, after 11 years. Even at 25% growth per year, it would take another 25 years to get to 1 billion cars. So many billions of robots serving man could easily take 30 years or more.
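The fleet-growth arithmetic above can be checked in a couple of lines, using the post's own figures (3 million units, 25% annual growth, a 1 billion target):

```python
# How many years of 25% compound growth take a fleet from 3 million
# units to 1 billion? Solve 3e6 * 1.25**years = 1e9 for years.
import math

start, target, growth = 3e6, 1e9, 1.25
years = math.log(target / start) / math.log(growth)
print(round(years))  # about 26 years, matching the "another 25 years" estimate
```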
But the days of writing large, complex software without using AI to do most of the work are essentially already over. As soon as someone figures out improvements they want to make to, say, the processing of X-rays, this will NOT be done by improving the existing codebase by hand; rather, the existing codebase will help produce part of the training set for an AI implementation. And the AI implementation will handle corner cases better than the hand-programmed version.
R:)
No. of Recommendations: 1
I don't believe AI will change this in the near future for three reasons. First are problems around medical record access and portability, and resistance to that is well deserved. Given the chance, insurers have not earned a lot of trust for secure or ethical use of data. Second would be liability, because automated (mis)diagnosis is a fear to overcome and challenge to insure. Third is what I'd call the "so what?" problem and it applies equally to radiology as cardiology. Whether a doctor, a classical algorithm, or AI accurately diagnosed you with a condition, somebody would still have to treat it and pay the treatment, and the US has a problem delivering and allocating medical treatment. AI diagnosing a metastatic tumor is not going to materialize the money for chemotherapy or a nearby hospital where it can be administered.
I suppose it depends on what "the near future" means, exactly. I would predict many doctors will be using AI assistants to improve their diagnoses probably immediately. I would predict (just pulling it out of my butt) that there will be medical chat products available to doctors within 3 years.
Further, I would predict that there will be AI "physician's assistants" for use in the poor part of the third world within something like 3 years. And perhaps the way they will be used is that regular RNs will be able to handle with some confidence some significant fraction of medical situations which would have killed those poor people when they were unable to see a real doctor about those conditions.
I should probably be trying to develop a "training set" for AI that would allow an AI to do some portion of my job (currently as an RF engineer). I might be able to sell a few copies when I am able to couple that training set to GPT5 or whatever the next version is called.
R:
No. of Recommendations: 0
Well that does logically mean the AI WILL replace a whole bunch of radiologists, it just won't replace ALL radiologists. If one radiologist skilfully using AI can do the job of 10 radiologists, then you could say AI will not replace that first guy, but it will replace the others.
That is not the only way it could happen.
Instead, the cost of doing an x-ray could drop by a factor of 10-100, and we could find that many more are used in diagnosis of various things than was done when they still cost a fornicating fortune.
And the Drs don't even have to take a cut in pay!
R:
No. of Recommendations: 2
That is not the only way it could happen.
Instead, the cost of doing an x-ray could drop by a factor of 10-100, and we could find that many more are used in diagnosis of various things than was done when they still cost a fornicating fortune.
Yes, this is what I also imagined might happen, in the second half of my post. Cheaper = more people who can afford it, more use cases, including double reading, more screening (lung cancer for instance), more annual mammographies instead of every 2 years, more virtual colonoscopies, wider range of patients offered screening, etc.
dtb