No. of Recommendations: 11
There are many overviews of AI's potential impact on business; these two aren't bad:
https://www.investing.com/analysis/ai-investment-l...
https://www.mckinsey.com/capabilities/quantumblack...
The following comments, extracted from the above, seem about right to me:
Trends in the AI investment landscape for 2025 include (beyond the usual suspects of Google, Microsoft, and Meta):
1. Enterprise software companies: Firms like Datadog, MongoDB, and Snowflake seek to capitalize on AI integration into existing product lines
2. Infrastructure and hardware: Companies supplying the necessary infrastructure for AI applications, such as data centers, present investment opportunities
3. AI-generated content: This sector may surge, with video the area seeing the newest developments in AI-generated media
Personally, I'd add biotech/medicine (see below).
The 2024 Nobel Prize in Chemistry was awarded for (1) largely solving the protein folding problem and (2) designing totally novel proteins that fold into specific desired shapes. Will it pay off? Beats me, but if it does it'd be huge.
Nvidia is doing interesting stuff apart from chip building; they're getting into robotics:
https://blogs.nvidia.com/blog/three-computers-robo...
Their approach is to simulate the physics of the setting you'd like a robot to operate in, then train the robot in that very accurate simulation of the physical world, so you don't have a robot wandering about the real world learning by making mistakes. Will it pay off? Beats me, but if it does it'd be huge. Is Nvidia hugely over-valued? Probably, but I bought at 17% off in the recent pullback. Even if DeepSeek really did fully train using less powerful chips, perhaps temporarily reducing demand in Nvidia's market, my guess fwiw is that chip demand will continue to grow exponentially.
A concern I have:
I've used perplexity.ai, deepseek.com and chatGPT.com *a lot*. My experience has been that if you scratch beneath the surface, i.e. stress them on topics where you're an expert, the hallucination/confabulation problem is huge. This concerns me for two reasons: (1) people, or businesses, will rely on an AI answer or AI-generated result without checking it further. As buggy AI-based software gets embedded enterprise-wide, the potential for problems is huge. (2) A high volume of AI-generated content is getting spewed back onto the internet, polluting a major data source on which LLMs train.
On the other hand, these tools undeniably can increase productivity; e.g., I regularly use AI to produce code. Obviously, AI code generation will have a huge impact on software companies and their personnel.