No. of Recommendations: 15
I will interject myself into this conversation with an out-loud ramble in response to this out-loud ramble.
In the field, there's a lot of concern about development velocity, moats, and what constitutes a durable competitive advantage for a software company in a world of AI-enhanced products, AI-enhanced development, and independent agentic processes.
If you have a legacy software vendor with a public service interface, it is easier than it has ever been for a competitor to functionally reproduce that interface.
If you have an existing product, you have to evaluate whether you still have product-market fit in a competitive landscape with other tools that integrate agentic flows. So I don't sound like I'm just spouting jargon, I'm talking about something like Microsoft Word: whether it's valid to assume that all customers want to use Word to author .doc files, or whether they want it to accept prompts and co-author a .doc file with Copilot.
If you run a software development organization, you are wondering how much your teams are leveraging model-based code assistants (Cursor + chat) or agentic development tools (Claude Code). You are wondering what a target for a product developer looks like now, and whether your competitors are leveraging these tools better than your organization.
If you are a software architect, you are wondering if you need to develop proprietary models and have previously deterministic components of your platform replaced with agentic processes that interact with other components via things like MCP (Model Context Protocol: think having code you write interact with code someone else wrote, but instead of via remote procedure calls, doing a glorified Google search). These agentic processes may replace humans in the operations of your platform or add new capabilities.
Having said all that....
Mungofitch wrote something about missing the old crappy internet that I almost replied to. I have joked about an "old web" startup that is human-only. Ironically, depending on how you look at it, it would be a lot like AOL (a private hyperlinked document graph) and Google (PageRank, but with a penalty on detected or reported generative content). I do not plan on quitting my day job.
But everyone knows basic LLM-generated content when they see it. There is an uncanny valley quality to it that I personally find repulsive.
There's a joke domain (lmgtfy) for "Let me google that for you," and when I get an LLM-generated piece of content, my mental reaction is something like, "Well, if I wanted ChatGPT's opinion, I would have just asked ChatGPT myself."
I find the impact of AI, in both transformer models and other frontier models in active research, to be disruptive of civilization as we've known it, grossly overestimated, and mostly misunderstood; all at once.
So back to Constellation.
I think that the small operators they roll up understand niches that are consumed by humans with unique and specialized needs, and the good ones probably required a great deal of time to develop the requisite understanding of the history, the regulatory environment, the place within the broader economy, and many other factors.
A little aside about AI, and in particular transformer models (the T in ChatGPT): the context window is limiting. What I just said has a specific technical meaning in the model's linear algebra, but you can think of it as "the irreducible complexity in a thought that a model can work with". Consider a long, formal conversational greeting versus "hey". The former may be dozens or hundreds of words, but could be summarized as "hey, spoken in formal English". This gets into information theory and Claude Shannon's work a bit, but if you think of the context window as applying to the incompressible/irreducible complexity of a thought expressed in words, it is close enough. To my knowledge - which is not at the frontier research level, to be clear - the computational complexity associated with context window size is O(n^2) in the context length, so it gets increasingly expensive to have models interact with increasingly complex thoughts. Again, imprecise in a technical sense, but to get the point across: if it takes 100 computers to reason with thoughts expressible in 100 words, it takes roughly 400 computers to reason with thoughts expressible in 200 words - doubling the length quadruples the cost.
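The quadratic scaling above can be sketched in a few lines. This is only a toy illustration of the O(n^2) growth rate, not a real cost model; the function name and the 100-word baseline are my own choices for the example.

```python
# Toy illustration: self-attention compares every token with every other
# token, so compute grows with the square of the input length.
def relative_attention_cost(n_words: int, baseline_words: int = 100) -> float:
    """Cost of an n_words context relative to a baseline-sized context."""
    return (n_words / baseline_words) ** 2

# Doubling the context quadruples the cost; 10x the context is 100x the cost.
print(relative_attention_cost(200))   # 4.0
print(relative_attention_cost(1000))  # 100.0
```

The point for Constellation's niches is the last line: a thought ten times as complex is on the order of a hundred times as expensive for the model to hold at once.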
Back to Constellation again...
I bought a share. I hope these little niches are complex enough that current and near-term models will not be able to disrupt them economically. These tools may even deepen the moat of specialized operators that are capable of taking advantage of them.
Thank you for the suggestion, homosapien, and follow up posts, Manlobbi and EVBigMacMeal.