For several years, the operating question in applied AI has been cast as a tooling question. Which platform. Which model. Which framework. Which vendor. The underlying assumption is that the performance gap between firms is primarily a gap in the software they have access to.
Our experience, across the last hundred-plus engagements, does not support this. The firms and teams getting the best results from AI are rarely using anything unusual. Often they are using the same commodity tools everyone else has access to. What separates them is not what they hold. It is how they think with what they have.
The gap that matters is in the quality of the questions being asked. How the problem is framed. How the desired output is structured. How patiently bad first results are iterated into good ones. How specifically the team can articulate what they are actually trying to get done. These are habits of thought, not features of a product.
Framed differently: the hard part of using AI well is not picking the right model. The hard part is knowing what you want, precisely enough that you can tell whether a model has given it to you. Teams that have done that work can extract remarkable value from mediocre tools. Teams that have not will extract little from even the most expensive tooling on the market.
This is an uncomfortable diagnosis, because it redirects attention away from the purchase order and toward the culture of the team. It is also the diagnosis we keep arriving at. Tools are commoditizing faster than the habits of thought required to use them well are spreading.
When a client asks us what platform to buy, we almost always ask a question of our own first. What are you actually trying to get done? In the time it takes to answer that question carefully, most of the apparent tool debate quietly dissolves.
— Pactag Technologies