When infrastructure gets commoditized, data becomes the moat
Anthropic just made agent infrastructure a commodity. That doesn't threaten companies whose moat is data. It threatens companies whose moat was plumbing.
Long-form thinking on building AI systems, the shifting landscape of enterprise AI, and opinions on what the latest news actually means for builders.
Anthropic accidentally open-sourced 512,000 lines of Claude Code internals. Forget the drama. The code is a masterclass in agentic architecture, and every company building AI agents should be studying it.
The same problem that makes most AI systems useless for serious work is what makes revenue systems so brittle. The fix is the same too.
Everyone wants to ship AI. Almost nobody wants to build the evaluation framework that tells you whether it's working.
Most teams treat AI guardrails as a safety feature bolted on at the end. In production, the guardrails are what actually make the system usable.
The biggest barrier to enterprise AI isn't model quality or regulation or talent. It's that most AI systems can't learn. And almost nobody is building to fix this.
The traditional software demo is becoming counterproductive for AI products. When you're selling intelligence, not features, the whole buyer engagement model needs to change.
Deal context lives in a dozen different systems. What would happen if something actually stitched it all together?
Revenue forecasting is narrative-based, not evidence-based. Sellers declare stages, managers interpret stories, leadership aggregates opinions. AI can fix this, but most organizations aren't ready for that level of transparency.
CRM systems were designed for linear SaaS buying. AI deals are non-linear, multi-threaded, and governed by organizational readiness. Your pipeline view can't represent any of it.
Everyone asks whether to build AI internally or buy from vendors. It's the wrong question. What actually determines success is how fast your organization can learn.
AI isn't a harder version of SaaS to sell. It's a different commercial problem entirely. The entire revenue stack needs to be rebuilt.
95% of enterprise AI tools never make it to production. The MIT study quantified what builders already knew, but the reasons aren't what most people think.
Everyone is citing this study as proof that AI fails. They're reading it wrong. The study shows why organizations fail at building systems, period.