The 100-Post Capstone: What I’ve Learned
Writing 100 posts on AI/ML over two years taught me what holds and what bends. The five lessons that survived contact with reality.
1. Scale wins, eventually
The Bitter Lesson keeps holding. Hand-crafted approaches lose to simple methods that scale. Every novel technique I’ve seen succeed has either ridden the scaling wave or made the wave more efficient.
2. Evals matter more than models
Teams that win at AI engineering have great eval sets. The models they use are roughly the same as everyone else’s. The discipline of measuring what works and what doesn’t is the moat.
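To make that concrete, here is a minimal sketch of what that discipline can look like: a versioned set of cases plus a pass-rate gate that runs on every model or prompt change. The case contents, threshold, and names are illustrative placeholders, not any specific team’s setup.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # did the output clear the bar?

# The eval set is the asset: it grows with every incident and bug report.
CASES = [
    EvalCase("Summarise: the cat sat on the mat.",
             lambda out: "cat" in out.lower()),
    EvalCase("Return valid JSON with a 'status' field.",
             lambda out: '"status"' in out),
]

def run_evals(model: Callable[[str], str], threshold: float = 0.9) -> bool:
    """Run every case against the model and gate on the pass rate."""
    passed = sum(case.check(model(case.prompt)) for case in CASES)
    rate = passed / len(CASES)
    print(f"pass rate: {rate:.0%} ({passed}/{len(CASES)})")
    return rate >= threshold

# Wire run_evals into CI so a model or prompt change can't ship on vibes.
```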
3. Cost engineering compounds
Caching, routing, batching, prompt compression. None is dramatic alone; together they cut costs by 70-90%. The teams that take this seriously have margins that grow with usage. The teams that don’t take it seriously fund their growth with VC dollars and eventually run out.
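A rough sketch of why the savings compound: each technique removes a slice of whatever cost the previous one left behind, so the effects are multiplicative rather than additive. The individual percentages below are hypothetical placeholders, not measured numbers from the series.

```python
# Hypothetical per-technique savings; real numbers depend on the workload.
savings = {
    "response caching":    0.40,  # 40% of calls answered from cache
    "model routing":       0.30,  # cheap model handles the easy requests
    "batching":            0.15,  # amortise overhead on bulk jobs
    "prompt compression":  0.20,  # fewer input tokens per call
}

remaining = 1.0
for technique, cut in savings.items():
    remaining *= 1 - cut  # each cut applies to what the last one left behind
    print(f"after {technique:<20} cost is {remaining:5.0%} of baseline")

print(f"combined reduction: {1 - remaining:.0%}")  # ~71% with these numbers
```

With these made-up numbers the total lands around 71% off baseline; more aggressive caching or routing pushes it toward the top of that 70-90% range.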
4. Portable wins
Don’t lock yourself into one provider, one model, one stack. Build behind a gateway. Treat models as commoditised infrastructure. The team that can swap providers in a sprint has structural cost leverage the locked-in team doesn’t.
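A minimal sketch of what “build behind a gateway” can mean in code, assuming a simple `complete(prompt)` interface. The provider classes here are stubs for illustration, not any vendor’s real SDK.

```python
from typing import Protocol

class ChatProvider(Protocol):
    """Anything that can turn a prompt into a completion."""
    def complete(self, prompt: str, max_tokens: int = 256) -> str: ...

class OpenAIProvider:
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        # Real code would call the vendor SDK here; stubbed for this sketch.
        return f"[openai stub] {prompt[:max_tokens]}"

class AnthropicProvider:
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        return f"[anthropic stub] {prompt[:max_tokens]}"

class Gateway:
    """Application code imports this, never a vendor SDK."""
    def __init__(self, providers: dict[str, ChatProvider], default: str):
        self.providers = providers
        self.default = default

    def complete(self, prompt: str, provider: str | None = None, **kwargs) -> str:
        return self.providers[provider or self.default].complete(prompt, **kwargs)

gateway = Gateway(
    {"openai": OpenAIProvider(), "anthropic": AnthropicProvider()},
    default="anthropic",
)
print(gateway.complete("Hello"))                     # uses the default
print(gateway.complete("Hello", provider="openai"))  # swap is a config change
```

Because the rest of the codebase only ever sees the gateway, changing providers is a config edit rather than a migration.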
5. The boring work pays
Data quality. Eval discipline. Operational guardrails. None is glamorous. All compound. The teams that ship reliable AI products are the ones who do the boring work. The teams chasing novel techniques without the foundation tend to ship demos.
One hundred posts in, none of these is unique to me. They’re what every practitioner who’s shipped AI in production figures out, often after a few painful incidents. If this series saved you a few of those incidents, it did its job.