The 2027 Outlook
Predictions are easy. Confident predictions about AI in 2027 are harder. Here is what feels load-bearing versus what feels like extrapolation.
Five near-certain things
- Inference cost continues to fall ~3x/year for the same quality.
- Open-weight models stay within six months of the closed frontier.
- Reasoning models become standard for hard tasks.
- Long context windows reach 10M tokens; effective recall lags behind.
- Multimodal becomes the default; text-only is a special case.
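The cost claim above compounds quickly. A minimal sketch of the arithmetic, with an illustrative starting price that is not from this post:

```python
# Sketch: projecting inference cost under an assumed ~3x/year decline
# at constant quality. The starting price is illustrative only.
def projected_cost(cost_now: float, years: float, annual_factor: float = 3.0) -> float:
    """Cost after `years` if price falls by `annual_factor` each year."""
    return cost_now / (annual_factor ** years)

# e.g. $10 per million tokens today, two years out at 3x/year:
print(round(projected_cost(10.0, 2), 2))  # -> 1.11
```

At that rate, a workload that is marginal on cost today is roughly an order of magnitude cheaper within two years, which is why "wait and re-evaluate" is often a viable product strategy.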
Three likely things
- Generalist computer-use agents become reliable enough for real work.
- On-device 7-13B models match 2024-frontier API quality on most tasks.
- Specialised AI hardware (Cerebras, Groq, Tenstorrent) carves out 10-20% of the inference market.
Three wildcards
- A new architecture (state-space, retentive, neuro-symbolic) replaces transformers as the dominant choice.
- Capability hits a wall: scaling stops paying off and progress depends on algorithmic breakthroughs.
- Major regulatory action meaningfully constrains frontier development in the West.
Strategy
Build with portability assumptions. Don't lock into a single provider; routing/gateway tooling is essential infrastructure. Skills that compound: evaluation rigour, prompt engineering as a practice, ML operations as a workstream. Skills that depreciate: deep knowledge of any single API.
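The portability point can be made concrete. A minimal sketch of a provider-agnostic gateway: the adapter registered here is a stand-in lambda, not a real SDK call, and the interface is an assumption rather than any particular library's API.

```python
# Sketch: a tiny routing gateway that dispatches by model-name prefix,
# so swapping providers is a one-line registration change.
# The registered handler below is a mock, not a real provider client.
from typing import Callable, Dict


class Gateway:
    """Route completion calls to whichever provider owns the model prefix."""

    def __init__(self) -> None:
        self._routes: Dict[str, Callable[[str], str]] = {}

    def register(self, prefix: str, handler: Callable[[str], str]) -> None:
        """Map a model-name prefix (e.g. 'gpt-', 'claude-') to a handler."""
        self._routes[prefix] = handler

    def complete(self, model: str, prompt: str) -> str:
        for prefix, handler in self._routes.items():
            if model.startswith(prefix):
                return handler(prompt)
        raise KeyError(f"no route for model {model!r}")


gw = Gateway()
gw.register("mock-", lambda p: p.upper())  # stand-in for a provider adapter
print(gw.complete("mock-small", "hello"))  # -> HELLO
```

Application code calls `gw.complete(...)` and never imports a provider SDK directly, which is the property that makes routing/gateway tooling load-bearing: evaluation harnesses and fallback logic attach at this seam instead of being rewritten per provider.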