SLO vs Performance Target
SLO is a contract; performance target is internal.
SLO
SLO and performance target sound similar enough that teams often conflate them. The conflation produces real problems: the team commits to a number that was supposed to be aspirational, or treats the contractual target as a stretch goal and routinely misses it. The fix is to keep them mentally and operationally separate.
What an SLO actually is:
- External or quasi-external commitment: An SLO is a number the team commits to, either to customers via a published SLA or to internal stakeholders via a reliability contract. The commitment carries weight beyond the team that owns it.
- Contract: The SLO is enforced by an error budget policy. When it is missed, there are consequences: deploy freezes, reliability sprints, customer credits, contract penalties. The team does not get to renegotiate the SLO mid-quarter; that is the point of treating it as a contract.
- Conservative by design: The SLO target is the number the team is confident they can hit, not the number they wish they could hit. Setting an SLO at the team's stretch-goal level means signing up to fail more often than not.
- Tied to customer experience: The SLO measures what the customer perceived: the request that succeeded, the response that arrived in time, the data that was fresh enough. The metric is from the customer's perspective, not the infrastructure's.
- Reviewed and renegotiated annually: The SLO can change, but only through deliberate review with stakeholders. Quietly tightening or loosening the published target between reviews undermines the contract. The number is committed for the period.
The SLO is the floor. Below it is breach. The team's job is to stay above it consistently, with budget remaining to absorb future shocks.
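The contract framing has simple arithmetic behind it: the SLO's complement is the error budget, the amount of failure the window can absorb before breach. A minimal sketch in Python (the helper names and the 10M-request window are illustrative, not from any particular tooling):

```python
# Sketch: turning a 99.9% availability SLO into an error budget.
# Numbers are illustrative; real SLO tooling measures over a rolling window.

def error_budget(slo: float, total_requests: int) -> int:
    """Failed requests the window can absorb before the SLO is breached."""
    return int(total_requests * (1 - slo))

def budget_remaining(slo: float, total_requests: int, failed: int) -> float:
    """Fraction of the error budget still unspent (negative means breach)."""
    budget = total_requests * (1 - slo)
    return (budget - failed) / budget

# 10M requests at a 99.9% SLO leave room for 10,000 failures.
print(error_budget(0.999, 10_000_000))                        # 10000
print(round(budget_remaining(0.999, 10_000_000, 2_500), 4))   # 0.75
```

Staying "above the floor with budget remaining" means keeping that remaining fraction comfortably positive, so a future shock does not spend the window into breach.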
Performance target
The performance target is what the team aspires to. It is the number the team thinks they can hit on a good month, the goal they set internally to push themselves toward improvement. It is not a contract.
- Internal stretch goal: The performance target is set by engineering, for engineering. It might be 99.95% when the SLO is 99.9%. It might be p99 latency under 200 ms when the SLO commits to under 500 ms. The point is to aim higher than the floor without committing to it.
- Not committed: Missing the performance target is information, not breach. The team uses misses to decide where to invest in the next reliability sprint. Customers do not get credits when a performance target is missed.
- Aspirational, not enforced: The performance target drives engineering behavior (don't deploy on Friday, invest in caching, refactor that hot path) without driving customer-facing consequences. It is the lever that pushes the team to keep improving.
- Reviewed quarterly, adjusted freely: When the performance target is consistently met, tighten it. When it is consistently missed, examine why and either invest more or set a more realistic target. The performance target is dynamic; the SLO is not.
- Visible to engineering, not customers: The performance target lives on internal dashboards. Customers see the SLO. Publishing both would confuse the contract; the performance target is engineering's working number.
The performance target is the engine. The SLO is the floor. Engineering aims for the engine; the SLO catches the team if they fall.
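The gap between the two numbers is easiest to see in downtime terms. A sketch, assuming the 99.9% SLO and 99.95% performance target used as examples above, measured over a 30-day window (the window length is an assumption for illustration):

```python
# Sketch: the same availability axis expressed as allowed downtime
# per 30-day window. Thresholds match the illustrative numbers above.

MINUTES_PER_30_DAYS = 30 * 24 * 60  # 43,200 minutes

def allowed_downtime_minutes(threshold: float) -> float:
    """Minutes of full downtime a threshold tolerates per 30-day window."""
    return MINUTES_PER_30_DAYS * (1 - threshold)

slo_minutes = allowed_downtime_minutes(0.999)      # ~43.2 min/month (floor)
target_minutes = allowed_downtime_minutes(0.9995)  # ~21.6 min/month (aim)
buffer_minutes = slo_minutes - target_minutes      # ~21.6 min of cushion
```

A team operating at the performance target spends only half the downtime the SLO permits; the other half is the cushion the next section calls the buffer.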
Relationship
Both numbers exist on the same axis. The performance target is tighter, the SLO is looser, and the gap between them is the buffer. The relationship is what makes the SLO defensible without making the practice rigid.
- Performance target tighter than SLO: If the SLO is 99.9% availability, the performance target might be 99.95% or 99.99%. The team aims for the tighter number, hits it most of the time, and gives the SLO room to be defended consistently.
- The buffer absorbs reality: Real systems hit maintenance windows, dependency outages, regional events, and traffic spikes. These events can push the team below the performance target; the buffer is what lets them recover without breaching the customer commitment.
- Sized to the system's volatility: A service with stable, predictable load needs a smaller buffer (performance target close to SLO). A service with bursty traffic and many dependencies needs a wider one. The right gap is calibrated to the system's real volatility, not to convention.
- Track both, react differently: When the performance target misses, engineering investigates and adjusts. When the SLO is at risk, engineering escalates and freezes deploys. The two thresholds drive two different responses, and the runbook documents both.
- Don't conflate them: The most common mistake is using one number for both purposes. Either the SLO becomes too aggressive (the team misses it regularly) or the performance target becomes too loose (the team stops improving). Keep them separate, name them differently, treat them differently.
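The two-thresholds, two-responses rule can be sketched as one classification step. The threshold values and status wording here are illustrative assumptions, not Nova AI Ops behavior:

```python
# Sketch: one measured availability number, two thresholds, two responses.
# Values and status strings are illustrative, not a real runbook.

def reliability_status(measured: float, slo: float, target: float) -> str:
    """Classify where a service sits relative to its two thresholds."""
    if measured < slo:
        return "breach: escalate, freeze deploys, apply error budget policy"
    if measured < target:
        return "below target: investigate, pick next reliability investment"
    return "healthy: margin above both numbers"

# Above the SLO floor but below the internal target: investigate, don't escalate.
print(reliability_status(0.9993, slo=0.999, target=0.9995))
```

The point of the sketch is the ordering: the SLO check runs first because breach preempts everything else, while a target miss alone never triggers the contractual response.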
The SLO is the contract; the performance target is the practice. Nova AI Ops tracks both numbers per service, distinguishes them on dashboards, and surfaces the buffer between them so the team can see at a glance whether the practice is producing comfortable margin or running close to the edge.