By Samson Tanimawo, PhD · Published Dec 10, 2026

High-Cardinality Metrics: When to Use Them, When to Convert to Logs

High-cardinality metrics are seductive. Sometimes they are the right answer; usually they are not. Here is the rule for deciding which is which.

What ‘high-cardinality’ means in 2026

Modern observability tools (Honeycomb, ClickHouse-based platforms) handle hundreds of thousands of unique values per dimension at query time. The old ‘metrics are low-cardinality’ rule is technically obsolete, but the storage cost gradient still exists.

The honest tradeoff in 2026: you can store high-cardinality metrics; the question is whether you should.
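To make the cost gradient concrete, here is a minimal sketch using the Python prometheus_client library. The metric and label names are hypothetical; the point is that the series count a backend must index is the product of each label's cardinality.

```python
from prometheus_client import Counter

# Bounded labels: series count = |endpoints| x |status classes|.
# ~50 endpoints x 5 status classes = ~250 series. Cheap forever.
REQUESTS = Counter(
    "http_requests_total",
    "HTTP requests served",
    ["endpoint", "status"],
)

# Unbounded label: every new user mints a new time series.
# 1M users x 50 endpoints = 50M series. This is the cost gradient.
REQUESTS_BY_USER = Counter(
    "http_requests_by_user_total",
    "HTTP requests per user (high-cardinality)",
    ["endpoint", "user_id"],
)

REQUESTS.labels(endpoint="/checkout", status="200").inc()
REQUESTS_BY_USER.labels(endpoint="/checkout", user_id="abc-123").inc()
```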

When metrics win

The answer is an aggregation, not a record. ‘What is the error rate per endpoint?’ is a metric question.

You alert on it. Alerting engines evaluate time series continuously and cheaply.

The key set is bounded and known. Endpoints, regions, status classes: label sets you can enumerate.

When logs win

The answer is a specific record, not a count. ‘Show me request abc-123’ is a log question.

You need the raw context. Headers, body excerpts, full URL paths. Log them; the sketch after this list shows what such a record looks like.

The cardinality is unbounded and you do not know it yet. Logs scale better than time-series for unbounded keys.

Forensics dominate. Logs are designed for ‘what happened to this one thing.’
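Here is what a record-oriented log line looks like in practice: a minimal sketch using Python's stdlib logging with a JSON payload. Every field name (request_id, body_excerpt, and so on) is illustrative, not a schema.

```python
import json
import logging

logger = logging.getLogger("checkout")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_request(request_id, method, path, status, headers, body_excerpt):
    # One structured record per request: the 'show me request abc-123'
    # question becomes a key lookup, not a metric query.
    logger.info(json.dumps({
        "request_id": request_id,
        "method": method,
        "path": path,             # full URL path, no truncation
        "status": status,
        "headers": headers,       # raw context a metric label can never hold
        "body_excerpt": body_excerpt,
    }))

log_request("abc-123", "POST", "/checkout/v2/submit", 502,
            {"x-client-version": "7.4.1"}, '{"cart_id": "c-991"}')
```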

The conversion pattern

When you find a metric you wish were a log, the conversion is one-way but modest: stop emitting the metric, emit a structured log line with the same fields, and build a query that aggregates the log to recover the metric values for dashboards. It is one-way because a log can always be aggregated back into a metric, while a metric can never be exploded back into individual records.
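A minimal sketch of the cutover, reusing the hypothetical checkout example from above; the field names are illustrative.

```python
import json
import logging

logger = logging.getLogger("checkout")

def handle_request(endpoint: str, user_id: str, status: int) -> None:
    # Before: the high-cardinality metric (now deleted).
    # REQUESTS_BY_USER.labels(endpoint=endpoint, user_id=user_id).inc()

    # After: one structured log line carrying the same fields. Dashboards
    # recover the metric by aggregating over these records (shown below).
    logger.info(json.dumps({
        "event": "http_request",
        "endpoint": endpoint,
        "user_id": user_id,
        "status": status,
    }))
```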

Modern log backends (Loki, Elastic) compute these aggregations efficiently. The dashboard query gets rewritten; the data keeps flowing.
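If the logs land in Loki, the dashboard panel can be rewritten as a LogQL aggregation over the log stream. A sketch of such a query, kept as a Python string to match the examples above; the {app="checkout"} selector, the event field, and the 5m window are assumptions carried over from the hypothetical code.

```python
# Hypothetical LogQL (Loki) panel query: recover requests-per-endpoint
# by counting structured log lines instead of scraping a metric.
# Grouping by "endpoint" works because "| json" promotes the JSON
# fields of each line to queryable labels.
RECOVERED_REQUESTS_PER_ENDPOINT = """
sum by (endpoint) (
  count_over_time({app="checkout"} | json | event="http_request" [5m])
)
"""
```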

Antipatterns

Unbounded IDs as metric labels. A user ID or request ID in a label set mints a new time series per value; this is the classic cardinality explosion, and it is how teams discover the cost gradient the hard way.

Dual emission without an end date. Running the metric and the equivalent log side by side doubles cost; pick a cutover date and delete the metric once the log-backed dashboard matches.

What to do this week

Three moves:

(1) Audit your highest-cardinality metric and ask: is this an aggregation or a record-lookup? If it is a record-lookup, plan the conversion.

(2) Add structured-log fields covering that use case.

(3) Rewrite the dashboard query against logs and verify the numbers match; a verification sketch follows.
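For the verification in step (3), a rough sketch assuming Prometheus and Loki instant-query HTTP APIs; the hostnames, the queries, and the 2% tolerance are all assumptions to adapt, reusing the hypothetical names from earlier.

```python
import requests

PROM = "http://prometheus:9090/api/v1/query"
LOKI = "http://loki:3100/loki/api/v1/query"

def total(resp):
    # Both APIs return {"data": {"result": [{"value": [ts, "v"]}, ...]}}
    # for instant vector queries; sum the series values.
    return sum(float(r["value"][1]) for r in resp.json()["data"]["result"])

metric_total = total(requests.get(PROM, params={
    "query": "sum(rate(http_requests_by_user_total[5m]))"}))
log_total = total(requests.get(LOKI, params={
    "query": 'sum(rate({app="checkout"} | json | event="http_request" [5m]))'}))

# The two numbers should agree within sampling noise before you delete
# the metric; 2% is an arbitrary tolerance.
assert abs(metric_total - log_total) <= 0.02 * max(metric_total, 1e-9)
```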