Alerts Practical · By Samson Tanimawo, PhD · Published Jan 31, 2026 · 4 min read

Alert Dependency Graph

Alerts depend on metrics, services, and integrations. Map the graph.

Why map dependencies

Alerts depend on metrics, metrics depend on exporters, exporters depend on services, services depend on integrations. Break any link and the alert silently stops.

Without a graph, the first time you discover that the node-exporter pod has been gone for two months is during an outage when the host alert never fires.

Treat the alert catalog as a dependency graph, not a flat list.

What's in the graph

Nodes: alerts, recording rules, metric names, exporters, services, datasources.

Edges: alert depends_on rule, rule depends_on metric, metric exposed_by exporter, exporter runs_on service, service ingested_by datasource.

Store it in Neo4j or in a flat YAML file committed to the repo. Either works; the discipline of writing it down is what matters.
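A minimal sketch of the flat-file shape, shown here as Python data rather than YAML so the validation pass is runnable. Every node and edge name is hypothetical; the point is that the graph is just two lists plus a check that edges resolve.

```python
# Sketch of a flat alert-dependency graph kept as plain data in the repo.
# All node and edge names are illustrative, not from any real catalog.
GRAPH = {
    "nodes": [
        {"id": "HostDown",          "kind": "alert"},
        {"id": "node_memory_bytes", "kind": "metric"},
        {"id": "node-exporter",     "kind": "exporter"},
    ],
    "edges": [
        {"from": "HostDown",          "rel": "depends_on", "to": "node_memory_bytes"},
        {"from": "node_memory_bytes", "rel": "exposed_by", "to": "node-exporter"},
    ],
}

def validate(graph):
    """Return edges that reference nodes missing from the node list."""
    ids = {n["id"] for n in graph["nodes"]}
    return [e for e in graph["edges"]
            if e["from"] not in ids or e["to"] not in ids]

print(validate(GRAPH))  # → [] (every edge resolves)
```

Run the validator in CI so a typo'd metric name fails the build instead of silently orphaning an edge.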

What the graph unlocks

Blast radius for a metric rename. Renaming http_requests_total breaks 14 alerts; the graph lists them.
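Blast radius is a reverse-reachability query: start at the metric and walk depends_on edges backwards. A sketch, assuming a flat list of (dependent, dependency) pairs; all alert and rule names are hypothetical.

```python
from collections import defaultdict

# Hypothetical depends_on edges: (dependent, dependency).
EDGES = [
    ("AlertHighErrorRate", "rule:error_rate"),
    ("AlertSLOBurn",       "rule:error_rate"),
    ("rule:error_rate",    "http_requests_total"),
    ("AlertTrafficDrop",   "http_requests_total"),
]

def blast_radius(edges, target):
    """Everything that transitively depends on `target`."""
    dependents = defaultdict(set)
    for src, dst in edges:
        dependents[dst].add(src)
    hit, stack = set(), [target]
    while stack:
        for d in dependents[stack.pop()]:
            if d not in hit:
                hit.add(d)
                stack.append(d)
    return hit

print(sorted(blast_radius(EDGES, "http_requests_total")))
# → ['AlertHighErrorRate', 'AlertSLOBurn', 'AlertTrafficDrop', 'rule:error_rate']
```

Run this before merging a metric rename and you get the affected alerts as a review checklist, not a postmortem.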

Health checking the dependency chain. A monitoring synthetic that walks the graph catches broken exporters before they break alerts.
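One way to sketch that synthetic: Prometheus's built-in `up` series is 1 when the last scrape of a target succeeded, so the check is a diff between the exporters the graph expects and the jobs scraping cleanly. The fetch is stubbed out below; only the comparison logic is shown, and all job names are hypothetical.

```python
# Exporters the dependency graph says alerts rely on (hypothetical names).
EXPECTED_EXPORTERS = {"node-exporter", "blackbox-exporter", "mysqld-exporter"}

def broken_exporters(expected, up_by_job):
    """Exporters the graph expects that are not scraping cleanly.

    `up_by_job` maps a scrape job name to its latest `up` value
    (1.0 = last scrape succeeded); in practice it would come from
    querying `up` against the Prometheus HTTP API.
    """
    return {job for job in expected if up_by_job.get(job, 0.0) != 1.0}

# Stubbed scrape snapshot: mysqld-exporter is down, blackbox is missing.
snapshot = {"node-exporter": 1.0, "mysqld-exporter": 0.0}
print(sorted(broken_exporters(EXPECTED_EXPORTERS, snapshot)))
# → ['blackbox-exporter', 'mysqld-exporter']
```

Note the `.get(job, 0.0)`: an exporter that vanished entirely (no `up` series at all) must count as broken, which is exactly the two-months-gone node-exporter case.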

Onboarding. New engineers see the relationships, not just isolated rules.

Automating the graph

Parse the Prometheus rule files to extract metric references (promtool validates the files; pulling metric names out of the expressions takes a PromQL parser or a rough regex). Build the metric -> rule edges automatically.

Pull exporter health from /metrics scrape success. Build the metric -> exporter edges from scrape labels.

Pull service ownership from a CMDB or Backstage. Don't reinvent service catalog data.

Worth it above 100 rules

Below 100 rules, the catalog is small enough to keep in your head. A dependency graph is overkill.

Above 100, the graph pays for itself the first time a renamed metric breaks pages silently.

Skip the visual UI tooling. A queryable JSON file is enough.