Product Updates · By Samson Tanimawo, PhD · Published Aug 16, 2026

Multi-Channel Alert Delivery

Fan out alerts across Slack, X, and email, with per-channel rate limits so a single noisy service doesn't take a whole on-call rotation hostage at 2am.

Why fan-out

Different teams want different channels. The platform team lives in Slack. Customer-facing comms go out on X. Compliance and execs read email digests. The old single-channel model forced a choice, and the choice was always wrong for somebody. So we rebuilt delivery as a fan-out engine where every alert can go to multiple destinations independently, each with its own formatting and its own rate limit.
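To make that concrete, here's a rough sketch of what a fan-out definition could look like. The key names (destinations, rate_limit, and so on) are illustrative, not the shipped schema; the real config lives under your tenant's notification settings, covered below.

```yaml
# Hypothetical fan-out sketch; every key name here is illustrative, not the real schema.
alert: payment-latency-p99
destinations:
  - channel: slack
    target: "#platform-alerts"
    format: interactive                  # threads and buttons
    rate_limit: { max: 60, window: 5m }
  - channel: email
    target: oncall-primary@example.com
    format: text+html
    rate_limit: { max: 20, window: 1h }
  - channel: x
    enabled: false                       # engineering alerts stay off X
```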

The other reason for fan-out is graceful degradation. Slack has outages. Email gets greylisted. X rate-limits. If your only alerting path goes through a single provider that's having a bad day, your incident response is having a bad day too. Fan-out gives you a second path for free.

The channels

Slack is the primary integration. Webhook or app, both work. The app version unlocks threaded responses, button actions to acknowledge or escalate, and a slash command for triggering a runbook from within the channel. The webhook version is simpler if you can't get app approval through your security team.
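For illustration, a Slack channel definition might look something like the sketch below; the field names are ours for the example, not the shipped schema.

```yaml
# Hypothetical Slack channel config; field names are illustrative.
channels:
  slack:
    mode: app                      # or "webhook" for the simpler path
    target: "#platform-alerts"
    thread_updates: true           # app mode: follow-ups land in a thread
    actions: [acknowledge, escalate]
    # webhook mode would instead need only an incoming-webhook URL:
    # webhook_url: https://hooks.slack.com/services/T000/B000/XXXX
```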

Email goes through SES with custom DKIM on your domain by default; teams can swap in their own SMTP if they need to. We render every alert email in two versions: a clean plain-text version, and a richer HTML version with the inline graph and the agent activity feed. The plain-text version is what reaches your watch.
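A sketch of the email channel, again with illustrative field names, including the optional SMTP override:

```yaml
# Hypothetical email channel config; field names are illustrative.
channels:
  email:
    to:
      - oncall-primary@example.com
      - compliance-digest@example.com
    # SES with DKIM on your domain is the default transport;
    # uncomment to route through your own SMTP instead:
    # smtp:
    #   host: mail.internal.example.com
    #   port: 587
    #   starttls: true
```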

X is the newest and most useful for status communications. The integration posts to a configurable account using OAuth tokens you provide. Public-facing incidents (status page changes, customer notifications) flow here; engineering alerts don't. We treat X as a one-way customer comms channel, not an on-call channel.
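And a sketch of the X channel. The token reference and the allow/deny lists are assumptions about shape, not the shipped schema:

```yaml
# Hypothetical X channel config; field names are illustrative.
channels:
  x:
    account: "@example_status"
    oauth_token_ref: secrets/x-status-tokens     # you supply the tokens
    allow: [status-page, customer-notification]
    deny: [engineering]                          # one-way customer comms only
```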

Per-channel rate limits

This was the headline ask from beta tenants. A misbehaving service that fires 400 alerts in 5 minutes used to flatten a Slack channel and make every other alert invisible. The rate limit fixes that: every channel has its own bucket, and Nova batches the overflow into a single rollup message instead of dropping alerts or DoS'ing the channel.

The default is generous (60 alerts per channel per 5 minutes) and configurable. When you cross it, the next message is a rollup: "47 alerts from billing-api in the last 5 min · latency p99 above threshold · view burst." Click through and you get the full list with per-alert detail.
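In config terms, an override might look like this sketch (field names illustrative):

```yaml
# Hypothetical rate-limit override; field names are illustrative.
channels:
  slack:
    rate_limit:
      max_alerts: 60     # the default
      window: 5m
      overflow: rollup   # batch the burst into one summary message
```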

The rate limit is per channel, not per service, on purpose. A single service genuinely having a bad time should generate one stream of alerts; a single team subscribing to many services across multiple channels shouldn't have one service drown the others. Per-channel buckets give you that isolation cleanly.

Routing rules

Alerts route based on a small DSL. Service tag, severity, time of day, on-call rotation status, all of it composable. A typical rule looks like: "if severity >= P2 and team = platform, fan out to #platform-alerts (Slack) and primary on-call email; suppress X." We ship six templates that cover most teams, and the editor previews which historical alerts a rule would catch before you save.
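Rendered through the YAML escape hatch described in the next section, that rule might look something like the following; the exact field names are illustrative, not the shipped schema.

```yaml
# Hypothetical YAML rendering of the rule quoted above; names are illustrative.
rules:
  - name: platform-high-severity
    priority: 10                                # explicit precedence, highest wins
    match: severity >= P2 and team = platform   # the DSL expression
    route:
      - slack: "#platform-alerts"
      - email: oncall-primary
    suppress: [x]
```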

Rules cascade with explicit precedence. The highest-priority rule wins; ties are broken by rule ID, oldest first. We avoided the implicit "most specific match" model from older alert routers because it surprises people too often; explicit precedence makes troubleshooting routing decisions a one-screen task.

Configuring it

The config lives under your tenant's notification settings. There's a UI for the common cases and a YAML escape hatch for the rest. Both round-trip: edits in the UI write back to the YAML, and YAML changes appear in the UI within seconds. Teams that prefer GitOps can sync the YAML from a repository; we'll pull on commit and apply atomically.
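For GitOps users, a sync stanza could look like this sketch; again, the field names are illustrative rather than the shipped schema.

```yaml
# Hypothetical GitOps sync stanza; field names are illustrative.
sync:
  repo: git@github.com:example-org/alert-config.git
  branch: main
  path: notifications/
  apply: atomic        # pulled on commit, applied as one change
```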

One detail worth knowing: alert delivery is at-least-once, not exactly-once, by design. We retry on transient channel failures, which means an alert can occasionally fire twice. Every alert carries an idempotency key so downstream tools can dedupe; in Slack we use it to update the existing message rather than post a new one. In email we drop the duplicate at the SMTP layer.
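For example, a delivered alert might carry its key like this (payload shape illustrative); dedupe on that key downstream and a retried delivery becomes a no-op.

```yaml
# Hypothetical alert payload; field names are illustrative.
alert:
  idempotency_key: 7f3a9c1e-0b42-4d6e-9a11-3c5e8f20d7aa   # stable across retries
  service: billing-api
  severity: P2
  summary: latency p99 above threshold
  channel: email
```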

If you're still sending every alert to a single Slack channel, the fan-out is probably the lowest-effort meaningful improvement you can make to your on-call experience this quarter. Roll it out one team at a time; the per-channel rate limits do most of the work.