Nova v2.8: Multi-Channel Broadcast and Operator Search
Two big things shipped today. Multi-channel broadcast: write the incident update once and fan it out to Slack, X, LinkedIn, Threads, and your status page. And operator search: a slash command that searches every incident, runbook, and post-mortem you've ever written.
Why we built broadcast
During an incident, comms is a tax. The on-call writes the same update three times: once for the internal Slack channel, once for the public status page, once for the X account. By the time it's posted everywhere, the situation has moved on and the update is stale. Customers see inconsistent timestamps across channels and lose trust.
We watched this pattern across our customer base for six months. The teams that did comms well had a dedicated incident commander whose only job was copy-paste relay. The teams that didn't had silent X accounts during outages and customer support tickets piling up because nobody knew the platform was degraded.
v2.8's multi-channel broadcast turns this into a single action. Write the update once in the incident timeline; pick the channels; Nova fans out the post with channel-appropriate formatting. The on-call gets back to the actual fix.
How broadcast works
Each channel has its own formatter. Slack gets a rich block layout with the incident link, the affected services, and a status colour. X gets the 280-character version with the public status page URL. LinkedIn gets the longer-form executive update with the time-since-detection, mitigations in flight, and the customer-impact summary. Threads gets the X version. The status page gets the full Markdown.
The formatters are pluggable. Each one is a small TypeScript module that takes the incident object and returns the channel-specific payload. Adding Discord or Microsoft Teams is two hours of work; the API surface area is intentionally small.
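The post doesn't publish the formatter API, so here is a minimal sketch of what a pluggable channel formatter could look like. The `Incident` shape, the `ChannelFormatter` interface, and the X character budget logic are all assumptions for illustration, not Nova's actual module contract.

```typescript
// Hypothetical sketch of the pluggable formatter API; names are illustrative.
interface Incident {
  id: string;
  title: string;
  status: "investigating" | "identified" | "monitoring" | "resolved";
  update: string;          // the Markdown update written in the timeline
  statusPageUrl: string;
}

interface ChannelFormatter {
  channel: string;
  format(incident: Incident): string;
}

// An X-style formatter: fit in 280 characters, always keep the status page URL.
const xFormatter: ChannelFormatter = {
  channel: "x",
  format(incident) {
    const suffix = ` ${incident.statusPageUrl}`;
    const budget = 280 - suffix.length;
    const text = `[${incident.status}] ${incident.update}`;
    const body =
      text.length > budget ? text.slice(0, budget - 1) + "…" : text;
    return body + suffix;
  },
};
```

Under this sketch, adding a new channel really is just one more object implementing `format`, which is consistent with the "two hours for Discord or Teams" claim.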
Auth flows are OAuth where the channel supports it (X, LinkedIn, Threads, Slack, Microsoft Teams) and webhook URLs where it doesn't (status page integrations, Discord). Tokens are stored encrypted at rest with a per-tenant KMS key, the same key path used for the rest of Nova's secrets.
One important detail: each channel can have a posting cadence cap. For example, X is rate-limited to one post per 15 minutes by default to avoid drowning the timeline during a long incident. The cap is configurable; the default came from watching customers self-DDoS their own comms during the first incident they used broadcast on.
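The cadence cap is simple to picture as a minimum interval between posts per channel. This is a sketch under that assumption; `CadenceCap`, `canPost`, and the field names are illustrative, not Nova's configuration schema.

```typescript
// Hypothetical sketch of a per-channel posting cadence cap.
interface CadenceCap {
  channel: string;
  minIntervalMs: number; // e.g. 15 minutes for X by default
}

function canPost(
  cap: CadenceCap,
  lastPostAt: number | null, // epoch ms of the last broadcast, or null
  now: number,
): boolean {
  if (lastPostAt === null) return true; // first post always goes out
  return now - lastPostAt >= cap.minIntervalMs;
}

const xCap: CadenceCap = { channel: "x", minIntervalMs: 15 * 60 * 1000 };
```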
Operator search
The second piece in v2.8 is operator search: a slash command in Slack, `/nova search "redis connection drop"`, that searches every incident, every runbook, and every post-mortem your team has ever written, plus the public Nova knowledge base. It returns ranked results, each with a one-line summary and a link.
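For concreteness, a ranked hit might look like the following. The `SearchHit` fields and the Slack line format are assumptions for illustration; the post only specifies "a one-line summary and a link".

```typescript
// Hypothetical shape of an operator-search result; field names are illustrative.
interface SearchHit {
  title: string;
  docType: "incident" | "runbook" | "postmortem" | "kb";
  summary: string; // the one-line summary shown in Slack
  url: string;
}

// Render one ranked hit as a single Slack line.
function renderHit(rank: number, hit: SearchHit): string {
  return `${rank}. [${hit.docType}] <${hit.url}|${hit.title}>: ${hit.summary}`;
}
```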
This sounds simple. The build wasn't. The corpus is small per tenant (mid-sized teams have 500-2,000 incidents) but heterogeneous: incident timelines are mostly chat logs, post-mortems are structured documents, and runbooks are step-lists. Treating them as homogeneous text gives mediocre search.
v2.8 indexes each document type with its own embedding strategy and re-ranks at query time using per-document-type weights. Post-mortems weight higher when the query matches "lessons learned" or "what went wrong" intents; runbooks weight higher for "how to fix" intents; raw incident timelines weight higher for "has this happened before" intents.
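The intent-to-weight mapping can be sketched as a small lookup table applied before sorting. The intent names, weight values, and `rerank` function below are illustrative assumptions, not Nova's actual model or numbers.

```typescript
// Hypothetical sketch of intent-aware re-rank weights; values are illustrative.
type DocType = "postmortem" | "runbook" | "timeline";
type Intent = "what-went-wrong" | "how-to-fix" | "has-this-happened";

const intentWeights: Record<Intent, Record<DocType, number>> = {
  "what-went-wrong":   { postmortem: 1.5, runbook: 0.8, timeline: 1.0 },
  "how-to-fix":        { postmortem: 0.9, runbook: 1.5, timeline: 0.8 },
  "has-this-happened": { postmortem: 1.0, runbook: 0.7, timeline: 1.5 },
};

// Scale each retrieval score by the weight for its document type, then sort.
function rerank(
  intent: Intent,
  hits: { docType: DocType; score: number }[],
): { docType: DocType; score: number }[] {
  return hits
    .map((h) => ({ ...h, score: h.score * intentWeights[intent][h.docType] }))
    .sort((a, b) => b.score - a.score);
}
```

With a "how to fix" query, a runbook with a slightly lower raw score can outrank a post-mortem, which is the behaviour the paragraph above describes.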
Search architecture
The index is a per-tenant pgvector store backed by Postgres, with HNSW for nearest-neighbour search. Embeddings come from a fine-tuned model trained on our incident corpus; generic OpenAI embeddings underperformed on our domain by ~14% in retrieval-quality benchmarks, so we ate the cost of training.
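A pgvector nearest-neighbour lookup of this shape typically looks like the query below. The table and column names are assumptions, not Nova's actual schema; `<=>` is pgvector's cosine-distance operator, and the `ORDER BY` on it is what lets Postgres serve the query from the HNSW index.

```typescript
// Hypothetical sketch of the per-tenant k-NN SQL; schema names are assumed.
function knnQuery(limit: number): string {
  return [
    "SELECT doc_id, doc_type, embedding <=> $1 AS distance",
    "FROM incident_docs",
    "WHERE tenant_id = $2",
    "ORDER BY embedding <=> $1", // served by the HNSW index on "embedding"
    `LIMIT ${limit};`,
  ].join("\n");
}
```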
Re-ranking uses a smaller cross-encoder that scores query-document pairs at retrieval time. The cross-encoder is the source of the intent-aware re-ranking: we classify the query intent first ("find a fix" vs "find a similar incident" vs "find the post-mortem") and apply different weights for each class.
End-to-end p95 latency for a search query is 240 ms. The slow path is the cross-encoder re-rank, which is GPU-bound; we use small per-request batch sizes to keep latency low rather than maximise throughput. For high-volume tenants we have a per-org GPU pool that auto-scales.
How to use it
Broadcast is on by default for new tenants. Existing tenants need to authorise the channels in Settings → Integrations → Broadcast. Once channels are connected, the "Broadcast" button appears in the incident timeline editor next to the "Save update" action.
Operator search is on by default for everyone. The Slack slash command is `/nova search`, the in-app shortcut is `Cmd+K`, and search appears in the global header. There's no setup: the indexer runs nightly on the existing incident and post-mortem corpus, and a fresh write hits the index within 30 seconds.
Both features are included in the existing tier, no upcharge, no add-on. They're how the product should have worked from day one.