Consensus Arbiter resolves conflicts between competing agent proposals. When the postgres specialist says "kill the slow query" and the cache specialist says "warm the cache" for the same incident, the arbiter compares evidence quality, blast radius, agent trust scores, and reversibility. It picks one. The other waits as a fallback. Disagreement no longer freezes the response.
When two proposals conflict, the arbiter scores each on four dimensions: evidence quality (does the proposal cite live signals?), blast radius (smaller is preferred for ambiguous incidents), agent trust (the agent's historical track record on similar incidents), and reversibility (a fix you can undo beats one you cannot). The proposal with the higher composite score wins.
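The scoring can be sketched as a weighted composite over the four dimensions. This is a minimal illustration, not the arbiter's actual implementation: the weights, the 0-100 scales, and the inversion of blast radius are all assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical weights -- the real arbiter's weighting is not documented here.
WEIGHTS = {
    "evidence_quality": 0.35,
    "blast_radius": 0.25,   # inverted below: a smaller radius scores higher
    "agent_trust": 0.25,
    "reversibility": 0.15,
}

@dataclass
class Proposal:
    agent: str
    evidence_quality: float  # 0-100: does it cite live signals?
    blast_radius: float      # 0-100: 100 = touches everything
    agent_trust: float       # 0-100: historical track record on similar incidents
    reversibility: float     # 0-100: 100 = trivially undoable

    def composite(self) -> float:
        return (
            WEIGHTS["evidence_quality"] * self.evidence_quality
            + WEIGHTS["blast_radius"] * (100 - self.blast_radius)
            + WEIGHTS["agent_trust"] * self.agent_trust
            + WEIGHTS["reversibility"] * self.reversibility
        )

def arbitrate(a: Proposal, b: Proposal) -> tuple[Proposal, Proposal]:
    """Return (winner, fallback) by composite score."""
    return (a, b) if a.composite() >= b.composite() else (b, a)
```

A well-evidenced but riskier fix can still beat a safe one if its evidence and trust scores are high enough; the weights decide that trade-off.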
The losing proposal does not get thrown away. It is held as a fallback. If the winner's ground-truth verifier reports a regression, the arbiter activates the fallback automatically. This is why we want competing specialists: when one is wrong, the second guess is already prepared and reviewed.
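The winner-then-fallback flow can be sketched as a small control loop. `execute` and `verify` are hypothetical callables standing in for the real remediation hook and ground-truth verifier, which this article does not specify.

```python
def apply_with_fallback(winner, fallback, execute, verify):
    """Execute the winning proposal; if the ground-truth verifier
    reports a regression, activate the held fallback automatically.

    execute(proposal) -- hypothetical remediation hook
    verify(proposal)  -- hypothetical verifier; False means regression
    """
    execute(winner)
    if not verify(winner):
        # Winner regressed: the second guess is already prepared and reviewed.
        execute(fallback)
        return fallback
    return winner
```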
Sometimes a senior engineer knows the second proposal is right based on context the agents do not have. Override is a single click on the conflict view: pick the other proposal, write a one-line reason, and the arbiter records the override and applies your decision. Future arbitrations on the same conflict pattern weight your override into the score.
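One way to picture the override feedback loop: record each override as a log entry keyed by conflict pattern, then add a small score bump for agents a human has previously picked on that pattern. The record shape, the flat per-override bump, and both function names are assumptions for the sketch, not the product's actual mechanism.

```python
import time

def record_override(conflict_id, chosen, rejected, reason, log):
    """Append a human override record for a conflict pattern.
    `log` is any append-able store (here, a plain list)."""
    log.append({
        "conflict_id": conflict_id,  # identifies the conflict pattern
        "chosen": chosen,            # agent the human picked
        "rejected": rejected,        # agent the arbiter had picked
        "reason": reason,            # the one-line justification
        "ts": time.time(),
    })

def trust_adjustment(conflict_id, agent, log, bump=5.0):
    """Hypothetical weighting: each past override in favour of `agent`
    on this conflict pattern adds a flat bump to its composite score."""
    return bump * sum(
        1 for rec in log
        if rec["conflict_id"] == conflict_id and rec["chosen"] == agent
    )
```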
If the composite scores are within 10 points and at least one proposal has a high blast radius, the arbiter does not pick. It escalates to a human via Approval Queue. Speed is good; speed on a coin-flip with destructive proposals is not. The escalate-on-tie rule prevents agent-on-agent lottery moments.
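The escalate-on-tie rule reduces to one comparison. The 10-point margin comes from the text above; the threshold for "high blast radius" is an assumption for the sketch.

```python
TIE_MARGIN = 10    # scores "within 10 points" count as a near-tie
HIGH_BLAST = 70    # hypothetical cutoff for a "high blast radius" proposal

def decide(score_a, blast_a, score_b, blast_b):
    """Return 'a', 'b', or 'escalate'. Near-tied scores plus at least
    one high-blast-radius proposal go to a human via the Approval Queue
    instead of being auto-resolved."""
    if abs(score_a - score_b) <= TIE_MARGIN and max(blast_a, blast_b) >= HIGH_BLAST:
        return "escalate"
    return "a" if score_a >= score_b else "b"
```

A near-tie between two low-blast-radius proposals still auto-resolves; escalation only triggers when the coin-flip could be destructive.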
Specialists disagreeing is healthy. Specialists deadlocking on every incident is not. The arbiter is what makes the multi-agent fleet shippable.