The Action Schema Validator sits between every agent proposal and every production tool. Each tool has a JSON Schema describing the exact shape of arguments it accepts, and the validator runs each proposal against that schema. Wrong type, missing field, unexpected key: the call is blocked before the tool runs. The agent gets a structured rejection it can use to retry, not a half-applied call to fix later.
Every tool exposed to the agents is registered with a JSON Schema describing its argument shape: types, required fields, enum values, numeric ranges. When an agent proposes a tool call, the validator parses the proposal and runs JSON Schema validation against the registered shape. Pass means the tool runs. Fail means the agent receives a structured error and decides whether to retry or escalate.
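The register-then-validate flow can be sketched in a few lines. This is a minimal illustration, not the product's actual API: the names `register_tool` and `validate_proposal`, the hand-rolled type checks, and the rejection shape are all assumptions standing in for a full JSON Schema engine.

```python
# Hypothetical sketch of the validate-before-dispatch flow; names and the
# rejection format are illustrative, not Nova AI Ops' real API.

REGISTRY = {}  # tool name -> JSON-Schema-like dict


def register_tool(name, schema):
    REGISTRY[name] = schema


def _type_ok(value, t):
    # bool is an int subclass in Python; never accept it for numeric types
    if t == "string":
        return isinstance(value, str)
    if isinstance(value, bool):
        return False
    if t == "integer":
        return isinstance(value, int)
    if t == "number":
        return isinstance(value, (int, float))
    return False


def validate_proposal(tool, args):
    """Return None if the call may run, else a structured rejection list."""
    schema = REGISTRY[tool]
    props = schema["properties"]
    errors = []
    for field in schema.get("required", []):
        if field not in args:
            errors.append({"field": field, "error": "missing"})
    for key, value in args.items():
        if key not in props:
            errors.append({"field": key, "error": "unexpected"})
        elif not _type_ok(value, props[key]["type"]):
            errors.append({"field": key, "error": "expected " + props[key]["type"]})
        elif "enum" in props[key] and value not in props[key]["enum"]:
            errors.append({"field": key, "error": "not in " + str(props[key]["enum"])})
    return errors or None


register_tool("create_ticket", {
    "required": ["title", "priority"],
    "properties": {
        "title": {"type": "string"},
        "priority": {"type": "string", "enum": ["low", "high"]},
    },
})

# A bad proposal comes back as data the agent can act on, not an exception:
validate_proposal("create_ticket", {"title": "db down", "priority": "urgent", "id": 7})
# -> rejection listing the enum violation on "priority" and the unexpected "id"
```

Pass returns `None` and the tool runs; fail returns the error list, and the agent decides whether to retry or escalate.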
Loose validation seems convenient until production breaks because an agent passed a string where a number was expected and a tool silently coerced it. Strict validation is one more reason an agent has to retry, which costs tokens. Production not breaking is worth the tokens. Every reject in the ledger has saved a real-world ticket.
When a tool needs to accept a new argument, its schema gets a new version. Agents are pinned to a schema version per tenant. Adding a field is a major bump (existing agents do not get it), and so is loosening a constraint. The validator never silently accepts more than it did yesterday.
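Per-tenant pinning amounts to a versioned schema registry plus a pin table. A minimal sketch, assuming a `(tool, version)` keyed registry and a `schema_for` lookup; none of these names come from the product itself.

```python
# Hypothetical per-tenant schema pinning; registry layout and names are
# assumptions, not the product's real storage model.

SCHEMAS = {
    ("send_email", 1): {"required": ["to", "body"]},
    # major bump: version 2 accepts an extra optional field
    ("send_email", 2): {"required": ["to", "body"], "optional": ["cc"]},
}

PINS = {
    "tenant-a": {"send_email": 1},  # stays on v1 until explicitly re-pinned
    "tenant-b": {"send_email": 2},
}


def schema_for(tenant, tool):
    """Resolve exactly the schema version this tenant is pinned to."""
    version = PINS[tenant][tool]
    return SCHEMAS[(tool, version)]
```

Because the lookup goes through the pin table, tenant-a's agents never see the looser v2 shape until someone deliberately moves the pin.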
The page shows live validation traffic and a per-tool reject rate. A tool with a 5% reject rate is a tuning signal: either the agent prompt is wrong about the schema, or the schema is wrong about the tool. Both are cheap to fix when you can see the data. Use the search to find rejects for one agent or one tool.
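The per-tool reject rate is a straightforward aggregation over the validation ledger. A sketch, assuming the ledger can be read as `(tool, passed)` pairs; the function name is illustrative.

```python
# Hypothetical computation of the per-tool reject rate shown on the page,
# assuming a ledger readable as (tool, passed) pairs.
from collections import Counter


def reject_rates(ledger):
    """Return {tool: fraction of proposals rejected} from (tool, passed) pairs."""
    total, rejects = Counter(), Counter()
    for tool, passed in ledger:
        total[tool] += 1
        if not passed:
            rejects[tool] += 1
    return {tool: rejects[tool] / total[tool] for tool in total}


ledger = [("create_ticket", True), ("create_ticket", False),
          ("create_ticket", True), ("create_ticket", True)]
reject_rates(ledger)  # create_ticket rejects 1 of 4 proposals
```

A rate near 5% for one tool narrows the question immediately: filter the ledger to that tool and read the structured errors to see whether the prompt or the schema is at fault.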
A model that calls a tool with the wrong arguments is not creative. It is broken. The validator catches it before any production system sees it.