Memory Provenance attaches a confidence score and a signed origin to every memory the agents write. When an agent recalls a memory, it sees the source, the time it was written, the verifier outcomes (if the memory came from a remediation), and the confidence score. Stale or unverified memories remain recallable but are weighted down, so they do not silently drive new actions.
Every memory write captures the source: which agent wrote it, which incident it came from, which signal triggered it, which model produced it. The full provenance chain is signed by the platform key. Recalled memories arrive with that chain intact, so the agent (and a human reviewer) can audit the source before acting on it.
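The platform's actual key management and signature scheme aren't described here, so this is a minimal sketch of the idea, assuming an HMAC over the canonical JSON of the provenance record; the field names and key are illustrative:

```python
import hashlib
import hmac
import json

PLATFORM_KEY = b"hypothetical-platform-key"  # assumption: real key comes from a KMS

def sign_provenance(record: dict) -> dict:
    """Sign the provenance chain over its canonical (sorted-key) JSON form."""
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(record: dict) -> bool:
    """Recompute the signature, excluding the signature field, and compare safely."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)

memory = sign_provenance({
    "agent": "remediation-agent-7",      # which agent wrote it
    "incident": "INC-2041",              # which incident it came from
    "signal": "cpu-saturation-alert",    # which signal triggered it
    "model": "example-model-v1",         # which model produced it
})
```

On recall, the chain travels with the memory, so any consumer holding the key can run `verify_provenance` before acting on it.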
Confidence (0-100) is computed at write time and updated over time. Initial score depends on source quality (verified remediation > unverified > model assertion). The score decays with age. The score updates upward when the memory is corroborated by future incidents and downward when it is contradicted. Recall ranks memories by confidence so high-confidence facts beat noise.
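The exact scoring function isn't published, but the shape described above can be sketched as follows; the base scores, half-life, and adjustment weights are illustrative assumptions:

```python
# Illustrative base scores: verified remediation > unverified > model assertion
BASE_SCORE = {"verified_remediation": 90, "unverified": 60, "model_assertion": 40}
HALF_LIFE_DAYS = 30  # assumption: age decay half-life is a tunable policy

def confidence(source: str, age_days: float,
               corroborations: int = 0, contradictions: int = 0) -> float:
    """Score in [0, 100]: base by source quality, exponential age decay,
    then adjusted up for corroborating and down for contradicting incidents."""
    score = BASE_SCORE[source] * 0.5 ** (age_days / HALF_LIFE_DAYS)
    score += 5 * corroborations - 10 * contradictions
    return max(0.0, min(100.0, score))
```

Recall would then rank by this score, e.g. `sorted(memories, key=lambda m: m["confidence"], reverse=True)`, so high-confidence facts surface first.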
Every prompt that includes a recalled memory carries the memory id and a citation. The agent's reasoning ledger entry shows which memories influenced the decision and at what weight. If a customer asks "why did the agent assume X?" the answer is one click away in the bundle.
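One way this pairing could look in code, as a sketch with hypothetical field names rather than the platform's actual prompt or ledger schema:

```python
def build_recall_context(memories: list[dict]) -> tuple[str, dict]:
    """Render recalled memories with id citations for the prompt, and record
    which memories were used, at what weight, for the reasoning ledger."""
    cited_lines = []
    ledger_entries = []
    for m in memories:
        cited_lines.append(f"[mem:{m['id']}] {m['fact']} (confidence {m['confidence']})")
        ledger_entries.append({"memory_id": m["id"], "weight": m["confidence"] / 100})
    return "\n".join(cited_lines), {"memories_used": ledger_entries}

prompt_context, ledger_entry = build_recall_context([
    {"id": "m-1", "fact": "service X flaps after deploys", "confidence": 82},
])
```

The ledger entry is what answers "why did the agent assume X?": it names the memory ids and the weight each carried in the decision.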
Memories have retention policies. Time-sensitive memories expire on a schedule. Tenant-deletion requests purge all memories tagged to that tenant within 24 hours. Customer-facing personal data inside memories is automatically redacted by the Prompt Egress Scanner at write time, before storage. Privacy is built in, not bolted on.
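A retention sweep combining both rules could look like this minimal sketch, assuming per-memory `expires_at` timestamps and tenant tags (field names are illustrative):

```python
from datetime import datetime, timezone

def purge(store: list[dict], now: datetime, deleted_tenants: set[str]) -> list[dict]:
    """Drop expired memories and any memory tagged to a deleted tenant."""
    kept = []
    for m in store:
        if m.get("tenant") in deleted_tenants:
            continue  # tenant-deletion request: purge everything tagged to them
        expires = m.get("expires_at")
        if expires is not None and expires <= now:
            continue  # time-sensitive memory past its retention schedule
        kept.append(m)
    return kept
```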
Provenance turns AI memory from "trust me" into "here is the receipt." Every fact has a source, every recall has a citation.