AI Safety & Governance

AI memory cannot be a black box:
every memory is signed, scored, and citable

Memory Provenance attaches a confidence score and a signed origin to every memory the agents write. When an agent recalls a memory, it sees the source, the time it was written, the verifier outcomes (if the memory came from a remediation), and the confidence score. Old, stale, or unverified memories are still recallable but weighted down, so they do not silently drive new actions.

Get Started Talk to Sales
app.novaaiops.com / memory-provenance
  • 100% of memories signed
  • 0-100 confidence score per memory
  • Decay with age and non-verification
  • Citable: every recall carries its source
Signed Origin

Where did this fact come from?

Every memory write captures the source: which agent wrote it, which incident it came from, which signal triggered it, which model produced it. The full provenance chain is signed by the platform key. Recalled memories arrive with that chain intact, so the agent (and a human reviewer) can audit the source before acting on it.

  • Source captured: agent, incident, signal, model, all four recorded at write time
  • Signed by platform key: memory bodies cannot be silently edited; tampering breaks the signature
  • Recall returns the chain: agents see the chain, not just the body, so they can choose to discount low-quality sources
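The write-and-verify flow above can be sketched with Python's stdlib `hmac`. Every name here (`write_memory`, `verify_memory`, the record fields) is illustrative, and a real deployment would sign with an asymmetric platform key held in a KMS rather than an in-process secret:

```python
import hashlib
import hmac
import json
import time

PLATFORM_KEY = b"demo-platform-key"  # hypothetical; the real key lives outside the process

def write_memory(body, agent, incident, signal, model):
    """Record a memory with its full provenance chain, signed at write time."""
    record = {
        "body": body,
        "provenance": {
            "agent": agent,       # which agent wrote it
            "incident": incident, # which incident it came from
            "signal": signal,     # which signal triggered it
            "model": model,       # which model produced it
            "written_at": time.time(),
        },
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_memory(record):
    """Any edit to the body or the provenance chain breaks the signature."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Because the signature covers both the body and the chain, a silently edited memory fails verification and can be discounted or rejected at recall time.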
Confidence Scoring

Not every memory deserves equal weight

Confidence (0-100) is computed at write time and updated over time. Initial score depends on source quality (verified remediation > unverified > model assertion). The score decays with age. The score updates upward when the memory is corroborated by future incidents and downward when it is contradicted. Recall ranks memories by confidence so high-confidence facts beat noise.

  • Initial score: verified remediation = 90, unverified = 60, raw model assertion = 30
  • Age decay: half-life of 90 days for time-sensitive memories; permanent for schema/structural facts
  • Update with experience: corroborated facts climb and contradicted facts fall, each with explicit weights
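The scoring rules above can be sketched as a single function. The initial scores (90/60/30) and the 90-day half-life come from the text; the corroboration and contradiction weights (+5/-10) are illustrative assumptions, not the product's actual values:

```python
INITIAL = {
    "verified_remediation": 90,  # came from a remediation with verifier outcomes
    "unverified": 60,
    "model_assertion": 30,       # raw model claim, lowest starting trust
}
HALF_LIFE_DAYS = 90

def confidence(source, age_days, corroborations=0, contradictions=0, permanent=False):
    """Compute a 0-100 confidence score with age decay and experience updates."""
    score = float(INITIAL[source])
    if not permanent:  # schema/structural facts skip decay entirely
        score *= 0.5 ** (age_days / HALF_LIFE_DAYS)
    # illustrative weights: each corroboration adds 5, each contradiction removes 10
    score += 5 * corroborations - 10 * contradictions
    return max(0, min(100, round(score)))
```

Recall then sorts by this score, so a fresh verified remediation outranks a stale, uncorroborated model assertion.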
Citable Recall

When the model uses a memory, you can ask why

Every prompt that includes a recalled memory carries the memory id and a citation. The agent's reasoning ledger entry shows which memories influenced the decision and at what weight. If a customer asks "why did the agent assume X?" the answer is one click away in the bundle.

  • Memory ids in prompts: recalled memories appear with [mem:abc123] citations the agent can reference in its reasoning
  • Bundle traceability: Decision Bundle records which memories were loaded and used, not just the final action
  • Customer-facing why: "why did the agent do X" answers are a one-click drill-in, not a research project
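Citable recall can be sketched as a prompt-rendering step: rank by confidence, prefix each memory with its `[mem:...]` citation, and keep the id so the reasoning ledger can reference it later. The function and field names here are hypothetical:

```python
def render_recall(memories):
    """Format recalled memories for the prompt, highest confidence first,
    each carrying a [mem:<id>] citation the agent can cite in its reasoning."""
    ordered = sorted(memories, key=lambda m: m["confidence"], reverse=True)
    return "\n".join(
        f"[mem:{m['id']}] (confidence {m['confidence']}) {m['body']}"
        for m in ordered
    )
```

When the agent's output cites `[mem:abc123]`, the Decision Bundle can map that citation straight back to the signed memory record, which is what makes "why did the agent assume X?" a one-click answer.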
Pruning & Privacy

Memories expire on schedule and on request

Memories have retention policies. Time-sensitive memories expire on a schedule. Tenant-deletion requests purge all memories tagged to that tenant within 24 hours. Customer-facing personal data inside memories is automatically redacted at write time by Prompt Egress Scanner before storage. Privacy is built in, not bolted on.

  • Schedule-based expiry: TTL per memory class; incident-specific memories expire 90 days after the incident
  • On-request purge: tenant deletion + GDPR right-to-erasure both supported with 24h SLA
  • PII never stored: egress scanner redacts PII before write; recalled memories carry placeholders not raw values
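A minimal sketch of the retention side, assuming a per-class TTL table and an in-memory store; the class names and `purge_tenant` helper are illustrative, and the real system would delete from durable storage under the 24h SLA:

```python
import time

# TTL per memory class, in days; None means the memory is permanent
TTL_DAYS = {
    "incident": 90,   # incident-specific: incident close + 90 days
    "schema": None,   # structural facts never expire on schedule
}

def expired(memory, now=None):
    """True if this memory's class TTL has elapsed since it was written."""
    now = time.time() if now is None else now
    ttl = TTL_DAYS.get(memory["klass"])
    if ttl is None:
        return False
    return now - memory["written_at"] > ttl * 86400

def purge_tenant(store, tenant_id):
    """Drop every memory tagged to a tenant (deletion / right-to-erasure)."""
    return [m for m in store if m.get("tenant") != tenant_id]
```

Schedule-based expiry and on-request purge are separate paths on purpose: expiry is a background sweep over `expired`, while a tenant-deletion request triggers an immediate `purge_tenant` pass.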
Video walkthrough coming soon

Subscribe to Nova AI Ops on YouTube for demos, tutorials, and feature deep-dives.

A model with memory is only as good as its provenance

Provenance turns AI memory from "trust me" into "here is the receipt." Every fact has a source, every recall has a citation.

Get Started Request a Demo