
Observability

Moat captures observability data for every run automatically. Container output, network requests, and security-relevant events are recorded as they happen. A cryptographic audit log ties these records together with tamper detection guarantees.

This page explains what data Moat collects, how the audit log provides integrity verification, and how the pieces relate to each other.

What Moat captures

Each run produces four types of observability data:

  • Container logs — Timestamped stdout and stderr output from the container process. These are the raw output lines the agent produces during execution.

  • Network traces — Every HTTP and HTTPS request that passes through the proxy. Each trace records the method, URL, response status, and request duration. Injected credentials are redacted in the trace output so that tokens do not appear in stored data.

  • Execution spans — OpenTelemetry-compatible spans that represent discrete operations: container creation, proxy startup, network requests, and container shutdown. Spans capture timing and hierarchy, showing how operations nest within the run lifecycle.

  • Audit events — Structured records of security-relevant actions: credential injection, secret resolution, SSH agent operations, and container lifecycle transitions. These events feed into the hash-chained audit log described below.

All four data types are written to per-run storage as the run executes. Nothing requires explicit opt-in — observability is always on.
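As a sketch of the credential redaction described for network traces above — the field names and the set of sensitive headers here are illustrative assumptions, not Moat's actual trace schema:

```python
# Sketch of credential redaction for stored network traces.
# Field names and the set of sensitive headers are assumptions
# for illustration, not Moat's actual schema.

SENSITIVE_HEADERS = {"authorization", "proxy-authorization", "x-api-key"}

def redact_trace(trace: dict) -> dict:
    """Return a copy of a trace record with credential headers masked."""
    redacted = dict(trace)
    redacted["headers"] = {
        name: ("[REDACTED]" if name.lower() in SENSITIVE_HEADERS else value)
        for name, value in trace.get("headers", {}).items()
    }
    return redacted

trace = {
    "method": "GET",
    "url": "https://api.example.com/repos",
    "status": 200,
    "duration_ms": 142,
    "headers": {"Authorization": "Bearer ghp_secret", "Accept": "application/json"},
}
print(redact_trace(trace)["headers"]["Authorization"])  # [REDACTED]
```

Redacting at write time, rather than at display time, is what keeps injected tokens out of the stored data entirely.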

Why default-on observability matters

AI agents make autonomous decisions. They issue network requests, consume credentials, read and write files, and interact with external services. Without observability, the only way to understand what happened during a run is to read the agent’s own output — which may be incomplete or misleading.

Default-on capture means every run produces a complete record regardless of whether the operator anticipated needing it. When a run produces unexpected results, the data is already there. When a credential is used in an unexpected way, the audit log already contains the event. This removes the need to reproduce issues with additional instrumentation enabled.

Audit log architecture

The audit log is the integrity layer of Moat’s observability system. While container logs and network traces are plain records, audit entries are stored in a cryptographic hash chain that makes tampering detectable.

Hash chain structure

Each audit entry contains a SHA-256 hash computed over its sequence number, timestamp, event type, payload, and the hash of the previous entry. This creates a chain: modifying any entry changes its hash, which breaks the link to the next entry.
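The per-entry hash can be sketched as follows. The canonical serialization (JSON with sorted keys) is an illustrative assumption; the chaining principle — each hash covers the previous entry's hash — matches the description above.

```python
import hashlib
import json

def entry_hash(seq: int, timestamp: str, event_type: str,
               payload: dict, prev_hash: str) -> str:
    """SHA-256 over the entry's fields plus the previous entry's hash.
    The serialization here is an assumption, not Moat's actual encoding."""
    material = json.dumps(
        [seq, timestamp, event_type, payload, prev_hash],
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(material.encode()).hexdigest()

genesis = "0" * 64  # conventional all-zero hash before the first entry
h1 = entry_hash(1, "2024-01-01T00:00:00Z", "container.start",
                {"image": "debian"}, genesis)
h2 = entry_hash(2, "2024-01-01T00:00:05Z", "network.request",
                {"url": "https://example.com"}, h1)
# h2 depends on h1, so altering entry 1 silently invalidates entry 2's hash.
```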

The chain provides three guarantees:

  • Modification detection — Changing an entry’s content invalidates its hash and every subsequent hash in the chain.
  • Deletion detection — Removing an entry creates a gap in the sequence numbers and breaks the hash linkage.
  • Insertion detection — Adding an entry between two existing entries requires recomputing all subsequent hashes, which invalidates the chain from that point forward.

Verification walks the chain from the first entry to the last, recomputing each hash and checking it against the stored value. A single mismatch indicates tampering.
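The verification walk can be sketched as a single forward pass that recomputes every hash and checks sequence numbers; the entry encoding is again an illustrative assumption.

```python
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    # Illustrative serialization; Moat's actual encoding may differ.
    material = json.dumps(
        [entry["seq"], entry["ts"], entry["type"], entry["payload"], prev_hash],
        sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(material.encode()).hexdigest()

def verify_chain(entries: list) -> bool:
    """Walk first-to-last, recomputing each hash. Any hash mismatch or
    sequence gap means the log was modified, truncated, or spliced."""
    prev = "0" * 64
    for i, e in enumerate(entries, start=1):
        if e["seq"] != i or e["hash"] != entry_hash(e, prev):
            return False
        prev = e["hash"]
    return True

# Build a small valid chain, then tamper with one payload.
entries, prev = [], "0" * 64
for seq, typ in enumerate(
        ["container.start", "network.request", "container.stop"], start=1):
    e = {"seq": seq, "ts": f"t{seq}", "type": typ, "payload": {}}
    e["hash"] = entry_hash(e, prev)
    prev = e["hash"]
    entries.append(e)

assert verify_chain(entries)
entries[1]["payload"] = {"url": "https://evil.example"}  # modify entry 2
assert not verify_chain(entries)                          # tampering detected
```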

Event types

The audit log records six categories of events:

  • Console output — container stdout and stderr.
  • Network requests — requests through the proxy, including method, URL, status, duration, and credential usage.
  • Credential injection — when credentials are injected and for which hosts.
  • Secret resolution — resolution from external backends; the secret value itself is never logged.
  • SSH agent operations — key listing, signing approvals, and denials.
  • Container lifecycle — creation, start, stop, and privileged mode usage.

Events are appended to the chain as they occur. The audit log is stored as a SQLite database within the run’s storage directory.
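Appending chained events to a per-run SQLite database can be sketched as below; the table name and columns are hypothetical, chosen only to illustrate the append-and-chain flow.

```python
import hashlib
import json
import sqlite3

# Hypothetical schema; Moat's actual table layout may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE audit_events (
        seq       INTEGER PRIMARY KEY,
        ts        TEXT NOT NULL,
        type      TEXT NOT NULL,
        payload   TEXT NOT NULL,
        prev_hash TEXT NOT NULL,
        hash      TEXT NOT NULL
    )""")

def append_event(ts: str, event_type: str, payload: dict) -> str:
    """Append an event, chaining its hash to the last stored entry."""
    row = conn.execute(
        "SELECT seq, hash FROM audit_events ORDER BY seq DESC LIMIT 1"
    ).fetchone()
    seq, prev = (row[0] + 1, row[1]) if row else (1, "0" * 64)
    body = json.dumps([seq, ts, event_type, payload, prev], sort_keys=True)
    h = hashlib.sha256(body.encode()).hexdigest()
    conn.execute("INSERT INTO audit_events VALUES (?,?,?,?,?,?)",
                 (seq, ts, event_type, json.dumps(payload), prev, h))
    conn.commit()
    return h

append_event("2024-01-01T00:00:00Z", "container.start", {"image": "debian"})
append_event("2024-01-01T00:00:05Z", "secret.resolve", {"backend": "vault"})
```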

Attestations and signatures

Beyond the hash chain, audit logs support cryptographic attestations that anchor the log to a point in time.

Local signatures — Ed25519 signatures over the final hash in the chain. These are generated by a per-installation key pair and prove the log was produced by Moat on the signing machine.
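A local attestation can be sketched as signing the final chain hash with a per-installation Ed25519 key. This uses the `cryptography` package and is an illustrative flow, not Moat's exact implementation.

```python
# Sketch: sign the final chain hash with a per-installation Ed25519 key.
# Illustrative flow, not Moat's exact implementation.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # per-installation key pair
final_hash = "ab" * 32                      # final hash of the chain (example value)

signature = signing_key.sign(final_hash.encode())

# Anyone holding the public key can check the attestation.
public_key = signing_key.public_key()
public_key.verify(signature, final_hash.encode())  # raises on mismatch

try:
    public_key.verify(signature, b"tampered hash")
    verified_tampered = True
except InvalidSignature:
    verified_tampered = False  # a forged or altered hash fails verification
```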

External timestamps — When available, Moat integrates with Sigstore’s Rekor transparency log. A Rekor entry provides third-party proof that the log existed at a specific time, independent of the local machine’s clock.

Together, these attestations support non-repudiation: the signer cannot deny creating the log, and the timestamp cannot be backdated without detection.

Proof bundles

A proof bundle is a self-contained export of an audit log. It includes all entries with their hashes, local attestations, and any Rekor proofs. The bundle contains everything needed to verify the audit chain without access to the original run data or the machine that produced it.
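A proof bundle can be thought of as a JSON document carrying the entries, the final chain hash, and the attestations. The layout below is an illustrative assumption; the point is that a verifier can recompute the chain from the bundle alone.

```python
import hashlib
import json

def chain_hash(entries: list) -> str:
    """Recompute the chain over the bundle's entries (illustrative encoding)."""
    prev = "0" * 64
    for e in entries:
        body = json.dumps([e["seq"], e["type"], e["payload"], prev],
                          sort_keys=True)
        prev = hashlib.sha256(body.encode()).hexdigest()
    return prev

entries = [
    {"seq": 1, "type": "container.start", "payload": {}},
    {"seq": 2, "type": "container.stop", "payload": {}},
]

# Hypothetical bundle layout: entries plus attestations over the final hash.
bundle_json = json.dumps({
    "entries": entries,
    "final_hash": chain_hash(entries),
    "attestations": {"ed25519_signature": "<hex>", "rekor_entry": None},
})

# Offline verification on another machine: recompute and compare.
bundle = json.loads(bundle_json)
assert chain_hash(bundle["entries"]) == bundle["final_hash"]
```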

Proof bundles serve three purposes:

  • Portability — Share audit evidence with third parties who do not have access to the Moat installation.
  • Archival — Store audit records in external systems (version control, append-only storage, notarization services) where they are protected from local deletion.
  • Offline verification — Verify the integrity of a run’s audit trail on a different machine, without network access or access to the original Moat data directory.

For export and verification commands, see the observability guide.

Storage model

Moat stores all observability data per-run. Each run gets its own storage directory containing container logs, network traces, execution spans, and the audit database. This per-run isolation means data from different runs does not intermingle, and removing a run’s artifacts is a single directory deletion.
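The per-run layout can be sketched as below; the directory and file names are hypothetical, but they show how per-run isolation makes cleanup a single directory deletion.

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical per-run layout; the actual names are assumptions.
storage_root = Path(tempfile.mkdtemp())
run_dir = storage_root / "runs" / "run-0001"
run_dir.mkdir(parents=True)
for name in ("container.log", "traces.jsonl", "spans.jsonl", "audit.db"):
    (run_dir / name).write_text("")

# Removing a run's artifacts is a single directory deletion.
shutil.rmtree(run_dir)
print(run_dir.exists())  # False
```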

Run artifacts persist after the container exits. The container itself is removed automatically, but logs, traces, and audit data remain until explicitly cleaned up. This separation ensures observability data survives the container lifecycle.

Retention is manual by default. Moat does not automatically delete run artifacts. This is a deliberate choice: audit data that disappears on its own undermines the purpose of keeping it. Operators decide when to clean up, either per-run or in bulk.

How the pieces relate

The four data types serve complementary purposes:

  • Container logs provide the raw narrative of what the agent printed. They are unstructured and complete, but offer no integrity guarantees.
  • Network traces provide structured records of external communication. They show what the agent accessed, how long it took, and what responses it received.
  • Execution spans provide timing and hierarchy. They show how operations relate to each other and where time was spent.
  • The audit log provides integrity. It records the same security-relevant events found in logs and traces, but wraps them in a hash chain that makes after-the-fact modification detectable.

Logs and traces are the data you query for debugging. The audit log is the data you verify for trust. In practice, you use logs and traces to investigate what happened, and the audit log to confirm that the investigation is based on unmodified records.

Trust model and limitations

The audit log provides tamper detection, not tamper prevention. It is a local data structure, not a distributed ledger.

  • Local trust boundary — Signatures are generated by the local Moat installation. They prove the log was created by Moat on the signing machine, not that the machine itself is trustworthy. An attacker with write access to Moat’s data directory could replace the entire database and re-sign it.
  • No distributed consensus — There is no network of validators. The audit log is as trustworthy as the machine running Moat.
  • Rekor integration strengthens but does not eliminate trust assumptions — A Rekor timestamp proves the log existed at a specific time, but does not prevent a compromised machine from producing a fraudulent log and submitting it.

For scenarios requiring stronger guarantees, export proof bundles and store them in external systems where they are protected by independent access controls.