
MCP servers

MCP (Model Context Protocol) servers extend AI agents with additional tools and context. An MCP server exposes capabilities — file search, database queries, API integrations — that an agent can invoke during a run.

Moat supports three types of MCP servers:

  • Remote MCP servers — External HTTPS services accessed through Moat’s credential-injecting proxy
  • Host-local MCP servers — Services running on the host machine, bridged to the container through the proxy relay
  • Sandbox-local MCP servers — Child processes running inside the container

This guide covers configuring all three types, granting credentials, and troubleshooting common issues.

For prepackaged language servers (Go, TypeScript, Python), see Language servers in the Claude Code guide.

Prerequisites

  • Moat installed
  • An agent configured (Claude Code, Codex, or Gemini)
  • For remote MCP servers: the server URL and any required credentials
  • For sandbox-local MCP servers: the server package name or executable

Remote MCP servers

Remote MCP servers are external HTTPS services declared at the top level of moat.yaml. Moat injects credentials at the network layer — the agent never sees raw tokens.

1. Grant credentials

Store credentials for the MCP server with moat grant mcp:

$ moat grant mcp context7
Enter credential for MCP server 'context7': ***************
MCP credential 'mcp-context7' saved to ~/.moat/credentials/mcp-context7.enc

The credential is encrypted and stored locally. The grant name follows the pattern mcp-<name>.

For public MCP servers that do not require authentication, skip this step.

2. Configure in moat.yaml

Declare the MCP server at the top level of moat.yaml (not under claude:, codex:, or gemini:):

mcp:
  - name: context7
    url: https://mcp.context7.com/mcp
    auth:
      grant: mcp-context7
      header: CONTEXT7_API_KEY

Required fields:

| Field | Description |
| --- | --- |
| name | Unique identifier for the server |
| url | HTTPS endpoint for remote servers. HTTP is allowed for host-local servers (localhost, 127.0.0.1, [::1]) |

Optional fields:

| Field | Description |
| --- | --- |
| auth.grant | Grant name to use (format: mcp-<name>) |
| auth.header | HTTP header name for credential injection |

Omit the auth block for public MCP servers that do not require authentication:

mcp:
  - name: public-tools
    url: https://public.example.com/mcp
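The scheme rules above (HTTPS for remote servers, plain HTTP only for host-local addresses) are easy to sanity-check before a run. The following is an illustrative sketch of that check, not Moat's actual validation code:

```python
from urllib.parse import urlsplit

# Hosts that may use plain HTTP (host-local servers, per the table above).
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1"}

def validate_mcp_url(url: str) -> bool:
    """Return True if the URL satisfies the scheme rules: HTTPS for
    remote servers, HTTP only for host-local addresses."""
    parts = urlsplit(url)
    if parts.scheme == "https":
        return True
    if parts.scheme == "http":
        # urlsplit strips the brackets from IPv6 literals like [::1].
        return parts.hostname in LOCAL_HOSTS
    return False
```

For example, `validate_mcp_url("http://public.example.com/mcp")` fails, while the same path over HTTPS or on localhost passes.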

3. Run the agent

moat claude ./my-project

No additional flags are needed. Moat reads the mcp: section from moat.yaml and configures the proxy relay automatically.

Multiple remote servers

Configure multiple MCP servers in a single moat.yaml:

mcp:
  - name: context7
    url: https://mcp.context7.com/mcp
    auth:
      grant: mcp-context7
      header: CONTEXT7_API_KEY

  - name: notion
    url: https://notion-mcp.example.com
    auth:
      grant: mcp-notion
      header: Notion-Token

Grant all required credentials before running:

moat grant mcp context7
moat grant mcp notion

Host-local MCP servers

Host-local MCP servers run on your host machine (outside the container). Containers cannot reach localhost on the host directly, so Moat’s proxy relay bridges the connection.

Configure in moat.yaml

Declare host-local servers in the mcp: section with an http://localhost or http://127.0.0.1 URL:

mcp:
  - name: my-tools
    url: http://localhost:3000/mcp

Authentication is optional. If the host-local server requires credentials:

mcp:
  - name: my-tools
    url: http://localhost:3000/mcp
    auth:
      grant: mcp-my-tools
      header: Authorization

Start the MCP server on the host

Start your MCP server on the host before running the agent:

# Start your MCP server (example)
my-mcp-server --port 3000 &

# Run the agent
moat claude ./my-project

The proxy relay forwards requests from the container to localhost:3000 on the host.
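If you do not have an MCP server handy, a stand-in HTTP endpoint is enough to watch the relay path work end to end. This sketch is a plain JSON-RPC echo endpoint, not a full MCP implementation; the port and path match the example configuration above:

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class StubMCPHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON-RPC request body and reply with a minimal error,
        # echoing the request id so the caller can correlate it.
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or b"{}")
        reply = json.dumps({
            "jsonrpc": "2.0",
            "id": request.get("id"),
            "error": {"code": -32601, "message": "stub: method not found"},
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep the console quiet
        pass

if __name__ == "__main__":
    # Listen on the port the moat.yaml example declares.
    ThreadingHTTPServer(("127.0.0.1", 3000), StubMCPHandler).serve_forever()
```

Run it in place of `my-mcp-server --port 3000` and any request the agent makes through the relay will come back with a well-formed JSON-RPC error you can see in the trace.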

Mixing remote and host-local servers

Remote and host-local servers can be configured together:

mcp:
  - name: context7
    url: https://mcp.context7.com/mcp
    auth:
      grant: mcp-context7
      header: CONTEXT7_API_KEY

  - name: local-tools
    url: http://localhost:3000/mcp

How the relay works

Some agent HTTP clients do not respect HTTP_PROXY for MCP connections, so Moat routes MCP traffic through a local relay endpoint on the proxy. The agent connects to the relay; the relay injects credentials from the grant store and forwards requests to the real MCP server. For host-local servers, the relay forwards to localhost on the host, bridging the network boundary between the container and the host. The agent never has access to raw credentials, and all MCP traffic appears in network traces and audit logs.
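The injection step can be pictured as a header rewrite on each outbound request. This sketch is illustrative only, assuming the agent sends a placeholder value of the form moat-stub-<grant> (the same stub that surfaces in error output, per the Troubleshooting section):

```python
STUB_PREFIX = "moat-stub-"

def inject_credentials(headers: dict, grant_store: dict) -> dict:
    """Replace any moat-stub-<grant> header values with the real
    credential from the grant store. Illustrative sketch only."""
    out = {}
    for name, value in headers.items():
        if value.startswith(STUB_PREFIX):
            grant = value[len(STUB_PREFIX):]
            # If the grant is missing, the stub passes through unchanged --
            # the failure mode described under Troubleshooting.
            out[name] = grant_store.get(grant, value)
        else:
            out[name] = value
    return out
```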

See Proxy architecture for details on how the relay works.

Policy enforcement

Restrict which MCP tool calls an agent can make with the policy field. Rules are evaluated by Keep at the proxy layer before requests reach the MCP server.

Starter packs

Starter packs are built-in policy sets for common MCP servers. Apply one by name:

mcp:
  - name: linear
    url: https://mcp.linear.app/mcp
    auth:
      grant: mcp-linear
      header: Authorization
    policy: linear-readonly

This restricts the Linear MCP server to read-only operations. Write operations like create_issue or delete_issue are denied.

Available starter packs:

| Name | Description |
| --- | --- |
| linear-readonly | Allows read operations on Linear, denies all writes |

Inline rules

For simple policies, define rules directly in moat.yaml:

mcp:
  - name: linear
    url: https://mcp.linear.app/mcp
    policy:
      deny: [delete_issue, update_issue]
      mode: enforce

Listed operations are denied; unlisted operations are implicitly allowed.

The mode field controls enforcement:

| Mode | Behavior |
| --- | --- |
| enforce | Deny blocked operations (default) |
| audit | Log policy decisions without blocking |
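The deny-list semantics plus the two modes fit in a few lines. A sketch, assuming a simplified decision function rather than Keep's actual engine:

```python
def evaluate(operation: str, deny: list, mode: str = "enforce"):
    """Return (allowed, log_entry) for one MCP tool call.
    Listed operations are denied; everything else is implicitly
    allowed. In audit mode the decision is logged but never blocks."""
    matched = operation in deny
    decision = "deny" if matched else "allow"
    log_entry = f"policy.{decision} operation={operation} mode={mode}"
    if mode == "audit":
        return True, log_entry          # log only, never block
    return not matched, log_entry       # enforce: block denied operations
```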

File-based rules

For larger policies, store rules in a separate file. The .keep/ directory in your workspace root is the conventional location:

mcp:
  - name: linear
    url: https://mcp.linear.app/mcp
    policy: .keep/linear-rules.yaml

The file path is relative to the workspace root.

Keep rules files use Keep’s native YAML format with scope, mode, and rules:

# .keep/linear-rules.yaml
scope: linear
mode: enforce
rules:
  - name: deny-deletes
    match:
      operation: "delete_*"
    action: deny
    message: "Delete operations are not allowed."

  - name: deny-updates
    match:
      operation: "update_*"
    action: deny
    message: "Update operations are not allowed."

See the Keep documentation for the full rule format, including pattern matching, CEL expressions, and redaction.
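The wildcard patterns in match.operation behave like shell globs. The sketch below shows first-match evaluation over rules in that shape, assuming (as the inline-rule semantics above state) that unmatched operations fall through to allow:

```python
from fnmatch import fnmatch

def evaluate_rules(rules: list, operation: str) -> str:
    """Return the action of the first rule whose operation pattern
    matches, or "allow" if no rule matches. Illustrative sketch."""
    for rule in rules:
        if fnmatch(operation, rule["match"]["operation"]):
            return rule["action"]
    return "allow"
```

With the two rules from the example file, `delete_issue` and `update_issue` are denied while `get_issue` falls through to allow.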

Audit mode

Use mode: audit to observe what the agent does before enforcing restrictions. Policy decisions are logged to the run’s audit trail without blocking any operations:

mcp:
  - name: linear
    url: https://mcp.linear.app/mcp
    policy:
      deny: [delete_issue]
      mode: audit

Run the agent, then review audit entries:

$ moat audit <run-id>

Once you are confident in the rules, switch to mode: enforce.

Sandbox-local MCP servers

Sandbox-local MCP servers run as child processes inside the container. The agent starts them directly — no proxy is involved. Configure them under the agent-specific section of moat.yaml.

1. Install the MCP server

The server executable must be available inside the container. Use a pre_run hook to install it:

hooks:
  pre_run: npm install -g @modelcontextprotocol/server-filesystem

Alternatively, include the server in your project’s dependencies so it is installed alongside your code.

2. Configure in moat.yaml

Declare the MCP server under the agent section (claude:, codex:, or gemini:):

Claude Code

claude:
  mcp:
    filesystem:
      command: npx
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
      cwd: /workspace

Moat writes the server configuration to .claude.json inside the container. Claude Code discovers it automatically.

Codex

codex:
  mcp:
    filesystem:
      command: npx
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
      cwd: /workspace

Moat writes the server configuration to .mcp.json in the workspace directory.

Gemini

gemini:
  mcp:
    filesystem:
      command: npx
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
      cwd: /workspace

Moat writes the server configuration to .mcp.json in the workspace directory.

3. Run the agent

moat claude ./my-project

The agent starts the MCP server process inside the container and connects to it over stdio.
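Over stdio, the agent and server exchange newline-delimited JSON-RPC messages on the child process's stdin and stdout. The sketch below spawns a trivial echo child in place of a real server, just to show the framing; a real MCP server would parse the request and reply with a result:

```python
import json
import subprocess
import sys

# Stand-in "server": echoes each stdin line back on stdout.
ECHO_CHILD = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    sys.stdout.write(line)\n"
    "    sys.stdout.flush()\n"
)

def roundtrip(message: dict) -> dict:
    """Send one newline-delimited JSON-RPC message to a child process
    and read one message back."""
    proc = subprocess.Popen(
        [sys.executable, "-c", ECHO_CHILD],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    proc.stdin.write(json.dumps(message) + "\n")
    proc.stdin.flush()
    reply = json.loads(proc.stdout.readline())
    proc.stdin.close()
    proc.wait()
    return reply
```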

Injecting credentials

Use the grant field to inject a credential as an environment variable. This is supported for Codex and Gemini agents:

codex:
  mcp:
    github_server:
      command: /path/to/github-mcp-server
      grant: github
      cwd: /workspace

When grant is specified, Moat sets the corresponding environment variable automatically:

| Grant | Environment variable |
| --- | --- |
| github | GITHUB_TOKEN |
| anthropic | ANTHROPIC_API_KEY |
| openai | OPENAI_API_KEY |
| gemini | GEMINI_API_KEY |

The grant must also appear in the top-level grants: list.
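The mapping is a straightforward lookup. A sketch of the derivation for the grants listed above (the grant names and variable names come from the table; the placeholder value is an assumption):

```python
# Grant name -> environment variable, per the table above.
GRANT_ENV_VARS = {
    "github": "GITHUB_TOKEN",
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "gemini": "GEMINI_API_KEY",
}

def env_for_grant(grant: str) -> dict:
    """Return the environment variable set for a known grant,
    or an empty dict for grants without a known mapping."""
    var = GRANT_ENV_VARS.get(grant)
    return {var: f"<credential for {grant}>"} if var else {}
```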

Environment variables

Pass environment variables directly:

claude:
  mcp:
    my_server:
      command: /path/to/server
      env:
        API_KEY: my-api-key
        DEBUG: "true"
      cwd: /workspace

Sandbox-local MCP server fields

| Field | Type | Description |
| --- | --- | --- |
| command | string | Server executable path (required) |
| args | array[string] | Command arguments |
| env | map[string]string | Environment variables |
| grant | string | Credential to inject as an environment variable (Codex and Gemini only) |
| cwd | string | Working directory for the server process |

OAuth authentication

MCP servers that use OAuth 2.0 can authenticate through moat grant oauth. This acquires tokens via a browser-based authorization code flow with PKCE.

Grant OAuth credentials

moat grant oauth notion --url https://mcp.notion.com/mcp

The --url flag triggers MCP OAuth discovery — Moat fetches the server’s OAuth metadata automatically. No manual endpoint configuration is needed.

For servers without MCP discovery, provide OAuth endpoints directly:

moat grant oauth linear \
    --auth-url https://linear.app/oauth/authorize \
    --token-url https://linear.app/api/oauth/token \
    --client-id your-client-id \
    --scopes "read write"

Alternatively, create a config file at ~/.moat/oauth/<name>.yaml:

auth_url: https://linear.app/oauth/authorize
token_url: https://linear.app/api/oauth/token
client_id: your-client-id
scopes: read write

Config resolution order: CLI flags, then config file (~/.moat/oauth/<name>.yaml), then MCP discovery from --url.
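That precedence (flags over file over discovery) amounts to a per-field merge. A sketch, assuming each source is a dict with the same keys (auth_url, token_url, client_id, scopes) and that absent values are None:

```python
def resolve_oauth_config(flags: dict, file_cfg: dict, discovered: dict) -> dict:
    """Merge OAuth settings per field: CLI flags win, then the
    ~/.moat/oauth/<name>.yaml file, then MCP discovery."""
    merged = {}
    for source in (discovered, file_cfg, flags):  # later sources override
        merged.update({k: v for k, v in source.items() if v is not None})
    return merged
```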

Configure in moat.yaml

Reference the OAuth grant with the oauth:<name> prefix:

grants:
  - oauth:notion

mcp:
  - name: notion
    url: https://mcp.notion.com/mcp
    auth:
      grant: oauth:notion
      header: Authorization

The proxy injects a Bearer prefix when injecting oauth: grants into the Authorization header.
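The oauth: prefix is what tells the proxy to format the value as a Bearer token. A sketch of that formatting rule, assuming non-OAuth grants are injected verbatim:

```python
def header_value(grant: str, credential: str) -> str:
    """Format a credential for injection: oauth: grants get a
    Bearer prefix; other grants are injected as-is."""
    if grant.startswith("oauth:"):
        return f"Bearer {credential}"
    return credential
```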

Token refresh

OAuth tokens are auto-refreshed during runs. When a token expires, the proxy uses the stored refresh token to obtain a new access token without interrupting the agent.
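The refresh check is time-based. The sketch below makes the flow visible with a pluggable refresh function; the field names, skew value, and refresh callable are assumptions for illustration, not Moat's internals:

```python
import time
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class OAuthToken:
    access_token: str
    refresh_token: str
    expires_at: float  # unix timestamp

def ensure_fresh(token: OAuthToken, do_refresh, now=None, skew=30.0) -> OAuthToken:
    """Return the token unchanged if still valid; otherwise call
    do_refresh(refresh_token) for a new (access_token, expires_at)."""
    now = time.time() if now is None else now
    if now < token.expires_at - skew:   # small skew avoids edge-of-expiry use
        return token
    access, expires_at = do_refresh(token.refresh_token)
    return replace(token, access_token=access, expires_at=expires_at)
```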

Observability

All MCP traffic (both remote and host-local) flows through the proxy relay, so it appears in network traces:

$ moat trace --network

[10:23:44.512] GET https://mcp.context7.com/mcp 200 (89ms)

Credential injection events are recorded in audit logs:

$ moat audit

[10:23:44.500] credential.injected grant=mcp-context7 host=mcp.context7.com header=CONTEXT7_API_KEY

Sandbox-local MCP server output appears in container logs:

$ moat logs

Troubleshooting

MCP server not appearing in agent

  • Verify the MCP server is declared in moat.yaml (remote servers at the top level, sandbox-local servers under the agent section)
  • For remote servers, check that the grant exists: moat grant list should show mcp-{name}
  • Check container logs for configuration errors: moat logs

Authentication failures (401 or 403)

  • Verify the grant exists and the credential is correct
  • Revoke and re-grant: moat revoke mcp-{name} then moat grant mcp {name}
  • Verify the url in moat.yaml matches the actual MCP server endpoint
  • Verify the header name matches what the MCP server expects

Stub credential in error messages

If you see moat-stub-{grant} in error output, the proxy did not replace the stub with a real credential. Check:

  • The url in moat.yaml matches the host the agent is connecting to
  • The header name matches what the agent sends
  • The grant name in auth.grant matches the stored credential (mcp-{name})

Connection refused

  • The proxy may not be running. Check moat list to verify the run is active
  • For remote servers, verify the URL is reachable from your network
  • For host-local servers, verify the server is running on the host and listening on the configured port
  • For sandbox-local servers, verify the command path exists inside the container

Sandbox-local server not starting

  • Verify the server executable is installed inside the container. Use a pre_run hook or include it in your project dependencies.
  • Check that command is an absolute path or available on $PATH inside the container
  • Review container logs with moat logs for startup errors

SSE streaming issues

Remote MCP servers that use SSE (Server-Sent Events) for streaming responses are supported through the proxy relay. If streaming appears stalled:

  • Check that the MCP server URL is correct and the server supports SSE
  • Review network traces with moat trace --network to see if requests reach the server
  • Verify no intermediate firewalls or proxies are buffering SSE responses

Agent-specific notes

Claude Code

Remote and sandbox-local MCP servers are configured in the generated .claude.json. Remote servers use type: http with relay URLs pointing to the proxy. Sandbox-local servers use type: stdio with the command and arguments from moat.yaml. Claude Code discovers both types through this config file automatically. See Running Claude Code for other Claude Code configuration options.

Codex

Sandbox-local MCP servers are configured under codex.mcp: in moat.yaml. Configuration is written to .mcp.json in the workspace. See Running Codex for Codex-specific configuration.

Gemini

Sandbox-local MCP servers are configured under gemini.mcp: in moat.yaml. Configuration is written to .mcp.json in the workspace. See Running Gemini for Gemini-specific configuration.