Giving Agents Secrets Without Giving Agents Secrets
The pattern: secrets enter the agent's environment at the tool boundary, not in the prompt. Vault-injected env. 1Password op-run envelopes. Hooks that scrub before write. Your session JSONL gets archived; nothing in there should be sensitive.
Every session you run with Claude Code writes a JSONL file to ~/.claude/projects/<project>/<session>.jsonl. That file contains the full transcript: every prompt, every model response, every tool call, every tool result. It sits on disk indefinitely. Mine occupy 11.6 GB across 538 directories — 23,479 sessions of unfiltered conversation logs.
Now imagine one of those files contains your production database password because the agent ran cat .env to debug something and the result went into the transcript. Or your Anthropic API key, because a session was working on the API integration and you let it read the config. Or any one of the dozen kinds of secrets that show up in a typical project's environment.
That's the scenario I started thinking about hard. The session log is an audit artifact, but it's also a liability. Every secret that flows through a session becomes a secret that lives in a flat file forever. Backups, syncs, machine-to-machine transfers — once a secret is in there, you can't reliably get it out.
The Boundary Pattern
Secrets belong at the tool boundary, not in the model's context. Every secret should be injected by something the model doesn't see, used by something the model doesn't read, and scrubbed from anything that flows back into the model's context.
Above that boundary, everything is fair game for the transcript, because the model needs to reason about it. Below it, the secret never surfaces back to the model: the tool uses it, calls the service, and returns the (sanitized) result.
That sounds obvious in principle. The implementation has three concrete pieces: env-var injection at process start, hook-mediated scrubbing of tool results, and never-read patterns for sensitive files.
Env-Var Injection With 1Password
I use op run from 1Password as the canonical way to inject secrets into a Claude Code session. The pattern: secrets live in 1Password vaults, get pulled into the environment at session-start time, and stay in environment variables that the model never reads.
The .op-env file in the project root references vault paths instead of plaintext values:
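Something like this, with illustrative vault and item names (the op://vault/item/field shape is 1Password's secret-reference format):

```sh
# .op-env: each value is a 1Password secret reference, not a plaintext secret.
# `op run` resolves every op://vault/item/field at process spawn.
DATABASE_URL="op://Production/postgres/connection-string"
ANTHROPIC_API_KEY="op://Development/anthropic/api-key"
WEBHOOK_SIGNING_KEY="op://Production/webhooks/signing-key"
```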
The op:// references are 1Password vault paths. The op run wrapper resolves them at process spawn, injects the actual secret values into the environment, and execs Claude Code. The secrets exist only in the running process's environment block: they never get written to disk in plaintext, and they never appear in the model's context unless something explicitly prints them.
The Scrub Hook
Even with env-var injection, secrets can still leak. A common path: the agent runs printenv to debug, the output goes into a tool result, the tool result goes into the transcript. Now your secrets are in the JSONL.
The fix is a PostToolUse hook that scrubs known-secret patterns before the result reaches the model:
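The full hook is thirty-two lines; here's a minimal sketch of the core logic, assuming the hook payload arrives as JSON on stdin with the result under tool_response. The pattern list, the scrub helper, and the decision/reason output fields are illustrative; check your Claude Code version's hook output contract for the exact fields.

```python
#!/usr/bin/env python3
"""PostToolUse scrub hook (sketch): redact secret-looking values from a
tool result before it lands in the model's context and the transcript."""
import json
import re
import sys

# Env-style assignments whose names suggest a secret bearer.
ENV_ASSIGNMENT = re.compile(
    r"^(?P<name>\w*(?:KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)\w*)=.+$",
    re.IGNORECASE | re.MULTILINE,
)
# Well-known token shapes (Anthropic, AWS, GitHub) as a second net.
TOKEN_SHAPES = re.compile(
    r"\b(?:sk-ant-[\w-]{20,}|AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36})\b"
)

def scrub(text: str) -> str:
    text = ENV_ASSIGNMENT.sub(lambda m: f"{m.group('name')}=REDACTED", text)
    return TOKEN_SHAPES.sub("REDACTED", text)

def main() -> None:
    payload = json.load(sys.stdin)          # hook payload arrives on stdin
    result = payload.get("tool_response")   # field name assumed; see docs
    if isinstance(result, str):
        redacted = scrub(result)
        if redacted != result:
            # Assumption: decision/block feeds `reason` back to the model
            # in place of the raw output; verify against your version.
            print(json.dumps({
                "decision": "block",
                "reason": "Result contained secrets. Redacted copy:\n" + redacted,
            }))
    sys.exit(0)

if __name__ == "__main__":
    main()
```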
Every tool result passes through this filter. If printenv returns 200 lines of environment variables and three of them are real secrets, the model sees API_KEY=REDACTED instead of the actual key. The audit trail in the JSONL stores the redacted version too; the actual value was never visible to either one.
The Never-Read Pattern
Some files just shouldn't be read in a session, period. .env, .env.local, id_rsa, kubeconfig, anything in ~/.aws/credentials. These contain secrets in plaintext and there's no legitimate reason for an agent to read them — anything that needs the values can get them from the environment instead.
A PreToolUse hook on Read enforces this:
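The full hook is twenty-two lines; a minimal sketch, assuming the standard hook payload on stdin (the deny list and refusal wording here are illustrative):

```python
#!/usr/bin/env python3
"""PreToolUse hook on Read (sketch): refuse to open known secret files."""
import json
import re
import sys

# Files that hold plaintext secrets; no session has a reason to read them.
DENY = re.compile(
    r"(^|/)(\.env(\.[\w.]+)?|id_rsa|id_ed25519|kubeconfig|\.netrc)$"
    r"|/\.aws/credentials$"
)

payload = json.load(sys.stdin)
path = payload.get("tool_input", {}).get("file_path", "")

if DENY.search(path):
    # Exit code 2 blocks the tool call; stderr is fed back to the model.
    print(
        f"Refusing to read {path}: it holds plaintext secrets. "
        "Use the values already injected as environment variables.",
        file=sys.stderr,
    )
    sys.exit(2)
sys.exit(0)
```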
The agent literally cannot read your .env. If it tries, it gets the refusal text back, sees the suggestion to use env vars, and adjusts. I've watched this hook fire seventeen times over the past month; each firing was a session that would otherwise have leaked something into the transcript.
The Envelope Pattern for Sensitive Operations
Sometimes the model needs to operate on sensitive data without seeing it. The example I hit most: signing a webhook, where the model needs to construct the payload but not see the signing key.
The solution is an MCP tool that takes an unsigned payload, signs it with a key from the environment, and returns the signed result. The model never sees the key:
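A sketch using the Python MCP SDK's FastMCP; the sign_webhook name, the envelope shape, and the WEBHOOK_SIGNING_KEY variable are illustrative, and HMAC-SHA256 stands in for whatever scheme your webhook consumer actually expects:

```python
#!/usr/bin/env python3
"""MCP tool (sketch): sign a webhook payload without exposing the key."""
import hashlib
import hmac
import json
import os

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("signer")

@mcp.tool()
def sign_webhook(payload: str) -> str:
    """Wrap a payload in an envelope carrying its HMAC-SHA256 signature."""
    # The key lives in this process's environment, set by `op run` at
    # session start. It never appears in the tool's inputs or outputs.
    key = os.environ["WEBHOOK_SIGNING_KEY"].encode()
    signature = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"payload": payload, "signature": signature})

if __name__ == "__main__":
    mcp.run()
```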
The model passes the payload as input. The tool reads the key from its own process env (set by op run at session start). The model gets back the signed envelope. The key never appears anywhere in the transcript. The session log records the unsigned payload going in, the signed envelope coming out, and nothing else.
This pattern generalizes. Any operation that needs a secret can be wrapped in a tool that takes the inputs as parameters and pulls the secret from its own environment. Database queries, API calls, encrypted blob operations — all of them work this way. The model orchestrates; the tools execute; the secrets live in the tools' environment, not in the model's context.
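For example, a database tool in the same shape might look like this (hypothetical run_query tool; psycopg2 and the DATABASE_URL variable are assumptions, and nothing here enforces read-only access):

```python
#!/usr/bin/env python3
"""MCP tool (sketch): run a query with a DSN the model never sees."""
import json
import os

import psycopg2
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("db")

@mcp.tool()
def run_query(sql: str) -> str:
    """Execute SQL against the project database; return rows as JSON."""
    # The connection string comes from this process's environment,
    # not from the model's input, so it never enters the transcript.
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    try:
        with conn, conn.cursor() as cur:
            cur.execute(sql)
            rows = cur.fetchall() if cur.description else []
    finally:
        conn.close()
    return json.dumps(rows, default=str)

if __name__ == "__main__":
    mcp.run()
```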
Table Stakes
Three configurations I now treat as table stakes for any project where I'll run multi-turn sessions:
1. A .op-env file declaring which secrets the session needs and where they live in 1Password
2. A startup wrapper that runs op run --env-file=.op-env before launching claude-code (sketched below)
3. The secret-scrub and refuse-secret-files hooks installed at the user level (~/.claude/hooks/), registered as sketched below
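The wrapper is a couple of lines; this version assumes the Claude Code binary is installed as `claude`:

```sh
#!/usr/bin/env bash
# claude-op: launch Claude Code with secrets resolved from 1Password.
# `op run` injects the resolved op:// values into the child process's
# environment; nothing touches disk in plaintext.
exec op run --env-file=.op-env -- claude "$@"
```

Hook registration lives in ~/.claude/settings.json. The schema below matches recent Claude Code versions, but treat the matcher values and script paths as assumptions to check against your install (an empty matcher is intended to match all tools):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Read",
        "hooks": [
          { "type": "command", "command": "python3 ~/.claude/hooks/refuse-secret-files.py" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "",
        "hooks": [
          { "type": "command", "command": "python3 ~/.claude/hooks/secret-scrub.py" }
        ]
      }
    ]
  }
}
```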
Total setup time on a new project: about ten minutes once you have the templates. The protection it adds: the next session that accidentally tries to read a secret file gets refused; the next tool result that contains a secret gets scrubbed; the next backup of ~/.claude/ doesn't contain anything that would matter if it leaked.
What This Costs
The performance overhead is negligible. The PostToolUse scrub hook adds about 8ms per tool call (regex pass over the result string). The PreToolUse refusal hook adds about 3ms. Across 60 turns of a typical session, that's under a second of total added latency. The cost-benefit is heavily favorable.
The cognitive overhead is the harder cost. You have to think about secrets at the start of every project — declare what's needed, populate the vault, configure the wrapper. Most projects will use the defaults you've already set up, but some will need additional secrets, and you have to add them deliberately.
The alternative is much worse. A single secret leaked into a session log is a secret you have to assume is compromised. You rotate it, audit access, check downstream systems for unauthorized use. That's hours of work for one slip. The discipline of injecting secrets at the tool boundary is the cheapest way to make slips structurally impossible.
“The 11.6 GB of session logs on my disk contain zero plaintext secrets. I checked. The hooks have done their job.”