How to write a CLAUDE.md file (with examples Claude Code actually respects)
The 18 sections that matter, the four mistakes that undermine them, and the structural rules that turn a memory file into a force multiplier for AI pair-programming.
Claude Code reads CLAUDE.md at the start of every conversation. If you treat it like a README, you'll get a polite agent that asks for clarification a lot. If you treat it like a contract — what to build, what to refuse, what specific failures to plan for — you get a force multiplier.
I've reviewed maybe forty CLAUDE.md files now, mostly from research projects we run for indie builders. The ones that produce great software all share a structure. The ones that don't all make the same mistakes. This post is that structure.
What CLAUDE.md actually is
CLAUDE.md is a markdown memory file that lives at the root of your repo. Claude Code injects it into the context of every conversation as durable instructions. Three things follow from that:
- It's not documentation. Documentation is for humans. CLAUDE.md is for an agent that already has the code, just not the strategic context.
- It's load-bearing. Mistakes in CLAUDE.md propagate into every reply. A vague rule produces vague code.
- It's budget-constrained. Every token here competes with code context. Aim for tight; cut anything you'd be embarrassed to defend in a code review.
The 18-section structure
This is the order we use in the auto-generated CLAUDE.md files YouTubeToSaaS produces, refined across hundreds of project runs:
# Project Mission
# Research Topic
# Software Goal
# Target User
# What We Learned
# Product Direction
# Defensible Wedge
# MVP Features
# Advanced Features
# Technical Architecture
# AI Agents
# Data Sources
# Risk Rules
# Failure Modes
# Compliance Notes
# Current Tasks
# Do Not Build Yet
# Next Implementation Steps

Why this order? It mirrors the cognitive flow an agent needs: identity (who is this for?) → strategy (what's the wedge?) → tactics (what do I build?) → guardrails (what do I refuse?) → execution (what's next?).
The five non-negotiable sections
Most CLAUDE.md files I've reviewed could be cut in half. But these five sections are non-negotiable. Skip any of them and you'll feel the cost in agent quality.
1. Defensible Wedge
One sentence. What specific positioning can your product credibly own that competitors universally miss?
"Better UX." "AI-powered." "Modern." Every one of these is a non-wedge. They're aspirations, not positionings. If your wedge is "better UX", the agent will produce generic UX. If your wedge is "transcript fallback chain that survives YouTube IP-blocks from cloud datacenters by routing through Supadata", the agent will produce that specifically.
Good wedge example, lifted from a real CLAUDE.md we generated:
Trade-decision software that explains why a strategy was flagged hype before showing the strategy itself — competitors present strategies as facts; we present them as claims with explicit evidence quality. Our defensibility is the explicit-evidence column, not the model behind it.
2. Failure Modes
At least 4 entries, each named. Format: mode → trigger → mitigation.
## Failure Modes
- **OAuth token expiry mid-pipeline**
Trigger: long-running ingest job spans the 1-hour token TTL
Mitigation: refresh in a worker hook before each stage; fail fast if refresh returns 401
- **Provider rate-limit on cold-start of new sessions**
Trigger: simultaneous user surge, each first message hits the API fresh
Mitigation: queue with token-bucket per provider; user-visible "warming up" state
- **Embedding cosine drift between text-embedding-3-small versions**
Trigger: provider silently rotates the model under the same name
Mitigation: pin the model name explicitly, store the model version with each embedding row
- **YouTube IP-block on the worker datacenter**
Trigger: cloud-host IP egress hits Google's bot-detection
Mitigation: HTTPS_PROXY env var pointing at a residential proxy + Supadata fallback

The reason this section is mandatory: every real product hits these eventually, and an agent without explicit failure-mode context will confidently ship code that works in dev and fails in production. If your CLAUDE.md doesn't list 4 failure modes, you don't understand the product yet. That's not a writing problem, it's a thinking problem, and CLAUDE.md is a great forcing function.
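To make one of those mitigations concrete (the code itself stays out of CLAUDE.md, per Mistake 1 below), here's a rough Python sketch of the embedding-version pin. The OpenAI call is real; the returned row shape is just an illustration of what to persist, not a prescribed schema.

```python
# Sketch of the "pin the model, record what was served" mitigation.
# The row dict is illustrative; store it however your persistence layer expects.
from openai import OpenAI

client = OpenAI()                           # key comes from OPENAI_API_KEY, never the repo
EMBEDDING_MODEL = "text-embedding-3-small"  # pinned explicitly, never left to a library default

def embed_row(text: str) -> dict:
    resp = client.embeddings.create(model=EMBEDDING_MODEL, input=text)
    return {
        "text": text,
        "vector": resp.data[0].embedding,
        "model_version": resp.model,        # what the provider actually served, for drift audits
    }
```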
3. Risk Rules
Hard guardrails the agent must obey at runtime. Not philosophical; operational. "Never charge a customer without explicit confirmation" is a rule. "Be careful with money" is not.
## Risk Rules
- Never invoke a Stripe charge without an explicit user-confirmed Checkout session.
- Never run a destructive SQL migration without showing a dry-run plan first.
- Never include verbatim YouTube transcript text > 25 words in user-facing files; paraphrase only.
- For finance/stock projects, every output ends with: "Not financial advice."
- Never store API keys in repo files; always read from env.
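To show what "operational, not philosophical" looks like once the agent builds it (again, the code belongs in the product, not in CLAUDE.md), here's a hedged sketch of the dry-run rule. The keyword heuristic and the executor callback are illustrative assumptions, not any specific migration tool's API.

```python
# Rough sketch of enforcing "show a dry-run plan before a destructive migration".
# The keyword check is a stand-in for whatever classification your tooling uses.
from typing import Callable

DESTRUCTIVE_KEYWORDS = ("DROP ", "TRUNCATE ", "DELETE FROM ", "ALTER TABLE ")

def run_migration(sql: str, execute: Callable[[str], None], *, confirmed: bool = False) -> None:
    destructive = any(kw in sql.upper() for kw in DESTRUCTIVE_KEYWORDS)
    if destructive and not confirmed:
        # Surface the plan and stop; someone re-runs with confirmed=True after reviewing it.
        print("DRY RUN, nothing executed. Statement was:\n" + sql)
        return
    execute(sql)
```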
4. Do Not Build Yet
This is the section nobody writes and everybody needs. List the things that seem obviously next but the evidence doesn't support yet.
Why it matters: an agent without a "do not build" list will eagerly sprint toward whatever's mentioned most in the rest of the doc. Frequency in your prose is not endorsement. The "do not build" list is your way of telling the agent "yes I know this is tempting, but the data doesn't justify it yet".
## Do Not Build Yet
- Mobile app (web-responsive is enough until DAU > 1000)
- Multi-tenant team accounts (single-user covers the wedge)
- Self-hosted Whisper (Supadata + proxy is cheaper at our scale)
- Marketplace / public sharing of research (security + moderation cost out of proportion to value)
- A "real-time" trading mode (compliance scope explodes; hold for v3)5. Current Tasks
The only section that should churn week-to-week. Everything else in CLAUDE.md is durable; this one is the work queue. Keep it tight — 3-7 items max, ordered by priority. Anything older than 2 weeks either gets done or gets explicitly killed.
Anti-patterns I see weekly
Mistake 1: Including code samples
CLAUDE.md should describe constraints, not implementation. "We use FastAPI" is fine. Pasting a 30-line FastAPI route as an example is not — the agent will copy that exact structure even when it's the wrong shape for the new task.
Mistake 2: Marketing copy
I've seen "Our mission is to empower creators" verbatim from a landing page in CLAUDE.md. The agent has no use for marketing copy. Replace it with the operational version: "Built for indie hackers running 1-3 research projects per month."
Mistake 3: Open-ended directives
"Write clean code." "Optimize for performance." "Use best practices." Every one of these reads as noise to the agent because they don't constrain the decision space. Replace each with a named rule: "Functions over 40 lines must be split unless atomicity is required." "First-render LCP target < 2.5s on a cold cache."
Mistake 4: Stale Current Tasks
A 6-month-old "Current Tasks" list is worse than no list — the agent treats stale items as currently relevant. Either edit weekly or delete the section entirely.
A complete (small) example
Here's an actual CLAUDE.md from a real research project we ran on AI-driven B2B sales tools. Copy as a starter; adjust to your domain.
# Project Mission
Build decision-support software that helps sub-30-person B2B SaaS teams identify the 3-5 inbound leads most worth a personalized first-touch each day.
# Defensible Wedge
Surface "why this lead, why now, why this hook" in one screen — competitors show enrichment fields and leave the decision to the user. Our defensibility: the why-now signal we synthesize from public web events.
# MVP Features
- Connect HubSpot inbound list
- Daily ranked top-5 with one-click first-touch draft
- Per-lead "why now" panel with cited sources
# Failure Modes
- HubSpot API rate-limit at 100 req/10s — paginate + cache 4hr per account
- Public-event source rot (LinkedIn changes structure) — abstract behind a vendor adapter, nightly contract test
- Hallucinated "why now" for low-signal leads — gate output on a confidence threshold; fall back to "no clear why now this week"
- Dedup race when two team members open the same lead — optimistic lock + conflict toast
# Risk Rules
- Never auto-send outreach. Always draft, never dispatch.
- Never store HubSpot OAuth refresh token unencrypted.
- Never include scraped LinkedIn personal data older than 90 days in outputs.
# Do Not Build Yet
- Cold outbound list-building (out of scope; tightens spam exposure)
- Email-warmup features (commodity, not differentiating)
- A Slack bot version (web-first until DAU > 200)
# Current Tasks
1. Draft the "why now" prompt and run on 50 sample leads end-to-end
2. Implement the HubSpot daily-pull worker
3. Build the per-lead detail page with cited sources
4. Add the confidence gate + fallback copy

Notice what's not there: no marketing copy, no tutorial, no code, no list of "all the libraries we might use someday". The whole thing fits in 30 lines and produces unambiguous agent behavior.
Generating CLAUDE.md from research
The hardest part of writing a good CLAUDE.md is having researched enough to fill the non-negotiable sections honestly. You can't fake a defensible wedge — either the corpus you've studied surfaced one or it didn't. You can't invent failure modes — either your sources described production breakage or they didn't.
That's the gap our product fills. YouTubeToSaaS takes a research topic + 10–50 videos, runs the synthesis methodology described in our product-research guide, and emits a CLAUDE.md with all 18 sections populated from the evidence — including the non-negotiable five with real content rather than placeholders.
Drop in your topic and 10-20 videos. We'll synthesize across them and emit a CLAUDE.md, plus a starter prompt for Claude Code, ready to paste into a fresh repo.
Closing thought
CLAUDE.md is the cheapest leverage you have over agent quality. Five extra minutes writing a real defensible-wedge sentence saves hours of nudging the agent toward the right shape. Five named failure modes save you the production incident you would otherwise ship into.
Don't write it like a README. Write it like a contract.