Prompt Engineering

Beyond basic CLAUDE.md configuration, expert users develop systematic approaches to prompting that consistently produce better results.

Claude Code processes prompts in layers:

┌─────────────────────────────────────┐
│ 1. System prompt (Anthropic's base) │ ← You can't change this
├─────────────────────────────────────┤
│ 2. CLAUDE.md (project context)      │ ← Always loaded
├─────────────────────────────────────┤
│ 3. Session context (conversation)   │ ← Accumulates
├─────────────────────────────────────┤
│ 4. Your current message             │ ← Most recent = most attention
└─────────────────────────────────────┘

Each layer affects output. Optimize all of them.

Your CLAUDE.md is always in context. Make every token count.

A lean template:

```md
# [Project Name]

## Critical (Read First)
- [Absolute rules that must never be broken]
- [Security constraints]
- [Off-limits files/patterns]

## Context
[One paragraph: what this is, tech stack, key patterns]

## Commands
- `make dev` - start server
- `make test` - run tests

## Conventions
- [Code style rules]
- [Naming conventions]
- [File organization patterns]
```
| Include | Why |
| --- | --- |
| Critical constraints | Prevents catastrophic mistakes |
| Tech stack summary | Focuses tool/library suggestions |
| Key commands | Enables autonomous execution |
| Naming conventions | Produces consistent code |
| File patterns | Helps navigation |

| Exclude | Why |
| --- | --- |
| Project history | Wastes tokens |
| Team member bios | Irrelevant to coding |
| Exhaustive file lists | Claude can explore |
| Full API documentation | Pull on-demand via MCP |
| Configuration details | Claude can read config files |
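Since the whole file ships with every request, it helps to watch its size. A rough sketch using the common ~4-characters-per-token heuristic (an approximation, not Claude's actual tokenizer, and the 1,500-token budget is an illustrative choice, not an official limit):

```python
def rough_token_count(text, chars_per_token=4):
    """Very rough token estimate (~4 chars per token for English prose)."""
    return len(text) // chars_per_token

def check_claude_md(path="CLAUDE.md", budget=1500):
    """Warn when the always-loaded context file grows past a budget."""
    with open(path, encoding="utf-8") as f:
        tokens = rough_token_count(f.read())
    if tokens > budget:
        print(f"{path} is roughly {tokens} tokens, over the {budget}-token budget")
    return tokens
```

Run it in CI or a pre-commit hook so CLAUDE.md bloat gets flagged before it silently taxes every session.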
An effective prompt follows this structure:

[Context] → [Task] → [Constraints] → [Format]

Example:

> I'm working on the payment module (context).
> Add retry logic for failed charges (task).
> Use exponential backoff, max 3 retries,
> don't retry on card_declined errors (constraints).
> Add tests for each retry scenario (format/output).
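For a prompt like that, the retry logic Claude produces might resemble this sketch (the `charge_with_retry` and `CardDeclinedError` names are illustrative, not from a real payments API):

```python
import time

class CardDeclinedError(Exception):
    """Permanent failure: the card was rejected. Never retried."""

def charge_with_retry(charge_fn, max_retries=3, base_delay=1.0):
    """Run charge_fn, retrying transient errors with exponential backoff.

    Retries at most max_retries times (1s, 2s, 4s delays by default) and
    re-raises CardDeclinedError immediately, per the prompt's constraints.
    """
    for attempt in range(max_retries + 1):
        try:
            return charge_fn()
        except CardDeclinedError:
            raise                       # constraint: don't retry declines
        except Exception:
            if attempt == max_retries:  # out of retries, surface the error
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Because every constraint was stated upfront, each branch here maps directly to a line of the prompt, which also makes the requested per-scenario tests easy to write.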
# Bad: Long and vague
> I need you to help me with the authentication
> system in my application. There are some issues
> with how users are being logged in and I want
> to make it better and more secure.
# Good: Short and specific
> Add rate limiting to POST /auth/login
> - 5 attempts per IP per minute
> - Return 429 with Retry-After header
> - Store counts in Redis
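An implementation matching those bullets might look like this sketch; a plain dict stands in for Redis here so it runs standalone (production code would use Redis `INCR` plus `EXPIRE` on a per-IP key, as the prompt specifies):

```python
import time

class LoginRateLimiter:
    """Fixed-window limiter: N attempts per IP per window.

    check() returns (allowed, retry_after): the caller returns HTTP 429
    with a Retry-After header when allowed is False.
    """
    def __init__(self, limit=5, window=60):
        self.limit = limit
        self.window = window
        self._counts = {}  # ip -> (window_start, attempt_count)

    def check(self, ip, now=None):
        now = time.time() if now is None else now
        start, count = self._counts.get(ip, (now, 0))
        if now - start >= self.window:   # window elapsed, start fresh
            start, count = now, 0
        if count >= self.limit:
            return False, int(start + self.window - now) + 1
        self._counts[ip] = (start, count + 1)
        return True, 0
```

Note how the specific prompt translates almost mechanically into code: limit, window, and the 429/Retry-After contract were all pinned down before Claude wrote a line.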

Put critical constraints at the start AND end:

> IMPORTANT: Do not modify any files in core/
>
> Refactor the user service to use dependency injection
>
> Remember: core/ is off-limits

Why: Claude attends most to the beginning and end of prompts.

Show, don’t just tell:

> Format new API routes like this existing one:
> @router.post("/users")
> async def create_user(user: UserCreate, db: DB = Depends(get_db)):
>     """Create a new user."""
>     return await user_service.create(db, user)
> Now add: GET /users/{id}, PUT /users/{id}, DELETE /users/{id}

State what NOT to do:

> Add caching to the product queries
>
> Do NOT:
> - Modify the Product model
> - Add new dependencies
> - Change the API response format
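One way to satisfy all three "do NOT" constraints is to cache from the outside, wrapping the existing query function rather than touching the model or responses. A minimal sketch (the `get_product` query it decorates below is hypothetical):

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds=60):
    """Cache a function's results by arguments for ttl_seconds.

    Wraps the query from the outside, so the Product model, the
    dependency list, and the API response format stay untouched.
    """
    def decorator(fn):
        cache = {}  # args -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            entry = cache.get(args)
            if entry and entry[0] > time.time():
                return entry[1]          # fresh cache hit
            value = fn(*args)
            cache[args] = (time.time() + ttl_seconds, value)
            return value
        return wrapper
    return decorator
```

Stating the negative constraints upfront is what steers Claude toward a non-invasive design like this instead of, say, bolting a cache column onto the model.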

Break complex work into explicit phases:

> Let's do this in steps:
>
> Phase 1: Add the database migration
> Phase 2: Update the model
> Phase 3: Add the API endpoint
> Phase 4: Write tests
>
> Start with Phase 1. Wait for my approval before each phase.

Specify exact output format:

> Review this code and output your findings as:
>
> ## Security Issues
> - [issue]: [file:line] - [description]
>
> ## Bugs
> - [issue]: [file:line] - [description]
>
> ## Suggestions
> - [suggestion]: [rationale]
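A fixed output format like this also makes the review machine-readable, so findings can feed a dashboard or CI gate. A small sketch that parses those `##` sections back into a dict:

```python
def parse_review(text):
    """Group '- item' bullets under their preceding '## Section' heading."""
    sections, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("## "):
            current = line[3:]
            sections[current] = []
        elif line.startswith("- ") and current is not None:
            sections[current].append(line[2:])
    return sections
```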

When something isn’t working, test variations:

# Attempt A
claude -p "add input validation to the user form"
# Attempt B
claude -p "add input validation to the user form following the pattern in forms/product.py"
# Compare results

After changing CLAUDE.md, verify it still handles known tasks:

# Your test tasks
claude -p "explain the auth flow" # Should match expected output
claude -p "where is user data stored" # Should find correct files
claude -p "add a new API endpoint" # Should follow conventions
# Bad: Pasting everything upfront
> Here's all our documentation: [10 pages]
> Here's the relevant code: [5 files]
> Here's the ticket: [500 words]
> What do you think?

Fix: Let Claude pull what it needs:

> The ticket is in docs/tickets/AUTH-123.md
> Implement it following our patterns
# Bad: Assuming Claude knows your preferences
> make it better
> fix the issues
> clean this up

Fix: Be explicit:

> Reduce the cyclomatic complexity of this function
> Extract the validation logic into a separate method
> Add type hints to all parameters
# Bad: Changing requirements mid-stream
> add a login page
> [Claude implements]
> actually make it a modal
> [Claude changes]
> wait, add OAuth too

Fix: Plan first:

> Before implementing, let's define:
> - Login method: modal or page?
> - Auth providers: password only or OAuth?
> - Session handling: JWT or cookies?

For complex tasks, chain prompts across sessions:

# Session 1: Planning
claude -p "create a spec for adding search functionality" > spec.md
# Session 2: Implementation
claude -p "implement the spec in spec.md, start with the backend"
# Session 3: Testing
claude -p "write comprehensive tests for the search feature we just built"

Each session gets fresh context focused on its phase.