Aider Context Window: How to Stop Hitting Token Limits
Aider manages its own context window through the repo map — a compressed representation of your codebase structure. But when you add large files to the chat, include verbose test output, or work on big repos, Aider still hits token limits. Here is how to work within those limits and use the Token Limits REST API to compress what Aider sends.
Aider is designed to be context-efficient: the repo map summarizes your codebase rather than reading every file into context. Even so, the window fills up when you add specific files to chat, paste in error output, or work on large files with long functions. Understanding where the tokens go helps you get more out of each session.
How Aider uses its context window
- ✓ Repo map (--map-tokens): a compressed summary of your codebase structure (functions, classes, signatures), adjustable in size
- ✓ Files in chat (/add): the full contents of every file you explicitly add
- ✓ Chat history: all previous messages in the current session
- ✓ System prompt: Aider's own instructions consume some context
- ✓ Error output: any error messages you paste in or that Aider captures from running tests
Aider context window by model
| Model | Context window | Recommended --map-tokens |
|---|---|---|
| claude-sonnet-4-6 | 1,048,576 tokens | 8,000-16,000 |
| claude-opus-4-6 | 1,048,576 tokens | 8,000-16,000 |
| gpt-4o | 128,000 tokens | 4,000-8,000 |
| gpt-4.1 | 1,000,000 tokens | 8,000-16,000 |
| claude-haiku-4-5 | 200,000 tokens | 4,000-8,000 |
How to reduce Aider token usage
- Use /drop filename to remove files from context as soon as you are done with them
- Use /clear to reset chat history when switching tasks without losing your repo map
- Add only the files directly relevant to the current change — not the whole module
- Adjust --map-tokens down (try 4096) on large repos where the map fills context
- Use --model claude-sonnet-4-6 for the largest available context window
- Compress test output and error logs before pasting them with /paste
Compressing Aider error output before pasting
When you paste test failures, build errors, or stack traces into Aider, the raw output is full of noise. A 500-line pytest failure can run to 15,000 tokens — most of which is repeated test names, fixture setup, and formatting that Aider does not need. Paste the raw output into tokenlimits.app/compress first. The compressed version is typically 60-80% smaller and contains all the information Aider needs to fix the bug.
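To gauge whether a log is worth compressing before you paste it, a common rule of thumb is roughly four characters per token for English text and code. This is a heuristic, not an exact tokenizer, but it is close enough to decide whether a log will eat a meaningful chunk of your window:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English/code."""
    return len(text) // 4

# A synthetic 500-line pytest failure log, ~30 characters per line.
log = "FAILED tests/test_example.py::x\n" * 500
print(estimate_tokens(log))
```

Anything in the thousands of tokens is a good candidate for compression before it goes into Aider's context.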
Using Token Limits REST API with Aider in scripts
If you are running Aider programmatically or in CI, you can pipe tool output through the Token Limits REST API before passing it to Aider. POST the raw text to https://tokenlimits.app/api/compress with your license key. The API returns compressed text that you can pass to Aider via stdin or the --message flag. This integrates cleanly into any Aider automation pipeline.
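A minimal sketch of that pipeline in Python, using only the standard library. The endpoint URL is the one given above, but the request and response field names (`text`, `license_key`, `compressed`) are assumptions — check the Token Limits API documentation for the exact schema:

```python
import json
import subprocess
import urllib.request

API_URL = "https://tokenlimits.app/api/compress"

def build_request(raw_text: str, license_key: str) -> urllib.request.Request:
    # Field names here are assumed; consult the Token Limits API docs.
    payload = json.dumps({"text": raw_text, "license_key": license_key}).encode()
    return urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def compress(raw_text: str, license_key: str) -> str:
    with urllib.request.urlopen(build_request(raw_text, license_key)) as resp:
        return json.loads(resp.read())["compressed"]  # response key assumed

def send_to_aider(message: str) -> None:
    # Hand the compressed text to aider non-interactively via --message.
    subprocess.run(["aider", "--message", message], check=True)
```

In a CI step this reduces to something like `send_to_aider(compress(test_output, key))`, with the raw test output captured from the failing job.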
Compress Aider error output before it hits context
tokenlimits.app/compress is free, runs in your browser, and requires no account. Paste test failures and build errors — get 60-80% smaller output to paste into Aider.
FAQ
What is the Aider context window limit?
Aider uses whatever context window the underlying model supports. With Claude Sonnet 4.6 (--model claude-sonnet-4-6), you get 1 million tokens. With GPT-4o, you get 128k. Aider's repo map is designed to stay within these limits automatically, but large /add operations can exceed them.
Why does Aider say "context window exceeded"?
You have added more files and chat history than the model's context window allows. Use /drop to remove files you are done with, /clear to reset history, or reduce --map-tokens to make more room for added files.
What is the best model for Aider on large codebases?
Claude Sonnet 4.6 or GPT-4.1, both with 1 million token context windows. These give Aider enough room to hold a large repo map plus several files in chat simultaneously.
How do I see how many tokens Aider is using?
Aider shows token usage at the end of each request in the session output. You can also use --verbose to see detailed token counts for each component of the context.
Can Token Limits work directly with Aider like it does with Cursor?
Aider does not support MCP servers natively, so the automatic MCP integration is not available. However, the free paste compressor at tokenlimits.app/compress works great for compressing error output before pasting, and the REST API works in any automated Aider script.