# Context engineering

Most people think AI prompts are just about asking the right questions. But that's only part of the story. **The answers you get from AI don't only depend on the questions you ask, but also on the context you provide and how you structure your prompts.**

Let's put the question and prompt structure parts aside and focus on the context part. For the sake of the explanation, let's also forget about the "tools" and "capabilities" that current [[Large Language Models (LLMs)]] can use (e.g., Web Search, Deep Research, [[Model Context Protocol (MCP)]], etc.). Last but not least, consider this as a useful mental model rather than an exact science: there are tons of variables, a lot of variance between models, and things keep changing.

Context is the additional information you provide along with your question(s). Think of it like this: **if you just ask questions to AI without providing any context, all you can get are generic responses that could apply to anyone/anything**. But if you provide context, you actually help the AI "focus" its attention mechanism and give you an answer that "makes sense" for the given context. Thus, **context can be considered as a filter or funnel: the more specific the context, the more specific the answer**.

But it's not that simple. In reality, **the quality and quantity of the provided context *heavily* impact the quality (i.e., relevance, specificity, accuracy, etc.) of the responses you can get**. I have four different cases in mind:

- If you don't provide any context, you'll get generic answers
- If you don't provide enough context, you'll get vague answers
- If you provide the ideal context, you'll get a very specific answer
- If you provide too much context, you'll end up with AI hallucinations

**I think of context as a "guide", "filter" or "funnel" between the set of possible answers and the one that AI gives you back**. It's an oversimplification, of course, but I feel like it's a useful mental model. It's the exact same thing when you work with a junior teammate: you could flood them with information, but they would just end up confused. You have to give them just enough information/context for the task at hand.

Here's how I picture this:

![[DeveloPassion's Newsletter 197 - Context Engineering - context vs results.png]]

It's far from perfect, but it conveys the rough idea. The black dots on top represent the set of possible answers AI could give you based on your prompt. Below, AI filters that in some way to provide you with a response:

- When you provide no context, there's no filter/funnel and you get a generic answer
- When you provide context but not enough, there's filtering, but it's fuzzy and you get vague answers back
- When you provide the "right" context, the funnel is effective, and you get just what you need
- When you provide too much context, the filter is broken, and you get hallucinations

With this mental model in mind, it should be obvious that **providing the right quantity of the right context matters a lot**. And that's where context engineering comes in. **Context engineering is about carefully designing and managing not just your prompts and their structure, but the entire context that they include**. It's the difference between:

- Asking "Help me write a newsletter"
- Asking "Help me write a newsletter" while providing clear explanations about your target audience, your writing style, your angle, your ideas, background, examples, constraints, rules, ...
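To make this concrete, here's a minimal sketch in Python of what that difference can look like in practice. Everything here (the `build_prompt` helper, the section labels, the example values) is hypothetical and purely illustrative; the point is that the question stays the same while the surrounding context changes:

```python
# A minimal, hypothetical sketch: the same question, with and without context.

BARE_PROMPT = "Help me write a newsletter"  # generic question -> generic answer

def build_prompt(question: str, context: dict[str, str]) -> str:
    """Prepend labeled context sections to a question.

    Each section narrows the funnel a bit more. The goal is "just
    enough" context: too little stays vague, too much degrades
    the response.
    """
    sections = [f"## {label}\n{value}" for label, value in context.items()]
    return "\n\n".join(sections + [f"## Task\n{question}"])

contextualized_prompt = build_prompt(
    "Help me write a newsletter",
    {
        "Audience": "Senior developers curious about AI tooling",
        "Writing style": "Conversational, first person, concrete examples",
        "Angle": "Context engineering as a mental model, not an exact science",
        "Constraints": "Max 1500 words, one diagram, no hype",
    },
)

print(contextualized_prompt)
```

Both prompts ask for the same thing; only the context around the question differs, and that's what moves the answer from "could apply to anyone" toward "makes sense for you".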
The goal isn't just to craft a good prompt, but to optimize the entire context to improve outputs and reduce hallucinations.

In addition, context engineering doesn't only apply while crafting your prompts; it also applies during your discussions/interactions with AI. Each time you start a new "conversation", the AI context is empty (you can think of it as "memory"). The context provided in your initial prompt adds context/memory, and as the exchange continues, the context evolves. It has to be kept under control: if you optimize your initial prompt but then add too much to the context, you end up with the same issues. So the context has to be managed throughout the interaction. That's why tools such as [[Claude Code]] integrate commands to clear or compact the context (e.g., `/clear` and `/compact`). This not only helps remove what you don't need anymore (helping the AI focus on what matters), but also limits token usage.

Context engineering may sound stupid/useless, but I think it makes perfect sense and is valuable, just like "software engineering". When you take it seriously, it does make a difference. It's not just a trendy name. What I like about this idea is that it emphasizes the importance of carefully designing prompts and considering the resulting context as the things to optimize. I agree that we could do without those kinds of distinctions and just call it all "prompting", but the subtlety is important as a way to educate people on how to use AI better. Most people only care about explaining the task/question "right", and don't necessarily consider what the context should or shouldn't include, whether it's bloated or clean, etc. This is especially impactful now that prompts can pull/push data, search the web, etc. It's all too easy to create "mental overload" for AI, leading to poor (or even dangerous) results, even if the prompts were well designed.

In a way, it's similar to the distinction between Programming and Vibe Coding. Both lead to code, but said code is vastly different (at least for now 😂).

## References

- https://simonwillison.net/2025/Jun/27/context-engineering
- https://www.philschmid.de/context-engineering
- https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-how-to-fix-them.html
- https://www.dbreunig.com/2025/06/26/how-to-fix-your-context.html