# Prompt Lazy Loading AI Design Pattern
The idea behind prompt lazy loading is to defer loading prompts, context, data, rules, and similar material into the LLM's context window until a request actually needs them.
By lazy loading all of that, LLM usage becomes more efficient: each call spends fewer tokens, and keeping irrelevant material out of the context reduces the risk of hallucinations.
Works great when combined with the [[Receptionist AI Design Pattern]].
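A minimal sketch of the pattern in Python. All names here (`LazyPromptStore`, `register`, `build_prompt`, the section names) are hypothetical, invented for illustration: prompt sections are registered as loader callables, and a section's text is only produced, and cached, the first time a request asks for it.

```python
from typing import Callable, Dict, List


class LazyPromptStore:
    """Holds prompt sections as deferred loaders instead of eager strings."""

    def __init__(self) -> None:
        self._loaders: Dict[str, Callable[[], str]] = {}
        self._cache: Dict[str, str] = {}

    def register(self, name: str, loader: Callable[[], str]) -> None:
        # Nothing is loaded yet; we only remember how to load it.
        self._loaders[name] = loader

    def get(self, name: str) -> str:
        # Load on first access, then reuse the cached text.
        if name not in self._cache:
            self._cache[name] = self._loaders[name]()
        return self._cache[name]

    def build_prompt(self, base: str, needed: List[str]) -> str:
        # Only the sections relevant to this request are appended to the base prompt.
        return "\n\n".join([base] + [self.get(n) for n in needed])


store = LazyPromptStore()
store.register("billing_rules", lambda: "Billing rules: refunds within 30 days...")
store.register("legal_rules", lambda: "Legal rules: jurisdiction clauses...")

# A billing question pulls in only the billing rules; the legal rules
# are never loaded and never consume context tokens.
prompt = store.build_prompt("You are a support assistant.", ["billing_rules"])
```

In practice the loaders would read from files, a vector store, or an API, and a router (such as the receptionist pattern above) would decide which section names each request needs.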
## Benefits
- **Performance**: Loads only the information a request actually needs
- **Cost Efficiency**: Keeps irrelevant material out of the context window, reducing token usage per call
- **Clarity**: Clear separation of concerns between the base prompt and deferred material
- **Scalability**: New roles or knowledge bases can be added without growing the base prompt
- **Maintenance**: Modular design makes individual sections easy to update in isolation