# Receptionist AI Design Pattern
With the Receptionist [[Large Language Models (LLMs)|Large Language Model (LLM)]] pattern, all user inputs are passed to a receptionist LLM, which routes each input to a specialized LLM and delegates response generation to it.
I discovered this pattern via [[Matt Pocock]]. He visualized it like this:
![[Receptionist LLM Pattern - visualization.png]]
The interesting aspect of this pattern is that each LLM (including the receptionist) can load/use specific context, data, rules, tools, etc.
In addition, combined with [[Prompt Lazy Loading AI Design Pattern (PLL)]], it is possible to avoid loading or keeping too much context, data, and rules at once. This reduces the risk of hallucinations and makes usage more efficient.
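A minimal sketch of the idea in Python. All names here are hypothetical: the receptionist is simulated with keyword matching (a real receptionist would itself be an LLM prompted to return a route name), and the specialist LLM calls are replaced with placeholder strings so the example stays self-contained.

```python
def receptionist_route(user_input: str) -> str:
    """Stand-in for the receptionist LLM: picks a specialist by topic.

    In a real system, this would be an LLM call whose prompt asks it to
    return one of the known route names.
    """
    topics = {
        "billing": ["invoice", "refund", "payment"],
        "tech_support": ["error", "crash", "bug"],
    }
    lowered = user_input.lower()
    for route, keywords in topics.items():
        if any(keyword in lowered for keyword in keywords):
            return route
    return "general"


# Each specialist carries only its own context/rules/tools; nothing is
# loaded until the receptionist selects that route (lazy loading).
SPECIALISTS = {
    "billing": "You are a billing assistant. Context: pricing table...",
    "tech_support": "You are a support engineer. Context: known issues...",
    "general": "You are a general-purpose assistant.",
}


def handle(user_input: str) -> str:
    route = receptionist_route(user_input)
    system_prompt = SPECIALISTS[route]  # only this context is loaded
    # A real implementation would now call the specialist LLM with
    # system_prompt plus the user input; we return a placeholder.
    return f"[{route}] would answer using: {system_prompt}"
```

The key design point is that the receptionist sees only enough to classify the request, while each specialist sees only its own context, keeping every prompt small and focused.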
## Benefits
- **Performance**: Only loads relevant information when needed
- **Cost Efficiency**: Reduces token usage
- **Clarity**: Clear separation of concerns
- **Scalability**: Easy to add new roles or expand knowledge bases
- **Maintenance**: Modular design allows for easy updates
## References
- https://www.tiktok.com/@mattpocockuk/video/7514329600126635296