A composable simulation system models human behavior by separating the world, agents, memory, interaction, and control layers. LLM-driven agents provide cognition, while rules and probabilistic models handle routine behavior, time, and scale.
Key takeaways
- Treat the world as a computational substrate with state, rules, time, and events.
- Keep agents modular so memory, retrieval, reflection, planning, and action can evolve independently.
- Use a control layer to inject events, inspect assumptions, and keep complex scenarios observable.
What a composable simulation system is
A composable simulation system is an architecture for building simulated worlds from interchangeable parts. Instead of hard-coding one scenario, the system defines a world, a population of agents, rules of evolution, and a control surface that can change assumptions while the simulation runs.
For human behavior modeling, composability matters because behavior depends on context. The same agent can react differently when pricing pressure, social proof, urgency, incentives, or competing narratives change inside the world.
- World: state, rules, timeline, events, and global constraints.
- Agent: memory, retrieval, reflection, planning, and action.
- Interaction: agent-to-agent, human-to-agent, and human-to-world operations.
- Control: scenario parameters, event injection, and observability.
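The four layers above can be sketched as minimal Python interfaces. This is an illustrative skeleton, not a fixed API; names like World, Agent, and ControlSurface are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str
    payload: dict

@dataclass
class World:
    """State, rules, timeline, events, and global constraints."""
    state: dict
    rules: list                     # callables applied to the world each tick
    tick: int = 0
    events: list = field(default_factory=list)

    def step(self) -> None:
        for rule in self.rules:     # rules of evolution run every tick
            rule(self)
        self.tick += 1

@dataclass
class Agent:
    """Memory, retrieval, reflection, planning, and action."""
    name: str
    memory: list = field(default_factory=list)

    def act(self, world: World) -> Event:
        self.memory.append(dict(world.state))   # perceive: snapshot the world
        return Event("noop", {"agent": self.name})

class ControlSurface:
    """Scenario parameters, event injection, and observability."""
    def __init__(self, world: World):
        self.world = world

    def inject(self, event: Event) -> None:
        self.world.events.append(event)         # change assumptions mid-run
```

Because each layer is a separate object, a scenario can swap the rule set, the agent population, or the control policy without touching the others.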
Why LLM-driven agents need layers
LLMs are useful for cognition, language, and interpretation, but they should not carry the whole simulation alone. Long-running simulations become expensive, slow, and inconsistent when every routine action asks the model to reason from scratch.
A layered architecture reserves LLM calls for moments that need judgment: conflict, persuasion, uncertainty, social interpretation, or decision making. Deterministic rules and probabilistic models can handle routine movement, decay, scheduling, and low-value transitions.
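One way to implement that split is a dispatch layer that routes routine actions to cheap rules and reserves model calls for judgment. The routing heuristic and the llm_decide stub below are assumptions for illustration; a real system would prompt a model at that point.

```python
import random

# Hypothetical set of action kinds that warrant an (expensive) LLM call.
JUDGMENT_KINDS = {"conflict", "persuasion", "uncertainty", "negotiation"}

def rule_based(action: dict) -> str:
    # Deterministic and probabilistic handling of routine transitions.
    if action["kind"] == "move":
        return "moved"
    if action["kind"] == "decay":
        return "decayed" if random.random() < action.get("p", 0.5) else "kept"
    return "scheduled"

def llm_decide(action: dict) -> str:
    # Stand-in for a model call; substitute a real prompt/completion here.
    return f"llm:{action['kind']}"

def dispatch(action: dict) -> str:
    if action["kind"] in JUDGMENT_KINDS:
        return llm_decide(action)   # moments that need judgment: pay for reasoning
    return rule_based(action)       # routine behavior: stay cheap and fast
```

The point of the split is cost control: in a long-running simulation most ticks are routine, so most actions never reach the model.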
The cognitive agent loop
A credible synthetic agent needs continuity. The usual loop is perception, memory retrieval, reflection, planning, and action. The agent observes the world, retrieves relevant memories, summarizes what matters, chooses a plan, and emits an action back into the simulation.
This loop becomes more reliable when memory is typed. Episodic memory stores events, semantic memory stores stable beliefs, and reflective memory stores higher-level conclusions that summarize repeated experience.
- Perception converts world events into agent observations.
- Retrieval selects memories by recency, relevance, and importance.
- Reflection turns repeated observations into durable insight.
- Planning chooses the next action inside current constraints.
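The retrieval step can be sketched as a scoring function over typed memories. The equal weighting of recency, relevance, and importance, the exponential decay, and the word-overlap relevance measure are all illustrative choices, not a prescribed formula.

```python
import math
from dataclasses import dataclass

@dataclass
class Memory:
    kind: str          # "episodic", "semantic", or "reflective"
    text: str
    importance: float  # 0..1, assigned when the memory is written
    tick: int          # simulation time when it was recorded

def score(m: Memory, query_words: set, now: int, half_life: float = 20.0) -> float:
    """Combine recency, relevance, and importance into one retrieval score."""
    recency = math.exp(-(now - m.tick) / half_life)      # newer memories score higher
    words = set(m.text.lower().split())
    relevance = len(words & query_words) / max(len(query_words), 1)
    return (recency + relevance + m.importance) / 3.0

def retrieve(memories: list, query: str, now: int, k: int = 3) -> list:
    """Select the top-k memories for the current observation."""
    q = set(query.lower().split())
    return sorted(memories, key=lambda m: score(m, q, now), reverse=True)[:k]
```

Typing the memories matters here: reflective memories tend to carry high importance, so durable conclusions stay retrievable long after the episodic events that produced them have decayed.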
How teams use this architecture
Composable simulation is strongest when teams need to run what-if analysis. A product team can keep the same synthetic population and compare two onboarding flows. A pricing team can change discount timing. A marketing team can inject a competitor move and inspect how behavior shifts.
The practical goal is not to predict reality perfectly. The goal is to make assumptions explicit, test scenarios before exposure, and identify which decisions deserve live validation.