A small, runnable simulator for controlled multi-agent workflows — demonstrating governance, orchestration, and evaluation through working code rather than documentation alone.
Agent systems are easy to describe but hard to reason about without running them. This simulator gives you a concrete, inspectable implementation of the core governance patterns: explicit roles, bounded retries, fallback paths, escalation triggers, and evaluation.
The design principle: a well-governed agent system should expose its control logic clearly enough to be debugged, evaluated, and improved.
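These control patterns fit in a few lines of code. The sketch below is illustrative only — the names, thresholds, and signatures are assumptions, not the simulator's actual API:

```python
from enum import Enum

class Outcome(Enum):
    ACCEPT = "accept"
    FALLBACK = "fallback"
    ESCALATE = "escalate"

def run_governed(execute, evaluate, fallback, max_retries=2):
    """Bounded retries, then a fallback path, then human escalation.

    `execute` and `fallback` each produce a result; `evaluate` returns
    True when a result meets the acceptance criteria.
    (Hypothetical helper, not the repo's controller.py interface.)
    """
    result = None
    for _ in range(max_retries + 1):      # initial attempt + bounded retries
        result = execute()
        if evaluate(result):
            return Outcome.ACCEPT, result
    result = fallback()                   # fallback path once retries are spent
    if evaluate(result):
        return Outcome.FALLBACK, result
    return Outcome.ESCALATE, result       # escalation trigger: nothing passed
```

The point of making the loop this explicit is that every exit path — accept, fallback, escalate — is a named, testable outcome rather than an implicit side effect.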
```mermaid
flowchart TD
    P[Planner<br/>Decides how to handle the task] --> E
    E[Executor<br/>Performs the task action] --> Ev
    Ev[Evaluator<br/>Assesses result quality] --> S
    S{Supervisor<br/>Decide outcome}
    S -->|Accept| Done[✓ Task complete]
    S -->|Retry| E
    S -->|Fallback| F[Fallback handler]
    S -->|Escalate| Esc[Human escalation]
```
| Agent | Role |
|---|---|
| Planner | Determines the strategy for handling the task |
| Executor | Performs the primary task action |
| Evaluator | Assesses whether the result meets acceptance criteria |
| Supervisor | Decides: accept, retry, fallback, or escalate |
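The four roles in the table map naturally onto small interfaces. A minimal sketch with hypothetical names — the repo's `agents.py` may define these differently:

```python
from typing import Protocol

# Hypothetical role interfaces mirroring the table above.
class Planner(Protocol):
    def plan(self, task: str) -> str: ...          # task -> strategy

class Executor(Protocol):
    def execute(self, strategy: str) -> str: ...   # strategy -> result

class Evaluator(Protocol):
    def evaluate(self, result: str) -> bool: ...   # result -> meets criteria?

class Supervisor(Protocol):
    def decide(self, passed: bool, attempt: int) -> str: ...

class DefaultSupervisor:
    """One plausible supervisor policy (illustrative, not the repo's):
    accept on pass, retry within budget, fall back once, then escalate."""
    def __init__(self, max_retries: int = 2):
        self.max_retries = max_retries

    def decide(self, passed: bool, attempt: int) -> str:
        if passed:
            return "accept"
        if attempt <= self.max_retries:
            return "retry"
        if attempt == self.max_retries + 1:
            return "fallback"
        return "escalate"
```

Keeping the supervisor's decision policy in one small class makes it easy to swap in stricter or looser policies per scenario.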
```bash
git clone https://github.com/simaba/agent-simulator.git
cd agent-simulator
pip install -r requirements.txt
python run_demo.py --scenario normal_success
```

Available scenarios:
```bash
python run_demo.py --scenario normal_success
python run_demo.py --scenario retry_then_success
python run_demo.py --scenario fallback_after_failure
```

Each run produces:

- Decision log with full agent interaction trace
- Retry and escalation events
- Final outcome status
- Latency measurements
- Cost estimate
- Evaluation summary metrics
See examples/sample-output.md for a full example run.
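The output items above suggest a simple summary record. The dataclass below is a guess at its shape, not the simulator's real format — `examples/sample-output.md` is the authoritative reference:

```python
from dataclasses import dataclass, field

# Hypothetical run summary; field names are assumptions based on the
# output list above, not the repo's actual schema.
@dataclass
class RunReport:
    scenario: str
    outcome: str                    # "accept" | "fallback" | "escalate"
    retries: int                    # retry events observed during the run
    escalated: bool                 # whether a human escalation fired
    latency_ms: float               # end-to-end latency measurement
    cost_estimate_usd: float        # estimated model/API cost
    decisions: list = field(default_factory=list)  # ordered agent trace
```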
```
run_demo.py          # Entry point
src/
  agents.py          # Agent role implementations
  controller.py      # Orchestration and retry logic
  evaluation.py      # Evaluation criteria
  scenarios.py       # Scenario definitions
examples/
  sample-output.md   # Example run output
requirements.txt
```
- Multi-Agent Governance Framework — the conceptual blueprint this simulator implements
- AI Agent Evaluation Framework — evaluation dimensions mapped to this simulator's outputs
This repository is part of a connected toolkit for responsible AI operations:
| Repository | Purpose |
|---|---|
| Enterprise AI Governance Playbook | End-to-end AI operating model from intake to improvement |
| AI Release Governance Framework | Risk-based release gates for AI systems |
| AI Release Readiness Checklist | Risk-tiered pre-release checklists with CLI tool |
| AI Accountability Design Patterns | Patterns for human oversight and escalation |
| Multi-Agent Governance Framework | Roles, authority, and escalation for agent systems |
| Multi-Agent Orchestration Patterns | Sequential, parallel, and feedback-loop patterns |
| AI Agent Evaluation Framework | System-level evaluation across 5 dimensions |
| Agent System Simulator | Runnable multi-agent simulator with governance controls |
| LLM-powered Lean Six Sigma | AI copilot for structured process improvement |
Shared in a personal capacity. Open to collaborations and feedback — connect on LinkedIn or Medium.