# Agent System Simulator


A small, runnable simulator for controlled multi-agent workflows — demonstrating governance, orchestration, and evaluation through working code rather than documentation alone.


## Why this exists

Agent systems are easy to describe but hard to reason about without running them. This simulator gives you a concrete, inspectable implementation of the core governance patterns: explicit roles, bounded retries, fallback paths, escalation triggers, and evaluation.

The design principle: a well-governed agent system should expose its control logic clearly enough to be debugged, evaluated, and improved.


## How it works

```mermaid
flowchart TD
    P[Planner<br/>Decides how to handle the task] --> E
    E[Executor<br/>Performs the task action] --> Ev
    Ev[Evaluator<br/>Assesses result quality] --> S
    S{Supervisor<br/>Decide outcome}
    S -->|Accept| Done[✓ Task complete]
    S -->|Retry| E
    S -->|Fallback| F[Fallback handler]
    S -->|Escalate| Esc[Human escalation]
```

## Agents

| Agent | Role |
|---|---|
| Planner | Determines the strategy for handling the task |
| Executor | Performs the primary task action |
| Evaluator | Assesses whether the result meets acceptance criteria |
| Supervisor | Decides: accept, retry, fallback, or escalate |
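The Planner → Executor → Evaluator → Supervisor loop can be sketched in a few lines. This is an illustrative sketch only: the function names (`plan`, `execute`, `evaluate`, `supervise`) and the `MAX_RETRIES` constant are assumptions for exposition, not the repository's actual API.

```python
# Illustrative sketch of the control loop; names are hypothetical, not the repo's API.
MAX_RETRIES = 2  # bounded retries: the Supervisor never loops indefinitely

def plan(task: str) -> dict:
    # Planner: choose a strategy for handling the task.
    return {"task": task, "strategy": "direct"}

def execute(strategy: dict) -> str:
    # Executor: perform the primary action for the chosen strategy.
    return f"result for {strategy['task']}"

def evaluate(result: str) -> float:
    # Evaluator: score the result against acceptance criteria (placeholder score).
    return 0.9 if result else 0.0

def supervise(task: str, threshold: float = 0.8) -> str:
    # Supervisor: accept, retry within the bound, or fall back.
    strategy = plan(task)
    for _attempt in range(MAX_RETRIES + 1):
        result = execute(strategy)
        if evaluate(result) >= threshold:
            return "accept"
    return "fallback"  # retries exhausted → fallback path (or human escalation)
```

The point of the sketch is that every control decision (the retry bound, the acceptance threshold, the fallback trigger) is an explicit, inspectable value rather than implicit behavior.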

## Quick start

```bash
git clone https://github.com/simaba/agent-simulator.git
cd agent-simulator
pip install -r requirements.txt
python run_demo.py --scenario normal_success
```

Available scenarios:

```bash
python run_demo.py --scenario normal_success
python run_demo.py --scenario retry_then_success
python run_demo.py --scenario fallback_after_failure
```
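A scenario like the three above could be declared as data: a scripted sequence of Executor outcomes that drives the control flow down a specific path. This is a hypothetical sketch; the real definitions live in `src/scenarios.py` and may be structured differently.

```python
# Hypothetical scenario declarations; the actual src/scenarios.py may differ.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    executor_outcomes: list  # scripted pass/fail sequence for each Executor attempt
    max_retries: int = 2

SCENARIOS = {
    "normal_success": Scenario("normal_success", [True]),
    "retry_then_success": Scenario("retry_then_success", [False, True]),
    "fallback_after_failure": Scenario("fallback_after_failure", [False, False, False]),
}
```

Encoding scenarios as data keeps the orchestration logic deterministic and testable: each named path through the flowchart is reproducible on demand.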

## What each run produces

- Decision log with full agent interaction trace
- Retry and escalation events
- Final outcome status
- Latency measurements
- Cost estimate
- Evaluation summary metrics
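A decision-log entry covering the items above might look like the record below. The field names are illustrative assumptions, not the simulator's actual log schema; see `examples/sample-output.md` for real output.

```python
# Hypothetical decision-log record; field names are illustrative, not the real schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class LogEntry:
    agent: str         # which agent produced the event, e.g. "supervisor"
    action: str        # e.g. "plan", "execute", "retry", "escalate"
    outcome: str       # e.g. "accept", "retry", "fallback"
    latency_ms: float  # latency measurement for this step
    cost_usd: float    # per-step cost estimate

entry = LogEntry("supervisor", "decide", "retry", latency_ms=12.4, cost_usd=0.0003)
print(json.dumps(asdict(entry)))  # structured entries make traces easy to aggregate
```

Structured, per-step records are what make the trace debuggable: you can filter by agent, sum costs, or chart latency without parsing free-form text.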

See `examples/sample-output.md` for a full example run.


## Repository structure

```
run_demo.py             # Entry point
src/
  agents.py             # Agent role implementations
  controller.py         # Orchestration and retry logic
  evaluation.py         # Evaluation criteria
  scenarios.py          # Scenario definitions
examples/
  sample-output.md      # Example run output
requirements.txt
```

## Related repositories

This repository is part of a connected toolkit for responsible AI operations:

| Repository | Purpose |
|---|---|
| Enterprise AI Governance Playbook | End-to-end AI operating model from intake to improvement |
| AI Release Governance Framework | Risk-based release gates for AI systems |
| AI Release Readiness Checklist | Risk-tiered pre-release checklists with CLI tool |
| AI Accountability Design Patterns | Patterns for human oversight and escalation |
| Multi-Agent Governance Framework | Roles, authority, and escalation for agent systems |
| Multi-Agent Orchestration Patterns | Sequential, parallel, and feedback-loop patterns |
| AI Agent Evaluation Framework | System-level evaluation across 5 dimensions |
| Agent System Simulator | Runnable multi-agent simulator with governance controls |
| LLM-powered Lean Six Sigma | AI copilot for structured process improvement |

Shared in a personal capacity. Open to collaborations and feedback — connect on LinkedIn or Medium.
