babyagent is a demo SDK for learning how agent SDKs work. It is intentionally small, explicit, and readable so the core ideas can be explained live: messages, model providers, tool calls, remote MCP servers, skills, and the agent loop.
This project is not intended for production use. It favors clarity over hardening, complete provider coverage, and operational safety.
Install from a local source checkout:

```bash
pip install -e .
```

Install directly from Git:

```bash
pip install "babyagent@git+https://github.com/serpapi/babyagent.git"
```

If you use a provider API, install the matching extra:

```bash
pip install -e ".[openai]"
pip install -e ".[anthropic]"
pip install -e ".[ollama]"
```

The same extras work with Git installs, for example:

```bash
pip install "babyagent[openai]@git+https://github.com/serpapi/babyagent.git"
```

Usage examples are in the examples/ folder.
At the center is Agent, a lightweight conversation manager. An agent owns a message history, sends user input to a model provider, receives a normalized ModelResponse, executes any requested tools, appends tool results, and repeats until the model returns a final answer.
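The loop can be sketched in a few dozen lines. Everything below is illustrative, not the actual babyagent API: the `Agent`, `ModelResponse`, and message shapes are simplified stand-ins for the real dataclasses, and `fake_model` stands in for a provider adapter.

```python
from dataclasses import dataclass, field

@dataclass
class ModelResponse:
    """Normalized model output: a final answer or requested tool calls."""
    text: str = ""
    tool_calls: list = field(default_factory=list)  # [(tool_name, kwargs), ...]

class Agent:
    """Minimal agent loop: send history, run tools, repeat until final text."""

    def __init__(self, model, tools):
        self.model = model
        self.tools = {t.__name__: t for t in tools}
        self.messages = []

    def run(self, user_input: str) -> str:
        self.messages.append({"role": "user", "content": user_input})
        while True:
            response = self.model(self.messages)
            if not response.tool_calls:
                self.messages.append({"role": "assistant", "content": response.text})
                return response.text
            # A real SDK also records the assistant's tool-call turn here.
            for name, kwargs in response.tool_calls:
                result = self.tools[name](**kwargs)
                self.messages.append({"role": "tool", "name": name, "content": str(result)})

def add(a: int, b: int) -> int:
    return a + b

def fake_model(messages):
    """Stand-in provider: request a tool once, then give a final answer."""
    if any(m["role"] == "tool" for m in messages):
        return ModelResponse(text="2 + 3 = 5")
    return ModelResponse(tool_calls=[("add", {"a": 2, "b": 3})])

agent = Agent(model=fake_model, tools=[add])
print(agent.run("What is 2 + 3?"))  # 2 + 3 = 5
```

The key property is that tool results re-enter the message history, so the next model call sees them and can either request more tools or finish.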
Provider adapters implement the model-specific API details behind a common ModelBase interface. OpenAI, Anthropic, and Ollama each convert babyagent messages, tools, remote MCP servers, skills, and provider responses into the local dataclasses used by the rest of the SDK.
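The adapter pattern can be sketched as follows. The interface and field names here are assumptions for illustration (a pared-down `ModelResponse`, a `complete` method), and the fake provider only mimics the raw OpenAI-style payload shape rather than calling any API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ModelResponse:
    text: str

class ModelBase(ABC):
    """Common interface each provider adapter implements (illustrative)."""

    @abstractmethod
    def complete(self, messages: list) -> ModelResponse:
        ...

class FakeOpenAIProvider(ModelBase):
    """Converts a provider-specific raw payload into the shared dataclass."""

    def complete(self, messages):
        # A real adapter would call the provider's API; we fake the raw shape.
        raw = {"choices": [{"message": {"content": "hello from the provider"}}]}
        return ModelResponse(text=raw["choices"][0]["message"]["content"])

print(FakeOpenAIProvider().complete([{"role": "user", "content": "hi"}]).text)
```

Because every adapter returns the same normalized type, the agent loop never needs to know which provider it is talking to.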
Tools are plain Python functions decorated with @tool(...). The decorator inspects the function signature, generates a simple JSON schema, and attaches metadata so that decorated function objects can be passed directly to Agent(tools=...).
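Signature-driven schema generation can be sketched like this. The decorator name matches the source, but the attribute name (`schema`), type mapping, and schema layout are assumptions, not babyagent's actual output.

```python
import inspect

# Map Python annotations to JSON-schema type names (simplified).
_TYPES = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool(description: str = ""):
    """Attach a generated JSON schema to a plain function (illustrative)."""
    def wrap(fn):
        sig = inspect.signature(fn)
        fn.schema = {
            "name": fn.__name__,
            "description": description or (fn.__doc__ or ""),
            "parameters": {
                "type": "object",
                "properties": {
                    name: {"type": _TYPES.get(p.annotation, "string")}
                    for name, p in sig.parameters.items()
                },
                # Parameters without defaults are required.
                "required": [
                    name for name, p in sig.parameters.items()
                    if p.default is inspect.Parameter.empty
                ],
            },
        }
        return fn
    return wrap

@tool(description="Add two integers")
def add(a: int, b: int = 0) -> int:
    return a + b

print(add.schema["parameters"]["required"])  # ['a']
```

Because the function object itself carries the schema, the agent can forward it to any provider and still call the original Python function when the model requests it.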
Skills are local folders with a SKILL.md. Providers with native skill support can expose them directly. Providers without native support use a small fallback prompt plus a get_skill tool that lets the model read allowed skill files.
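The fallback idea can be sketched as a `get_skill` tool restricted to an allow-list of skill folders. The tool name comes from the source, but the signature, the allow-list shape, and the path check are assumptions; the throwaway skill folder exists only to make the sketch runnable.

```python
import pathlib
import tempfile

# Create a throwaway skill folder so the sketch is self-contained.
root = pathlib.Path(tempfile.mkdtemp())
skill_dir = root / "travel"
skill_dir.mkdir()
(skill_dir / "SKILL.md").write_text("# Travel skill\nBook flights politely.")

ALLOWED_SKILLS = {"travel": skill_dir}

def get_skill(skill: str, path: str = "SKILL.md") -> str:
    """Let the model read files from allowed skill folders only (illustrative)."""
    base = ALLOWED_SKILLS[skill].resolve()
    target = (base / path).resolve()
    # Refuse paths that escape the skill folder (e.g. "../secrets").
    if base not in target.parents and target != base:
        raise ValueError(f"path escapes skill folder: {path}")
    return target.read_text()

print(get_skill("travel").splitlines()[0])  # # Travel skill
```

A short fallback prompt would then tell the model which skills exist and that `get_skill` is the way to read their files.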
Remote MCP servers are represented by a small RemoteMCPServer dataclass and passed through to providers that support remote MCP tools.
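A plausible shape for such a dataclass, with field names that are assumptions rather than babyagent's actual definition:

```python
from dataclasses import dataclass, field

@dataclass
class RemoteMCPServer:
    """Illustrative remote MCP server config (field names assumed)."""
    name: str
    url: str
    allowed_tools: list = field(default_factory=list)

server = RemoteMCPServer(
    name="search",
    url="https://example.com/mcp",  # hypothetical endpoint
    allowed_tools=["web_search"],
)
print(server.name, server.url)
```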
- Multi-turn agent loop with message history
- OpenAI Responses API provider
- Anthropic Messages API provider
- Ollama provider
- Decorated Python function tools
- Tool-call execution and tool-result feedback
- Remote MCP server configuration
- Local skills via SKILL.md
- OpenAI native skill exposure
- Skill fallback through get_skill for non-native providers
- Debug logging for demo visibility
- Example scripts for chat, tool calling, MCP, skills, shopping, and travel
```
src/babyagent/
  agent.py        Agent loop and tool execution
  dataclasses.py  Shared message, tool call, shell call, MCP, and response models
  model.py        Base provider interface
  tools.py        @tool decorator and JSON schema generation
  skills.py       Skill loading and fallback get_skill tool
  mcp.py          Remote MCP server dataclass
  providers/      OpenAI, Anthropic, and Ollama adapters
examples/
  1*_*.py         Basic PyCon bot examples
  2*_*.py         No-tool and function tool-calling examples
  3*_*.py         Shopping assistant examples with and without MCP
  4*_*.py         Travel assistant examples with skills
  skills/         Demo skill folders
scripts/          Standalone provider-specific experiments and reference scripts
tests/            Unit tests that protect demo behavior and provider formatting
```
The SDK deliberately leaves out many production agent features:
- Retries
- Structured output
- File uploads
- Other provider-specific customization options, such as built-in tools
- Permissions
- Multi-agent orchestration
- Context compaction
These omissions keep the code small enough to study and modify.