Build production-ready AI agents in Rust with minimal boilerplate.
Acton-ai handles the hard problems—concurrency, fault tolerance, rate limiting, streaming, and tool execution—so you can focus on your application logic.
```rust
use acton_ai::prelude::*;

#[tokio::main]
async fn main() -> Result<(), ActonAIError> {
    ActonAI::builder()
        .app_name("my-app")
        .ollama("qwen2.5:7b")
        .with_builtins()
        .launch()
        .await?
        .conversation()
        .run_chat()
        .await
}
```

Five lines to an interactive chat with file access and command execution.
- Multi-provider support — Anthropic Claude, OpenAI, Ollama, and any OpenAI-compatible API
- Streaming responses — Token-by-token callbacks for real-time output
- Built-in tools — File operations, bash, grep, glob, web fetch, and calculations
- Tool execution loop — Automatic tool calling and result handling until completion
- Two API levels — Simple facade for common cases, full actor access for advanced control
- TOML configuration — Define providers and settings in config files
- Process sandboxing — Portable subprocess isolation for tool execution with rlimits, timeouts, and optional Linux hardening (landlock + seccomp)
- Rate limiting — Built-in request and token limits per provider
- Actor-based architecture — Fault-tolerant, concurrent design via acton-reactive
```shell
# Arch Linux (AUR)
yay -S acton-ai-bin

# Install the CLI from crates.io
cargo install acton-ai

# Add as a library dependency
cargo add acton-ai
```

For Ollama (local), no API key is needed. For cloud providers, set environment variables:
```shell
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
```

Common patterns to get you started; complete examples live in `examples/`.
```rust
use acton_ai::prelude::*;

#[tokio::main]
async fn main() -> Result<(), ActonAIError> {
    let runtime = ActonAI::builder()
        .app_name("my-app")
        .ollama("qwen2.5:7b")
        .launch()
        .await?;

    let response = runtime
        .prompt("What is the capital of France?")
        .system("Be concise.")
        .collect()
        .await?;

    println!("{}", response.text);
    Ok(())
}
```

Stream tokens as they arrive:

```rust
runtime
    .prompt("Explain Rust ownership in simple terms.")
    .on_token(|token| print!("{token}"))
    .collect()
    .await?;
```

Keep multi-turn context with a conversation:

```rust
let mut conv = runtime.conversation()
    .system("You are a helpful assistant.")
    .build();

let response = conv.send("What is Rust?").await?;
println!("{}", response.text);

// Context is maintained
let response = conv.send("How does ownership work?").await?;
println!("{}", response.text);
```

Enable the built-in tools and let the model use them:

```rust
let runtime = ActonAI::builder()
    .app_name("my-app")
    .ollama("qwen2.5:7b")
    .with_builtins() // Enable all built-in tools
    .launch()
    .await?;

runtime
    .prompt("List the Rust files in the current directory")
    .on_token(|t| print!("{t}"))
    .collect()
    .await?;
```

Define a custom tool with a JSON Schema and an async handler:

```rust
runtime
    .prompt("What is 42 * 17?")
    .tool(
        "calculator",
        "Evaluates math expressions",
        json!({
            "type": "object",
            "properties": {
                "expression": { "type": "string" }
            },
            "required": ["expression"]
        }),
        |args| async move {
            let expr = args["expression"].as_str().unwrap();
            // evaluate() is a placeholder for your own expression parser
            Ok(json!({ "result": evaluate(expr) }))
        },
    )
    .collect()
    .await?;
```

Route prompts across multiple providers:

```rust
let runtime = ActonAI::builder()
    .app_name("my-app")
    .provider_named("local", ProviderConfig::ollama("qwen2.5:7b"))
    .provider_named("cloud", ProviderConfig::anthropic("sk-ant-..."))
    .default_provider("local")
    .launch()
    .await?;

// Quick tasks on local
runtime.prompt("Summarize this").collect().await?;

// Complex reasoning on cloud
runtime.prompt("Analyze this code").provider("cloud").collect().await?;
```

Configure providers via TOML files or programmatically.
Create `acton-ai.toml` in your project root or `~/.config/acton-ai/config.toml`:
```toml
default_provider = "ollama"

[providers.ollama]
type = "ollama"
model = "qwen2.5:7b"
base_url = "http://localhost:11434/v1"
timeout_secs = 300

[providers.ollama.rate_limit]
requests_per_minute = 1000
tokens_per_minute = 1000000

[providers.claude]
type = "anthropic"
model = "claude-sonnet-4-20250514"
api_key_env = "ANTHROPIC_API_KEY"

# Optional: ProcessSandbox for tool isolation
# Runs sandboxed tools in a subprocess with rlimits, timeouts, and
# (on Linux) best-effort landlock + seccomp hardening.
[sandbox]
hardening = "besteffort" # "off" | "besteffort" | "enforce"

[sandbox.limits]
max_execution_ms = 30000
max_memory_mb = 256
```

Load the configuration:

```rust
let runtime = ActonAI::builder()
    .app_name("my-app")
    .from_config()?
    .with_builtins()
    .launch()
    .await?;
```

Or configure everything programmatically:

```rust
let runtime = ActonAI::builder()
    .app_name("my-app")
    .provider_named("claude",
        ProviderConfig::anthropic("sk-ant-...")
            .with_model("claude-sonnet-4-20250514")
            .with_max_tokens(4096))
    .provider_named("local",
        ProviderConfig::ollama("qwen2.5:7b"))
    .default_provider("local")
    .with_builtins()
    .with_process_sandbox() // Isolate sandboxed tools in a subprocess
    .launch()
    .await?;
```

Available when you call `.with_builtins()`:
| Tool | Description |
|---|---|
| `read_file` | Read file contents with line numbers |
| `write_file` | Write content to files |
| `edit_file` | Make targeted string replacements |
| `list_directory` | List directory contents with metadata |
| `glob` | Find files matching glob patterns |
| `grep` | Search file contents with regex |
| `bash` | Execute shell commands |
| `calculate` | Evaluate mathematical expressions |
| `web_fetch` | Fetch content from URLs |
Select specific tools with `.with_builtin_tools(&["read_file", "glob", "bash"])`.
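For instance, here is a minimal sketch that grants an agent read-only discovery tools and nothing else. It is assembled from the builder calls shown above, not a verbatim example from the crate:

```rust
use acton_ai::prelude::*;

#[tokio::main]
async fn main() -> Result<(), ActonAIError> {
    // Only these three built-ins are registered;
    // write_file, bash, web_fetch, etc. stay unavailable.
    let runtime = ActonAI::builder()
        .app_name("my-app")
        .ollama("qwen2.5:7b")
        .with_builtin_tools(&["read_file", "glob", "grep"])
        .launch()
        .await?;

    runtime
        .prompt("Find every TODO comment under src/")
        .on_token(|t| print!("{t}"))
        .collect()
        .await?;
    Ok(())
}
```

Restricting the tool set this way limits what a misbehaving prompt can do, independent of the process sandbox.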
Acton-ai ships a scriptable CLI with persistent sessions, autonomous task execution, and stdin/stdout piping.
```shell
# Single message
acton-ai chat -m "What is Rust?"

# Pipe from stdin
echo "Explain ownership" | acton-ai chat

# Persistent sessions — context carries across invocations
acton-ai chat --session work --create -m "Start a new project plan"
acton-ai chat --session work -m "Add a testing section"

# JSON output for scripting
acton-ai chat -m "List 3 colors" --json | jq .text

# Interactive terminal chat
acton-ai chat
```

Define reusable jobs in `acton-ai.toml` with template substitution and agentic tool loops:
```toml
[jobs.summarize]
system_prompt = "You are a summarization expert. Be concise."
message_template = "Summarize:\n\n{{input}}"

[jobs.translate]
system_prompt = "Translate to the requested language. Output ONLY the translation."
message_template = "Translate to {{lang}}: {{input}}"
```

```shell
cat document.txt | acton-ai run-job summarize
echo "Hello" | acton-ai run-job translate --param lang=Spanish
```

The heartbeat command provides an autonomous wake-up cycle for scheduled tasks. During chat, the agent can create heartbeat entries (recurring tasks). A systemd timer triggers `acton-ai heartbeat` to review and execute due tasks:
```shell
# Run all due heartbeat entries
acton-ai heartbeat

# Run entries for a specific session only
acton-ai heartbeat --session main
```

Output is a JSON activity report to stdout, suitable for monitoring pipelines.
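A minimal sketch of such a systemd user timer follows. Unit names, the binary path, and the 15-minute schedule are illustrative choices, not files shipped with the crate:

```ini
# ~/.config/systemd/user/acton-ai-heartbeat.service
[Unit]
Description=Run due acton-ai heartbeat entries

[Service]
Type=oneshot
ExecStart=%h/.cargo/bin/acton-ai heartbeat

# ~/.config/systemd/user/acton-ai-heartbeat.timer
[Unit]
Description=Wake acton-ai every 15 minutes

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl --user enable --now acton-ai-heartbeat.timer`; the JSON activity report lands in the journal for inspection via `journalctl --user -u acton-ai-heartbeat`.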
```shell
acton-ai session list                 # List all sessions
acton-ai session show work            # Session metadata + recent messages
acton-ai session delete work --force  # Delete session and history
```

Global flags:

```
--json            Machine-readable JSON output
--config PATH     Override config file path
--provider NAME   Override default LLM provider
-v / -vv / -vvv   Increase verbosity
-q                Suppress stderr output
```
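The flags compose; a hypothetical CI step might combine them like this (the `cloud` provider name and config path are assumptions, not defaults):

```shell
# Read an alternate config, force a specific provider, keep stderr
# quiet, and emit JSON so jq can extract the reply text
git diff | acton-ai chat -m "Review this diff" \
  --config ./ci/acton-ai.toml \
  --provider cloud \
  --json -q | jq -r .text
```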
Acton-ai uses the actor model for fault-tolerant, concurrent AI systems:
```
ActonAI (Facade)
  │
  ├── ActorRuntime (acton-reactive)
  │     │
  │     ├── Kernel ─────────── Central supervisor, agent lifecycle
  │     │
  │     ├── LLMProvider(s) ─── API calls, streaming, rate limiting
  │     │
  │     ├── Agent(s) ───────── Individual AI agents with reasoning
  │     │
  │     ├── ToolRegistry ───── Tool registration and execution
  │     │
  │     └── MemoryStore ────── Persistent sessions, memories, embeddings
  │
  └── BuiltinTools ─────────── File ops, bash, web fetch, etc.
```
Two API levels:
| Level | Use Case | Access |
|---|---|---|
| High-level | Most applications | `ActonAI::builder()`, `PromptBuilder`, `Conversation` |
| Low-level | Custom agent topologies | Direct actor spawning, message routing, subscriptions |
The high-level API handles actor lifecycle, subscriptions, and message routing automatically. Drop down to the low-level API when you need custom supervision strategies or multi-agent coordination.
```shell
# Interactive chat with tools
cargo run --example conversation

# Multiple LLM providers
cargo run --example multi_provider

# Custom tool definitions
cargo run --example ollama_tools

# Process-sandboxed execution
cargo run --example process_sandbox

# Per-agent tool configuration
cargo run --example per_agent_tools
```

- API Documentation (docs.rs)
- acton-reactive — The underlying actor framework
Contributions welcome. Please open an issue to discuss significant changes before submitting a PR.
MIT License. See LICENSE for details.