Oxide is a Rust-based automation kernel that receives text commands from messaging channels (Telegram, Discord, CLI), routes each request to a Lua skill, and replies on the same channel.
The project is designed around three priorities:
- secure execution of user-defined automations,
- low operational cost (token-efficient AI usage),
- channel-agnostic behavior (same skill logic across adapters).
In practice, Oxide behaves like an orchestration runtime for sandboxed Lua skills with optional AI assistance.
- Multi-adapter inbound messaging:
- Telegram adapter (allowlist by admin user IDs)
- Discord adapter (allowlist by user IDs)
- Local interactive CLI adapter
- Semantic intent routing to skills using embeddings (`all-MiniLM-L6-v2` via `fastembed`).
- Two-stage parameter extraction:
  - Stage 1: local deterministic extraction (`try_local_extract` in the Lua skill)
  - Stage 2: AI JSON extraction fallback, only when local extraction fails
- Skill execution in a hardened Lua 5.4 sandbox (`mlua`).
- Asynchronous job scheduling and automation:
- ad-hoc queued jobs,
- cron-based automations,
  - chained jobs returned by skills (`schedule_job`).
- SQLite persistence for queueing, automations, key-value skill state, and embedding cache.
- Management commands from chat/CLI: `/enqueue`, `/jobs`, `/kill`.
- No web dashboard or GUI management panel.
- No distributed/multi-node scheduler (single-process runtime).
- No built-in RBAC beyond adapter-level allowlists.
- No formal plugin marketplace or remote skill registry.
Oxide follows a Ports-and-Adapters (Hexagonal) architecture.
- Core domain is isolated in `src/core`.
- External integrations live in `src/adapters` and `src/network`.
- Stable contracts are traits in `src/ports`.
- Driving adapters (input/output edges): Telegram, Discord, CLI.
- Core orchestration: event handling, routing, extraction strategy, execution dispatch.
- Runtime and execution isolation: Lua sandbox bridge.
- Async scheduling subsystem: producer + worker pool for delayed/periodic jobs.
- Infrastructure: SQLite persistence and HTTP AI client.
- Loads settings (`Settings.toml` + env overrides where supported).
- Builds the AI provider (enabled or disabled adapter).
- Initializes SQLite and migrations.
- Creates the `Orchestrator` and starts the scheduler.
- Registers all configured channel adapters and starts adapter loops.
Responsibilities:
- receives `InboundEvent` from adapters,
- resolves management commands,
- computes embeddings and selects best skill by cosine similarity,
- if similarity is below threshold and AI is enabled, attempts AI-based route selection among top candidates,
- runs local extraction first,
- if needed, calls AI for structured JSON extraction,
- executes skill in sandbox,
- returns answer to originating adapter,
- enqueues follow-up scheduled jobs if returned by skill output.
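The embedding-ranking step above can be sketched as plain cosine similarity over skill embeddings with a confidence threshold. This is a minimal stdlib-only illustration; the function names and the threshold value are assumptions, not Oxide's actual API:

```rust
// Assumed config value; Oxide reads `similarity_threshold` from Settings.toml.
const SIMILARITY_THRESHOLD: f32 = 0.6;

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Returns the best skill name if its score clears the threshold;
/// `None` signals the low-confidence path (AI router or chat fallback).
fn route<'a>(msg: &[f32], skills: &'a [(&'a str, Vec<f32>)]) -> Option<&'a str> {
    skills
        .iter()
        .map(|(name, emb)| (*name, cosine(msg, emb)))
        .max_by(|a, b| a.1.total_cmp(&b.1))
        .filter(|(_, score)| *score >= SIMILARITY_THRESHOLD)
        .map(|(name, _)| name)
}

fn main() {
    let skills = vec![
        ("weather", vec![1.0, 0.0, 0.0]),
        ("reminder", vec![0.0, 1.0, 0.0]),
    ];
    // Message embedding close to "weather": confident match.
    assert_eq!(route(&[0.9, 0.1, 0.0], &skills), Some("weather"));
    // Ambiguous embedding falls below the threshold -> AI router / fallback.
    assert_eq!(route(&[0.5, 0.5, 0.7], &skills), None);
}
```

The `None` branch is what makes the AI router strictly optional: low-confidence routing degrades to a fallback rather than failing.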
- Loads skills from `skills/*.lua`.
- Exposes execution entrypoints (`execute`, `on_schedule`).
- Enforces sandbox constraints and operational guards.
- Injects execution context (adapter/platform/user metadata) into params.
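Context injection can be pictured as a merge into the parameter map just before the skill runs, with runtime-owned keys overwriting anything user-supplied. The key names (`_adapter`, `_platform_id`, `_user_id`) are illustrative assumptions:

```rust
use std::collections::HashMap;

// Sketch: the runtime merges adapter/platform/user metadata into the
// params handed to the Lua skill. Reserved keys win over user-supplied
// values so a crafted message cannot spoof its origin.
fn inject_context(
    mut params: HashMap<String, String>,
    adapter: &str,
    platform_id: &str,
    user_id: &str,
) -> HashMap<String, String> {
    params.insert("_adapter".into(), adapter.into());
    params.insert("_platform_id".into(), platform_id.into());
    params.insert("_user_id".into(), user_id.into());
    params
}

fn main() {
    let mut p = HashMap::new();
    p.insert("city".to_string(), "Berlin".to_string());
    p.insert("_adapter".to_string(), "spoofed".to_string());
    let p = inject_context(p, "telegram", "chat:42", "user:7");
    assert_eq!(p["_adapter"], "telegram"); // spoofed value overwritten
    assert_eq!(p["city"], "Berlin");       // skill params preserved
}
```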
- Scheduler producer scans due tasks and cron automations.
- Worker pool executes scheduled jobs on blocking threads while the async runtime remains responsive.
- Handles retries, task state transitions, backpressure rescheduling, and cancellation (`/kill`).
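The producer/worker hand-off and the retry policy can be sketched with a channel and an exponential backoff helper. Everything here (task shape, backoff base and cap) is an assumption about the design, not Oxide's actual implementation:

```rust
use std::sync::mpsc;
use std::thread;

// Sketch: the producer claims due tasks and pushes them over a channel
// to a blocking worker thread, keeping the async runtime responsive.
#[derive(Debug)]
struct Task {
    id: u64,
    attempt: u32,
}

/// Assumed retry policy: 30s base, doubling per attempt, capped at 2^6.
fn backoff_secs(attempt: u32) -> u64 {
    30 * (1u64 << attempt.min(6))
}

fn main() {
    let (tx, rx) = mpsc::channel::<Task>();

    // Worker drains the queue; real code would run `on_schedule` here
    // and, on failure, reschedule the row with `backoff_secs(attempt)`.
    let worker = thread::spawn(move || rx.into_iter().map(|t| t.id).collect::<Vec<_>>());

    for id in [1, 2, 3] {
        tx.send(Task { id, attempt: 0 }).unwrap();
    }
    drop(tx); // producer done; worker drains remaining tasks and exits

    assert_eq!(worker.join().unwrap(), vec![1, 2, 3]);
    assert_eq!(backoff_secs(0), 30);
    assert_eq!(backoff_secs(2), 120);
}
```

Dropping the sender doubles as a clean shutdown signal, which is one simple way to get cancellation semantics without extra flags.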
- `AiProvider`: `chat_with_system`, `extract_json_for_skill`.
- `MessagingProvider`: `send_text`, `send_image`.
This keeps core logic independent from concrete AI or chat platform implementations.
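The two ports can be sketched as Rust traits matching the method names listed above. The signatures are simplified guesses (synchronous, `String`-based); the real traits are likely async and richer:

```rust
// Port traits: stable contracts the core depends on (src/ports in Oxide).
trait AiProvider {
    fn chat_with_system(&self, system: &str, user: &str) -> Result<String, String>;
    fn extract_json_for_skill(&self, schema: &str, message: &str) -> Result<String, String>;
}

trait MessagingProvider {
    fn send_text(&self, channel_id: &str, text: &str) -> Result<(), String>;
    fn send_image(&self, channel_id: &str, image: &[u8]) -> Result<(), String>;
}

// A disabled AI adapter satisfies the port by failing gracefully, which
// is how AI-optional operation stays type-safe in the core.
struct DisabledAi;

impl AiProvider for DisabledAi {
    fn chat_with_system(&self, _: &str, _: &str) -> Result<String, String> {
        Err("ai_disabled".into())
    }
    fn extract_json_for_skill(&self, _: &str, _: &str) -> Result<String, String> {
        Err("ai_disabled".into())
    }
}

fn main() {
    let ai: &dyn AiProvider = &DisabledAi;
    assert!(ai.chat_with_system("sys", "hi").is_err());
}
```

Swapping `DisabledAi` for a LiteLLM-backed implementation changes nothing in the core, which is the point of the hexagonal split.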
- Adapter normalizes platform payload into `InboundEvent`.
- Orchestrator checks management commands.
- Orchestrator computes message embedding and ranks skills.
- If confidence is adequate, selected skill is used; otherwise AI router may choose skill (when enabled).
- `try_local_extract` runs first (cheap deterministic path).
- If local extraction fails, AI extracts JSON parameters from the skill schema.
- Skill executes in Lua sandbox.
- Orchestrator sends resulting answer through original adapter.
- Optional: if the result contains `schedule_job`, the task is inserted into the scheduler queue.
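The two-stage extraction in this flow can be sketched as an `Option`-returning local pass followed by an AI fallback. The toy rule and the stand-in for the provider call are purely illustrative:

```rust
// Stage 1: deterministic local extraction (mirrors `try_local_extract`).
// Toy rule: "remind me to <task>" -> {"task": "<task>"}.
fn try_local_extract(msg: &str) -> Option<String> {
    msg.strip_prefix("remind me to ")
        .map(|task| format!("{{\"task\":\"{}\"}}", task))
}

// Stage 2: AI JSON extraction, attempted only when stage 1 returns None
// and AI is enabled. The literal here stands in for the provider call.
fn extract_params(msg: &str, ai_enabled: bool) -> Option<String> {
    if let Some(params) = try_local_extract(msg) {
        return Some(params); // no tokens spent
    }
    if ai_enabled {
        // Real code would call AiProvider::extract_json_for_skill here.
        return Some("{\"task\":\"(ai extracted)\"}".to_string());
    }
    None // deterministic-only mode: caller falls back gracefully
}

fn main() {
    assert_eq!(
        extract_params("remind me to water plants", false).as_deref(),
        Some("{\"task\":\"water plants\"}")
    );
    // No local match and AI disabled: the gap is reported, not papered over.
    assert_eq!(extract_params("do something fuzzy", false), None);
}
```

Ordering the cheap deterministic pass first is what keeps average token usage low, since the LLM is only consulted for the residue.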
- Producer checks cron automations and pending scheduled tasks.
- Due tasks are claimed and sent to worker queue.
- Worker executes `on_schedule` in the sandbox.
- Result may enqueue additional tasks.
- If execution context includes adapter + platform ID, answer can be forwarded to user/channel.
Main tables:
- `automations`: cron-based recurring skill triggers.
- `job_queue`: legacy compatibility queue/state.
- `scheduled_tasks`: active delayed/priority execution queue.
- `skill_kv`: per-skill key-value storage.
- `embedding_cache`: hash/model-keyed vector cache.
Operational notes:
- SQLite WAL mode is enabled.
- Foreign keys are enabled.
- Indexed queries support status/ready-task lookups.
- Legacy records are normalized/migrated into current scheduling model.
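On SQLite, the producer's "claim due tasks" step typically flips the status inside a single `UPDATE` so two pollers never claim the same row. A sketch of that query (column names beyond the table list above are assumptions):

```rust
// Assumed claim query for the producer; with sqlx this would be executed
// as `sqlx::query(CLAIM_DUE_TASKS).bind(now).bind(batch_size)`.
// Requires SQLite >= 3.35 for the RETURNING clause.
const CLAIM_DUE_TASKS: &str = "
    UPDATE scheduled_tasks
    SET status = 'running'
    WHERE id IN (
        SELECT id FROM scheduled_tasks
        WHERE status = 'pending' AND run_at <= ?1
        ORDER BY priority DESC, run_at ASC
        LIMIT ?2
    )
    RETURNING id, skill, params";

fn main() {
    // Claim-and-return in one statement: no gap between SELECT and UPDATE.
    assert!(CLAIM_DUE_TASKS.contains("RETURNING"));
}
```

Because SQLite serializes writers, this single-statement claim is atomic even without explicit row locking, which matches the single-process deployment model.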
AI is optional and explicitly gateable (`ai_enabled`).
When enabled, AI is used in constrained places:
- general chat fallback when no skill path is suitable,
- skill parameter extraction fallback,
- low-confidence skill routing fallback.
When disabled:
- deterministic/local skill paths continue to work,
- AI-only operations fail gracefully with fallback responses.
This design keeps average token usage low by prioritizing local extraction and semantic routing before LLM calls.
Security controls implemented in runtime and adapters include:
- Lua sandboxing with reduced standard library surface.
- Runtime guards for instruction/time/resource boundaries during skill execution.
- Adapter allowlists (Telegram admin IDs, Discord allowed users).
- Skill path validation to prevent traversal when enqueueing scheduled jobs.
- Safe handling of scheduler cancellation and queue backpressure.
- Hardened outbound HTTP behavior from Lua runtime and payload size/time limits.
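The skill-path validation mentioned above can be pictured as accepting only a bare skill name that resolves inside `skills/`. Oxide's exact rule may differ; this stdlib sketch shows the shape of the guard:

```rust
use std::path::{Component, Path};

// Sketch: a scheduled job may only reference a single normal path
// component. Absolute paths, parent components (`..`), and separators
// are rejected outright, preventing traversal out of skills/.
fn is_safe_skill_name(name: &str) -> bool {
    let p = Path::new(name);
    p.components().count() == 1
        && matches!(p.components().next(), Some(Component::Normal(_)))
        && !name.contains('/')
        && !name.contains('\\')
}

fn main() {
    assert!(is_safe_skill_name("weather"));
    assert!(!is_safe_skill_name("../../etc/passwd"));
    assert!(!is_safe_skill_name("/abs/path"));
    assert!(!is_safe_skill_name("sub\\dir"));
}
```

Validating the name rather than sanitizing it (allowlist over blocklist) is the usual choice here: anything that is not a plain component is refused.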
Primary config file: `Settings.toml`.
Key settings:
- AI endpoint/model/key (`litellm` block),
- `ai_enabled`, `similarity_threshold`, `static_fallback_msg`,
- channel definitions (`[[channels]]` with `telegram` or `discord`).
Environment variable overrides are supported for channel tokens and user allowlists at runtime startup.
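Putting the keys above together, a `Settings.toml` might look like the following. This is an illustrative sketch only; key names follow the list above, but the exact structure and values are assumptions, not the shipped defaults:

```toml
# Illustrative Settings.toml sketch (structure assumed, not authoritative)
ai_enabled = true
similarity_threshold = 0.6
static_fallback_msg = "Sorry, I can't handle that right now."

[litellm]
endpoint = "http://localhost:4000/v1"
model = "gpt-4o-mini"
api_key = "sk-..."

[[channels]]
type = "telegram"
token = "TELEGRAM_BOT_TOKEN"   # overridable via environment variable
admin_ids = [123456789]

[[channels]]
type = "discord"
token = "DISCORD_BOT_TOKEN"    # overridable via environment variable
allowed_users = [987654321]
```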
- Language/runtime: Rust + Tokio.
- Skill runtime: Lua 5.4 via `mlua`.
- Embeddings: `fastembed` (AllMiniLML6V2).
- Persistence: SQLite via `sqlx`.
- Messaging: `teloxide` (Telegram), `serenity` (Discord).
- AI connectivity: OpenAI-compatible chat endpoint (often via LiteLLM).
- Concurrency model:
- async IO/event loops in Tokio,
- blocking skill execution and queue processing via crossbeam/thread workers.
- Deployment target is a single process with local SQLite.
- Skills are trusted project assets but executed under sandbox constraints.
- Reliability focus is graceful degradation (fallback message, retries, queue reschedule) rather than strict exactly-once distributed semantics.
- Existing observability is log-centric (tracing), not metrics/dashboard-centric.
If you need to reason about this project quickly:
- Think of Oxide as a secure automation runtime, not a generic chatbot.
- The orchestration core is Rust; business automations are Lua skills.
- AI is a fallback/extractor tool, not the primary execution path.
- Scheduler and queueing are first-class features for delayed and periodic workflows.
- Hexagonal architecture keeps adapters and providers replaceable without changing core logic.