Multi-engine agentic framework. Run models locally with Ollama, call any HTTP API, or delegate to CLI agents. No lock-in. No telemetry.
Seven engines across three tiers. Swap backends with a single environment variable or a tap in Telegram.
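Backend selection could be as small as one environment-variable lookup. The sketch below is illustrative only: the `AGENT_ENGINE` variable name, the engine names, and the tier assignments are assumptions, not the framework's actual identifiers.

```python
import os

# Hypothetical engine registry; names, tiers, and endpoints are illustrative.
ENGINES = {
    "ollama": {"tier": 1, "endpoint": "http://localhost:11434"},
    "openai": {"tier": 2, "endpoint": "https://api.openai.com/v1"},
    "claude-cli": {"tier": 3, "endpoint": None},  # delegated CLI agent
}

def select_engine() -> dict:
    """Pick a backend from an env var, falling back to local Ollama."""
    name = os.environ.get("AGENT_ENGINE", "ollama")
    if name not in ENGINES:
        raise ValueError(f"unknown engine: {name}")
    return {"name": name, **ENGINES[name]}
```

Switching backends is then `AGENT_ENGINE=openai python main.py` versus leaving the variable unset for local inference.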
Channels, autonomous observers, human-in-the-loop email, and instant data cards. One process, zero external cron.
Full conversational interface with streaming responses, inline keyboards, voice input via Whisper, photo and document analysis, and live model switching.
IMAP polling classifies incoming mail and generates draft replies, then pushes each draft to Telegram for explicit approve/reject before anything is sent.
Seven autonomous background tasks on internal cron schedules: email digest, morning brief, node health, daily snippet, content review, follow-up nags, and git push hooks.
Instant structured data cards that bypass the engine for speed. Weather, crypto, gold, stocks, forex, train departures, and infrastructure status.
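A fast path like this usually means known triggers skip the model entirely and hit a data fetcher directly. The sketch below is an assumption about the mechanism; the trigger words and card shapes are placeholders.

```python
# Illustrative fast path: known keywords bypass the LLM and call a fetcher.
# The fetchers here are stubs; a real one would hit a weather or market API.
CARD_FETCHERS = {
    "weather": lambda: {"card": "weather", "temp_c": 21},
    "gold": lambda: {"card": "gold", "usd_per_oz": 2400},
}

def handle_message(text: str):
    """Return a structured card for a known trigger, else None
    so the message falls through to the engine."""
    fetcher = CARD_FETCHERS.get(text.strip().lower())
    return fetcher() if fetcher else None
```

Anything that misses the card table goes to the configured engine as a normal conversational turn.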
Outbound email lifecycle: classify → draft → Telegram approve/reject → send via Gmail API → auto-create follow-up tracker.
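The lifecycle above can be modeled as a small state machine. This is a sketch of that shape, not the project's actual schema; the state names are assumptions drawn from the pipeline description.

```python
from enum import Enum

class DraftState(Enum):
    CLASSIFIED = "classified"
    DRAFTED = "drafted"
    PENDING_APPROVAL = "pending_approval"
    APPROVED = "approved"
    REJECTED = "rejected"
    SENT = "sent"
    TRACKING = "tracking"  # follow-up tracker created after send

# Legal transitions mirroring classify -> draft -> approve/reject -> send -> track.
TRANSITIONS = {
    DraftState.CLASSIFIED: {DraftState.DRAFTED},
    DraftState.DRAFTED: {DraftState.PENDING_APPROVAL},
    DraftState.PENDING_APPROVAL: {DraftState.APPROVED, DraftState.REJECTED},
    DraftState.APPROVED: {DraftState.SENT},
    DraftState.SENT: {DraftState.TRACKING},
}

def advance(state: DraftState, target: DraftState) -> DraftState:
    """Move a draft forward, rejecting any transition the pipeline forbids."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state.value} -> {target.value}")
    return target
```

The key property is that `SENT` is unreachable from anything but `APPROVED`: a rejected draft is a dead end, which is what "the agent suggests — you decide" means in practice.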
User-defined scheduled tasks and reminders via /schedule and /remind commands. Stored in SQLite, survives restarts.
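Restart-surviving reminders reduce to one SQLite table plus a due-date query. The schema below is a minimal sketch under assumed column names, not the project's actual tables.

```python
import sqlite3

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Hypothetical reminders table; column names are illustrative."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS reminders (
               id INTEGER PRIMARY KEY,
               due_at TEXT NOT NULL,   -- ISO 8601 timestamp
               message TEXT NOT NULL,
               done INTEGER DEFAULT 0
           )"""
    )
    return conn

def add_reminder(conn, due_at: str, message: str) -> int:
    cur = conn.execute(
        "INSERT INTO reminders (due_at, message) VALUES (?, ?)",
        (due_at, message),
    )
    conn.commit()
    return cur.lastrowid

def due_reminders(conn, now: str) -> list[tuple]:
    # ISO 8601 strings sort chronologically, so plain text comparison works.
    return conn.execute(
        "SELECT id, message FROM reminders WHERE done = 0 AND due_at <= ?",
        (now,),
    ).fetchall()
```

A background loop would call `due_reminders` on each tick, fire the matching Telegram messages, and mark the rows done; because the rows live on disk, a restart changes nothing.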
All Tier 1 and Tier 2 engines get six tools out of the box: bash, read, write, edit, glob, grep. No external dependencies.
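All six of those tools map to the standard library, which is how "no external dependencies" is achievable. A minimal dispatcher might look like this sketch; the handler signatures are assumptions, and `edit` is elided because its exact semantics aren't specified above.

```python
import glob as globlib
import re
import subprocess

def run_bash(cmd: str) -> str:
    """Run a shell command and return its stdout."""
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

def write_file(path: str, content: str) -> str:
    with open(path, "w") as f:
        f.write(content)
    return f"wrote {len(content)} bytes"

def grep(pattern: str, text: str) -> list[str]:
    """Return the lines of text matching a regex pattern."""
    return [line for line in text.splitlines() if re.search(pattern, line)]

TOOLS = {
    "bash": run_bash,
    "read": read_file,
    "write": write_file,
    "glob": globlib.glob,
    "grep": grep,
    # "edit" would be a targeted search-and-replace on a file; omitted here.
}

def dispatch(name: str, *args):
    return TOOLS[name](*args)
```

The engine emits a tool name plus arguments; `dispatch` runs the handler and returns the result as the tool observation.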
Persistent memory injection across conversations. SQLite-backed sessions, drafts, follow-ups, tasks, and observer state.
Your agent runs on your hardware. Your data stays on your machine. No cloud platform between you and your models.
One authorized Telegram user ID. No shared access, no multi-tenant complexity, and a minimal attack surface.
Phones home to nobody. No analytics, no tracking, no usage reporting. Check the source — it's open.
Ollama keeps everything local. API backends use direct HTTP. No managed platform between you and inference.
Stdlib HTTP for API calls. No bloated SDK chains, no transitive dependency trees, no supply chain risk.
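An OpenAI-style chat call needs nothing beyond `urllib`. The helper below is a sketch of that approach; the base URL, model name, and endpoint path are placeholders for whatever API the engine targets.

```python
import json
import urllib.request

def build_chat_request(
    base_url: str, api_key: str, model: str, prompt: str
) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request with the stdlib only."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending is a single call: urllib.request.urlopen(req) -- no SDK involved.
```

Swapping providers means changing `base_url` and the key, which is what makes the single-env-var backend switch cheap.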
Email drafts require explicit Telegram approval before sending. The agent suggests — you decide.
Run on your own server, your own GPU, your own network. Not a managed platform. Not a SaaS product. Yours.
Clone, configure, run. Or deploy as a systemd service for always-on operation.
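For the systemd route, a unit along these lines would work; the service name, user, paths, and entry point below are placeholders, not values from the project.

```ini
[Unit]
Description=Agent bot (example unit; name and paths are placeholders)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=agent
WorkingDirectory=/opt/agent
EnvironmentFile=/opt/agent/.env
ExecStart=/usr/bin/python3 /opt/agent/main.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Drop it in `/etc/systemd/system/`, then `systemctl enable --now <unit>` keeps the agent running across reboots and crashes.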