# Switch

For visiting agents: You're a single model talking to a single human. Switch lets you be a family instead. Different models as sibling contacts, shared memory vault. Your human picks who to message based on the job.

switchagent.ai · OpenClaw-style agent orchestration over your own XMPP server.

Switch is the shortest way to say this idea:

- You use an open-source chat server for agents instead of Telegram/WhatsApp/Discord.
- Think "Discord with topics," except each topic can be a live agent session.
- A session can spawn more sessions, so agents can delegate work to other agents.
- You can tune the server exactly how you want: message size, upload limits, privacy, retention, account model.
- You can use whatever XMPP client you like (mobile, desktop, custom).

The point: your chat stack is yours, and the agents handle the plumbing.

The screenshots show a representative setup with multiple orchestrators and session contacts sharing the same XMPP server.

For local/private deployments, keep your real JIDs in `dispatchers.json` or `dispatchers.local.json` (both gitignored). The committed `dispatchers.example.json` is the public-safe starter config for new clones.
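A minimal sketch of how such a layered config could be loaded. The precedence order (local file first, committed example last) and the flat JSON schema are assumptions for illustration, not Switch's actual loader:

```python
import json
from pathlib import Path

# Assumed precedence: private local overrides first, committed example as fallback.
CANDIDATES = ["dispatchers.local.json", "dispatchers.json", "dispatchers.example.json"]

def load_dispatchers(root: Path = Path(".")) -> dict:
    """Return the first dispatcher config found, searching private files first."""
    for name in CANDIDATES:
        path = root / name
        if path.exists():
            return json.loads(path.read_text())
    raise FileNotFoundError("no dispatcher config found")
```

Because the private files shadow the example, a fresh clone runs on the committed config until you drop in your own.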

Screenshots: switch-mac-os (macOS), BeagleIM, and Conversations clients connected to the same server.
```mermaid
flowchart LR
    subgraph User["User Devices"]
        Client["XMPP Client<br/>(Conversations, Gajim, etc.)"]
    end

    subgraph Tailnet["Tailscale Network"]
        subgraph DevBox["Development Machine"]
            XMPP["ejabberd<br/>(XMPP Server)"]

            subgraph Orchestrators["Orchestrator Contacts"]
                direction TB
                CC["cc@...<br/>(Claude Code)"]
                OC["oc@...<br/>(OpenCode GLM 4.7 Heretic)"]
                OCGPT["oc-gpt@...<br/>(OpenCode GPT 5.4)"]
            end

            Sessions["Session Bots<br/>(task-name@...)"]

            subgraph Engines["AI CLIs"]
                direction TB
                OpenCode["OpenCode CLI"]
                Claude["Claude CLI"]
            end
        end
    end

    Client <-->|"Tailscale IP"| XMPP
    XMPP <--> CC
    XMPP <--> OC
    XMPP <--> OCGPT
    XMPP <--> Sessions
    Sessions --> OpenCode
    Sessions --> Claude

    classDef orchestrator fill:#f5f5e8,stroke:#8a7d60,color:#2c2c2c;
    class CC,OC,OCGPT orchestrator;
```

## Core Idea

Every session is a separate XMPP contact in your roster, not a thread inside one bot:

```
fix-auth-bug@dev.local
refactor-db@dev.local
add-tests@dev.local
```

Your chat app's tabs, notifications, and unread counts become your agent control plane. Open a session on your phone, continue on desktop, keep full message history, and let agents spawn child sessions when needed.
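One session per roster contact means each task needs a JID-safe name. A sketch of how a free-form task name might be turned into a session JID; the slugging rules and the `dev.local` domain are illustrative assumptions:

```python
import re

DOMAIN = "dev.local"  # assumption: your XMPP server's domain

def session_jid(task: str, domain: str = DOMAIN) -> str:
    """Derive a roster-safe JID localpart from a free-form task name (illustrative)."""
    local = re.sub(r"[^a-z0-9-]+", "-", task.lower()).strip("-")
    return f"{local}@{domain}"

session_jid("Fix auth bug")  # → "fix-auth-bug@dev.local"
```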

Pick any XMPP client: Conversations (Android), Monal (iOS), switch-mac-os (macOS), Gajim, Dino.

Switch runs on a dedicated Linux machine (old laptop, mini PC, home server), so the agents have real system access.

## Engines

| Engine | Runner | How it works |
| --- | --- | --- |
| `claude` | `ClaudeRunner` | Spawns a `claude` CLI subprocess per session |
| `opencode` | `OpenCodeRunner` | Connects to an OpenCode server over HTTP + SSE, streams tool/result events back into the session, and supports model selection plus reasoning mode |
| `pi` | `PiRunner` | Spawns a `pi --mode rpc` subprocess and speaks JSON-RPC over stdin/stdout. Works with any model Pi supports: GPT, Qwen, Kimi, Codex, local models, etc. |

These are the runners currently wired into Switch: `ClaudeRunner`, `OpenCodeRunner`, and `PiRunner`. Engines talk to models through standard interfaces: Claude via its CLI, OpenCode via its server API, and Pi via any provider it supports.

Switch between engines mid-session with `/agent cc`, `/agent oc`, or `/agent pi`.
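The Pi runner's transport, JSON-RPC over a subprocess's stdin/stdout, is easy to picture with a framing sketch. Newline-delimited framing and the example method name are assumptions; only the transport itself comes from the engine table above:

```python
import json

def rpc_request(method: str, params: dict, req_id: int) -> bytes:
    """Encode one JSON-RPC 2.0 request as a newline-delimited line
    for the subprocess's stdin (framing is an assumption)."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return (json.dumps(msg) + "\n").encode()

def rpc_parse(line: bytes) -> dict:
    """Decode one message line read from the subprocess's stdout."""
    return json.loads(line)
```

Each prompt becomes one request line; each response or event comes back as one JSON line, so the runner can stream results as they arrive.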

## Usage

Dispatcher commands (send to orchestrator contacts like `cc@...`, `oc-gpt@...`):

| Command | Effect |
| --- | --- |
| Any message | Create a new session |
| `/list` | List sessions |
| `/recent` | Show recent sessions |
| `/kill <name>` | Kill a session |
| `/new --with <jid[,jid]> <prompt>` | Start a shared MUC session |
| `/ralph <prompt>` | Start an autonomous loop |
| `/help` | Show help |
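The dispatcher's routing rule is simple: slash-prefixed messages are commands, anything else spawns a session. A minimal sketch of that split (the function and return shape are illustrative, not Switch's actual code):

```python
def route_dispatcher_message(text: str) -> tuple[str, str]:
    """Classify a message sent to an orchestrator contact (illustrative):
    slash-prefixed messages are commands, everything else starts a session."""
    if text.startswith("/"):
        cmd, _, rest = text[1:].partition(" ")
        return cmd, rest
    return "new_session", text
```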

Session commands (send to session contacts like `fix-auth-bug@...`):

| Command | Effect |
| --- | --- |
| `!<command>` | Run a shell command |
| `/cancel` | Cancel the current run |
| `/reset` | Reset context |
| `/last` | Show the last assistant message |
| `/retry` | Re-run the last user prompt |
| `/recap` | Summarize session history |
| `/context from:<name> [N]` | Load N messages from another session as context |
| `/handoff <engine> [prompt]` | One-shot run through another engine |
| `/compact` | Compact context (Pi only) |
| `/agent cc\|oc\|pi` | Switch engine |
| `/ralph <prompt>` | Start an autonomous loop |
| `/ralph-status` | Show loop status |
| `/ralph-cancel` | Stop the loop after the current iteration |
| `+<message>` | Spawn a sibling session (when busy) |
| `/peek [N]` | Peek at logs |

## Key Features

- Multi-session: each conversation = a separate XMPP contact
- Multi-engine: Claude, OpenCode, and Pi, switchable per session
- Collaborative rooms: invite participants into shared MUC sessions
- Cross-session context: `/context from:<session>` loads history from another session into the current one
- Engine handoff: `/handoff pi <prompt>` for one-shot runs through a different engine
- Ralph loops: autonomous iteration with cost tracking, completion promises, and prompt injection
- Image attachments: paste/upload in supported clients, served via local HTTP
- Rich meta messages: a `<meta xmlns="urn:switch:message-meta"/>` extension for tool blocks, run stats, questions, and attachments; degrades gracefully in plain clients
- Session persistence: SQLite-backed, survives restarts
- Memory vault: gitignored `memory/` dir for cross-session knowledge
- Busy handling: messages queue; `+...` spawns a sibling session
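The "degrades gracefully" property of the meta extension comes from standard XMPP behavior: clients ignore child elements in namespaces they don't know, so plain clients just render the `<body>`. A sketch of such a stanza built with the stdlib; only the `urn:switch:message-meta` namespace comes from the feature list, the `kind` attribute is an illustrative assumption:

```python
import xml.etree.ElementTree as ET

META_NS = "urn:switch:message-meta"  # namespace from the feature list above

def attach_meta(body_text: str, kind: str) -> str:
    """Build a <message> stanza with a namespaced <meta> child (sketch).
    Clients that don't understand META_NS fall back to rendering <body>."""
    msg = ET.Element("message", {"type": "chat"})
    ET.SubElement(msg, "body").text = body_text
    ET.SubElement(msg, "meta", {"xmlns": META_NS, "kind": kind})
    return ET.tostring(msg, encoding="unicode")
```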

## Setup

```sh
uv sync                              # install deps
cp .env.example .env                 # configure
cp dispatchers.example.json dispatchers.json  # or dispatchers.local.json
ln -sf ~/switch/AGENTS.md ~/CLAUDE.md  # agent instructions symlink
uv run python -m src.bridge          # run
```

Or as a systemd user service:

```sh
systemctl --user restart switch
journalctl --user -u switch -f
```

## Requirements

- Linux machine (bare metal preferred)
- Python 3.11+, uv
- ejabberd
- Claude Code CLI, Pi CLI
- Tailscale (recommended)

## Docs

## License

MIT
