Spacebot
Thinks, executes, and responds — concurrently, not sequentially.
Built for large teams and communities.
Spacebot
dev-general
can spacebot handle rate limiting across providers?
Yep, with automatic fallback.
what happens when context fills up mid-conversation?
Compactor summarizes at 80%. Never blocks.
does it support streaming responses?
Yeah.
how do branches differ from workers?
Branches clone context. Workers start fresh.
can I run multiple agents on a single instance?
design-review
the new sidebar collapsed state looks solid, ship it
Merging.
can we add keyboard shortcuts for tab switching?
Sure, cmd+1-7.
the cortex chat panel needs a resize handle
On it.
what about mobile responsive for the channel cards?
Already handled.
nice. make the platform badges slightly smaller
support-tickets
my bot stopped responding after I changed the model
Which model?
switched from claude to gpt-4o
OpenAI key isn't set. Run `spacebot secrets set openai_api_key`.
how do I export my memory store?
Copy the SQLite db from your data directory.
that fixed it, thanks!
Nice.
is there a way to clear all memories for an agent?
infrastructure
deploy to staging failed, docker build timeout
Adding cargo chef layer for dep caching.
also the health check endpoint returns 503 during startup
Expected. Set initialDelaySeconds to 5.
can we get metrics exported to prometheus?
Already there — /metrics endpoint.
what port does the webhook listener use?
3100.
perfect, I'll wire it into grafana
product-ideas
what if cron jobs could trigger based on memory changes?
I like it. Reactive cron.
could we do agent-to-agent messaging?
Would need a routing layer on top of the shared memory store.
the cortex should be able to spawn its own workers
Phase 3.
what about voice channels?
Doable — same adapter interface, just add STT/TTS.
we should let extensions register custom tools
daily-standup
standup: finished the memory search refactor yesterday
Nice. Any regressions?
I'm working on the branch timeout logic today
Check MaxTurnsError — Rig returns chat history for partial results.
also found a bug where compaction drops the last summary
Off-by-one in the tier 2 threshold.
should branches inherit the channel's max_turns?
No — branches default to 10, channels to 5.
makes sense. shipping the fix now
An opinionated architecture for agentic computing
Workers work.
Workers get a fresh prompt and the right tools. No conversation context — just focused execution.
Workers report status back to the channel through the event bus. The channel sees live updates without polling.

can you research what changed in the Stripe API and update our webhook handler?
On it — let me pull context and get a worker on this.
also, are we still on for the deploy at 3?
Yes — staging is green. I'll run the final checks before 3.
cool. make sure we handle the new payment_intent.requires_action event
Already on it — the worker is scraping the latest changelog now.
Oscar prefers Stripe v2 webhook format. Last integration used checkout sessions 3 weeks ago. Team policy requires signature verification on all endpoints.
did we ever set up the retry logic for failed webhooks?
Not yet, I'll do that now.
Stripe API v2024-12 changelog scraped. 3 new event types identified: payment_intent.requires_action, invoice.overdue, charge.dispute.funds_withdrawn.
Webhook handler updated. Added signature verification, exponential backoff retry logic, and handlers for all 3 new event types.
Done. Scraped the Stripe API v2024-12 changelog — 3 new event types found (payment_intent.requires_action, invoice.overdue, charge.dispute.funds_withdrawn). Webhook handler updated with signature verification, exponential backoff retry logic, and handlers for all 3 events.
Branches clone the full conversation context to think deeply. They recall memories and return only the conclusion.
Branches think.
Nothing blocks.
Channel
The user-facing ambassador. One per conversation. Has soul, identity, personality. Talks to the user. Delegates everything else.
Branch
A fork of the channel's context that goes off to think. Has the channel's full history. Returns only the conclusion.
Worker
Does real work. Gets a task and the right tools. No personality, no conversation context — just focused execution.
The perfect assistant
Out of the box, with everything you need to create a fleet of capable AI employees.
Memory Graph
Eight memory types (Fact, Preference, Decision, Identity, Event, Observation, Goal, Todo) with graph edges connecting them. Hybrid recall via vector + full-text search. The cortex generates a periodic briefing instead of dumping raw results into context.
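A typed memory node with graph edges might be modeled roughly like the sketch below. The struct, field names, and `link` method are illustrative assumptions, not Spacebot's actual types:

```rust
// Hypothetical sketch of a typed memory node with graph edges.
// Names and fields are illustrative, not Spacebot's real API.

#[derive(Debug, Clone, Copy, PartialEq)]
enum MemoryKind {
    Fact,
    Preference,
    Decision,
    Identity,
    Event,
    Observation,
    Goal,
    Todo,
}

struct Memory {
    id: u64,
    kind: MemoryKind,
    content: String,
    importance: f32, // 0.0..=1.0, used to rank recall results
    edges: Vec<u64>, // ids of associated memories in the graph
}

impl Memory {
    fn link(&mut self, other: &mut Memory) {
        // Edges are bidirectional: a fact links to a decision
        // and the decision links back.
        self.edges.push(other.id);
        other.edges.push(self.id);
    }
}

fn main() {
    let mut fact = Memory {
        id: 1,
        kind: MemoryKind::Fact,
        content: "James is the primary user".into(),
        importance: 0.9,
        edges: vec![],
    };
    let mut pref = Memory {
        id: 2,
        kind: MemoryKind::Preference,
        content: "Prefers concise communication".into(),
        importance: 0.7,
        edges: vec![],
    };
    fact.link(&mut pref);
    assert_eq!(fact.edges, vec![2]);
    assert_eq!(pref.edges, vec![1]);
}
```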
Multi-Platform
Native adapters for Discord, Slack, and Telegram. Message coalescing batches rapid-fire bursts. Threading, reactions, file attachments, typing indicators, and per-channel permissions.
Task Execution
Shell, file, exec, browser, and web search tools. Workers are pluggable — built-in workers handle most tasks, or spawn OpenCode for deep coding sessions with LSP awareness. Both support interactive follow-ups.
Smart Model Routing
Process-type defaults (channels get the best conversational model, workers get cheap and fast). Task-type overrides. Prompt complexity scoring routes simple requests to cheaper models automatically. Fallback chains handle rate limits.
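The routing described above can be sketched as a lookup from process type and complexity score to a fallback chain. The model names, threshold, and function are hypothetical stand-ins, not Spacebot's real routing table:

```rust
// Illustrative sketch of process-type model routing with a fallback
// chain. Model names and thresholds are assumptions for illustration.

#[derive(Debug, Clone, Copy, PartialEq)]
enum ProcessKind {
    Channel, // user-facing conversation: best conversational model
    Branch,  // background reasoning
    Worker,  // task execution: cheap and fast
}

/// Primary model plus fallbacks, tried in order when a provider
/// rate-limits or errors out.
fn model_chain(kind: ProcessKind, complexity: f32) -> Vec<&'static str> {
    match kind {
        ProcessKind::Channel => vec!["claude-sonnet", "gpt-4o", "gemini-pro"],
        // Low complexity scores route to a cheaper model first.
        ProcessKind::Branch | ProcessKind::Worker if complexity < 0.3 => {
            vec!["gpt-4o-mini", "claude-haiku"]
        }
        ProcessKind::Branch | ProcessKind::Worker => {
            vec!["claude-sonnet", "gpt-4o-mini"]
        }
    }
}

fn main() {
    // A simple worker task falls through to the cheap tier.
    assert_eq!(model_chain(ProcessKind::Worker, 0.1)[0], "gpt-4o-mini");
    // Channels always lead with the conversational model.
    assert_eq!(model_chain(ProcessKind::Channel, 0.9)[0], "claude-sonnet");
}
```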
Scheduling
Cron jobs with natural language scheduling. "Check my inbox every 30 minutes" becomes a job with a delivery target. Active hours support with midnight wrapping. Circuit breaker auto-disables after 3 consecutive failures.
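The circuit-breaker behavior fits in a few lines: a job disables itself after three consecutive failures, and a success resets the counter. Struct and method names here are made up for illustration:

```rust
// Minimal sketch of the cron circuit breaker described above.
// Names are hypothetical, not Spacebot's actual types.

struct CronJob {
    consecutive_failures: u32,
    enabled: bool,
}

impl CronJob {
    const MAX_FAILURES: u32 = 3;

    fn new() -> Self {
        Self { consecutive_failures: 0, enabled: true }
    }

    fn record_run(&mut self, ok: bool) {
        if ok {
            // A success resets the streak.
            self.consecutive_failures = 0;
        } else {
            self.consecutive_failures += 1;
            if self.consecutive_failures >= Self::MAX_FAILURES {
                self.enabled = false; // circuit breaker trips
            }
        }
    }
}

fn main() {
    let mut job = CronJob::new();
    job.record_run(false);
    job.record_run(false);
    assert!(job.enabled); // two failures: still running
    job.record_run(false);
    assert!(!job.enabled); // third consecutive failure: auto-disabled
}
```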
Multi-Agent
Run multiple agents on one instance. Each with its own workspace, databases, identity, and cortex. A friendly community bot on Discord, a no-nonsense dev assistant on Slack, a research agent for background tasks. One binary, one deploy.
It already knows.
The Cortex sees across every conversation, every memory, every running process. It synthesizes what the agent knows into a pre-computed briefing that every conversation inherits — so nothing starts cold.

James is the primary user. Prefers concise communication, dislikes over-engineering.
Memory Bulletin
Every 60 minutes, the Cortex queries the memory graph across 8 dimensions and synthesizes a concise briefing. Every conversation reads it on every turn — lock-free, zero-copy.
Association Loop
Continuously scans memories for embedding similarity and builds graph edges between related knowledge. Facts link to decisions. Events link to goals. The graph grows smarter on its own.
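A minimal version of that association pass, assuming cosine similarity over embedding vectors and an arbitrary threshold (the function names and threshold value are illustrative, not the actual implementation):

```rust
// Sketch of the association loop: memory pairs whose embeddings
// exceed a similarity threshold get a graph edge.

fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

/// Return index pairs of embeddings similar enough to link.
fn associate(embeddings: &[Vec<f32>], threshold: f32) -> Vec<(usize, usize)> {
    let mut edges = Vec::new();
    for i in 0..embeddings.len() {
        for j in (i + 1)..embeddings.len() {
            if cosine_similarity(&embeddings[i], &embeddings[j]) >= threshold {
                edges.push((i, j));
            }
        }
    }
    edges
}

fn main() {
    let embeddings = vec![
        vec![1.0, 0.0], // a fact
        vec![0.9, 0.1], // a closely related decision
        vec![0.0, 1.0], // unrelated
    ];
    let edges = associate(&embeddings, 0.8);
    assert_eq!(edges, vec![(0, 1)]); // only the related pair links
}
```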
Cortex Chat
A persistent admin line directly to the Cortex. Full tool access — memory, shell, browser, web search, workers. One conversation per agent, accessible from anywhere.
Drop files. Get memories.
Dump text files into the ingest folder — notes, docs, logs, markdown, whatever. Spacebot chunks them, runs each chunk through an LLM with memory tools, and produces typed, graph-connected memories automatically.
No manual tagging. No reformatting. The LLM reads each chunk, classifies the content, recalls related memories to avoid duplicates, and saves distilled knowledge with importance scores and graph associations.
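The first step of that pipeline, splitting a file into overlapping chunks before each one goes to the LLM, might look like this sketch; the chunk size and overlap values are invented for illustration:

```rust
// Rough sketch of ingest chunking. Overlap preserves context that
// straddles a chunk boundary. Parameters are illustrative only.

fn chunk_text(text: &str, chunk_chars: usize, overlap: usize) -> Vec<String> {
    let chars: Vec<char> = text.chars().collect();
    let mut chunks = Vec::new();
    let mut start = 0;
    while start < chars.len() {
        let end = (start + chunk_chars).min(chars.len());
        chunks.push(chars[start..end].iter().collect());
        if end == chars.len() {
            break;
        }
        // Step back by `overlap` so adjacent chunks share context.
        start = end - overlap;
    }
    chunks
}

fn main() {
    let chunks = chunk_text("abcdefghij", 4, 1);
    assert_eq!(chunks, vec!["abcd", "defg", "ghij"]);
}
```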
Migrating from OpenClaw?
Drop your MEMORY.md and daily logs into the ingest folder — Spacebot extracts structured memories and wires them into the graph.
Skills go in the skills folder and are compatible out of the box.
What they're saying
@richiemcilroy
Founder @Cap
Using spacebot.sh from @jamiepine
@devabdultech
get spacebot.sh for your team today!!!
@zach_sndr
I think you need to check this beautiful RUST orchestration for agents. (I have it on my VPS) spacebot.sh I've moved my openclaw into and actively trying to build our marketing layer here.
@tobi
CEO @Shopify
very nice indeed
@thotsonrecord
GRAPH CENTRIC AGENTS WILL PREVAIL... Ray Kurzweil's 2029 happens THIS year
@HeyZohaib
Product @neoncommerce
you’ve solved a big number of problems out of the box. rooting for you and spacebot!
@stripeyhorse
spacebot replies so much faster than openclaw - using the same providers and same api keys..
@azapsoul
Built for teams and communities is an insane selling point. Personal agents are cool but having an agent help your entire classroom, family group or friend group is sooo useful too. Idk why other agents don't focus on this!
@tylersookochoff
There IS a better way to do memory. And Spacebot is it. Early days, but it just makes sense.
@michaelgrant
So a friend and I started down our path of personal agentic AI, of course looking at openclaw. But fortunately, our research surfaced a much better option: Spacebot. Dramatically better in all respects, including architecture, security, functionality, etc.
@zach_sndr
Coupled with your novel memory architecture- spacebot is a powerhouse from the get go! People be thinking I'm being paid to say all this, but I'm just a fan of spacebot 😬
@dingyi
Damn, I take back what I said last night: this spacebot built by the spacedrive team looks seriously impressive too, and the design is as good-looking as ever. You can subscribe, or self-host completely free. There will be more and more genuinely good OpenClaw alternatives this year. spacebot.sh

Built in Rust, for the long run.
Spacebot isn't a chatbot — it's an orchestration layer for autonomous AI processes. That's infrastructure, and infrastructure should be machine code.
Multiple AI processes share mutable state, spawn tasks, and make decisions without human oversight. Rust's strict type system and compiler enforce correctness at build time. The result is a single binary with no runtime dependencies, no garbage collector pauses, and predictable resource usage. No Docker required, no server processes, no microservices.

Every major provider, built in.
First-class support for 10 LLM providers with automatic routing, fallbacks, and rate limit handling.
Hosted or self-hosted, your call.
Pick managed cloud for speed, or self-host with priority support and SLAs. Same core product, different deployment model.
Pod
For personal use
- ✓ 1 hosted instance
- ✓ 2 shared vCPU, 1GB RAM per instance
- ✓ 3 agents per instance
- ✓ 10GB storage
- ✓ 1 dashboard seat
- ✓ All messaging platforms
Outpost
For power users
- ✓ 2 hosted instances
- ✓ 2 shared vCPU, 1.5GB RAM per instance
- ✓ 6 agents per instance
- ✓ 40GB storage
- ✓ 2 dashboard seats
- ✓ Priority support
Nebula
For teams
- ✓ 5 hosted instances
- ✓ 2 performance vCPU, 4GB RAM per instance
- ✓ 12 agents per instance
- ✓ 80GB storage
- ✓ 5 dashboard seats
- ✓ Priority support
Titan
For enterprise
- ✓ 10 hosted instances
- ✓ 4 performance vCPU, 8GB RAM per instance
- ✓ Unlimited agents per instance
- ✓ 250GB storage
- ✓ 10 dashboard seats
- ✓ Dedicated support, SLA, SSO
Community
Self-hosted open source
- ✓ Unlimited self-hosted agents
- ✓ Community Discord support
- ✓ BYO infrastructure and keys
- ✓ Manual upgrades
Basic Support
For production self-host teams
- ✓ Priority support response targets
- ✓ Shared support channel
- ✓ Bug fix prioritization
Priority Support
For teams that need fast responses
- ✓ Dedicated support channel
- ✓ Fastest response times
- ✓ Bug fix prioritization
- ✓ Deployment architecture review
- ✓ Direct engineer access
Enterprise Contract
For regulated or large-scale deployments
- ✓ SLA options and escalation path
- ✓ SSO/SAML and security reviews
- ✓ Dedicated support channel and onboarding
- ✓ Managed updates on your infrastructure
Bring your own keys
Connect your own API keys from any LLM provider — Anthropic, OpenAI, OpenRouter, and more. Bundled LLM credits are coming soon.
Dashboard seats
Seats are for the control plane — agent config, memory, conversations. End users on Discord, Slack, or Telegram don't need one. Extra seats $20/mo. Agent caps are per hosted instance, not account-wide.
Enterprise migration path
Start in hosted cloud, then move to self-host with support contracts as your compliance and procurement requirements evolve.
All plans include Discord, Slack & Telegram · hybrid memory search · coding & browser workers · cron jobs · daily backups
All plans currently require your own LLM API keys (BYOK). Bundled LLM credits will be included with every plan in a future update.
Self host with one command.
Single binary. No runtime dependencies. No microservices. Everything runs from one container.
Web UI at localhost:19898 — add an API key in Settings and you're live.
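For illustration only, a single-container deployment could look something like the snippet below; the image name and volume path are hypothetical, so check the official install docs for the actual command. The port matches the Web UI port above.

```shell
# Hypothetical single-container run. Image name and data path are
# illustrative placeholders, not the official ones.
docker run -d \
  --name spacebot \
  -p 19898:19898 \
  -v ./spacebot-data:/data \
  spacebot/spacebot:latest
```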