Simple Explanation: What is moltbot?
Understanding moltbot's core features in plain language
Your Personal AI Assistant
Imagine having a super-smart assistant that you can chat with through WhatsApp, Telegram, Discord, and other messaging apps you already use. Just send messages like you would to a friend, and it helps you with all kinds of tasks.
An AI That Actually Does Things
Unlike regular chatbots, moltbot can execute real actions on your computer: run commands, process files, send emails, control smart home devices, and more. It doesn't just "talk"; it "does."
Runs Entirely Locally
moltbot runs locally on your computer, keeping session data, credentials, and memory on your device rather than on third-party servers. You keep complete control over your data.
Multi-Platform Connection
Whether you're using WhatsApp, Telegram, or Discord, you can talk to the same AI assistant. Messages sync automatically, so you can get help anytime, anywhere.
Supported Platforms
Use your AI assistant on the platforms you already know
System Architecture Overview
Understanding how moltbot's core components work together
Gateway: The Central Hub
The Gateway is the heart of moltbot - a long-lived daemon that manages all communications
WebSocket API
The Gateway uses a WebSocket-based API with JSON payloads for real-time communication. All clients, nodes, and messaging channels connect through this unified protocol, formally defined using TypeBox schemas for type safety.
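As a rough illustration of what a JSON payload on that WebSocket might look like, here is a minimal sketch. The field names and the hand-rolled check below are assumptions for the example; moltbot's actual wire format is defined by its TypeBox schemas.

```typescript
// Hypothetical shape of a Gateway frame; field names are illustrative,
// not moltbot's actual protocol.
interface GatewayFrame {
  type: string;     // e.g. "message" or "tool_result" (invented examples)
  id: string;       // correlation id for matching requests to responses
  payload: unknown; // type-specific body
}

// Minimal runtime check standing in for what a TypeBox schema validates.
function parseFrame(raw: string): GatewayFrame {
  const data = JSON.parse(raw);
  if (typeof data.type !== "string" || typeof data.id !== "string") {
    throw new Error("invalid frame");
  }
  return data as GatewayFrame;
}

const frame = parseFrame('{"type":"message","id":"42","payload":{"text":"hi"}}');
```

Because every client and node speaks this one protocol, adding a new messaging channel does not change how the rest of the system communicates.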
Channel Management
Manages persistent connections to WhatsApp (via Baileys), Telegram (via grammY), Discord, Slack (via Bolt), Signal, iMessage, Matrix, Nostr, and Microsoft Teams. Only the Gateway maintains these connections, simplifying the overall architecture.
Policy Enforcement
Enforces access controls including pairing approval for new devices, mention-gating in groups,
and allowlists to limit blast radius. The /tools/invoke HTTP API provides
authenticated tool calls. Learn more in our moltbot security guide.
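A sketch of what an authenticated call to the /tools/invoke endpoint could look like follows. Only the endpoint path comes from the text above; the port, bearer-token header, and body fields are assumptions for illustration.

```typescript
// Build (but do not send) a hypothetical /tools/invoke request.
// The auth scheme and body shape are invented for this sketch.
function buildInvokeRequest(
  tool: string,
  args: Record<string, unknown>,
  token: string
) {
  return {
    url: "http://127.0.0.1:8080/tools/invoke", // hypothetical local port
    method: "POST" as const,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`, // assumed auth scheme
    },
    body: JSON.stringify({ tool, args }),
  };
}

const req = buildInvokeRequest("shell.run", { cmd: "uptime" }, "my-token");
```

The point of routing tool calls through one authenticated endpoint is that the Gateway's policies (pairing approval, allowlists) apply uniformly, no matter which channel the request originated from.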
Local State Management
All session data, configurations, and credentials are stored locally in ~/.clawdbot.
Memory persists via Markdown files, giving you complete data sovereignty.
No cloud dependencies required for core operations.
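As a sketch, the local state might be laid out like this. Only ~/.clawdbot and its skills subdirectory are named in this document; the rest of the layout is illustrative.

```
~/.clawdbot/
├── skills/   # managed skills (see AgentSkills)
└── ...       # sessions, configuration, credentials, Markdown memory
```

Because everything lives in one directory, backing up or migrating your assistant is as simple as copying that folder.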
Node Capabilities: Your AI's Eyes and Hands
Nodes are devices that connect to the Gateway and expose their hardware/software capabilities to moltbot
Connect your laptop, phone, or other devices as Nodes. Each exposes its capabilities to the AI agent, enabling moltbot to take real-world actions. Security is enforced via pairing approval and sandboxed execution.
AgentSkills: 565+ Ways to Extend moltbot
The open AgentSkills standard powers moltbot's extensibility
Simple Skill Structure
Each skill is a directory containing a SKILL.md file with YAML frontmatter
defining name, description, and metadata. The AI reads these instructions to learn
how to use external tools and services.
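A minimal skill directory might look like the sketch below. The skill name, description, and body are invented for the example; only the SKILL.md filename and the YAML-frontmatter convention come from the text above.

```markdown
---
name: weather
description: Fetch the current weather for a city
---

# Weather

To answer weather questions, run the (hypothetical) `weather-cli <city>`
command and summarize its output for the user.
```

The frontmatter tells the AI what the skill is for; the Markdown body tells it how to use the underlying tool.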
ClawdHub Marketplace
Browse and install from 565+ community-built skills on ClawdHub (clawdhub.com).
Skills cover Web Development, DevOps, Home Automation, Productivity, and more.
Install with: npx clawdhub@latest install <skill>
Skill Loading Priority
Skills load from three locations with increasing priority: bundled skills (lowest),
managed skills in ~/.clawdbot/skills, and workspace skills in
<workspace>/skills (highest). User skills can override bundled ones.
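The three-tier override behavior can be sketched as a simple "last directory wins" lookup. The bundled path and the resolver itself are invented for this example; only the three tiers and their priority order come from the text above.

```typescript
// Lowest to highest priority, as described above. The bundled path
// is hypothetical; the other two are quoted from the docs.
const skillDirs = [
  "/opt/moltbot/skills",  // bundled (lowest priority) — hypothetical path
  "~/.clawdbot/skills",   // managed
  "<workspace>/skills",   // workspace (highest priority)
];

// Later directories overwrite earlier ones, so a workspace skill
// shadows a bundled skill with the same name.
function resolveSkills(found: Record<string, string[]>): Map<string, string> {
  const resolved = new Map<string, string>();
  for (const dir of skillDirs) {
    for (const name of found[dir] ?? []) {
      resolved.set(name, dir);
    }
  }
  return resolved;
}

const winner = resolveSkills({
  "/opt/moltbot/skills": ["git"],
  "<workspace>/skills": ["git"],
}).get("git");
```

This is the mechanism that lets you customize any built-in behavior: drop a same-named skill into your workspace and it takes precedence.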
Self-Improving AI
A remarkable feature: moltbot can autonomously write and modify its own skills. Through conversation, the AI learns new capabilities and adapts its behavior over time, pushing the boundaries of personal AI autonomy.
See moltbot use cases for practical examples of skills in action.
Multi-Model Support
Choose your AI backbone - moltbot is model-agnostic
Anthropic Claude
The recommended choice for moltbot. Claude's instruction-following capabilities make it ideal for tool-enabled agents. Supports Claude 3.5 Sonnet, Claude 3 Opus, and newer models for best results with automation tasks.
OpenAI GPT-4
Full support for GPT-4 and GPT-4 Turbo via the OpenAI API. Switch between providers without changing your setup; just update your API key configuration.
AWS Bedrock
Enterprise-grade deployment with automatic model discovery. Configure defaults and let moltbot discover available models in your AWS account. Ideal for organizations with existing AWS infrastructure.
Local Models via Ollama
Maximum privacy with fully local inference. Run open-source models like Llama, Mistral, or CodeLlama on your own hardware. No API costs, no data leaves your machine. Also supports LM Studio.
Technology Stack
Built on modern, proven technologies