Installation Guide

Clawdbot on macOS

The best platform for self-hosted AI. Apple Silicon delivers exceptional AI performance with whisper-quiet operation and minimal power consumption.

System Requirements

macOS: 13 Ventura or later
RAM: 8GB minimum (16GB for Ollama)
Storage: 10GB free (50GB for local models)
Chip: Apple Silicon recommended

Why Mac Mini for Clawdbot?

Mac Mini has become the go-to choice for self-hosted AI enthusiasts. Here's why:

Unified Memory

Apple Silicon shares memory between the CPU and GPU, so a 48GB Mac can run 70B models that would require 80GB of VRAM on a PC.

🔇 Silent Operation

No screaming GPU fans: the Mac Mini stays nearly silent even under AI workloads. Perfect for a home office.

🔌 Low Power

7-20W at idle versus 150W+ for a gaming PC. Run it 24/7 for roughly $20/year in electricity. Green AI computing.

📦 Tiny Footprint

It fits anywhere: stack it, mount it, or hide it. The M4 Mini is even smaller at 5 inches square.
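The yearly electricity figure above can be sanity-checked with shell arithmetic. The 15W draw and $0.15/kWh rate are assumptions; adjust both for your hardware and local prices:

```shell
# Back-of-the-envelope check of the "~$20/year" claim.
# Assumptions: ~15 W average draw, $0.15 per kWh.
WATTS=15
KWH_YEAR=$(( WATTS * 24 * 365 / 1000 ))   # ~131 kWh per year
COST=$(( KWH_YEAR * 15 / 100 ))           # ~$19 per year at $0.15/kWh
echo "~${KWH_YEAR} kWh/year, ~\$${COST}/year"
```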

Mac Models Compared for AI

Model | Chip | RAM | Price | AI Perf | Idle Power | Best For
Mac Mini M4 (Best Pick) | M4 | 16GB | $599 | ⭐⭐⭐⭐⭐ | ~15W | Best value for AI
Mac Mini M4 Pro | M4 Pro | 24GB | $1,399 | ⭐⭐⭐⭐⭐ | ~20W | 70B models, heavy use
Mac Mini M2 (Best Pick) | M2 | 16GB | $499 (refurb) | ⭐⭐⭐⭐ | ~10W | Budget friendly
Mac Mini M1 | M1 | 16GB | $350 (used) | ⭐⭐⭐ | ~7W | Entry level
MacBook Air M2/M3 | M2/M3 | 16GB | $999+ | ⭐⭐⭐⭐ | ~5W | Portable option

* Prices as of January 2026. Check Apple Store or authorized resellers.

Run Ollama + Clawdbot

Combine Clawdbot with Ollama for 100% free AI on your Mac. No API costs, complete privacy, works offline.

# Install Ollama
brew install ollama

# Pull a model
ollama pull llama3.1:8b

# Configure Clawdbot .env
AI_PROVIDER=ollama
OLLAMA_MODEL=llama3.1:8b
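Before pointing Clawdbot at Ollama, it's worth confirming the server is up. A quick check, assuming Ollama's default port 11434:

```shell
# Have launchd keep the Ollama server running in the background
brew services start ollama

# List locally available models (an empty "models" array means nothing is pulled yet)
curl -s http://localhost:11434/api/tags
```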

Complete Ollama Guide →

Recommended Models by RAM

Apple Silicon performance benchmarks
Model | RAM | Speed
Llama 3.2 3B | 4GB | 50 tok/s
Llama 3.1 8B | 8GB | 35 tok/s
Mistral 7B | 8GB | 40 tok/s
Qwen 2.5 14B | 16GB | 25 tok/s
Llama 3.1 70B | 48GB | 10 tok/s

Quick Installation

Method 1: One-Line Install (Easiest)

Run this command in Terminal to install everything automatically:

curl -fsSL https://clawdbot.dev/install | bash

This installs Clawdbot, sets up configuration, and provides next steps.

Method 2: Using Homebrew

Step 1: Install Dependencies

# Install Homebrew if needed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install Node.js 20 (node@20 is keg-only, so add it to your PATH)
brew install node@20
echo 'export PATH="/opt/homebrew/opt/node@20/bin:$PATH"' >> ~/.zshrc

Step 2: Clone & Setup

git clone https://github.com/clawdbot/clawdbot.git
cd clawdbot
npm install
cp .env.example .env

Step 3: Configure & Run

# Edit configuration
nano .env

# Start Clawdbot
npm start

Method 3: Docker Desktop

Docker isolates Clawdbot from your system. Great for clean installs.

# Install Docker Desktop from docker.com first
git clone https://github.com/clawdbot/clawdbot.git
cd clawdbot
cp .env.example .env

# Edit .env with your API keys
docker compose up -d
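To confirm the container came up cleanly, assuming the compose service is named `clawdbot` (check the repository's docker-compose.yml for the actual service name):

```shell
# Show container status, then follow the logs
docker compose ps
docker compose logs -f clawdbot
```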

Docker Guide →

Getting API Keys

Claude API (Anthropic)
Recommended for best quality

Sign up at console.anthropic.com and create an API key.

Get Claude Key →
Telegram Bot Token
For Telegram integration

Message @BotFather on Telegram to create a new bot.

Open BotFather →
Or Use Ollama
Free, local, private

No API key needed. Run AI models locally on your Mac.

Ollama Setup →
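Once you have your keys, they go into the `.env` file you copied earlier. A hypothetical example, assuming variable names along the lines of the Ollama snippet above (check `.env.example` for the exact names your version uses):

```shell
# Hypothetical .env entries; verify the names against .env.example
AI_PROVIDER=claude                  # assumption: provider switch as in the Ollama example
ANTHROPIC_API_KEY=sk-ant-...        # from console.anthropic.com
TELEGRAM_BOT_TOKEN=123456789:AA...  # from @BotFather
```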

Run Clawdbot at Startup

Make Clawdbot start automatically when your Mac boots—perfect for 24/7 operation.

Create a Launch Agent

# Create the plist file
nano ~/Library/LaunchAgents/com.clawdbot.plist

Paste this content, replacing YOUR_USER with your username. If you installed Node with Homebrew on Apple Silicon, use /opt/homebrew/bin/node (or /opt/homebrew/opt/node@20/bin/node) instead of /usr/local/bin/node:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.clawdbot</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/node</string>
    <string>/Users/YOUR_USER/clawdbot/index.js</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>WorkingDirectory</key>
  <string>/Users/YOUR_USER/clawdbot</string>
</dict>
</plist>

# Load the service
launchctl load ~/Library/LaunchAgents/com.clawdbot.plist
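To confirm the agent is actually loaded and running (the label below matches the plist above):

```shell
# List loaded jobs and filter for Clawdbot; no match means it isn't loaded
launchctl list | grep com.clawdbot

# Unload it again when you need to stop it or edit the plist
launchctl unload ~/Library/LaunchAgents/com.clawdbot.plist
```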

Common Issues

Command not found: npm

Node.js isn't installed or isn't on your PATH. Run brew install node@20, add /opt/homebrew/opt/node@20/bin to your PATH (node@20 is keg-only), and restart Terminal.

Permission denied errors

Don't use sudo with npm. Fix with: sudo chown -R $(whoami) ~/.npm

Ollama models running slow

Check that Ollama is using the GPU via Metal. Run ollama ps while a model is loaded and confirm the PROCESSOR column shows 100% GPU; the server log (from ollama serve) also reports Metal support at startup.