πŸ¦™
100% Free

Clawdbot + Ollama

Run Clawdbot with zero API costs. Ollama lets you run powerful LLMs like Llama 3, Mistral, and more entirely locally, on your own hardware.

πŸ’° Zero API Costs

No monthly bills from OpenAI or Anthropic. Once set up, it's completely free to use.

πŸ”’ 100% Private

Your data never leaves your machine. Perfect for sensitive conversations and business use.

⚑ No Rate Limits

Send as many messages as you want. No quotas, no throttling, no waiting.

System Requirements

Minimum (7B Models)
For Llama 3 8B, Mistral 7B
  • 8 GB RAM
  • 10 GB free disk space
  • Any modern CPU (4+ cores)
  • GPU optional but helps
Recommended (Best Experience)
For smooth performance
  βœ“ 16 GB+ RAM
  βœ“ Apple Silicon (M1/M2/M3) or NVIDIA GPU
  βœ“ SSD storage
  βœ“ Mac Mini M2/M4 is perfect!
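
Not sure what your machine has? These standard system tools (nothing Clawdbot-specific) report RAM and CPU core count:

# macOS
sysctl -n hw.memsize   # total RAM in bytes
sysctl -n hw.ncpu      # CPU core count

# Linux
free -h                # total RAM
nproc                  # CPU core count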

Setup Guide

1. Install Ollama

Download and install Ollama for your platform:

# macOS / Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Windows: Download from ollama.ai
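
To confirm the install worked, check the version and that the local server answers. Ollama listens on port 11434 by default; if the curl check fails, start the server with ollama serve first.

ollama --version
curl http://localhost:11434    # should respond "Ollama is running"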

2. Download a Model

Pull Llama 3.1 (recommended for most users):

ollama pull llama3.1
# This downloads ~4.7 GB
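
Once the pull finishes, a quick one-off prompt confirms the model actually loads and generates (the prompt text is just an example):

ollama list                       # the new model should appear here
ollama run llama3.1 "Say hello"   # one-off generation to verify the model loads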

3. Configure Clawdbot

Update your Clawdbot .env file:

# .env file
AI_PROVIDER=ollama
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama3.1
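
Before restarting Clawdbot, you can hit the same endpoint it will use. Ollama exposes a small HTTP API on that host: /api/tags lists installed models and /api/chat runs a completion. The JSON below is Ollama's standard chat request shape, not anything Clawdbot-specific.

# should list llama3.1 among the installed models
curl http://localhost:11434/api/tags

# minimal chat request against the same model Clawdbot will use
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": false
}'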

4. Start Chatting!

Restart Clawdbot and enjoy free AI:

clawdbot restart
# Now using local Llama 3.1!
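
To confirm requests really are hitting your local model, check Ollama's side: after sending a message through Clawdbot, the model should show up as loaded.

ollama ps    # models currently loaded in memory, with size and time until unload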

Available Models

Model                       Size    Speed      Quality    Command
Llama 3.1 8B (Recommended)  4.7 GB  Fast       Good       ollama pull llama3.1:8b
Llama 3.1 70B               40 GB   Slow       Excellent  ollama pull llama3.1:70b
Mistral 7B                  4.1 GB  Fast       Good       ollama pull mistral:7b
Mixtral 8x7B                26 GB   Medium     Very Good  ollama pull mixtral:8x7b
Phi-3                       2.2 GB  Very Fast  Decent     ollama pull phi3
Gemma 2 9B                  5.4 GB  Fast       Good       ollama pull gemma2:9b
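
Switching Clawdbot between any of these is just a matter of pulling the tag and updating OLLAMA_MODEL, reusing the .env keys and restart command from the setup guide above:

ollama pull phi3     # example: a small, fast model from the table

# .env
OLLAMA_MODEL=phi3

clawdbot restart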

Pro Tips

πŸš€ Speed Up with GPU

Apple Silicon and NVIDIA GPUs automatically accelerate inference. A Mac Mini M2 can run Llama 3.1 8B at 30+ tokens/second.
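
To see the numbers on your own hardware, Ollama can print timing stats for a run (the --verbose flag is Ollama's, not Clawdbot's; the prompt is just an example):

# prints load time, prompt eval rate, and eval rate (tokens/second) after the response
ollama run llama3.1 --verbose "Explain RAM in one sentence."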

πŸ’Ύ Manage Disk Space

Models are stored in ~/.ollama. Use ollama list to see installed models and ollama rm <model> to delete one.
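
For example (the model removed below is just whichever large one you no longer use):

ollama list              # installed models and their sizes
ollama rm mixtral:8x7b   # frees ~26 GB
du -sh ~/.ollama         # total disk used by Ollama's model store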

πŸ”„ Run Ollama as Service

On Linux, Ollama runs as a systemd service automatically. On Mac, use ollama serve in the background.
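
A few ways to check or keep the server running, depending on how you installed it (the Homebrew option only applies if you installed Ollama via brew):

# Linux: the installer registers a systemd service
systemctl status ollama

# macOS: keep the server running in the background from a terminal
nohup ollama serve > ~/ollama.log 2>&1 &

# macOS alternative, only if installed via Homebrew
brew services start ollama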

🌐 Remote Access

To access Ollama from another machine, set OLLAMA_HOST=0.0.0.0 on the machine running Ollama so it listens on all interfaces (careful: this exposes the API to your whole network, so restrict access with a firewall).
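
A sketch of the two halves, assuming the Ollama box is a Linux machine on your LAN at 192.168.1.50 (placeholder address); the systemd override is how the variable gets applied to the service rather than just your shell:

# On the Ollama machine (Linux/systemd): make the server listen on all interfaces
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama

# On the Clawdbot machine: point .env at the remote server instead of localhost
# OLLAMA_HOST=http://192.168.1.50:11434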