AI Models for Clawdbot
Clawdbot supports multiple AI providers, from cloud-based models like Claude and GPT-4 to self-hosted options with Ollama.
Cloud AI Models
Cloud models offer the best quality and require no local hardware; you pay per use through each provider's API.
| Provider | Model | Best For | Speed | Price |
|---|---|---|---|---|
| Anthropic | Claude 3.5 Sonnet (Recommended) | Best overall, excellent at coding and reasoning | Fast | $$ |
| Anthropic | Claude 3 Opus | Most capable, complex tasks | Medium | $$$ |
| Anthropic | Claude 3 Haiku (Recommended) | Fastest, budget-friendly | Very Fast | $ |
| OpenAI | GPT-4 Turbo | Strong general purpose | Medium | $$$ |
| OpenAI | GPT-4o (Recommended) | Multimodal, fast responses | Fast | $$ |
| OpenAI | GPT-4o mini (Recommended) | Very affordable, good quality | Very Fast | $ |
| Google | Gemini 1.5 Pro | Long context, multimodal | Fast | $$ |
| Google | Gemini 1.5 Flash | Fast and affordable | Very Fast | $ |
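To point Clawdbot at a cloud model, you typically set the provider's API key and a default model in your `.env` file. A minimal sketch is below; aside from `DEFAULT_MODEL`, which also appears in the Ollama setup later on this page, the variable names are assumptions and may differ in your Clawdbot version.

```bash
# Hypothetical .env sketch for a cloud provider; variable names other than
# DEFAULT_MODEL are assumptions, so check your Clawdbot version for the exact keys.
ANTHROPIC_API_KEY=sk-ant-...                  # API key from the Anthropic console
DEFAULT_MODEL=claude-3-5-sonnet-20241022      # Anthropic's model ID for Claude 3.5 Sonnet
```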
Provider Details
Claude models excel at nuanced conversations, coding, and following complex instructions.
- Best overall quality
- 200K token context
- Vision capabilities
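Before wiring a key into Clawdbot, it can help to confirm it works against Anthropic's Messages API directly. This curl targets Anthropic's public API, not Clawdbot:

```bash
# Smoke test for an Anthropic API key using the public Messages API
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

OpenAI and Google expose similar HTTP endpoints for verifying a key before use.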
GPT-4 models are versatile and widely supported with excellent documentation.
- Great ecosystem
- DALL-E integration
- Function calling
Gemini offers excellent multimodal capabilities and very long context windows.
- 1M token context
- Strong multimodal
- Competitive pricing
Local Models with Ollama
Run AI models locally for complete privacy and zero API costs. This requires reasonably capable hardware; see the table below for per-model requirements.
| Model | Size | Best For | Requirements |
|---|---|---|---|
| Llama 3.1 70B | 40GB | Best open-source performance | 32GB+ RAM, GPU recommended |
| Llama 3.1 8B | 4.7GB | Great balance of quality and speed | 16GB RAM |
| Mistral 7B | 4GB | Efficient, good at reasoning | 16GB RAM |
| Mixtral 8x7B | 26GB | MoE architecture, excellent quality | 32GB RAM, GPU recommended |
| Phi-3 | 2GB | Very lightweight, surprisingly capable | 8GB RAM |
| CodeLlama 34B | 19GB | Specialized for coding | 32GB RAM, GPU |
1. Install Ollama

```bash
curl -fsSL https://ollama.ai/install.sh | sh
```
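After the script finishes, the `ollama` CLI should be on your PATH. A quick check (the second command is only needed if the installer did not already start Ollama as a background service):

```bash
ollama --version   # confirms the CLI is installed
ollama serve       # starts the local API server on port 11434 if it isn't already running
```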
2. Pull a Model

```bash
ollama pull llama3.1:8b
```
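Once the download completes, you can verify the model is installed and give it a one-off prompt from the terminal. The extra tags below correspond to the models in the table above; they are assumptions about the Ollama registry names, so check the Ollama library if a pull fails.

```bash
ollama list                          # shows locally installed models
ollama run llama3.1:8b "Say hello"   # quick sanity check with a single prompt

# Other models from the table (registry tags may differ)
ollama pull llama3.1:70b
ollama pull mistral
ollama pull mixtral:8x7b
ollama pull phi3
ollama pull codellama:34b
```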
3. Configure Clawdbot

```bash
# In your .env file
OLLAMA_HOST=http://localhost:11434
DEFAULT_MODEL=llama3.1:8b
```
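To confirm that Clawdbot will be able to reach Ollama at that address, you can query Ollama's HTTP API directly (these requests go to Ollama itself, not to Clawdbot):

```bash
# List the models the local Ollama server knows about
curl http://localhost:11434/api/tags

# Optionally send a single prompt through the HTTP API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Say hello",
  "stream": false
}'
```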
Choosing the Right Model
- Use Claude 3.5 Sonnet or GPT-4o for the best balance of quality and cost. Recommended: Claude 3.5 Sonnet.
- Claude 3 Haiku or GPT-4o mini are 10x cheaper while still very capable. Recommended: Claude 3 Haiku.
- Claude 3.5 Sonnet excels at code generation and debugging. Recommended: Claude 3.5 Sonnet.
- Run Llama or Mistral locally with Ollama; your data never leaves your machine. Recommended: Llama 3.1 8B.

Ready to Get Started?
Install Clawdbot and connect to your preferred AI model.