Errmmm..... it requires models with at least 300B parameters. Otherwise, adequate behavior can't be guaranteed.
That means... no local uncensored models:
CRITICAL
models.small_params Small models require sandboxing and web tools disabled
Small models (<=300B params) detected:
- ollama/huihui_ai/qwen3-abliterated:8b (8B) @ agents.defaults.model.primary (unsafe; sandbox=off; web=[web_fetch, browser])
Uncontrolled input tools allowed: web_fetch, browser.
Small models are not recommended for untrusted inputs.
Fix: If you must use small models, enable sandboxing for all sessions (agents.defaults.sandbox.mode="all") and disable web_search/web_fetch/browser (tools.deny=["group:web","browser"]).
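For reference, here is roughly what that fix could look like as config. This is a minimal sketch: the file location (~/.openclaw/openclaw.json) and the exact nesting are my assumption, but the two keys themselves come straight from the warning above. Comments are JSON5-style.

{
  // Sandbox every session, not just some of them (the warning's suggested fix)
  "agents": {
    "defaults": {
      "sandbox": { "mode": "all" }
    }
  },
  // Deny the uncontrolled-input tools the warning flags
  "tools": {
    "deny": ["group:web", "browser"]
  }
}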
Local models
Local is doable, but OpenClaw expects a large context window plus strong defenses against prompt injection. Small GPUs force truncated context and weaken those safety behaviors. Aim high: two or more maxed-out Mac Studios, or an equivalent GPU rig (~$30k+). A single 24 GB GPU works only for lighter prompts, with higher latency. Use the largest full-size model variant you can run; aggressively quantized or "small" checkpoints raise prompt-injection risk (see Security). A config sketch follows below.
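If you do go local anyway, the model is picked in the same config file. Again a sketch under the same assumptions as above; the "qwen3:235b" tag is just a placeholder for whatever full-size checkpoint your hardware can actually hold (the warning earlier shows the same agents.defaults.model.primary slot holding an 8B model).

{
  "agents": {
    "defaults": {
      "model": {
        // Placeholder tag: substitute the largest full-size model your rig can run
        "primary": "ollama/qwen3:235b"
      },
      // Keep sandboxing on regardless of model size
      "sandbox": { "mode": "all" }
    }
  }
}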
