Clawdbot (now Moltbot) (and now OpenClaw)

12 Replies, 1574 Views

Errmmm.....  It requires models with at least 300B parameters ....  Otherwise, adequacy cannot be guaranteed.

That means ... No local uncensored models, 



CRITICAL
models.small_params Small models require sandboxing and web tools disabled
  Small models (<=300B params) detected:
- ollama/huihui_ai/qwen3-abliterated:8b (8B) @ agents.defaults.model.primary (unsafe; sandbox=off; web=[web_fetch, browser])
Uncontrolled input tools allowed: web_fetch, browser.
Small models are not recommended for untrusted inputs.
  Fix: If you must use small models, enable sandboxing for all sessions (agents.defaults.sandbox.mode="all") and disable web_search/web_fetch/browser (tools.deny=["group:web","browser"]).
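As a sketch, the "Fix" above would look something like this in the config. The key names (agents.defaults.sandbox.mode, tools.deny) are taken verbatim from the warning; the surrounding JSON layout is my assumption about how they nest:

```json
{
  "agents": {
    "defaults": {
      "sandbox": { "mode": "all" }
    }
  },
  "tools": {
    "deny": ["group:web", "browser"]
  }
}
```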

Local models

Local is doable, but OpenClaw expects large context + strong defenses against prompt injection. Small cards force truncated context and weaker safety behavior. Aim high: ≥2 maxed-out Mac Studios or an equivalent GPU rig (~$30k+). A single 24 GB GPU works only for lighter prompts, with higher latency. Use the largest / full-size model variant you can run; aggressively quantized or "small" checkpoints raise prompt-injection risk (see Security).
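The hardware sizing above follows from simple arithmetic: weight memory is roughly parameter count times bytes per weight (ignoring KV cache and runtime overhead). A minimal sketch, assuming a 300B model as in the warning:

```python
def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory (GB) for model weights alone.

    Ignores KV cache, activations, and runtime overhead, so real
    requirements are higher -- this is a lower bound.
    """
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 300B model at fp16 needs ~600 GB for weights alone -- hence
# multiple maxed-out Mac Studios or a multi-GPU rig.
print(weights_gb(300, 16))  # 600.0

# Even 4-bit quantization still needs ~150 GB, far beyond one 24 GB card.
print(weights_gb(300, 4))   # 150.0

# An 8B model at fp16 (~16 GB) is what actually fits a single 24 GB GPU.
print(weights_gb(8, 16))    # 16.0
```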
(This post was last modified: 31 Jan 2026, 00:05 by Like Ra.)
I can almost picture it: a future where AI-controlled gadgets emerge, followed by AI command-sharing platforms that rely on those gadgets, until... AI-generated prompt attacks... and then an incident occurs.
(30 Jan 2026, 23:51 )Like Ra Wrote: Errmmm.....  It requires models with at least 300B parameters ....  Otherwise, the adequacy can not be guaranteed.

They have some extremely strange viewpoints on model security and usability. Personally, I wouldn't trust any model, no matter how big, to defend against any kind of attack.

Running clawd/molt/claw, or whatever it is called now, with a smaller (13B, 24B, 30B) parameter model shouldn't pose any critical usability issues as long as the model is MCP-capable and you aren't giving it complex long-running tasks. It will take a little more guiding compared to a big model, but the same  fun  😋  is possible regardless