Execution Systems
OpenClaw's Three Weeks That Changed the Stack
9 min read · Published March 17, 2026 · Updated March 17, 2026
By CogLab Editorial Team · Reviewed by Knyckolas Sutherland
Two weeks ago, an agent we run in production started sending duplicate Telegram messages at three in the morning. Not because it had something to say. Because the cron engine didn't recognize the model we told it to use, failed over to a weaker one, hit an incompatible thinking-level parameter, retried with a different parameter, and then decided that a silent no-op meant it should message the operator about having nothing to do. Three separate bugs, chained together, triggered by a single version mismatch.
That kind of failure is the tax you pay for running AI agents in production instead of in demos. And the reason we're writing about OpenClaw again is that the project just shipped a stretch of releases that systematically eliminates exactly these kinds of problems. Between version 2026.2.25 and version 2026.3.13, twelve releases landed in twenty days. The changelog is long. The theme is short: make the infrastructure disappear so the agent can work.
The change you feel first is model routing. OpenClaw 2026.3.13 defaults to GPT-5.4 across the board, which matters less for the raw capability bump and more for what it fixes downstream. Previous versions had a gap between the models the agent configuration accepted and the models the cron engine recognized. If you set your agent to GPT-5.4 but the cron scheduler didn't have it in its registry, every scheduled job started with a failure, a fallback, and a retry. That's two API calls where there should have been one. Multiply that by a dozen cron jobs across a day and you're burning money on errors. The fix sounds boring: align the model registry across all subsystems. In practice, it cut our daily API costs by more than half.
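The post doesn't show OpenClaw's internals, so here is a hypothetical sketch of the fix's shape: one model registry shared by every subsystem, with validation done at configuration time so a scheduled job never has to fail, fall back, and retry. All names (`MODEL_REGISTRY`, `validate_agent_config`, `schedule_cron_job`) are illustrative, not OpenClaw's actual API.

```python
# Hypothetical sketch: a single model registry consulted by both the
# agent configuration and the cron scheduler, so they can never disagree.

MODEL_REGISTRY = {"gpt-5.4", "gpt-5.4-mini", "gpt-5.2"}  # single source of truth

def validate_agent_config(config: dict) -> dict:
    """Reject unknown models up front, instead of discovering the
    mismatch at 3 AM via a failed call, a fallback, and a retry."""
    model = config.get("model")
    if model not in MODEL_REGISTRY:
        raise ValueError(f"unknown model {model!r}; registry: {sorted(MODEL_REGISTRY)}")
    return config

def schedule_cron_job(config: dict, task: str) -> str:
    """The scheduler runs the same check against the same registry."""
    cfg = validate_agent_config(config)
    return f"scheduled {task!r} on {cfg['model']}"
```

The design point is that the check is shared, not duplicated: a model either exists for every subsystem or for none, so the failure-fallback-retry chain described above can't start.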
The second shift is how sessions survive. Earlier versions would lose thread context on a session reset, which meant your agent forgot what it was working on every time the system compacted its memory. Version 2026.3.13 now preserves the last account and thread IDs through resets, and creates transcript files on injection when they're missing. If you've ever had an agent that seemed to lose its train of thought mid-task, this is the plumbing fix underneath. It's also the kind of change that never makes a launch video but determines whether anyone actually trusts the system with overnight workloads.
Security got real attention in this window. Version 2026.3.11 patched a cross-site WebSocket hijacking vulnerability in trusted-proxy configurations. Version 2026.3.13 prevents gateway tokens from leaking into Docker build context. If you're self-hosting OpenClaw, which is the entire point of an open-source agent framework, these aren't theoretical risks. They're the difference between a system you'd put on a VPS with real credentials and a system you'd only run locally.
Then there's the browser story. OpenClaw has supported headless Chromium through Playwright for a while, but it was fragile. Sessions would die without clear errors. Driver validation was loose. The new releases harden the full lifecycle: session creation, validation, recovery, and teardown. Version 2026.3.13-beta added Chrome DevTools Protocol attach mode, which means your agent can connect to an already-signed-in browser session instead of starting cold every time. If you've tried to automate anything that requires authentication, you know how much time that saves.
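Playwright's `connect_over_cdp` is the public API for this kind of attach. The wrapper below is a hypothetical sketch of how an agent might reuse a Chrome you started yourself with `--remote-debugging-port=9222`, picking up its signed-in context instead of launching cold; only the Playwright calls are real, the rest is illustrative.

```python
def cdp_endpoint(host: str = "localhost", port: int = 9222) -> str:
    """DevTools endpoint for a Chrome launched with --remote-debugging-port."""
    return f"http://{host}:{port}"

def attach(cdp_url: str):
    """Attach to an already-running, already-authenticated browser."""
    # Imported here so the sketch stays importable without Playwright installed.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(cdp_url)  # real Playwright API
        context = browser.contexts[0]  # the existing, signed-in context
        return context.pages[0] if context.pages else context.new_page()
```

Because the attached context already carries cookies and storage, anything behind a login works without the agent re-authenticating each run.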
The control UI got a complete refresh in 2026.3.12. This is the web dashboard you use to monitor agents, view sessions, and manage configuration. The new version adds modular views, a command palette, and fast mode toggles at the session level. Fast mode itself is worth calling out. It lets you switch between thorough reasoning and quick responses on the fly, per session, instead of committing to one behavior globally. For an operator managing multiple agents doing different work, that granularity matters.
Provider architecture changed too. Ollama, vLLM, and SGLang moved to a plugin system in 2026.3.12, which means the framework no longer has to ship built-in support for every inference backend. You install what you use. This is the kind of modular decision that signals a project thinking about long-term maintenance, not just feature count. It also means you can run local models through Ollama with first-class onboarding instead of fighting configuration files.
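One common shape for this kind of plugin architecture is a registry the core framework owns and providers populate when installed; the sketch below assumes that pattern and invents all names (`register_provider`, `OllamaProvider`), so treat it as an illustration of the idea rather than OpenClaw's actual plugin API.

```python
from typing import Callable, Protocol

class Provider(Protocol):
    def complete(self, prompt: str) -> str: ...

_PROVIDERS: dict[str, Callable[[], Provider]] = {}

def register_provider(name: str):
    """Decorator a provider plugin uses to announce itself; the core
    framework ships no inference backends of its own."""
    def wrap(factory):
        _PROVIDERS[name] = factory
        return factory
    return wrap

@register_provider("ollama")
class OllamaProvider:
    def complete(self, prompt: str) -> str:
        return f"[ollama] {prompt}"

def get_provider(name: str) -> Provider:
    """Installed plugins resolve; everything else fails loudly."""
    if name not in _PROVIDERS:
        raise KeyError(f"provider {name!r} not installed")
    return _PROVIDERS[name]()
```

The payoff is exactly the one the release notes describe: you install the backend you use, and an uninstalled backend is a clear error instead of dead code shipped to everyone.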
The platform story expanded aggressively. iOS got a full companion app with Home canvas, notification relay, and a share extension for forwarding URLs and images to your agent. Android got a redesigned chat settings interface and camera integration. macOS got a native chat UI with model picker and thinking-level persistence. An Apple Watch relay shipped. Kubernetes deployment documentation landed. The project went from "runs in Docker on a VPS" to "runs everywhere you might want it" in three weeks.
Integration work filled in gaps that matter for daily use. Slack got opt-in interactive reply directives, which means your agent can present structured choices in a channel instead of just walls of text. Signal got proper groups configuration. Telegram got IPv4 fallback for media transport, fixing a class of delivery failures on networks that don't handle IPv6 well. Feishu, the Chinese enterprise messenger, got event-level deduplication and non-ASCII filename handling. Each of these is small. Together, they say: we tested this with real users on real networks.
If you zoom out, what happened in this three-week stretch is a phase change. The early OpenClaw releases we covered were about getting basic reliability right. Sessions that don't crash. Models that respond. Messages that arrive. This batch is about operational maturity. Secrets management. Session persistence. Security hardening. Provider modularity. Cross-platform deployment. Kubernetes. These are the features you build when your users are putting the system in production, not when they're evaluating it in a sandbox.
Here's what this means if you're thinking about running an AI agent for actual work. The gap between "demo" and "production" in this space has always been the infrastructure. The model is the easy part. Getting the model to run reliably on a schedule, recover from failures, respect security boundaries, deliver messages across channels, persist context across restarts, and not burn your budget on retries is the hard part. That's what these twelve releases addressed.
You can try it today. OpenClaw is open source. The Docker setup takes about fifteen minutes if you have a VPS. Start with one agent, one channel, one task. The best way to evaluate whether this level of infrastructure matters to you is to run something overnight and check it in the morning. If the agent did its job and you didn't get any surprise messages at 3 AM, the plumbing is working. That's the whole test.
Frequently Asked
What changed in OpenClaw between versions 2026.2.25 and 2026.3.13?
Twelve releases shipped in twenty days, covering model routing fixes (GPT-5.4 as default), session persistence through resets, security patches for WebSocket hijacking and Docker token leaks, browser automation hardening, a refreshed control UI with fast mode, provider plugin architecture, cross-platform apps (iOS, Android, macOS, Watch), Kubernetes deployment support, and integration improvements for Slack, Telegram, Signal, and Feishu.
Is OpenClaw production-ready for business workloads?
The recent release stretch signals a shift toward production readiness. Key additions like secrets management, session persistence, security hardening, and Kubernetes support address the operational requirements businesses need. It is open source and self-hosted, which gives you full control over data and credentials.
How do I get started with OpenClaw?
OpenClaw is open source and can be self-hosted on any VPS using Docker. The setup takes roughly fifteen minutes. Start with one agent connected to one messaging channel and one scheduled task. The project supports Telegram, Slack, Discord, Signal, and other platforms out of the box.