Cloudflare has released the open beta of Dynamic Worker Loader, a sandboxing API designed to execute AI-generated code with significantly lower latency than hardware-virtualised solutions.
Available to all paid Workers users as of March 24, it provides a secure environment for AI agents to run code on the fly without the security risks associated with direct code execution.
The system replaces Linux-based containers with V8 isolates, lightweight sandboxed execution contexts within V8, the JavaScript engine that powers Google Chrome.
Cloudflare reports that “an isolate takes a few milliseconds to start and uses a few megabytes of memory”, a performance profile that is “100x faster and 10x-100x more memory efficient than a typical container.”
This architecture allows developers to instantiate a unique, isolated sandbox for every user request on demand, then discard it, eliminating the need to keep expensive containers warm to avoid startup delays.
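This create-use-discard lifecycle can be sketched in TypeScript. The `WorkerLoader` interface below is a loose model for illustration, not the documented beta API; the method and field names (`get`, `mainModule`, `modules`) are assumptions:

```typescript
// Loose model of a dynamic worker loader binding (names are assumptions).
interface WorkerCode {
  mainModule: string;                  // entry module of the sandboxed worker
  modules: Record<string, string>;     // module name -> source code
}

interface WorkerStub {
  fetch(req: Request): Promise<Response>;
}

interface WorkerLoader {
  // Load (or create) an isolate for `id`, compiling the supplied code.
  get(id: string, code: () => Promise<WorkerCode>): Promise<WorkerStub>;
}

// Spin up a fresh, isolated sandbox for a single request, run the
// untrusted (e.g. AI-generated) source inside it, and return the result.
// Because isolates start in milliseconds, nothing needs to be kept warm;
// the isolate can simply be discarded after the response.
async function runInFreshSandbox(
  loader: WorkerLoader,
  requestId: string,
  source: string,
  req: Request,
): Promise<Response> {
  const worker = await loader.get(requestId, async () => ({
    mainModule: "main.js",
    modules: { "main.js": source },
  }));
  return worker.fetch(req);
}
```

The key design point is that the sandbox identity is per request (`requestId`), so each user's generated code runs in its own throwaway isolate rather than a shared, long-lived container.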
The API specifically targets ‘Code Mode’ for AI agents, where models perform tasks by writing TypeScript code rather than making sequential API calls.
This method can reduce token usage by up to 81% by consolidating multiple API interactions into a single execution.
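To illustrate why this saves tokens: in Code Mode the agent emits one script in which intermediate results flow between calls as ordinary variables, instead of being serialised back through the model after every tool call. The tool names below (`getUser`, `getOrders`, `summarize`) are hypothetical:

```typescript
// Illustrative only: the kind of script an agent might emit in Code Mode.
// With classic tool calling, each of the three calls below would be a
// separate model round-trip; here they run in one sandboxed execution.
type Tools = {
  getUser: (id: string) => Promise<{ id: string; name: string }>;
  getOrders: (userId: string) => Promise<number[]>;
  summarize: (total: number) => Promise<string>;
};

async function agentScript(tools: Tools): Promise<string> {
  const user = await tools.getUser("u42");          // would be tool call 1
  const orders = await tools.getOrders(user.id);    // would be tool call 2
  const total = orders.reduce((a, b) => a + b, 0);  // plain computation, no tokens spent
  return tools.summarize(total);                    // would be tool call 3
}
```

The intermediate order list never passes through the model's context window at all, which is where the reported token savings come from.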
To maintain security, the platform includes a ‘globalOutbound’ feature that intercepts HTTP requests. This allows for credential injection, where the host environment adds authorisation tokens to outgoing requests. This ensures that the AI agent itself never gains access to sensitive keys.
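A minimal sketch of the credential-injection idea, assuming the host wraps the sandbox's outbound fetch; the function name and wiring here are illustrative, not Cloudflare's API:

```typescript
// The host intercepts every outbound HTTP request from the sandboxed
// isolate and injects credentials that the AI-generated code never sees.
type FetchLike = (req: Request) => Promise<Response>;

// Wrap an upstream fetch so an Authorization header is added on the way
// out. `apiToken` lives only in the host environment, not in the sandbox.
function makeCredentialInjector(apiToken: string, upstream: FetchLike): FetchLike {
  return async (req: Request) => {
    const headers = new Headers(req.headers);
    headers.set("Authorization", `Bearer ${apiToken}`);
    return upstream(new Request(req, { headers }));
  };
}
```

Code running inside the isolate only ever sees the wrapped fetch, so even a malicious or buggy generated script cannot read the token it is using.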
The technology is already being used by third-party developers to power automated application builders.
“Agents should interact with the world by writing code, not tool calls. This makes it possible at ‘consumer scale’, where millions of end users each have their own agent writing code,” said Kenton Varda, tech lead of Cloudflare Workers, in a post on X.
Cloudflare has also released supporting libraries such as @cloudflare/codemode for sandbox management and @cloudflare/shell to provide agents with a persistent virtual filesystem.
Cloudflare charges $0.002 per unique Worker loaded per day, though this fee is waived during the beta period. Standard CPU and invocation charges for the Workers platform continue to apply.
The launch provides the infrastructure required for the mass deployment of autonomous agents that require low latency and isolated execution of AI-generated code.