AI agents execute code, call tools and browsers, run for minutes instead of milliseconds, and touch sensitive systems. Most teams end up rebuilding the same fragile setup:
- Homegrown agent harnesses
- Sandboxes, file systems, and queues stitched together
- Logs and traces scattered across many services
- Ad hoc snapshots and risky secret handling
The result:
- Every agent needs its own mini runtime
- Debugging one agent run feels like detective work
- Adding a new agent feels like starting a new infra project
Good agents stall at “cool demo” instead of reaching production. Deploying agents becomes a full-time infrastructure problem.
They are building Castari to be the natural runtime for AI agents built on Claude Agent SDK. They wrap the entire agent in a sandbox, deploy it, and give you an endpoint URL that auto-scales as requests come in. No need to write sandbox-lifecycle management code into your tools.
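For context, this is all a deployable agent has to be. Here is a minimal sketch following the Claude Agent SDK's Python quickstart pattern (the prompt is illustrative); Castari's claim is that this code stays untouched and the runtime wraps around it:

```python
# A minimal Claude Agent SDK agent, in the SDK's Python quickstart style.
# Nothing here is Castari-specific; this is the plain agent Castari wraps.
import asyncio

from claude_agent_sdk import query

async def main() -> None:
    # query() streams messages back as the agent reasons and calls tools
    async for message in query(prompt="Summarize the open issues in this repo"):
        print(message)

asyncio.run(main())
```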
1. Drop in a config file → get a production runtime
In your Claude Agent SDK repo:
- Add a config file (entrypoint, tools, env vars)
- Run a single deploy command
That’s it. Your agent is now running in a secure sandbox with an endpoint, UI, and observability.
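Once deployed, invoking the agent is an ordinary HTTP call. A sketch, assuming a hypothetical endpoint URL, auth header, and payload shape (Castari's actual API may differ):

```python
# Hypothetical call to a Castari-deployed agent endpoint. The URL,
# header, and JSON shape below are illustrative assumptions, not
# Castari's documented API.
import requests

resp = requests.post(
    "https://my-agent.castari.example/runs",                # hypothetical endpoint
    headers={"Authorization": "Bearer <CASTARI_API_KEY>"},  # hypothetical auth
    json={"prompt": "Triage today's support tickets"},
    timeout=600,  # agent runs take minutes, not milliseconds
)
resp.raise_for_status()
print(resp.json())
```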
Whether that agent is:
- an internal system your team uses
- a customer-facing agent that is your product
the deployment workflow is identical and boring (in the good way).
2. Sandboxing & safety, built-in
- Every run executes in an isolated sandbox designed for tool-using agents
- You design the behavior; they handle isolation and scaling
- No more duct-taping together sandbox primitives for specific tools
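For contrast, the duct tape usually looks something like the sketch below. Every method name is a hypothetical stand-in for whichever sandbox provider a team wires up by hand; the point is the lifecycle bookkeeping Castari claims to absorb:

```python
# Sketch of the per-run glue teams hand-roll today. `sandbox` is a
# hypothetical duck-typed client for a generic sandbox provider;
# none of these method names are a real API.
def run_agent_once(sandbox, agent_bundle: bytes, prompt: str) -> str:
    sandbox.create(cpu=2, memory_mb=2048)                   # provision compute
    try:
        sandbox.upload("/app/agent.tar.gz", agent_bundle)   # ship the agent code
        sandbox.set_env({"ANTHROPIC_API_KEY": "..."})       # ad hoc secret handling
        output = sandbox.exec(f"python /app/main.py {prompt!r}", timeout=600)
        print(sandbox.logs())                               # logs scattered per service
        return output
    finally:
        sandbox.destroy()                                   # teardown; easy to leak on a crash
```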
Bottom line: You keep building on Claude Agent SDK the way you already are — for internal workflows or core products. Castari turns that into a safe, observable, production-ready runtime.
Why Castari is different
E2B, Modal, Daytona, Cloudflare (and friends) give you powerful sandbox and compute primitives.
Castari sits one layer up:
Castari wraps around your Claude Agent SDK-based agents so you can define them declaratively and run them in secure sandboxes, without owning the underlying sandbox lifecycle.
- Agent-first semantics: runs, sessions, tools, snapshots are native concepts they manage.
- Zero framework lock-in: remove Castari and your agent still runs on vanilla Claude Agent SDK; keep Castari and your infra team gets its life back.
- Focused start: deeply integrated with Claude Agent SDK today, with the same “drop-in config” experience coming to other frameworks next.
They are building the runtime they wish existed for agents.
Why they are working on this
While Jacob was leading the FDE team at RunPod:
- Built and shipped AI infrastructure and agents for Fortune 500s and fast-growing startups
- Watched the same loop repeat, over and over: agent works in a notebook → weeks disappear into infra glue → nobody trusts the system and the production release slips
Additionally, while he and @Cambree Bernkopf scaled their AI consumer app to 2M+ members, their biggest challenge was getting agents to work reliably in production.
The pattern is clear: agents deserve a first-class runtime, the way frontends got one.
- Sandboxing & safety should be built-in, not bolted on
- Going from “prototype” to “production-safe” should be one config file, not a quarter-long project
- AI teams should ship agents, not infrastructure
Ask: early pilot access
They are opening a limited pilot for:
- Teams building internal agents who need a safer, more reliable runtime
- Companies building agent-centric products that need to run in production without bespoke infra
If that is you and you are tired of wrestling with sandboxes, observability, and brittle tooling:
→ Email Jacob with subject “Castari Pilot” and 2 to 3 sentences about your agent stack and use case.
→ Check out their open source repo for using any model with Claude Agent SDK on GitHub.
→ Join the waitlist on castari.com to hear about their first releases.