Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents

by souvik1997 on 1/30/2026, 2:34:32 PM

WASM sandbox for running LLM-generated code safely.

Agents get a bash-like shell and can only call tools you provide, with constraints you define. No Docker, no subprocess, no SaaS — just pip install amla-sandbox

https://github.com/amlalabs/amla-sandbox
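
A minimal sketch of the model the post describes, assuming plain Python callables as tools; the names here (http_get, word_count, TOOLS) are illustrative and not the actual amla-sandbox API:

```python
# Illustrative sketch only, not the amla-sandbox API: the host defines a small
# set of named tools, each carrying its own constraint, and exposes only that
# mapping to the sandboxed shell. Nothing outside the dict exists for the agent.
import urllib.request
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.internal.example"}  # hypothetical allow list

def http_get(url: str) -> str:
    """Network tool constrained by the host: only allow-listed hostnames."""
    if urlparse(url).hostname not in ALLOWED_HOSTS:
        raise PermissionError(f"host of {url!r} is not on the allow list")
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode()

def word_count(text: str) -> int:
    """Pure tool with no host side effects at all."""
    return len(text.split())

# Only this mapping would be handed to the sandbox; the agent cannot reach
# arbitrary binaries, the filesystem, or the network except through it.
TOOLS = {"http_get": http_get, "word_count": word_count}
```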

Comments

by: sd2k

Cool to see more projects in this space! I think Wasm is a great way to do secure sandboxing here. How does Amla handle commands like grep/jq/curl etc. which make AI agents so effective at bash but require recompilation to WASI (which is kinda impractical for so many projects)?

I've been working on a couple of things which take a very similar approach, with what seem to be some different tradeoffs:

- eryx [1], which uses a WASI build of CPython to provide a true Python sandbox (similar to componentize-py but supports some form of 'dynamic linking' with either pure Python packages or WASI-compiled native wheels)
- conch [2], which embeds the `brush` Rust reimplementation of Bash to provide a similar bash sandbox. This is where I've been struggling to figure out the best way to do subcommands; right now they just have to be rewritten and compiled in, but I'd like to find a way to dynamically link them in, similar to the Python package approach...

One other note: WASI's VFS support has been great, I just wish there was more progress on `wasi-tls`; it's tricky to get network access working otherwise...

[1] https://github.com/eryx-org/eryx
[2] https://github.com/sd2k/conch

1/30/2026, 4:07:43 PM


by: asyncadventure

Really appreciate the pragmatic approach here. The 11MB vs 173MB difference with agentvm highlights an important tradeoff: sometimes you don't need full Linux compatibility if you can constrain the problem space well enough. The tool-calling validation layer seems like the sweet spot between safety and practical deployment.

1/30/2026, 4:01:12 PM


by: quantummagic

Sure, but every tool that you provide access to is a potential escape hatch from the sandbox. It's safer to run everything inside the sandbox, including the called tools.
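
For example (a hypothetical host-side tool, not anything from amla-sandbox): a file-read tool that joins paths naively does the dangerous work on the host's behalf, and the WASM boundary never comes into play.

```python
# Hypothetical host tool illustrating the point above: if the tool itself is
# sloppy, the agent escapes through it even though the WASM isolation holds.
from pathlib import Path

WORKSPACE = Path("/srv/agent-data")  # hypothetical workspace root

def read_file_unsafe(path: str) -> str:
    """Escape hatch: joins the path without checking where it resolves."""
    return (WORKSPACE / path).read_text()

# read_file_unsafe("../../etc/passwd") reads a host file, because
# /srv/agent-data/../../etc/passwd resolves outside WORKSPACE.
# The fix is to resolve() the joined path and reject anything that
# does not stay under WORKSPACE before opening it.
```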

1/30/2026, 3:05:39 PM


by: syrusakbary

This is great!

While I think their current choice of runtime will hit some limitations (namely: not really full Python support, partial JS support), I strongly believe using Wasm for sandboxing is the future of containers.

At Wasmer we are working hard to make this model work. I'm incredibly happy to see more people joining the quest!

1/30/2026, 3:10:08 PM


by: westurner

From the README:

> Security model

> *The sandbox runs inside WebAssembly with WASI for a minimal syscall interface. WASM provides memory isolation by design—linear memory is bounds-checked, and there's no way to escape to the host address space. The wasmtime runtime we use is built with defense-in-depth and has been formally verified for memory safety.*

> *On top of WASM isolation, every tool call goes through capability validation:* [...]

> *The design draws from capability-based security as implemented in systems like seL4—access is explicitly granted, not implicitly available. Agents don't get ambient authority just because they're running in your process.*
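
A generic illustration of the "explicitly granted, not implicitly available" idea (not Amla's actual validation code; the Capability shape and GRANTS list are assumptions made for the sketch):

```python
# Generic capability-validation sketch, not Amla's implementation: a tool call
# is allowed only if some explicitly granted capability covers it.
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    tool: str          # which tool may be invoked
    prefix: str = ""   # argument constraint, e.g. a path or URL prefix

GRANTS = [
    Capability(tool="read_file", prefix="/srv/agent-data/"),
    Capability(tool="http_get", prefix="https://api.internal.example/"),
]

def validate(tool: str, argument: str) -> None:
    """Raise unless an explicit grant covers this call (no ambient authority)."""
    for cap in GRANTS:
        if cap.tool == tool and argument.startswith(cap.prefix):
            return
    raise PermissionError(f"no capability grants {tool}({argument!r})")

validate("read_file", "/srv/agent-data/report.txt")   # allowed
# validate("read_file", "/etc/passwd")                # raises PermissionError
```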

1/30/2026, 2:48:57 PM

