AI agents run arbitrary code in your sandbox. Without enforcement at the OS level, a single prompt injection can exfiltrate credentials, access cloud metadata, or pivot to internal services.
Drop-in runtime security for Vercel, E2B, Daytona, Cloudflare, Blaxel, and Sprites sandboxes. One npm install, three lines of TypeScript.
```typescript
import { Sandbox } from '@vercel/sandbox';
import { secureSandbox, adapters } from '@agentsh/secure-sandbox';

const raw = await Sandbox.create({ runtime: 'node24' });
const sandbox = await secureSandbox(adapters.vercel(raw));

await sandbox.exec('npm install express'); // allowed
await sandbox.exec('cat ~/.ssh/id_rsa');   // blocked
```
```typescript
import Sandbox from '@e2b/code-interpreter';
import { secureSandbox, adapters } from '@agentsh/secure-sandbox';

const raw = await Sandbox.create();
const sandbox = await secureSandbox(adapters.e2b(raw));

await sandbox.exec('pip install pandas');     // allowed
await sandbox.exec('cat ~/.aws/credentials'); // blocked
```
```typescript
import { Daytona } from '@daytonaio/sdk';
import { secureSandbox, adapters } from '@agentsh/secure-sandbox';

const raw = await new Daytona().create();
const sandbox = await secureSandbox(adapters.daytona(raw));

await sandbox.exec('node server.js');              // allowed
await sandbox.exec('curl http://169.254.169.254/'); // blocked
```
```typescript
import { Container } from '@cloudflare/containers';
import { secureSandbox, adapters } from '@agentsh/secure-sandbox';

const raw = await Container.create();
const sandbox = await secureSandbox(adapters.cloudflare(raw));

await sandbox.exec('npm run build');         // allowed
await sandbox.exec('sudo apt install nmap'); // blocked
```
```typescript
import { SandboxInstance } from '@blaxel/sandbox';
import { secureSandbox, adapters } from '@agentsh/secure-sandbox';

const raw = await SandboxInstance.create();
const sandbox = await secureSandbox(adapters.blaxel(raw));

await sandbox.exec('python train.py');    // allowed
await sandbox.exec('env | grep SECRET');  // blocked
```
```typescript
import { Sprite } from '@fly/sprites';
import { secureSandbox, adapters } from '@agentsh/secure-sandbox';

const raw = await Sprite.create();
const sandbox = await secureSandbox(adapters.sprites(raw));

await sandbox.exec('cargo build --release'); // allowed
await sandbox.exec('cat /etc/shadow');       // blocked
```
Every command runs through the policy engine. Dangerous operations are denied before they reach the kernel.
Built on agentsh, the open-source execution-layer security engine. A lightweight Go binary replaces /bin/bash inside the sandbox, routing every operation through kernel-level enforcement before it reaches the host.
Enforcement is synchronous and adds less than 1 ms per command. No background daemons, no network round-trips.
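To make the model concrete, here is a minimal sketch of a deny-by-default command gate. The rule shapes, names, and patterns below are illustrative assumptions, not the agentsh engine or its API: the real engine enforces at the kernel level, while this sketch only shows the first-match, deny-by-default decision flow.

```typescript
// Hypothetical sketch of a deny-by-default command gate.
// Rule shapes and names are illustrative, not the agentsh API.
type Verdict = 'allow' | 'deny';

interface Rule {
  pattern: RegExp; // matched against the full command line
  verdict: Verdict;
}

// First matching rule wins; anything unmatched is denied by default.
function checkCommand(cmd: string, rules: Rule[]): Verdict {
  for (const rule of rules) {
    if (rule.pattern.test(cmd)) return rule.verdict;
  }
  return 'deny';
}

// Deny rules come first so they take precedence over broad allows.
const rules: Rule[] = [
  { pattern: /(\.ssh|\.aws|\/etc\/shadow|169\.254\.169\.254)/, verdict: 'deny' },
  { pattern: /^(npm|pip|node|python|cargo) /, verdict: 'allow' },
];

console.log(checkCommand('npm install express', rules)); // allow
console.log(checkCommand('cat ~/.ssh/id_rsa', rules));   // deny
```

Because the check is a pure in-process function over the command, it runs synchronously with no network round-trip, which is why this style of enforcement stays well under a millisecond.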
Extend any preset with your own rules. Allow specific APIs, open file paths, restrict ports — all in TypeScript. See the policy docs →
```typescript
import { agentDefault } from '@agentsh/secure-sandbox/policies';

const policy = agentDefault({
  network: [{ allow: ['api.stripe.com', 'api.openai.com'], ports: [443] }],
  file: [{ allow: '/data/**', ops: ['read', 'write'] }],
});

const sandbox = await secureSandbox(adapters.e2b(raw), { policy });
```
Built-in adapters for the major hosted AI sandbox providers. Each adapter maps the platform's SDK to the secure-sandbox interface with zero configuration.
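The adapter contract can be imagined as a small interface over whatever a provider SDK exposes. The types and names below are a hypothetical sketch, not the published @agentsh/secure-sandbox API; they only show the general shape a custom adapter might take:

```typescript
// Hypothetical sketch of an adapter contract; names are illustrative,
// not the published @agentsh/secure-sandbox types.
interface ExecResult {
  exitCode: number;
  stdout: string;
}

// Everything the wrapper would need from a provider: a way to run a
// command and a way to tear the sandbox down.
interface SandboxAdapter {
  exec(cmd: string): Promise<ExecResult>;
  destroy(): Promise<void>;
}

// A custom adapter can wrap any SDK that can run a command.
function fromExecFn(
  run: (cmd: string) => Promise<ExecResult>,
): SandboxAdapter {
  return {
    exec: (cmd) => run(cmd),
    destroy: async () => {},
  };
}

// Demonstration with a stubbed "SDK" exec function:
const adapter = fromExecFn(async (cmd) => ({
  exitCode: 0,
  stdout: `ran: ${cmd}`,
}));
const result = await adapter.exec('echo hi');
console.log(result.stdout); // ran: echo hi
```

Keeping the surface this small is what makes zero-configuration adapters possible: each built-in adapter only has to translate its platform's exec and teardown calls into this shape.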
Stop hoping your sandbox is safe. Know it is.
MIT licensed. Built by Canyon Road.