| `secrets` | `object` / `dict` | Named secret values for this rule. Referenced via `{{SECRET:name}}` in headers and body. Write-only — never returned in API responses. Stored encrypted at rest. |
## Region availability
Proxy availability is region-dependent. The `Region` type includes a `proxyAvailable` boolean field. Check region support before relying on proxy features:
```typescript TypeScript theme={null}
import { listRegions } from "@blaxel/core";
const { data: regions } = await listRegions({ throwOnError: true });
for (const r of regions) {
console.log(`${r.name}: proxy=${r.proxyAvailable}`);
}
```
```python Python theme={null}
from blaxel.core import listRegions
result = await listRegions()
for r in result.data:
print(f"{r.name}: proxy={r.proxy_available}")
```
## Environment variables set inside the sandbox
When proxy is configured, the sandbox automatically has:
| Variable | Purpose |
| --------------------- | ----------------------------------------------------------------------- |
| `HTTP_PROXY` | Proxy URL for HTTP traffic |
| `HTTPS_PROXY` | Proxy URL for HTTPS traffic |
| `NO_PROXY` | Comma-separated bypass list (always includes localhost, private ranges) |
| `NODE_EXTRA_CA_CERTS` | Path to CA cert for Node.js TLS verification |
| `SSL_CERT_FILE` | Path to CA cert for other TLS clients (`curl`, Python, etc.) |
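Tools that honor these variables consult `NO_PROXY` before tunneling a request through the proxy. A minimal sketch of that decision (illustrative only; it handles hostname entries, not CIDR ranges, and is not the runtime's actual matcher):

```python
def should_bypass(host: str, no_proxy: str) -> bool:
    """True if `host` matches an entry in the comma-separated NO_PROXY list."""
    for entry in no_proxy.split(","):
        entry = entry.strip()
        if not entry:
            continue
        bare = entry.lstrip(".")
        # An entry matches the host itself and any of its subdomains
        if host == bare or host.endswith("." + bare):
            return True
    return False

no_proxy = "localhost,127.0.0.1"  # example value; the sandbox runtime sets the real list

print(should_bypass("localhost", no_proxy))       # True
print(should_bypass("api.stripe.com", no_proxy))  # False
```

Hosts on the bypass list get a direct connection; everything else is sent through `HTTPS_PROXY`.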
## CLI tool compatibility
When proxy is enabled, the following tools work transparently inside the sandbox with no extra configuration:
| Tool | Protocol | Notes |
| ----------------- | -------- | ------------------------------------------------------------ |
| `curl` | HTTPS | Automatic via `HTTPS_PROXY` env var |
| `git` | HTTPS | May need `GIT_SSL_CAINFO=$SSL_CERT_FILE` for some operations |
| `pip` / `pip3` | HTTPS | Automatic |
| `npm` / `npx` | HTTPS | Automatic |
| Python `requests` | HTTPS | Automatic via env vars |
| Node.js `https` | HTTPS | Automatic via `HTTPS_PROXY` + `NODE_EXTRA_CA_CERTS` env vars |
## Behavior details
* **Wildcard matching**: `*.example.com` matches `sub.example.com` and `a.b.example.com` but not `example.com` itself
* **No cross-route leakage**: Headers/secrets from one routing rule are never applied to requests matching a different rule
* **User headers preserved**: The proxy adds injected headers alongside any headers the sandbox code sends — it does not overwrite user-sent headers
* **Body merge**: Injected body fields are merged into the outbound JSON payload. User-sent fields take precedence if there's a key collision
* **Tracing**: Every proxied request gets an `X-Blaxel-Request-Id` header for observability
* **Local traffic**: Requests to `localhost` / `127.0.0.1` are never routed through the proxy
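The wildcard rule can be pinned down with a short sketch (the semantics described above, not Blaxel's actual matcher):

```python
def domain_matches(pattern: str, host: str) -> bool:
    """`*.example.com` matches subdomains at any depth, but not the apex domain."""
    if pattern == "*":               # catch-all rule
        return True
    if pattern.startswith("*."):
        # pattern[1:] == ".example.com", so the apex "example.com" never matches
        return host.endswith(pattern[1:])
    return host == pattern

assert domain_matches("*.example.com", "sub.example.com")
assert domain_matches("*.example.com", "a.b.example.com")
assert not domain_matches("*.example.com", "example.com")
```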
## Full example: agent sandbox with proxy + firewall
```typescript TypeScript theme={null}
import { SandboxInstance } from "@blaxel/core";
const sandbox = await SandboxInstance.create({
name: "agent-workspace",
image: "blaxel/base-image:latest",
region: "us-was-1",
labels: { team: "ml", env: "staging" },
network: {
allowedDomains: [
"api.stripe.com",
"api.openai.com",
"httpbin.org",
"*.s3.amazonaws.com",
],
proxy: {
routing: [
{
destinations: ["api.stripe.com"],
headers: {
"Authorization": "Bearer {{SECRET:stripe-key}}",
"Stripe-Version": "2024-12-18.acacia",
},
body: {
"api_key": "{{SECRET:stripe-key}}",
},
secrets: {
"stripe-key": "sk-live-abc123...",
},
},
{
destinations: ["api.openai.com"],
headers: {
"Authorization": "Bearer {{SECRET:openai-key}}",
"OpenAI-Organization": "org-abc123",
},
secrets: {
"openai-key": "sk-proj-xyz789...",
},
},
],
bypass: ["*.s3.amazonaws.com"],
},
},
});
// curl https://api.stripe.com/... -> gets auth header + body injected
// curl https://api.openai.com/... -> gets auth header injected
// curl https://httpbin.org/... -> allowed, no injection
// curl https://evil.com/... -> BLOCKED by allowedDomains firewall
const result = await sandbox.process.exec({
command: "curl -s https://api.stripe.com/v1/charges",
waitForCompletion: true,
});
console.log(result.logs);
```
```python Python theme={null}
from blaxel.core.sandbox import SandboxInstance
sandbox = await SandboxInstance.create({
"name": "agent-workspace",
"image": "blaxel/base-image:latest",
"region": "us-was-1",
"labels": {"team": "ml", "env": "staging"},
"network": {
"allowedDomains": [
"api.stripe.com",
"api.openai.com",
"httpbin.org",
"*.s3.amazonaws.com",
],
"proxy": {
"routing": [
{
"destinations": ["api.stripe.com"],
"headers": {
"Authorization": "Bearer {{SECRET:stripe-key}}",
"Stripe-Version": "2024-12-18.acacia",
},
"body": {
"api_key": "{{SECRET:stripe-key}}",
},
"secrets": {
"stripe-key": "sk-live-abc123...",
},
},
{
"destinations": ["api.openai.com"],
"headers": {
"Authorization": "Bearer {{SECRET:openai-key}}",
"OpenAI-Organization": "org-abc123",
},
"secrets": {
"openai-key": "sk-proj-xyz789...",
},
},
],
"bypass": ["*.s3.amazonaws.com"],
},
},
})
# curl https://api.stripe.com/... -> gets auth header + body injected
# curl https://api.openai.com/... -> gets auth header injected
# curl https://httpbin.org/... -> allowed, no injection
# curl https://evil.com/... -> BLOCKED by allowedDomains firewall
result = await sandbox.process.exec({
"command": "curl -s https://api.stripe.com/v1/charges",
"wait_for_completion": True,
})
print(result.logs)
```
# Domain filtering
Source: https://docs.blaxel.ai/Sandboxes/Proxy-domains
Restrict which external domains a sandbox can reach using allowlists and denylists.
This feature is currently in public preview and is not recommended for production use. During the preview, the proxy and network features are only available in the `us-was-1` region.
Domain filtering lets you control which external domains a sandbox can reach. You can define an allowlist (only listed domains are reachable) or a denylist (all domains except listed ones are reachable). Domain filtering and proxy routing are **independent configurations** — you do not need to duplicate domains across both. A domain can appear in the allowlist without having a proxy routing rule, and vice versa.
Domain filtering relies on the sandbox's tools and libraries respecting the standard proxy environment variables (`HTTP_PROXY`, `HTTPS_PROXY`). Traffic from tools that ignore these variables will not be filtered. Routing-level enforcement is planned for a future release.
## Allowlist
Only the listed domains are reachable:
```typescript TypeScript theme={null}
await SandboxInstance.create({
name: "restricted-sandbox",
image: "blaxel/base-image:latest",
region: "us-was-1",
network: {
allowedDomains: ["api.stripe.com", "api.openai.com", "*.s3.amazonaws.com"],
proxy: { routing: [] },
},
});
```
```python Python theme={null}
await SandboxInstance.create({
"name": "restricted-sandbox",
"image": "blaxel/base-image:latest",
"region": "us-was-1",
"network": {
"allowedDomains": ["api.stripe.com", "api.openai.com", "*.s3.amazonaws.com"],
"proxy": {"routing": []},
},
})
```
## Denylist
All domains except the listed ones are reachable:
```typescript TypeScript theme={null}
await SandboxInstance.create({
name: "denylist-sandbox",
image: "blaxel/base-image:latest",
region: "us-was-1",
network: {
forbiddenDomains: ["*.malware.com", "evil.example.org"],
proxy: { routing: [] },
},
});
```
```python Python theme={null}
await SandboxInstance.create({
"name": "denylist-sandbox",
"image": "blaxel/base-image:latest",
"region": "us-was-1",
"network": {
"forbiddenDomains": ["*.malware.com", "evil.example.org"],
"proxy": {"routing": []},
},
})
```
When both `allowedDomains` and `forbiddenDomains` are set, `forbiddenDomains` takes precedence: a domain that appears in both lists will be blocked.
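The precedence rule can be sketched as follows (illustrative logic only, assuming the wildcard semantics described for proxy routing):

```python
def _matches(pattern: str, host: str) -> bool:
    # Wildcard patterns match subdomains, not the apex domain
    if pattern.startswith("*."):
        return host.endswith(pattern[1:])
    return host == pattern

def is_reachable(host, allowed_domains=None, forbidden_domains=None):
    """forbiddenDomains wins when a host matches both lists."""
    if forbidden_domains and any(_matches(p, host) for p in forbidden_domains):
        return False
    if allowed_domains is not None:
        return any(_matches(p, host) for p in allowed_domains)
    return True  # no allowlist: everything not forbidden is reachable

assert not is_reachable("evil.example.org", forbidden_domains=["evil.example.org"])
assert is_reachable("api.stripe.com", allowed_domains=["api.stripe.com"])
# Appears in both lists -> blocked
assert not is_reachable("api.stripe.com",
                        allowed_domains=["api.stripe.com"],
                        forbidden_domains=["api.stripe.com"])
```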
## Firewall + proxy combined
Firewall rules and proxy routing compose naturally:
```typescript TypeScript theme={null}
await SandboxInstance.create({
name: "locked-down",
network: {
allowedDomains: ["api.stripe.com", "api.openai.com"],
proxy: {
routing: [
{
destinations: ["api.stripe.com"],
headers: { "Authorization": "Bearer {{SECRET:stripe-key}}" },
secrets: { "stripe-key": "sk_live_..." },
},
],
},
},
});
```
```python Python theme={null}
await SandboxInstance.create({
"name": "locked-down",
"network": {
"allowedDomains": ["api.stripe.com", "api.openai.com"],
"proxy": {
"routing": [
{
"destinations": ["api.stripe.com"],
"headers": {"Authorization": "Bearer {{SECRET:stripe-key}}"},
"secrets": {"stripe-key": "sk_live_..."},
},
],
},
},
})
```
Only `api.stripe.com` and `api.openai.com` are reachable. The proxy injects credentials for Stripe requests; OpenAI requests go through unmodified.
# Proxy routing with secrets injection
Source: https://docs.blaxel.ai/Sandboxes/Proxy-secrets-injection
Inject secrets, headers, and body fields into outbound sandbox requests using the Blaxel proxy.
This feature is currently in public preview and is not recommended for production use. During the preview, the proxy and network features are only available in the `us-was-1` region.
The Blaxel proxy intercepts outbound HTTPS requests from the sandbox and injects headers, body fields, and secrets server-side.
## Header injection
```typescript TypeScript theme={null}
import { SandboxInstance } from "@blaxel/core";
const sandbox = await SandboxInstance.create({
name: "my-sandbox",
image: "blaxel/base-image:latest",
region: "us-was-1",
network: {
proxy: {
routing: [
{
destinations: ["api.stripe.com"],
headers: {
"Authorization": "Bearer {{SECRET:stripe-key}}",
"Stripe-Version": "2024-12-18.acacia",
},
secrets: {
"stripe-key": "sk_live_...",
},
},
],
},
},
});
```
```python Python theme={null}
from blaxel.core.sandbox import SandboxInstance
sandbox = await SandboxInstance.create({
"name": "my-sandbox",
"image": "blaxel/base-image:latest",
"region": "us-was-1",
"network": {
"proxy": {
"routing": [
{
"destinations": ["api.stripe.com"],
"headers": {
"Authorization": "Bearer {{SECRET:stripe-key}}",
"Stripe-Version": "2024-12-18.acacia",
},
"secrets": {
"stripe-key": "sk_live_...",
},
},
],
},
},
})
```
Code inside the sandbox calls `api.stripe.com` normally: the proxy intercepts the request, injects the `Authorization` and `Stripe-Version` headers with the resolved secret, and forwards it. The sandbox never sees the raw API key.
## Body injection (POST requests)
```typescript TypeScript theme={null}
await SandboxInstance.create({
name: "body-injection",
network: {
proxy: {
routing: [
{
destinations: ["api.stripe.com"],
headers: {
"Authorization": "Bearer {{SECRET:stripe-key}}",
},
body: {
"api_key": "{{SECRET:stripe-key}}",
},
secrets: {
"stripe-key": "sk_live_...",
},
},
],
},
},
});
```
```python Python theme={null}
await SandboxInstance.create({
"name": "body-injection",
"network": {
"proxy": {
"routing": [
{
"destinations": ["api.stripe.com"],
"headers": {
"Authorization": "Bearer {{SECRET:stripe-key}}",
},
"body": {
"api_key": "{{SECRET:stripe-key}}",
},
"secrets": {
"stripe-key": "sk_live_...",
},
},
],
},
},
})
```
The proxy merges body fields into outbound POST/PUT/PATCH JSON payloads. User-sent fields are preserved; injected fields are added alongside them.
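A sketch of the merge semantics (not the proxy's implementation): injected fields act as defaults, and user-sent fields win on a key collision:

```python
def merge_body(user_body: dict, injected: dict) -> dict:
    """Start from the injected fields, then overlay the user's payload."""
    merged = dict(injected)
    merged.update(user_body)  # user-sent fields take precedence
    return merged

# Injected field added alongside the user's payload
assert merge_body({"amount": 100}, {"api_key": "sk_live_..."}) == {
    "api_key": "sk_live_...",
    "amount": 100,
}
# Collision: the user-sent value is preserved
assert merge_body({"api_key": "user-key"}, {"api_key": "injected"})["api_key"] == "user-key"
```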
## Multiple routing rules
```typescript TypeScript theme={null}
await SandboxInstance.create({
name: "multi-route",
network: {
proxy: {
routing: [
{
destinations: ["api.stripe.com"],
headers: { "Authorization": "Bearer {{SECRET:stripe-key}}" },
secrets: { "stripe-key": "sk_live_..." },
},
{
destinations: ["api.openai.com"],
headers: { "Authorization": "Bearer {{SECRET:openai-key}}" },
secrets: { "openai-key": "sk-proj-..." },
},
],
bypass: ["*.s3.amazonaws.com"],
},
},
});
```
```python Python theme={null}
await SandboxInstance.create({
"name": "multi-route",
"network": {
"proxy": {
"routing": [
{
"destinations": ["api.stripe.com"],
"headers": {"Authorization": "Bearer {{SECRET:stripe-key}}"},
"secrets": {"stripe-key": "sk_live_..."},
},
{
"destinations": ["api.openai.com"],
"headers": {"Authorization": "Bearer {{SECRET:openai-key}}"},
"secrets": {"openai-key": "sk-proj-..."},
},
],
"bypass": ["*.s3.amazonaws.com"],
},
},
})
```
Secrets are scoped per rule — the Stripe key is never injected into OpenAI requests and vice versa.
## Global catch-all rule
```typescript TypeScript theme={null}
await SandboxInstance.create({
name: "global-auth",
network: {
proxy: {
routing: [
{
destinations: ["*"],
headers: {
"X-Global-Auth": "Bearer {{SECRET:global-key}}",
},
secrets: {
"global-key": "token-xyz",
},
},
],
},
},
});
```
```python Python theme={null}
await SandboxInstance.create({
"name": "global-auth",
"network": {
"proxy": {
"routing": [
{
"destinations": ["*"],
"headers": {"X-Global-Auth": "Bearer {{SECRET:global-key}}"},
"secrets": {"global-key": "token-xyz"},
},
],
},
},
})
```
The `["*"]` destination matches all proxied traffic.
## Proxy bypass
Domains listed in `bypass` skip the proxy tunnel entirely (direct connection):
```typescript TypeScript theme={null}
await SandboxInstance.create({
name: "bypass-only",
network: {
proxy: {
bypass: ["*.s3.amazonaws.com", "169.254.169.254"],
},
},
});
```
```python Python theme={null}
await SandboxInstance.create({
"name": "bypass-only",
"network": {
"proxy": {
"bypass": ["*.s3.amazonaws.com", "169.254.169.254"],
},
},
})
```
S3 and metadata endpoint traffic goes direct; everything else routes through the proxy.
## Secret interpolation
Secrets are referenced in headers and body values using the `{{SECRET:name}}` syntax:
```text theme={null}
"Authorization": "Bearer {{SECRET:api-token}}" → "Bearer tok_live_abc123"
"X-Multi": "{{SECRET:part-a}}-{{SECRET:part-b}}" → "ALPHA-BETA"
"X-Plain": "no-secret-here" → "no-secret-here" (unchanged)
```
* Multiple `{{SECRET:...}}` placeholders can appear in a single value
* Secrets are resolved server-side by the proxy — the sandbox runtime never sees raw secret values
* Secrets are write-only: the `secrets` field is stripped from API responses
* Secrets are scoped per routing rule: a secret defined on route A cannot be resolved by route B
* User code inside the sandbox can also send `{{SECRET:name}}` in its own request headers or body — the proxy will resolve them if the secret exists on the matching route
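The interpolation step can be sketched with a regular expression (a simplified model; leaving an unknown secret name unresolved is an assumption of this sketch, not documented proxy behavior):

```python
import re

SECRET_RE = re.compile(r"\{\{SECRET:([A-Za-z0-9_-]+)\}\}")

def interpolate(value: str, secrets: dict) -> str:
    """Replace every {{SECRET:name}} placeholder with the matching rule's secret.
    Unknown names are left untouched in this sketch."""
    return SECRET_RE.sub(lambda m: secrets.get(m.group(1), m.group(0)), value)

secrets = {"part-a": "ALPHA", "part-b": "BETA", "api-token": "tok_live_abc123"}
assert interpolate("Bearer {{SECRET:api-token}}", secrets) == "Bearer tok_live_abc123"
assert interpolate("{{SECRET:part-a}}-{{SECRET:part-b}}", secrets) == "ALPHA-BETA"
assert interpolate("no-secret-here", secrets) == "no-secret-here"
```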
## Reading proxy config from an existing sandbox
After creation or retrieval, network config is available as typed model attributes:
```typescript TypeScript theme={null}
import { SandboxInstance } from "@blaxel/core";
const sandbox = await SandboxInstance.get("my-sandbox");
const network = sandbox.spec.network;
if (network?.proxy?.routing) {
for (const route of network.proxy.routing) {
console.log(route.destinations);
console.log(route.headers["Authorization"]);
}
if (network.proxy.bypass) {
console.log(network.proxy.bypass);
}
}
if (network?.allowedDomains) {
console.log(network.allowedDomains);
}
```
```python Python theme={null}
from blaxel.core.sandbox import SandboxInstance
from blaxel.core.client.types import Unset
sandbox = await SandboxInstance.get("my-sandbox")
network = sandbox.spec.network # SandboxNetwork (or Unset)
if not isinstance(network, Unset) and not isinstance(network.proxy, Unset):
for route in network.proxy.routing:
print(route.destinations)
print(route.headers["Authorization"])
if not isinstance(network.proxy.bypass, Unset):
print(network.proxy.bypass)
if not isinstance(network, Unset) and not isinstance(network.allowed_domains, Unset):
print(network.allowed_domains)
```
# Client-side sessions
Source: https://docs.blaxel.ai/Sandboxes/Sessions
Operate sandboxes from a frontend client using sessions.
In many situations, you’ll need to operate a sandbox from a frontend client. When doing so, you cannot share the Blaxel credentials needed to access the sandbox. The solution is to use **sessions.**
Sessions are created for a sandbox from a backend server (using Blaxel credentials) and then shared with the frontend client, allowing the browser to connect to the sandbox.
From a session, you can:
* interact only with the [sandbox API](https://docs.blaxel.ai/api-reference/filesystem/get-file-or-directory-information) (i.e. manage the sandbox's filesystem, processes, and logs).
From a session, you cannot:
* interact with the Blaxel API to create other preview URLs or sessions. These operations must be done server-side.
## Basic example
Create a temporary session from your backend to access a sandbox instance from your client application. The main parameter is `expiresAt`, a `Date` object set to the session's expiration time.
```typescript TypeScript theme={null}
// From your backend
import { SandboxInstance } from "@blaxel/core";
const sandbox = await SandboxInstance.get("my-sandbox");
const expiresAt = new Date(Date.now() + 24 * 60 * 60 * 1000); // 24 hours
const session = await sandbox.sessions.create({ expiresAt });
```
```python Python theme={null}
# From your backend
from datetime import datetime, timedelta, UTC
from blaxel.core import SandboxInstance
sandbox = await SandboxInstance.get("my-sandbox")
expires_at = datetime.now(UTC) + timedelta(hours=24)
session = await sandbox.sessions.create({"expires_at": expires_at})
```
```tsx theme={null}
/// From your frontend:
import { SandboxInstance } from "@blaxel/core";
const sandboxWithSession = await SandboxInstance.fromSession(session)
```
### Create if expired
This helper function retrieves an existing session, or creates a new one if it has expired or is about to. You can optionally pass `delta` (default: 1 hour), the time window before actual expiration during which a session is already treated as expired and recreated.
```typescript TypeScript theme={null}
const expiresAt = new Date(Date.now() + 24 * 60 * 60 * 1000); // 24 hours
const session = await sandbox.sessions.createIfExpired(
{ expiresAt },
60000 // delta in milliseconds
);
```
```python Python theme={null}
expires_at = datetime.now(UTC) + timedelta(hours=24)
session = await sandbox.sessions.create_if_expired(
{"expires_at": expires_at},
delta_seconds=60  # delta in seconds
)
```
## Example (Next.js)
The following example demonstrates a full implementation of sessions in a backend server and frontend client using Next.js.
### Server code (backend)
```typescript expandable theme={null}
import { NextResponse } from 'next/server';
import { SandboxInstance } from "@blaxel/core";
const SANDBOX_NAME = 'my-sandbox';
const responseHeaders = {
"Access-Control-Allow-Origin": "http://localhost:3000",
"Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS, PATCH",
"Access-Control-Allow-Headers": "Content-Type, Authorization, X-Requested-With, X-Blaxel-Workspace, X-Blaxel-Preview-Token, X-Blaxel-Authorization",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Content-Length, X-Request-Id",
"Access-Control-Max-Age": "86400",
"Vary": "Origin"
}
export async function GET() {
// Get or create sandbox
const sandbox = await SandboxInstance.createIfNotExists({
name: SANDBOX_NAME,
image: "blaxel/base-image:latest",
memory: 4096,
region: "us-pdx-1",
ports: [
{ name: "preview", target: 3000 }
]
});
// Create session (24 hours expiry)
const session = await sandbox.sessions.create({
expiresAt: new Date(Date.now() + 24 * 60 * 60 * 1000)
});
// Create preview for port 3000
const preview = await sandbox.previews.create({
metadata: { name: "app-preview" },
spec: {
port: 3000,
public: true,
responseHeaders: responseHeaders
}
});
return NextResponse.json({
session,
preview_url: preview.spec?.url
});
}
```
### Client code (frontend)
```typescript expandable theme={null}
'use client'
import { SandboxInstance } from "@blaxel/core";
import { useState, useEffect } from "react";
export default function SandboxClient() {
const [sandbox, setSandbox] = useState(null);
const [previewUrl, setPreviewUrl] = useState(null);
const [loading, setLoading] = useState(true);
useEffect(() => {
initializeSandbox();
}, []);
async function initializeSandbox() {
// Get session from backend
const response = await fetch('/api/sandbox');
const { session, preview_url } = await response.json();
// Create sandbox from session
const sandboxInstance = await SandboxInstance.fromSession(session);
setSandbox(sandboxInstance);
setPreviewUrl(preview_url);
// Start development server
await sandboxInstance.process.exec({
name: "dev-server",
command: "npm run dev",
workingDir: "/app",
waitForPorts: [3000]
});
setLoading(false);
}
if (loading) return <p>Loading sandbox...</p>;
return (
<div>
<h1>Sandbox Demo</h1>
{previewUrl && (
<iframe src={previewUrl} title="Sandbox preview" />
)}
</div>
);
}
```
# Standby control
Source: https://docs.blaxel.ai/Sandboxes/Standby-control
Control when sandboxes remain active by managing WebSocket connections, tab visibility, auto-disconnect behavior, and activity-based timeouts.
Sandboxes stay active as long as there's an active connection to them, typically through a WebSocket connection. When a browser tab becomes inactive, the WebSocket should disconnect after some time, but this behavior depends on the specific browser implementation.
You can also use [process keep-alive](./Processes#sandbox-keep-alive) to keep the sandbox running when you launch a process, even if there isn't an active connection to it.
There are no built-in stop or start functions available in the SDKs to manage standby mode. This means you'll need to rely on other approaches to better control when sandboxes remain active:
1. Hide iframe when tab is inactive
Use JavaScript events to detect when a user is not active on the tab:
* Listen for tab visibility events that indicate when the user switches away from your tab
* When the user becomes inactive, hide the iframe containing the sandbox preview
* Show the iframe again when the user returns to the tab
2. Implement auto-disconnect on tab switch
Some browsers (like Chrome) may keep WebSockets alive even when tabs are inactive to improve performance. You can implement an auto-disconnect feature that:
* Detects when users switch tabs
* Automatically disconnects the WebSocket connection
* Reconnects when the user returns
3. Use activity-based timeouts
Set up a timer system in your interface that:
* Monitors user activity (typing, interactions, etc.)
* Hides the preview after a period of inactivity
* Prompts the user to confirm they're still using the sandbox
You can use the SDKs from the frontend if you have the right headers set on the session. Use the [`sandbox.sessions.create` function](/Sandboxes/Sessions) to manage sessions.
Here is an example of auto disconnect on tab switch with Vite through a plugin:
The code below is illustrative and not intended for production use.
```typescript theme={null}
// vite.config.ts
import { defineConfig } from 'vite'
import { hmrVisibility } from './vite-plugin-hmr-visibility'
// https://vite.dev/config/
export default defineConfig({
plugins: [
hmrVisibility({
disconnectDelay: 2000, // Wait 2 seconds before disconnecting
debug: true, // Enable debug logging
}),
],
server: {
// Prevent Vite from aggressive reconnection attempts
hmr: {
overlay: false, // Disable error overlay which can trigger reconnections
},
},
})
// vite-plugin-hmr-visibility.ts
/**
* Vite Plugin: HMR Visibility Manager
* Automatically disconnects/reconnects HMR websocket based on tab visibility
* to reduce serverless billing costs
*/
import type { Plugin } from 'vite'
export interface HMRVisibilityOptions {
/**
* Grace period before disconnecting (ms)
* @default 2000
*/
disconnectDelay?: number
/**
* Enable debug logging
* @default false
*/
debug?: boolean
}
export function hmrVisibility(options: HMRVisibilityOptions = {}): Plugin {
const {
disconnectDelay = 2000,
debug = false,
} = options
return {
name: 'vite-plugin-hmr-visibility',
apply: 'serve', // Only apply in dev mode
transformIndexHtml() {
// Inject the HMR visibility manager as an inline script
return [
{
tag: 'script',
attrs: { type: 'module' },
children: `
// HMR Visibility Manager - Disconnects HMR when tab is hidden
(function() {
const log = (...args) => {
if (${JSON.stringify(debug)}) {
console.log('[HMR Visibility]', ...args);
}
};
let disconnectTimer = null;
let isTabVisible = !document.hidden;
const DISCONNECT_DELAY = ${JSON.stringify(disconnectDelay)};
// Store reference to original functions
const OriginalWebSocket = window.WebSocket;
const OriginalSetTimeout = window.setTimeout;
const OriginalSetInterval = window.setInterval;
let hmrSocket = null;
let shouldBlockReconnect = false;
let blockedTimers = new Set();
// Patch setTimeout and setInterval to block Vite's reconnection timers
window.setTimeout = function(callback, delay, ...args) {
// Block timers when tab is hidden and it looks like a reconnection attempt
if (shouldBlockReconnect && delay && delay >= 500 && delay <= 5000) {
const callbackStr = callback.toString();
if (callbackStr.includes('connect') || callbackStr.includes('WebSocket') || callbackStr.includes('ws')) {
log('Blocked reconnection timer (setTimeout)');
const timerId = OriginalSetTimeout(() => {}, 999999999);
blockedTimers.add(timerId);
return timerId;
}
}
return OriginalSetTimeout.call(this, callback, delay, ...args);
};
window.setInterval = function(callback, delay, ...args) {
// Block intervals when tab is hidden
if (shouldBlockReconnect) {
const callbackStr = callback.toString();
if (callbackStr.includes('connect') || callbackStr.includes('WebSocket') || callbackStr.includes('ws')) {
log('Blocked reconnection interval (setInterval)');
const timerId = OriginalSetInterval(() => {}, 999999999);
blockedTimers.add(timerId);
return timerId;
}
}
return OriginalSetInterval.call(this, callback, delay, ...args);
};
// Patch WebSocket to track and block HMR connections
window.WebSocket = function(url, protocols) {
// Block new WebSocket connections when tab is hidden
if (shouldBlockReconnect) {
log('Blocked WebSocket connection attempt while tab is hidden:', url);
// Return a fake WebSocket that stays in CONNECTING state forever
const fakeWs = {
readyState: 0, // CONNECTING (keeps Vite waiting)
close: () => { log('Fake WebSocket close called'); },
send: () => { log('Fake WebSocket send called'); },
addEventListener: () => {},
removeEventListener: () => {},
dispatchEvent: () => false,
onopen: null,
onclose: null,
onerror: null,
onmessage: null,
};
return fakeWs;
}
const ws = new OriginalWebSocket(url, protocols);
// Track HMR WebSocket and intercept its event handlers
if (typeof url === 'string') {
hmrSocket = ws;
log('HMR WebSocket created and tracked');
// Intercept the onclose setter to control reconnection behavior
let userOnClose = null;
Object.defineProperty(ws, 'onclose', {
get() { return userOnClose; },
set(handler) {
userOnClose = function(event) {
// If we're blocking reconnect, don't call Vite's onclose handler
if (shouldBlockReconnect) {
log('Suppressed onclose handler - blocking reconnection');
return;
}
// Otherwise, call the original handler
if (handler) {
handler.call(this, event);
}
};
},
configurable: true
});
// Also intercept addEventListener for 'close' events
const originalAddEventListener = ws.addEventListener;
ws.addEventListener = function(type, listener, options) {
if (type === 'close') {
const wrappedListener = function(event) {
if (shouldBlockReconnect) {
log('Suppressed close event listener - blocking reconnection');
return;
}
listener.call(this, event);
};
return originalAddEventListener.call(this, type, wrappedListener, options);
}
return originalAddEventListener.call(this, type, listener, options);
};
}
return ws;
};
// Copy static properties from original WebSocket
Object.setPrototypeOf(window.WebSocket, OriginalWebSocket);
window.WebSocket.prototype = OriginalWebSocket.prototype;
// Copy static constants
Object.defineProperty(window.WebSocket, 'CONNECTING', { value: 0, enumerable: true });
Object.defineProperty(window.WebSocket, 'OPEN', { value: 1, enumerable: true });
Object.defineProperty(window.WebSocket, 'CLOSING', { value: 2, enumerable: true });
Object.defineProperty(window.WebSocket, 'CLOSED', { value: 3, enumerable: true });
const disconnectHMR = () => {
if (hmrSocket && (hmrSocket.readyState === 0 || hmrSocket.readyState === 1)) {
log('Disconnecting HMR websocket (tab hidden) - saving costs ✅');
hmrSocket.close(1000, 'Tab hidden');
hmrSocket = null;
}
shouldBlockReconnect = true;
};
const reconnectHMR = () => {
shouldBlockReconnect = false;
// Clear any blocked timers
blockedTimers.forEach(timerId => {
try {
clearTimeout(timerId);
clearInterval(timerId);
} catch (e) {}
});
blockedTimers.clear();
// If HMR was disconnected, reload to reconnect
if (!hmrSocket || hmrSocket.readyState === 3) {
log('Reloading page to restore HMR connection');
setTimeout(() => window.location.reload(), 100);
}
};
// Handle tab visibility changes
document.addEventListener('visibilitychange', () => {
isTabVisible = !document.hidden;
if (document.hidden) {
log('Tab hidden - will disconnect HMR in ' + DISCONNECT_DELAY + 'ms');
disconnectTimer = OriginalSetTimeout(() => {
if (document.hidden) {
disconnectHMR();
}
}, DISCONNECT_DELAY);
} else {
log('Tab visible - allowing HMR reconnection');
// Clear disconnect timer if tab becomes visible again
if (disconnectTimer) {
clearTimeout(disconnectTimer);
disconnectTimer = null;
}
OriginalSetTimeout(() => {
reconnectHMR();
}, 100);
}
});
// Handle page freeze events
window.addEventListener('freeze', () => {
log('Page freeze detected - disconnecting HMR immediately');
disconnectHMR();
}, { capture: true });
window.addEventListener('resume', () => {
log('Page resume detected - allowing HMR reconnection');
OriginalSetTimeout(() => {
reconnectHMR();
}, 100);
}, { capture: true });
log('HMR Visibility Manager initialized - will block reconnection attempts when tab is hidden');
})();
`,
injectTo: 'head-prepend',
},
]
},
}
}
```
# Templates
Source: https://docs.blaxel.ai/Sandboxes/Templates
Create reusable sandbox templates with pre-configured tools, languages, and frameworks using Dockerfiles. Deploy new sandboxes from templates in seconds.
Sandbox templates allow you to create customized & reusable sandbox environments. They define the tools, languages, frameworks, and configurations that will be available when you spawn a new sandbox instance.
Templates are particularly useful for teams that need standardized environments, or for creating many specialized sandboxes for repeated use cases (a codegen agent, a Git PR review agent, etc.).
## What are sandbox templates?
Sandbox templates are pre-configured images that serve as blueprints for creating sandboxes. Each template includes:
* **Base environment**: Operating system and runtime configurations
* **Tools**: Languages, frameworks, and development tools
* **Configuration**: Environment variables, startup scripts, …
* **Resources**: Memory allocated
When you create a sandbox from a template, Blaxel provisions a new instance with all the specifications defined in that template.
### How sandbox templates work
1. **Initial setup**: Follow this guide to create your sandbox template for the first time. Use `bl push` to push the template to Blaxel, or `bl deploy` to push the template and also create a sandbox from it.
2. **Build phase**: Your Dockerfile is used to create a container with all required tools and configurations.
3. **Initialization phase**: The sandbox API is injected and startup commands are executed.
4. **Instantiation**: New sandboxes can be spawned from this template in seconds.
You cannot directly use "library" container images (such as those hosted on Docker Hub and other registries) as sandbox templates. Instead, you must create one or more custom images for your sandboxes using Dockerfiles and ensure that each image includes Blaxel's sandbox API binary. This is necessary for sandbox functionality like process management and file operations.
## Pre-built templates
Blaxel provides a library of pre-built container images that serve as sandbox templates for common needs.
| Image | Description |
| ------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------- |
| [`blaxel/base-image:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/base-image) | Minimal environment with Node.js 22 (Alpine) |
| [`blaxel/py-app:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/py-app) | Python 3.12 development environment |
| [`blaxel/ts-app:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/ts-app) | TypeScript development environment with Node.js 22 (slim) |
| [`blaxel/node:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/node) | Node.js development environment with Node.js 23 (Alpine) |
| [`blaxel/nextjs:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/nextjs) | Next.js development environment with Node.js 22 (Alpine) |
| [`blaxel/vite:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/vite) | Vite + React + TS development environment with Node.js 22 (Alpine) |
| [`blaxel/astro:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/astro) | Astro development environment with Node.js 22 (Alpine) |
| [`blaxel/expo:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/expo) | React Native (Expo) development with Node.js 22 (Alpine) |
| [`blaxel/chromium:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/chromium) | Headless Chromium environment with Chrome 124 (Alpine) |
| [`blaxel/lightpanda:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/lightpanda) | Lightweight headless browser |
| [`blaxel/playwright-chromium:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/playwright-chromium) | Playwright + Chromium browser automation environment with Node.js 20 |
| [`blaxel/playwright-firefox:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/playwright-firefox) | Playwright + Firefox browser automation environment with Node.js 20 |
| [`blaxel/docker-in-sandbox:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/docker-in-sandbox) | Docker-in-Docker environment |
| [`blaxel/xfce-vnc:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/xfce-vnc) | XFCE desktop environment with VNC |
| [`blaxel/cua-xfce:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/cua-xfce) | XFCE desktop environment with CUA |
| [`blaxel/jupyter-notebook:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/jupyter-notebook) | Jupyter Notebook with Python 3.12 |
| [`blaxel/jupyter-server:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/jupyter-server) | Jupyter Server with Python 3.12 |
| [`blaxel/benchmark:latest`](https://github.com/blaxel-ai/sandbox/tree/main/hub/benchmark) | Sandbox benchmarking environment |
## Custom templates
You can also customize a template for specific needs.
### Create a sandbox template
You can create a customized sandbox template using the Blaxel CLI or REST API.
#### Blaxel CLI
##### 1. Initialize a template
Start by creating a new sandbox template using the Blaxel CLI:
```bash theme={null}
bl new sandbox mytemplate
```
This creates a new directory with the essential template files:
```text theme={null}
mytemplate/
├── blaxel.toml # Template configuration
├── Makefile # Build commands
├── Dockerfile # Defines the sandbox environment
└── entrypoint.sh # Initialization script
```
##### 2. Customize the Dockerfile
The Dockerfile is the heart of your template. It defines what will be available in your sandbox environment.
```docker theme={null}
# Choose a base image
FROM node:22-alpine
# Set working directory
WORKDIR /app
# Copy sandbox API (required)
COPY --from=ghcr.io/blaxel-ai/sandbox:latest /sandbox-api /usr/local/bin/sandbox-api
# Install system dependencies
RUN apk update && apk add --no-cache \
git curl python3 make g++ netcat-openbsd \
&& rm -rf /var/cache/apk/*
# Copy and set up entrypoint
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```
**Always include** the *sandbox-api* binary from the Blaxel base image. This is required for sandbox functionality like process management and file operations.
##### 3. Configure template settings
The `blaxel.toml` file defines your template’s runtime configuration:
```toml theme={null}
name = "mytemplate"
type = "sandbox"
description = "Full-stack development environment with Node.js and Python"
[runtime]
generation = "mk3"
memory = 8192 # 8GB RAM
# Define exposed ports
[[runtime.ports]]
name = "dev-server"
target = 3000
protocol = "tcp"
[[runtime.ports]]
name = "another-api"
target = 8888
protocol = "tcp"
# Set environment variables
[env]
NODE_ENV = "development"
PYTHON_ENV = "development"
```
Currently, it is not possible to add or update environment variables for a sandbox after it is created. Ensure that any required environment variables are defined in your Dockerfile, your `blaxel.toml` file, or at sandbox creation time using the Blaxel SDKs.
##### 4. Define initialization
The `entrypoint.sh` script runs when a sandbox is created from your template:
```bash theme={null}
#!/bin/sh
# Start the sandbox API (required)
/usr/local/bin/sandbox-api &
# Wait for sandbox API to be ready
echo "Waiting for sandbox API..."
while ! nc -z 127.0.0.1 8080; do
sleep 0.1
done
echo "Sandbox API ready"
# Initialize your environment
echo "Setting up development environment..."
# Example: Start a development server in the background
if [ -f /app/package.json ]; then
cd /app
npm install
# Start the dev server through the sandbox API so that you can later access
# its logs, process status, and everything else the sandbox API exposes
echo "Running Next.js dev server..."
curl http://127.0.0.1:8080/process -X POST -d '{"workingDir": "/app", "command": "npm run dev", "waitForCompletion": false}' -H "Content-Type: application/json"
fi
# Keep the container running
wait
```
##### 5. Build and test locally
Before creating the template on Blaxel, test it locally:
```bash theme={null}
# Build the Docker image
make build
# Run locally to test
make run
# Access your sandbox-api on exposed ports
# e.g., http://127.0.0.1:8080
# Example: curl http://127.0.0.1:8080/process
```
##### 6. Push the template
Once satisfied with your configuration, push the template to Blaxel:
```bash theme={null}
bl push
```
This will:
1. Build your Docker image
2. Push it to Blaxel's registry (private to your workspace)
3. Return an image ID you can use for creating sandboxes
Use `bl deploy` instead if you also want Blaxel to automatically create a first sandbox from the template, so you can test it:
```bash theme={null}
bl deploy
```
You can monitor the sandbox deployment with:
```bash theme={null}
bl get sandbox mytemplate --watch
```
You can safely delete the sandbox afterwards and keep using the template for new sandboxes.
#### Blaxel SDK
The Blaxel SDK includes a declarative image builder that lets you define custom sandbox images directly in your code using a fluent, chainable API. Instead of writing Dockerfiles manually, you describe your image step by step, and the SDK generates the Dockerfile, builds the image, and deploys it as a sandbox.
This approach is ideal for dynamic image definitions, CI/CD pipelines, or when your image configuration depends on runtime logic.
The Declarative Image Builder is available in the **TypeScript** and **Python** SDKs. The Go SDK does not support this feature.
Here is a simple example of how to use it:
```typescript TypeScript theme={null}
import { ImageInstance } from "@blaxel/core";
const image = ImageInstance.fromRegistry("python:3.11-slim")
.aptInstall("git", "curl")
.workdir("/app")
.pipInstall("requests", "httpx", "pydantic")
.env({ PYTHONUNBUFFERED: "1" });
const sandbox = await image.build({
name: "my-sandbox",
memory: 4096,
});
```
```python Python theme={null}
from blaxel.core import ImageInstance
image = (
ImageInstance.from_registry("python:3.11-slim")
.apt_install("git", "curl")
.workdir("/app")
.pip_install("requests", "httpx", "pydantic")
.env(PYTHONUNBUFFERED="1")
)
sandbox = await image.build(
name="my-sandbox",
memory=4096,
)
```
The three steps to use the image builder are:
1. **Define**: Use the fluent API to describe your image starting from a base image.
2. **Chain**: Each method returns a new immutable `ImageInstance`, so you can branch and reuse configurations.
3. **Build**: Call `build()` to generate a Dockerfile, package it with any local files, upload it to Blaxel, and deploy it as a sandbox.
The SDK automatically injects the Blaxel sandbox API binary and sets a default entrypoint (if you haven't specified one), so your image is ready to use as a sandbox without manual configuration.
##### Select a base image
Start by selecting a base image from any Docker registry:
```typescript TypeScript theme={null}
import { ImageInstance } from "@blaxel/core";
const image = ImageInstance.fromRegistry("ubuntu:22.04");
```
```python Python theme={null}
from blaxel.core import ImageInstance
image = ImageInstance.from_registry("ubuntu:22.04")
```
##### Execute build commands
Execute shell commands during the image build:
```typescript TypeScript theme={null}
const image = ImageInstance.fromRegistry("node:20")
.runCommands(
"apt-get update && apt-get install -y python3 make g++"
);
```
```python Python theme={null}
image = (
ImageInstance.from_registry("node:20")
.run_commands(
"apt-get update && apt-get install -y python3 make g++",
)
)
```
##### Use package managers
The SDK provides convenience methods for common package managers. These generate optimized `RUN` commands with sensible defaults (e.g., cache cleanup for apt).
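To make "optimized `RUN` commands" concrete, here is a rough Python sketch of the instruction an `aptInstall` / `apt_install` call expands to. The exact format is inferred from the generated-Dockerfile output shown later on this page; treat it as illustrative, not as the SDK's internal implementation:

```python
def apt_install_run_line(*packages: str) -> str:
    """Approximate the RUN instruction generated for apt packages:
    update the index, install without recommends, then clean the apt cache."""
    return (
        "RUN apt-get update && "
        f"apt-get install -y --no-install-recommends {' '.join(packages)} && "
        "rm -rf /var/lib/apt/lists/*"
    )
```

For example, `apt_install_run_line("git")` produces the same single-layer instruction you can see in the "Inspect the generated Dockerfile" section below.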
###### Python (pip)
```typescript TypeScript theme={null}
// Simple install
const image = ImageInstance.fromRegistry("python:3.11-slim")
.pipInstall("requests", "numpy>=1.20", "pandas");
// With options
const image = ImageInstance.fromRegistry("python:3.11-slim")
.pipInstall(
{ indexUrl: "https://my-private-index.com/simple", pre: true },
"my-package",
"another-package"
);
```
```python Python theme={null}
# Simple install
image = (
ImageInstance.from_registry("python:3.11-slim")
.pip_install("requests", "numpy>=1.20", "pandas")
)
# With options
image = (
ImageInstance.from_registry("python:3.11-slim")
.pip_install(
"my-package",
"another-package",
index_url="https://my-private-index.com/simple",
pre=True,
)
)
```
###### Python (uv)
```typescript TypeScript theme={null}
const image = ImageInstance.fromRegistry("python:3.11-slim")
.runCommands("pip install uv")
.uvInstall("requests", "httpx");
```
```python Python theme={null}
image = (
ImageInstance.from_registry("python:3.11-slim")
.run_commands("pip install uv")
.uv_install("requests", "httpx")
)
```
###### Node.js (npm, yarn, pnpm, bun)
```typescript TypeScript theme={null}
// Install specific packages with npm (default)
const image = ImageInstance.fromRegistry("node:20-alpine")
.npmInstall("express", "typescript");
// Use a different package manager
const image = ImageInstance.fromRegistry("node:20-alpine")
.npmInstall(
{ packageManager: "pnpm", globalInstall: true },
"turbo"
);
// Install from package.json (no packages specified)
const image = ImageInstance.fromRegistry("node:20-alpine")
.workdir("/app")
.copy("package.json", "/app/package.json")
.npmInstall();
```
```python Python theme={null}
# Install specific packages with npm (default)
image = (
ImageInstance.from_registry("node:20-alpine")
.npm_install("express", "typescript")
)
# Use a different package manager
image = (
ImageInstance.from_registry("node:20-alpine")
.npm_install("turbo", package_manager="pnpm", global_install=True)
)
# Install from package.json (no packages specified)
image = (
ImageInstance.from_registry("node:20-alpine")
.workdir("/app")
.copy("package.json", "/app/package.json")
.npm_install()
)
```
###### Install system packages (apt, apk)
```typescript TypeScript theme={null}
// Debian/Ubuntu (apt-get)
const image = ImageInstance.fromRegistry("ubuntu:22.04")
.aptInstall("git", "curl", "build-essential");
// Alpine (apk)
const image = ImageInstance.fromRegistry("node:20-alpine")
.apkAdd("git", "curl", "python3");
```
```python Python theme={null}
# Debian/Ubuntu (apt-get)
image = (
ImageInstance.from_registry("ubuntu:22.04")
.apt_install("git", "curl", "build-essential")
)
# Alpine (apk)
image = (
ImageInstance.from_registry("node:20-alpine")
.apk_add("git", "curl", "python3")
)
```
###### Use other package managers
```typescript TypeScript theme={null}
// Ruby gems
const image = ImageInstance.fromRegistry("ruby:3.2")
.gemInstall("rails", "bundler");
// Rust crates
const image = ImageInstance.fromRegistry("rust:1.75")
.cargoInstall({ locked: true }, "cargo-watch", "cargo-edit");
// Go packages
const image = ImageInstance.fromRegistry("golang:1.22")
.goInstall("github.com/golangci/golangci-lint/cmd/golangci-lint@latest");
// PHP Composer
const image = ImageInstance.fromRegistry("php:8.3-cli")
.composerInstall("laravel/framework", "guzzlehttp/guzzle");
// Python CLI tools (pipx)
const image = ImageInstance.fromRegistry("python:3.11")
.pipxInstall("black", "ruff");
```
```python Python theme={null}
# Ruby gems
image = (
ImageInstance.from_registry("ruby:3.2")
.gem_install("rails", "bundler")
)
# Rust crates
image = (
ImageInstance.from_registry("rust:1.75")
.cargo_install("cargo-watch", "cargo-edit", locked=True)
)
# Go packages
image = (
ImageInstance.from_registry("golang:1.22")
.go_install("github.com/golangci/golangci-lint/cmd/golangci-lint@latest")
)
# PHP Composer
image = (
ImageInstance.from_registry("php:8.3-cli")
.composer_install("laravel/framework", "guzzlehttp/guzzle")
)
# Python CLI tools (pipx)
image = (
ImageInstance.from_registry("python:3.11")
.pipx_install("black", "ruff")
)
```
##### Add Dockerfile instructions
All standard Dockerfile instructions are available as methods.
###### Specify the working directory
```typescript TypeScript theme={null}
const image = ImageInstance.fromRegistry("node:20")
.workdir("/app");
```
```python Python theme={null}
image = ImageInstance.from_registry("node:20").workdir("/app")
```
###### Define environment variables
```typescript TypeScript theme={null}
const image = ImageInstance.fromRegistry("python:3.11")
.env({
PYTHONUNBUFFERED: "1",
APP_ENV: "production",
});
```
```python Python theme={null}
image = (
ImageInstance.from_registry("python:3.11")
.env(PYTHONUNBUFFERED="1", APP_ENV="production")
)
```
###### Copy files
```typescript TypeScript theme={null}
const image = ImageInstance.fromRegistry("node:20")
.workdir("/app")
.copy("package.json", "/app/package.json")
.copy("src", "/app/src");
```
```python Python theme={null}
image = (
ImageInstance.from_registry("node:20")
.workdir("/app")
.copy("package.json", "/app/package.json")
.copy("src", "/app/src")
)
```
###### Expose ports
```typescript TypeScript theme={null}
const image = ImageInstance.fromRegistry("node:20")
.expose(3000, 8080);
```
```python Python theme={null}
image = ImageInstance.from_registry("node:20").expose(3000, 8080)
```
###### Set an entrypoint
```typescript TypeScript theme={null}
const image = ImageInstance.fromRegistry("python:3.11")
.entrypoint("python", "-m", "uvicorn", "main:app");
```
```python Python theme={null}
image = (
ImageInstance.from_registry("python:3.11")
.entrypoint("python", "-m", "uvicorn", "main:app")
)
```
###### Add users
```typescript TypeScript theme={null}
const image = ImageInstance.fromRegistry("ubuntu:22.04")
.runCommands("useradd -m appuser")
.user("appuser");
```
```python Python theme={null}
image = (
ImageInstance.from_registry("ubuntu:22.04")
.run_commands("useradd -m appuser")
.user("appuser")
)
```
###### Add labels and build arguments
```typescript TypeScript theme={null}
const image = ImageInstance.fromRegistry("python:3.11")
.label({ version: "1.0", maintainer: "team@example.com" })
.arg("BUILD_ENV", "production");
```
```python Python theme={null}
image = (
ImageInstance.from_registry("python:3.11")
.label(version="1.0", maintainer="team@example.com")
.arg("BUILD_ENV", "production")
)
```
##### Add local files
Copy files and directories from your local machine into the image:
```typescript TypeScript theme={null}
const image = ImageInstance.fromRegistry("python:3.11")
.workdir("/app")
.addLocalFile("./config.json", "/app/config.json")
.addLocalDir("./src", "/app/src");
```
```python Python theme={null}
image = (
ImageInstance.from_registry("python:3.11")
.workdir("/app")
.add_local_file("./config.json", "/app/config.json")
.add_local_dir("./src", "/app/src")
)
```
Local files are resolved to absolute paths, included in the build context, and `COPY`-ed into the image. You can optionally provide a `contextName` (TypeScript) / `context_name` (Python) parameter to control the filename in the build context.
##### Build and deploy
###### Basic build
```typescript TypeScript theme={null}
const sandbox = await image.build({
name: "my-sandbox",
});
```
```python Python theme={null}
sandbox = await image.build(name="my-sandbox")
```
###### Build with all options
```typescript TypeScript theme={null}
const sandbox = await image.build({
name: "my-sandbox",
memory: 8192, // Memory in MB (default: 4096)
timeout: 900000, // Timeout in ms (default: 900000 = 15 min) — NOTE: Python SDK uses seconds
sandboxVersion: "latest", // Sandbox API version
onStatusChange: (status) => {
console.log(`Build status: ${status}`);
},
});
```
```python Python theme={null}
sandbox = await image.build(
name="my-sandbox",
memory=8192, # Memory in MB (default: 4096)
timeout=900.0, # Timeout in seconds (default: 900) — NOTE: TypeScript SDK uses ms
sandbox_version="latest", # Sandbox API version
on_status_change=lambda s: print(f"Build status: {s}"),
)
```
The `build()` method automatically:
* Injects the Blaxel sandbox API binary
* Sets a default entrypoint if none was specified
* Packages the Dockerfile and local files into a ZIP archive
* Uploads and deploys the image as a sandbox
* Polls until the sandbox reaches `DEPLOYED` or `FAILED` status
###### Inspect the generated Dockerfile
You can preview the Dockerfile without building:
```typescript TypeScript theme={null}
const image = ImageInstance.fromRegistry("python:3.11-slim")
.aptInstall("git")
.workdir("/app")
.pipInstall("requests");
console.log(image.dockerfile);
// FROM python:3.11-slim
// RUN apt-get update && apt-get install -y --no-install-recommends git && rm -rf /var/lib/apt/lists/*
// WORKDIR /app
// RUN pip install requests
```
```python Python theme={null}
image = (
ImageInstance.from_registry("python:3.11-slim")
.apt_install("git")
.workdir("/app")
.pip_install("requests")
)
print(image.dockerfile)
# FROM python:3.11-slim
# RUN apt-get update && apt-get install -y --no-install-recommends git && rm -rf /var/lib/apt/lists/*
# WORKDIR /app
# RUN pip install requests
```
###### Write to disk
You can also write the image to a folder for manual inspection or use with `bl deploy`:
```typescript TypeScript theme={null}
// Write to a specific directory
const buildDir = image.write("./output", "my-image");
// Write to a temporary directory
const tempDir = image.writeTemp();
```
```python Python theme={null}
# Write to a specific directory
build_dir = image.write("./output", "my-image")
# Write to a temporary directory
temp_dir = image.write_temp()
```
##### Understand immutability and branching
Each method returns a new `ImageInstance`, leaving the original unchanged. This lets you create base images and branch from them:
```typescript TypeScript theme={null}
// Define a shared base
const base = ImageInstance.fromRegistry("python:3.11-slim")
.aptInstall("git", "curl")
.workdir("/app");
// Branch for different use cases
const mlImage = base
.pipInstall("torch", "transformers", "datasets")
.env({ CUDA_VISIBLE_DEVICES: "0" });
const webImage = base
.pipInstall("fastapi", "uvicorn", "sqlalchemy")
.expose(8000)
.entrypoint("python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0");
// Build both independently
const mlSandbox = await mlImage.build({ name: "ml-sandbox", memory: 16384 });
const webSandbox = await webImage.build({ name: "web-sandbox", memory: 4096 });
```
```python Python theme={null}
# Define a shared base
base = (
ImageInstance.from_registry("python:3.11-slim")
.apt_install("git", "curl")
.workdir("/app")
)
# Branch for different use cases
ml_image = (
base
.pip_install("torch", "transformers", "datasets")
.env(CUDA_VISIBLE_DEVICES="0")
)
web_image = (
base
.pip_install("fastapi", "uvicorn", "sqlalchemy")
.expose(8000)
.entrypoint("python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0")
)
# Build both independently
ml_sandbox = await ml_image.build(name="ml-sandbox", memory=16384)
web_sandbox = await web_image.build(name="web-sandbox", memory=4096)
```
##### Example script
A complete example building a Node.js / Python development sandbox:
```typescript TypeScript theme={null}
import { ImageInstance } from "@blaxel/core";
const sandbox = await ImageInstance.fromRegistry("node:20-bookworm-slim")
// System dependencies
.aptInstall("git", "curl", "build-essential")
// Working directory
.workdir("/app")
// Node.js dependencies
.npmInstall({ packageManager: "npm", globalInstall: true }, "turbo")
// Environment
.env({
NODE_ENV: "development",
EDITOR: "code",
})
// Expose dev server ports
.expose(3000, 5173)
// Build and deploy
.build({
name: "node-dev",
memory: 8192,
onStatusChange: (status) => console.log(`Status: ${status}`),
});
console.log("Sandbox deployed:", sandbox.metadata?.name);
```
```python Python theme={null}
import asyncio
from blaxel.core import ImageInstance
async def main():
sandbox = await (
ImageInstance.from_registry("python:3.12-slim")
# System dependencies
.apt_install("git", "curl", "build-essential")
# Working directory
.workdir("/app")
# Python dependencies
.pip_install("httpie", "ipython", "requests", "fastapi", "uvicorn")
# Environment
.env(
PYTHONUNBUFFERED="1",
EDITOR="code",
)
# Expose dev server port
.expose(8000)
# Build and deploy
.build(
name="python-dev",
memory=8192,
on_status_change=lambda s: print(f"Status: {s}"),
)
)
print("Sandbox deployed:", sandbox.metadata.name if sandbox.metadata else None)
asyncio.run(main())
```
##### API reference
###### Methods
| Method | Description |
| -------------------------------------- | ------------------------------------------------- |
| `fromRegistry` / `from_registry` | Create an image from a Docker registry base image |
| `runCommands` / `run_commands` | Run shell commands (`RUN`) |
| `workdir` | Set working directory (`WORKDIR`) |
| `env` | Set environment variables (`ENV`) |
| `copy` | Copy from build context (`COPY`) |
| `addLocalFile` / `add_local_file` | Add a local file to build context and image |
| `addLocalDir` / `add_local_dir` | Add a local directory to build context and image |
| `expose` | Expose ports (`EXPOSE`) |
| `entrypoint` | Set container entrypoint (`ENTRYPOINT`) |
| `user` | Set the user (`USER`) |
| `label` | Add labels (`LABEL`) |
| `arg` | Define build arguments (`ARG`) |
| `pipInstall` / `pip_install` | Install Python packages with pip |
| `uvInstall` / `uv_install` | Install Python packages with uv |
| `pipxInstall` / `pipx_install` | Install Python CLI apps with pipx |
| `aptInstall` / `apt_install` | Install Debian/Ubuntu packages |
| `apkAdd` / `apk_add` | Install Alpine packages |
| `npmInstall` / `npm_install` | Install Node.js packages (npm/yarn/pnpm/bun) |
| `gemInstall` / `gem_install` | Install Ruby gems |
| `cargoInstall` / `cargo_install` | Install Rust crates |
| `goInstall` / `go_install` | Install Go packages |
| `composerInstall` / `composer_install` | Install PHP packages |
| `build` | Build and deploy as a sandbox |
| `write` | Write image to a folder |
| `writeTemp` / `write_temp` | Write image to a temporary folder |
###### Properties
| Property | Description |
| -------------------------- | --------------------------------------------------- |
| `dockerfile` | Generated Dockerfile content |
| `hash` | 12-character SHA256 hash of the image configuration |
| `baseImage` / `base_image` | Base image tag |
#### Blaxel API
Although less common, it is also possible to create a sandbox template and sandbox by directly interacting with the Blaxel API.
Ensure that you have the following:
* A [Blaxel API key](https://app.blaxel.ai/profile/security)
* The workspace name, found at the bottom left corner of the [Blaxel Console](https://app.blaxel.ai/) or via `bl workspaces`
##### 1. Create a ZIP archive
Create a directory with the following project contents:
* `Dockerfile` (required) - Defines your custom sandbox image and must include `sandbox-api`
* Any custom scripts (e.g., `entrypoint.sh` for initialization logic)
* Configuration files or data files as needed
* Additional dependencies or binaries your sandbox requires
Here is an example of the expected project structure:
```text theme={null}
mytemplate/
├── Dockerfile # Required - defines your image
└── entrypoint.sh # Optional - for custom initialization
```
* The Dockerfile is the heart of your template. It defines what will be available in your sandbox environment. [See an example](#2-customize-the-dockerfile).
* The `entrypoint.sh` script runs when a sandbox is created from your template. [See an example](#4-define-initialization).
The Dockerfile can reference and use any files included in the ZIP archive. Everything gets extracted and built together as a Docker image.
Once the files are ready, create a ZIP archive containing the files:
```bash theme={null}
(cd mytemplate && zip -r ../mytemplate.zip .)
```
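If you are scripting this step rather than using the shell one-liner, the same archive can be produced with Python's stdlib `zipfile` module. This is a sketch; any zip tool works, as long as entries are stored relative to the template directory (so `Dockerfile` sits at the archive root):

```python
import os
import zipfile

def zip_template(source_dir: str, zip_path: str) -> list[str]:
    """Zip the *contents* of source_dir with paths relative to it
    (equivalent to `cd dir && zip -r ../out.zip .`) and return the entry names."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, source_dir))
    with zipfile.ZipFile(zip_path) as zf:
        return zf.namelist()
```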
##### 2. Create a sandbox resource
Set your Blaxel API key and workspace as environment variables:
```bash theme={null}
export BL_API_KEY=YOUR_API_KEY
export BL_WORKSPACE=YOUR_WORKSPACE_NAME
```
Make an HTTP POST request to create or update your sandbox resource. This will return an upload URL for the ZIP archive.
```bash theme={null}
curl -v -X POST "https://api.blaxel.ai/v0/sandboxes?upload=true" \
-H "Authorization: Bearer $BL_API_KEY" \
-H "X-Blaxel-Workspace: $BL_WORKSPACE" \
-H "Content-Type: application/json" \
-d '{
"apiVersion": "blaxel.ai/v1alpha1",
"kind": "Sandbox",
"metadata": {
"name": "my-sandbox"
},
"spec": {
"runtime": {
"memory": 2048
},
"region": "us-pdx-1"
}
}'
```
Note the `upload=true` query parameter in the request, which indicates intent to upload custom code.
The Blaxel API returns a JSON response, but the upload URL is not in the JSON body: it is returned in the `x-blaxel-upload-url` response header. Use this URL as the target when uploading your template.
Example response:
```http theme={null}
HTTP/2 200
x-blaxel-upload-url: https://controlplane-prod-build-sources...
```
Refer to the documentation on [sandbox configuration parameters](#understand-sandbox-configuration) for more information on the body of the POST request.
##### 3. Upload ZIP archive
Use an HTTP PUT request to upload the ZIP file to the upload URL. Replace the placeholder URL in the command below with the value of the `x-blaxel-upload-url` response header received earlier.
```bash theme={null}
export UPLOAD_URL="https://controlplane-prod-build-sources..."
curl -X PUT "$UPLOAD_URL" \
-H "Content-Type: application/zip" \
--data-binary @mytemplate.zip
```
The upload is successful when you receive a `200 OK` status code.
##### 4. Monitor deployment status
After uploading, poll the sandbox status endpoint to track the build and deployment progress.
Make a GET request to `https://api.blaxel.ai/v0/sandboxes/SANDBOX-NAME`, where `SANDBOX-NAME` is the `metadata.name` specified in the initial POST request.
```bash theme={null}
watch -n 1 'curl -s -X GET https://api.blaxel.ai/v0/sandboxes/my-sandbox -H "Authorization: Bearer $BL_API_KEY" -H "X-Blaxel-Workspace: $BL_WORKSPACE" | jq -r ".status"'
```
The `status` field of the response will progress through these values:
| Status | Description |
| ------------- | ------------------------------------------ |
| `UPLOADING` | Code archive is being uploaded |
| `BUILDING` | Docker image is being built |
| `DEPLOYING` | Container is being deployed to the cluster |
| `DEPLOYED` | Sandbox is ready to use |
| `FAILED` | Deployment failed (check build logs) |
| `DEACTIVATED` | Sandbox has been deactivated |
Continue polling every 3-5 seconds until the status reaches `DEPLOYED` or `FAILED`.
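That polling loop can be sketched as a small generic helper, where `get_status` stands in for whatever client call fetches the sandbox's `status` field:

```python
import time

TERMINAL_STATUSES = {"DEPLOYED", "FAILED"}

def poll_until_terminal(get_status, interval=3.0, timeout=900.0, sleep=time.sleep):
    """Call get_status() every `interval` seconds until it returns DEPLOYED
    or FAILED; raise TimeoutError once `timeout` seconds have elapsed."""
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status in TERMINAL_STATUSES:
            return status
        if time.monotonic() + interval > deadline:
            raise TimeoutError(f"sandbox still {status!r} after {timeout}s")
        sleep(interval)
```

The `sleep` parameter is injectable so the loop can be exercised in tests without waiting; in production code you would leave it at the default.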
A first sandbox built from the template is automatically created on Blaxel once deployment succeeds. You can safely delete it and keep using the template for new sandboxes.
##### Example script
Here's a complete example script that performs all the steps above:
```bash Shell expandable theme={null}
#!/bin/bash
# Real deployment script that executes the documented API workflow
set -e
# Configuration
SANDBOX_NAME="my-custom-sandbox-$(date +%s)"
SOURCE_DIR="mytemplate"
ZIP_FILE="mytemplate.zip"
BASE_URL="https://api.blaxel.ai/v0"
# Colors for output
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Cleanup function
cleanup() {
if [ -f "$ZIP_FILE" ]; then
rm -f "$ZIP_FILE"
echo -e "\n${GREEN}✓ Cleaned up temporary files${NC}"
fi
}
# Set trap to cleanup on exit
trap cleanup EXIT
# Validate credentials
if [ -z "$BL_API_KEY" ]; then
echo -e "${RED}Error: BL_API_KEY not set${NC}"
echo "Get your API key from workspace settings and set it with:"
echo " export BL_API_KEY='your-api-key'"
exit 1
fi
if [ -z "$BL_WORKSPACE" ]; then
echo -e "${RED}Error: BL_WORKSPACE not set${NC}"
echo "Set your workspace name with:"
echo " export BL_WORKSPACE='your-workspace-name'"
exit 1
fi
# Check if source directory exists
if [ ! -d "$SOURCE_DIR" ]; then
echo -e "${RED}Error: $SOURCE_DIR directory not found${NC}"
exit 1
fi
# Create ZIP archive
echo -e "${BLUE}Creating ZIP archive from $SOURCE_DIR...${NC}"
cd "$SOURCE_DIR"
zip -q -r "../${ZIP_FILE}" .
cd ..
FILE_SIZE=$(wc -c < "$ZIP_FILE" | tr -d ' ')
echo -e "${GREEN}✓ ZIP archive created: $ZIP_FILE ($FILE_SIZE bytes)${NC}\n"
# Step 1: Create sandbox and get upload URL
echo -e "${BLUE}[1/4] Creating sandbox '$SANDBOX_NAME'...${NC}"
# Create temporary file for headers
HEADERS_FILE=$(mktemp)
CREATE_RESPONSE=$(curl -s -D "$HEADERS_FILE" -w "\n%{http_code}" -X POST "$BASE_URL/sandboxes?upload=true" \
-H "Authorization: Bearer $BL_API_KEY" \
-H "X-Blaxel-Workspace: $BL_WORKSPACE" \
-H "Content-Type: application/json" \
-d '{
"apiVersion": "blaxel.ai/v1alpha1",
"kind": "Sandbox",
"metadata": {
"name": "'"$SANDBOX_NAME"'"
},
"spec": {
"runtime": {
"memory": 2048
},
"region": "us-pdx-1"
}
}')
HTTP_CODE=$(echo "$CREATE_RESPONSE" | tail -n 1)
RESPONSE_BODY=$(echo "$CREATE_RESPONSE" | sed '$d')
if [ "$HTTP_CODE" != "200" ] && [ "$HTTP_CODE" != "201" ]; then
echo -e "${RED}✗ Failed to create sandbox (HTTP $HTTP_CODE)${NC}"
echo "$RESPONSE_BODY" | jq . 2>/dev/null || echo "$RESPONSE_BODY"
rm -f "$HEADERS_FILE"
exit 1
fi
# Extract upload URL from response headers
UPLOAD_URL=$(grep -i "x-blaxel-upload-url:" "$HEADERS_FILE" | cut -d' ' -f2- | tr -d '\r\n')
if [ -z "$UPLOAD_URL" ]; then
echo -e "${RED}✗ No upload URL received in response headers${NC}"
rm -f "$HEADERS_FILE"
exit 1
fi
rm -f "$HEADERS_FILE"
echo -e "${GREEN}✓ Sandbox created${NC}"
echo ""
# Step 2: Upload ZIP file
echo -e "${BLUE}[2/4] Uploading code archive ($FILE_SIZE bytes)...${NC}"
UPLOAD_RESPONSE=$(curl -s -w "%{http_code}" -X PUT "$UPLOAD_URL" \
-H "Content-Type: application/zip" \
--data-binary "@$ZIP_FILE")
HTTP_CODE="${UPLOAD_RESPONSE: -3}"
if [ "$HTTP_CODE" != "200" ]; then
echo -e "${RED}✗ Upload failed with status $HTTP_CODE${NC}"
exit 1
fi
echo -e "${GREEN}✓ Upload completed${NC}"
echo ""
# Step 3: Monitor deployment status
echo -e "${BLUE}[3/4] Monitoring deployment status...${NC}"
MAX_WAIT=900 # 15 minutes
START_TIME=$(date +%s)
LAST_STATUS=""
while true; do
# Check timeout
CURRENT_TIME=$(date +%s)
ELAPSED=$((CURRENT_TIME - START_TIME))
if [ $ELAPSED -gt $MAX_WAIT ]; then
echo -e "${RED}✗ Deployment timed out after $MAX_WAIT seconds${NC}"
exit 1
fi
# Get current status
STATUS_RESPONSE=$(curl -s -X GET "$BASE_URL/sandboxes/$SANDBOX_NAME" \
-H "Authorization: Bearer $BL_API_KEY" \
-H "X-Blaxel-Workspace: $BL_WORKSPACE")
STATUS=$(echo "$STATUS_RESPONSE" | jq -r '.status // empty')
if [ -z "$STATUS" ]; then
echo -e "${YELLOW}Warning: Could not get status, retrying...${NC}"
sleep 3
continue
fi
# Log status changes
if [ "$STATUS" != "$LAST_STATUS" ]; then
echo " Status: $STATUS"
LAST_STATUS=$STATUS
fi
# Check terminal states
if [ "$STATUS" = "DEPLOYED" ]; then
IMAGE=$(echo "$STATUS_RESPONSE" | jq -r '.spec.runtime.image // empty')
echo ""
echo -e "${GREEN}🎉 Deployment complete!${NC}"
echo "Sandbox: $SANDBOX_NAME"
echo "Image: $IMAGE"
echo ""
# Step 4: Show how to use it
echo -e "${BLUE}[4/4] How to use your sandbox:${NC}"
echo ""
echo "Run a command:"
echo " bl run sandbox $SANDBOX_NAME"
echo ""
echo "Get sandbox details:"
echo " bl get sandbox $SANDBOX_NAME"
echo ""
echo "View logs:"
echo " bl logs sandbox $SANDBOX_NAME"
echo ""
echo "Delete sandbox:"
echo " bl delete sandbox $SANDBOX_NAME"
echo ""
exit 0
elif [ "$STATUS" = "FAILED" ]; then
echo ""
echo -e "${RED}✗ Deployment failed${NC}"
echo ""
echo "Check build logs with:"
echo " curl -X GET '$BASE_URL/sandboxes/$SANDBOX_NAME/build-logs' \\"
echo " -H 'Authorization: Bearer \$BL_API_KEY' \\"
echo " -H 'X-Blaxel-Workspace: \$BL_WORKSPACE'"
exit 1
elif [ "$STATUS" = "DEACTIVATED" ] || [ "$STATUS" = "DEACTIVATING" ] || [ "$STATUS" = "DELETING" ]; then
echo ""
echo -e "${RED}✗ Unexpected status: $STATUS${NC}"
exit 1
fi
# Wait before next poll
sleep 3
done
```
```typescript TypeScript expandable theme={null}
import { SandboxInstance, settings } from "@blaxel/core";
import { execSync } from "node:child_process";
import { existsSync, readFileSync, statSync, unlinkSync } from "node:fs";
import { resolve } from "node:path";
const SANDBOX_NAME = `my-custom-sandbox-${Math.floor(Date.now() / 1000)}`;
const SOURCE_DIR = "mytemplate";
const ZIP_FILE = "mytemplate.zip";
async function poll() {
const maxWait = 900;
const startTime = Date.now();
let lastStatus: string | null = null;
while (true) {
if ((Date.now() - startTime) / 1000 > maxWait) {
console.log(`Deployment timed out after ${maxWait} seconds`);
process.exit(1);
}
try {
const sandbox = await SandboxInstance.get(SANDBOX_NAME);
const status = sandbox.status;
if (!status) {
console.log("Warning: Could not get status, retrying...");
await new Promise((r) => setTimeout(r, 3000));
continue;
}
if (status !== lastStatus) {
console.log(` Status: ${status}`);
lastStatus = status;
}
if (status === "DEPLOYED") {
const image = sandbox.spec?.runtime?.image;
console.log(
`\nDeployment complete!\nSandbox: ${SANDBOX_NAME}\nImage: ${image}\n`
);
return;
} else if (status === "FAILED") {
console.log(
`\nDeployment failed\n\nCheck build logs with:\n bl logs sandbox ${SANDBOX_NAME}`
);
process.exit(1);
} else if (
status === "DEACTIVATED" ||
status === "DEACTIVATING" ||
status === "DELETING"
) {
console.log(`\nUnexpected status: ${status}`);
process.exit(1);
}
} catch {
console.log("Warning: Could not get status, retrying...");
}
await new Promise((r) => setTimeout(r, 3000));
}
}
async function main() {
try {
if (!existsSync(SOURCE_DIR) || !statSync(SOURCE_DIR).isDirectory()) {
console.error(`Error: ${SOURCE_DIR} directory not found`);
process.exit(1);
}
console.log(`Creating ZIP archive from ${SOURCE_DIR}...`);
execSync(`zip -r ${resolve(ZIP_FILE)} .`, {
cwd: resolve(SOURCE_DIR),
stdio: "pipe",
});
const fileSize = statSync(ZIP_FILE).size;
console.log(`ZIP archive created: ${ZIP_FILE} (${fileSize} bytes)\n`);
console.log(`[1/4] Creating sandbox '${SANDBOX_NAME}'...`);
await settings.authenticate();
const createResponse = await fetch(
`${settings.baseUrl}/sandboxes?upload=true`,
{
method: "POST",
headers: {
...settings.headers,
"Content-Type": "application/json",
},
body: JSON.stringify({
metadata: { name: SANDBOX_NAME },
spec: {
runtime: { memory: 2048 },
region: "us-pdx-1",
},
}),
}
);
if (createResponse.status !== 200 && createResponse.status !== 201) {
console.error(
`Failed to create sandbox (HTTP ${createResponse.status})`
);
console.error(await createResponse.text());
process.exit(1);
}
const uploadUrl = createResponse.headers.get("x-blaxel-upload-url");
if (!uploadUrl) {
console.error("No upload URL received in response headers");
process.exit(1);
}
console.log("Sandbox created\n");
console.log(`[2/4] Uploading code archive (${fileSize} bytes)...`);
const uploadResponse = await fetch(uploadUrl, {
method: "PUT",
headers: { "Content-Type": "application/zip" },
body: readFileSync(ZIP_FILE),
});
if (uploadResponse.status !== 200) {
console.error(`Upload failed with status ${uploadResponse.status}`);
process.exit(1);
}
console.log("Upload completed\n");
console.log("[3/4] Monitoring deployment status...");
await poll();
console.log(`[4/4] How to use your sandbox:\n`);
console.log(`Run a command:\n bl run sandbox ${SANDBOX_NAME}\n`);
console.log(`Get sandbox details:\n bl get sandbox ${SANDBOX_NAME}\n`);
console.log(`View logs:\n bl logs sandbox ${SANDBOX_NAME}\n`);
console.log(`Delete sandbox:\n bl delete sandbox ${SANDBOX_NAME}\n`);
} finally {
if (existsSync(ZIP_FILE)) {
unlinkSync(ZIP_FILE);
console.log("Cleaned up temporary files");
}
}
}
main();
```
```python Python expandable theme={null}
import asyncio
import sys
import time
import zipfile
from pathlib import Path
import httpx
from blaxel.core import SandboxInstance, client
SANDBOX_NAME = f"my-custom-sandbox-{int(time.time())}"
SOURCE_DIR = "mytemplate"
ZIP_FILE = "mytemplate.zip"
try:
if not Path(SOURCE_DIR).is_dir():
print(f"Error: {SOURCE_DIR} directory not found")
sys.exit(1)
print(f"Creating ZIP archive from {SOURCE_DIR}...")
with zipfile.ZipFile(ZIP_FILE, "w", zipfile.ZIP_DEFLATED) as zf:
for file in Path(SOURCE_DIR).rglob("*"):
if file.is_file():
zf.write(file, file.relative_to(SOURCE_DIR))
file_size = Path(ZIP_FILE).stat().st_size
print(f"ZIP archive created: {ZIP_FILE} ({file_size} bytes)\n")
print(f"[1/4] Creating sandbox '{SANDBOX_NAME}'...")
httpx_client = client.get_httpx_client()
response = httpx_client.post(
"/sandboxes",
params={"upload": "true"},
json={
"metadata": {"name": SANDBOX_NAME},
"spec": {
"runtime": {"memory": 2048},
"region": "us-pdx-1",
},
},
timeout=30,
)
if response.status_code not in (200, 201):
print(f"Failed to create sandbox (HTTP {response.status_code})")
print(response.text)
sys.exit(1)
upload_url = response.headers.get("x-blaxel-upload-url")
if not upload_url:
print("No upload URL received in response headers")
sys.exit(1)
print("Sandbox created\n")
print(f"[2/4] Uploading code archive ({file_size} bytes)...")
with open(ZIP_FILE, "rb") as f:
upload_response = httpx.put(
upload_url,
content=f.read(),
headers={"Content-Type": "application/zip"},
timeout=300,
)
if upload_response.status_code != 200:
print(f"Upload failed with status {upload_response.status_code}")
sys.exit(1)
print("Upload completed\n")
print("[3/4] Monitoring deployment status...")
async def poll():
max_wait = 900
start_time = time.time()
last_status = None
while True:
if time.time() - start_time > max_wait:
print(f"Deployment timed out after {max_wait} seconds")
sys.exit(1)
try:
sandbox = await SandboxInstance.get(SANDBOX_NAME)
status = sandbox.status
except Exception:
print("Warning: Could not get status, retrying...")
await asyncio.sleep(3)
continue
if status is None:
print("Warning: Could not get status, retrying...")
await asyncio.sleep(3)
continue
if status != last_status:
print(f" Status: {status}")
last_status = status
if status == "DEPLOYED":
image = sandbox.spec.runtime.image if sandbox.spec and sandbox.spec.runtime else None
print(f"\nDeployment complete!\nSandbox: {SANDBOX_NAME}\nImage: {image}\n")
return
elif status == "FAILED":
print(f"\nDeployment failed\n\nCheck build logs with:\n bl logs sandbox {SANDBOX_NAME}")
sys.exit(1)
elif status in ("DEACTIVATED", "DEACTIVATING", "DELETING"):
print(f"\nUnexpected status: {status}")
sys.exit(1)
await asyncio.sleep(3)
asyncio.run(poll())
print(f"[4/4] How to use your sandbox:\n")
print(f"Run a command:\n bl run sandbox {SANDBOX_NAME}\n")
print(f"Get sandbox details:\n bl get sandbox {SANDBOX_NAME}\n")
print(f"View logs:\n bl logs sandbox {SANDBOX_NAME}\n")
print(f"Delete sandbox:\n bl delete sandbox {SANDBOX_NAME}\n")
finally:
if Path(ZIP_FILE).exists():
Path(ZIP_FILE).unlink()
print("Cleaned up temporary files")
```
The script creates a ZIP archive of a source directory containing your Dockerfile and related code, uploads it to deploy a sandbox on Blaxel, monitors the deployment status until it reaches a terminal state, and cleans up temporary files on exit.
To use this script, first export your API key and credentials as below:
```bash theme={null}
export BL_API_KEY=YOUR-API-KEY
export BL_WORKSPACE=YOUR-WORKSPACE-NAME
```
Create a sandbox directory named `mytemplate` with your custom Dockerfile and entrypoint script:
```bash theme={null}
mkdir -p mytemplate
```
Save and run the script as `deploy-sandbox.sh` (Shell), `index.ts` (TypeScript) or `main.py` (Python):
```bash Shell theme={null}
chmod +x deploy-sandbox.sh
./deploy-sandbox.sh
```
```typescript TypeScript theme={null}
npx tsx index.ts
```
```python Python theme={null}
python main.py
```
### Use a custom template
Once a template is successfully pushed, you can spawn new sandboxes instantly by using its image ID.
Use the following command to retrieve the image ID of the most recently pushed template:
```bash theme={null}
# Retrieve your IMAGE_ID
bl get image sandbox/mytemplate --latest
```
```typescript TypeScript theme={null}
import { SandboxInstance } from "@blaxel/core";
// Create a new sandbox
const sandbox = await SandboxInstance.create({
name: "my-sandbox-from-template",
image: "IMAGE_ID",
memory: 4096,
region: "us-pdx-1",
ports: [{ name: "nextjs-dev", target: 3000 }, { name: "another-api", target: 8888 }]
});
```
```python Python theme={null}
from blaxel.core import SandboxInstance
sandbox = await SandboxInstance.create({
"name": "my-sandbox-from-template",
"image": "IMAGE_ID",
"memory": 4096,
"region": "us-pdx-1",
"ports": [{ "name": "nextjs-dev", "target": 3000 }, { "name": "another-api", "target": 8888 }]
})
```
### Update a custom template
To update an existing custom template:
1. Modify your Dockerfile or configuration
2. Rebuild locally to test changes
3. Push the new version:
```bash theme={null}
bl push
```
The new revision becomes available while the old one remains accessible.
## Best practices
### 1. Optimize for cold starts
* Use smaller base images (Alpine when possible)
* Minimize layers in Dockerfile
* Pre-install only essential packages
* Defer optional installations to runtime
### 2. Cache dependencies
```docker theme={null}
# Good: Cache package installations
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Bad: Invalidates cache on any file change
COPY . .
RUN npm install
```
### 3. Security considerations
* Don’t include secrets in templates
* Use Blaxel’s secrets management for sensitive data
* Keep base images updated
* Scan for vulnerabilities regularly
### 4. Resource optimization
Choose appropriate resources for your use case:
| Use Case | Memory |
| --------------------- | ------ |
| Light development | 2GB |
| Small web application | 4GB |
| Full-stack web | 8GB |
# Volumes
Source: https://docs.blaxel.ai/Sandboxes/Volumes
Attach Blaxel Volumes to sandboxes for persistent storage that survives sandbox destruction and recreation, with mount path configuration.
Blaxel Volumes provide persistent storage that survives sandbox destruction and recreation.
For full documentation on creating, deleting, resizing, and listing volumes, see the [Volumes overview](/Volumes/Overview).
## Attach a volume to a sandbox
At this time, you can only attach one volume to a sandbox. A volume can also only be attached to one sandbox at a time.
To use a volume, attach it **when you create a sandbox** by passing an array of volume configurations to the `volumes` property.
Each configuration must include the `name` of the volume to attach and the `mountPath` where it will be accessible inside the sandbox's filesystem. The mount path **will override the existing content** of a directory.
```typescript TypeScript theme={null}
import { SandboxInstance } from "@blaxel/core";
const sandbox = await SandboxInstance.create({
name: "my-sandbox",
image: "blaxel/nextjs:latest",
memory: 4096,
region: "us-pdx-1",
volumes: [{
name: "my-volume", // Must match the name of the created volume
mountPath: "/app", // Directory inside the sandbox
readOnly: false // Set to true to prevent writes
}],
// ... other sandbox properties
});
```
```python Python theme={null}
from blaxel.core import SandboxInstance
sandbox = await SandboxInstance.create({
"name": "my-sandbox",
"image": "blaxel/nextjs:latest",
"memory": 4096,
"region": "us-pdx-1",
"volumes": [{
"name": "my-volume", # Must match the name of the created volume
"mount_path": "/app", # Directory inside the sandbox
"read_only": False # Set to True to prevent writes
}],
# ... other sandbox properties
})
```
Any files written to the `/app` directory within this sandbox will be stored on `my-volume` and will persist even if this sandbox is deleted.
At this time, you cannot detach a volume from a sandbox.
# Best practices
Source: https://docs.blaxel.ai/Sandboxes/best-practices
Recommended practices for sandbox lifecycle management, scale-to-zero behavior, persistent storage, agent architecture, and MCP server design.
Unlike traditional sandbox providers, Blaxel Sandboxes automatically scale up and down at near-instant speeds, and persist forever in standby. With that in mind, here are some recommended practices to help you make the most of the platform's features.
## Use the sandbox lifecycle strategically
**Treat sandboxes as persistent computers**, rather than thinking of them as ephemeral runtimes that must be wiped clean after every interaction. Just as your laptop isn't reformatted every time you close the lid, you shouldn't destroy a sandbox's state simply because a session ended. Blaxel Sandboxes are designed to maintain persistent storage and system state, allowing agents to retain context, shell history, and installed dependencies indefinitely.
The most efficient way to manage lifecycle is to let the sandbox suspend automatically when idle, rather than explicitly destroying it. Auto-suspend kicks in after 15 seconds of inactivity. Resuming a standby sandbox is orders of magnitude faster than cold-booting a new box and re-running setup scripts (like `npm install` or `apt-get`). By relying on suspension, you ensure that when an agent returns (whether in ten minutes or ten weeks) the environment is restored exactly as it was left, enabling a seamless instant resume experience that saves both time and compute costs.
The definition of a session is at your discretion. It's a tradeoff between instant resume times from standby mode (\~25ms) and paying for the [standby snapshot storage cost](https://blaxel.ai/pricing). As a rule of thumb, most customers keep sandboxes in standby for 7-60 days (see the section on TTLs below).
## Use volumes for data durability
The sandbox's root filesystem is like your laptop's local SSD. While it's very fast and retains data during suspension, it isn't strictly guaranteed against long-term risks (e.g. if you spill coffee on your computer). [**For data that must survive indefinitely, use Volumes.**](./Volumes)
Think of Volumes as redundant external hard drives. They are decoupled from the compute hardware, meaning they exist independently of the sandbox. Blaxel ensures high redundancy on these volumes, making them the correct choice for critical user data, databases, or project files that need to be accessed by multiple different agents.
However, relying solely on volumes for state management comes with a trade-off. If you delete a sandbox and rely on re-attaching a volume to a new one, you lose the "instant resume" benefit of the previous section. You will incur a \~400–600ms cold-boot penalty to create the new sandbox and will need to restart your processes. Use volumes for safety, but rely on sandbox suspension for speed.
## Automate cleanup with TTLs
While we advocate for treating sandboxes as perpetual computers, not every machine needs to last forever. In production, you will inevitably accumulate a long tail of sandboxes that will never be called upon again (e.g., completed tasks, abandoned user sessions). To prevent digital clutter and unnecessary storage usage, you need automated garbage collection.
Use [lifecycle policies and time-to-live (TTL)](../Sandboxes/Expiration) settings to define exactly when a computer should be retired. You can configure this based on idle duration (*e.g., delete if it hasn't been active for 7 days*) or absolute maximum age (*e.g., delete after 30 days*).
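The two retirement criteria described above (idle duration and absolute age) amount to a simple decision rule. Here is an illustrative sketch in Python; the actual policy evaluation happens server-side on Blaxel, and the field names and defaults below are hypothetical:

```python
from datetime import datetime, timedelta

def should_expire(
    created_at: datetime,
    last_active_at: datetime,
    now: datetime,
    max_idle: timedelta = timedelta(days=7),   # e.g. retire if idle for 7 days
    max_age: timedelta = timedelta(days=30),   # e.g. retire after 30 days total
) -> bool:
    # Idle-based TTL: the sandbox hasn't been active recently enough
    idle_too_long = now - last_active_at > max_idle
    # Absolute-age TTL: the sandbox has simply existed too long
    too_old = now - created_at > max_age
    return idle_too_long or too_old
```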
In [quota tiers 0 and 1](https://app.blaxel.ai/account/quotas), a maximum TTL is enforced on all sandboxes. On quota tier 2 and above, no expiration policy is set by default.
# Access tokens
Source: https://docs.blaxel.ai/Security/Access-tokens
Interact with Blaxel by API or CLI using access tokens.
User access tokens can be used to authenticate to Blaxel by API or CLI. They apply to both [users](/Security/Workspace-access-control) and [service accounts](/Security/Service-accounts), and are generated through a variety of methods, documented below.
## Overview of authentication methods on Blaxel
Blaxel employs two main authentication paradigms: **short-lived tokens** (OAuth) and **long-lived tokens** (API keys).
Long-lived tokens are easier to use but less secure, as they can remain valid for anywhere from several days to indefinitely. They are generated as **API keys** from the Blaxel console.
**OAuth tokens** are recommended for security reasons, as they are short-lived and valid for only 2 hours. They are generated through [OAuth 2.0](https://oauth.net/2/) authentication endpoints.
## API keys
Long-lived authentication tokens are called **API keys** on Blaxel. Their validity duration can be infinite.
API keys can be used in the Blaxel APIs, CLI and SDK.
**To authenticate in the SDKs**, use the following two variables **`BL_WORKSPACE`** and **`BL_API_KEY`**, which can be set in one of these sources:
1. your **`.env`** file
2. your machine's environment variables
3. the local configuration file created when you log in through the CLI (see below)
**To authenticate in the CLI**, use this command:
```bash theme={null}
export BL_API_KEY=YOUR-API-KEY
bl login YOUR-WORKSPACE
```
**To authenticate in the APIs**, pass the API key as a bearer token in the `Authorization` or `X-Blaxel-Authorization` header in any call to the Blaxel APIs. For example, to list models:
```bash theme={null}
curl 'https://api.blaxel.ai/v0/models' \
-H 'Accept: application/json, text/plain, */*' \
-H 'Authorization: Bearer YOUR-API-KEY'
```
### Use API keys in CI
#### GitHub Action
The easiest way to integrate the Blaxel CLI into a CI pipeline is with the official [Blaxel GitHub Action](https://github.com/blaxel-ai/bl-action).
To use this, add a new secret to your GitHub repository named `BL_API_KEY` and set it to the value of your Blaxel API key. You can then use the GitHub Action in a GitHub workflow, as shown below:
```yaml theme={null}
name: Deploy
on:
push:
branches:
- "main"
workflow_dispatch:
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Deploy to Blaxel
uses: blaxel-ai/bl-action@v1
with:
workspace: "YOUR_WORKSPACE"
apikey: ${{ secrets.BL_API_KEY }}
- name: Deploy to Blaxel
run: bl deploy
```
#### Manual configuration
If you prefer to set up authentication manually or are using a different CI platform, follow these steps:
* Set the API key as an environment variable in your CI environment:
```shell theme={null}
export BL_API_KEY=YOUR-API-KEY
```
* Add steps to [Download and install the Blaxel CLI](/cli-reference/introduction#install) in your CI environment and log in to your Blaxel workspace:
```shell theme={null}
bl login YOUR-WORKSPACE
```
You can now use the Blaxel CLI to interact with your workspace, including creating sandboxes, executing batch jobs, or deploying agents or MCP servers.
For security, you should always create a separate API key for your CI environment. [Create an API key in the Blaxel Console](https://app.blaxel.ai/profile/security).
### Use API keys via middleware
If you wish to call the Blaxel API directly from a client Web application, you would have to include your Blaxel API key in client-side code. This creates a serious security vulnerability, as the key will become visible in network requests or in the source code on the client end of the connection.
The solution is to add authentication middleware between your client application and the Blaxel API. The middleware serves as a secure "translator", which intercepts requests from the client application, injects your Blaxel API key, and then forwards them to Blaxel.
This solution requires a catch-all route in the client application (e.g. `/api/blaxel`) that forwards all SDK requests to Blaxel's API. The route reads the Blaxel workspace and API key securely from host environment variables (e.g. `BL_WORKSPACE` and `BL_API_KEY`) and injects it into each request by adding `Authorization` and `X-Blaxel-Workspace` headers.
Here is a sample implementation of this catch-all route implemented in Next.js:
The following code is illustrative and not intended for production use. It has no authentication, allowing access to anyone who knows the endpoint URL. You must add authentication (session validation, API keys, rate limiting) before deploying it in production.
```typescript theme={null}
import { NextRequest, NextResponse } from 'next/server';
async function handleRequest(request: NextRequest, { params }: { params: Promise<{ slug: string[] }> }) {
const { slug } = await params; // params is a Promise in recent Next.js versions
const path = slug.slice(1).join('/');
const workspace = process.env.BL_WORKSPACE ?? '';
const apiKey = process.env.BL_API_KEY; // Securely stored in environment variables
const headers = new Headers(request.headers);
headers.set('Authorization', `Bearer ${apiKey}`);
headers.set('X-Blaxel-Workspace', workspace);
const fetchOptions: RequestInit = {
method: request.method,
headers,
body: ['GET', 'HEAD'].includes(request.method) ? undefined : await request.text(),
};
const response = await fetch(`https://api.blaxel.ai/v0/${path}`, fetchOptions);
const responseHeaders = new Headers(response.headers);
responseHeaders.delete('content-encoding');
responseHeaders.delete('content-length');
return new NextResponse(response.body, {
status: response.status,
headers: responseHeaders,
});
}
export { handleRequest as DELETE, handleRequest as GET, handleRequest as PATCH, handleRequest as POST, handleRequest as PUT };
```
Now, within the application, the Blaxel SDK should be configured at init-time to use the catch-all route, as shown below. With this change, SDK requests are routed through the middleware rather than directly transmitted to the Blaxel API.
```typescript theme={null}
import blaxel from "@blaxel/core";
blaxel.initialize({
proxy: 'http://localhost:3000/api/blaxel', // Your catch-all route
workspace: 'YOUR_WORKSPACE',
});
// Now use the SDK as normal
const sandbox = await blaxel.SandboxInstance.create({
// ...
});
```
Under this approach, your Blaxel API key is never exposed to the calling client, ensuring security. SDK requests continue to work as before, except that now they are routed through the catch-all route rather than directly to the Blaxel API.
### Manage API keys
You can create private API keys for your Blaxel account to authenticate directly when using the Blaxel APIs or CLI. Your [permissions in each workspace](/Security/Workspace-access-control) will be the ones given to your account in each of them.
API keys can be managed from the Blaxel console in **Profile > Security**.
For production-grade access to workspace resources that should be independent of individual users, it's strongly recommended to use [service accounts](/Security/Service-accounts) in the workspace.
## OAuth 2.0 tokens
These short-lived tokens are based on the [OAuth 2.0](https://oauth.net/2/) authentication protocol, and have a validity period of **2 hours**.
### Use OAuth tokens in the CLI
Use the `bl login` command.
You will then be redirected to the Blaxel console to finish logging in. Sign in using your Blaxel account if you aren’t already.
Once this is done, return to your terminal: the login will be finalized and you will then be able to run CLI commands.
Your [permissions in each workspace](/Security/Workspace-access-control) will be the ones given to your account in each of them.
### Use OAuth tokens in the API via service accounts
[Service accounts](/Security/Service-accounts) can retrieve a short-lived token via the *OAuth client credentials grant type* in the authentication API, using their *client ID* and *client secret*. These two keys are generated automatically when creating a service account. Make sure to copy the secret at its generation as you will never be able to see it again after.
Service accounts can also connect to the API using a long-lived API key, as detailed in the [API keys section above](/Security/Access-tokens).
To retrieve the token, pass the service account’s *client ID* and *client secret* in the header to the `/oauth/token` endpoint.
```bash theme={null}
curl --request POST \
--url https://api.blaxel.ai/v0/oauth/token \
--header 'Authorization: Basic base64(CLIENT_ID:CLIENT_SECRET)' \
--header 'Content-Type: application/json' \
--data '{
"grant_type":"client_credentials"
}'
```
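The `Basic` credential above is simply the base64 encoding of the `CLIENT_ID:CLIENT_SECRET` pair, per [RFC 7617](https://datatracker.ietf.org/doc/html/rfc7617). For example, you can build the header value in Python like this:

```python
import base64

def basic_auth_header(client_id: str, client_secret: str) -> str:
    # RFC 7617: base64-encode the "client_id:client_secret" pair
    credentials = f"{client_id}:{client_secret}".encode("utf-8")
    return "Basic " + base64.b64encode(credentials).decode("ascii")

print(basic_auth_header("CLIENT_ID", "CLIENT_SECRET"))
```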
Alternatively, you can pass the *client ID* and *client secret* in the body:
```bash theme={null}
curl --request POST \
--url https://api.blaxel.ai/v0/oauth/token \
--header 'Content-Type: application/json' \
--data '{
"grant_type":"client_credentials",
"client_id": CLIENT_ID,
"client_secret": CLIENT_SECRET
}'
```
You will retrieve a bearer token valid for 2 hours, which can then be passed in any call to the Blaxel APIs through either of the following headers: `Authorization` or `X-Blaxel-Authorization`, as such:
```bash theme={null}
curl 'https://api.blaxel.ai/v0/models' \
-H 'Accept: application/json, text/plain, */*' \
-H 'X-Blaxel-Authorization: Bearer YOUR_TOKEN'
```
### (Advanced) Use OAuth tokens in the APIs
This section assumes you are a developer experienced with OAuth 2.0. For a simpler guide on using short-lived tokens with the Blaxel APIs, read the [section on authenticating service accounts](/Security/Access-tokens).
Blaxel implements all **grant types** in the OAuth 2.0 convention, including [client credentials](https://www.oauth.com/oauth2-servers/access-tokens/client-credentials/), [authorization code](https://www.oauth.com/oauth2-servers/access-tokens/authorization-code-request/), and [refresh tokens](https://www.oauth.com/oauth2-servers/access-tokens/refreshing-access-tokens/). If you are a developer experienced with OAuth 2.0, you can find the **well-known configuration endpoint** at the following URL:
```text theme={null}
https://api.blaxel.ai/v0/.well-known/openid-configuration
```
Through the endpoints discoverable in the aforementioned URL, you can implement any authentication flow in your application, and use the retrieved tokens in any of the following headers: `Authorization` or `X-Blaxel-Authorization`.
Alternatively, you can retrieve a token using the SDK:
```tsx theme={null}
import { settings } from "@blaxel/core"
await settings.authenticate() // Refreshes the token only if needed
console.log(settings.token)
```
*Note: when using the Blaxel SDK to operate Blaxel (e.g. to create an agent), token retrieval and refresh [is done automatically based on either being authenticated with CLI or using environment variables](../sdk-reference/introduction).*
# Data collection and privacy
Source: https://docs.blaxel.ai/Security/Data-collection-and-privacy
Understand what usage data Blaxel CLI and SDKs collect, and how to opt out.
The Blaxel SDKs and Blaxel CLI have two separate data collection systems.
## Error tracking
When enabled, the SDKs and CLI capture errors and exceptions that originate from the SDK/CLI itself - not from application code. Each SDK/CLI has a lightweight, custom Sentry client that collects and tracks the following error-related metadata:
* Error type and message
* Stack trace
* Environment
* Release/version
* Workspace
* Commit hash
* OS/runtime context
* Breadcrumbs
* Runtime-specific context (e.g. goroutine info in Go)
No request data, API keys, file contents, IP addresses or other user or application data is collected.
### Opt in
Error tracking is controlled by a combination of the `DO_NOT_TRACK` environment variable and the `tracking` parameter, read from the following locations (in priority order):
1. `DO_NOT_TRACK` environment variable
2. `tracking:` field in `~/.blaxel/config.yaml`
3. Pre-set default value
* `false` starting from SDK v0.2.46 (Python), v0.2.76 (TypeScript), and v0.16.1 (Go)
* `true` in all earlier versions
The Blaxel CLI prompts the user for tracking consent on first interactive use and during installation. The response is recorded in `~/.blaxel/config.yaml` and affects error tracking in the SDKs and the CLI locally.
Consenting when prompted sets `tracking: true` in `~/.blaxel/config.yaml`, which is read by both the CLI and SDKs and results in error tracking becoming enabled for local sessions.
However, when deploying on Blaxel, the platform automatically injects `DO_NOT_TRACK=1`, which disables error tracking regardless of the value set in `~/.blaxel/config.yaml`. You can override this by explicitly setting `DO_NOT_TRACK=0` in your environment configuration (see below). Similarly, in CI environments or in remote environments where the Blaxel CLI is not used, the `tracking:` field is never set to `true` and error tracking is therefore disabled by default.
#### `DO_NOT_TRACK` environment variable
The `DO_NOT_TRACK` environment variable follows the [Console Do Not Track](https://consoledonottrack.com/) convention. To explicitly enable tracking, use the command below in your shell environment:
```bash theme={null}
export DO_NOT_TRACK=0 # Enable tracking
```
#### Blaxel configuration file
You can also disable tracking by adding the following parameter to your `~/.blaxel/config.yaml` global configuration file:
```yaml theme={null}
tracking: false
```
Setting `BL_ENABLE_OPENTELEMETRY=0` has no effect on error tracking.
## OpenTelemetry tracing and logging
When you deploy from the Blaxel Console, Blaxel CLI or by using the Blaxel SDK to wrap your code for deployment, Blaxel automatically instruments your requests with logging and tracing. The resulting logs and traces are provided to users for real-time monitoring and debugging of their workloads.
The data collected consists of:
* Metrics: Aggregated data about workload executions
* Logs: Timestamped logs for workloads
* Traces: Data on inputs, outputs and execution steps
Blaxel only collects and saves traces for a sampled 10% of all your executions.
[Read more about observability](/Observability/Overview).
### Opt in locally
This feature is controlled by the `BL_ENABLE_OPENTELEMETRY` environment variable, which effectively defaults to `false` locally and to `true` in deployed environments. To explicitly enable it locally, use the command below in your shell environment:
```bash theme={null}
export BL_ENABLE_OPENTELEMETRY=true # Enable OpenTelemetry logging
```
Setting `DO_NOT_TRACK=1` has no effect on OpenTelemetry.
# Domain Capture
Source: https://docs.blaxel.ai/Security/Domain-capture
Verify your company's email domain and control which login methods your team can use.
## Overview
Domain capture lets you claim ownership of your organization's email domain (e.g. `acme.com`) and then decide exactly how users from that domain can sign in. You can restrict logins to specific methods (Google, SAML SSO, or passwordless email) and automatically add new users to your workspaces when they first sign in.
This feature is available to all account administrators at no extra cost.
SAML SSO and Directory Sync are enterprise features built on top of domain verification. See [SSO & Directory Sync](/Security/SSO-Directory-sync) if you need those.
## Prerequisites
* You must be an account administrator.
* You must have access to your domain's DNS settings to add a TXT record.
## Step 1: Add a domain
Go to **Account Settings** → **Identity & Access**.
Type your company's email domain (e.g. `acme.com`) in the input field and click **Add domain**.
The domain appears in the list with a **Pending** status. It will remain inactive until you complete DNS verification.
## Step 2: Verify via DNS TXT record
To prove you own the domain, add a DNS TXT record provided by Blaxel.
Click **Show DNS** on the pending domain row.
You'll see two values:
* **Name**: the hostname to add the record to (e.g. `_blaxel-sso-verification.yourdomain.com`)
* **Value**: the verification string starting with `blaxel-sso-verify=...`
Click the copy icon next to each value.
Log in to your DNS provider (Cloudflare, Route 53, GoDaddy, etc.) and add a new TXT record with the name and value you copied.
DNS propagation typically takes a few minutes, but can take up to 48 hours in rare cases.
Return to **Account Settings** → **Identity & Access** and click **Verify** on the domain row.
Once verified, the domain status changes to **Verified** (green checkmark) and additional options appear below it.
If verification fails, double-check that the TXT record name and value are entered exactly as shown. Some DNS providers automatically append the root domain. Confirm the full record name in your DNS provider's interface.
## Step 3: Set allowed auth methods
After your domain is verified, you can restrict which login methods users from that domain can use.
Click a method badge to toggle it on or off:
| Method | Description |
| ------------------------ | ---------------------------------------------------------------- |
| **Google** | Sign in with a Google account |
| **SSO (SAML)** | Sign in through your SAML identity provider (requires SSO setup) |
| **Email (passwordless)** | Sign in via email magic link or OTP |
If no methods are selected, there is no restriction and users can sign in with any available method.
If SAML SSO is configured and active on your account, **SSO (SAML)** becomes the only allowed method automatically and the other toggles are locked. See [SSO & Directory Sync](/Security/SSO-Directory-sync).
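The resolution logic described above can be sketched as a small model (a hypothetical helper for illustration, not part of the Blaxel SDK): an active SAML connection wins, an empty selection means no restriction, otherwise only the selected methods are allowed.

```typescript theme={null}
// Hypothetical model of allowed-auth-method resolution; for illustration only.
type AuthMethod = "google" | "saml" | "email";

const ALL_METHODS: AuthMethod[] = ["google", "saml", "email"];

function allowedMethods(selected: AuthMethod[], samlActive: boolean): AuthMethod[] {
  // An active SAML SSO connection locks the domain to SSO only.
  if (samlActive) return ["saml"];
  // No selected methods means no restriction.
  if (selected.length === 0) return ALL_METHODS;
  return selected;
}
```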
## Step 4: Configure auto-join workspaces
You can automatically add new users to one or more workspaces the first time they sign in with your verified domain.
Under **Auto-join**, toggle on any workspace to enable automatic membership for users from that domain.
Users who are already logged in when domain capture is toggled on will not be automatically added to the workspace until they log out and log back in.
If Directory Sync is also active, workspace membership may be managed by two systems simultaneously. Prefer using Directory Sync group mappings to control workspace membership when Directory Sync is configured.
## Removing a domain
Click the trash icon on any domain row to remove it. You'll be asked to confirm before deletion.
A domain that is actively linked to a SAML SSO connection cannot be deleted until the SSO connection is removed first.
## Related
* [SSO & Directory Sync](/Security/SSO-Directory-sync)
* [Workspace Access Control](/Security/Workspace-access-control)
# Cross-workspace image sharing
Source: https://docs.blaxel.ai/Security/Image-sharing
Share resource images between workspaces without duplicating storage, using metadata-only copies.
Blaxel allows workspace admins to share resource images (e.g. sandbox images) across workspaces within the same [account](/Security/Workspace-access-control). When you share an image, only the metadata record is copied to the target workspace. The underlying storage remains in the source workspace, avoiding duplication and keeping billing simple.
This is useful when you operate multiple workspaces (e.g. development, staging, production) and want to reuse the same custom images across them without rebuilding or re-pushing.
## How it works
When you share an image from workspace A to workspace B:
1. The image metadata record is copied to workspace B, pointing to the same underlying data in workspace A.
2. Workspace B can use the shared image to deploy agents, MCP servers, batch jobs, and sandboxes, just like a locally-owned image.
3. Storage billing stays with workspace A (the source).
4. When new tags are pushed to the source image, they are automatically propagated to shared workspaces.
Shared images in the consuming workspace display a `sourceWorkspace` field indicating where the image originates from. In the Blaxel Console, shared images are marked with a badge showing the source workspace name.
## Prerequisites
* Both workspaces must belong to the **same account**.
* You must be an **admin** of the source workspace.
* You must also be an **admin** of the target workspace.
* The image must be **locally owned** in the source workspace. You cannot re-share an image that was already shared to you from another workspace.
## Share an image
### Blaxel Console
1. Navigate to the **Images** page in the source workspace.
2. Click the actions menu (three dots) on the image you want to share.
3. Select **Share**.
4. In the dialog, select the target workspace from the dropdown (only workspaces in the same account are listed).
5. Click **Share** to confirm.
The shared image will immediately appear in the target workspace's image list.
### Blaxel CLI
```bash theme={null}
bl share image <resource-type>/<image-name> --workspace <target-workspace>
```
For example, to share a sandbox image called `my-template` with the `production` workspace:
```bash theme={null}
bl share image sandbox/my-template --workspace production
```
You cannot share a specific tag. Sharing applies to the entire image (all tags). Tag-qualified references like `sandbox/my-template:v1.0` are not accepted.
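Because tag-qualified references are rejected, it can help to validate a reference before calling the CLI or API. A minimal sketch (hypothetical helper, not part of the Blaxel tooling):

```typescript theme={null}
// Hypothetical validator for image references of the form
// "<resource-type>/<image-name>", rejecting tag-qualified refs like ":v1.0".
function validateImageRef(ref: string): { resourceType: string; imageName: string } {
  if (ref.includes(":")) {
    throw new Error(`Tag-qualified refs are not accepted for sharing: ${ref}`);
  }
  const parts = ref.split("/");
  if (parts.length !== 2 || !parts[0] || !parts[1]) {
    throw new Error(`Expected "<resource-type>/<image-name>", got: ${ref}`);
  }
  return { resourceType: parts[0], imageName: parts[1] };
}
```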
### Management API
```bash theme={null}
curl -X POST "https://api.blaxel.ai/v0/images/{resourceType}/{imageName}/share" \
-H "Authorization: Bearer $BL_API_KEY" \
-H "X-Blaxel-Workspace: $BL_WORKSPACE" \
-H "Content-Type: application/json" \
-d '{"targetWorkspace": "production"}'
```
## List shared workspaces
You can see which workspaces an image is currently shared with.
### Blaxel Console
On the image detail page, shared workspaces are displayed with the ability to revoke sharing for each one.
### Management API
```bash theme={null}
curl -X GET "https://api.blaxel.ai/v0/images/{resourceType}/{imageName}/share" \
-H "Authorization: Bearer $BL_API_KEY" \
-H "X-Blaxel-Workspace: $BL_WORKSPACE"
```
## Unshare an image
Unsharing removes the metadata record from the target workspace. The image data in the source workspace is not affected.
After unsharing, any deployments in the target workspace that reference the shared image will fail on their next restart or scale-up. Make sure no active deployments depend on the image before unsharing.
### Blaxel Console
1. Navigate to the **Images** page in the source workspace.
2. Open the image detail page.
3. In the shared workspaces list, click **Unshare** next to the target workspace you want to revoke.
### Blaxel CLI
```bash theme={null}
bl unshare image <resource-type>/<image-name> --workspace <target-workspace>
```
For example:
```bash theme={null}
bl unshare image sandbox/my-template --workspace production
```
### Management API
```bash theme={null}
curl -X DELETE "https://api.blaxel.ai/v0/images/{resourceType}/{imageName}/share/{targetWorkspace}" \
-H "Authorization: Bearer $BL_API_KEY" \
-H "X-Blaxel-Workspace: $BL_WORKSPACE"
```
## Billing
Storage billing is tied to the source workspace. Metering and costs remain with the workspace that originally pushed the image, regardless of how many workspaces it is shared with.
The consuming workspace pays nothing for image storage of shared images.
## Constraints and limitations
* **Same account only**: Images can only be shared between workspaces under the same account.
* **No re-sharing**: A shared image cannot be re-shared from the consuming workspace to a third workspace. Only the original owner can share.
* **Deletion protection**: You cannot delete an image (or individual tags) while it is shared with other workspaces. You must unshare it from all target workspaces first.
* **Whole image only**: Sharing applies to the entire image with all its tags. You cannot share individual tags.
* **Admin-only**: Both sharing and unsharing require admin permissions on both source and target workspaces.
# Usage and quotas
Source: https://docs.blaxel.ai/Security/Quotas
Monitor usage and manage quotas associated with your account.
The Blaxel Console provides detailed analytics for you to monitor account usage. You can also see your account's current quotas, and request an increase in quotas if required.
## Analytics
Blaxel provides detailed cost and usage analytics at account level. These metrics are available for the current billing period, last 30 days, last 7 days, and last 24 hours.
To view usage analytics for your account:
* Click the workspace name at the bottom left of the Blaxel Console.
* Click **Account**.
* Click the **Subscription** tab.
* Navigate to the **Usage analytics** section of the page.
The usage analytics section provides details on:
* Deployment image and snapshot storage usage/cost
* Volume storage usage/cost
* Compute runtime usage/cost for sandboxes, agents, batch jobs and MCP servers
* Number/cost of asynchronous requests
You can toggle between usage and cost metrics using the switch in the top right corner of the section.
## Quotas
### Quota tiers
Blaxel has a quota tiering system that unlocks higher limits and features on the platform as your tier progresses.
Higher tiers unlock higher limits, such as more standby/concurrent sandboxes, more concurrent jobs, larger sandbox and volume sizes, and longer sandbox TTLs. They also unlock gated features such as custom domains, dedicated IPs, and more.
You can see the [limits and features of each tier](https://app.blaxel.ai/account/quotas) in the **Quotas** tab.
* Click the workspace name at the bottom left of the Blaxel Console.
* In the sub-menu, click **Account**.
* Click the **Quotas** tab.
#### How tiers are calculated
Your current tier is evaluated continuously based on your **30-day top-up volume**. This is the total amount of real funds added to your Blaxel wallet via manual or automated top-ups over the trailing 30 days.
**Top-ups are not flat fees**: Funds added to meet a tier requirement are deposited directly into your account balance. These funds are subsequently used to pay for your standard compute and storage consumption.
**Promotional credits do not unlock tiers**: Promotional credits (such as startup program grants) and previously accumulated balances automatically apply toward your active infrastructure costs. However, they do not count toward the 30-day top-up volume required to unlock or maintain a quota tier.
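Put concretely, tier eligibility can be modeled as a sum over a trailing 30-day window that counts only real, non-promotional top-ups. A sketch of that calculation (the data shape here is made up for illustration; it is not a Blaxel API):

```typescript theme={null}
// Illustrative model of the 30-day top-up volume used for tier eligibility.
// The TopUp shape is hypothetical, not part of the Blaxel SDK.
interface TopUp {
  amount: number;        // real funds added
  at: Date;              // when the top-up happened
  promotional: boolean;  // promotional credits do not count toward tiers
}

function topUpVolume30d(topUps: TopUp[], now: Date): number {
  const windowStart = now.getTime() - 30 * 24 * 60 * 60 * 1000;
  return topUps
    .filter((t) => !t.promotional && t.at.getTime() >= windowStart)
    .reduce((sum, t) => sum + t.amount, 0);
}
```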
#### Maintaining your tier
Because tier eligibility relies on a rolling 30-day window, your tier will automatically adjust if your top-up volume decreases. If your 30-day top-up volume falls below your current tier's threshold, your account will be downgraded, which may result in blocked resource creation if you exceed the lower tier's limits.
To prevent service disruptions and maintain your quotas without manual intervention, you can configure **Auto top-up** in the Billing section of your Console.
### Quota usage
Blaxel also lets you view and manage quotas associated with your account.
Quotas for your account are listed in the **All current quotas** section.
This section provides details on quotas for:
* Deployed sandboxes, agents, MCP servers, model APIs, policies, triggers, and jobs
* Sandboxes: volumes, volume templates, default TTL, previews, volume sizes and versions
* Jobs: maximum parallel executions, maximum concurrent tasks
* Workspaces: users, service accounts, integrations, custom domains, revisions, timeouts
### Increase account quotas
#### Automatically increase quotas via top-up
To unlock access to [higher quotas and limits](https://app.blaxel.ai/account/quotas), you must upgrade to a higher tier by topping up the appropriate amount of credit in your account. These top-up amounts go directly into your Blaxel wallet. This is not a fee for the tier itself, but rather a pre-payment for the usage you are about to run.
### Manually request a quota increase
**Upgrading to a higher tier** is the usual way to unlock higher quotas for your account. In case of issues, it is also possible to manually request an increase in quotas for your account through the Blaxel Console. To do this:
* Click the pencil icon.
* Input the requested quota value.
In some cases, you may see an **Upgrade** button instead of a pencil icon. Click this to access our payment gateway, in which you can provide details for payment.
Accounts on the free plan are not eligible for quota upgrades and will not see the option to request a quota increase in the Blaxel Console.
## Best practices for quota management
### Configure sandbox TTL
Keeping inactive sandboxes in standby consumes snapshot storage and counts against your quota limits. To optimize costs and quota usage, we recommend setting a Time-To-Live (TTL) of 7 to 60 days (depending on your re-activation patterns) to automatically prune dormant sandboxes.
### Monitor usage alerts
Blaxel automatically dispatches an email notification to workspace administrators when your infrastructure usage reaches 90% of your current tier's quota.
# SSO & Directory Sync
Source: https://docs.blaxel.ai/Security/SSO-Directory-sync
Configure SAML Single Sign-On and automated user provisioning via Directory Sync (SCIM) for your organization.
## Overview
Blaxel supports enterprise-grade identity management through two features:
* **SAML SSO**: Let your employees sign in through your existing identity provider (Okta, Azure AD, Google Workspace, OneLogin, etc.)
* **Directory Sync (SCIM)**: Automatically provision and deprovision workspace memberships based on your identity provider's directory groups
Both features require the **SAML** feature flag on your account. [Contact us](https://blaxel.ai/contact) or email [support@blaxel.ai](mailto:support@blaxel.ai) to get access.
## Prerequisites
* Account administrator role
* The `saml` feature flag enabled on your account
* At least one **verified domain**: complete [Domain Capture](/Security/Domain-capture) first
***
## SAML SSO
### How it works
Once configured, users from your verified domain are redirected to your identity provider's login page instead of seeing the default Blaxel login options. After authenticating with your IdP, they are signed in to Blaxel automatically.
When SAML is active, it becomes the **only** allowed authentication method for your domain. Other methods (Google, email, etc.) are locked out.
### Set up SAML SSO
Go to **Account Settings** → **Identity & Access**.
If you haven't already, add and verify your company domain. See [Domain Capture](/Security/Domain-capture).
Scroll to **SAML Identity Provider** and click **Configure SAML Provider**.
This opens the SSO Admin Portal in a new tab.
In the SSO portal, follow the step-by-step instructions for your IdP. We provide setup guides for all major providers including Okta, Azure AD, Google Workspace, and OneLogin.
Return to **Account Settings** → **Identity & Access**. The **SAML Identity Provider** section shows **Active** with the provider name and connection name once setup is complete.
### Single Logout (SLO)
When a SAML user signs out of Blaxel, they are also signed out of your identity provider if your IdP supports Single Logout. No additional configuration is required on the Blaxel side.
***
## Directory Sync (SCIM)
### How it works
Directory Sync connects your identity provider's directory to Blaxel. When you add or remove users from groups in your IdP, Blaxel automatically adds or removes them from the corresponding workspaces.
### Set up Directory Sync
Go to **Account Settings** → **Identity & Access**.
Scroll to **Directory Sync (SCIM)** and click **Configure Directory Sync**.
This opens the Admin Portal in a new tab.
In the portal, select your directory provider and follow the setup instructions.
After connecting, configure group-to-workspace mappings so that members of each group are automatically provisioned into the right workspaces with the right roles.
Once active, the **Directory Sync (SCIM)** section shows **Active** with the provider type and directory name.
### Viewing membership source in the team table
The **Workspace Settings → Team** table includes a **Source** column that shows how each member joined the workspace.
| Source | Meaning |
| ------------------ | -------------------------------------------------------------- |
| **Directory Sync** | Provisioned automatically by Directory Sync |
| **Invitation** | Joined via an email invitation |
| **Domain Capture** | Auto-joined because their email domain matched a domain policy |
| **Local** | Added directly within Blaxel |
### Deprovisioning
When a user is removed from a synced group in your IdP, Blaxel automatically removes their workspace membership on the next sync event. Their Blaxel account is not deleted; only the workspace membership is removed.
Avoid manually removing members who were provisioned by Directory Sync, as they will be re-added on the next sync. Manage membership through your identity provider instead.
***
## Related
* [Domain Capture](/Security/Domain-capture)
* [Workspace Access Control](/Security/Workspace-access-control)
# Service accounts
Source: https://docs.blaxel.ai/Security/Service-accounts
Automate the life-cycle of Blaxel resources via API through service accounts.
Service accounts are workspace users (i.e. identities) that represent an external system that needs to access Blaxel to operate resources in **one specific workspace**.
## Authentication of service accounts
### API keys
Service accounts can use [API keys](/Security/Access-tokens) to authenticate on Blaxel. These API keys can be created and managed by admins from the service account’s page, in your workspace settings.
API keys can have an infinite validity duration.
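In practice, a service-account API key is sent as a bearer token together with the workspace header, matching the Management API curl examples used elsewhere in these docs. A minimal sketch (hypothetical helper):

```typescript theme={null}
// Hypothetical helper: build auth headers for a service-account API key,
// mirroring the Authorization / X-Blaxel-Workspace pattern used in the
// Management API examples in these docs.
function serviceAccountHeaders(apiKey: string, workspace: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    "X-Blaxel-Workspace": workspace,
  };
}

// Usage (sketch):
// const res = await fetch("https://api.blaxel.ai/v0/...", {
//   headers: serviceAccountHeaders(process.env.BL_API_KEY!, "my-workspace"),
// });
```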
### Client credentials (OAuth 2.0)
Service accounts can also use the *client credentials* [OAuth 2.0 grant type](/Security/Access-tokens), via their **client ID** and **client secret**. A client ID / client secret pair is generated automatically by Blaxel when you create a new service account.
Make sure to copy the client secret when you create the service account as you will never be able to access it again after you leave the page.
## Permissions of service accounts
Service accounts can be assigned the [same roles and permissions](/Security/Workspace-access-control) as other users from your team. These permissions are managed by admins in your workspace settings.
# Workspaces, users and roles
Source: https://docs.blaxel.ai/Security/Workspace-access-control
Manage workspaces, control access, and understand account-level sharing.
All resources on Blaxel are logically regrouped in an **account**, which is the highest possible level of tenancy. An account can have **multiple workspaces** — a common pattern when dealing with multiple environments (e.g. development vs. production), business units, or end-clients.
Users can be either [team members](/Security/Workspace-access-control) or [service accounts](/Security/Service-accounts) that represent external systems operating Blaxel. They are added to a workspace with permissions on workspace resources inherited from their role.
## Workspace name
The workspace name (called `name` in the Blaxel API) uniquely identifies your workspace. You set it when the workspace is created. Once set, it cannot be changed. The workspace name appears at the bottom of the left sidebar in the Blaxel Console.
Each workspace also has a display name for better organization, which workspace admins can modify.
## User roles
There are two roles that a user or service account can have in a workspace: **admin** and **member**.
Admins have **complete access** to all resources in the workspace. They can also modify all workspace settings, including inviting other team members. More specifically, admins have all the permissions that members have, in addition to:
* creating and editing policies
* inviting and removing users
* changing users' permissions
* adding and removing integrations
* changing the workspace name
* deleting the workspace
Members can view the workspace settings but not edit them. They are also able to **view and modify** the following resources inside a workspace (including querying them when applicable):
* [agents](../Agents/Overview)
* [MCP servers](../Functions/Overview)
* [model APIs](../Models/Model-deployment)
* [batch jobs](../Jobs/Overview)
* [sandboxes](../Sandboxes/Overview)
They have read-only access on the following resources:
* [policies](../Model-Governance/Policies)
## Shared account settings
A Blaxel account is the top-level entity that owns one or more workspaces. The following are managed at the account level and are **shared across all workspaces** in an account:
* Quota tier and limits
* Credits
* SSO/SAML configuration
* Usage analytics
This means that resource usage in any workspace draws from the common credit balance and counts toward the same quotas. See the [usage and quotas documentation](./Quotas) for more information.
## Create a workspace
Workspaces can be created, managed, and deleted using the Blaxel Console and the Blaxel SDKs.
### Blaxel Console
To create a new workspace using the Blaxel Console:
* Click the workspace name at the bottom left corner.
* In the rollout menu, click **Switch** > **+**.
* In the **Create a new workspace** dialog, enter a workspace name and label.
* Click **Create** to create the workspace.
### Blaxel SDKs
```typescript TypeScript theme={null}
import { createWorkspace } from "@blaxel/core";
const { data: newWs } = await createWorkspace({
body: {
name: "my-workspace",
displayName: "My Workspace",
region: "us-west-2",
},
});
console.log(newWs);
```
```python Python theme={null}
import asyncio
from blaxel.core.common import autoload
from blaxel.core.client import client
from blaxel.core.client.api.workspaces import create_workspace
from blaxel.core.client.models.workspace import Workspace
autoload()
async def main():
new_ws = await create_workspace.asyncio(
client=client,
body=Workspace(
name="my-workspace",
display_name="My Workspace",
region="us-west-2",
),
)
print(new_ws)
asyncio.run(main())
```
## Delete a workspace
Deleting a workspace is permanent and cannot be undone.
### Blaxel Console
To delete a workspace using the Blaxel Console:
* Click the workspace name at the bottom left corner.
* Click **Workspace settings**.
* Review the information and click **Delete workspace** to delete the workspace.
### Blaxel SDKs
```typescript TypeScript theme={null}
import { deleteWorkspace } from "@blaxel/core";
const { data: deleted } = await deleteWorkspace({
path: {
workspaceName: "my-workspace",
},
});
console.log(deleted);
```
```python Python theme={null}
import asyncio
from blaxel.core.common import autoload
from blaxel.core.client import client
from blaxel.core.client.api.workspaces import delete_workspace
autoload()
async def main():
deleted = await delete_workspace.asyncio(
workspace_name="my-workspace",
client=client,
)
print(deleted)
asyncio.run(main())
```
## Invite a member
Admins can invite team members via their email address. They will be prompted for the role to give the user.
To invite a team member:
* Click **Workspace** > **Team**.
* Click **Invite user**.
* Enter the invitee's email address and role.
The invitee will receive an email allowing them to accept the invitation in the Blaxel Console. They will not be able to access workspace resources until they have accepted the invitation. If they don't already have a Blaxel account, they will be asked to sign up first.
Invitations to other workspaces are visible from your profile.
# Run Google ADK agents on Blaxel
Source: https://docs.blaxel.ai/Tutorials/ADK
Learn how to leverage Blaxel with Google ADK agents.
You can deploy your [Google Agent Development Kit (ADK)](https://github.com/google/adk-python/tree/main) projects to Blaxel with minimal code editing (and zero configuration), enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more.
## Get started with ADK on Blaxel
To get started with ADK on Blaxel:
* If you already have an ADK agent, adapt your code with [Blaxel SDK commands](../Agents/Develop-an-agent) to connect to [MCP servers](../Functions/Overview), [LLMs](../Models/Overview) and [other agents](../Agents/Overview).
* Otherwise, initialize an example project by running the following Blaxel CLI command and selecting the *Google ADK hello world* template:
```bash theme={null}
bl new agent
```
[Deploy](../Agents/Deploy-an-agent) it by running:
```bash theme={null}
bl deploy
```
## Develop an ADK agent using Blaxel features
While building your agent in ADK, use Blaxel [SDK](../sdk-reference/introduction) to connect to resources already hosted on Blaxel:
* [MCP servers](../Functions/Overview)
* [LLMs](../Models/Overview)
* [other agents](../Agents/Overview)
### Connect to MCP servers
Connect to [MCP servers](../Functions/Overview) using the Blaxel SDK to access pre-built or custom tool servers hosted on Blaxel. This eliminates the need to manage server connections yourself, with credentials stored securely on the platform.
Run the following command to retrieve tools in ADK format:
```python Python theme={null}
from blaxel.googleadk import bl_tools
tools = await bl_tools(['mcp-server-name'])
```
### Connect to LLMs
Connect to [LLMs](../Models/Overview) hosted on Blaxel using the SDK to avoid managing model API connections yourself. All credentials remain securely stored on the platform.
```python Python theme={null}
from blaxel.googleadk import bl_model
model = await bl_model("model-api-name")
```
### Connect to other agents
Connect to other agents hosted on Blaxel from your code by using the [Blaxel SDK](../sdk-reference/introduction). This allows for multi-agent chaining without managing connections yourself. This command is independent of the framework used to build the agent.
```python Python theme={null}
from blaxel.core.agents import bl_agent
response = await bl_agent("agent-name").run(input)
```
### Host your agent on Blaxel
You can [deploy](../Agents/Deploy-an-agent) your agent on Blaxel, enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more. This command is independent of the framework used to build the agent.
Either run the following CLI command from the root of your agent repository:
```bash theme={null}
bl deploy
```
Or [connect a GitHub repository to Blaxel](../Agents/Github-integration) for automatic deployments every time you push on *main*.
# Overview
Source: https://docs.blaxel.ai/Tutorials/Agents-Overview
Ship agents in any Python/TypeScript framework on Blaxel.
Blaxel is a fully framework-agnostic infrastructure platform that helps you build and host your agents. It supports a **range of the most popular AI agent frameworks**, optimizing how your agent builds and runs no matter how you coded it.
Blaxel's platform-agnostic design lets you deploy your code either on Blaxel or through traditional methods like Docker containers. When deploying on Blaxel, your agent goes through a specialized build process that gives it access to Blaxel features through SDK commands in its code. This low-level SDK connects you to [MCP servers](../Functions/Overview), [LLM APIs](../Models/Overview) and [other agents](../Agents/Overview) that are hosted on Blaxel.
As such, you can build your agentic applications with anything from [LangChain](/Tutorials/LangChain) or [Vercel AI SDK](/Tutorials/Vercel-AI) to pure TypeScript or Python, and deploy them with minimal upfront setup. Learn how to [get started with Blaxel](../Get-started).
* Run OpenClaw in a Blaxel sandbox.
* Build and deploy Claude Agent SDK agents on Blaxel.
* Run multi-agent systems built with CrewAI on Blaxel.
* Run LLM agents or workflows of agents on Blaxel using Google’s Agent Development Kit (ADK).
* Build and deploy LangChain agents on Blaxel.
* Deploy LlamaIndex agentic systems on Blaxel.
* Use Mastra framework to develop agentic AI on Blaxel.
* Orchestrate Blaxel agents using n8n workflows.
* Leverage OpenAI Agents SDK to create Blaxel agents.
* Create and deploy PydanticAI agents on Blaxel.
* Host agents built with Vercel’s AI SDK on Blaxel.
* Create and deploy custom agents in Python or TypeScript.
# Run Astro in a sandbox
Source: https://docs.blaxel.ai/Tutorials/Astro
Configure an Astro application to run in a Blaxel sandbox
This tutorial explains how to run an Astro application inside a Blaxel sandbox and expose it securely using sandbox preview URLs.
## Prerequisites
Before starting, ensure you have:
* [Blaxel CLI](../cli-reference/introduction) installed and authenticated (`bl login`)
* Node.js 18+ installed
* `@blaxel/core` package installed in your project (`npm install @blaxel/core`)
## Architecture Overview
Running Astro inside a Blaxel sandbox requires a few adjustments:
* Configuring `astro.config.mjs` with `host: '0.0.0.0'` and `allowedHosts: true`
* Exposing the Astro dev server via a Blaxel preview URL
## Create a base sandbox image
### Dockerfile
```dockerfile theme={null}
FROM oven/bun:alpine
RUN apk update && apk add --no-cache \
git \
curl \
netcat-openbsd \
nodejs \
npm \
&& rm -rf /var/cache/apk/*
WORKDIR /app
COPY --from=ghcr.io/blaxel-ai/sandbox:latest /sandbox-api /usr/local/bin/sandbox-api
# Create Astro project with npx (more reliable for template downloads), then use bun for deps
RUN npx create-astro@latest /app --template basics --no-install --no-git --yes \
&& bun install
COPY ./astro.config.mjs /app/astro.config.mjs
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```
### astro.config.mjs
Create an `astro.config.mjs` file that allows external connections:
```javascript theme={null}
// @ts-check
import { defineConfig } from 'astro/config';
// https://astro.build/config
export default defineConfig({
server: {
host: '0.0.0.0',
port: 4321,
allowedHosts: true
}
});
```
### entrypoint.sh
Create an entrypoint script that starts the sandbox API and the dev server:
```bash theme={null}
#!/bin/sh
# Set environment variables
export PATH="/usr/local/bin:$PATH"
# Start sandbox-api in the background
/usr/local/bin/sandbox-api &
# Function to wait for port to be available
wait_for_port() {
local port=$1
local timeout=30
local count=0
echo "Waiting for port $port to be available..."
while ! nc -z localhost $port; do
sleep 1
count=$((count + 1))
if [ $count -gt $timeout ]; then
echo "Timeout waiting for port $port"
exit 1
fi
done
echo "Port $port is now available"
}
# Wait for port 8080 to be available
wait_for_port 8080
# Execute curl command to start Astro dev server
echo "Running Astro dev server..."
curl http://localhost:8080/process \
-X POST \
-H "Content-Type: application/json" \
-d '{
"name": "dev-server",
"workingDir": "/app",
"command": "bun run dev",
"waitForCompletion": false,
"restartOnFailure": true,
"maxRestarts": 25
}'
wait
```
### blaxel.toml
Create a `blaxel.toml` file in the same directory as your Dockerfile:
```toml theme={null}
type = "sandbox"
name = "astro-template"
[runtime]
memory = 4096
[[runtime.ports]]
name = "astro-dev"
target = 4321
protocol = "tcp"
```
## Deploy the sandbox image
Deploy the image by running:
```bash theme={null}
bl deploy
```
## Create or reuse a sandbox
Create a sandbox from the base image:
```typescript theme={null}
import { SandboxInstance } from "@blaxel/core";
const sandboxName = "my-astro-sandbox";
const sandbox = await SandboxInstance.createIfNotExists({
name: sandboxName,
labels: {
framework: "astro",
},
image: "astro-template:latest",
memory: 4096,
ports: [
{ name: "preview", target: 4321, protocol: "HTTP" },
],
});
```
## Configure CORS for preview URL access
Astro dev servers work well with permissive CORS headers when accessed through a preview URL:
```typescript theme={null}
const responseHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS, PATCH",
"Access-Control-Allow-Headers":
"Content-Type, Authorization, X-Requested-With, X-Blaxel-Workspace, X-Blaxel-Preview-Token, X-Blaxel-Authorization",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Content-Length, X-Request-Id",
"Access-Control-Max-Age": "86400",
Vary: "Origin",
};
```
Alternatively, you can use [custom domains](https://docs.blaxel.ai/Infrastructure/Custom-domains) to expose previews on your own domain.
## Create the preview URL
Astro runs on port 4321, so we expose that port via a [preview URL](../Sandboxes/Preview-url):
```typescript theme={null}
const preview = await sandbox.previews.createIfNotExists({
metadata: { name: "dev-server-preview" },
spec: {
responseHeaders,
public: false,
port: 4321,
},
});
```
## Generate a preview token
To securely access the preview, a token is required:
```typescript theme={null}
const expiresAt = new Date(Date.now() + 1000 * 60 * 60 * 24); // 1 day
const token = await preview.tokens.create(expiresAt);
```
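The one-day expiry above is just `now + 24h` expressed in milliseconds. If you create tokens in several places, a small helper keeps the arithmetic in one spot (the `expiresInHours` name is illustrative, not part of the SDK):

```typescript theme={null}
// Illustrative helper: compute an expiry Date a given number of hours from now.
function expiresInHours(hours: number, from: Date = new Date()): Date {
  return new Date(from.getTime() + hours * 60 * 60 * 1000);
}

// Usage: const token = await preview.tokens.create(expiresInHours(24));
```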
## Start the dev server
If not using the entrypoint script, you can start the dev server programmatically:
```typescript theme={null}
async function startDevServer(sandbox: SandboxInstance) {
console.log("Starting Astro dev server...");
await sandbox.process.exec({
name: "dev-server",
command: "bun run dev",
workingDir: "/app",
waitForPorts: [4321],
restartOnFailure: true,
maxRestarts: 25,
});
}
```
## Stream logs
To monitor the Astro dev server output in real-time:
```typescript theme={null}
const logStream = sandbox.process.streamLogs("dev-server", {
onLog(log) {
console.log(log);
},
});
// When done monitoring, close the stream:
logStream.close();
```
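The `onLog` callback receives each log line as plain text, so you can filter before printing. For example, a hypothetical predicate that surfaces only error-looking output (the pattern is an assumption, not part of the SDK):

```typescript theme={null}
// Illustrative: keep only lines that look like errors before logging them.
const isErrorLine = (line: string): boolean =>
  /\b(error|failed|fatal)\b/i.test(line);

// Usage inside streamLogs:
//   onLog(log) { if (isErrorLine(log)) console.error(log); }
```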
## Access the Astro application
Once everything is running, the Astro application will be available at `https://<preview-url>?bl_preview_token=<token>`
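Composing that URL from the preview object can be sketched as follows; unlike plain string concatenation, this also URL-encodes the token value. The `buildPreviewUrl` helper is illustrative (`preview.spec?.url` and `token.value` come from the previous steps):

```typescript theme={null}
// Illustrative: append the preview token as a query parameter to the preview URL.
function buildPreviewUrl(baseUrl: string, tokenValue: string): string {
  const url = new URL(baseUrl);
  url.searchParams.set("bl_preview_token", tokenValue);
  return url.toString();
}

// Usage: const webUrl = buildPreviewUrl(preview.spec?.url ?? "", token.value);
```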
## Complete example
Here is a full example combining all the steps:
```typescript theme={null}
import { SandboxInstance } from "@blaxel/core";
const sandboxName = "my-astro-sandbox";
const responseHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS, PATCH",
"Access-Control-Allow-Headers":
"Content-Type, Authorization, X-Requested-With, X-Blaxel-Workspace, X-Blaxel-Preview-Token, X-Blaxel-Authorization",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Content-Length, X-Request-Id",
"Access-Control-Max-Age": "86400",
Vary: "Origin",
};
async function startDevServer(sandbox: SandboxInstance) {
await sandbox.process.exec({
name: "dev-server",
command: "bun run dev",
workingDir: "/app",
waitForPorts: [4321],
restartOnFailure: true,
maxRestarts: 25,
});
}
async function main() {
try {
// Create or reuse the sandbox
const sandbox = await SandboxInstance.createIfNotExists({
name: sandboxName,
labels: {
framework: "astro",
},
image: "astro-template:latest",
memory: 4096,
ports: [
{ name: "preview", target: 4321, protocol: "HTTP" },
]
});
// Create preview
const preview = await sandbox.previews.createIfNotExists({
metadata: { name: "preview" },
spec: {
responseHeaders,
public: false,
port: 4321,
},
});
// Generate preview token
const expiresAt = new Date(Date.now() + 1000 * 60 * 60 * 24);
const token = await preview.tokens.create(expiresAt);
// Start dev server if not already running
const processes = await sandbox.process.list();
if (!processes.find((p) => p.name === "dev-server")) {
await startDevServer(sandbox);
}
// Print access URL
const webUrl = `${preview.spec?.url}?bl_preview_token=${token.value}`;
console.log(`Astro Preview URL: ${webUrl}`);
// Stream logs
const logStream = sandbox.process.streamLogs("dev-server", {
onLog(log) {
console.log(log)
},
});
// Keep running until interrupted
process.on("SIGINT", () => {
logStream.close();
process.exit(0);
});
} catch (error) {
console.error("Error:", error);
process.exit(1);
}
}
main();
```
# Run Claude Agent SDK agents on Blaxel
Source: https://docs.blaxel.ai/Tutorials/Claude-Agent-SDK-Index
Deploy Claude Agent SDK agents on Blaxel with serverless hosting, colocated close to sandboxes for near-instant latency and full observability.
Anthropic's [Claude Agent SDK](https://platform.claude.com/docs/en/agent-sdk/overview) is a library for building autonomous AI agents using Claude Code. You can deploy your Claude Agent SDK projects to Blaxel with minimal code editing (and zero configuration), enabling you to colocate them close to the [sandboxes](/Sandboxes/Overview) the agents work on.
* Build and deploy Claude Agent SDK agents on Blaxel.
* Build an agent that operates a sandbox using the sandbox's MCP server and Claude Agent SDK.
* Build an agent that connects to a Blaxel MCP server running in code mode using Claude Agent SDK.
# Connect Claude Agent SDK to a Blaxel sandbox
Source: https://docs.blaxel.ai/Tutorials/Claude-Agent-SDK-MCP
Connect Claude Agent SDK (Claude Code SDK) to a Blaxel sandbox and operate it using its MCP server.
Every Blaxel sandbox exposes a Model Context Protocol (MCP) server that allows agents to operate the sandbox using tool calls. This includes [tools for process management, filesystem operations, and code generation](../Sandboxes/MCP#tools-available-in-the-mcp-server).
This tutorial demonstrates this by creating a sandbox-aware agent using the [Claude Agent SDK](https://platform.claude.com/docs/en/agent-sdk/overview), but you could also use other frameworks like LangChain, Vercel AI SDK, Mastra, or your own custom code.
## Prerequisites
* An Anthropic API key, required by Claude Agent SDK. If not, [sign up for an Anthropic account](https://platform.claude.com/) and obtain an API key.
* A Blaxel account and API key. If not, [sign up for a Blaxel account](https://blaxel.ai) and [obtain an API key](/Security/Access-tokens#api-keys).
* The Blaxel CLI. If not, [download and install the Blaxel CLI](../cli-reference/introduction).
## 1. Install required dependencies
Create a directory for the project:
```shell theme={null}
mkdir sandbox-agent && cd sandbox-agent
```
Agents deployed on Blaxel must expose an HTTP endpoint for requests.
In your project directory, install the [Claude Agent SDK](https://platform.claude.com/docs/en/agent-sdk/overview) for the agent loop, the Blaxel TypeScript SDK / Python SDK for sandbox operations, and [Express](https://expressjs.com/) (TypeScript) / [FastAPI](https://fastapi.tiangolo.com/) (Python) to handle HTTP requests and responses:
```shell TypeScript (npm) theme={null}
npm init # if new project
npm install @anthropic-ai/claude-agent-sdk express @blaxel/core
```
```shell TypeScript (pnpm) theme={null}
pnpm init # if new project
pnpm install @anthropic-ai/claude-agent-sdk express @blaxel/core
```
```shell TypeScript (yarn) theme={null}
yarn init # if new project
yarn add @anthropic-ai/claude-agent-sdk express @blaxel/core
```
```shell TypeScript (bun) theme={null}
bun init -m --yes # if new project
bun install @anthropic-ai/claude-agent-sdk express @blaxel/core
```
```shell Python (pip) theme={null}
python3 -m venv .venv && source .venv/bin/activate # if new project
pip install claude-agent-sdk "fastapi[standard]" blaxel
```
## 2. Configure the environment
Add your API keys to a `.env` file in the project directory:
```shell theme={null}
echo "ANTHROPIC_API_KEY=your_anthropic_key_here" > .env
echo "BL_API_KEY=your_blaxel_key_here" >> .env
```
## 3. Build the agent
In your project directory, create a file named `index.ts` (TypeScript) or `main.py` (Python) with the following code:
```typescript TypeScript theme={null}
import { query } from "@anthropic-ai/claude-agent-sdk";
import express from "express";
import { SandboxInstance } from "@blaxel/core";
const host = process.env.HOST || "0.0.0.0";
const port = parseInt(process.env.PORT || "8000");
const app = express();
app.use(express.json());
app.post("/query", async (req, res) => {
const { prompt } = req.body;
if (!prompt) {
return res.status(400).json({ error: "prompt is required" });
}
if (!process.env.BL_API_KEY) {
return res.status(400).json({ error: "BL_API_KEY env var is required" });
}
try {
const sandbox = await SandboxInstance.createIfNotExists({
name: "my-sandbox",
image: "blaxel/base-image:latest",
memory: 4096,
region: "us-pdx-1",
});
let response = "";
for await (const message of query({
prompt: prompt,
options: {
systemPrompt: "You are connected to a sandbox environment with tools. Use the tools to accomplish the task.",
mcpServers: {
"sandbox": {
type: "http",
url: `${sandbox.metadata?.url}/mcp`,
headers: {
Authorization: `Bearer ${process.env.BL_API_KEY}`,
},
},
},
tools: [],
permissionMode: "bypassPermissions",
allowDangerouslySkipPermissions: true,
}
})) {
if (message.type === "assistant" && message.message?.content) {
for (const block of message.message.content) {
if ("text" in block) {
console.log(block.text);
} else if ("name" in block) {
console.log(`Tool: ${block.name}`);
}
}
} else if (message.type === "result") {
console.log(`Done: ${message.result}`); // Final result
response = message.result
}
}
return res.json({ response });
} catch (error) {
return res.status(500).json({ error: error instanceof Error ? error.message : "Unknown error" });
}
});
app.listen(port, host, () => {
console.log(`Server listening on ${host}:${port}`);
});
```
```python Python theme={null}
import os
from fastapi import FastAPI, HTTPException, Request
from claude_agent_sdk import query, ClaudeAgentOptions, AssistantMessage, ResultMessage
from blaxel.core import SandboxInstance
import uvicorn
host = os.getenv("HOST", "0.0.0.0")
port = int(os.getenv("PORT", "8000"))
app = FastAPI()
@app.post("/query")
async def query_endpoint(request: Request):
body = await request.json()
prompt = body.get("prompt")
if not prompt:
raise HTTPException(status_code=400, detail="prompt is required")
api_key = os.getenv("BL_API_KEY")
if not api_key:
raise HTTPException(status_code=400, detail="BL_API_KEY env var is required")
try:
sandbox = await SandboxInstance.create_if_not_exists({
"name": "my-sandbox",
"image": "blaxel/base-image:latest",
"memory": 4096,
"region": "us-pdx-1",
})
response = ""
async for message in query(
prompt=prompt,
options=ClaudeAgentOptions(
system_prompt="You are connected to a sandbox environment with tools. Use the tools to accomplish the task.",
mcp_servers={
"sandbox": {
"type": "http",
"url": f"{sandbox.metadata.url}/mcp",
"headers": {
"Authorization": f"Bearer {api_key}"
}
}
},
tools=[],
permission_mode="bypassPermissions",
)
):
if isinstance(message, AssistantMessage):
for block in message.content:
if hasattr(block, "text"):
print(block.text)
elif hasattr(block, "name"):
print(f"Tool: {block.name}")
elif isinstance(message, ResultMessage):
print(f"Done: {message.subtype}")
response = message.result
return {"response": response}
except Exception as error:
raise HTTPException(
status_code=500,
detail=str(error) if error else "Unknown error"
)
if __name__ == "__main__":
print(f"Server listening on {host}:{port}")
uvicorn.run(app, host=host, port=port)
```
This creates a Blaxel sandbox named `my-sandbox` and an agent using the Claude Agent SDK.
* The sandbox exposes a streamable [HTTP MCP server](/Sandboxes/MCP) at the sandbox's base URL: `https://<sandbox-url>/mcp`. The base URL can be [retrieved](https://docs.blaxel.ai/api-reference/compute/get-sandbox) under `metadata.url` in the sandbox.
* The agent exposes an HTTP endpoint at `/query` to accept user requests.
* The agent's HTTP service is bound to the host and port provided by Blaxel. Blaxel automatically injects these values as `HOST` and `PORT` variables into the runtime environment.
* The agent configuration includes the sandbox MCP server URL and uses the Blaxel API key as credential to gain access to it (the `Authorization` header).
In TypeScript, entrypoints are managed in the `scripts` section of the `package.json` file. Update your `package.json` so that `start` and `dev` scripts are defined (TypeScript only). The npm/pnpm/yarn scripts below use `tsx` as the runner; install it as a dev dependency if it isn't already present.
```json TypeScript (npm/pnpm/yarn) theme={null}
{
"scripts": {
"start": "tsx index.ts",
"dev": "tsx --watch index.ts",
"build": "tsc"
},
// ...
}
```
```json TypeScript (bun) theme={null}
{
"scripts": {
"start": "bun run index.ts",
"dev": "bun --watch index.ts"
},
// ...
}
```
Your agent is now ready to operate the sandbox using the sandbox's MCP tools! For example, you could instruct the agent to "install a Python dev environment" and it would use the available MCP tools to find, download and install all the required libraries and tools for Python development in the sandbox.
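Concretely, once the server is running locally, such an instruction can be sent to the `/query` endpoint with any HTTP client. A sketch of the request it expects (the `buildQueryRequest` helper is illustrative; the URL assumes the default `HOST`/`PORT` values from the code above):

```typescript theme={null}
// Illustrative: build the POST request that the /query endpoint accepts.
function buildQueryRequest(prompt: string) {
  return {
    url: "http://localhost:8000/query", // default HOST/PORT from the server code
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    },
  };
}

// Usage: const { url, init } = buildQueryRequest("Install a Python dev environment");
//        const res = await fetch(url, init);
//        const { response } = await res.json();
```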
## Next steps
Blaxel isn't just a sandbox platform. It also lets you [co-host and deploy your agent](../Agents/Overview) as a serverless auto-scalable endpoint, with near-instant latency. Agent hosting provides endpoints for both synchronous and asynchronous requests and includes full observability and tracing out of the box.
The following resources will help you go further:
* Complete tutorial for building an agent with Claude Agent SDK and deploying it on Blaxel as a serverless auto-scalable API.
* Complete tutorial for using the TypeScript SDK to develop an agent using Blaxel services.
* Complete tutorial for using the Python SDK to develop an agent using Blaxel services.
* Complete tutorial for deploying AI agents on Blaxel.
* Complete tutorial for managing variables and secrets when deploying on Blaxel.
# Run Claude Code in a sandbox
Source: https://docs.blaxel.ai/Tutorials/Claude-Code
Run Claude Code inside a Blaxel sandbox to execute coding tasks on a hosted codebase, with persistent storage and secure network access.
This tutorial explains how to run [Claude Code](https://claude.com/product/claude-code) inside a Blaxel sandbox and have it execute coding tasks on a codebase hosted in a different sandbox.
## Prerequisites
Before starting, ensure you have:
* a Python or TypeScript development environment;
* a Blaxel account and API key. If not, [sign up for a Blaxel account](https://blaxel.ai) and [create a Blaxel API key](../Security/Access-tokens#api-keys);
* a Claude account with an active subscription or an Anthropic Console account with usage-based billing, required by Claude Code. If not, [sign up for an Anthropic account](https://platform.claude.com/) and obtain a subscription or API credits as required;
If you wish to use an Anthropic API key instead of logging in to your Claude or Anthropic Console account, you can still follow this tutorial but you will need to perform [alternative steps for API key recognition when starting Claude Code](https://github.com/anthropics/claude-code/issues/441).
## Install the Blaxel CLI and SDK
1. [Download and install the Blaxel CLI](https://docs.blaxel.ai/cli-reference/introduction#install) and log in to your Blaxel account:
```shell theme={null}
bl login
```
2. In a new directory, install the Blaxel SDK ([Python](https://github.com/blaxel-ai/sdk-python) and [TypeScript](https://github.com/blaxel-ai/sdk-typescript) are both supported):
```shell TypeScript (npm) theme={null}
npm init # if new project
npm install @blaxel/core
```
```shell TypeScript (pnpm) theme={null}
pnpm init # if new project
pnpm install @blaxel/core
```
```shell TypeScript (yarn) theme={null}
yarn init # if new project
yarn add @blaxel/core
```
```shell TypeScript (bun) theme={null}
bun init -m --yes # if new project
bun install @blaxel/core
```
```shell Python theme={null}
python3 -m venv .venv
source .venv/bin/activate
pip install blaxel
```
## Create sandboxes
1. In the host environment, define the following variable:
```shell theme={null}
export BLAXEL_API_KEY=YOUR-BLAXEL-API-KEY-HERE
```
2. Create a script named `main.py` (Python) or `index.ts` (TypeScript) in the same directory.
```typescript TypeScript theme={null}
import { SandboxInstance } from "@blaxel/core";
async function main() {
// Check for required variables
const blaxelApiKey = process.env.BLAXEL_API_KEY;
if (!blaxelApiKey) {
throw new Error("BLAXEL_API_KEY environment variable is not set");
}
// Create application sandbox
const appSandbox = await SandboxInstance.createIfNotExists({
name: "nextjs-sandbox",
image: "blaxel/nextjs:latest",
memory: 4096,
ports: [{ target: 3000, protocol: "HTTP" }],
region: "us-pdx-1",
});
console.log("Application sandbox created");
// Start dev server in application sandbox
await appSandbox.process.exec({
workingDir: "/blaxel/app",
command: "npm run dev -- --hostname 0.0.0.0 --port 3000"
});
console.log("Application server started");
// Create preview URL
const appPreview = await appSandbox.previews.createIfNotExists({
metadata: { name: "app-preview" },
spec: {
port: 3000,
public: false,
}
});
console.log("Preview URL created");
// Create preview token
// Valid for 24 hours
const expiresAt = new Date(Date.now() + 1440 * 60 * 1000);
const token = await appPreview.tokens.create(expiresAt);
// Get preview URL
console.log(`Preview URL: ${appPreview.spec.url}?bl_preview_token=${token.value}`);
console.log(`MCP URL: ${appSandbox.metadata.url}/mcp`);
// Create Claude Code sandbox
const claudeSandbox = await SandboxInstance.createIfNotExists({
name: "claude-sandbox",
image: "blaxel/node:latest",
memory: 4096,
region: "us-pdx-1",
envs: [
{ name: "BLAXEL_API_KEY", value: blaxelApiKey }
]
});
console.log("Claude Code sandbox created");
}
main();
```
```python Python theme={null}
import asyncio
import os
import sys
from datetime import datetime, timedelta, UTC
from blaxel.core import SandboxInstance
async def main():
# Check for required variables
blaxel_api_key = os.getenv("BLAXEL_API_KEY")
if not blaxel_api_key:
raise ValueError("BLAXEL_API_KEY environment variable is not set")
# Create application sandbox
app_sandbox = await SandboxInstance.create_if_not_exists({
"name": "nextjs-sandbox",
"image": "blaxel/nextjs:latest",
"memory": 4096,
"ports": [{ "target": 3000, "protocol": "HTTP" }],
"region": "us-pdx-1",
})
print("Application sandbox created")
# Start dev server in application sandbox
await app_sandbox.process.exec({
"working_dir": "/blaxel/app",
"command": "npm run dev -- --hostname 0.0.0.0 --port 3000"
})
print("Application server started")
# Create preview URL
app_preview = await app_sandbox.previews.create_if_not_exists({
"metadata": {"name": "app-preview"},
"spec": {
"port": 3000,
"public": False,
}
})
print("Preview URL created")
# Create preview token
# Valid for 24 hours
expires_at = datetime.now(UTC) + timedelta(minutes=1440)
token = await app_preview.tokens.create(expires_at)
# Get preview URL
print(f"Preview URL: {app_preview.spec.url}?bl_preview_token={token.value}")
print(f"MCP URL: {app_sandbox.metadata.url}/mcp")
# Create Claude Code sandbox
claude_sandbox = await SandboxInstance.create_if_not_exists({
"name": "claude-sandbox",
"image": "blaxel/node:latest",
"memory": 4096,
"region": "us-pdx-1",
"envs": [
{"name": "BLAXEL_API_KEY", "value": blaxel_api_key}
]
})
print("Claude Code sandbox created")
if __name__ == "__main__":
asyncio.run(main())
```
This script creates two Blaxel sandboxes:
* `claude-sandbox` using Blaxel's Node.js base image
* `nextjs-sandbox` using Blaxel's Next.js base image
In the Claude Code sandbox, it:
* adds the Blaxel API key to `claude-sandbox` as an environment variable named `BLAXEL_API_KEY`.
In the application sandbox, it:
* starts the Next.js dev server in `nextjs-sandbox` on port 3000;
* creates a preview URL for the Next.js service running in `nextjs-sandbox` on port 3000;
* creates an access token for the preview URL, valid for 24 hours;
* returns the preview URL.
3. Run the script to create the sandboxes and preview URL:
```shell Python theme={null}
python main.py
```
```shell TypeScript theme={null}
bun index.ts
```
Once complete, the script displays the generated preview URL for the Next.js application (for example, `https://b186....preview.bl.run?bl_preview_token=cbba622560db78e...`) and the MCP server URL (for example, `https://sbx-nextjs-sandbox....bl.run/mcp`) for the Next.js sandbox. Note these values, as you will need them in the next steps.
## Install and configure Claude Code
1. Connect to the Claude Code sandbox terminal:
```shell theme={null}
bl connect sandbox claude-sandbox
```
2. Execute the following commands to install Claude Code in the sandbox and include it in the system PATH:
```shell theme={null}
apk add curl bash
curl -fsSL https://claude.ai/install.sh | bash
echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bashrc && source ~/.bashrc
```
For detailed installation instructions, refer to the [Claude Code documentation](https://code.claude.com/docs/en/overview).
3. Add the application sandbox's MCP server URL (obtained from the sandbox creation script in the previous section) to Claude Code:
```shell theme={null}
claude mcp add --transport http sandbox YOUR-SANDBOX-MCP-URL-HERE --header "Authorization: Bearer $BLAXEL_API_KEY"
```
4. Confirm that Claude Code is able to connect to the application sandbox's MCP server. Run the following command and confirm that you see output like `sandbox: .... connected`:
```shell theme={null}
claude mcp list
```
Claude Code is now ready to operate the sandbox using the sandbox's MCP tools.
## Test Claude Code
Start Claude Code in the sandbox:
```shell theme={null}
claude
```
You will be prompted for authentication, theme selection and permissions.
If you wish to use an Anthropic API key instead of logging in to your Claude or Anthropic Console account, refer to the [alternative steps for API key recognition when starting Claude Code](https://github.com/anthropics/claude-code/issues/441).
Once these steps are completed, give Claude Code a coding task referencing the application sandbox, as in the example prompt below:
```shell theme={null}
You have access to a sandbox environment over MCP. The sandbox includes tools to read and write files and directories and run commands. The sandbox includes a skeleton Next.js application at /blaxel/app. Update the application codebase and complete the coding task below.
You must make all your changes only in the sandbox.
Do not make any changes in the local environment.
Your task is: create a website for a new board game of your own invention, including an interactive demo
```
Claude Code will connect to the application sandbox, inspect the Next.js codebase and make changes as per your request. You may be prompted for permissions to use the sandbox MCP tools during the process. Once complete, visit the application sandbox's preview URL (obtained from the sandbox creation script in the previous section) to see the result.
## Resources
Want more information on building and deploying with Claude on Blaxel? Check out the following resources:
* Build and deploy Claude Agent SDK agents on Blaxel.
* An agent that operates a sandbox using the sandbox's MCP server and Claude Agent SDK.
* Build an agent that connects to a Blaxel MCP server running in code mode using Claude Agent SDK.
* Complete tutorial for deploying AI agents on Blaxel.
# Run code-server in a sandbox
Source: https://docs.blaxel.ai/Tutorials/Code-server
Configure code-server to run in a Blaxel sandbox
This tutorial explains how to run code-server inside a Blaxel sandbox and expose it securely using sandbox preview URLs.
## Prerequisites
Before starting, ensure you have:
* [Blaxel CLI](../cli-reference/introduction) installed and authenticated (`bl login`)
## Create a base sandbox image
### Dockerfile
```dockerfile theme={null}
FROM node:23-slim
RUN apt-get update && apt-get install -y \
curl \
wget \
procps \
jq \
sed \
grep \
nano \
vim \
git \
sudo \
python3 \
zip \
tree \
unzip \
ca-certificates \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
WORKDIR /home/user
RUN update-ca-certificates
RUN npm i -g typescript ts-node @types/node dotenv webpack webpack-cli
# Install code-server
RUN curl -fsSL https://code-server.dev/install.sh | sh
# Create /blaxel directory and Next.js app for testing
RUN mkdir -p /blaxel && \
npx create-next-app@latest /blaxel/app --use-npm --typescript --eslint --tailwind --src-dir --app --import-alias "@/*" --no-git --yes --no-turbopack
# Copy sandbox-api
COPY --from=ghcr.io/blaxel-ai/sandbox:latest /sandbox-api /usr/local/bin/sandbox-api
# Copy entrypoint script
COPY entrypoint.sh /home/user/entrypoint.sh
RUN chmod +x /home/user/entrypoint.sh
ENTRYPOINT ["/home/user/entrypoint.sh"]
```
### entrypoint.sh
The critical step is configuring code-server to bind on the correct address and trust all origins. This is required because sandbox preview traffic is proxied through Blaxel's infrastructure.
Create an entrypoint script that creates the configuration file and starts the sandbox API and code-server:
```bash theme={null}
#!/bin/bash
# Start sandbox-api in the background
echo "Starting sandbox-api on port 8080..."
/usr/local/bin/sandbox-api &
# Wait for sandbox-api to be ready
while ! curl -s http://127.0.0.1:8080/health > /dev/null 2>&1; do
sleep 0.1
done
echo "Sandbox API ready"
# Write code-server config to bind on port 8081 (CLI args don't override the config file)
mkdir -p /root/.config/code-server
cat > /root/.config/code-server/config.yaml << 'CONF'
bind-addr: 0.0.0.0:8081
auth: none
cert: false
trusted-origins:
- "*"
CONF
# Start code-server via the sandbox API
echo "Starting code-server on port 8081 via sandbox API..."
curl -s http://127.0.0.1:8080/process -X POST \
-H "Content-Type: application/json" \
-d '{"name":"code-server","command":"code-server --disable-telemetry --config /root/.config/code-server/config.yaml","workingDir":"/home/user","waitForCompletion":false, "env": {"PORT": "8081"}}'
echo "code-server started via sandbox API"
# Keep the entrypoint alive
wait
```
The key configuration settings here are:
* `bind-addr: 0.0.0.0:8081` - listens on all interfaces on port 8081, the port exposed via the sandbox preview
* `auth: none` - disables password auth (access will be gated using a [private preview URL and token](/Sandboxes/Preview-url), discussed below)
* `cert: false` - disables TLS termination (handled upstream by Blaxel)
* `trusted-origins: ["*"]` - required to allow requests coming from the sandbox proxy origin; without this, code-server will reject WebSocket connections
### blaxel.toml
Create a `blaxel.toml` file in the same directory as your Dockerfile:
```toml theme={null}
type = "sandbox"
name = "code-server-template"
[runtime]
memory = 4096
[[runtime.ports]]
name = "code-server"
target = 8081
protocol = "tcp"
[[runtime.ports]]
name = "debug-sandbox"
target = 8082
protocol = "tcp"
[[runtime.ports]]
name = "nextjs"
target = 3000
protocol = "tcp"
```
## Deploy the sandbox image
Deploy the image by running:
```bash theme={null}
bl deploy
```
## Create or reuse a sandbox
Create a sandbox from the base image:
```typescript theme={null}
import { SandboxInstance } from "@blaxel/core";
const sandboxName = "my-code-server-sandbox";
const sandbox = await SandboxInstance.createIfNotExists({
name: sandboxName,
image: "code-server-template:latest",
memory: 4096,
region: "us-was-1",
ports: [
{ name: "preview", target: 8081, protocol: "HTTP" },
],
});
```
## Create the preview URL
code-server runs on port 8081, so we expose that port via a [preview URL](/Sandboxes/Preview-url):
```typescript theme={null}
const preview = await sandbox.previews.createIfNotExists({
metadata: { name: "code-server-preview" },
spec: {
public: false,
port: 8081,
},
});
```
## Generate a preview token
To securely access the preview, a token is required:
```typescript theme={null}
const expiresAt = new Date(Date.now() + 1000 * 60 * 60 * 24); // 1 day
const token = await preview.tokens.create(expiresAt);
```
## Start the server
The entrypoint starts code-server automatically. The following function can be used as a fallback to restart it if the process is not running:
```typescript theme={null}
async function startCodeServer(sandbox: SandboxInstance) {
await sandbox.process.exec({
name: "code-server",
command: "code-server --disable-telemetry --config /root/.config/code-server/config.yaml",
workingDir: "/home/user",
waitForPorts: [8081],
restartOnFailure: true,
maxRestarts: 25,
});
}
```
## Access the application
Once everything is running, code-server will be available at `https://<preview-url>?bl_preview_token=<token>`
## Complete example
Here is a full example combining all the steps:
```typescript theme={null}
import { SandboxInstance } from "@blaxel/core";
const sandboxName = "my-code-server-sandbox";
async function startCodeServer(sandbox: SandboxInstance) {
await sandbox.process.exec({
name: "code-server",
command: "code-server --disable-telemetry --config /root/.config/code-server/config.yaml",
workingDir: "/home/user",
waitForPorts: [8081],
restartOnFailure: true,
maxRestarts: 25,
});
}
async function main() {
try {
// Create or reuse the sandbox
const sandbox = await SandboxInstance.createIfNotExists({
name: sandboxName,
image: "code-server-template:latest",
memory: 4096,
region: "us-was-1",
ports: [
{ name: "preview", target: 8081, protocol: "HTTP" },
],
});
// Create preview
const preview = await sandbox.previews.createIfNotExists({
metadata: { name: "code-server-preview" },
spec: {
public: false,
port: 8081,
},
});
// Generate preview token
const expiresAt = new Date(Date.now() + 1000 * 60 * 60 * 24);
const token = await preview.tokens.create(expiresAt);
// Start dev server if not already running
const processes = await sandbox.process.list();
if (!processes.find((p) => p.name === "code-server")) {
await startCodeServer(sandbox);
}
// Print access URL
const webUrl = `${preview.spec?.url}?bl_preview_token=${token.value}`;
console.log(`code-server preview URL: ${webUrl}`);
} catch (error) {
console.error("Error:", error);
}
}
main();
```
# Run CrewAI agents on Blaxel
Source: https://docs.blaxel.ai/Tutorials/CrewAI
Deploy multi-agent CrewAI systems on Blaxel with serverless hosting, observability, and connections to MCP servers and LLM APIs.
[CrewAI](https://www.crewai.com/) is a framework for orchestrating autonomous AI agents — enabling you to create AI teams where each agent has specific roles, tools, and goals, working together to accomplish complex tasks. You can deploy your CrewAI projects to Blaxel with minimal code editing (and zero configuration), enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more.
## Get started with CrewAI on Blaxel
To get started with CrewAI on Blaxel:
* if you already have a CrewAI agent, adapt your code with [Blaxel SDK commands](../Agents/Develop-an-agent) to connect to [MCP servers](../Functions/Overview), [LLMs](../Models/Overview) and [other agents](../Agents/Overview).
* else, initialize an example project in CrewAI by using the following Blaxel CLI command and selecting the *CrewAI hello world* template:
```bash theme={null}
bl new agent
```
[Deploy](../Agents/Deploy-an-agent) it by running:
```bash theme={null}
bl deploy
```
## Develop a CrewAI agent using Blaxel features
While building your agent in CrewAI, use Blaxel [SDK](../sdk-reference/introduction) to connect to resources already hosted on Blaxel:
* [MCP servers](../Functions/Overview)
* [LLMs](../Models/Overview)
* [other agents](../Agents/Overview)
### Connect to MCP servers
Connect to [MCP servers](../Functions/Overview) using the Blaxel SDK to access pre-built or custom tool servers hosted on Blaxel. This eliminates the need to manage server connections yourself, with credentials stored securely on the platform.
Run the following command to retrieve tools in CrewAI format:
```python Python theme={null}
from blaxel.crewai import bl_tools
await bl_tools(['mcp-server-name'])
```
### Connect to LLMs
Connect to [LLMs](../Models/Overview) hosted on Blaxel using the SDK to avoid managing model API connections yourself. All credentials remain securely stored on the platform.
```python Python theme={null}
from blaxel.crewai import bl_model
model = await bl_model("model-api-name")
```
### Connect to other agents
Connect to other agents hosted on Blaxel from your code by using the [Blaxel SDK](../sdk-reference/introduction). This allows for multi-agent chaining without managing connections yourself. This command is independent of the framework used to build the agent.
```python Python theme={null}
from blaxel.core.agents import bl_agent
response = await bl_agent("agent-name").run(input)
```
### Host your agent on Blaxel
You can [deploy](../Agents/Deploy-an-agent) your agent on Blaxel, enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more. This command is independent of the framework used to build the agent.
Either run the following CLI command from the root of your agent repository:
```bash theme={null}
bl deploy
```
Or [connect a GitHub repository to Blaxel](../Agents/Github-integration) for automatic deployments every time you push to *main*.
# Run custom agents on Blaxel
Source: https://docs.blaxel.ai/Tutorials/Custom-Agents
Build and deploy custom agents on Blaxel in Python or TypeScript
Blaxel’s development paradigm is designed to have a minimal footprint on your usual development process.
Your custom code remains platform-agnostic: you can deploy it on Blaxel or through traditional methods like Docker containers on VMs or Kubernetes clusters. When you deploy on Blaxel (CLI command `bl deploy`), Blaxel runs a specialized build process that integrates your code with its [Global Agentics Network](../Infrastructure/Global-Inference-Network) features.
At this time, Blaxel only supports custom agents developed in TypeScript or Python.
Here is a high-level overview of how agents can be built and deployed using Blaxel:
1. **Initialize a new project by creating a local git repository**. This will contain your agent's logic and connections, as well as all required dependencies. For quick setup, use [Blaxel CLI](../cli-reference/introduction) command `bl new agent`, which creates a pre-scaffolded local repository ready for development that you can deploy to Blaxel in one command.
2. **Develop and test your agent iteratively in a local environment**.
1. Develop your agent logic however you want (using an agentic framework or any custom TypeScript/Python code). Write your own functions as needed. Use Blaxel SDK commands to connect to resources from Blaxel such as model APIs and tool servers.
2. Use Blaxel CLI command `bl serve` to serve your agent on your local machine. The execution workflow—including agent logic, functions, and model API calls—is broken down and sandboxed exactly as it would be when served on Blaxel.
3. **Deploy your agent**. Use Blaxel CLI command `bl deploy` to build and deploy your agent on Blaxel. You can manage a development and production lifecycle by deploying multiple agents, each with an appropriate prefix or label.
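The workflow above boils down to three CLI commands. This is a sketch of the sequence; the generated project folder name depends on the answers you give during scaffolding:

```shell
bl new agent        # scaffold a pre-configured local repository
cd <project-name>   # placeholder: the folder created by the previous command
bl serve            # run and test the agent on your local machine
bl deploy           # build and deploy the agent to Blaxel
```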
## Develop an agent on Blaxel
* Develop your AI agents in TypeScript using the Blaxel SDK.
* Develop your AI agents in Python using the Blaxel SDK.
# Run Expo in a sandbox
Source: https://docs.blaxel.ai/Tutorials/Expo
Configure an Expo application to run in a Blaxel sandbox
This tutorial explains how to run an Expo (React Native / Web) application inside a Blaxel sandbox and expose it securely using sandbox preview URLs.
## Prerequisites
* [Blaxel CLI](../cli-reference/introduction) installed and authenticated (`bl login`)
* Node.js 18+ installed
* `@blaxel/core` package installed in your project (`npm install @blaxel/core`)
## Architecture overview
Running Expo inside a Blaxel sandbox requires a few adjustments:
1. Injecting the preview URL into `app.json`
2. Setting `EXPO_PACKAGER_PROXY_URL` to force Expo to proxy assets through the preview URL
3. Restarting the dev server after configuration changes
4. Exposing the Expo dev server via a Blaxel preview URL
## Create a base sandbox image
### Dockerfile
```dockerfile theme={null}
FROM node:22-alpine
RUN apk update && apk add --no-cache \
git \
curl \
netcat-openbsd \
&& rm -rf /var/cache/apk/*
WORKDIR /app
COPY --from=ghcr.io/blaxel-ai/sandbox:latest /sandbox-api /usr/local/bin/sandbox-api
# Create Expo project with default template
RUN npx create-expo-app@latest .
# Install web dependencies and expo-asset for asset loading support
RUN npx expo install react-dom react-native-web expo-asset
# Patch expo-asset: it hardcodes http:// when constructing asset URLs
# from the manifest's debuggerHost. Behind the Blaxel HTTPS proxy,
# Android's native downloader can't follow the HTTP→HTTPS redirect,
# so fonts and images fail to load. Force https://.
RUN sed -i "s|'http://' + manifest2.extra.expoGo.debuggerHost|'https://' + manifest2.extra.expoGo.debuggerHost|g" \
node_modules/expo-asset/build/AssetSources.js
# Pre-warm Metro bundler cache by starting the dev server briefly
# This warms up the cache better than export since it matches actual dev usage
RUN timeout 120 npx expo start --web --port 8081 --no-dev-client 2>/dev/null || true
# Expose port for Expo web
EXPOSE 8081
# Add npm global modules to PATH
ENV PATH="/usr/local/bin:$PATH"
ENTRYPOINT ["/usr/local/bin/sandbox-api"]
```
### blaxel.toml
Create a `blaxel.toml` file in the same directory as your Dockerfile:
```toml theme={null}
type = "sandbox"
name = "expo-template"
[runtime]
memory = 8096
[[runtime.ports]]
name = "expo-web"
target = 8081
```
## Deploy the sandbox image
Deploy the image by running:
```bash theme={null}
bl deploy
```
## Create or reuse a sandbox
Create a sandbox from the base image:
```typescript theme={null}
import { SandboxInstance } from "@blaxel/core";
const sandboxName = "my-expo-sandbox";
const sandbox = await SandboxInstance.createIfNotExists({
name: sandboxName,
labels: {
framework: "expo",
},
image: "expo-template:latest",
memory: 8096,
ports: [
{ name: "preview", target: 8081, protocol: "HTTP" },
],
});
```
## Configure CORS for preview URL access
Expo servers require permissive CORS headers when accessed through a preview URL:
```typescript theme={null}
const responseHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS, PATCH",
"Access-Control-Allow-Headers":
"Content-Type, Authorization, X-Requested-With, X-Blaxel-Workspace, X-Blaxel-Preview-Token, X-Blaxel-Authorization",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Content-Length, X-Request-Id",
"Access-Control-Max-Age": "86400",
Vary: "Origin",
};
```
Alternatively, you can use [custom domains](https://docs.blaxel.ai/Infrastructure/Custom-domains) to expose previews on your own domain.
## Create the preview URL
Expo runs on port 8081, so expose that port via a [preview URL](../Sandboxes/Preview-url):
```typescript theme={null}
const preview = await sandbox.previews.createIfNotExists({
metadata: { name: "dev-server-preview" },
spec: {
responseHeaders,
// Must be public for Android scenarios: Expo Go on Android doesn't forward the auth
// cookie to subsequent bundle/asset requests, so private previews
// fail with 401 when fetching the JS bundle.
// WARNING: This makes the preview publicly accessible with no authentication.
// Do not use this pattern for sensitive workloads.
// If Android support is not required, set to false for private previews
public: true,
port: 8081,
},
});
```
## Generate a preview token
To securely access the preview, a token is required:
```typescript theme={null}
const expiresAt = new Date(Date.now() + 1000 * 60 * 60 * 24); // 1 day
const token = await preview.tokens.create(expiresAt);
```
## Inject the preview URL into Expo's app.json
The Expo application router requires the correct origin when running behind a proxy:
```typescript theme={null}
async function addRouterOriginToAppJson(
sandbox: SandboxInstance,
previewUrl: string
) {
const appJsonPath = "/app/app.json";
const appJsonContent = await sandbox.fs.read(appJsonPath);
const appJson = JSON.parse(appJsonContent);
appJson.expo = {
...appJson.expo,
extra: {
...(appJson.expo.extra || {}),
router: {
...(appJson.expo.extra?.router || {}),
origin: previewUrl,
},
},
};
await sandbox.fs.write(appJsonPath, JSON.stringify(appJson, null, 2));
}
```
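The nested-spread merge above is the subtle part: it must set `expo.extra.router.origin` without clobbering sibling keys. It can be factored into a pure function and checked without a sandbox; `withRouterOrigin` is a hypothetical helper name for this sketch, not part of the Blaxel SDK:

```typescript
// Returns a new app.json object with expo.extra.router.origin set,
// preserving any other keys under expo, extra, and router.
function withRouterOrigin(appJson: any, previewUrl: string): any {
  return {
    ...appJson,
    expo: {
      ...appJson.expo,
      extra: {
        ...(appJson.expo?.extra || {}),
        router: {
          ...(appJson.expo?.extra?.router || {}),
          origin: previewUrl,
        },
      },
    },
  };
}

const updated = withRouterOrigin(
  { expo: { name: "demo", extra: { eas: { projectId: "x" } } } },
  "https://preview.example"
);
console.log(updated.expo.extra.router.origin); // "https://preview.example"
console.log(updated.expo.extra.eas.projectId); // "x" — sibling keys preserved
```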
## Configure the proxy URL
Expo must be configured to serve assets through the preview URL. This function checks if the configuration is already correct and only updates if needed:
```typescript theme={null}
async function configureExpoProxyUrl(
sandbox: SandboxInstance,
previewUrl: string
): Promise<boolean> {
const baseUrl = previewUrl.replace(/\/$/, ""); // Remove trailing slash
// Check if the .env file already has the correct proxy URL
let envContent = "";
try {
envContent = await sandbox.fs.read("/app/.env");
} catch {
// File doesn't exist yet
}
const expectedEnvLine = `EXPO_PACKAGER_PROXY_URL=${baseUrl}`;
if (envContent.includes(expectedEnvLine)) {
console.log("Expo proxy URL already configured correctly");
return false; // No changes made
}
// Remove any existing EXPO_PACKAGER_PROXY_URL line and add the new one
const lines = envContent
.split("\n")
.filter((line) => !line.startsWith("EXPO_PACKAGER_PROXY_URL="));
lines.push(expectedEnvLine);
await sandbox.fs.write("/app/.env", lines.join("\n"));
console.log(`Configured Expo to use proxy URL: ${baseUrl}`);
return true; // Changes were made
}
```
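The idempotency logic — skip the write when the expected line is already present, otherwise drop any stale `EXPO_PACKAGER_PROXY_URL` line and append the new one — can be isolated as a pure function for testing. `updateEnvContent` is a hypothetical name used only in this sketch:

```typescript
// Computes the new .env content; `changed` is false when the file
// already contains the expected EXPO_PACKAGER_PROXY_URL line.
function updateEnvContent(
  envContent: string,
  previewUrl: string
): { content: string; changed: boolean } {
  const baseUrl = previewUrl.replace(/\/$/, ""); // strip trailing slash
  const expected = `EXPO_PACKAGER_PROXY_URL=${baseUrl}`;
  if (envContent.includes(expected)) {
    return { content: envContent, changed: false };
  }
  // Remove any stale proxy-URL line, then append the expected one
  const lines = envContent
    .split("\n")
    .filter((line) => !line.startsWith("EXPO_PACKAGER_PROXY_URL="));
  lines.push(expected);
  return { content: lines.join("\n"), changed: true };
}
```

Applying it twice with the same URL is a no-op the second time, which is exactly why `configureExpoProxyUrl` can safely run on every reconnect.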
## Start the dev server
After setting the proxy URL, you can start the dev server:
```typescript theme={null}
async function startDevServer(sandbox: SandboxInstance) {
await sandbox.process.exec({
name: 'dev-server',
command: 'npx expo start --web --port 8081',
workingDir: '/app',
waitForPorts: [8081],
restartOnFailure: true,
maxRestarts: 25,
})
}
```
## Stream logs
Monitor the Expo server output in real-time:
```typescript theme={null}
const logStream = sandbox.process.streamLogs("dev-server", {
onLog(log) {
console.log(log);
},
});
// When done monitoring, close the stream:
logStream.close();
```
## Access the Expo application
Once everything is running, the Expo application will be available at `https://<preview-url>?bl_preview_token=<token>`, using the preview URL and token created in the previous steps.
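Since the token travels as a query parameter, URL encoding matters. A small hypothetical helper (not part of the SDK; the host below is an illustrative example) makes the construction explicit:

```typescript
// Appends the bl_preview_token query parameter to a preview URL,
// letting the URL API handle encoding and normalization.
function buildPreviewUrl(previewUrl: string, token: string): string {
  const url = new URL(previewUrl);
  url.searchParams.set("bl_preview_token", token);
  return url.toString();
}

console.log(buildPreviewUrl("https://my-preview.preview.bl.run", "abc123"));
// https://my-preview.preview.bl.run/?bl_preview_token=abc123
```

Plain string concatenation also works for simple tokens, but `URLSearchParams` guards against tokens containing characters that need escaping.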
## Complete example
Here is a full example combining all the steps:
```typescript theme={null}
import { SandboxInstance } from "@blaxel/core";
const sandboxName = "my-expo-sandbox";
const responseHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS, PATCH",
"Access-Control-Allow-Headers":
"Content-Type, Authorization, X-Requested-With, X-Blaxel-Workspace, X-Blaxel-Preview-Token, X-Blaxel-Authorization",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Content-Length, X-Request-Id",
"Access-Control-Max-Age": "86400",
Vary: "Origin",
};
async function addRouterOriginToAppJson(
sandbox: SandboxInstance,
previewUrl: string
) {
const appJsonPath = "/app/app.json";
const appJsonContent = await sandbox.fs.read(appJsonPath);
const appJson = JSON.parse(appJsonContent);
appJson.expo = {
...appJson.expo,
extra: {
...(appJson.expo.extra || {}),
router: {
...(appJson.expo.extra?.router || {}),
origin: previewUrl,
},
},
};
await sandbox.fs.write(appJsonPath, JSON.stringify(appJson, null, 2));
}
async function configureExpoProxyUrl(
sandbox: SandboxInstance,
previewUrl: string
): Promise<boolean> {
const baseUrl = previewUrl.replace(/\/$/, "");
let envContent = "";
try {
envContent = await sandbox.fs.read("/app/.env");
} catch {
// File doesn't exist
}
const expectedEnvLine = `EXPO_PACKAGER_PROXY_URL=${baseUrl}`;
if (envContent.includes(expectedEnvLine)) {
return false;
}
const lines = envContent
.split("\n")
.filter((line) => !line.startsWith("EXPO_PACKAGER_PROXY_URL="));
lines.push(expectedEnvLine);
await sandbox.fs.write("/app/.env", lines.join("\n"));
return true;
}
async function startDevServer(sandbox: SandboxInstance) {
await sandbox.process.exec({
name: 'dev-server',
command: 'npx expo start --web --port 8081',
workingDir: '/app',
waitForPorts: [8081],
restartOnFailure: true,
maxRestarts: 25,
})
}
async function restartDevServer(sandbox: SandboxInstance) {
try {
await sandbox.process.kill('dev-server')
} catch {
// Process might not be running
}
await new Promise((r) => setTimeout(r, 1000))
await startDevServer(sandbox)
}
async function configureExpo(sandbox: SandboxInstance, previewUrl: string) {
await addRouterOriginToAppJson(sandbox, previewUrl)
const proxyChanged = await configureExpoProxyUrl(sandbox, previewUrl)
const processes = await sandbox.process.list()
const devServerRunning = processes.find((p) => p.name === 'dev-server')
if (proxyChanged && devServerRunning) {
await restartDevServer(sandbox)
} else if (!devServerRunning) {
await startDevServer(sandbox)
}
}
async function main() {
try {
// Create or reuse the sandbox
const sandbox = await SandboxInstance.createIfNotExists({
name: sandboxName,
labels: {
framework: "expo",
},
image: "expo-template:latest",
memory: 8096,
ports: [
{ name: "preview", target: 8081, protocol: "HTTP" },
]
});
// Create preview
const preview = await sandbox.previews.createIfNotExists({
metadata: { name: "preview" },
spec: {
responseHeaders,
// Must be public for Android scenarios: Expo Go on Android doesn't forward the auth
// cookie to subsequent bundle/asset requests, so private previews
// fail with 401 when fetching the JS bundle.
// WARNING: This makes the preview publicly accessible with no authentication.
// Do not use this pattern for sensitive workloads.
// If Android support is not required, set to false for private previews
public: true,
port: 8081,
},
});
// Generate preview token
const expiresAt = new Date(Date.now() + 1000 * 60 * 60 * 24);
const token = await preview.tokens.create(expiresAt);
// Configure Expo (will restart dev server if needed)
await configureExpo(sandbox, preview.spec?.url!);
// Print access URLs
const webUrl = `${preview.spec?.url}?bl_preview_token=${token.value}`;
// Use exps:// (HTTPS variant) so Expo Go connects over TLS.
// Android 9+ blocks cleartext HTTP, and the Blaxel proxy redirects
// HTTP→HTTPS which Android's native downloader can't follow.
const expoUrl = webUrl.replace("https://", "exps://");
console.log(`Web Preview URL: ${webUrl}`);
console.log(`Expo Mobile URL: ${expoUrl}`);
// Stream logs
const logStream = sandbox.process.streamLogs("dev-server", {
onLog(log) {
console.log(log)
},
});
// Keep running until interrupted
process.on("SIGINT", () => {
logStream.close();
process.exit(0);
});
} catch (error) {
console.error("Error:", error);
process.exit(1);
}
}
main();
```
# Run LangChain agents on Blaxel
Source: https://docs.blaxel.ai/Tutorials/LangChain
Learn how to leverage Blaxel with LangChain and LangGraph.
[LangChain](https://www.langchain.com/) is a composable framework to build LLM applications. It can be combined with [LangGraph](https://www.langchain.com/langgraph) which is a stateful, orchestration framework that brings added control to agent workflows. You can deploy your LangChain or LangGraph projects to Blaxel with minimal code editing (and zero configuration), enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more.
## Get started with LangChain on Blaxel
To get started with LangChain/LangGraph on Blaxel:
* If you already have a LangChain or LangGraph agent, adapt your code with [Blaxel SDK commands](../Agents/Develop-an-agent) to connect to [MCP servers](../Functions/Overview), [LLMs](../Models/Overview), and [other agents](../Agents/Overview).
* Otherwise, initialize an example project by running the following Blaxel CLI command and selecting the *LangChain hello world* template:
```bash theme={null}
bl new agent
```
[Deploy](../Agents/Deploy-an-agent) it by running:
```bash theme={null}
bl deploy
```
## Develop a LangChain agent using Blaxel features
While building your agent in LangChain, use Blaxel [SDK](../sdk-reference/introduction) to connect to resources already hosted on Blaxel:
* [MCP servers](../Functions/Overview)
* [LLMs](../Models/Overview)
* [other agents](../Agents/Overview)
### Connect to MCP servers
Connect to [MCP servers](../Functions/Overview) using the Blaxel SDK to access pre-built or custom tool servers hosted on Blaxel. This eliminates the need to manage server connections yourself, with credentials stored securely on the platform.
Run the following command to retrieve tools in LangChain format:
```python Python theme={null}
from blaxel.langgraph import bl_tools
tools = await bl_tools(['mcp-server-name'])
```
```typescript TypeScript theme={null}
import { blTools } from '@blaxel/langgraph';
const tools = await blTools(['mcp-server-name'])
```
### Connect to LLMs
Connect to [LLMs](../Models/Overview) hosted on Blaxel using the SDK to avoid managing model API connections yourself. All credentials remain securely stored on the platform.
```python Python theme={null}
from blaxel.langgraph import bl_model
model = await bl_model("model-api-name")
```
```typescript TypeScript theme={null}
import { blModel } from "@blaxel/langgraph";
const model = await blModel("model-api-name");
```
### Connect to other agents
Connect to other agents hosted on Blaxel from your code by using the [Blaxel SDK](../sdk-reference/introduction). This allows for multi-agent chaining without managing connections yourself. This command is independent of the framework used to build the agent.
```python Python theme={null}
from blaxel.core.agents import bl_agent
response = await bl_agent("agent-name").run(input)
```
```typescript TypeScript theme={null}
import { blAgent } from "@blaxel/core";
const myAgentResponse = await blAgent("agent-name").run(input);
```
### Host your agent on Blaxel
You can [deploy](../Agents/Deploy-an-agent) your agent on Blaxel, enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more. This command is independent of the framework used to build the agent.
Either run the following CLI command from the root of your agent repository:
```bash theme={null}
bl deploy
```
Or [connect a GitHub repository to Blaxel](../Agents/Github-integration) for automatic deployments every time you push to *main*.
# Run LlamaIndex agents on Blaxel
Source: https://docs.blaxel.ai/Tutorials/LlamaIndex
Learn how to leverage Blaxel with LlamaIndex agents.
You can deploy your [LlamaIndex](https://www.llamaindex.ai/) projects to Blaxel with minimal code editing (and zero configuration), enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more.
## Get started with LlamaIndex on Blaxel
To get started with LlamaIndex on Blaxel:
* If you already have a LlamaIndex agent, adapt your code with [Blaxel SDK commands](../Agents/Develop-an-agent) to connect to [MCP servers](../Functions/Overview), [LLMs](../Models/Overview), and [other agents](../Agents/Overview).
* Otherwise, initialize an example project by running the following Blaxel CLI command and selecting the *LlamaIndex hello world* template:
```bash theme={null}
bl new agent
```
[Deploy](../Agents/Deploy-an-agent) it by running:
```bash theme={null}
bl deploy
```
## Develop a LlamaIndex agent using Blaxel features
While building your agent in LlamaIndex, use Blaxel [SDK](../sdk-reference/introduction) to connect to resources already hosted on Blaxel:
* [MCP servers](../Functions/Overview)
* [LLMs](../Models/Overview)
* [other agents](../Agents/Overview)
### Connect to MCP servers
Connect to [MCP servers](../Functions/Overview) using the Blaxel SDK to access pre-built or custom tool servers hosted on Blaxel. This eliminates the need to manage server connections yourself, with credentials stored securely on the platform.
Run the following command to retrieve tools in LlamaIndex format:
```python Python theme={null}
from blaxel.llamaindex import bl_tools
tools = await bl_tools(['mcp-server-name'])
```
```typescript TypeScript theme={null}
import { blTools } from '@blaxel/llamaindex';
const tools = await blTools(['mcp-server-name'])
```
### Connect to LLMs
Connect to [LLMs](../Models/Overview) hosted on Blaxel using the SDK to avoid managing model API connections yourself. All credentials remain securely stored on the platform.
```python Python theme={null}
from blaxel.llamaindex import bl_model
model = await bl_model("model-api-name")
```
```typescript TypeScript theme={null}
import { blModel } from "@blaxel/llamaindex";
const model = await blModel("model-api-name");
```
### Connect to other agents
Connect to other agents hosted on Blaxel from your code by using the [Blaxel SDK](../sdk-reference/introduction). This allows for multi-agent chaining without managing connections yourself. This command is independent of the framework used to build the agent.
```python Python theme={null}
from blaxel.core.agents import bl_agent
response = await bl_agent("agent-name").run(input)
```
```typescript TypeScript theme={null}
import { blAgent } from "@blaxel/core";
const myAgentResponse = await blAgent("agent-name").run(input);
```
### Host your agent on Blaxel
You can [deploy](../Agents/Deploy-an-agent) your agent on Blaxel, enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more. This command is independent of the framework used to build the agent.
Either run the following CLI command from the root of your agent repository:
```bash theme={null}
bl deploy
```
Or [connect a GitHub repository to Blaxel](../Agents/Github-integration) for automatic deployments every time you push to *main*.
# Run Mastra agents on Blaxel
Source: https://docs.blaxel.ai/Tutorials/Mastra
Deploy Mastra TypeScript agent projects on Blaxel with serverless hosting, observability, and connections to MCP servers and LLM APIs.
You can deploy your [Mastra](https://mastra.ai/) projects to Blaxel with minimal code editing (and zero configuration), enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more.
## Get started with Mastra on Blaxel
To get started with Mastra on Blaxel:
* If you already have a Mastra agent, adapt your code with [Blaxel SDK commands](../Agents/Develop-an-agent) to connect to [MCP servers](../Functions/Overview), [LLMs](../Models/Overview), and [other agents](../Agents/Overview).
* Otherwise, initialize an example project by running the following Blaxel CLI command and selecting the *Mastra hello world* template:
```bash theme={null}
bl new agent
```
[Deploy](../Agents/Deploy-an-agent) it by running:
```bash theme={null}
bl deploy
```
## Develop a Mastra agent using Blaxel features
While building your agent in Mastra, use Blaxel [SDK](../sdk-reference/introduction) to connect to resources already hosted on Blaxel:
* [MCP servers](../Functions/Overview)
* [LLMs](../Models/Overview)
* [other agents](../Agents/Overview)
### Connect to MCP servers
Connect to [MCP servers](../Functions/Overview) using the Blaxel SDK to access pre-built or custom tool servers hosted on Blaxel. This eliminates the need to manage server connections yourself, with credentials stored securely on the platform.
Run the following command to retrieve tools in Mastra format:
```typescript TypeScript theme={null}
import { blTools } from '@blaxel/mastra';
const tools = await blTools(['mcp-server-name'])
```
### Connect to LLMs
Connect to [LLMs](../Models/Overview) hosted on Blaxel using the SDK to avoid managing model API connections yourself. All credentials remain securely stored on the platform.
```typescript TypeScript theme={null}
import { blModel } from "@blaxel/mastra";
const model = await blModel("model-api-name");
```
### Connect to other agents
Connect to other agents hosted on Blaxel from your code by using the [Blaxel SDK](../sdk-reference/introduction). This allows for multi-agent chaining without managing connections yourself. This command is independent of the framework used to build the agent.
```typescript TypeScript theme={null}
import { blAgent } from "@blaxel/core";
const myAgentResponse = await blAgent("agent-name").run(input);
```
### Host your agent on Blaxel
You can [deploy](../Agents/Deploy-an-agent) your agent on Blaxel, enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more. This command is independent of the framework used to build the agent.
Either run the following CLI command from the root of your agent repository:
```bash theme={null}
bl deploy
```
Or [connect a GitHub repository to Blaxel](../Agents/Github-integration) for automatic deployments every time you push to *main*.
# Run Next.js in a sandbox
Source: https://docs.blaxel.ai/Tutorials/Nextjs
Configure a Next.js application to run in a Blaxel sandbox
This tutorial explains how to run a Next.js application inside a Blaxel sandbox and expose it securely using sandbox preview URLs.
## Prerequisites
Before starting, ensure you have:
* [Blaxel CLI](../cli-reference/introduction) installed and authenticated (`bl login`)
* Node.js 18+ installed
* `@blaxel/core` package installed in your project (`npm install @blaxel/core`)
## Architecture overview
Running Next.js inside a Blaxel sandbox requires a few adjustments compared to local development:
* Running the Next.js dev server on port 3000
* Exposing the Next.js dev server via a Blaxel preview URL
## Create a base sandbox image
### Dockerfile
```dockerfile theme={null}
FROM node:22-alpine
RUN apk update && apk add --no-cache \
git \
curl \
netcat-openbsd \
&& rm -rf /var/cache/apk/*
WORKDIR /app
COPY --from=ghcr.io/blaxel-ai/sandbox:latest /sandbox-api /usr/local/bin/sandbox-api
# Create Next.js project with TypeScript, Tailwind, App Router, and Turbopack
RUN mkdir -p /app \
&& npx create-next-app@latest /app --use-npm --typescript --eslint --tailwind --src-dir --app --import-alias "@/*" --no-git --yes \
&& cd /app && npm install --save @next/swc-linux-x64-musl
COPY ./next.config.ts /app/next.config.ts
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Add npm global modules to PATH
ENV PATH="/usr/local/bin:$PATH"
ENTRYPOINT ["/entrypoint.sh"]
```
### entrypoint.sh
Create an entrypoint script that starts the sandbox API and the dev server:
```bash theme={null}
#!/bin/sh
# Set environment variables
export PATH="/usr/local/bin:$PATH"
# Start sandbox-api in the background
/usr/local/bin/sandbox-api &
# Function to wait for port to be available
wait_for_port() {
local port=$1
local timeout=30
local count=0
echo "Waiting for port $port to be available..."
while ! nc -z localhost $port; do
sleep 1
count=$((count + 1))
if [ $count -gt $timeout ]; then
echo "Timeout waiting for port $port"
exit 1
fi
done
echo "Port $port is now available"
}
# Wait for port 8080 to be available
wait_for_port 8080
# Execute curl command to start Next.js dev server
echo "Running Next.js dev server..."
curl http://localhost:8080/process \
-X POST \
-H "Content-Type: application/json" \
-d '{
"name": "dev-server",
"workingDir": "/app",
"command": "npm run dev -- --port 3000",
"waitForCompletion": false,
"restartOnFailure": true,
"maxRestarts": 25
}'
wait
```
### next.config.ts
Create a `next.config.ts` file to configure Next.js for use with Blaxel previews:
```typescript theme={null}
import type { NextConfig } from "next";
const nextConfig: NextConfig = {
/* config options here */
allowedDevOrigins: ["*.preview.bl.run"],
};
export default nextConfig;
```
The `allowedDevOrigins: ["*.preview.bl.run"]` setting allows the dev server to accept requests from the Blaxel preview origin, which is required for Blaxel preview URLs to work correctly. If you have configured a custom domain in your Blaxel workspace, you should also add it to this array (e.g., `["*.preview.bl.run", "*.preview.mycompany.com"]`).
### blaxel.toml
Create a `blaxel.toml` file in the same directory as your Dockerfile:
```toml theme={null}
type = "sandbox"
name = "nextjs-template"
[runtime]
memory = 4096
[[runtime.ports]]
name = "nextjs-dev"
target = 3000
protocol = "tcp"
```
## Deploy the sandbox image
Deploy the image by running:
```bash theme={null}
bl deploy
```
## Create or reuse a sandbox
Create a sandbox from the base image:
```typescript theme={null}
import { SandboxInstance } from "@blaxel/core";
const sandboxName = "my-nextjs-sandbox";
const sandbox = await SandboxInstance.createIfNotExists({
name: sandboxName,
labels: {
framework: "nextjs",
},
image: "nextjs-template:latest",
memory: 4096,
ports: [
{ name: "preview", target: 3000, protocol: "HTTP" },
],
});
```
## Configure CORS for preview URL access
Next.js dev servers work well with permissive CORS headers when accessed through a preview URL:
```typescript theme={null}
const responseHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS, PATCH",
"Access-Control-Allow-Headers":
"Content-Type, Authorization, X-Requested-With, X-Blaxel-Workspace, X-Blaxel-Preview-Token, X-Blaxel-Authorization",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Content-Length, X-Request-Id",
"Access-Control-Max-Age": "86400",
Vary: "Origin",
};
```
Alternatively, you can use [custom domains](https://docs.blaxel.ai/Infrastructure/Custom-domains) to expose previews on your own domain.
## Create the preview URL
Next.js runs on port 3000, so we expose that port via a [preview URL](../Sandboxes/Preview-url):
```typescript theme={null}
const preview = await sandbox.previews.createIfNotExists({
metadata: { name: "dev-server-preview" },
spec: {
responseHeaders,
public: false,
port: 3000,
},
});
```
## Generate a preview token
To securely access the preview, a token is required:
```typescript theme={null}
const expiresAt = new Date(Date.now() + 1000 * 60 * 60 * 24); // 1 day
const token = await preview.tokens.create(expiresAt);
```
## Start the dev server
If not using the entrypoint script, you can start the dev server programmatically:
```typescript theme={null}
async function startDevServer(sandbox: SandboxInstance) {
console.log("Starting Next.js dev server...");
await sandbox.process.exec({
name: "dev-server",
command: "npm run dev -- --port 3000",
workingDir: "/app",
waitForPorts: [3000],
restartOnFailure: true,
maxRestarts: 25,
});
}
```
## Stream logs
To monitor the Next.js dev server output in real-time:
```typescript theme={null}
const logStream = sandbox.process.streamLogs("dev-server", {
onLog(log) {
console.log(log);
},
});
// When done monitoring, close the stream:
logStream.close();
```
## Access the Next.js application
Once everything is running, the Next.js application will be available at `https://<preview-url>?bl_preview_token=<token>`, using the preview URL and token created in the previous steps.
## Complete example
Here is a full example combining all the steps:
```typescript theme={null}
import { SandboxInstance } from "@blaxel/core";
const sandboxName = "my-nextjs-sandbox";
const responseHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS, PATCH",
"Access-Control-Allow-Headers":
"Content-Type, Authorization, X-Requested-With, X-Blaxel-Workspace, X-Blaxel-Preview-Token, X-Blaxel-Authorization",
"Access-Control-Allow-Credentials": "true",
"Access-Control-Expose-Headers": "Content-Length, X-Request-Id",
"Access-Control-Max-Age": "86400",
Vary: "Origin",
};
async function startDevServer(sandbox: SandboxInstance) {
await sandbox.process.exec({
name: "dev-server",
command: "npm run dev -- --port 3000",
workingDir: "/app",
waitForPorts: [3000],
restartOnFailure: true,
maxRestarts: 25,
});
}
async function main() {
try {
// Create or reuse the sandbox
const sandbox = await SandboxInstance.createIfNotExists({
name: sandboxName,
labels: {
framework: "nextjs",
},
image: "nextjs-template:latest",
memory: 4096,
ports: [
{ name: "preview", target: 3000, protocol: "HTTP" },
]
});
// Create preview
const preview = await sandbox.previews.createIfNotExists({
metadata: { name: "preview" },
spec: {
responseHeaders,
public: false,
port: 3000,
},
});
// Generate preview token
const expiresAt = new Date(Date.now() + 1000 * 60 * 60 * 24);
const token = await preview.tokens.create(expiresAt);
// Start dev server if not already running
const processes = await sandbox.process.list();
if (!processes.find((p) => p.name === "dev-server")) {
await startDevServer(sandbox);
}
// Print access URL
const webUrl = `${preview.spec?.url}?bl_preview_token=${token.value}`;
console.log(`Next.js Preview URL: ${webUrl}`);
// Stream logs
const logStream = sandbox.process.streamLogs("dev-server", {
onLog(log) {
console.log(log);
},
});
// Keep running until interrupted
process.on("SIGINT", () => {
logStream.close();
process.exit(0);
});
} catch (error) {
console.error("Error:", error);
process.exit(1);
}
}
main();
```
## Template features
### Turbopack
The template uses Turbopack, Next.js's Rust-based bundler, for significantly faster development builds. Turbopack provides:
* Faster cold starts
* Instant hot module replacement (HMR)
* Optimized incremental compilation
The `@next/swc-linux-x64-musl` package is pre-installed for optimal performance on Alpine Linux.
### App Router
The template comes pre-configured with the App Router (`/app` directory structure). This provides:
* Server Components by default
* Nested layouts
* Loading and error states
* Server Actions
### TypeScript
Full TypeScript support is enabled out of the box with strict type checking.
### Tailwind CSS
Tailwind CSS is pre-configured for styling. The `src/app/globals.css` file includes the Tailwind directives.
### ESLint
ESLint is configured with Next.js recommended rules for code quality.
# Run OpenAI Agents SDK agents on Blaxel
Source: https://docs.blaxel.ai/Tutorials/OpenAI-Agents
Learn how to leverage Blaxel with OpenAI Agents framework.
You can deploy your [OpenAI Agents SDK](https://developers.openai.com/api/docs/guides/agents-sdk) projects to Blaxel with minimal code editing (and zero configuration), enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more.
## Get started with OpenAI Agents on Blaxel
To get started with OpenAI Agents SDK on Blaxel:
* if you already have an agent built with OpenAI Agents, adapt your code with [Blaxel SDK commands](../Agents/Develop-an-agent) to connect to [MCP servers](../Functions/Overview), [LLMs](../Models/Overview) and [other agents](../Agents/Overview).
* or initialize an example project with OpenAI Agents by using the following Blaxel CLI command and selecting the *OpenAI Agents hello world:*
```bash theme={null}
bl new agent
```
[Deploy](../Agents/Deploy-an-agent) it by running:
```bash theme={null}
bl deploy
```
## Develop with OpenAI Agents using Blaxel features
While building your agent with OpenAI Agents SDK, use Blaxel [SDK](../sdk-reference/introduction) to connect to resources already hosted on Blaxel:
* [MCP servers](../Functions/Overview)
* [LLMs](../Models/Overview)
* [other agents](../Agents/Overview)
### Connect to MCP servers
Connect to [MCP servers](../Functions/Overview) using the Blaxel SDK to access pre-built or custom tool servers hosted on Blaxel. This eliminates the need to manage server connections yourself, with credentials stored securely on the platform.
Run the following command to retrieve tools in OpenAI Agents format:
```python Python theme={null}
from blaxel.openai import bl_tools
await bl_tools(['mcp-server-name'])
```
### Connect to LLMs
Connect to [LLMs](../Models/Overview) hosted on Blaxel using the SDK to avoid managing model API connections yourself. All credentials remain securely stored on the platform.
```python Python theme={null}
from blaxel.openai import bl_model
model = await bl_model("model-api-name")
```
### Connect to other agents
Connect to other agents hosted on Blaxel from your code by using the [Blaxel SDK](../sdk-reference/introduction). This allows for multi-agent chaining without managing connections yourself. This command is independent of the framework used to build the agent.
```python Python theme={null}
from blaxel.core.agents import bl_agent
response = await bl_agent("agent-name").run(input)
```
### Host your agent on Blaxel
You can [deploy](../Agents/Deploy-an-agent) your agent on Blaxel, enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more. This command is independent of the framework used to build the agent.
Either run the following CLI command from the root of your agent repository:
```bash theme={null}
bl deploy
```
Or [connect a GitHub repository to Blaxel](../Agents/Github-integration) for automatic deployments every time you push on *main*.
# Run OpenClaw on Blaxel
Source: https://docs.blaxel.ai/Tutorials/OpenClaw
Deploy and run OpenClaw, an open-source coding agent, inside a Blaxel sandbox with persistent storage and live previews.
This tutorial explains how to deploy and run [OpenClaw](https://openclaw.ai) on Blaxel.
OpenClaw can be manually installed and configured to run inside a Blaxel sandbox.
## Prerequisites
Before starting, ensure you have:
* a [Blaxel account](https://blaxel.ai)
* an API key from any of the [supported model providers](https://docs.openclaw.ai/concepts/model-providers)
* a Python development environment
## Create a sandbox
1. [Download and install the Blaxel CLI](https://docs.blaxel.ai/cli-reference/introduction#install) and log in to your Blaxel account:
```shell theme={null}
bl login
```
2. In a new directory, install the Blaxel [Python SDK](https://github.com/blaxel-ai/sdk-python) (there's also a [TypeScript SDK](https://github.com/blaxel-ai/sdk-typescript)):
```shell theme={null}
python3 -m venv .venv
source .venv/bin/activate
pip install blaxel
```
3. Create a script named `main.py` in the same directory:
```python theme={null}
import asyncio
from datetime import datetime, timedelta, UTC
from blaxel.core import SandboxInstance
async def main():
# Create sandbox
sandbox = await SandboxInstance.create_if_not_exists({
"name": "openclaw-sandbox",
"image": "blaxel/node:latest",
"memory": 4096,
"ports": [{ "target": 18789, "protocol": "HTTP" }],
"region": "us-pdx-1",
})
# Create preview
preview = await sandbox.previews.create_if_not_exists({
"metadata": {"name": "openclaw-gateway"},
"spec": {
"port": 18789,
"public": False,
}
})
# Create preview token
# Valid for 24 hours
expires_at = datetime.now(UTC) + timedelta(hours=24)
token = await preview.tokens.create(expires_at)
# Get preview URL and token
print(f"Preview URL: {preview.spec.url}")
print(f"Token: {token.value}")
if __name__ == "__main__":
asyncio.run(main())
```
This script:
* creates a new Blaxel sandbox named `openclaw-sandbox` using Blaxel's Node.js base image;
* opens sandbox port 18789, which OpenClaw uses for WebSocket and HTTP connections;
* creates a preview URL for the service running on that port; and
* creates an access token for the preview URL, valid for 24 hours.
4. Run the script to create the sandbox and preview URL:
```shell theme={null}
python main.py
```
Once complete, the script displays the generated preview URL (for example, `https://b186....preview.bl.run`) and the preview access token (for example, `cbba622560db78e...`). Note these values, as you will need them in later steps.
## Install and configure OpenClaw
1. Connect to the Blaxel sandbox terminal:
```shell theme={null}
bl connect sandbox openclaw-sandbox
```
2. Execute the following commands to install OpenClaw in the sandbox:
```shell theme={null}
apk add curl bash make cmake g++ build-base linux-headers jq
npm install -g openclaw@latest
```
For detailed installation instructions, refer to the [OpenClaw documentation](https://docs.openclaw.ai/install).
3. Configure OpenClaw:
```shell theme={null}
openclaw onboard
```
Read and accept the security warning to proceed.
Select the **Quickstart** mode to proceed. You will be prompted for more information, including choosing a model provider, channels, skills and hooks.
At minimum, you must select a model provider and a model, and enter the required API key. All other steps are optional and can be skipped if you don't have the details yet.
4. Once the configuration process is complete, OpenClaw displays status output. This usually includes a message that `systemd` services are unavailable. This is expected: for performance reasons, Blaxel sandboxes do not include the `systemd` process manager.
5. The status output also displays a tokenized dashboard link. Note the dashboard token (for example, `e782efff66...`), as it will be required in the next step.
## Configure OpenClaw Gateway access
1. Edit the OpenClaw configuration file at `/blaxel/.openclaw/openclaw.json` and add the preview URL to the list of allowed origins:
```json theme={null}
"gateway": {
"controlUi": {
"allowedOrigins": ["https://b186....preview.bl.run"]
},
// ...
}
```
An alternative way to do this is with `jq`:
```shell theme={null}
jq '.gateway.controlUi = {"allowedOrigins":["https://b186....preview.bl.run"]}' /blaxel/.openclaw/openclaw.json > /blaxel/.openclaw/openclaw.json.new && mv /blaxel/.openclaw/openclaw.json.new /blaxel/.openclaw/openclaw.json
```
2. Start the OpenClaw Gateway service manually, and keep it running:
```shell theme={null}
openclaw gateway --bind lan --verbose
```
Confirm that you see log messages like the ones below about the Gateway service starting and listening for requests:
```shell theme={null}
14:25:22 [gateway] agent model: google/gemini-2.5-flash-preview-09-2025
14:25:22 [gateway] listening on ws://0.0.0.0:18789 (PID 1660)
14:25:22 [gateway] log file: /tmp/openclaw/openclaw-2026-02-06.log
14:25:22 [ws] → event health seq=1 clients=0 presenceVersion=1 healthVersion=2
```
**IMPORTANT: This is the OpenClaw Gateway process. Make sure that it is running for the rest of these instructions.**
3. Browse to the sandbox preview URL, remembering to also include the preview access token in the URL string, for example `https://b186....preview.bl.run?bl_preview_token=cbba...`. This displays the OpenClaw Control UI.
4. Navigate to the **Overview** page and enter the dashboard token in the **Gateway Token** field.
5. The Control UI also displays the error `pairing required`. This is because the OpenClaw Gateway requires a [one-time pairing approval](https://docs.openclaw.ai/web/control-ui#device-pairing-first-connection) for connections from a new browser/device.
6. Connect to the sandbox terminal **in a separate terminal window** (so that you don't kill the running Gateway process):
```shell theme={null}
bl connect sandbox openclaw-sandbox
```
7. List the available pairing requests:
```shell theme={null}
openclaw devices list
```
8. Typically, there will only be one pending pairing request. Approve it using its request identifier:
```shell theme={null}
openclaw devices approve f06d4e9b...
```
9. Browse to the sandbox preview URL again. The Control UI should now be fully functional and ready to accept requests, as verified by the health check in the top-right corner.
## Test OpenClaw
Navigate to the **Chat** page, enter a prompt, and wait for a reply to confirm that the OpenClaw agent is working.
## Configure OpenClaw channels
OpenClaw supports multiple channels. To enable them, configure them through the Control UI after connecting, or add them to the `openclaw.json` file.
As mentioned earlier, the OpenClaw Gateway process must remain running while you interact with the agent. However, if you end your interactive shell session with the Blaxel sandbox, the Gateway process will terminate as well. To keep the Gateway running even when you're not in an active shell session with the sandbox, create and execute the following script:
```python theme={null}
import asyncio
from blaxel.core import SandboxInstance
async def main():
# Get sandbox
sandbox = await SandboxInstance.get("openclaw-sandbox")
# Start process
process = await sandbox.process.exec({
"name": "start-openclaw-gateway",
"command": "openclaw gateway --bind lan --verbose",
"restart_on_failure": True,
"max_restarts": 5
})
if __name__ == "__main__":
asyncio.run(main())
```
# Run PydanticAI agents on Blaxel
Source: https://docs.blaxel.ai/Tutorials/PydanticAI
Learn how to leverage Blaxel with PydanticAI agents.
You can deploy your [PydanticAI](https://ai.pydantic.dev/) projects to Blaxel with minimal code editing (and zero configuration), enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more.
## Get started with PydanticAI on Blaxel
To get started with PydanticAI on Blaxel:
* if you already have a PydanticAI agent, adapt your code with [Blaxel SDK commands](../Agents/Develop-an-agent) to connect to [MCP servers](../Functions/Overview), [LLMs](../Models/Overview) and [other agents](../Agents/Overview).
* or initialize an example project in PydanticAI by using the following Blaxel CLI command and selecting the *PydanticAI hello world:*
```bash theme={null}
bl new agent
```
[Deploy](../Agents/Deploy-an-agent) it by running:
```bash theme={null}
bl deploy
```
## Develop a PydanticAI agent using Blaxel features
While building your agent in PydanticAI, use Blaxel [SDK](../sdk-reference/introduction) to connect to resources already hosted on Blaxel:
* [MCP servers](../Functions/Overview)
* [LLMs](../Models/Overview)
* [other agents](../Agents/Overview)
### Connect to MCP servers
Connect to [MCP servers](../Functions/Overview) using the Blaxel SDK to access pre-built or custom tool servers hosted on Blaxel. This eliminates the need to manage server connections yourself, with credentials stored securely on the platform.
Run the following command to retrieve tools in PydanticAI format:
```python Python theme={null}
from blaxel.pydantic import bl_tools
await bl_tools(['mcp-server-name'])
```
### Connect to LLMs
Connect to [LLMs](../Models/Overview) hosted on Blaxel using the SDK to avoid managing model API connections yourself. All credentials remain securely stored on the platform.
```python Python theme={null}
from blaxel.pydantic import bl_model
model = await bl_model("model-api-name")
```
### Connect to other agents
Connect to other agents hosted on Blaxel from your code by using the [Blaxel SDK](../sdk-reference/introduction). This allows for multi-agent chaining without managing connections yourself. This command is independent of the framework used to build the agent.
```python Python theme={null}
from blaxel.core.agents import bl_agent
response = await bl_agent("agent-name").run(input)
```
### Host your agent on Blaxel
You can [deploy](../Agents/Deploy-an-agent) your agent on Blaxel, enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more. This command is independent of the framework used to build the agent.
Either run the following CLI command from the root of your agent repository:
```bash theme={null}
bl deploy
```
Or [connect a GitHub repository to Blaxel](../Agents/Github-integration) for automatic deployments every time you push on *main*.
# Overview
Source: https://docs.blaxel.ai/Tutorials/Sandboxes-Overview
Run Web applications inside a Blaxel sandbox and expose them securely
Blaxel lets you run Web applications inside a Blaxel sandbox and preview the content in real time using a browser client. This is ideal for preview environments, internal demos, and AI-powered coding workflows built on Blaxel.
* Run Astro in a sandbox.
* Run Expo in a sandbox.
* Run Next.js in a sandbox.
# Run Vercel AI SDK agents on Blaxel
Source: https://docs.blaxel.ai/Tutorials/Vercel-AI
Learn how to leverage Blaxel with Vercel AI SDK agents.
You can deploy your [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction) projects to Blaxel with minimal code editing (and zero configuration), enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more.
## Get started with Vercel AI SDK on Blaxel
To get started with Vercel AI SDK on Blaxel:
* if you already have a Vercel AI agent, adapt your code with [Blaxel SDK commands](../Agents/Develop-an-agent) to connect to [MCP servers](../Functions/Overview), [LLMs](../Models/Overview) and [other agents](../Agents/Overview).
* or initialize an example project with Vercel AI SDK by using the following Blaxel CLI command and selecting the *Vercel AI hello world:*
```bash theme={null}
bl new agent
```
[Deploy](../Agents/Deploy-an-agent) it by running:
```bash theme={null}
bl deploy
```
## Develop a Vercel AI agent using Blaxel features
While building your agent with Vercel AI SDK, use Blaxel [SDK](../sdk-reference/introduction) to connect to resources already hosted on Blaxel:
* [MCP servers](../Functions/Overview)
* [LLMs](../Models/Overview)
* [other agents](../Agents/Overview)
### Connect to MCP servers
Connect to [MCP servers](../Functions/Overview) using the Blaxel SDK to access pre-built or custom tool servers hosted on Blaxel. This eliminates the need to manage server connections yourself, with credentials stored securely on the platform.
Run the following command to retrieve tools in Vercel AI format:
```typescript TypeScript theme={null}
import { blTools } from '@blaxel/vercel';
const tools = await blTools(['mcp-server-name'])
```
### Connect to LLMs
Connect to [LLMs](../Models/Overview) hosted on Blaxel using the SDK to avoid managing model API connections yourself. All credentials remain securely stored on the platform.
```typescript TypeScript theme={null}
import { blModel } from "@blaxel/vercel";
const model = await blModel("model-api-name");
```
### Connect to other agents
Connect to other agents hosted on Blaxel from your code by using the [Blaxel SDK](../sdk-reference/introduction). This allows for multi-agent chaining without managing connections yourself. This command is independent of the framework used to build the agent.
```typescript TypeScript theme={null}
import { blAgent } from "@blaxel/core";
const myAgentResponse = await blAgent("agent-name").run(input);
```
### Host your agent on Blaxel
You can [deploy](../Agents/Deploy-an-agent) your agent on Blaxel, enabling you to use [Serverless Deployments](../Infrastructure/Global-Inference-Network), [Agentic Observability](../Observability/Overview), [Policies](../Model-Governance/Policies), and more. This command is independent of the framework used to build the agent.
Either run the following CLI command from the root of your agent repository:
```bash theme={null}
bl deploy
```
Or [connect a GitHub repository to Blaxel](../Agents/Github-integration) for automatic deployments every time you push on *main*.
# Orchestrate agents with n8n on Blaxel
Source: https://docs.blaxel.ai/Tutorials/n8n
Build n8n workflows that forward chat messages to Blaxel-hosted AI agents via HTTP requests, with step-by-step setup and configuration.
This tutorial walks you through integrating your AI agents deployed on Blaxel into automated workflows using [n8n](https://n8n.io/). Whether you're new to Blaxel, n8n, or both, it will help you get started quickly with a minimal setup that you can build on.
## What You’ll Build
This is a simple n8n workflow that:
1. listens for chat messages,
2. then forwards those messages as inputs to your [AI agent on Blaxel](../Agents/Overview) via an HTTP request.
Here's a minimal JSON snippet that demonstrates the workflow:
```json theme={null}
{
"name": "Demo: My first AI Agent in n8n",
"nodes": [
{
"parameters": {
"options": {}
},
"id": "5b410409-5b0b-47bd-b413-5b9b1000a063",
"name": "When chat message received",
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"typeVersion": 1.1,
"position": [660, -200],
"webhookId": "a889d2ae-2159-402f-b326-5f61e90f602e"
},
{
"parameters": {
"method": "POST",
"url": "https://run.blaxel.ai/{YOUR-WORKSPACE}/agents/{YOUR-AGENT}",
"authentication": "genericCredentialType",
"genericAuthType": "httpHeaderAuth",
"sendBody": true,
"bodyParameters": {
"parameters": [
{
"name": "inputs",
"value": "={{ $json.chatInput }}"
}
]
},
"options": {}
},
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [1040, -200],
"id": "d389abf6-09cd-4fad-88fa-4a8c098bddf5",
"name": "HTTP Request",
"credentials": {
"httpHeaderAuth": {
"id": "{YOUR_AUTH_ACCOUNT_ID}",
"name": "Header Auth account"
}
}
}
],
"pinData": {},
"connections": {
"When chat message received": {
"main": [
[
{
"node": "HTTP Request",
"type": "main",
"index": 0
}
]
]
},
"HTTP Request": {
"main": [
[]
]
}
},
"active": false,
"settings": {
"executionOrder": "v1"
},
"versionId": "f82cb549-fa06-4cbe-9268-76451dd8e7fc",
"meta": {
"templateId": "PT1i+zU92Ii5O2XCObkhfHJR5h9rNJTpiCIkYJk9jHU=",
"templateCredsSetupCompleted": true,
"instanceId": "b90a39a88ba2a73793446bbe14503ff3b070f8a0ec6fce01ee5b4761919441e1"
},
"id": "Xu7ugYZKH0Dzn9hQ",
"tags": []
}
```
## Step 1: Update the URL Field
Before running your workflow, **update the URL field** in the HTTP Request node to match your [agent’s URL](../Agents/Query-agents). Replace `https://run.blaxel.ai/{YOUR-WORKSPACE}/agents/{YOUR-AGENT}` with your actual workspace and agent identifiers.
## Step 2: Configure Header Authentication
To secure your API calls, you must set up header authentication. Follow these two key steps:
1. **Set up the header auth credentials:**
Ensure that your HTTP Request node is set to the ***Header Auth*** authentication type.
2. **Create Credentials:**
Fill out the form with the following details. For more details on obtaining your Blaxel API key, refer to this [Access Tokens documentation](../Security/Access-tokens#api-keys).
* **Name:** `Authorization`
* **Value:** `Bearer <YOUR_API_KEY>`
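Under the hood, the HTTP Request node amounts to a POST with a JSON body and a bearer token. A rough Python equivalent (the helper and its names are illustrative, not part of any SDK) builds the same request:

```python
# Illustrative sketch of the request the n8n HTTP Request node sends.
import json
from urllib.request import Request

def build_agent_request(workspace: str, agent: str, api_key: str, chat_input: str) -> Request:
    # Same URL shape as the workflow's HTTP Request node.
    url = f"https://run.blaxel.ai/{workspace}/agents/{agent}"
    # The node maps the chat input to the "inputs" body parameter.
    body = json.dumps({"inputs": chat_input}).encode()
    return Request(
        url,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
```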
Your n8n workflow is ready to launch!
Hooking up your Blaxel AI agents with n8n unlocks a lot of automation power. The bare-bones setup we just walked through only scratches the surface: think of it as your "Hello World" moment before moving on to more advanced workflows, such as multiple AI agents working together.
# Overview
Source: https://docs.blaxel.ai/Volumes/Overview
Persist files long-term by attaching volumes to your resources.
Blaxel Volumes provide **persistent storage that survives resource destruction and recreation**, enabling stateful environments and data retention across lifecycle events.
While Blaxel automatically snapshots the full state of a sandbox at scale down and stores it in warm storage for ultra-fast boot times, volumes offer a more cost-effective solution to persist files for weeks to years. Use [volume templates](/Volumes/Volumes-templates) to start from a pre-populated filesystem.
## Create a volume
To create a standalone volume, you must provide a unique `name` and specify its `size` in megabytes (MB). You can also specify optional labels. This volume exists independently of any resource it may later be attached to.
The Blaxel SDK requires two environment variables to authenticate:
| Variable | Description |
| -------------- | -------------------------- |
| `BL_WORKSPACE` | Your Blaxel workspace name |
| `BL_API_KEY` | Your Blaxel API key |
You can create an API key from the [Blaxel console](https://app.blaxel.ai/profile/security). Your workspace name is visible in the URL when you log in to the console (e.g. `app.blaxel.ai/{workspace}`).
Set them as environment variables or add them to a `.env` file at the root of your project:
```bash theme={null}
export BL_WORKSPACE=my-workspace
export BL_API_KEY=my-api-key
```
When developing locally, you can also **log in to your workspace with Blaxel CLI** (as shown above). This allows you to run Blaxel SDK functions that will automatically connect to your workspace without additional setup. When you deploy on Blaxel, authentication is handled automatically — no environment variables needed.
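A quick fail-fast check for these variables at startup can save debugging time (the helper itself is illustrative, not part of the SDK):

```python
import os

def require_blaxel_env() -> tuple[str, str]:
    """Return (workspace, api_key), raising a clear error if either is unset."""
    workspace = os.environ.get("BL_WORKSPACE")
    api_key = os.environ.get("BL_API_KEY")
    missing = [n for n, v in (("BL_WORKSPACE", workspace), ("BL_API_KEY", api_key)) if not v]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")
    return workspace, api_key
```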
```typescript TypeScript theme={null}
import { VolumeInstance } from "@blaxel/core";
const volume = await VolumeInstance.create({
name: "my-volume",
size: 2048, // in MB
region: "us-pdx-1",
});
```
```python Python theme={null}
from blaxel.core import VolumeInstance
volume = await VolumeInstance.create({
"name": "my-volume",
"size": 2048, # in MB
"region": "us-pdx-1",
})
```
```bash Blaxel CLI theme={null}
bl apply -f - <
You can create a volume from a [template](/Volumes/Volumes-templates) to automatically pre-populate it with files.
```typescript TypeScript theme={null}
const volume = await VolumeInstance.createIfNotExists({
name: "myvolume",
template: "mytemplate:1", // Use template-name:revision or template-name:latest
region: "us-pdx-1",
});
```
```python Python theme={null}
volume = await VolumeInstance.create_if_not_exists({
"name": "myvolume",
"template": "mytemplate:1", # Use template-name:revision or template-name:latest
"region": "us-pdx-1",
})
```
```bash Blaxel CLI theme={null}
bl apply -f - <
Labels are specified as key-value pairs during volume creation.
```typescript TypeScript theme={null}
const volume = await VolumeInstance.create({
name: "my-volume",
size: 2048,
region: "us-pdx-1",
labels: { env: "test", project: "12345" },
});
```
```python Python theme={null}
volume = await VolumeInstance.create({
"name": "my-volume",
"size": 2048,
"region": "us-pdx-1",
"labels": {"env": "test", "project": "12345"},
})
```
```bash Blaxel CLI theme={null}
bl apply -f - <
## Delete a volume
Delete a volume by calling:
* the class-level `delete()` method with the volume `name` as argument, or
```typescript TypeScript theme={null}
import { VolumeInstance } from "@blaxel/core";
await VolumeInstance.delete("my-volume");
```
```python Python theme={null}
from blaxel.core import VolumeInstance
await VolumeInstance.delete("my-volume")
```
* by calling the instance-level `delete()` method:
```typescript TypeScript theme={null}
import { VolumeInstance } from "@blaxel/core";
const volume = await VolumeInstance.get("my-volume");
await volume.delete()
```
```python Python theme={null}
from blaxel.core import VolumeInstance
volume = await VolumeInstance.get("my-volume")
await volume.delete()
```
## Resize a volume
Currently, it is only possible to increase the volume size (not decrease it).
Resize a volume by calling:
* the class-level `update()` method with the volume `name` and new size as arguments, or
```typescript TypeScript theme={null}
import { VolumeInstance } from "@blaxel/core";
const updatedVolume = await VolumeInstance.update("my-volume", { size: 1024 });
```
```python Python theme={null}
from blaxel.core import VolumeInstance
updated_volume = await VolumeInstance.update("my-volume", { "size": 1024 })
```
* by calling the instance-level `update()` method with the new size as argument:
```typescript TypeScript theme={null}
import { VolumeInstance } from "@blaxel/core";
const volume = await VolumeInstance.get("my-volume");
const updatedVolume = await volume.update({ size: 1024 });
```
```python Python theme={null}
from blaxel.core import VolumeInstance
volume = await VolumeInstance.get("my-volume")
updated_volume = await volume.update({ "size": 1024 })
```
## List volumes
```typescript TypeScript theme={null}
const volumes = await VolumeInstance.list()
```
```python Python theme={null}
volumes = await VolumeInstance.list()
```
```shell Blaxel CLI theme={null}
bl get volumes
```
You can use labels for filtering volumes in the Blaxel CLI or Blaxel Console:
```shell theme={null}
# Get volumes with specific label (e.g., env=test)
bl get volumes -o json | jq -r '.[] | select(.metadata.labels.env == "test") | .metadata.name'
```
## Use volumes with sandboxes and agents
Attach persistent storage to a sandbox at creation time.
Attach persistent storage to a deployed agent via `blaxel.toml`.
# Volume templates
Source: https://docs.blaxel.ai/Volumes/Volumes-templates
Pre-populate volumes with files for faster environment setup.
Volume templates let you create **pre-populated [volumes](Overview) with files and directories**, improving copy performance by up to 90% compared to `cp -r`. Use them to build development environments with pre-installed dependencies, datasets, or application code.
## Create a volume template
### Initialize a new template
To create a new volume template, use the following command:
```bash theme={null}
bl new volume-template mytemplate
```
This creates a folder named `mytemplate` containing a `blaxel.toml` file with the following configuration:
```toml theme={null}
type = "volume-template"
directory = "." # Root of your volume - everything at and below this level will be copied
defaultSize = 2048 # Default size in MB for volumes created from this template
```
The `directory` field specifies which folder's contents will be included in the template:
* `"."` (default) - Includes everything in the template folder
* `"app"` - Includes only files inside the `app` subdirectory
* `"src/data"` - Includes only files inside the `src/data` subdirectory
For example, if you want to include only your application code without configuration files:
```toml theme={null}
type = "volume-template"
directory = "app" # Only files inside the 'app' folder will be copied
defaultSize = 2048
```
### Deploy a volume template
Deploy your volume template to make it available for creating other volumes:
```bash theme={null}
bl deploy
```
To preview the deployment without making changes:
```bash theme={null}
bl deploy --dryrun
```
Each deployment creates a new revision of your template. Revisions are auto-incremented and versioned for rollback capabilities.
## Use volume templates
Once deployed, you can create volumes from your template:
```typescript TypeScript theme={null}
const volume = await VolumeInstance.createIfNotExists({
name: "myvolume",
template: "mytemplate:1" // Use template-name:revision or template-name:latest
});
```
```python Python theme={null}
volume = await VolumeInstance.create_if_not_exists({
"name": "myvolume",
"template": "mytemplate:1" # Use template-name:revision or template-name:latest
})
```
Override the default volume size if needed:
```typescript TypeScript theme={null}
const volume = await VolumeInstance.createIfNotExists({
name: "myvolume",
template: "mytemplate:latest",
size: 4096 // Override the default size if needed
});
```
```python Python theme={null}
volume = await VolumeInstance.create_if_not_exists({
"name": "myvolume",
"template": "mytemplate:latest",
"size": 4096 # Override the default size if needed
})
```
Template revisions are incremental and auto-generated. The system maintains a limited number of versions for each template.
## Manage volume templates
List all available templates in your workspace:
```bash theme={null}
bl get volumetemplates
```
View details of a specific template, including size requirements:
```bash theme={null}
bl get volumetemplate mytemplate
```
### Size constraints
Volumes must be provisioned with sufficient space for the template content. If a volume is smaller than the template data, creation will fail with an error.
Always add a delta (extra space) to your volume size beyond the template content size. This ensures you can add files to the volume after it's attached. We recommend provisioning at least 20-30% extra space.
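The 20-30% recommendation can be computed directly (the helper is illustrative):

```python
import math

def recommended_volume_size_mb(template_size_mb: int, headroom: float = 0.25) -> int:
    """Template content size plus ~25% headroom, rounded up to a whole MB."""
    return math.ceil(template_size_mb * (1 + headroom))

print(recommended_volume_size_mb(1536))  # 1.5 GB template -> 1920 MB
```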
To check your template directory size before deployment:
```bash theme={null}
du -sh mytemplate
```
For example, if your template is 1.5GB, create volumes with at least 2GB of space:
```typescript TypeScript theme={null}
const volume = await VolumeInstance.createIfNotExists({
name: "myvolume",
template: "mytemplate:latest",
size: 2048 // 2GB for a 1.5GB template - leaves 500MB for new files
});
```
```python Python theme={null}
volume = await VolumeInstance.create_if_not_exists({
"name": "myvolume",
"template": "mytemplate:latest",
"size": 2048 # 2GB for a 1.5GB template - leaves 500MB for new files
})
```
This helps you set an appropriate `defaultSize` in your `blaxel.toml` or specify the correct size when creating volumes.
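To automate the 20-30% headroom rule, a small helper can turn a measured template size into a recommended volume size. This is a hypothetical utility, not part of the SDK; the 512 MB rounding step is an arbitrary choice for tidy sizes:

```python
import math

def recommended_volume_size_mb(template_mb: float, headroom: float = 0.25) -> int:
    """Template size plus headroom (default 25%), rounded up to the next 512 MB step."""
    required = template_mb * (1 + headroom)
    step = 512
    return math.ceil(required / step) * step

# Matches the worked example above: a 1.5GB (1536 MB) template -> 2048 MB volume.
print(recommended_volume_size_mb(1536))  # -> 2048
```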
### Delete a template
To delete a volume template and all its associated revisions:
```bash theme={null}
bl delete volumetemplate mytemplate
```
Deleting a template does not affect existing volumes created from that template. However, you won't be able to create new volumes from deleted templates.
## Limitations
Volume templates **do not handle symlinks or hardlinks**, which may cause unexpected behavior. In addition, symlinks pointing outside the template (for example, `symlink -> /etc/hosts`) are explicitly forbidden. In these cases, use a copy instead.
Because package managers like `pnpm` and `uv` rely heavily on symlinks and hardlinks, they may also behave unexpectedly if dependencies are vendored within the template.
## Use volume templates with sandboxes and agents
Create and attach pre-populated volumes to sandboxes.
Create and attach pre-populated volumes to agents.
# Create agent
Source: https://docs.blaxel.ai/api-reference/agents/create-agent
/api-reference/controlplane.yml post /agents
Creates a new AI agent deployment from your code. The agent will be built and deployed as a serverless auto-scaling endpoint. Use the Blaxel CLI 'bl deploy' for a simpler deployment experience.
# Delete agent
Source: https://docs.blaxel.ai/api-reference/agents/delete-agent
/api-reference/controlplane.yml delete /agents/{agentName}
Permanently deletes an agent and all its deployment history. The agent's inference endpoint will immediately stop responding. This action cannot be undone.
# Get agent
Source: https://docs.blaxel.ai/api-reference/agents/get-agent
/api-reference/controlplane.yml get /agents/{agentName}
Returns detailed information about an agent including its current deployment status, configuration, events history, and inference endpoint URL.
# List all agent revisions
Source: https://docs.blaxel.ai/api-reference/agents/list-all-agent-revisions
/api-reference/controlplane.yml get /agents/{agentName}/revisions
# List all agents
Source: https://docs.blaxel.ai/api-reference/agents/list-all-agents
/api-reference/controlplane.yml get /agents
Returns all AI agents deployed in the workspace. Each agent includes its deployment status, runtime configuration, and global inference endpoint URL.
# Update agent
Source: https://docs.blaxel.ai/api-reference/agents/update-agent
/api-reference/controlplane.yml put /agents/{agentName}
Updates an agent's configuration and triggers a new deployment. Changes to runtime settings, environment variables, or scaling parameters will be applied on the next deployment.
# Code reranking/semantic search
Source: https://docs.blaxel.ai/api-reference/codegen/code-rerankingsemantic-search
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /codegen/reranking/{path}
Uses Relace's code reranking model to find the most relevant files for a given query. This is useful as a first pass in agentic exploration to narrow down the search space.
Based on: https://docs.relace.ai/docs/code-reranker/agent
Query Construction: The query can be a short question or a more detailed conversation with the user request included. For a first pass, use the full conversation; for subsequent calls, use more targeted questions.
Token Limit and Score Threshold: For 200k token context models like Claude 4 Sonnet, recommended defaults are scoreThreshold=0.5 and tokenLimit=30000.
The response will be a list of file paths and contents ordered from most relevant to least relevant.
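Putting the guidance above together, a reranking request is a GET against the sandbox's API with the query and thresholds as parameters. The helper below is illustrative only — the parameter names (`query`, `scoreThreshold`, `tokenLimit`) follow the description above, but verify the exact request schema against the endpoint reference:

```python
from urllib.parse import urlencode

def reranking_url(sandbox_base: str, path: str, query: str,
                  score_threshold: float = 0.5, token_limit: int = 30000) -> str:
    """Build a GET /codegen/reranking/{path} URL using the recommended
    defaults for ~200k-token context models (hypothetical helper)."""
    params = urlencode({
        "query": query,
        "scoreThreshold": score_threshold,
        "tokenLimit": token_limit,
    })
    return f"{sandbox_base}/codegen/reranking/{path}?{params}"
```

For a first pass you would pass the full conversation as `query`; later calls can use a narrower question.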
# Create sandbox
Source: https://docs.blaxel.ai/api-reference/compute/create-sandbox
/api-reference/controlplane.yml post /sandboxes
Creates a new sandbox VM for secure AI code execution. Sandboxes automatically scale to zero when idle and resume instantly, preserving memory state including running processes and filesystem.
# Create Sandbox Preview
Source: https://docs.blaxel.ai/api-reference/compute/create-sandbox-preview
/api-reference/controlplane.yml post /sandboxes/{sandboxName}/previews
Create a preview
# Create token for Sandbox Preview
Source: https://docs.blaxel.ai/api-reference/compute/create-token-for-sandbox-preview
/api-reference/controlplane.yml post /sandboxes/{sandboxName}/previews/{previewName}/tokens
Creates a token for a Sandbox Preview.
# Delete sandbox
Source: https://docs.blaxel.ai/api-reference/compute/delete-sandbox
/api-reference/controlplane.yml delete /sandboxes/{sandboxName}
Permanently deletes a sandbox and all its data. If no volumes are attached, this guarantees zero data retention (ZDR). This action cannot be undone.
# Delete Sandbox Preview
Source: https://docs.blaxel.ai/api-reference/compute/delete-sandbox-preview
/api-reference/controlplane.yml delete /sandboxes/{sandboxName}/previews/{previewName}
Deletes a Sandbox Preview by name.
# Delete token for Sandbox Preview
Source: https://docs.blaxel.ai/api-reference/compute/delete-token-for-sandbox-preview
/api-reference/controlplane.yml delete /sandboxes/{sandboxName}/previews/{previewName}/tokens/{tokenName}
Deletes a token for a Sandbox Preview by name.
# Get sandbox
Source: https://docs.blaxel.ai/api-reference/compute/get-sandbox
/api-reference/controlplane.yml get /sandboxes/{sandboxName}
Returns detailed information about a sandbox including its configuration, attached volumes, lifecycle policies, and API endpoint URL.
# Get Sandbox Preview
Source: https://docs.blaxel.ai/api-reference/compute/get-sandbox-preview
/api-reference/controlplane.yml get /sandboxes/{sandboxName}/previews/{previewName}
Returns a Sandbox Preview by name.
# Get tokens for Sandbox Preview
Source: https://docs.blaxel.ai/api-reference/compute/get-tokens-for-sandbox-preview
/api-reference/controlplane.yml get /sandboxes/{sandboxName}/previews/{previewName}/tokens
Gets tokens for a Sandbox Preview.
# List sandboxes
Source: https://docs.blaxel.ai/api-reference/compute/list-sandboxes
/api-reference/controlplane.yml get /sandboxes
Returns all sandboxes in the workspace. Each sandbox includes its configuration, status, and endpoint URL.
# List Sandbox Previews
Source: https://docs.blaxel.ai/api-reference/compute/list-sandboxes-1
/api-reference/controlplane.yml get /sandboxes/{sandboxName}/previews
Returns a list of Previews for a sandbox.
# Update sandbox
Source: https://docs.blaxel.ai/api-reference/compute/update-sandbox
/api-reference/controlplane.yml put /sandboxes/{sandboxName}
Updates a sandbox's configuration. Note that certain changes (like image or memory) may reset the sandbox state. Use lifecycle policies to control automatic cleanup.
# Update Sandbox Preview
Source: https://docs.blaxel.ai/api-reference/compute/update-sandbox-preview
/api-reference/controlplane.yml put /sandboxes/{sandboxName}/previews/{previewName}
Updates a Sandbox Preview by name.
# Get platform configuration
Source: https://docs.blaxel.ai/api-reference/configurations/get-platform-configuration
/api-reference/controlplane.yml get /configuration
Returns global platform configuration including available regions, countries, continents, and private locations for deployment policies.
# Create custom domain
Source: https://docs.blaxel.ai/api-reference/customdomains/create-custom-domain
/api-reference/controlplane.yml post /customdomains
Creates a new custom domain for preview deployments. After creation, you must configure DNS records and verify domain ownership before it becomes active.
# Delete custom domain
Source: https://docs.blaxel.ai/api-reference/customdomains/delete-custom-domain
/api-reference/controlplane.yml delete /customdomains/{domainName}
# Get custom domain
Source: https://docs.blaxel.ai/api-reference/customdomains/get-custom-domain
/api-reference/controlplane.yml get /customdomains/{domainName}
# List custom domains
Source: https://docs.blaxel.ai/api-reference/customdomains/list-custom-domains
/api-reference/controlplane.yml get /customdomains
Returns all custom domains configured in the workspace. Custom domains allow serving preview deployments under your own domain (e.g., preview.yourdomain.com).
# Update custom domain
Source: https://docs.blaxel.ai/api-reference/customdomains/update-custom-domain
/api-reference/controlplane.yml put /customdomains/{domainName}
# Verify custom domain
Source: https://docs.blaxel.ai/api-reference/customdomains/verify-custom-domain
/api-reference/controlplane.yml post /customdomains/{domainName}/verify
# Create a drive
Source: https://docs.blaxel.ai/api-reference/drives/create-a-drive
/api-reference/controlplane.yml post /drives
Creates a new drive in the workspace. Drives can be buckets and can be mounted at runtime to sandboxes.
# Create drive access token
Source: https://docs.blaxel.ai/api-reference/drives/create-drive-access-token
/api-reference/controlplane.yml post /drives/{driveName}/access-token
Issues a short-lived JWT access token scoped to a specific drive. The token can be used as Bearer authentication for direct S3 operations against the drive's bucket.
# Delete a drive
Source: https://docs.blaxel.ai/api-reference/drives/delete-a-drive
/api-reference/controlplane.yml delete /drives/{driveName}
Deletes a drive immediately. The drive record is removed from the database synchronously.
# Get a drive
Source: https://docs.blaxel.ai/api-reference/drives/get-a-drive
/api-reference/controlplane.yml get /drives/{driveName}
Retrieves details of a specific drive including its status and events.
# Get drive token JWKS
Source: https://docs.blaxel.ai/api-reference/drives/get-drive-token-jwks
/api-reference/controlplane.yml get /drives/jwks.json
Returns the JSON Web Key Set containing the Ed25519 public key used to verify drive access tokens. Other S3-compatible storage can use this endpoint to validate Bearer tokens.
# List drives
Source: https://docs.blaxel.ai/api-reference/drives/list-drives
/api-reference/controlplane.yml get /drives
Returns all drives in the workspace. Drives provide persistent storage that can be attached to agents, functions, and sandboxes.
# Update a drive
Source: https://docs.blaxel.ai/api-reference/drives/update-a-drive
/api-reference/controlplane.yml put /drives/{driveName}
Updates an existing drive. Metadata fields like displayName and labels can be changed. Size can be set if not already configured.
# Apply code edit
Source: https://docs.blaxel.ai/api-reference/fastapply/apply-code-edit
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml put /codegen/fastapply/{path}
Uses the configured LLM provider (Relace or Morph) to apply a code edit to the original content.
To use this endpoint as an agent tool, follow these guidelines:
Use this tool to make an edit to an existing file. This will be read by a less intelligent model, which will quickly apply the edit. You should make it clear what the edit is, while also minimizing the unchanged code you write.
When writing the edit, you should specify each edit in sequence, with the special comment "// ... existing code ..." to represent unchanged code in between edited lines.
Example format:
```
// ... existing code ...
FIRST_EDIT
// ... existing code ...
SECOND_EDIT
// ... existing code ...
THIRD_EDIT
// ... existing code ...
```
You should still bias towards repeating as few lines of the original file as possible to convey the change. But, each edit should contain minimally sufficient context of unchanged lines around the code you're editing to resolve ambiguity.
DO NOT omit spans of pre-existing code (or comments) without using the "// ... existing code ..." comment to indicate its absence. If you omit the existing code comment, the model may inadvertently delete these lines.
If you plan on deleting a section, you must provide context before and after to delete it. If the initial code is "Block 1\nBlock 2\nBlock 3", and you want to remove Block 2, you would output "// ... existing code ...\nBlock 1\nBlock 3\n// ... existing code ...".
Make sure it is clear what the edit should be, and where it should be applied. Make edits to a file in a single edit_file call instead of multiple edit_file calls to the same file. The apply model can handle many distinct edits at once.
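The real apply step is performed by an LLM, but purely to illustrate how the `// ... existing code ...` markers partition an edit, here is a toy parser (hypothetical, far simpler than the actual model) that extracts the explicit chunks sitting between markers:

```python
MARKER = "// ... existing code ..."

def split_edit(edit: str) -> list:
    """Split an edit string into the explicit code chunks that sit
    between '// ... existing code ...' markers."""
    chunks, current = [], []
    for line in edit.splitlines():
        if line.strip() == MARKER:
            if current:
                chunks.append("\n".join(current))
                current = []
        else:
            current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

# The deletion example above ("Block 1\nBlock 2\nBlock 3" minus Block 2)
# arrives as a single chunk anchoring the surviving neighbors:
print(split_edit("// ... existing code ...\nBlock 1\nBlock 3\n// ... existing code ..."))
# -> ['Block 1\nBlock 3']
```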
# Retrieve feature flag evaluation for workspace
Source: https://docs.blaxel.ai/api-reference/featureflags/retrieve-feature-flag-evaluation-for-workspace
/api-reference/controlplane.yml get /features/{featureKey}
Evaluates a specific feature flag for the workspace with full details including variant and payload. Useful for testing and debugging feature flag targeting.
# Abort multipart upload
Source: https://docs.blaxel.ai/api-reference/filesystem/abort-multipart-upload
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml delete /filesystem-multipart/{uploadId}/abort
Abort a multipart upload and clean up all parts
# Complete multipart upload
Source: https://docs.blaxel.ai/api-reference/filesystem/complete-multipart-upload
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml post /filesystem-multipart/{uploadId}/complete
Complete a multipart upload by assembling all parts
# Create or update a file or directory
Source: https://docs.blaxel.ai/api-reference/filesystem/create-or-update-a-file-or-directory
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml put /filesystem/{path}
Create or update a file or directory
# Create or update directory tree
Source: https://docs.blaxel.ai/api-reference/filesystem/create-or-update-directory-tree
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml put /filesystem/tree/{path}
Create or update multiple files within a directory tree structure
# Delete directory tree
Source: https://docs.blaxel.ai/api-reference/filesystem/delete-directory-tree
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml delete /filesystem/tree/{path}
Delete a directory tree recursively
# Delete file or directory
Source: https://docs.blaxel.ai/api-reference/filesystem/delete-file-or-directory
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml delete /filesystem/{path}
Delete a file or directory
# Find files and directories
Source: https://docs.blaxel.ai/api-reference/filesystem/find-files-and-directories
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /filesystem-find/{path}
Finds files and directories using the find command.
# Fuzzy search for files and directories
Source: https://docs.blaxel.ai/api-reference/filesystem/fuzzy-search-for-files-and-directories
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /filesystem-search/{path}
Performs fuzzy search on filesystem paths using fuzzy matching algorithm. Optimized alternative to find and grep commands.
# Get directory tree
Source: https://docs.blaxel.ai/api-reference/filesystem/get-directory-tree
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /filesystem/tree/{path}
Get a recursive directory tree structure starting from the specified path
# Get file or directory information
Source: https://docs.blaxel.ai/api-reference/filesystem/get-file-or-directory-information
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /filesystem/{path}
Get content of a file or listing of a directory. Use Accept header to control response format for files.
# Initiate multipart upload
Source: https://docs.blaxel.ai/api-reference/filesystem/initiate-multipart-upload
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml post /filesystem-multipart/initiate/{path}
Initiate a multipart upload session for a file
# List multipart uploads
Source: https://docs.blaxel.ai/api-reference/filesystem/list-multipart-uploads
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /filesystem-multipart
List all active multipart uploads
# List parts
Source: https://docs.blaxel.ai/api-reference/filesystem/list-parts
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /filesystem-multipart/{uploadId}/parts
List all uploaded parts for a multipart upload
# Search for text content in files
Source: https://docs.blaxel.ai/api-reference/filesystem/search-for-text-content-in-files
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /filesystem-content-search/{path}
Searches for text content inside files using ripgrep. Returns matching lines with context.
# Stream file modification events in a directory
Source: https://docs.blaxel.ai/api-reference/filesystem/stream-file-modification-events-in-a-directory
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /watch/filesystem/{path}
Streams the path of modified files (one per line) in the given directory. Closes when the client disconnects.
# Upload part
Source: https://docs.blaxel.ai/api-reference/filesystem/upload-part
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml put /filesystem-multipart/{uploadId}/part
Upload a single part of a multipart upload
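The multipart endpoints above follow the usual initiate → upload part → complete flow: the client splits the file into fixed-size parts, uploads each against the `uploadId`, then completes (or aborts) the session. Client-side chunking can be sketched as below; the 8 MB part size is an assumption, so check the API's actual part-size limits:

```python
def split_into_parts(data: bytes, part_size: int = 8 * 1024 * 1024) -> list:
    """Split a payload into fixed-size parts for multipart upload;
    the final part may be shorter. Hypothetical client-side helper."""
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]
```

Each part would then be sent to `PUT /filesystem-multipart/{uploadId}/part` in order, followed by a call to the complete endpoint.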
# Create MCP server
Source: https://docs.blaxel.ai/api-reference/functions/create-mcp-server
/api-reference/controlplane.yml post /functions
Creates a new MCP server function deployment. The function will expose tools via the Model Context Protocol that can be used by AI agents. Supports streamable HTTP transport.
# Delete MCP server
Source: https://docs.blaxel.ai/api-reference/functions/delete-mcp-server
/api-reference/controlplane.yml delete /functions/{functionName}
Permanently deletes an MCP server function and all its deployment history. Any agents using this function's tools will no longer be able to invoke them.
# Get MCP server
Source: https://docs.blaxel.ai/api-reference/functions/get-mcp-server
/api-reference/controlplane.yml get /functions/{functionName}
Returns detailed information about an MCP server function including its deployment status, available tools, transport configuration, and endpoint URL.
# List all MCP servers
Source: https://docs.blaxel.ai/api-reference/functions/list-all-mcp-servers
/api-reference/controlplane.yml get /functions
Returns all MCP server functions deployed in the workspace. Each function includes its deployment status, transport protocol (websocket or http-stream), and endpoint URL.
# List function revisions
Source: https://docs.blaxel.ai/api-reference/functions/list-function-revisions
/api-reference/controlplane.yml get /functions/{functionName}/revisions
Returns revisions for a function by name.
# Update MCP server
Source: https://docs.blaxel.ai/api-reference/functions/update-mcp-server
/api-reference/controlplane.yml put /functions/{functionName}
Updates an MCP server function's configuration and triggers a new deployment. Changes to runtime settings, integrations, or transport protocol will be applied on the next deployment.
# Get template
Source: https://docs.blaxel.ai/api-reference/get-template
/api-reference/controlplane.yml get /templates/{templateName}
Returns detailed information about a deployment template including its configuration, source code reference, and available parameters.
# Build a container image
Source: https://docs.blaxel.ai/api-reference/images/build-a-container-image
/api-reference/controlplane.yml post /images
Builds a container image without creating a deployment. Returns a presigned URL for uploading source code. After upload, the image will be built and stored in the registry, but no agent, function, sandbox, or job will be created or updated.
# Cleanup unused container images
Source: https://docs.blaxel.ai/api-reference/images/cleanup-unused-container-images
/api-reference/controlplane.yml delete /images
Cleans up unused container images in the workspace registry. Only removes images that are not currently referenced by any active agent, function, sandbox, or job deployment.
# Delete container image
Source: https://docs.blaxel.ai/api-reference/images/delete-container-image
/api-reference/controlplane.yml delete /images/{resourceType}/{imageName}
Deletes a container image and all its tags from the workspace registry. Will fail if the image is currently in use by an active deployment.
# Delete container image tag
Source: https://docs.blaxel.ai/api-reference/images/delete-container-image-tag
/api-reference/controlplane.yml delete /images/{resourceType}/{imageName}/tags/{tagName}
Deletes a specific tag from a container image. The underlying image layers are kept if other tags reference them. Will fail if the tag is currently in use.
# Get container image
Source: https://docs.blaxel.ai/api-reference/images/get-container-image
/api-reference/controlplane.yml get /images/{resourceType}/{imageName}
Returns detailed information about a container image including all available tags, creation dates, and size information.
# List container images
Source: https://docs.blaxel.ai/api-reference/images/list-container-images
/api-reference/controlplane.yml get /images
Returns all container images stored in the workspace registry, grouped by repository with their available tags. Images are created during deployments of agents, functions, sandboxes, and jobs.
# List image shares
Source: https://docs.blaxel.ai/api-reference/images/list-image-shares
/api-reference/controlplane.yml get /images/{resourceType}/{imageName}/share
Returns the list of workspaces that a container image is currently shared with.
# Share a container image
Source: https://docs.blaxel.ai/api-reference/images/share-a-container-image
/api-reference/controlplane.yml post /images/{resourceType}/{imageName}/share
Shares a container image with another workspace by copying the metadata record. The underlying storage (S3) data is not duplicated. The target workspace must belong to the same account.
# Unshare a container image
Source: https://docs.blaxel.ai/api-reference/images/unshare-a-container-image
/api-reference/controlplane.yml delete /images/{resourceType}/{imageName}/share/{targetWorkspace}
Revokes sharing of a container image with a target workspace. Removes the metadata copy from the target workspace. The source image is not affected.
# Inference API
Source: https://docs.blaxel.ai/api-reference/inference
Reference for inference endpoints generated for agents, model APIs, and MCP servers deployed on the Global Agentics Network.
Whenever you deploy a workload on Blaxel, an **inference endpoint** is generated on Global Agentics Network, the [infrastructure powerhouse](../Infrastructure/Global-Inference-Network) that hosts it.
The inference API URL depends on the type of workload ([sandbox](../Sandboxes/Overview), [agent](../Agents/Overview), [model API](../Models/Overview), [MCP server](../Functions/Overview)) you are interacting with:
```http Connect to a sandbox's MCP server theme={null}
https://sbx-{YOUR-SANDBOX}-{YOUR-WORKSPACE}.{REGION}.bl.run/mcp
```
```http Connect to a sandbox's REST API base URL theme={null}
https://sbx-{YOUR-SANDBOX}-{YOUR-WORKSPACE}.{REGION}.bl.run/
```
```http Query agent theme={null}
POST https://run.blaxel.ai/{YOUR-WORKSPACE}/agents/{YOUR-AGENT}
```
```http Query model API theme={null}
POST https://run.blaxel.ai/{YOUR-WORKSPACE}/models/{YOUR-MODEL}
```
```http Connect to an MCP server theme={null}
https://run.blaxel.ai/{YOUR-WORKSPACE}/functions/{YOUR-SERVER-NAME}/mcp
```
```http Execute a job theme={null}
POST https://api.blaxel.ai/{YOUR-WORKSPACE}/jobs/{YOUR-JOB-NAME}/executions
```
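The URL patterns above are mechanical enough to capture in code. The helpers below are illustrative (the function names are invented here), but the URL shapes come straight from the patterns listed on this page:

```python
def sandbox_base_url(sandbox: str, workspace: str, region: str) -> str:
    """REST API base URL for a sandbox."""
    return f"https://sbx-{sandbox}-{workspace}.{region}.bl.run"

def agent_run_url(workspace: str, agent: str) -> str:
    """Inference endpoint for an agent (POST)."""
    return f"https://run.blaxel.ai/{workspace}/agents/{agent}"

def model_run_url(workspace: str, model: str) -> str:
    """Inference endpoint for a model API (POST)."""
    return f"https://run.blaxel.ai/{workspace}/models/{model}"

def job_executions_url(workspace: str, job: str) -> str:
    """Endpoint to trigger a job execution (POST)."""
    return f"https://api.blaxel.ai/{workspace}/jobs/{job}/executions"
```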
Below are full example requests, including the input payload where the endpoint takes one:
```http theme={null}
curl -X GET "https://sbx-{your-sandbox}-{your-workspace}.{region}.bl.run/filesystem/tree/etc" \
-H "Content-Type: application/json" \
-H "X-Blaxel-Authorization: Bearer "
```
If you have the Blaxel CLI `bl` installed, you can use it to directly interpolate your API key into the HTTP request, as shown below:
```http theme={null}
curl -X GET "https://sbx-{your-sandbox}-$(bl workspace --current).{region}.bl.run/filesystem/tree/etc" \
-H "Content-Type: application/json" \
-H "X-Blaxel-Authorization: Bearer $(bl token)"
```
```http theme={null}
curl -X POST "https://run.blaxel.ai/{your-workspace}/agents/{your-agent}" \
-H "Content-Type: application/json" \
-H "X-Blaxel-Authorization: Bearer " \
-d '{"inputs":"Hello, world!"}'
```
If you have the Blaxel CLI `bl` installed, you can use it to directly interpolate your API key into the HTTP request, as shown below:
```http theme={null}
curl -X POST "https://run.blaxel.ai/$(bl workspace --current)/agents/{your-agent}" \
-H "Content-Type: application/json" \
-H "X-Blaxel-Authorization: Bearer $(bl token)" \
-d '{"inputs":"Hello, world!"}'
```
```http theme={null}
curl -X POST "https://run.blaxel.ai/{your-workspace}/models/{your-model}/chat/completions" \
-H "Content-Type: application/json" \
-H "X-Blaxel-Authorization: Bearer " \
-d '{"messages":[{"role":"user","content":"Hello!"}]}'
```
If you have the Blaxel CLI `bl` installed, you can use it to directly interpolate your API key into the HTTP request, as shown below:
```http theme={null}
curl -X POST "https://run.blaxel.ai/$(bl workspace --current)/models/{your-model}/chat/completions" \
-H "Content-Type: application/json" \
-H "X-Blaxel-Authorization: Bearer $(bl token)" \
-d '{"messages":[{"role":"user","content":"Hello!"}]}'
```
```http theme={null}
curl -X POST "https://api.blaxel.ai/{your-workspace}/jobs/{your-job-name}/executions" \
-H "X-Blaxel-Authorization: Bearer " \
-d '{"tasks":[{"my_arg":"my_value"}]}'
```
If you have the Blaxel CLI `bl` installed, you can use it to directly interpolate your API key into the HTTP request, as shown below:
```http theme={null}
curl -X POST "https://api.blaxel.ai/$(bl workspace --current)/jobs/{your-job-name}/executions" \
-H "X-Blaxel-Authorization: Bearer $(bl token)" \
-d '{"tasks":[{"my_arg":"my_value"}]}'
```
### Connect to MCP servers
**MCP servers** ([Model Context Protocol](https://github.com/modelcontextprotocol)) provide a toolkit of multiple capabilities for agents. These servers can be interacted with using Blaxel's streamable HTTP transport implementation on the server's global endpoint.
Learn how to run tool calls through your MCP server.
### Manage sessions
To simulate multi-turn conversations, pass a thread identifier in a request header. Your client must generate this ID and send it in a header that your agent code then reads (the example below uses `X-Blaxel-Thread-Id`). Without a thread ID, the agent won't maintain or use any conversation memory when processing the request.
This is only available for agent requests.
```http Query agent with thread ID theme={null}
curl -X POST "https://run.blaxel.ai/{your-workspace}/agents/{your-agent}" \
-H 'Content-Type: application/json' \
-H "X-Blaxel-Authorization: Bearer " \
-H "X-Blaxel-Thread-Id: " \
-d '{"inputs":"Hello, world!"}'
```
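Since the client owns thread ID generation, a small sketch of building the request headers looks like this. UUIDs are one reasonable choice of ID, not a requirement:

```python
import uuid
from typing import Optional

def agent_request_headers(token: str, thread_id: Optional[str] = None) -> dict:
    """Headers for a multi-turn agent request; omitting thread_id
    starts a fresh conversation with a new generated ID."""
    return {
        "Content-Type": "application/json",
        "X-Blaxel-Authorization": f"Bearer {token}",
        "X-Blaxel-Thread-Id": thread_id or str(uuid.uuid4()),
    }
```

Reuse the same `thread_id` across requests to keep the conversation's memory.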
Read our product guide on querying an agent.
# Create integration connection
Source: https://docs.blaxel.ai/api-reference/integrations/create-integration-connection
/api-reference/controlplane.yml post /integrations/connections
Creates a new integration connection with credentials for an external service. The connection can then be used by models, functions, and other resources to authenticate with the service.
# Delete integration connection
Source: https://docs.blaxel.ai/api-reference/integrations/delete-integration-connection
/api-reference/controlplane.yml delete /integrations/connections/{connectionName}
Permanently deletes an integration connection. Any resources using this connection will lose access to the external service.
# Get integration connection
Source: https://docs.blaxel.ai/api-reference/integrations/get-integration-connection
/api-reference/controlplane.yml get /integrations/connections/{connectionName}
Returns detailed information about an integration connection including its provider type, configuration (secrets are masked), and usage status.
# Get integration connection model
Source: https://docs.blaxel.ai/api-reference/integrations/get-integration-connection-model
/api-reference/controlplane.yml get /integrations/connections/{connectionName}/models/{modelId}
Returns a model for an integration connection by ID.
# Get integration connection model endpoint configurations
Source: https://docs.blaxel.ai/api-reference/integrations/get-integration-connection-model-endpoint-configurations
/api-reference/controlplane.yml get /integrations/connections/{connectionName}/endpointConfigurations
Returns a list of all endpoint configurations for a model.
# Get integration provider info
Source: https://docs.blaxel.ai/api-reference/integrations/get-integration-provider-info
/api-reference/controlplane.yml get /integrations/{integrationName}
Returns metadata about an integration provider including available endpoints, authentication methods, and supported models or features.
# List integration connection models
Source: https://docs.blaxel.ai/api-reference/integrations/list-integration-connection-models
/api-reference/controlplane.yml get /integrations/connections/{connectionName}/models
Returns a list of all models for an integration connection.
# List integration connections
Source: https://docs.blaxel.ai/api-reference/integrations/list-integration-connections
/api-reference/controlplane.yml get /integrations/connections
Returns all configured integration connections in the workspace. Each connection stores credentials and settings for an external service (LLM provider, API, database).
# Update integration connection
Source: https://docs.blaxel.ai/api-reference/integrations/update-integration-connection
/api-reference/controlplane.yml put /integrations/connections/{connectionName}
Updates an integration connection's configuration or credentials. Changes take effect immediately for all resources using this connection.
# Overview
Source: https://docs.blaxel.ai/api-reference/introduction
Authenticate and interact with all Blaxel resources using REST APIs, with support for API key and OAuth 2.0 authentication methods.
Blaxel APIs allow you to interact with all resources inside of and across your workspace(s).
## Get started
Authentication to the Blaxel APIs can either be done using [API keys](../Security/Access-tokens) created from the Blaxel console, or through a [classic OAuth 2.0 flow](../Security/Access-tokens).
**API keys** allow you to get started quickly. Simply [generate an API key](../Security/Access-tokens) for your user or service account and pass it as a bearer token in the `Authorization` or `X-Blaxel-Authorization` header of any call to the Blaxel APIs.
For example, to list models:
```bash theme={null}
curl 'https://api.blaxel.ai/v0/models' \
-H 'accept: application/json, text/plain, */*' \
-H 'X-Blaxel-Authorization: Bearer YOUR-API-KEY'
```
To use **short-lived JWTs**, see [the guide on using an OAuth 2.0 flow](../Security/Access-tokens).
## Blaxel APIs
See the reference for any of the following APIs:
Run inference requests on your deployments by API.
API to manage agents, functions, policies and much more.
# Cancel job execution
Source: https://docs.blaxel.ai/api-reference/jobs/cancel-job-execution
/api-reference/controlplane.yml delete /jobs/{jobId}/executions/{executionId}
Cancels a running job execution. Tasks already in progress will complete, but no new tasks will be started. The execution status changes to 'cancelling' then 'cancelled'.
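After issuing the cancel request, a client typically polls the execution until the status settles. A minimal polling sketch, not an official client — `get_status` stands in for whatever call fetches `GET /jobs/{jobId}/executions/{executionId}`:

```python
import time

def wait_for_cancelled(get_status, poll_interval: float = 2.0, timeout: float = 60.0) -> bool:
    """Poll an execution until its status reaches 'cancelled'.

    get_status is a zero-argument callable returning the current status
    string — e.g. a wrapper around GET /jobs/{jobId}/executions/{executionId}.
    Returns True once 'cancelled' is observed, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "cancelled":
            return True
        # 'cancelling' means in-flight tasks are still draining; keep waiting.
        time.sleep(poll_interval)
    return False

# Usage with a stubbed status source:
statuses = iter(["cancelling", "cancelling", "cancelled"])
print(wait_for_cancelled(lambda: next(statuses), poll_interval=0))  # True
```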
# Create batch job
Source: https://docs.blaxel.ai/api-reference/jobs/create-batch-job
/api-reference/controlplane.yml post /jobs
Creates a new batch job definition for parallel AI task processing. Jobs can be triggered via API or scheduled, and support configurable parallelism, timeouts, and retry logic.
# Create job execution
Source: https://docs.blaxel.ai/api-reference/jobs/create-job-execution
/api-reference/controlplane.yml post /jobs/{jobId}/executions
Triggers a new execution of the batch job. Each execution runs multiple tasks in parallel according to the job's configured concurrency. Tasks can be parameterized via the request body.
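As an illustration of how a trigger request might be assembled — note the `tasks` body field below is an assumption for illustration, not taken from the endpoint schema:

```python
# Sketch of a trigger request for POST /jobs/{jobId}/executions.
# NOTE: the {"tasks": [...]} body shape is illustrative only — verify
# the actual request schema in the endpoint reference.
BASE_URL = "https://api.blaxel.ai/v0"

def build_create_execution_request(job_id: str, tasks: list[dict]) -> dict:
    return {
        "method": "POST",
        "url": f"{BASE_URL}/jobs/{job_id}/executions",
        "headers": {
            "Content-Type": "application/json",
            "X-Blaxel-Authorization": "Bearer YOUR-API-KEY",
        },
        "json": {"tasks": tasks},
    }

req = build_create_execution_request("my-job", [{"name": "task-1"}, {"name": "task-2"}])
print(req["url"])  # https://api.blaxel.ai/v0/jobs/my-job/executions
```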
# Delete batch job
Source: https://docs.blaxel.ai/api-reference/jobs/delete-batch-job
/api-reference/controlplane.yml delete /jobs/{jobId}
Permanently deletes a batch job definition and cancels any running executions. Historical execution data will be retained for a limited time.
# Get batch job
Source: https://docs.blaxel.ai/api-reference/jobs/get-batch-job
/api-reference/controlplane.yml get /jobs/{jobId}
Returns detailed information about a batch job including its runtime configuration, execution history, and deployment status.
# Get job execution
Source: https://docs.blaxel.ai/api-reference/jobs/get-job-execution
/api-reference/controlplane.yml get /jobs/{jobId}/executions/{executionId}
Returns detailed information about a specific job execution including status, task statistics (success/failure/running counts), and timing information.
# List batch jobs
Source: https://docs.blaxel.ai/api-reference/jobs/list-batch-jobs
/api-reference/controlplane.yml get /jobs
Returns all batch job definitions in the workspace. Each job can be triggered to run multiple parallel tasks with configurable concurrency and retry settings.
# List job executions
Source: https://docs.blaxel.ai/api-reference/jobs/list-job-executions
/api-reference/controlplane.yml get /jobs/{jobId}/executions
Returns a paginated list of executions for a batch job, sorted by creation time. Each execution contains status, task counts, and timing information.
# List job revisions
Source: https://docs.blaxel.ai/api-reference/jobs/list-job-revisions
/api-reference/controlplane.yml get /jobs/{jobId}/revisions
Returns revisions for a job by name.
# Update batch job
Source: https://docs.blaxel.ai/api-reference/jobs/update-batch-job
/api-reference/controlplane.yml put /jobs/{jobId}
Updates a batch job's configuration. Changes affect new executions; running executions continue with their original configuration.
# List deployment regions
Source: https://docs.blaxel.ai/api-reference/locations/list-deployment-regions
/api-reference/controlplane.yml get /locations
Returns all deployment regions with their current availability status and supported hardware flavors. Use this to discover where resources can be deployed.
# List MCP Hub servers
Source: https://docs.blaxel.ai/api-reference/mcphub/list-mcp-hub-servers
/api-reference/controlplane.yml get /mcp/hub
Returns all pre-built MCP server definitions available in the Blaxel Hub. These can be deployed directly to your workspace with pre-configured tools and integrations.
# Create model endpoint
Source: https://docs.blaxel.ai/api-reference/models/create-model-endpoint
/api-reference/controlplane.yml post /models
Creates a new model gateway endpoint that proxies requests to an external LLM provider. Requires an integration connection with valid API credentials for the target provider.
# Delete model endpoint
Source: https://docs.blaxel.ai/api-reference/models/delete-model-endpoint
/api-reference/controlplane.yml delete /models/{modelName}
Permanently deletes a model gateway endpoint. Any agents or applications using this endpoint will need to be updated to use a different model.
# Get model endpoint
Source: https://docs.blaxel.ai/api-reference/models/get-model-endpoint
/api-reference/controlplane.yml get /models/{modelName}
Returns detailed information about a model gateway endpoint including its provider configuration, integration connection, and usage status.
# List model endpoints
Source: https://docs.blaxel.ai/api-reference/models/list-model-endpoints
/api-reference/controlplane.yml get /models
Returns all model gateway endpoints configured in the workspace. Each model represents a proxy to an external LLM provider (OpenAI, Anthropic, etc.) with unified access control.
# List model revisions
Source: https://docs.blaxel.ai/api-reference/models/list-model-revisions
/api-reference/controlplane.yml get /models/{modelName}/revisions
Returns revisions for a model by name.
# Update model endpoint
Source: https://docs.blaxel.ai/api-reference/models/update-model-endpoint
/api-reference/controlplane.yml put /models/{modelName}
Updates a model gateway endpoint's configuration. Changes to provider settings or integration connection take effect immediately.
# Disconnect tunnel
Source: https://docs.blaxel.ai/api-reference/network/disconnect-tunnel
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml delete /network/tunnel
Stop the network tunnel and restore the original network configuration. WARNING: After disconnecting, the sandbox will lose all outbound internet connectivity (no egress). Inbound connections to the sandbox will still work. Use PUT /network/tunnel/config to re-establish the tunnel.
# Get open ports for a process
Source: https://docs.blaxel.ai/api-reference/network/get-open-ports-for-a-process
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /network/process/{pid}/ports
Get a list of all open ports for a process
# Start monitoring ports for a process
Source: https://docs.blaxel.ai/api-reference/network/start-monitoring-ports-for-a-process
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml post /network/process/{pid}/monitor
Start monitoring for new ports opened by a process
# Stop monitoring ports for a process
Source: https://docs.blaxel.ai/api-reference/network/stop-monitoring-ports-for-a-process
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml delete /network/process/{pid}/monitor
Stop monitoring for new ports opened by a process
# Update tunnel configuration
Source: https://docs.blaxel.ai/api-reference/network/update-tunnel-configuration
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml put /network/tunnel/config
Apply a new tunnel configuration on the fly. The existing tunnel is torn down and a new one is established. This endpoint is write-only; there is no corresponding GET to read the config back.
# Create governance policy
Source: https://docs.blaxel.ai/api-reference/policies/create-governance-policy
/api-reference/controlplane.yml post /policies
Creates a new governance policy to control where and how resources are deployed. Policies can restrict deployment to specific regions, countries, or continents for compliance.
# Delete governance policy
Source: https://docs.blaxel.ai/api-reference/policies/delete-governance-policy
/api-reference/controlplane.yml delete /policies/{policyName}
Permanently deletes a governance policy. Resources using this policy will need to be updated to use a different policy.
# Get governance policy
Source: https://docs.blaxel.ai/api-reference/policies/get-governance-policy
/api-reference/controlplane.yml get /policies/{policyName}
Returns detailed information about a governance policy including its type (location, flavor, or maxToken), restrictions, and which resource types it applies to.
# List governance policies
Source: https://docs.blaxel.ai/api-reference/policies/list-governance-policies
/api-reference/controlplane.yml get /policies
Returns all governance policies in the workspace. Policies control deployment locations, hardware flavors, and token limits for agents, functions, and models.
# Update governance policy
Source: https://docs.blaxel.ai/api-reference/policies/update-governance-policy
/api-reference/controlplane.yml put /policies/{policyName}
Updates a governance policy's restrictions. Changes take effect on the next deployment of resources using this policy.
# Execute a command
Source: https://docs.blaxel.ai/api-reference/process/execute-a-command
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml post /process
Execute a command and return process information. If the `Accept` header is `text/event-stream`, streams logs in SSE format and returns the process response as a final event.
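A client consuming the streaming variant needs to split the response body into SSE events. The sketch below implements only the standard `event:`/`data:` framing; it makes no assumptions about Blaxel-specific event names:

```python
def parse_sse(body: str) -> list[dict]:
    """Split an SSE response body into events.

    Standard framing: 'event:' names the event, 'data:' carries the payload,
    and a blank line terminates each event.
    """
    events, name, data = [], "message", []
    for line in body.splitlines():
        if line == "":
            if data:
                events.append({"event": name, "data": "\n".join(data)})
            name, data = "message", []
        elif line.startswith("event:"):
            name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
    if data:  # event not terminated by a trailing blank line
        events.append({"event": name, "data": "\n".join(data)})
    return events

# Per the description above, the final event carries the process response.
sample = 'data: stdout: hello\n\ndata: {"pid": 123}\n\n'
print(parse_sse(sample)[-1])
```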
# Get process by identifier
Source: https://docs.blaxel.ai/api-reference/process/get-process-by-identifier
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /process/{identifier}
Get information about a process by its PID or name
# Get process logs
Source: https://docs.blaxel.ai/api-reference/process/get-process-logs
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /process/{identifier}/logs
Get the stdout and stderr output of a process
# Kill a process
Source: https://docs.blaxel.ai/api-reference/process/kill-a-process
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml delete /process/{identifier}/kill
Forcefully kill a running process
# List all processes
Source: https://docs.blaxel.ai/api-reference/process/list-all-processes
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /process
Get a list of all running and completed processes
# Stop a process
Source: https://docs.blaxel.ai/api-reference/process/stop-a-process
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml delete /process/{identifier}
Gracefully stop a running process
# Stream process logs in real time
Source: https://docs.blaxel.ai/api-reference/process/stream-process-logs-in-real-time
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /process/{identifier}/logs/stream
Streams the stdout and stderr output of a process in real time, one line per log, prefixed with 'stdout:' or 'stderr:'. Closes when the process exits or the client disconnects.
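Since each streamed line carries a `stdout:` or `stderr:` prefix, a consumer can demultiplex the two streams with a small helper:

```python
def split_log_line(line: str) -> tuple[str, str]:
    """Demultiplex one streamed log line into (stream, text)."""
    for prefix in ("stdout:", "stderr:"):
        if line.startswith(prefix):
            return prefix[:-1], line[len(prefix):].lstrip()
    return "unknown", line  # defensive: pass unprefixed lines through

print(split_log_line("stdout: build finished"))       # ('stdout', 'build finished')
print(split_log_line("stderr: warning: deprecated"))  # ('stderr', 'warning: deprecated')
```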
# List public IPs
Source: https://docs.blaxel.ai/api-reference/publicips:list/list-public-ips
/api-reference/controlplane.yml get /publicIps
Returns a list of all public IPs used in Blaxel.
# List Sandbox Hub templates
Source: https://docs.blaxel.ai/api-reference/sandboxhub/list-sandbox-hub-templates
/api-reference/controlplane.yml get /sandbox/hub
Returns all pre-built sandbox templates available in the Blaxel Hub. These include popular development environments with pre-installed tools and frameworks.
# Create service account
Source: https://docs.blaxel.ai/api-reference/service_accounts/create-service-account
/api-reference/controlplane.yml post /service_accounts
Creates a new service account for machine-to-machine authentication. Returns client ID and secret (secret is only shown once at creation). Use these credentials for OAuth client_credentials flow.
# Create service account API key
Source: https://docs.blaxel.ai/api-reference/service_accounts/create-service-account-api-key
/api-reference/controlplane.yml post /service_accounts/{clientId}/api_keys
Creates a new long-lived API key for a service account. The full key value is only returned once at creation. API keys can have optional expiration dates.
# Delete service account
Source: https://docs.blaxel.ai/api-reference/service_accounts/delete-service-account
/api-reference/controlplane.yml delete /service_accounts/{clientId}
Permanently deletes a service account and invalidates all its credentials. Any systems using this service account will lose access immediately.
# List service account API keys
Source: https://docs.blaxel.ai/api-reference/service_accounts/list-service-account-api-keys
/api-reference/controlplane.yml get /service_accounts/{clientId}/api_keys
Returns all long-lived API keys created for a service account. API keys provide an alternative to OAuth for simpler authentication scenarios.
# List service accounts
Source: https://docs.blaxel.ai/api-reference/service_accounts/list-service-accounts
/api-reference/controlplane.yml get /service_accounts
Returns all service accounts in the workspace. Service accounts are machine identities for external systems to authenticate with Blaxel via OAuth or API keys.
# Revoke service account API key
Source: https://docs.blaxel.ai/api-reference/service_accounts/revoke-service-account-api-key
/api-reference/controlplane.yml delete /service_accounts/{clientId}/api_keys/{apiKeyId}
Revokes an API key for a service account. The key becomes invalid immediately and any requests using it will fail authentication.
# Update service account
Source: https://docs.blaxel.ai/api-reference/service_accounts/update-service-account
/api-reference/controlplane.yml put /service_accounts/{clientId}
Updates a service account's name or description. Credentials (client ID/secret) cannot be changed.
# Health check
Source: https://docs.blaxel.ai/api-reference/system/health-check
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml get /health
Returns health status and system information, including upgrade count and binary details. Also includes the status of the last upgrade attempt, with detailed error information if available.
# Upgrade the sandbox-api
Source: https://docs.blaxel.ai/api-reference/system/upgrade-the-sandbox-api
https://raw.githubusercontent.com/blaxel-ai/sandbox/refs/heads/main/sandbox-api/docs/openapi.yml post /upgrade
Triggers an upgrade of the sandbox-api process. Returns 200 immediately before upgrading.
The upgrade will: download the specified binary from GitHub releases, validate it, and restart.
All running processes will be preserved across the upgrade.
Available versions: "develop" (default), "main", "latest", or a specific tag like "v1.0.0".
You can also specify a custom baseUrl for forks (defaults to https://github.com/blaxel-ai/sandbox/releases).
# List deployment templates
Source: https://docs.blaxel.ai/api-reference/templates/list-deployment-templates
/api-reference/controlplane.yml get /templates
Returns all deployment templates available for creating agents, functions, and other resources with pre-configured settings and code.
# Create persistent volume
Source: https://docs.blaxel.ai/api-reference/volumes/create-persistent-volume
/api-reference/controlplane.yml post /volumes
Creates a new persistent storage volume that can be attached to sandboxes. Volumes must be created in a specific region and can only attach to sandboxes in the same region.
# Delete persistent volume
Source: https://docs.blaxel.ai/api-reference/volumes/delete-persistent-volume
/api-reference/controlplane.yml delete /volumes/{volumeName}
Permanently deletes a volume and all its data. The volume must not be attached to any sandbox. This action cannot be undone.
# Get persistent volume
Source: https://docs.blaxel.ai/api-reference/volumes/get-persistent-volume
/api-reference/controlplane.yml get /volumes/{volumeName}
Returns detailed information about a volume including its size, region, attachment status, and any events history.
# List persistent volumes
Source: https://docs.blaxel.ai/api-reference/volumes/list-persistent-volumes
/api-reference/controlplane.yml get /volumes
Returns all persistent storage volumes in the workspace. Volumes can be attached to sandboxes for durable file storage that persists across sessions and sandbox deletions.
# Update volume
Source: https://docs.blaxel.ai/api-reference/volumes/update-volume
/api-reference/controlplane.yml put /volumes/{volumeName}
Updates a volume.
# Create or update volume template
Source: https://docs.blaxel.ai/api-reference/volumetemplates/create-or-update-volume-template
/api-reference/controlplane.yml put /volume_templates/{volumeTemplateName}
Creates or updates a volume template.
# Create volume template
Source: https://docs.blaxel.ai/api-reference/volumetemplates/create-volume-template
/api-reference/controlplane.yml post /volume_templates
Creates a new volume template for initializing volumes with pre-configured filesystem contents. Optionally returns a presigned URL for uploading the template archive.
# Delete volume template
Source: https://docs.blaxel.ai/api-reference/volumetemplates/delete-volume-template
/api-reference/controlplane.yml delete /volume_templates/{volumeTemplateName}
Deletes a volume template by name.
# Delete volume template version
Source: https://docs.blaxel.ai/api-reference/volumetemplates/delete-volume-template-version
/api-reference/controlplane.yml delete /volume_templates/{volumeTemplateName}/versions/{versionName}
Deletes a specific version of a volume template.
# Get volume template
Source: https://docs.blaxel.ai/api-reference/volumetemplates/get-volume-template
/api-reference/controlplane.yml get /volume_templates/{volumeTemplateName}
Returns a volume template by name.
# List volume templates
Source: https://docs.blaxel.ai/api-reference/volumetemplates/list-volume-templates
/api-reference/controlplane.yml get /volume_templates
Returns all volume templates in the workspace. Volume templates contain pre-configured filesystem snapshots that can be used to initialize new volumes.
# Allocate a new egress IP from the gateway
Source: https://docs.blaxel.ai/api-reference/vpcs/allocate-a-new-egress-ip-from-the-gateway
/api-reference/controlplane.yml post /vpcs/{vpcName}/egressgateways/{gatewayName}/ips
# Create a VPC for the workspace
Source: https://docs.blaxel.ai/api-reference/vpcs/create-a-vpc-for-the-workspace
/api-reference/controlplane.yml post /vpcs
# Create an egress gateway in a VPC
Source: https://docs.blaxel.ai/api-reference/vpcs/create-an-egress-gateway-in-a-vpc
/api-reference/controlplane.yml post /vpcs/{vpcName}/egressgateways
# Delete a VPC
Source: https://docs.blaxel.ai/api-reference/vpcs/delete-a-vpc
/api-reference/controlplane.yml delete /vpcs/{vpcName}
# Delete an egress gateway
Source: https://docs.blaxel.ai/api-reference/vpcs/delete-an-egress-gateway
/api-reference/controlplane.yml delete /vpcs/{vpcName}/egressgateways/{gatewayName}
# Delete an egress IP
Source: https://docs.blaxel.ai/api-reference/vpcs/delete-an-egress-ip
/api-reference/controlplane.yml delete /vpcs/{vpcName}/egressgateways/{gatewayName}/ips/{ipName}
# Get a VPC by name
Source: https://docs.blaxel.ai/api-reference/vpcs/get-a-vpc-by-name
/api-reference/controlplane.yml get /vpcs/{vpcName}
# Get an egress gateway by name
Source: https://docs.blaxel.ai/api-reference/vpcs/get-an-egress-gateway-by-name
/api-reference/controlplane.yml get /vpcs/{vpcName}/egressgateways/{gatewayName}
# Get an egress IP by name
Source: https://docs.blaxel.ai/api-reference/vpcs/get-an-egress-ip-by-name
/api-reference/controlplane.yml get /vpcs/{vpcName}/egressgateways/{gatewayName}/ips/{ipName}
# List all egress gateways across all VPCs in the workspace
Source: https://docs.blaxel.ai/api-reference/vpcs/list-all-egress-gateways-across-all-vpcs-in-the-workspace
/api-reference/controlplane.yml get /egressgateways
# List all egress IPs across all VPCs and gateways in the workspace
Source: https://docs.blaxel.ai/api-reference/vpcs/list-all-egress-ips-across-all-vpcs-and-gateways-in-the-workspace
/api-reference/controlplane.yml get /egressips
# List all VPCs in the workspace
Source: https://docs.blaxel.ai/api-reference/vpcs/list-all-vpcs-in-the-workspace
/api-reference/controlplane.yml get /vpcs
# List egress gateways in a VPC
Source: https://docs.blaxel.ai/api-reference/vpcs/list-egress-gateways-in-a-vpc
/api-reference/controlplane.yml get /vpcs/{vpcName}/egressgateways
# List egress IPs in a gateway
Source: https://docs.blaxel.ai/api-reference/vpcs/list-egress-ips-in-a-gateway
/api-reference/controlplane.yml get /vpcs/{vpcName}/egressgateways/{gatewayName}/ips
# Accept invitation to workspace
Source: https://docs.blaxel.ai/api-reference/workspaces/accept-invitation-to-workspace
/api-reference/controlplane.yml post /workspaces/{workspaceName}/join
Accepts an invitation to a workspace.
# Check workspace availability
Source: https://docs.blaxel.ai/api-reference/workspaces/check-workspace-availability
/api-reference/controlplane.yml post /workspaces/availability
Check if a workspace is available.
# Create workspace tenant
Source: https://docs.blaxel.ai/api-reference/workspaces/create-workspace-tenant
/api-reference/controlplane.yml post /workspaces
Creates a new workspace tenant. The authenticated user becomes the workspace admin. Requires a linked billing account.
# Decline invitation to workspace
Source: https://docs.blaxel.ai/api-reference/workspaces/decline-invitation-to-workspace
/api-reference/controlplane.yml post /workspaces/{workspaceName}/decline
Declines an invitation to a workspace.
# Delete workspace tenant
Source: https://docs.blaxel.ai/api-reference/workspaces/delete-workspace-tenant
/api-reference/controlplane.yml delete /workspaces/{workspaceName}
Permanently deletes a workspace and ALL its resources (agents, functions, sandboxes, volumes, etc.). This action cannot be undone. Only workspace admins can delete a workspace.
# Get enabled features for workspace
Source: https://docs.blaxel.ai/api-reference/workspaces/get-enabled-features-for-workspace
/api-reference/controlplane.yml get /features
Returns only the feature flags that are currently enabled for the specified workspace. Disabled features are not included to prevent information leakage.
# Get workspace details
Source: https://docs.blaxel.ai/api-reference/workspaces/get-workspace-details
/api-reference/controlplane.yml get /workspaces/{workspaceName}
Returns detailed information about a workspace including its display name, account ID, status, and runtime configuration.
# Invite user to workspace
Source: https://docs.blaxel.ai/api-reference/workspaces/invite-user-to-workspace
/api-reference/controlplane.yml post /users
Invites a new team member to the workspace by email. The invitee will receive an email to accept the invitation before gaining access to workspace resources.
# Leave workspace
Source: https://docs.blaxel.ai/api-reference/workspaces/leave-workspace
/api-reference/controlplane.yml delete /workspaces/{workspaceName}/leave
Leaves a workspace.
# List accessible workspaces
Source: https://docs.blaxel.ai/api-reference/workspaces/list-accessible-workspaces
/api-reference/controlplane.yml get /workspaces
Returns all workspaces the authenticated user has access to. Each workspace is a separate tenant with its own resources, team members, and billing.
# List workspace team members
Source: https://docs.blaxel.ai/api-reference/workspaces/list-workspace-team-members
/api-reference/controlplane.yml get /users
Returns all team members in the workspace including their roles (admin or member) and invitation status.
# Remove user from workspace or revoke invitation
Source: https://docs.blaxel.ai/api-reference/workspaces/remove-user-from-workspace-or-revoke-invitation
/api-reference/controlplane.yml delete /users/{subOrEmail}
Removes a user from the workspace (or revokes an invitation if the user has not accepted the invitation yet).
# Update user role in workspace
Source: https://docs.blaxel.ai/api-reference/workspaces/update-user-role-in-workspace
/api-reference/controlplane.yml put /users/{subOrEmail}
Updates the role of a user in the workspace.
# Update workspace
Source: https://docs.blaxel.ai/api-reference/workspaces/update-workspace
/api-reference/controlplane.yml put /workspaces/{workspaceName}
Updates a workspace's settings such as display name and labels. The workspace name cannot be changed after creation.
# bl apply
Source: https://docs.blaxel.ai/cli-reference/commands/bl_apply
## bl apply
Apply a configuration to a resource by file
### Synopsis
Apply configuration changes to resources declaratively using YAML files.
This command is similar to Kubernetes 'kubectl apply' - it creates resources
if they don't exist, or updates them if they do (idempotent operation).
Use 'apply' for Infrastructure as Code workflows where you:
* Manage resources via configuration files
* Version control your infrastructure
* Deploy multiple related resources together
* Implement GitOps practices
Difference from 'deploy':
* 'apply' manages resource configuration (metadata, settings, specs)
* 'deploy' builds and uploads code as container images
For deploying code changes to agents/jobs, use 'bl deploy'.
For managing resource configuration, use 'bl apply'.
The command respects environment variables and secrets, which can be injected
via -e flag for .env files or -s flag for command-line secrets.
```
bl apply [flags]
```
### Examples
```
# Apply a single resource
bl apply -f agent.yaml
# Apply all resources in directory
bl apply -f ./resources/ -R
# Apply with environment variable substitution
bl apply -f deployment.yaml -e .env.production
# Apply from stdin (useful for CI/CD)
cat config.yaml | bl apply -f -
# Apply with secrets
bl apply -f config.yaml -s API_KEY=xxx -s DB_PASSWORD=yyy
# Example YAML structure for an agent:
# apiVersion: blaxel.ai/v1alpha1
# kind: Agent
# metadata:
# name: my-agent
# spec:
# runtime:
# image: agent/my-template-agent:latest
# memory: 4096
# Create a sandbox with the default base image
bl apply -f - <<EOF
apiVersion: blaxel.ai/v1alpha1
kind: Sandbox
metadata:
  name: my-sandbox
EOF
```
# bl completion
Source: https://docs.blaxel.ai/cli-reference/commands/bl_completion
## bl completion
Generate the autocompletion script for the specified shell
### Synopsis
To load completions:
```
Bash:
source <(bl completion bash)
# To load completions for each session, execute once:
# Linux:
bl completion bash > ~/.local/share/bash-completion/completions/bl
# macOS:
bl completion bash > $(brew --prefix)/etc/bash_completion.d/bl
Zsh:
eval "$(bl completion zsh)"
# To load completions for each session, execute once:
mkdir -p ~/.zsh/completions
bl completion zsh > ~/.zsh/completions/_bl
Fish:
bl completion fish | source
# To load completions for each session, execute once:
bl completion fish > ~/.config/fish/completions/bl.fish
PowerShell:
bl completion powershell | Out-String | Invoke-Expression
# To load completions for each session, execute once:
bl completion powershell > bl.ps1
# and source this file from your PowerShell profile.
```
bl completion [bash|zsh|fish|powershell]
```
### Options
```
-h, --help help for completion
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# bl connect
Source: https://docs.blaxel.ai/cli-reference/commands/bl_connect
## bl connect
Open an interactive terminal session to a sandbox
### Synopsis
Open an interactive terminal session to a sandbox
### Options
```
-h, --help help for connect
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
* [bl connect sandbox](/cli-reference/commands/bl_connect_sandbox) - Connect to a sandbox environment
# bl connect sandbox
Source: https://docs.blaxel.ai/cli-reference/commands/bl_connect_sandbox
## bl connect sandbox
Connect to a sandbox environment
### Synopsis
Connect to a sandbox environment with an interactive terminal session.
This command opens a direct terminal connection to your sandbox, similar to SSH.
The terminal supports full ANSI colors, cursor movement, and interactive applications.
Press Ctrl+D to disconnect from the sandbox.
Examples:
bl connect sandbox my-sandbox
bl connect sb my-sandbox
bl connect sbx production-env
```
bl connect sandbox [sandbox-name] [flags]
```
### Options
```
-h, --help help for sandbox
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl connect](/cli-reference/commands/bl_connect) - Open an interactive terminal session to a sandbox
# bl delete
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete
## bl delete
Delete resources from your workspace
### Synopsis
Delete Blaxel resources from your workspace.
WARNING: Deletion is permanent and cannot be undone. Resources are immediately
deactivated and removed along with their configurations.
Two deletion modes:
1. By name: Use subcommands like 'bl delete agent my-agent'
2. By file: Use 'bl delete -f resource.yaml' for declarative management
What Happens:
* Resource is immediately stopped and deactivated
* Configuration and metadata are removed
* Associated logs and metrics may be retained (check workspace policy)
* Data volumes are NOT automatically deleted (use 'bl delete volume')
Before Deleting:
* Backup any important configuration or data
* Check dependencies (other resources using this one)
* Consider stopping instead of deleting for temporary disablement
Note: Deleting an agent/job stops it immediately but may not delete associated
storage volumes. Use 'bl get volumes' to see persistent storage and delete
separately if needed.
```
bl delete [flags]
```
### Examples
```
# Delete by name (using subcommands)
bl delete agent my-agent
bl delete job my-job
bl delete sandbox my-sandbox
# Delete multiple resources by name
bl delete volume vol1 vol2 vol3
bl delete agent agent1 agent2
# Delete a sandbox preview
bl delete sandbox my-sandbox preview my-preview
# Delete a sandbox preview token
bl delete sandbox my-sandbox preview my-preview token my-token
# Delete from YAML file
bl delete -f my-resource.yaml
# Delete multiple resources from directory
bl delete -f ./resources/ -R
# Delete from stdin (useful in pipelines)
cat resource.yaml | bl delete -f -
# Safe deletion workflow
bl get agent my-agent # Review resource first
bl delete agent my-agent # Delete after confirmation
# --- Bulk deletion with jq filtering ---
# WARNING: Bulk deletions are irreversible. Always preview first!
# STEP 1: Preview what would be deleted (ALWAYS DO THIS FIRST)
bl get jobs -o json | jq -r '.[] | select(.status == "DELETING") | .metadata.name'
# STEP 2: After verifying the list, proceed with deletion
bl delete jobs $(bl get jobs -o json | jq -r '.[] | select(.status == "DELETING") | .metadata.name')
# More bulk deletion examples (always preview first):
bl delete sandboxes $(bl get sandboxes -o json | jq -r '.[] | select(.status == "FAILED") | .metadata.name')
bl delete agents $(bl get agents -o json | jq -r '.[] | select(.metadata.name | contains("test")) | .metadata.name')
bl delete volumes $(bl get volumes -o json | jq -r '.[] | select(.metadata.labels.environment == "dev") | .metadata.name')
bl delete sandboxes $(bl get sandboxes -o json | jq -r '.[] | select(.metadata.name | test("^temp-")) | .metadata.name')
```
### Options
```
-f, --filename string containing the resource to delete.
-h, --help help for delete
-R, --recursive Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
* [bl delete agent](/cli-reference/commands/bl_delete_agent) - Delete one or more agents
* [bl delete drive](/cli-reference/commands/bl_delete_drive) - Delete one or more drives
* [bl delete function](/cli-reference/commands/bl_delete_function) - Delete one or more functions
* [bl delete image](/cli-reference/commands/bl_delete_image) - Delete images or image tags
* [bl delete integrationconnection](/cli-reference/commands/bl_delete_integrationconnection) - Delete one or more integrationconnections
* [bl delete job](/cli-reference/commands/bl_delete_job) - Delete one or more jobs
* [bl delete model](/cli-reference/commands/bl_delete_model) - Delete one or more models
* [bl delete policy](/cli-reference/commands/bl_delete_policy) - Delete one or more policies
* [bl delete preview](/cli-reference/commands/bl_delete_preview) - Delete one or more previews
* [bl delete previewtoken](/cli-reference/commands/bl_delete_previewtoken) - Delete one or more previewtokens
* [bl delete sandbox](/cli-reference/commands/bl_delete_sandbox) - Delete one or more sandboxes
* [bl delete volume](/cli-reference/commands/bl_delete_volume) - Delete one or more volumes
* [bl delete volumetemplate](/cli-reference/commands/bl_delete_volumetemplate) - Delete one or more volumetemplates
# bl delete agent
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_agent
## bl delete agent
Delete one or more agents
```
bl delete agent name [name...] [flags]
```
### Options
```
-h, --help help for agent
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl delete drive
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_drive
## bl delete drive
Delete one or more drives
```
bl delete drive name [name...] [flags]
```
### Options
```
-h, --help help for drive
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl delete function
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_function
## bl delete function
Delete one or more functions
```
bl delete function name [name...] [flags]
```
### Options
```
-h, --help help for function
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl delete image
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_image
## bl delete image
Delete images or image tags
### Synopsis
Delete container images or specific tags.
Usage patterns:
* `bl delete image agent/my-image` deletes the image with all of its tags
* `bl delete image agent/my-image:v1.0` deletes only the specified tag
The image reference format is `resourceType/imageName[:tag]`:
* resourceType: Type of resource (e.g., agent, function, job)
* imageName: The name of the image
* tag: Optional tag to delete only that specific version
WARNING: Deleting an image without specifying a tag will remove ALL tags.
```
bl delete image [resourceType/]imageName[:tag] ... [flags]
```
### Examples
```
# Delete an entire image (all tags)
bl delete image agent/my-agent
# Delete only a specific tag
bl delete image agent/my-agent:v1.0
# Delete multiple images/tags
bl delete image agent/img1:v1 agent/img2:v2
```
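The `resourceType/imageName[:tag]` reference splits cleanly with shell parameter expansion, which can be handy in scripts that build delete commands. A sketch (the reference value is illustrative):

```bash theme={null}
ref="agent/my-image:v1.0"

resourceType="${ref%%/*}"   # everything before the first slash -> agent
rest="${ref#*/}"            # everything after it               -> my-image:v1.0

case "$rest" in
  *:*) imageName="${rest%%:*}"; tag="${rest#*:}" ;;  # tag present
  *)   imageName="$rest";       tag="" ;;            # no tag: delete removes ALL tags
esac

echo "$resourceType $imageName $tag"
# prints: agent my-image v1.0
```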
### Options
```
-h, --help help for image
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl delete integrationconnection
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_integrationconnection
## bl delete integrationconnection
Delete one or more integrationconnections
```
bl delete integrationconnection name [name...] [flags]
```
### Options
```
-h, --help help for integrationconnection
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl delete job
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_job
## bl delete job
Delete one or more jobs
```
bl delete job name [name...] [flags]
```
### Options
```
-h, --help help for job
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl delete model
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_model
## bl delete model
Delete one or more models
```
bl delete model name [name...] [flags]
```
### Options
```
-h, --help help for model
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl delete policy
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_policy
## bl delete policy
Delete one or more policies
```
bl delete policy name [name...] [flags]
```
### Options
```
-h, --help help for policy
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl delete preview
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_preview
## bl delete preview
Delete one or more previews
```
bl delete preview name [name...] [flags]
```
### Options
```
-h, --help help for preview
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl delete previewtoken
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_previewtoken
## bl delete previewtoken
Delete one or more previewtokens
```
bl delete previewtoken name [name...] [flags]
```
### Options
```
-h, --help help for previewtoken
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl delete sandbox
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_sandbox
## bl delete sandbox
Delete one or more sandboxes
```
bl delete sandbox name [name...] [flags]
```
### Options
```
-h, --help help for sandbox
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl delete volume
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_volume
## bl delete volume
Delete one or more volumes
```
bl delete volume name [name...] [flags]
```
### Options
```
-h, --help help for volume
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl delete volumetemplate
Source: https://docs.blaxel.ai/cli-reference/commands/bl_delete_volumetemplate
## bl delete volumetemplate
Delete one or more volumetemplates
```
bl delete volumetemplate name [name...] [flags]
```
### Options
```
-h, --help help for volumetemplate
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl delete](/cli-reference/commands/bl_delete) - Delete resources from your workspace
# bl deploy
Source: https://docs.blaxel.ai/cli-reference/commands/bl_deploy
## bl deploy
Build, push, and deploy your project to Blaxel
### Synopsis
Deploy your Blaxel project to the cloud.
This command packages your code, builds a container image, and deploys it
to your workspace. The deployment process includes:
1. Reading configuration from blaxel.toml
2. Packaging source code (respects .blaxelignore)
3. Building container image with your runtime and dependencies
4. Uploading to Blaxel's container registry
5. Creating or updating the resource in your workspace
6. Streaming build and deployment logs (interactive mode)
A blaxel.toml configuration file is required. By default, the command looks
for it in the current directory. Use -d to specify a subdirectory containing
the blaxel.toml (useful for monorepo setups).
Interactive vs Non-Interactive:
* Interactive (default): Shows live logs and deployment progress with a TUI
* Non-interactive (--yes or CI): Runs without interactive UI, suitable for automation
Environment Variables and Secrets:
Use -e to load .env files or -s to pass secrets directly via command line.
Secrets are injected into your container at runtime and never stored in images.
Monorepo Support:
Use -d to deploy a specific subdirectory, or -R to recursively deploy
all projects in a monorepo (looks for blaxel.toml in subdirectories).
```
bl deploy [flags]
```
### Examples
```
# Basic deployment (interactive mode with live logs)
bl deploy
# Non-interactive deployment (for CI/CD)
bl deploy --yes
# Deploy with environment variables
bl deploy -e .env.production
# Deploy with command-line secrets
bl deploy -s API_KEY=xxx -s DB_PASSWORD=yyy
# Deploy without rebuilding (reuse existing image)
bl deploy --skip-build
# Dry run to validate configuration
bl deploy --dryrun
# Deploy specific subdirectory in monorepo
bl deploy -d ./packages/my-agent
# Deploy specifying a resource type
bl deploy --type sandbox
# Recursively deploy all projects in monorepo
bl deploy -R
```
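Non-interactive mode combines naturally with client-credential authentication in CI. A sketch of a GitHub Actions step, assuming `BL_CLIENT_CREDENTIALS` has been stored as a repository secret (the secret name, workspace, and env file are illustrative):

```yaml theme={null}
# Illustrative CI step; adapt names to your pipeline
- name: Deploy to Blaxel
  env:
    BL_CLIENT_CREDENTIALS: ${{ secrets.BLAXEL_CREDENTIALS }}
  run: |
    bl login my-workspace
    bl deploy --yes -e .env.production
```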
### Options
```
-d, --directory string Deployment app path; can be a subdirectory
--docker-config string Path to a Docker config.json file with registry credentials
--dryrun Dry run the deployment
-e, --env-file strings Environment file to load (default [.env])
--experimental Enable experimental features (e.g. USER directive support)
-h, --help help for deploy
-n, --name string Optional name for the deployment
-r, --recursive Deploy recursively (default true)
-c, --registry-cred stringArray Registry credentials (format: registry=username:password, repeatable)
-s, --secrets strings Secrets to deploy
--skip-build Skip the build step
-t, --type string Resource type (sandbox, agent, function, job). Defaults to blaxel.toml type or 'sandbox'
-y, --yes Skip interactive mode
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# bl get
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get
## bl get
List or retrieve Blaxel resources in your workspace
### Synopsis
Retrieve information about Blaxel resources in your workspace.
A "resource" in Blaxel refers to any deployable or manageable entity:
* agents: AI agent applications
* functions/mcp: Model Context Protocol servers (tool providers)
* jobs: Batch processing tasks
* sandboxes: Isolated execution environments
* models: AI model configurations
* policies: Access control policies
* volumes: Persistent storage
* integrationconnections: External service integrations
Hub Discovery (pre-built resources available in the Blaxel Hub):
* sandbox-hub: Pre-built sandbox images with pre-installed tools and runtimes
* mcp-hub: Pre-built MCP servers for tool integrations (GitHub, Slack, etc.)
* templates: Project scaffolding templates for bl new
Output Formats:
Use the -o flag to control the output format:
* pretty: Human-readable colored output (default)
* json: Machine-readable JSON (for scripting)
* yaml: YAML format
* table: Tabular format with columns
Watch Mode:
Use --watch to continuously monitor a resource and see updates in real-time.
Useful for tracking deployment status or watching for changes.
The command can list all resources of a type or get details for a specific one.
### Examples
```
# List all agents
bl get agents
# Get specific agent details
bl get agent my-agent
# Get in JSON format (useful for scripting)
bl get agent my-agent -o json
# Watch agent status in real-time
bl get agent my-agent --watch
# List all resources with table output
bl get agents -o table
# Get MCP servers (also called functions)
bl get functions
bl get mcp
# List jobs
bl get jobs
# Get specific job
bl get job my-job
# List executions for a job (nested resource)
bl get job my-job executions
# Get specific execution for a job
bl get job my-job execution EXECUTION_ID
# List pre-built sandbox images from the Hub
bl get sandbox-hub
bl get sandbox-hub -o json
# List pre-built MCP servers from the Hub
bl get mcp-hub
bl get mcp-hub -o json
# Monitor sandbox status
bl get sandbox my-sandbox --watch
# List processes in a sandbox
bl get sandbox my-sandbox process
bl get sbx my-sandbox ps
# Get specific process in a sandbox
bl get sandbox my-sandbox process my-process
# List previews for a sandbox
bl get sandbox my-sandbox previews
# Get a specific preview
bl get sandbox my-sandbox preview my-preview
# List tokens for a sandbox preview
bl get sandbox my-sandbox preview my-preview tokens
# Get a specific token
bl get sandbox my-sandbox preview my-preview token my-token
# --- Filtering with jq ---
# Get names of all jobs with status DELETING
bl get jobs -o json | jq -r '.[] | select(.status == "DELETING") | .metadata.name'
# Get names of all deployed sandboxes
bl get sandboxes -o json | jq -r '.[] | select(.status == "DEPLOYED") | .metadata.name'
# Get all agents with name containing "test"
bl get agents -o json | jq -r '.[] | select(.metadata.name | contains("test")) | .metadata.name'
# Get sandboxes with specific label (e.g., environment=dev)
bl get sandboxes -o json | jq -r '.[] | select(.metadata.labels.environment == "dev") | .metadata.name'
# Get all job names
bl get jobs -o json | jq -r '.[] | .metadata.name'
# Count resources by status
bl get agents -o json | jq 'group_by(.status) | map({status: .[0].status, count: length})'
```
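The `group_by` pipeline above can likewise be checked against a saved sample; `group_by` sorts by the grouping key, so buckets come back in alphabetical status order. A sketch with invented data:

```bash theme={null}
# Save a sample of what `bl get agents -o json` returns (contents invented here)
cat > /tmp/agents.json <<'EOF'
[
  {"metadata": {"name": "agent-a"}, "status": "DEPLOYED"},
  {"metadata": {"name": "agent-b"}, "status": "FAILED"},
  {"metadata": {"name": "agent-c"}, "status": "DEPLOYED"}
]
EOF

# Count resources by status, as in the last example above:
jq -c 'group_by(.status) | map({status: .[0].status, count: length})' /tmp/agents.json
# prints: [{"status":"DEPLOYED","count":2},{"status":"FAILED","count":1}]
```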
### Options
```
-h, --help help for get
--watch After listing/getting the requested object, watch for changes.
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
* [bl get agents](/cli-reference/commands/bl_get_agents) - List all agents or get details of a specific one
* [bl get drives](/cli-reference/commands/bl_get_drives) - List all drives or get details of a specific one
* [bl get functions](/cli-reference/commands/bl_get_functions) - List all functions or get details of a specific one
* [bl get image](/cli-reference/commands/bl_get_image) - Get image information
* [bl get integrationconnections](/cli-reference/commands/bl_get_integrationconnections) - List all integrationconnections or get details of a specific one
* [bl get jobs](/cli-reference/commands/bl_get_jobs) - List all jobs or get details of a specific one
* [bl get mcp-hub](/cli-reference/commands/bl_get_mcp-hub) - List pre-built MCP servers available in the Blaxel Hub
* [bl get models](/cli-reference/commands/bl_get_models) - List all models or get details of a specific one
* [bl get policies](/cli-reference/commands/bl_get_policies) - List all policies or get details of a specific one
* [bl get previews](/cli-reference/commands/bl_get_previews) - List all previews or get details of a specific one
* [bl get previewtokens](/cli-reference/commands/bl_get_previewtokens) - List all previewtokens or get details of a specific one
* [bl get sandbox-hub](/cli-reference/commands/bl_get_sandbox-hub) - List pre-built sandbox images available in the Blaxel Hub
* [bl get sandboxes](/cli-reference/commands/bl_get_sandboxes) - List all sandboxes or get details of a specific one
* [bl get templates](/cli-reference/commands/bl_get_templates) - List available project templates
* [bl get volumes](/cli-reference/commands/bl_get_volumes) - List all volumes or get details of a specific one
* [bl get volumetemplates](/cli-reference/commands/bl_get_volumetemplates) - List all volumetemplates or get details of a specific one
# bl get agents
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_agents
## bl get agents
List all agents or get details of a specific one
```
bl get agents [flags]
```
### Options
```
-h, --help help for agents
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get drives
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_drives
## bl get drives
List all drives or get details of a specific one
```
bl get drives [flags]
```
### Options
```
-h, --help help for drives
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get functions
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_functions
## bl get functions
List all functions or get details of a specific one
```
bl get functions [flags]
```
### Options
```
-h, --help help for functions
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get image
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_image
## bl get image
Get image information
### Synopsis
Get information about container images.
Usage patterns:
* `bl get images` lists all images (without tags)
* `bl get image agent/my-image` gets image details for a specific resource type
* `bl get image agent/my-image:v1.0` gets information about a specific tag
* `bl get image sandbox/my-image --latest` gets the latest tag reference for an image
The image reference format is `resourceType/imageName[:tag]`:
* resourceType: Type of resource (e.g., agent, function, job, sandbox)
* imageName: The name of the image
* tag: Optional tag to filter for a specific version
The --latest flag returns the image reference with the most recent tag,
formatted as resourceType/imageName:tag. This is useful for scripting
and for retrieving the IMAGE\_ID to use when creating sandboxes from templates.
```
bl get image [resourceType/imageName[:tag]] [flags]
```
### Examples
```
# List all images
bl get images
# Get all tags for a specific image
bl get image agent/my-agent
# Get a specific tag
bl get image agent/my-agent:latest
# Get the latest tag reference (useful for sandbox templates)
bl get image sandbox/mytemplate --latest
# Use different output formats
bl get images -o json
bl get image agent/my-agent -o pretty
```
### Options
```
-h, --help help for image
--latest Return only the most recent tag reference (e.g., sandbox/mytemplate:tag)
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get integrationconnections
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_integrationconnections
## bl get integrationconnections
List all integrationconnections or get details of a specific one
```
bl get integrationconnections [flags]
```
### Options
```
-h, --help help for integrationconnections
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get jobs
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_jobs
## bl get jobs
List all jobs or get details of a specific one
```
bl get jobs [flags]
```
### Options
```
-h, --help help for jobs
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get models
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_models
## bl get models
List all models or get details of a specific one
```
bl get models [flags]
```
### Options
```
-h, --help help for models
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get policies
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_policies
## bl get policies
List all policies or get details of a specific one
```
bl get policies [flags]
```
### Options
```
-h, --help help for policies
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get previews
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_previews
## bl get previews
List all previews or get details of a specific one
```
bl get previews [flags]
```
### Options
```
-h, --help help for previews
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get previewtokens
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_previewtokens
## bl get previewtokens
List all previewtokens or get details of a specific one
```
bl get previewtokens [flags]
```
### Options
```
-h, --help help for previewtokens
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get sandboxes
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_sandboxes
## bl get sandboxes
List all sandboxes or get details of a specific one
```
bl get sandboxes [flags]
```
### Options
```
-h, --help help for sandboxes
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get templates
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_templates
## bl get templates
List available project templates
### Synopsis
List available templates that can be used with 'bl new'.
Templates are grouped by type (agent, mcp, sandbox, job, volume-template).
Use an optional type argument to filter results.
Output formats:
* `-o json`: machine-readable JSON array
* `-o yaml`: YAML output
* default: table with NAME, TYPE, LANGUAGE, DESCRIPTION columns
```
bl get templates [type] [flags]
```
### Examples
```
# List all templates
bl get templates
# List agent templates only
bl get templates agent
# List templates as JSON
bl get templates -o json
# List MCP templates
bl get templates mcp
```
### Options
```
-h, --help help for templates
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get volumes
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_volumes
## bl get volumes
List all volumes or get details of a specific one
```
bl get volumes [flags]
```
### Options
```
-h, --help help for volumes
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl get volumetemplates
Source: https://docs.blaxel.ai/cli-reference/commands/bl_get_volumetemplates
## bl get volumetemplates
List all volumetemplates or get details of a specific one
```
bl get volumetemplates [flags]
```
### Options
```
-h, --help help for volumetemplates
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
--watch After listing/getting the requested object, watch for changes.
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl get](/cli-reference/commands/bl_get) - List or retrieve Blaxel resources in your workspace
# bl login
Source: https://docs.blaxel.ai/cli-reference/commands/bl_login
## bl login
Login to Blaxel
### Synopsis
Authenticate with Blaxel to access your workspace.
A workspace is your organization's isolated environment in Blaxel that contains
all your resources (agents, jobs, sandboxes, models, etc.). You must login before
using most Blaxel CLI commands.
Authentication Methods:
1. Browser OAuth (default) - Interactive login via web browser
2. API Key - For automation and scripts (set BL\_API\_KEY environment variable)
3. Client Credentials - For CI/CD pipelines (set BL\_CLIENT\_CREDENTIALS)
The CLI automatically detects which authentication method to use:
* If BL\_CLIENT\_CREDENTIALS is set, uses client credentials
* If BL\_API\_KEY is set, uses API key authentication
* Otherwise, shows an interactive menu to choose browser or API key login
Credentials are stored securely in your system's credential store and persist
across sessions. Use 'bl logout' to remove stored credentials.
Examples:
```bash theme={null}
# Interactive login (shows menu to choose method)
bl login my-workspace
# Login without workspace (will prompt for workspace)
bl login
# API key authentication (non-interactive)
export BL_API_KEY=your-api-key
bl login my-workspace
# Client credentials for CI/CD
export BL_CLIENT_CREDENTIALS=your-credentials
bl login my-workspace
```
After logging in, all commands will use this workspace by default.
Override with --workspace flag: bl get agents --workspace other-workspace
```
bl login [workspace] [flags]
```
### Options
```
-h, --help help for login
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# bl logout
Source: https://docs.blaxel.ai/cli-reference/commands/bl_logout
## bl logout
Logout from Blaxel
### Synopsis
Remove stored credentials for a workspace.
This command clears local authentication tokens and credentials from your
system's credential store. Your deployed resources (agents, jobs, sandboxes)
continue running and are not affected by logout.
If you have multiple workspaces authenticated, you can logout from:
* A specific workspace by providing its name
* Any workspace interactively by running 'bl logout' without arguments
After logging out, you'll need to run 'bl login WORKSPACE' again to
authenticate before using other commands for that workspace.
Note: Logout is a local operation only. It does not:
* Stop running agents or jobs
* Delete any deployed resources
* Revoke tokens on the server (they will expire naturally)
* Affect other authenticated workspaces
Examples:
```bash theme={null}
# Logout from current workspace (interactive selection)
bl logout
# Logout from specific workspace
bl logout my-workspace
# Login again after logout
bl login my-workspace
```
```
bl logout [workspace] [flags]
```
### Options
```
-h, --help help for logout
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# bl logs
Source: https://docs.blaxel.ai/cli-reference/commands/bl_logs
## bl logs
View and stream logs for agents, jobs, sandboxes, and functions
### Synopsis
View logs for Blaxel resources.
The logs command displays logs for agents, jobs, sandboxes, and functions.
You must specify both the resource type and resource name.
Resource Types (with aliases):
* sandboxes (sandbox, sbx)
* jobs (job, j, jb)
* agents (agent, ag)
* functions (function, fn, mcp, mcps)
Sandbox Process Logs:
For sandboxes, you can view logs for a specific process by adding the process name:
bl logs sandbox my-sandbox my-process
Job Execution Logs:
For jobs, you can filter logs by execution ID and task ID:
bl logs job my-job my-execution-id
bl logs job my-job my-execution-id my-task-id
Time Filtering:
By default, logs from the last 1 hour are displayed.
In follow mode (--follow), the last 15 minutes are shown as context, then new logs
are continuously streamed in real-time.
You can customize this by:
* Using duration format (e.g., 3d, 1h, 10m, 24h) with --period flag
* Using explicit start/end times with --start and --end flags
* Maximum time range is 3 days
Duration units:
* d: days
* h: hours
* m: minutes
* s: seconds
Timestamps:
By default, logs are prefixed with their timestamp in local timezone.
Use --no-timestamps to hide them, or --utc to display timestamps in UTC.
Severity Filtering:
By default, all severity levels are shown. Use --severity to filter by specific levels.
Available severities: FATAL, ERROR, WARNING, INFO, DEBUG, TRACE, UNKNOWN
Use comma-separated values: --severity ERROR,FATAL
Search:
Use --search to filter logs by text content. Only logs containing the search term will be displayed.
Examples:
```bash theme={null}
# View logs for a specific sandbox (last 1 hour - default)
bl logs sandbox my-sandbox
# View logs for a specific process in a sandbox
bl logs sandbox my-sandbox my-process
# Stream process logs in real-time
bl logs sandbox my-sandbox my-process --follow
# View all logs for a job
bl logs job my-job
# View logs for a specific job execution
bl logs job my-job exec-abc123
# View logs for a specific task within an execution
bl logs job my-job exec-abc123 task-456
# Follow job execution logs in real-time
bl logs job my-job exec-abc123 --follow
# Follow logs in real-time (shows last 15 minutes, then streams new logs)
bl logs sandbox my-sandbox --follow
# Follow logs with more historical context
bl logs sandbox my-sandbox --follow --period 1h
# View logs from last 3 days
bl logs job my-job --period 3d
# View logs for a specific time range
bl logs agent my-agent --start 2024-01-01T00:00:00Z --end 2024-01-01T23:59:59Z
# Hide timestamps in output
bl logs agent my-agent --no-timestamps
# Show timestamps in UTC
bl logs agent my-agent --utc
# Filter by severity
bl logs agent my-agent --severity ERROR,FATAL
# Search for specific text in logs
bl logs agent my-agent --search "error"
# Using aliases
bl logs sbx my-sandbox --follow
bl logs j my-job --period 1h
bl logs fn my-function --follow
```
```
bl logs RESOURCE_TYPE RESOURCE_NAME [NESTED_ARGS...] [flags]
```
### Options
```
--end string End time for logs (RFC3339 format or YYYY-MM-DD)
-f, --follow Follow log output (like tail -f)
-h, --help help for logs
--no-timestamps Hide timestamps in log output
-p, --period string Time period to fetch logs (e.g., 3d, 1h, 10m, 24h)
--search string Search for logs containing specific text
--severity string Filter by severity levels (comma-separated): FATAL,ERROR,WARNING,INFO,DEBUG,TRACE,UNKNOWN
--start string Start time for logs (RFC3339 format or YYYY-MM-DD)
--utc Display timestamps in UTC instead of local timezone
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# bl new
Source: https://docs.blaxel.ai/cli-reference/commands/bl_new
## bl new
Scaffold a new project from a template (agent, mcp, sandbox, job, volume-template)
### Synopsis
Create a new Blaxel resource from templates.
This command scaffolds a new project with the necessary configuration files,
dependencies, and example code to get you started quickly.
Resource Types:
agent - AI agent application that can chat, use tools, and access data
Use cases: Customer support bots, coding assistants, data analysts
mcp - Model Context Protocol server that extends agent capabilities
Use cases: Custom tools, API integrations, database connectors
sandbox - Isolated execution environment for testing and running code
Use cases: Code execution, testing, isolated workloads
job - Batch processing task that runs on-demand or on schedule
Use cases: ETL pipelines, data processing, scheduled workflows
volumetemplate - Pre-configured volume template for creating volumes
Use cases: Persistent storage templates, data volume configurations
Template Discovery:
Use --list to see all available templates with descriptions before creating.
Combine with a type argument to filter: 'bl new agent --list'
Interactive Mode (Recommended):
When called without arguments, the CLI guides you through:
1. Choosing a resource type
2. Selecting a template (language/framework)
3. Naming your project directory
4. Setting up initial configuration
Non-Interactive Mode:
Use --template and --yes flags for automation and CI/CD workflows.
After Creation:
1. cd into your new directory
2. Review and customize the generated blaxel.toml configuration
3. Develop your resource locally with 'bl serve --hotreload'
4. Test it works as expected
5. Deploy to Blaxel with 'bl deploy'
```
bl new [type] [directory] [flags]
```
### Examples
```
# Interactive creation (recommended for beginners)
bl new
# Create agent interactively
bl new agent
# Create agent with specific template
bl new agent my-agent -t google-adk-py
# Create MCP server with default template (non-interactive)
bl new mcp my-mcp-server -y -t mcp-py
# Create job with specific template
bl new job my-batch-job -t jobs-py
# List all available templates
bl new --list
# List agent templates only
bl new agent --list
# List templates as JSON (for machine parsing)
bl new --list -o json
# Full workflow example:
bl new agent my-assistant
cd my-assistant
bl serve --hotreload # Test locally
bl deploy # Deploy to Blaxel
bl chat my-assistant # Chat with deployed agent
```
### Options
```
-h, --help help for new
-l, --list List available templates with descriptions
-t, --template string Template to use (skips interactive prompt)
-y, --yes Skip interactive prompts and use defaults
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# bl push
Source: https://docs.blaxel.ai/cli-reference/commands/bl_push
## bl push
Build and push a container image to the Blaxel registry
### Synopsis
Build and push a container image to the Blaxel registry without creating a deployment.
This command packages your code, uploads it, and builds a container image that
is stored in the workspace registry. Unlike 'bl deploy', this command does NOT
create or update any resource (agent, function, sandbox, or job).
The process includes:
1. Reading configuration from blaxel.toml
2. Packaging source code (respects .blaxelignore)
3. Uploading to Blaxel's build system via presigned URL
4. Building container image
5. Streaming build logs until the image is ready
You must run this command from a directory containing a blaxel.toml file.
```
bl push [flags]
```
### Examples
```
# Push current directory as an image
bl push
# Push with a custom name
bl push --name my-image
# Push a specific subdirectory
bl push -d ./packages/my-agent
# Push specifying a resource type
bl push --type agent
```
### Options
```
-d, --directory string Source directory path
--docker-config string Path to a Docker config.json file with registry credentials
-h, --help help for push
-n, --name string Name for the image (defaults to directory name)
-c, --registry-cred stringArray Registry credentials (format: registry=username:password, repeatable)
-t, --type string Resource type (agent, function, sandbox, job). Defaults to blaxel.toml type; required if not set
-y, --yes Skip interactive mode
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# bl run
Source: https://docs.blaxel.ai/cli-reference/commands/bl_run
## bl run
Execute a resource (agent, model, job, function, sandbox)
### Synopsis
Execute a Blaxel resource with custom input data.
Different resource types behave differently when run:
* agent: Send a single request (non-interactive, unlike 'bl chat')
Returns agent response for the given input
* model: Make an inference request to an AI model
Calls the model's API endpoint with your data
* job: Start a job execution with batch input
Processes multiple tasks defined in JSON batch file
* function/mcp: Invoke an MCP server function
Calls a specific tool or method
* sandbox (sbx): Execute a command in a sandbox VM
Runs shell commands via the sandbox process API
Local vs Remote:
* Remote (default): Runs against deployed resources in your workspace
* Local (--local): Runs against locally served resources (requires 'bl serve')
Input Formats:
* Inline JSON with --data json-object
* From file with --file path/to/input.json
Streaming:
When agents respond via SSE (Server-Sent Events), the CLI automatically detects
and parses the stream. Use --stream to explicitly request streaming mode and
print chunks in real-time as they arrive.
Advanced Usage:
Use --path, --method, and --params for custom HTTP requests to your resources.
This is useful for testing specific endpoints or non-standard API calls.
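For jobs, the `--file` flag points at a JSON batch file whose tasks are delivered to your job's handler. The exact schema depends on how your job reads its input; as a rough, hypothetical sketch (field names here are illustrative, not a guaranteed contract), a batch file carrying two task payloads might look like:

```json
{
  "tasks": [
    { "name": "alice" },
    { "name": "bob" }
  ]
}
```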
```
bl run resource-type resource-name [flags]
```
### Examples
```
# Run agent with inline data
bl run agent my-agent --data '{"inputs": "Summarize this text"}'
# Run agent with file input
bl run agent my-agent --file request.json
# Run agent with real-time streaming output
bl run agent my-agent --data '{"inputs": "hello"}' --stream
# Run agent with timeout
bl run agent my-agent --data '{"inputs": "hello"}' --timeout 120
# Run job with batch file
bl run job my-job --file batches/process-users.json
# Run job locally for testing (requires 'bl serve' in another terminal)
bl run job my-job --local --file batch.json
# Run job locally with 4 concurrent workers
bl run job my-job --local --file batch.json --concurrent 4
# Run model with custom endpoint
bl run model my-model --path /v1/chat/completions --data '{"messages": [...]}'
# Run with query parameters
bl run agent my-agent --data '{}' --params "stream=true" --params "max_tokens=100"
# Run with custom headers
bl run agent my-agent --data '{}' --header "X-User-ID: 123"
# Debug mode (see full request/response details)
bl run agent my-agent --data '{}' --debug
# Get JSON output for machine parsing
bl run agent my-agent --data '{"inputs": "hello"}' -o json
# Execute a command in a sandbox
bl run sandbox my-sandbox --path /process --data '{"command": "echo hello"}'
# Execute a command and wait for it to complete (returns stdout/stderr in response)
bl run sandbox my-sandbox --path /process --data '{"command": "ls -al /app", "waitForCompletion": true}'
# Execute a command with a working directory and a process name
bl run sandbox my-sandbox --path /process --data '{"command": "npm install", "name": "install-deps", "workingDir": "/app"}'
# Execute a long-running command with keep-alive (prevents sandbox auto-standby)
bl run sandbox my-sandbox --path /process --data '{"command": "npm run dev -- --port 3000", "name": "dev-server", "keepAlive": true}'
# You can also use the 'sbx' shorthand
bl run sbx my-sandbox --path /process --data '{"command": "python script.py", "waitForCompletion": true}'
```
### Options
```
-c, --concurrent int Number of concurrent workers for local job execution (default 1)
-d, --data string JSON body data for the inference request
--debug Debug mode
--directory string Directory to run the command from
-e, --env-file strings Environment file to load (default [.env])
-f, --file string Input from a file
--header stringArray Request headers in 'Key: Value' format. Can be specified multiple times
-h, --help help for run
--local Run locally
--method string HTTP method for the inference request (default "POST")
--params strings Query params sent to the inference request
--path string path for the inference request
-p, --port int Port to connect to when using --local (default 1338)
-s, --secrets strings Secrets to pass to the execution
--stream Stream SSE responses in real-time
--timeout int Request timeout in seconds (default: no timeout)
--upload-file string This transfers the specified local file to the remote URL
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# bl serve
Source: https://docs.blaxel.ai/cli-reference/commands/bl_serve
## bl serve
Start a local development server for your project
### Synopsis
Start a local development server for your Blaxel project.
This runs your agent or MCP server locally on your machine for rapid
development and testing. Perfect for the inner development loop where you
want to iterate quickly without deploying to the cloud.
Supported Languages:
* Python (requires pyproject.toml or requirements.txt)
* TypeScript/JavaScript (requires package.json)
* Go (requires go.mod)
Hot Reload:
Enable --hotreload to automatically restart your server when code changes
are detected. This dramatically speeds up development by eliminating manual
restarts.
Testing Locally:
While your server is running, test it with:
* bl chat agent-name --local (for agents)
* bl run agent agent-name --local --data '' (for agents)
Workflow:
1. bl serve --hotreload Start local server with auto-reload
2. Edit your code Make changes
3. Test immediately Server reloads automatically
4. bl deploy Deploy when ready
```
bl serve [flags]
```
### Examples
```
# Basic serve with hot reload (recommended)
bl serve --hotreload
# Serve on custom port
bl serve --port 8080
# Serve specific subdirectory in monorepo
bl serve -d packages/my-agent
# Serve with environment variables
bl serve -e .env.local
# Serve with secrets (for testing)
bl serve -s API_KEY=test-key -s DB_PASSWORD=secret
# Full development workflow
bl serve --hotreload # Terminal 1: Run server
bl chat my-agent --local # Terminal 2: Test agent
```
### Options
```
-d, --directory string Serve the project from a sub directory
-e, --env-file strings Environment file to load (default [.env])
-h, --help help for serve
-H, --host string Bind socket to this host. If 0.0.0.0, listens on all interfaces (default "0.0.0.0")
--hotreload Watch for changes in the project
-p, --port int Bind socket to this port (default 1338)
-r, --recursive Serve the project recursively (default true)
-s, --secrets strings Secrets to deploy
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# bl share
Source: https://docs.blaxel.ai/cli-reference/commands/bl_share
## bl share
Share a resource with another workspace
### Synopsis
Share Blaxel resources with other workspaces in your account.
Currently supports sharing container images.
### Examples
```
# Share an image with another workspace
bl share image agent/my-agent --workspace other-workspace
```
### Options
```
-h, --help help for share
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
* [bl share image](/cli-reference/commands/bl_share_image) - Share an image with another workspace
# bl share image
Source: https://docs.blaxel.ai/cli-reference/commands/bl_share_image
## bl share image
Share an image with another workspace
### Synopsis
Share a container image with another workspace in your account.
Only the metadata is copied — the image data stays in the source workspace.
The image reference format is: resourceType/imageName
* resourceType: Type of resource (e.g., agent, function, job, sandbox)
* imageName: The name of the image
```
bl share image resourceType/imageName [flags]
```
### Examples
```
# Share an image with another workspace
bl share image agent/my-agent --workspace other-workspace
```
### Options
```
-h, --help help for image
-w, --workspace string Target workspace to share with (required)
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
```
### SEE ALSO
* [bl share](/cli-reference/commands/bl_share) - Share a resource with another workspace
# bl token
Source: https://docs.blaxel.ai/cli-reference/commands/bl_token
## bl token
Retrieve authentication token for a workspace
### Synopsis
Retrieve the authentication token for the specified workspace.
The token command displays the current authentication token used by the CLI
for API requests. This token is automatically managed and refreshed as needed.
Authentication Methods:
* API Key: Returns the API key
* OAuth (Browser Login): Returns the access token (refreshed if needed)
* Client Credentials: Returns the access token (refreshed if needed)
The token is retrieved from your stored credentials and will be automatically
refreshed if it's expired or about to expire.
Examples:
```bash theme={null}
# Get token for current workspace
bl token
# Get token for specific workspace
bl token my-workspace
# Use in scripts (get just the token value)
export TOKEN=$(bl token)
```
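Because the command prints only the token value, it composes well with other tools. For example, you can attach it as a bearer token to an HTTP request (the URL below is a placeholder, not a real Blaxel endpoint; substitute the invocation URL of your own deployed resource):

```bash theme={null}
# Fetch a fresh token and use it in an authenticated request
TOKEN=$(bl token my-workspace)
# Placeholder URL: replace with your resource's actual endpoint
curl -H "Authorization: Bearer $TOKEN" https://example.invalid/my-agent
```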
```
bl token [workspace] [flags]
```
### Options
```
-h, --help help for token
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# bl unshare
Source: https://docs.blaxel.ai/cli-reference/commands/bl_unshare
## bl unshare
Unshare a resource from another workspace
### Synopsis
Remove shared Blaxel resources from other workspaces.
Currently supports unsharing container images.
### Examples
```
# Unshare an image from another workspace
bl unshare image agent/my-agent --workspace other-workspace
```
### Options
```
-h, --help help for unshare
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
* [bl unshare image](/cli-reference/commands/bl_unshare_image) - Unshare an image from another workspace
# bl unshare image
Source: https://docs.blaxel.ai/cli-reference/commands/bl_unshare_image
## bl unshare image
Unshare an image from another workspace
### Synopsis
Remove a shared image from another workspace.
This removes the metadata copy from the target workspace.
The original image in the source workspace is not affected.
The image reference format is: resourceType/imageName
* resourceType: Type of resource (e.g., agent, function, job, sandbox)
* imageName: The name of the image
```
bl unshare image resourceType/imageName [flags]
```
### Examples
```
# Unshare an image from another workspace
bl unshare image agent/my-agent --workspace other-workspace
```
### Options
```
-h, --help help for image
-w, --workspace string Target workspace to unshare from (required)
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
```
### SEE ALSO
* [bl unshare](/cli-reference/commands/bl_unshare) - Unshare a resource from another workspace
# bl upgrade
Source: https://docs.blaxel.ai/cli-reference/commands/bl_upgrade
## bl upgrade
Upgrade the Blaxel CLI to the latest version
### Synopsis
Upgrade the Blaxel CLI to the latest version.
This command automatically detects your installation method and updates
the CLI in the correct location to avoid version conflicts.
Supported installation methods:
* Homebrew (brew)
* Manual installation (install.sh)
* Direct binary download
Examples:
```bash theme={null}
# Upgrade to the latest version
bl upgrade
# Upgrade to a specific version
bl upgrade --version v1.2.3
# Force reinstall even if already on latest version
bl upgrade --force
```
```
bl upgrade [flags]
```
### Options
```
-f, --force Force reinstall even if already on latest version
-h, --help help for upgrade
--version string Target version to upgrade to (e.g., v1.2.3)
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# bl version
Source: https://docs.blaxel.ai/cli-reference/commands/bl_version
## bl version
Print the version number
```
bl version [flags]
```
### Options
```
-h, --help help for version
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# bl workspaces
Source: https://docs.blaxel.ai/cli-reference/commands/bl_workspaces
## bl workspaces
List workspaces or switch the current workspace
### Synopsis
List and manage Blaxel workspaces.
A workspace is an isolated environment within Blaxel that contains your
resources (agents, jobs, models, sandboxes, etc.). Workspaces provide:
* Isolation between projects or environments (dev/staging/prod)
* Separate billing and resource quotas
* Team collaboration boundaries
* Independent access control and permissions
The current workspace (marked with `*`) determines where commands operate.
All commands like 'bl deploy', 'bl get', 'bl run' use the current workspace
unless you override with the --workspace flag.
To switch workspaces, provide the workspace name as an argument.
To list all authenticated workspaces, run without arguments.
```
bl workspaces [workspace] [flags]
```
### Examples
```
# List all authenticated workspaces
bl workspaces
# Switch to different workspace
bl workspaces production
# Use specific workspace for one command (doesn't switch current)
bl get agents --workspace staging
# Get only the current workspace name
bl workspaces --current
# Common multi-workspace workflow
bl workspaces dev # Switch to dev
bl deploy # Deploy to dev
bl workspaces prod # Switch to prod
bl deploy # Deploy to prod
```
### Options
```
--current Display only the current workspace name
-h, --help help for workspaces
```
### Options inherited from parent commands
```
-o, --output string Output format. One of: pretty,yaml,json,table
--skip-version-warning Skip version warning
-u, --utc Enable UTC timezone
-v, --verbose Enable verbose output
-w, --workspace string Specify the workspace name
```
### SEE ALSO
* [bl](/cli-reference/commands/bl) - Blaxel CLI - manage and deploy AI agents, sandboxes, and resources
# Overview
Source: https://docs.blaxel.ai/cli-reference/introduction
Interact with Blaxel through a command line interface.
Blaxel CLI (`bl`) is a command line tool to interact with the Blaxel APIs.
## Install
The recommended way to install Blaxel CLI is with [Homebrew](https://brew.sh/): make sure it is installed on your machine. We are currently in the process of supporting additional installers; in the meantime, the cURL method below works on any system.
Install Blaxel CLI by running the two following commands successively in a terminal:
```shell theme={null}
brew tap blaxel-ai/blaxel
```
```shell theme={null}
brew install blaxel
```
Install Blaxel CLI by running the following command in a terminal (non-sudo alternatives below):
```shell theme={null}
curl -fsSL \
https://raw.githubusercontent.com/blaxel-ai/toolkit/main/install.sh \
| BINDIR=/usr/local/bin sudo -E sh
```
If you need a non-sudo alternative (**it will ask you questions to configure**):
```shell theme={null}
curl -fsSL \
https://raw.githubusercontent.com/blaxel-ai/toolkit/main/install.sh \
| sh
```
If you need to install a specific version (e.g. v0.1.21):
```shell theme={null}
curl -fsSL \
https://raw.githubusercontent.com/blaxel-ai/toolkit/main/install.sh \
| VERSION=v0.1.21 sh
```
On Windows, the most reliable approach is to run the Linux installation commands above inside Windows Subsystem for Linux (WSL).
First install WSL if it is not already set up:
* Open PowerShell as Administrator
* Run: `wsl --install -d Ubuntu-20.04`
* Restart the computer
* From the Microsoft Store, install the Ubuntu app
Then run the Linux installation commands above from your WSL terminal. Make sure to install using **sudo**.
## Update
Update Blaxel CLI by running the following command in a terminal:
```shell theme={null}
bl upgrade
```
If you need to update to a specific version (e.g. v0.1.21):
```shell theme={null}
bl upgrade --version 0.1.21
```
You can also upgrade using Homebrew:
```shell theme={null}
brew upgrade blaxel
```
On Windows, run the update commands above from your WSL terminal, as described in the installation section.
## Shell autocompletion
To enable shell autocompletion for Blaxel CLI commands, run one of the following:
```zsh zsh theme={null}
echo 'eval "$(bl completion zsh)"' >> ~/.zshrc
```
```bash bash theme={null}
echo 'eval "$(bl completion bash)"' >> ~/.bashrc
```
```fish fish theme={null}
echo 'bl completion fish' > ~/.config/fish/completions/bl.fish
```
```powershell powershell theme={null}
if (!(Test-Path -Path $PROFILE)) {
New-Item -ItemType File -Path $PROFILE -Force
}
Add-Content -Path $PROFILE -Value '(& bl completion powershell) | Out-String | Invoke-Expression'
```
## Get started
First, create a [workspace](../Security/Workspace-access-control) on the Blaxel console. Then, login to Blaxel:
```bash theme={null}
bl login
```
You can log in to as many workspaces as you want. If you have multiple workspaces, the one you last logged into is used automatically in every subsequent command you run.
To set your context to a different workspace you are already logged in to, use the following command:
```bash theme={null}
bl workspaces your-workspace
# You can retrieve the list of all your workspaces by running:
bl workspaces
```
Basic commands to scaffold, develop, and deploy your resources:
| Command | Purpose |
| ----------- | --------------------------------------------------------------------------------- |
| `bl new` | Scaffold a new local project (code + `blaxel.toml`) |
| `bl deploy` | Build image + push image + create/update resource + watch logs |
| `bl push` | Build image + push image to Blaxel registry (no resource created or updated) |
| `bl apply` | Declarative resource creation/update using YAML files, similar to `kubectl apply` |
| `bl serve` | Local execution *(not available for sandboxes)* |
`bl deploy` is an all-in-one command that packages local files into an image, pushes it as a template, and deploys it as a resource on Blaxel. Use the `--skip-build` flag to skip image rebuilding. Use `bl push` instead to only build and push the image.
Interact with existing resources in your workspace:
```shellscript theme={null}
bl get sandboxes
```
## Options
```text theme={null}
-h, --help Get the help for Blaxel
-w, --workspace string Specify the Blaxel workspace to work on.
-o, --output string Output format. One of: pretty, yaml, json, table
-v, --verbose Enable verbose output
```
# Deployment File Reference
Source: https://docs.blaxel.ai/deployment-reference
Complete reference for the blaxel.toml configuration file, including resource types, build settings, runtime options, environment variables, and volumes.
You can customize your Blaxel deployment with Blaxel's configuration file `blaxel.toml`.
The format of Blaxel's configuration file is described below.
```toml theme={null}
# resource type (optional)
# possible values: ["agent", "function", "job", "sandbox", "volume-template"]
# type = "agent"
# resource name (optional)
# defaults to the directory name
# name = "my-resource"
# public access to resource (only for resource type="agent")
# possible values: [true, false]
# defaults to false
# public = false
# deployment region (optional, for resource type="agent" or resource type="function")
# pins the resource to a specific region instead of global distribution
# required when attaching volumes to an agent
# region = "us-pdx-1"
# build configuration (optional)
# [build]
# automatic image slimming
# possible values: [true, false]
# defaults to true
# slim = true
# entrypoint configuration (optional)
# [entrypoint]
# entrypoint command for production environment
# prod = "python main.py"
# entrypoint command for development environment
# dev = "python main.py --dev"
# environment variables (optional)
# [env]
# key-value pairs
# MY_VAR = "my-value"
# runtime configuration (optional)
# [runtime]
# memory (MB)
# memory = 4096
# job configuration
# maxConcurrentTasks = 10
# task timeout in seconds (only valid for resource type="job")
# timeout = 900
# maximum number of retries
# maxRetries = 0
# volumes (optional, for resource type="agent" and resource type="sandbox")
# attaches persistent storage to the resource
# the resource must be pinned to the same region as the volume
# [[volumes]]
# volume name
# name = "my-volume"
# volume mount path
# mountPath = "/data"
# volume templates directory (only for resource type="volume-template")
# directory = "."
# volume default size in MB (only for resource type="volume-template")
# defaultSize = 1024
# job trigger (optional)
# [[triggers]]
# trigger identifier
# id = "my-trigger"
# trigger type
# possible values: ["schedule", "http", "http-async"]
# "schedule" trigger (only valid for resource type="job")
# type = "schedule"
# "http" trigger (only valid for resource type="agent" and resource type="function")
# type = "http"
# "http-async" trigger (only valid for resource type="agent")
# type = "http-async"
# [[triggers.configuration]]
# cron expression (only for trigger type="schedule")
# schedule = "0 * * * *"
# endpoint URL (only for trigger type="http" and trigger type="http-async")
# path = "/webhook"
# endpoint authentication (only for resource type="function" and trigger type="http")
# possible values: ["public", "private"]
# authenticationType = "public"
# callback URL (only for trigger type="http-async")
# callbackUrl = "https://webhook.site/3955e30a-e4b6-4fa5-8a28-598ce0f53386"
```
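Putting the pieces together, a minimal `blaxel.toml` for an agent with an attached volume might look like this (illustrative values; field names as documented above — region pinning is required when attaching volumes to an agent):

```toml theme={null}
type = "agent"
name = "my-agent"
region = "us-pdx-1"  # required when attaching volumes to an agent

[entrypoint]
prod = "python main.py"
dev = "python main.py --dev"

[env]
MY_VAR = "my-value"

[runtime]
memory = 4096

[[volumes]]
name = "my-volume"
mountPath = "/data"
```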
# Overview
Source: https://docs.blaxel.ai/sdk-reference/introduction
Manage Blaxel resources programmatically using our SDKs.
Blaxel features SDKs in three languages: **Python**, **TypeScript**, and **Go**. Below you'll find installation instructions, as well as documentation on **how the SDK authenticates** to Blaxel.
## Install
Install the TypeScript SDK.
Install the Python SDK.
Install the Go SDK.
## Prerequisites
To use any Blaxel SDK, you need a [Blaxel account](https://app.blaxel.ai) and the following environment variables:
| Variable | Description |
| -------------- | -------------------------- |
| `BL_WORKSPACE` | Your Blaxel workspace name |
| `BL_API_KEY` | Your Blaxel API key |
You can create an API key from the [Blaxel console](https://app.blaxel.ai/profile/security). Your workspace name is visible in the URL when you log in to the console (e.g. `app.blaxel.ai/{workspace}`).
Set them as environment variables:
```bash theme={null}
export BL_WORKSPACE=my-workspace
export BL_API_KEY=my-api-key
```
Or add them to a `.env` file at the root of your project.
## How authentication works
The Blaxel SDK authenticates with your workspace using credentials from these sources, in priority order:
1. when running on Blaxel, authentication is handled automatically
2. `BL_WORKSPACE` and `BL_API_KEY` environment variables or `.env` file (see [this page](../Agents/Variables-and-secrets) for other authentication options)
3. configuration file created locally when you log in through [Blaxel CLI](../cli-reference/introduction) (or deploy on Blaxel)
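The priority order above can be sketched as follows. This is an illustration of the resolution logic, not the SDK's actual implementation; the `BL_RUNNING_ON_BLAXEL` marker and the `config_file_creds` parameter are hypothetical stand-ins:

```python
import os

def resolve_credentials(env=os.environ, config_file_creds=None):
    """Illustrative credential resolution, mirroring the priority order above."""
    # 1. Running on Blaxel: the platform handles authentication automatically
    #    (represented here by a hypothetical marker variable).
    if env.get("BL_RUNNING_ON_BLAXEL"):
        return {"source": "platform"}
    # 2. Explicit environment variables (or a loaded .env file)
    if env.get("BL_WORKSPACE") and env.get("BL_API_KEY"):
        return {"source": "env", "workspace": env["BL_WORKSPACE"]}
    # 3. Local config file written by `bl login`
    if config_file_creds:
        return {"source": "config", **config_file_creds}
    raise RuntimeError("No Blaxel credentials found")
```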
When developing locally, you can also **log in to your workspace with Blaxel CLI:**
```bash theme={null}
bl login
```
This allows you to run Blaxel SDK functions that will automatically connect to your workspace without additional setup. When you deploy on Blaxel, this connection persists automatically.
## Data collection and privacy
[Read more about the data collected by the Blaxel SDKs](/Security/Data-collection-and-privacy).
## Complete SDK reference
Visit the GitHub pages below for detailed documentation on each SDK's commands and classes.
Open the GitHub repository for Blaxel SDK in TypeScript.
Open the GitHub repository for Blaxel SDK in Python.
Open the GitHub repository for Blaxel SDK in Go.
# Go SDK
Source: https://docs.blaxel.ai/sdk-reference/sdk-go
Manage Blaxel resources programmatically using our Go SDK.
Blaxel features an SDK in **Go**. To install, follow the instructions below.
## Install
```shell Go theme={null}
go get github.com/blaxel-ai/sdk-go
```
Or to use a specific version:
```shell Go theme={null}
go get -u 'github.com/blaxel-ai/sdk-go@v0.15.0'
```
Then import it in your code:
```go theme={null}
import (
	"github.com/blaxel-ai/sdk-go" // imported as blaxel
)
```
## Prerequisites
To use this SDK, you need a [Blaxel account](https://app.blaxel.ai) and the following environment variables:
| Variable | Description |
| -------------- | -------------------------- |
| `BL_WORKSPACE` | Your Blaxel workspace name |
| `BL_API_KEY` | Your Blaxel API key |
You can create an API key from the [Blaxel console](https://app.blaxel.ai/profile/security). Your workspace name is visible in the URL when you log in to the console (e.g. `app.blaxel.ai/{workspace}`).
Set them as environment variables:
```bash theme={null}
export BL_WORKSPACE=my-workspace
export BL_API_KEY=my-api-key
```
Or add them to a `.env` file at the root of your project. For other authentication options, see [Variables and secrets](../Agents/Variables-and-secrets).
Alternatively, you can authenticate via the [Blaxel CLI](../cli-reference/introduction):
```bash theme={null}
bl login
```
When you deploy on Blaxel, authentication is handled automatically.
## Complete SDK reference
Visit the GitHub page below for detailed documentation on the SDK's commands and classes.
Open the GitHub repository for Blaxel SDK in Go.
# Python SDK
Source: https://docs.blaxel.ai/sdk-reference/sdk-python
Manage Blaxel resources programmatically using our Python SDK.
Blaxel features an SDK in **Python**. To install, follow the instructions below.
## Install
```shell Python (pip) theme={null}
pip install blaxel
```
```shell Python (uv) theme={null}
uv pip install blaxel
```
```shell Python (uv add) theme={null}
uv init && uv add blaxel
```
## Prerequisites
To use this SDK, you need a [Blaxel account](https://app.blaxel.ai) and the following environment variables:
| Variable | Description |
| -------------- | -------------------------- |
| `BL_WORKSPACE` | Your Blaxel workspace name |
| `BL_API_KEY` | Your Blaxel API key |
You can create an API key from the [Blaxel console](https://app.blaxel.ai/profile/security). Your workspace name is visible in the URL when you log in to the console (e.g. `app.blaxel.ai/{workspace}`).
Set them as environment variables:
```bash theme={null}
export BL_WORKSPACE=my-workspace
export BL_API_KEY=my-api-key
```
Or add them to a `.env` file at the root of your project. For other authentication options, see [Variables and secrets](../Agents/Variables-and-secrets).
Alternatively, you can authenticate via the [Blaxel CLI](../cli-reference/introduction):
```bash theme={null}
bl login
```
When you deploy on Blaxel, authentication is handled automatically.
## Guides
Use Blaxel SDK to create and connect to sandboxes and sandbox previews.
Use Blaxel SDK to manage the filesystem, processes and logs of a sandbox.
Use Blaxel SDK to retrieve tools from a deployed MCP server.
Use Blaxel SDK to retrieve an LLM client when building agents.
Use Blaxel SDK to create and host a custom MCP server.
Use Blaxel SDK to chain calls to multiple agents.
## Complete SDK reference
Visit the GitHub page below for detailed documentation on the SDK's commands and classes.
Open the GitHub repository for Blaxel SDK in Python.
# TypeScript SDK
Source: https://docs.blaxel.ai/sdk-reference/sdk-ts
Manage Blaxel resources programmatically using our TypeScript SDK.
Blaxel features an SDK in **TypeScript**. To install, follow the instructions below.
## Install
To manage Blaxel resources, use the core SDK `@blaxel/core`:
```shell TypeScript (pnpm) theme={null}
pnpm install @blaxel/core
```
```shell TypeScript (npm) theme={null}
npm install @blaxel/core
```
```shell TypeScript (yarn) theme={null}
yarn add @blaxel/core
```
```shell TypeScript (bun) theme={null}
bun add @blaxel/core
```
For automatic trace and metric exports when running workloads with Blaxel SDK, you'll want to use `@blaxel/telemetry`. Import this SDK at your project's entry point.
```shell TypeScript (pnpm) theme={null}
pnpm install @blaxel/telemetry
```
```shell TypeScript (npm) theme={null}
npm install @blaxel/telemetry
```
```shell TypeScript (yarn) theme={null}
yarn add @blaxel/telemetry
```
```shell TypeScript (bun) theme={null}
bun add @blaxel/telemetry
```
For compatibility with agent frameworks (i.e. to import tools and models in each framework's format), install the corresponding SDK:
```shell TypeScript (pnpm) theme={null}
pnpm install @blaxel/langgraph
pnpm install @blaxel/vercel
pnpm install @blaxel/mastra
pnpm install @blaxel/llamaindex
```
```shell TypeScript (npm) theme={null}
npm install @blaxel/langgraph
npm install @blaxel/vercel
npm install @blaxel/mastra
npm install @blaxel/llamaindex
```
```shell TypeScript (yarn) theme={null}
yarn add @blaxel/langgraph
yarn add @blaxel/vercel
yarn add @blaxel/mastra
yarn add @blaxel/llamaindex
```
```shell TypeScript (bun) theme={null}
bun add @blaxel/langgraph
bun add @blaxel/vercel
bun add @blaxel/mastra
bun add @blaxel/llamaindex
```
## Prerequisites
To use this SDK, you need a [Blaxel account](https://app.blaxel.ai) and the following environment variables:
| Variable | Description |
| -------------- | -------------------------- |
| `BL_WORKSPACE` | Your Blaxel workspace name |
| `BL_API_KEY` | Your Blaxel API key |
You can create an API key from the [Blaxel console](https://app.blaxel.ai/profile/security). Your workspace name is visible in the URL when you log in to the console (e.g. `app.blaxel.ai/{workspace}`).
Set them as environment variables:
```bash theme={null}
export BL_WORKSPACE=my-workspace
export BL_API_KEY=my-api-key
```
Or add them to a `.env` file at the root of your project. For other authentication options, see [Variables and secrets](../Agents/Variables-and-secrets).
Alternatively, you can authenticate via the [Blaxel CLI](../cli-reference/introduction):
```bash theme={null}
bl login
```
When you deploy on Blaxel, authentication is handled automatically.
## Guides
Use Blaxel SDK to create and connect to sandboxes and sandbox previews.
Use Blaxel SDK to manage the filesystem, processes and logs of a sandbox.
Use Blaxel SDK to retrieve tools from a deployed MCP server.
Use Blaxel SDK to retrieve an LLM client when building agents.
Use Blaxel SDK to create and host a custom MCP server.
Use Blaxel SDK to chain calls to multiple agents.
## Complete SDK reference
Visit the GitHub page below for detailed documentation on the SDK's commands and classes.
Open the GitHub repository for Blaxel SDK in TypeScript.
# Blaxel Skills & MCP
Source: https://docs.blaxel.ai/skills-mcp
Configure Cursor, Claude Code, or other AI coding assistants to manage your Blaxel resources or access the Blaxel documentation.
Blaxel provides LLM-accessible tools that you can connect to your coding assistant (Cursor, Windsurf, Claude Desktop, etc.). There are several options:
1. The Blaxel **agent skills** let your coding agent autonomously create sandboxes, deploy agents, run jobs, and launch MCP servers on Blaxel using simple prompts, with support for both command-line (CLI) and code-driven (SDK) operations.
2. The Blaxel **resource management MCP server** lets your coding agent directly manage your Blaxel resources using natural language.
3. The Blaxel **documentation MCP server** lets your coding agent directly read and ask questions of this documentation in real-time for up-to-date commands and features.
4. An **`llms-full.txt`** text file with the entire documentation compiled and formatted for LLMs.
5. A **native AI assistant** built into this documentation portal.
## Agent skills
[Agent skills](https://agentskills.io/specification) are instruction sets to extend coding agents with additional knowledge and tools. An agent can load these instructions into its context and use them to complete the tasks assigned to it.
Blaxel offers open source agent skills to help coding agents autonomously spin up and manage sandboxes on Blaxel, and migrate sandbox code from other providers to Blaxel.
### Blaxel skills
The open source [Blaxel skills](https://github.com/blaxel-ai/agent-skills) let agents:
* create perpetual sandboxes on Blaxel to run code and execute commands
* start application servers within sandboxes
* generate URLs to preview applications running within sandboxes
* create Agent Drives (shared filesystems) on Blaxel
* create and deploy AI agents on Blaxel
* create and deploy MCP servers on Blaxel
* deploy and run batch jobs on Blaxel
Two skills are currently available:
* The `blaxel-cli` skill lets coding agents create and manage Blaxel resources from the command line using the `bl` CLI.
* The `blaxel-sdk` skill lets coding agents write code to create and manage Blaxel resources using the Blaxel SDKs.
Use the `blaxel-cli` skill when you're troubleshooting or bootstrapping a project on Blaxel. Use the `blaxel-sdk` skill when building agents or MCP servers on Blaxel, or when you need to programmatically manage Blaxel resources.
Add the Blaxel skills to your coding agent with the command below:
```shell theme={null}
npx skills add blaxel-ai/agent-skills
```
You'll be prompted to choose skills and select your coding agent and installation location.
The Blaxel skills are also [available as a ZIP file](https://github.com/blaxel-ai/agent-skills/releases) for [use with Claude](https://support.claude.com/en/articles/12512180-use-skills-in-claude).
See [examples of using the Blaxel skills](https://blaxel.ai/blog/give-your-coding-agent-blaxel-superpowers) in our announcement blog post.
### Blaxel migration skill
The open source [Blaxel migration skill](https://github.com/blaxel-ai/agent-migration-skills) gives agents the knowledge they need to migrate sandbox code from other providers to Blaxel. It is currently able to migrate sandbox code from Daytona, E2B and Modal.
Add the Blaxel migration skill to your coding agent with the command below, depending on which sandbox provider you wish to migrate from:
```shell theme={null}
# migrate from Daytona
npx skills add blaxel-ai/agent-migration-skills/daytona
# migrate from E2B
npx skills add blaxel-ai/agent-migration-skills/e2b
# migrate from Modal
npx skills add blaxel-ai/agent-migration-skills/modal
```
You'll be prompted to select your coding agent and installation location.
The Blaxel migration skill is also [available as a ZIP file](https://github.com/blaxel-ai/agent-migration-skills/releases) for [use with Claude](https://support.claude.com/en/articles/12512180-use-skills-in-claude).
## MCP servers
### MCP server for resource management
Blaxel's MCP server lets compatible AI apps (Cursor, Claude Code, Windsurf, etc.) manage your Blaxel resources using natural language. It is available as a **remote hosted HTTP server**.
Talk with your preferred agent to:
* Create new services
* Follow what is happening on your infrastructure
* Manage your users
#### How it works
The MCP server is hosted at `https://api.blaxel.ai/v0/mcp`.
Clients connect over HTTP(S) and stream MCP messages. Authentication is provided via headers.
##### Authentication headers
Provide your [API key](Security/Access-tokens) in the `Authorization` header. If your user has access to multiple workspaces, include the workspace header as well.
* Required: `Authorization: Bearer <YOUR_API_KEY>`
* Optional (multi-workspace): `X-Blaxel-Workspace: <YOUR_WORKSPACE>`
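As a sketch, building these headers in code (the header names mirror the ones above; the helper function itself is hypothetical, and the values are placeholders):

```python
def mcp_headers(api_key, workspace=None):
    """Build the authentication headers for the Blaxel MCP server."""
    headers = {"Authorization": f"Bearer {api_key}"}
    if workspace:  # only needed when your user has access to multiple workspaces
        headers["X-Blaxel-Workspace"] = workspace
    return headers
```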
#### Installation
##### Claude Code
[Add the remote HTTP server to your Claude Code](https://docs.anthropic.com/en/docs/claude-code/mcp#option-3%3A-add-a-remote-http-server) by running the following command:
```bash theme={null}
claude mcp add --transport http blaxel https://api.blaxel.ai/v0/mcp \
--header "Authorization: Bearer " \
--header "X-Blaxel-Workspace: "
```
##### Cursor
Add the server to `~/.cursor/mcp.json`:
```json theme={null}
{
  "mcpServers": {
    "blaxel": {
      "url": "https://api.blaxel.ai/v0/mcp",
      "headers": {
        "Authorization": "Bearer <YOUR_API_KEY>",
        "X-Blaxel-Workspace": "<YOUR_WORKSPACE>"
      }
    }
  }
}
```
##### Windsurf
Add to `~/.codeium/windsurf/mcp_config.json`:
```json theme={null}
{
  "mcpServers": {
    "blaxel": {
      "url": "https://api.blaxel.ai/v0/mcp",
      "headers": {
        "Authorization": "Bearer <YOUR_API_KEY>",
        "X-Blaxel-Workspace": "<YOUR_WORKSPACE>"
      }
    }
  }
}
```
##### Goose
Add to `~/.config/goose/config.yaml`:
```yaml theme={null}
extensions:
  blaxel-api:
    enabled: true
    type: streamable_http
    name: blaxel-api
    description: Blaxel API MCP Server
    uri: https://api.blaxel.ai/v0/mcp
    envs: {}
    env_keys: []
    headers:
      Authorization: Bearer <YOUR_API_KEY>
      X-Blaxel-Workspace: <YOUR_WORKSPACE>
    timeout: 300
    bundled: null
    available_tools: []
```
#### Selecting a workspace
If your account has multiple workspaces, set the active workspace by including the `X-Blaxel-Workspace` header. If omitted, your AI app may prompt you to specify one when a tool is invoked.
#### Example prompts
Ask your AI app to perform actions using Blaxel MCP tools, for example:
* List my agents
* Create a new MCP server named search with blaxel-search integration
* Invite [test@mydomain.com](mailto:test@mydomain.com) to my workspace
#### Supported actions
The Blaxel MCP server exposes tools for common platform operations.
You can list, create and remove services (Sandboxes, Agents, MCP Servers, Jobs, Models, …), as well as manage users and service accounts.
See the [open-source definitions](https://github.com/blaxel-ai/blaxel-mcp-server) for the latest list and schemas.
#### Running locally
Prefer using the hosted server at `https://api.blaxel.ai/v0/mcp`. If you must run locally or customize behavior, clone the repository and follow its instructions.
#### Limitations
* The server requires a valid API key.
* When using multiple workspaces, `X-Blaxel-Workspace` must be provided to disambiguate.
Check out Blaxel MCP server's [source here](https://github.com/blaxel-ai/blaxel-mcp-server).
### MCP server for documentation
You can also give your coding assistant real-time access to this documentation.
Connect your coding assistant directly to the Blaxel documentation via MCP at `https://docs.blaxel.ai/mcp`.
#### Claude Code
[Add the remote HTTP server to your Claude Code](https://docs.anthropic.com/en/docs/claude-code/mcp#option-3%3A-add-a-remote-http-server) by running the following command:
```bash theme={null}
claude mcp add --transport http blaxel-docs https://docs.blaxel.ai/mcp
```
#### Cursor
1. Open **Cursor Settings**
2. Go to **MCP & Integrations**
3. Click **"+ Add a custom MCP server"**
4. Add this configuration:
```json theme={null}
{
  "mcpServers": {
    ... // Your other MCP servers
    "blaxel-docs": {
      "url": "https://docs.blaxel.ai/mcp"
    }
  }
}
```
## llms-full.txt
A compiled text document of the entire documentation, formatted for LLMs. Copy and paste it directly into your coding assistant's prompt:
[https://docs.blaxel.ai/llms-full.txt](https://docs.blaxel.ai/llms-full.txt)
## Built-in documentation agent
This documentation portal has a built-in AI assistant. Click "✨**Ask AI**" at the top of any page to use it.
# Troubleshooting
Source: https://docs.blaxel.ai/troubleshooting
Resolve common issues with Blaxel deployments including build failures, runtime errors, connectivity problems, and CLI authentication.
This page describes common issues and how to troubleshoot them. If your issue is not covered below, you can also [check our knowledge base](https://support.blaxel.ai).
## Preview URLs return a `502 Bad Gateway` server error
This error occurs when the application server is either not running inside the sandbox, or is running but binding to `localhost`. In the latter case, it is only reachable inside the sandbox and the external preview URL will not be able to access it.
To resolve this error, first confirm that the application server is running. If it is, then ensure that it is configured to bind to IP address `0.0.0.0`, so that it listens on all available network interfaces. Once this is done, restart the application server. To avoid IPv4/IPv6 compatibility issues, we recommend using the environment variable `HOST` instead of setting the address to a static value.
For example, in code:
```tsx TypeScript theme={null}
const host = process.env.HOST || '0.0.0.0';
app.listen(3000, host, () => {
  console.log(`Server is running`);
});
```
```python Python theme={null}
host = os.environ.get("HOST", "0.0.0.0")
app.run(host=host, port=3000)
```
or at the command line:
```shell TypeScript theme={null}
npm run dev -- --host 0.0.0.0
# or
npm run serve -- --host 0.0.0.0
```
```shell Python theme={null}
python main.py --host 0.0.0.0
```
## Sandbox commands return a `502 Bad Gateway` server error
This error occurs for three possible reasons:
1. The sandbox API is called too soon after sandbox creation. The solution is to retry the request.
2. The sandbox API was unexpectedly terminated - for example, if the sandbox runs out of memory. The solution is to delete and recreate the sandbox, which also restarts the sandbox API.
3. The sandbox never started so the deployment status is set to `ERROR`. The solution is to recreate the sandbox.
The sandbox logs will typically contain additional information to identify the root cause of this error. Sandbox logs can be viewed in the [Blaxel Console](https://app.blaxel.ai) or with the Blaxel CLI using the command `bl logs sandbox SANDBOX-NAME`.
Deleting and recreating a sandbox will cause its current state to be lost.
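For the first case (calling the sandbox API too soon after creation), a short retry loop with exponential backoff is usually enough. A sketch, assuming a generic `call` function that raises an exception on a 502 response:

```python
import time

def retry_on_502(call, attempts=5, delay=0.5):
    """Retry a sandbox API call that may 502 right after sandbox creation."""
    for attempt in range(attempts):
        try:
            return call()
        except RuntimeError:  # stand-in for an HTTP 502 error from your client
            if attempt == attempts - 1:
                raise  # exhausted retries; surface the error
            time.sleep(delay * (2 ** attempt))  # exponential backoff
```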
## Sandbox deployment fails with `STARTUP TCP probe failed` or `DEADLINE_EXCEEDED` error
This error occurs when the server running inside the sandbox is not binding to the correct host and port.
To resolve this error, configure your server to bind to the host and port designated by the `HOST` and `PORT` environment variables. Blaxel automatically injects these variables during deployment.
For example:
```tsx TypeScript theme={null}
const port = parseInt(process.env.PORT || "80", 10);
const host = process.env.HOST || "0.0.0.0";
app.listen(port, host, () => {
  console.log(`Server is running`);
});
```
```python Python theme={null}
port = int(os.environ.get("PORT", "80"))
host = os.environ.get("HOST", "0.0.0.0")
app.run(host=host, port=port)
```
## Sandbox deployment fails with a `QUOTA_EXCEEDED` error or a 400 status code
This error indicates that the maximum memory limit for the account has been reached in the current tier.
Blaxel has a tiering system that unlocks higher limits and features on the platform as your tier progresses. To resolve this error, upgrade your account to a higher tier, which unlocks higher RAM limits for sandboxes.
To see more details on which quota was exceeded or to upgrade to a higher tier, visit the [**Quotas** page of the Blaxel Console](https://app.blaxel.ai/account/quotas).
## Models return a 429 error
This error occurs when a model reaches its maximum token limit.
To resolve this error, adjust the model's token usage policy. Token usage policies control the maximum number of tokens your model APIs can handle within a specific time period. You can configure the maximum number of input tokens, output tokens and/or total tokens using these policies.
## Sandbox commands return an `ENOSPC: no space left on device` error
This error occurs when your sandbox runs out of storage space. This usually happens when your file storage needs exceed the available space: for performance reasons, Blaxel sandboxes back the filesystem with [approximately 50% of their available memory](/Sandboxes/Overview#memory-and-filesystem) when possible.
To resolve this error, you have two options:
* Increase the available memory allocation for the sandbox, which also increases the available filesystem storage.
Increasing memory also increases CPU allocation
* 8GB memory = 4 CPU cores
* 16GB memory = 8 CPU cores
* Add storage using [volumes](/Sandboxes/Volumes). Adding storage requires deleting and recreating the sandbox first.
Volumes are slower than the in-memory filesystem.
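Based on the figures above (roughly 50% of memory reserved for the filesystem, and one CPU core per 2 GB of memory), you can estimate what a given allocation provides. An illustrative calculation, not an official sizing formula:

```python
def estimate_sandbox_resources(memory_gb):
    """Estimate CPU cores and filesystem space from a sandbox memory allocation."""
    return {
        "cpu_cores": memory_gb // 2,       # e.g. 8 GB -> 4 cores, 16 GB -> 8 cores
        "filesystem_gb": memory_gb * 0.5,  # ~50% of memory backs the filesystem
    }
```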
## Sandbox commands return a `404 Workspace Not Found` error
This error occurs when a workspace is not provisioned correctly.
To resolve this error, please contact us for support via [Discord](https://discord.gg/G3NqzUPcHP) or [online contact form](https://blaxel.ai/contact).
## Volume creation fails with an error
This error occurs when a volume is smaller than the template data, or when the workspace quota limit is breached.
To resolve this error, you can:
* Provision extra space for your volume beyond the template content size. We recommend provisioning at least 20-30% extra space.
* Contact us to increase your workspace quota limit.
## Hot reloads fail on Webpack previews
This error occurs when a Webpack server is configured to use the same port as local development, but the sandbox preview URL is mapped to a different port. As a result, when hot reloading previews, your client is trying to connect to the local development port instead of the sandbox's preview URL port.
To resolve this error, add a conditional in the `webSocketURL` of your `webpack/config.dev.js` to handle the different ports:
```javascript highlight={9-13} theme={null}
// ... rest of webpack/config.dev.js
module.exports = {
  devServer: {
    host: "::",
    port: 3000,
    allowedHosts: [".bl.run", ".beamlit.net", "localhost", "127.0.0.1"],
    client: {
      webSocketURL: {
        port: process.env.BL_CLOUD === "true" ? 443 : 3000,
      },
    },
    headers: {
      ...
    },
  },
  // ... rest of webpack/config.dev.js
};
```
# Knowledge base
Source: https://docs.blaxel.ai/troubleshooting/knowledge-base