2025-09-28
Sandbox network v2
- Released v2 of the sandbox networking layer, which decreases latency by up to 85%
2025-09-25
New sandbox builder
- Our new builder has been rolled out, with optimized performance for image downloads and adjusted timeouts.
- Key improvements:
  - Larger image support: images can now exceed 10 GB
  - Optimized file system: better memory management with in-memory speed
  - Faster startups: further reduced cold start times
  - Cost efficiency: overall lower costs due to system optimizations
2025-09-24
New init command
- Added the `bl new` command to quickly initialize a new or existing project on Blaxel. You can also use any of the subcommands: `bl new agent`, `bl new mcp`, `bl new job`, `bl new sandbox`
2025-09-17
Sandbox lifecycle policies
Set time-to-live & expiration policies on a sandbox to automatically delete it based on specific conditions:
- expire at a specific date using the `expires` parameter
- expire after a total maximum lifetime using the `ttl` parameter
- expire after a period of inactivity using the `lifecycle.expirationPolicies` / `lifecycle.expiration_policies` parameter
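For illustration, a creation payload combining these policies might look like the sketch below. Only the parameter names (`expires`, `ttl`, `lifecycle.expiration_policies`) come from this entry; the surrounding structure, values and policy shape are assumptions, not the actual API schema.

```python
from datetime import datetime, timezone

# Hypothetical request body for creating a sandbox with lifecycle policies.
# Field names "expires", "ttl" and "lifecycle.expiration_policies" are taken
# from the changelog entry; everything else is an illustrative assumption.
def sandbox_spec(name: str) -> dict:
    return {
        "name": name,
        # Delete at a fixed date, regardless of activity.
        "expires": datetime(2025, 10, 1, tzinfo=timezone.utc).isoformat(),
        # Delete after a total maximum lifetime of 24 hours.
        "ttl": "24h",
        # Delete after 30 minutes without any activity.
        "lifecycle": {
            "expiration_policies": [
                {"type": "inactivity", "after": "30m"},
            ]
        },
    }

spec = sandbox_spec("my-sandbox")
```

Whichever condition triggers first would delete the sandbox.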
2025-09-10
Billing Explorer

- Added new Billing Explorer: monitor your consumption of Blaxel services in real-time.
2025-08-21
Custom domains

- Added the ability to register a custom domain in your workspace and verify it from Blaxel
- Added support for custom domains for sandbox previews
2025-08-08
TTLs on sandboxes
- Added option to set a TTL (time-to-live) on sandboxes to automatically delete them after a specific period
2025-08-06
Improved sandbox creation time
- Optimized the creation flow of new sandboxes, bringing it down to 2-3 seconds (plus network latency).
2025-07-15
New MCP servers: AgentMail, Context7

- Added new MCP server: AgentMail
- Added new MCP server: Context7
- Added new agent template: Email Support Agent - powered by AgentMail
2025-06-01
Deploy from Dockerfile
- Added the ability to deploy agents, MCP servers and jobs from a Dockerfile
2025-05-26
Improved latency on sandboxes
- Reduced typical duration of sandbox creation down to 4-8 seconds
- Reduced latency on calls made to a sandbox
2025-05-19
Sandboxes file system improvements
- Added ability to watch sub-directories
- Added ability to ignore some files/directories from watch
- Added ability to write multiple files at once
- Added ability to write binary content in the fs
- Added ability to get both stderr/stdout logs at once in batch mode
2025-05-14
Sessions for sandboxes
- Added ability to create sessions to operate sandboxes from a frontend client
2025-05-09
New MCP servers: HubSpot & Smartlead
- Added new MCP server: HubSpot
- Added new MCP server: Smartlead
2025-05-01
Sandboxes

- Boot time under 20ms
- Persistent filesystem across sessions
- Operable via both Python/TypeScript SDK and MCP server
2025-04-25
New MCP: Tavily

- Added new MCP server: Tavily. Enable your agents to search the web with Tavily’s API.
2025-04-17
New Blaxel Console

- We reworked the UI of Blaxel Console! Now more geeky with loads of new metrics and visibility on your infrastructure — and an integrated changelog.
2025-04-02
SDK v0.1.0

- Released a new major version of our SDK to give you access to lower-level features:
  - host custom MCP servers
  - connect to LLMs via the Blaxel gateway
  - connect to tools hosted on Blaxel
- Compatibility with all major agentic frameworks from day 1: LangChain, LangGraph, CrewAI, LlamaIndex, OpenAI Agents, Vercel AI SDK, Mastra.
- Improved cold-starts for agents and functions
2025-03-20
Functions are now MCP servers

- All functions on Blaxel are now exposed as MCP servers, even custom functions.
- Create a custom MCP server from a scaffolded repo using `bl create-mcp-server`
2025-03-06
Templates of agents

- Added 6 templates of agents
- Templates can be used when creating a new agent from the Blaxel Console. Deploying a template will deploy the agent to your GitHub organization and set up a live synchronization that watches for future updates
- The entire template list is available on our website
- Added information in Blaxel Console that an agent is synchronized via GitHub
- Updated the schema for functions to better match MCP standard
2025-02-28
Improved build times
Functions and agents now build and deploy 100% to 200% faster than before, thanks to a reworked build system.
2025-02-24
Voice agents
We’ve added a new low-latency voice agent template that enables real-time speech interaction with AI systems. Built using OpenAI’s Realtime API and LangGraph ReAct agent, this template supports multi-modal inputs/outputs and tool calling via Blaxel Functions.
2025-02-18
Revisions & canary deployments

- Each build of an object creates a new immutable revision, which can be deployed to redirect request traffic to it
- Ability to deploy a new revision using blue-green strategy
- Ability to rollback to previous revision
- Ability to split a percentage of traffic to a second revision (canary deployments)
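As a mental model, a canary split routes each incoming request to one of two revisions according to a configured percentage. A toy Python sketch of the idea (the revision names and 10% weight are illustrative; this is not Blaxel's actual router):

```python
import random

# Toy model of canary routing between two immutable revisions.
# "rev-1"/"rev-2" and the 10% weight are illustrative assumptions.
def pick_revision(canary_percent: float, rng: random.Random) -> str:
    """Route a single request: canary_percent% of traffic goes to the canary."""
    return "rev-2" if rng.random() * 100 < canary_percent else "rev-1"

rng = random.Random(42)  # seeded for reproducibility
hits = sum(pick_revision(10.0, rng) == "rev-2" for _ in range(100_000))
share = hits / 100_000   # close to 0.10 over many requests
```

Raising the percentage to 100 would complete the rollout; setting it to 0 is a rollback to the previous revision.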
2025-02-14
Sunsetting environments
We are discontinuing Beamlit environments. Stay tuned for an upcoming major update that will help you better manage your deployment lifecycles.
2025-02-12
New pricing plans

2025-02-04
HuggingFace integration

- Connect to public or private models from HuggingFace Inference API (serverless) and Inference Endpoints (dedicated)
- Deploy a model in HuggingFace Inference Endpoints (dedicated)
2025-01-31
Azure AI Foundry integration

- New integration: Azure AI Services (for OpenAI models and others)
- New integration: Azure Marketplace (for Llama models and others)
2025-01-26
🚀 We launched Beamlit beta!

- New world-class infrastructure for minimal cold starts and maximum reliability
- World map showing the origin of requests to your AI agents
- Many additional charts
2025-01-20
Agentic traces

- Added Agent Observability suite, with traces of agents’ requests.
- Added ability to launch a request in debug mode from the Playground to save the trace
2025-01-12
Latency metric tracking

- Added a new set of metrics for your agents, functions and models: end-to-end latencies (average, p50, p90, p99).
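For context, p50/p90/p99 are percentiles of the latency distribution: p99 is the value below which 99% of requests completed. A minimal nearest-rank sketch of how such figures are computed from raw samples (illustrative only, not Blaxel's metrics pipeline):

```python
# Nearest-rank percentile over raw latency samples, in milliseconds.
# Illustrates what p50/p90/p99 mean; not Blaxel's implementation.
def percentile(samples: list[float], p: float) -> float:
    """Smallest sample value that is >= p% of all samples."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

latencies = [12.0, 15.0, 11.0, 240.0, 14.0, 13.0, 16.0, 18.0, 17.0, 19.0]
p50 = percentile(latencies, 50)  # the median request
p99 = percentile(latencies, 99)  # dominated by the slowest request
```

This is why p99 is often much higher than the average: a single slow outlier barely moves the mean but defines the tail.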
2025-01-07
Beamlit extension for VS Code

- View and retrieve your workspace’s resources directly from the VS Code IDE
2025-01-03
New Beamlit console

- Beamlit console has been completely reworked, in order to center the experience around agents and their deployments.
- Removed custom model deployments and renamed external models “model APIs”
- Added statuses for deployments
- Added playbook to deploy an agent from local code in the Beamlit console
2024-12-20
Improved chart visuals

- Improved the visual aspect of charts and added options to change the time window of screens
- Updated the API reference for agents’, models’ and functions’ deployments
2024-12-17
CLI support for running agents and functions
- You can now use the Beamlit CLI to run requests on agents (`bl run agent`) and functions (`bl run function`)
- Added new metric: total number of requests
- You can now develop (and run) your own AI agents locally with the Beamlit SDK.
- Use CLI command `bl create-agent-app your-agent-name` to initialize a repository with the code skeleton to get started
- Use CLI command `bl serve --local` to run your AI agent on your local machine
2 new integrations are available to connect to external AI model providers:
- Cohere
- xAI
You can now use a GitHub Action to directly deploy Beamlit objects (agents, functions, models) from your GitHub Workflows.
Beamlit is currently in private alpha. Join our waitlist for access today.
Happy Thanksgiving! You can now connect external model providers as integrations in your Beamlit workspace in order to leverage Beamlit’s accelerated Global Inference Network to unify calls to third-party APIs.
- OpenAI
- Anthropic
- Mistral AI
Experience Beamlit first-hand and discover what it can do for you. Join the private alpha waitlist for Beamlit now.
Features
- Agents: run AI agents across multiple locations, so your consumers get the lowest latency and the highest availability.
- Models: connect or deploy generative AI models behind our global gateway, from public endpoints to custom fine-tuned models
- Functions: run serverless functions that provide your agents with the tools to interact with their environment
- Global Inference Network: make your agents available globally and locally with a network designed for AI inferences that optimizes for mission-critical latency while respecting your deployment, routing and cost policies.
- Environments: manage compliance at the enterprise level by enforcing policies directly in your development lifecycle
- Policies: define global rules and strategies regarding your deployment placement, inference request routing, and hardware usage.