Blaxel is cloud infrastructure built for AI agents. Our computing platform gives AI Builders the services, infrastructure, and developer experience optimized for building and deploying agentic AI — with a twist: your agents can also take the wheel.

An AI agent is any application that leverages generative AI models to take autonomous actions in the real world—whether by interacting with humans or using APIs to read and write data.

This portal provides comprehensive documentation, along with API, SDK, and CLI references, to help you operate the Blaxel platform.

Essential concepts

Blaxel is a cloud designed for agentic AI. It doesn’t force you into any particular workflow or predefined mold. While we encourage architecture patterns we consider more reliable, our toolkit gives you all the pieces you need to build agentic systems exactly the way you want.

Blaxel consists of modular services that are engineered to work seamlessly together — but you can also just use any one of them independently. Think of it as a purpose-built set of building blocks that you can use to power and ship agents.

The building blocks

At the heart of Blaxel is our flagship Agents Hosting service. Agents Hosting lets you deploy your AI agents as serverless, auto-scaling endpoints.

  • Completely framework-agnostic: just bring your code, and Blaxel builds and runs it for you.
  • Asynchronous endpoints with automatic queuing and retries.
  • Full observability, out of the box.
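Because Agents Hosting is framework-agnostic, a deployable agent can be as simple as an HTTP server that answers requests. A minimal sketch using only Python's standard library; the port, route, and payload shape here are illustrative assumptions, not Blaxel conventions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def run_agent(prompt: str) -> dict:
    # Placeholder for your agent logic: call a model, use tools, etc.
    return {"answer": f"echo: {prompt}"}


class AgentHandler(BaseHTTPRequestHandler):
    """Answers POST requests carrying a JSON body like {"prompt": "..."}."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        payload = json.dumps(run_agent(body.get("prompt", ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)


def serve(port: int = 8080) -> None:
    # Call serve() from your entrypoint; the hosting platform only needs a
    # process that listens for HTTP traffic on a port.
    HTTPServer(("0.0.0.0", port), AgentHandler).serve_forever()
```

Any language or framework that produces an equivalent server would work the same way: Blaxel builds the code you bring and runs it behind a serverless endpoint.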

The rest of Blaxel’s cloud services include:

  • MCP Servers Hosting - Deploy custom tool servers on fast-starting infrastructure to extend your agents’ capabilities.
  • Model Gateway - An intelligent routing layer to LLM providers with built-in telemetry, token cost control, and fallback capabilities.
  • Sandboxes - Near-instant-starting micro VMs that give agents their own compute runtime to execute code or run commands.
  • Batch Jobs - A scalable compute engine that lets agents schedule and execute many AI processing tasks in parallel, in the background.
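To make the Model Gateway's fallback behavior concrete, here is the pattern it automates, sketched as plain Python. The provider functions and their failure modes are hypothetical; on Blaxel, the gateway handles this routing along with telemetry and token cost tracking for you:

```python
from typing import Callable


def call_with_fallback(prompt: str, providers: list[Callable[[str], str]]) -> str:
    """Try each provider in order; return the first successful completion."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # a real gateway is more selective about errors
            errors.append(exc)
    raise RuntimeError(f"all providers failed: {errors}")


# Hypothetical providers: the first times out, the second answers.
def flaky_provider(prompt: str) -> str:
    raise TimeoutError("provider timed out")


def stable_provider(prompt: str) -> str:
    return f"completion for: {prompt}"


print(call_with_fallback("hello", [flaky_provider, stable_provider]))
# prints "completion for: hello"
```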

The Blaxel method

As the ultimate AI builder’s playground, Blaxel doesn’t require you to learn and adopt a framework or architecture. However, we do recommend best practices drawn from our experience working with top AI teams, and we aim to provide guardrails and framing as you build your agents.

  • Break down and distribute your agents whenever possible. A single monolithic agent handling all tool calls, LLM calls, and task workflows can be deployed to Blaxel, but it will be harder to maintain and monitor, and it will use resources inefficiently. The Blaxel SDK lets you split services and connect them from your code.
  • You can call LLM providers directly from your code, but we recommend you go through Blaxel’s Model Gateway for telemetry.
  • Similarly, while direct tool calls are possible, deploying separate MCP servers improves reusability, optimizes resources, and simplifies monitoring. Blaxel also optimizes placement globally when your serverless tool server needs to make multiple backend calls.
  • Break large agents into smaller, specialized sub-agents when possible—they’re easier to debug and observe.
  • Agentic systems naturally connect with many services both inside and outside your network, mixing North-South and East-West traffic in cloud terms. Strong observability is essential for reliability.
  • Reliability is the biggest challenge in agentic AI. Focus on fine-tuning your prompts, tool calls, data access, and orchestration logic—Blaxel will handle the execution.

The Blaxel powerhouse

When you deploy workloads to Blaxel, they run on a technical backbone called the Global Agentics Network. Its natively serverless architecture automatically scales computing resources without any server management on your part.

Global Agentics Network serves as the powerhouse for the entire Blaxel platform, from Agents Hosting to Sandboxes. It is natively distributed in order to optimize for low latency or other placement strategies. It supports multi-region deployment, enabling AI workloads (such as an AI agent processing inference requests) to run across multiple geographic areas or cloud providers. This is accomplished by decoupling the execution layer from a data layer: a smart distributed network that federates all those execution locations.

Finally, the platform implements advanced security measures, including fine-grained authentication and authorization through Blaxel IAM, ensuring that your AI infrastructure remains protected. You can interact with the platform through APIs, the CLI, the web console, and MCP servers.

Documentation structure

You might want to start with any of the following articles:

  • Get started: Deploy your first workload on Blaxel in just 3 minutes.
  • Product Documentation
    • Agents Hosting: Build and run AI agents that can scale.
    • MCP Servers Hosting: Expose capabilities and execute tool calls for your AI agents.
    • Model APIs: Learn about supported model types.
    • Sandboxes: Equip your agents with blazing-fast virtual machines to run commands and code.
    • Batch jobs: Run scheduled batch-processing tasks for your AI workflows.
    • Integrations: Discover how Blaxel works with other tools, frameworks, and platforms.
    • Observability: Monitor logs, traces, and metrics for your agent runs.
    • Policies Governance: Manage your AI deployment strategies.
    • Security: Implement robust security measures for your AI infrastructure.
  • API reference: Comprehensive guide to Blaxel’s APIs.
  • CLI reference: Learn how to use Blaxel’s command-line interface.