Agents are AI-powered systems that can interact with their consumers, reason, and autonomously take action on the external world via APIs to read or write data.

Blaxel concepts

There are three types of workloads that you can deploy on Blaxel:

  • MCP servers: These are pieces of custom code that can be executed with specific arguments generated by an agent. They represent the tools that an agent can use to interact with the environment, such as a private database or API.
    • When you deploy an MCP server on Blaxel, you get a global endpoint to connect to the server using WebSockets. The server runs in a sandboxed environment.
  • Model APIs: These are the APIs to ML models that make inferences as part of the chained AI workflow. They typically represent action models: LLMs that can interact with humans in natural language and decide to use a tool at their disposal, generating the payload to use as the tool input (function calling).
    • When you deploy a model API on Blaxel, you get a global endpoint to call the model provider from a unified interface that handles credentials management for you via saved integration connections.
  • Agents: These are files of custom code that represent an agent’s logic, dictating which functions and models it can use, as well as which agents it is allowed to transfer a request to.
    • When you deploy an agent on Blaxel, you get a global endpoint to run the agent in a secure sandboxed environment. Agent-type workloads handle all communication with their functions, model APIs, and other linked agents — making sure all requests are authenticated against the end-user’s identity and access rights. Agents also provide complete lineage and audit tracking of consumer requests across all workloads.
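
To make the endpoint model concrete, here is a minimal client-side sketch of how the first two workload types might be reached once deployed. The base URL, path layout, and authentication header are illustrative assumptions rather than Blaxel’s documented API surface; the point is simply that an MCP server is consumed over a WebSocket connection, while a model API is called over plain HTTPS through a unified interface.

```typescript
// Illustrative sketch: base URL, paths, and headers are assumptions, not Blaxel's documented API.
import WebSocket from "ws";

const BASE = "https://run.blaxel.example/my-workspace";             // assumed endpoint shape
const AUTH = { Authorization: `Bearer ${process.env.BL_API_KEY}` }; // assumed auth scheme

// MCP server: tools are exposed behind a global WebSocket endpoint.
const toolServer = new WebSocket("wss://run.blaxel.example/my-workspace/functions/my-mcp-server", {
  headers: AUTH,
});
toolServer.on("open", () => console.log("connected to MCP server"));

// Model API: a unified HTTP interface in front of the model provider, with provider
// credentials handled through the saved integration connection.
const completion = await fetch(`${BASE}/models/my-llm/chat/completions`, {
  method: "POST",
  headers: { ...AUTH, "Content-Type": "application/json" },
  body: JSON.stringify({ messages: [{ role: "user", content: "Hello" }] }),
});
console.log(await completion.json());
```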

This is a high-level representation, as ultimately an AI agent is just software. Blaxel helps you design your agent in a future-proof way by breaking it down from a monolithic architecture into a functions-models-agents logic.

Blaxel provides the developer tools needed to build and run agents throughout their entire lifecycle. Agents, models, and functions become available as single global endpoints when pushed to Blaxel’s infrastructure. The Blaxel SDK also lets you connect to model APIs and tool servers while developing an agent, and handles execution of those API and tool calls at agent runtime. You can manage your agents’ lifecycle across stages with revisions, so you can iterate on prompts and code, then ship or roll back as needed.

Run your agent on Blaxel

The Blaxel SDK allows you to connect to and orchestrate other resources (such as model APIs, tool servers, or other agents) during development, and once agents are deployed it provides telemetry, secure connections to third-party systems or private networks, smart global placement of workloads, and much more.
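
The snippet below sketches the orchestration pattern this enables inside an agent: the action model is asked what to do, and if it responds with a tool call, the agent executes that tool and feeds the result back before producing a final answer. The helpers callModel and callTool are hypothetical stand-ins for what the Blaxel SDK and your framework of choice would provide; their names and signatures are assumptions for illustration only.

```typescript
// Minimal sketch of the function-calling loop an agent orchestrates.
// callModel and callTool are stubs standing in for SDK-managed calls to the
// deployed model API and MCP server; a real agent would not hard-code replies.

type ModelReply =
  | { type: "answer"; content: string }
  | { type: "tool_call"; tool: string; args: Record<string, unknown> };

type Message = { role: "user" | "assistant" | "tool"; content: string };

// Stub: a real agent would call the deployed model API endpoint here.
async function callModel(messages: Message[]): Promise<ModelReply> {
  return messages.some((m) => m.role === "tool")
    ? { type: "answer", content: "Final answer built from the tool result." }
    : { type: "tool_call", tool: "search_tickets", args: { query: "open bugs" } };
}

// Stub: a real agent would invoke the tool on the deployed MCP server here.
async function callTool(name: string, args: Record<string, unknown>): Promise<string> {
  return `result of ${name}(${JSON.stringify(args)})`;
}

// One round of the loop: the model decides, the agent executes the tool,
// and the model turns the tool result into a final answer.
export async function runAgent(userInput: string): Promise<string> {
  const messages: Message[] = [{ role: "user", content: userInput }];

  const reply = await callModel(messages);
  if (reply.type === "answer") return reply.content;

  const toolResult = await callTool(reply.tool, reply.args);
  messages.push({ role: "tool", content: toolResult });

  const final = await callModel(messages);
  return final.type === "answer" ? final.content : "unexpected second tool call";
}
```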

This packaging makes Blaxel fully agnostic of the framework used to develop your agent and doesn’t prevent you from deploying your software on another platform.

Developing an agent on Blaxel

Read our guide for developing any custom AI agent using Blaxel.

Deploy an agent on Blaxel

Learn how to deploy and manage your agent on Blaxel’s infrastructure.

Use your agents in your apps

Once your agent is deployed on Blaxel, you can start using it in your applications. Whether you need to process individual inference requests or integrate the agent into a larger application workflow, Blaxel provides flexible options for interaction. Learn how to authenticate requests, handle responses, and optimize your agent’s performance in production environments.
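
As a rough sketch of that integration, the snippet below calls a deployed agent’s global endpoint from an application, authenticating with an API key and handling the response. The endpoint shape, header, and payload format here are assumptions for illustration; the supported authentication options and request formats are covered in the linked guides.

```typescript
// Illustrative only: the endpoint shape, auth header, and payload format are assumptions.
async function askAgent(prompt: string): Promise<string> {
  const response = await fetch("https://run.blaxel.example/my-workspace/agents/my-agent", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.BL_API_KEY}`, // assumed API-key auth
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs: prompt }),
  });

  if (!response.ok) {
    throw new Error(`Agent call failed: ${response.status} ${await response.text()}`);
  }
  return response.text();
}

// Example usage inside an application workflow.
askAgent("Summarize the new support tickets from today").then(console.log);
```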
