Agents
The simplest way to run AI agents.
Agents are AI-powered systems that can interact with their consumers, reason, and autonomously take action on the external world through APIs that read or write data.
Blaxel concepts
There are three types of workloads that you can deploy on Blaxel:
- Functions: These are pieces of custom code that can be executed with specific arguments through an API endpoint. They represent the tools that an agent can use to interact with the environment.
  - When you deploy a function on Blaxel, you get a global endpoint to run the function in a sandboxed environment. It can retrieve integration connections in order to access private systems (such as a database).
- Model APIs: These are the APIs to ML models that make inferences as part of the chained AI workflow. They typically represent action models: LLMs that can interact with humans in natural language and decide to use a tool at their disposal, generating the payload to pass as the tool input (function calling).
  - When you deploy a model API on Blaxel, you get a global endpoint to call the model provider from a unified interface that handles credentials management for you via saved integration connections.
- Agents: These are files of custom code that represent an agent's logic, dictating which functions and models it can use, as well as which agents it is allowed to transfer a request to.
  - When you deploy an agent on Blaxel, you get a global endpoint to run the agent in a secure sandboxed environment. Agent-type workloads handle all communication with their functions, model APIs, and other linked agents, making sure all requests are authenticated against the end-user's identity and access rights. Agents also provide complete lineage and audit tracking of consumer requests across all workloads.
This is a high-level representation, as ultimately an entire AI agent is just software, but Blaxel helps you design your agent in a future-proof way by breaking it down into functions, model APIs, and agent logic.
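To make the relationship between the three workload types concrete, the sketch below shows agent logic that calls a model API and, when the model requests a tool, invokes a function before answering. It is written in TypeScript against plain HTTP endpoints; the URLs, payload shapes, and the `BL_API_KEY` variable are illustrative assumptions, not Blaxel's actual invocation API.

```typescript
// Hypothetical sketch of agent logic wiring a model API and a function (tool) together.
// Endpoint URLs, payload shapes, and the auth scheme are assumptions for illustration.

const MODEL_API = "https://run.example.com/my-workspace/models/my-llm";          // model API endpoint (assumed)
const FUNCTION_API = "https://run.example.com/my-workspace/functions/get-weather"; // function endpoint (assumed)

interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

async function callJson(url: string, body: unknown): Promise<any> {
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.BL_API_KEY}`, // assumed API-key auth
    },
    body: JSON.stringify(body),
  });
  return res.json();
}

// Agent logic: ask the model, execute any tool call it requests, then answer.
export async function runAgent(userMessage: string): Promise<string> {
  const modelReply = await callJson(MODEL_API, {
    messages: [{ role: "user", content: userMessage }],
  });

  const toolCall: ToolCall | undefined = modelReply.tool_call; // assumed response shape
  if (toolCall) {
    const toolResult = await callJson(FUNCTION_API, toolCall.arguments);
    const followUp = await callJson(MODEL_API, {
      messages: [
        { role: "user", content: userMessage },
        { role: "tool", name: toolCall.name, content: JSON.stringify(toolResult) },
      ],
    });
    return followUp.content;
  }
  return modelReply.content;
}
```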
Blaxel provides all the developer tools needed to build and run your agents throughout their lifecycle. When pushed to Blaxel, agents, models, and functions become available as single global endpoints. Blaxel lets you set the model APIs and tools an agent uses during development, and also executes those API and tool calls during agent runtime. You can manage your agents' lifecycle across stages with revisions, in order to iterate on prompts and code, and ship and roll back as needed.
Run your agent on Blaxel
To run your agent on Blaxel, you must package it using the Blaxel SDK. This packaging is agnostic of the framework you used to develop your agent and doesn't prevent you from deploying your software elsewhere. The Blaxel SDK allows Blaxel to identify the core resources in your code, so they can be provisioned correctly when you launch a deployment on Blaxel.
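As a rough illustration of what this packaging looks like conceptually, the sketch below wraps existing, framework-native agent logic in a small declaration of the resources it depends on. The registration shape (`name`, `models`, `functions`, `handler`) is hypothetical and does not reflect the Blaxel SDK's actual API; see the development guide below for the real entry points.

```typescript
// Conceptual sketch only: the declaration below is a hypothetical shape, not the Blaxel SDK API.
// The idea is that your agent logic stays framework-native, and a thin layer declares which
// model APIs and functions it references so they can be created at deployment time.

import { runAgent } from "./agent"; // your existing agent logic, written with any framework

export const agent = {
  name: "my-agent",
  models: ["my-llm"],          // model APIs the agent may call
  functions: ["get-weather"],  // functions (tools) the agent may call
  handler: async (request: { inputs: string }) => {
    return { output: await runAgent(request.inputs) };
  },
};
```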
Developing an agent on Blaxel
Read our guide for developing any custom AI agent using Blaxel.
Deploy an agent on Blaxel
Learn how to deploy and manage your agent on Blaxel’s infrastructure.
Use your agents in your apps
Once your agent is deployed on Blaxel, you can start using it in your applications. Whether you need to process individual inference requests or integrate the agent into a larger application workflow, Blaxel provides flexible options for interaction. Learn how to authenticate requests, handle responses, and optimize your agent’s performance in production environments.
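As a starting point, here is a minimal sketch of invoking a deployed agent's global endpoint over HTTP from a TypeScript application. The URL pattern, request and response shapes, and the `BL_API_KEY` bearer token are assumptions for illustration; check your deployment details in Blaxel for the actual invocation URL and supported authentication methods.

```typescript
// Sketch of calling a deployed agent's global endpoint from an application.
// The endpoint URL, payload fields, and auth header below are assumed, not Blaxel's documented API.

const AGENT_URL = "https://run.example.com/my-workspace/agents/my-agent"; // assumed URL pattern

export async function askAgent(question: string): Promise<string> {
  const res = await fetch(AGENT_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.BL_API_KEY}`, // assumed API-key auth
    },
    body: JSON.stringify({ inputs: question }), // assumed request shape
  });

  if (!res.ok) {
    throw new Error(`Agent request failed: ${res.status} ${await res.text()}`);
  }
  const data = await res.json();
  return data.output; // assumed response field
}

// Usage:
// askAgent("Summarize today's support tickets").then(console.log);
```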