Develop agents in Python
Use the Blaxel SDK to develop and run a custom agent in Python.
You can develop agents in Python using any framework (LangChain, LlamaIndex, CrewAI, OpenAI Agents, PydanticAI, Google ADK, or any custom framework) and deploy them on Blaxel by integrating a few lines of the Blaxel SDK and leveraging our other developer tools (Blaxel CLI, GitHub Actions, etc.).
Quickstart
You can quickly initialize a new project from scratch by using the CLI command `bl create-agent-app`. This creates a pre-scaffolded local repo where you can add your code. You can choose the base agentic framework for the template.
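A typical local loop might look like the following sketch (the app name `my-agent` is illustrative; `bl serve` and `bl deploy` are assumed to be the Blaxel CLI commands for local runs and deployment):

```shell
# Scaffold a new agent app; you'll be prompted to pick a framework template
bl create-agent-app my-agent
cd my-agent

# Run locally with hot reload while developing (assumed CLI flag)
bl serve --hotreload

# Deploy to Blaxel when ready
bl deploy
```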
In the generated folder, you'll find a standard server in the entrypoint file `main.py`. While you typically won't need to modify this file, you can add specific logic there if needed. Your main work will focus on the `agent.py` file. Blaxel's development paradigm lets you leverage its hosting capabilities without modifying your agent's core logic.
Connect to a model API
Blaxel SDK provides a helper to connect to a model API defined on Blaxel from your code. This allows you to avoid managing a connection with the model API by yourself. Credentials remain stored securely on Blaxel.
Convert the retrieved model to the format of the framework you want to use with the corresponding `.to_...()` function.
Available frameworks:
- LangChain: `to_langchain()`
- CrewAI: `to_crewai()`
- LlamaIndex: `to_llamaindex()`
- OpenAI Agents: `to_openai()`
- PydanticAI: `to_pydantic()`
- Google ADK: `to_google_adk()`
For example, to connect to model `my-model` in a LlamaIndex agent:
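A minimal sketch (the import paths `blaxel.models` and `llama_index.core.agent.workflow` are assumptions; `bl_model` and `.to_llamaindex()` are the SDK helpers described above):

```python
# Sketch: connecting a Blaxel-hosted model API inside a LlamaIndex agent.
# Imports are deferred so the snippet stays self-contained; at runtime they
# require the `blaxel` and `llama-index` packages.

async def build_agent():
    from blaxel.models import bl_model  # assumed import path
    from llama_index.core.agent.workflow import ReActAgent  # assumed import path

    # Retrieve the model API named "my-model" from Blaxel and convert it to a
    # LlamaIndex LLM; credentials stay stored on Blaxel.
    llm = await bl_model("my-model").to_llamaindex()
    return ReActAgent(llm=llm, tools=[])
```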
Connect to tools
Blaxel SDK provides a helper to connect to pre-built or custom tool servers (MCP servers) hosted on Blaxel from your code. This allows you to avoid managing a connection with the server by yourself. Credentials remain stored securely on Blaxel. The following function retrieves all the tools discoverable in the tool server.
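A hedged sketch of that retrieval (the import path `blaxel.tools` is an assumption, and `my-tool-server` is a placeholder server name; `bl_tools` and `.to_langchain()` are the helpers named in this page):

```python
async def get_tools():
    from blaxel.tools import bl_tools  # assumed import path; needs the Blaxel SDK

    # Retrieve every tool exposed by the "my-tool-server" tool server on Blaxel
    # and convert them for LangChain (any .to_...() variant works here).
    return await bl_tools(["my-tool-server"]).to_langchain()
```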
As with a model, convert the retrieved tools to the format of the framework you want to use with the corresponding `.to_...()` function. Available frameworks are `to_langchain()` (LangChain), `to_crewai()` (CrewAI), `to_llamaindex()` (LlamaIndex), `to_openai()` (OpenAI Agents), `to_pydantic()` (PydanticAI) and `to_google_adk()` (Google ADK).
You can develop agents that mix tools defined locally in your code with tools defined as remote servers. Keeping tools in separate servers avoids monolithic designs and makes maintenance easier in the long run. Let's look at a practical example combining remote and local tools. The code below uses two tools:
- `blaxel-search`: a remote tool server on Blaxel providing web search functionality (learn how to create your own MCP servers here)
- `weather`: a local tool that accepts a city parameter and returns a mock weather response (always "sunny")
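A sketch of this mix in LangChain form (the import paths `blaxel.tools` and `langchain_core.tools` are assumptions; `blaxel-search`, the local `weather` tool, and `.to_langchain()` come from the text above):

```python
def weather(city: str) -> str:
    """Local tool: returns a mock weather report for `city` (always sunny)."""
    return f"The weather in {city} is sunny"

async def get_all_tools():
    # Deferred imports: these need the Blaxel SDK and LangChain at runtime.
    from blaxel.tools import bl_tools      # assumed import path
    from langchain_core.tools import tool  # assumed import path

    # Remote tools from the "blaxel-search" tool server hosted on Blaxel...
    remote_tools = await bl_tools(["blaxel-search"]).to_langchain()
    # ...combined with the local `weather` tool defined above.
    return remote_tools + [tool(weather)]
```

Separating the two keeps the local tool trivially testable while the search capability stays reusable across agents.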
Connect to another agent (multi-agent chaining)
Rather than using a “quick and dirty” approach where you would combine all your agents and capabilities into a single deployment, Blaxel provides a structured development paradigm based on two key principles:
- Agents can grow significantly in complexity. Monolithic architectures make long-term maintenance difficult.
- Individual agents should be reusable across multiple projects.
Blaxel supports a microservice architecture for agent chaining, allowing you to call one agent from another using `bl_agent().run()` rather than combining all functionality into a single codebase.
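For illustration (the agent name `my-sub-agent` is a placeholder and the import path `blaxel.agents` is an assumption; `bl_agent().run()` is the helper named above):

```python
async def orchestrate(query: str) -> str:
    from blaxel.agents import bl_agent  # assumed import path; needs the Blaxel SDK

    # Delegate part of the work to another agent deployed on Blaxel.
    sub_answer = await bl_agent("my-sub-agent").run(query)
    return f"Sub-agent said: {sub_answer}"
```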
Instrumentation
Instrumentation happens automatically when workloads run on Blaxel. To enable telemetry, simply import the Blaxel SDK at your project's root level.
When agents and tools are deployed on Blaxel, request logging and tracing happens automatically.
To add your own custom logs that you can view in the Blaxel Console, use Python's standard `logging` module.
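For example, a minimal sketch using only the standard library (the handler function is illustrative):

```python
import logging

logger = logging.getLogger(__name__)

def handle_request(user_query: str) -> str:
    # Records emitted through the standard logger are picked up by Blaxel
    # and shown in the Blaxel Console once the agent is deployed.
    logger.info("Received query: %s", user_query)
    answer = "ok"  # placeholder for the real agent logic
    logger.info("Returning answer: %s", answer)
    return answer
```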
Blaxel agent template file structure
Overview
blaxel.toml
This file is used to configure the deployment of the agent on Blaxel. It’s not mandatory, but it allows you to customize the deployment.
The `name`, `workspace`, and `type` fields are optional and serve as default values. Any `bl` command run in the folder will use these defaults rather than prompting you for input.
The `agents`, `functions`, and `models` fields are also optional. They specify which resources to deploy with the agent. These resources are preloaded during the build, eliminating runtime dependencies on the Blaxel control plane and dramatically improving performance.
The `entrypoint` section defines how your server is started:
- `prod`: the command used to serve your agent in production
- `dev`: the same as `prod` but in development mode, used with the hot-reload command

The `entrypoint` section is not required; if it is not set, the start command is auto-detected from your agent's content.
The `env` section defines environment variables that the agent can access via the SDK. Note that these are NOT secrets.
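Putting the sections above together, a blaxel.toml might look like this (the keys are the ones described above; all values are illustrative, including the serve commands):

```toml
name = "my-agent"
workspace = "my-workspace"
type = "agent"

# Resources preloaded at build time
agents = []
functions = ["blaxel-search"]
models = ["my-model"]

[entrypoint]
prod = "python main.py"       # command used to serve the agent
dev = "fastapi dev main.py"   # dev-mode command, used with hot reload

[env]
DEFAULT_CITY = "Paris"        # plain env var, NOT a secret
```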
Deploy an agent
Learn how to deploy your custom AI agents on Blaxel as a serverless endpoint.