Deploy a LangChain agent
Deploy your first LangChain AI agent on Blaxel.
This tutorial shows how to deploy a custom LangChain AI agent directly from your IDE using Blaxel CLI.
Your agent will typically include these key components:
- A core agent logic algorithm → Blaxel will deploy this as an Agent
- A set of tools that the agent can use → Blaxel will deploy these in sandboxed environments as Functions
- A chat model that powers the agent → Blaxel will route the agent's LLM calls through a Blaxel model API.
When the main agent runs, Blaxel orchestrates everything in secure, sandboxed environments, providing complete traceability of all requests and production-grade scalability.
Prerequisites
- A Blaxel workspace
- Blaxel CLI installed on your machine and logged in to your workspace
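If the CLI is not logged in yet, here is a minimal sketch, assuming the standard bl login command and a workspace named my-workspace (replace with your own):

```bash
# Authenticate the Blaxel CLI against your workspace
bl login my-workspace
```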
Guide
(Optional) Quickstart from a template
Let’s initialize a first app. The following command creates a pre-scaffolded local repository ready for developing and deploying your agent on Blaxel.
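A sketch of that command, assuming the standard bl create-agent-app subcommand and my-agent as a placeholder project name:

```bash
# Scaffold a new agent app in a folder named my-agent
bl create-agent-app my-agent
```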
Select your model API, choose the Custom template, and press Enter to create the repo.
Add your agent code
Add or import the script files containing your LangChain agent. Here’s an example using a simple LangChain ReAct agent:
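The following is a minimal sketch, assuming the LangGraph prebuilt create_react_agent helper and an OpenAI chat model; your actual model, prompt, and file layout will differ:

```python
# agent.py (sketch): a simple LangChain ReAct agent with one custom tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Custom tool developed in another folder (see the note below)
from customfunctions.helloworld import helloworld

# Chat model powering the agent (any LangChain chat model works here)
model = ChatOpenAI(model="gpt-4o-mini")

# Bind the custom tool to the ReAct agent
react_agent = create_react_agent(model, tools=[helloworld])
```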
Note that this code references a custom tool from another folder that we assume you developed too: customfunctions.helloworld.helloworld. By default, this function would not be deployed as a separate component. See the next section for instructions on deploying it in its own sandboxed environment to trace request usage.
The next step is to use the Blaxel SDK to specify which function should be deployed on Blaxel. You’ll need two key elements:
- Create a main async function to handle agent execution; this will be your default entry point for serving and deployment.
- Add the @agent decorator to designate your main agent function for Blaxel serving.
Here’s an example showing the main async function:
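Below is a sketch of what this entry point can look like, assuming a Starlette-style request object and the react_agent defined in the earlier snippet; the exact signature Blaxel expects is described in the reference linked below:

```python
# agent.py (sketch): main async entry point invoked on each request
async def main(request, *args, **kwargs):
    # Extract the user input from the payload, e.g. {"inputs": "Hello world!"}
    body = await request.json()

    # Run the ReAct agent defined above and return the final message
    result = await react_agent.ainvoke(
        {"messages": [("user", body.get("inputs", ""))]}
    )
    return result["messages"][-1].content
```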
Read our reference for the main agent function to serve.
Then, here’s an example adding the Blaxel decorator:
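A sketch of the same function with the decorator added; the import path used here (blaxel.agents) and the bare decorator call are assumptions, so check the reference below for the exact module and options:

```python
# agent.py (sketch): designate the entry point for Blaxel serving
# NOTE: the import path below is an assumption; see the @agent reference for the exact one
from blaxel.agents import agent

@agent()
async def main(request, *args, **kwargs):
    body = await request.json()
    result = await react_agent.ainvoke(
        {"messages": [("user", body.get("inputs", ""))]}
    )
    return result["messages"][-1].content
```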
Read our reference for @agent decorator.
At this time:
- your agent is ready to be deployed on Blaxel
- functions (tool calls) and model APIs are not yet ready to be deployed in separate sandboxed environments.
To get total observability and traceability on the requests of your agent, it’s recommended to use a microservices-like architecture where each component runs independently in its own environment. Blaxel helps you cross this last mile with just a few lines of code.
Sandbox the tool call execution
To deploy your tool on Blaxel, the simplest way is to place the main function’s file in the src/functions/ folder.
Here’s an example with the helloworld custom tool from the previous code snippets:
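A sketch of what src/functions/helloworld.py could contain; the function body is purely illustrative:

```python
# src/functions/helloworld.py (sketch): the tool moved into the functions folder
def helloworld(query: str) -> str:
    """Illustrative tool body; replace with your real logic."""
    return f"Hello {query}!"
```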
Now, add the @function decorator to specify the default entry point for serving and deployment of the function.
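A sketch of the decorated version; the import path used here (blaxel.functions) is an assumption, so verify it against the SDK reference:

```python
# src/functions/helloworld.py (sketch): expose the tool as a Blaxel Function
# NOTE: the import path below is an assumption; see the @function reference for the exact one
from blaxel.functions import function

@function()
def helloworld(query: str) -> str:
    """Illustrative tool body; replace with your real logic."""
    return f"Hello {query}!"
```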
Functions placed in the src/functions/ folder will automatically be deployed as custom functions and can be made available to the agent during execution by calling get_functions(). The final step is to update the tool binding from the main agent file:
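A sketch of the updated binding; get_functions() comes from the Blaxel SDK (import path assumed here) and replaces the direct import of the tool. If your SDK version exposes it as a coroutine, await it instead:

```python
# agent.py (sketch): bind the Blaxel-deployed functions instead of importing the tool directly
# NOTE: the import path below is an assumption; see the get_functions() reference
from blaxel.functions import get_functions
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

functions = get_functions()  # tools deployed from src/functions/
model = ChatOpenAI(model="gpt-4o-mini")
react_agent = create_react_agent(model, tools=functions)
```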
Read our reference for get_functions().
Sandbox the model API call
Use the Blaxel console to create the corresponding integration connection to your LLM provider, using one of the supported integrations.
Then create a model API using your integration and deploy it. For example, in the Blaxel console you can create a model API backed by an OpenAI model.
Model APIs can be made available to the agent during execution by calling get_chat_model():
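A sketch, reusing the functions list from the previous snippet and assuming the model API was named gpt-4o-mini in the console; the import path for get_chat_model() is an assumption, so check the SDK reference:

```python
# agent.py (sketch): power the agent through the Blaxel model API
# NOTE: the import path below is an assumption; see the get_chat_model() reference
from blaxel.models import get_chat_model
from langgraph.prebuilt import create_react_agent

# "gpt-4o-mini" stands for the name you gave the model API in the console
model = get_chat_model("gpt-4o-mini")
react_agent = create_react_agent(model, tools=functions)
```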
At this time: your agent, functions, and model API are ready to be deployed on Blaxel in sandboxed environments.
Test and deploy your AI agent
Run the following command to serve your agent locally:
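A sketch of the serve command, assuming the standard bl serve subcommand; the --hotreload flag (live reload during development) is optional:

```bash
# Serve the agent locally, reloading on code changes
bl serve --hotreload
```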
Query your agent locally by making a POST request to http://localhost:1338 with the following payload format: {"inputs": "Hello world!"}.
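For example, with curl:

```bash
curl -X POST http://localhost:1338 \
  -H "Content-Type: application/json" \
  -d '{"inputs": "Hello world!"}'
```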
To push to Blaxel, run the following command. Blaxel will handle the build and deployment:
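A sketch of the deploy command, assuming the standard bl deploy subcommand run from the project root:

```bash
# Build and deploy the agent (and its functions) to Blaxel
bl deploy
```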
That’s it! 🌎 🌏 🌍 Your agent is now distributed and available across the entire Blaxel global infrastructure! The Global Inference Network significantly speeds up inferences by executing your agent’s workloads in sandboxed environments and smartly routing requests based on your policies.
Make a first inference
Run a first inference on your Blaxel agent with the following command:
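A sketch of the run command, assuming the standard bl run subcommand and an agent named my-agent (replace with your agent's name):

```bash
# Query the deployed agent on Blaxel
bl run agent my-agent --data '{"inputs": "Hello world!"}'
```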
You can also run inference requests on your agent (or on each function or model API) from the Blaxel console using the Playground.
Next steps
You are ready to run AI with Blaxel! Here’s a curated list of guides to help you make the most of the Blaxel platform, but feel free to explore the product on your own!
Deploy agents
Complete guide for deploying AI agents on Blaxel.
Integrate and query agents
Complete guide for querying your AI agents on the Global Inference Network.
Manage policies
Complete guide for managing deployment and routing policies on the Global Inference Network.