You can develop agents in TypeScript using any framework (LangChain, LlamaIndex, Vercel AI, or any custom framework) and deploy them on Blaxel by integrating a few lines of the Blaxel SDK and leveraging our other developer tools (Blaxel CLI, GitHub Actions, etc.).

Check out this Getting Started tutorial in order to develop and deploy your first Hello World AI agent globally in less than 5 minutes.

Quickstart

You must have npm installed to use the following command.

You can quickly initialize a new project from scratch by using the CLI command bl create-agent-app. This creates a pre-scaffolded local repository where you can add your code. You can choose the base agentic framework for the template.

In the generated folder, you’ll find a standard server in the entrypoint file index.ts. While you typically won’t need to modify this file, you can add specific logic there if needed. Your main work will focus on the agent.ts file. Blaxel’s development paradigm lets you leverage its hosting capabilities without modifying your agent’s core logic.
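For reference, here is a simplified sketch of what the generated index.ts can look like: a Fastify server that forwards the request body to the agent function and streams its output back. The request field name (inputs) and the fallback port are assumptions for illustration; the actual generated file may differ.

import Fastify from "fastify";
import agent from "./agent";

const app = Fastify();

// Assumed request shape: { "inputs": "your prompt" }
app.post("/", async (request, reply) => {
  const { inputs } = request.body as { inputs: string };
  // Take over the raw Node response so the agent's answer can be streamed back
  reply.hijack();
  reply.raw.writeHead(200, { "Content-Type": "text/plain; charset=utf-8" });
  await agent(inputs, {
    write: (data: string) => { reply.raw.write(data); },
    end: () => reply.raw.end(),
  });
});

// Fallback port is an assumption; adjust to the template's configuration
const port = parseInt(process.env.PORT ?? "8080", 10);
app.listen({ host: "0.0.0.0", port }).then(() => console.log(`Listening on port ${port}`));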

Connect to a model API

The Blaxel SDK provides a helper to connect to a model API defined on Blaxel from your code, so you don't have to manage the connection to the model API yourself. Credentials remain stored securely on Blaxel.

import { blModel } from "@blaxel/sdk";

const model = await blModel("Model-name-on-Blaxel").To...();

Convert the retrieved model to the format of the framework you want to use with the .To...() function.

Available frameworks: ToLangChain() (LangChain), ToLlamaIndex() (LlamaIndex), ToVercelAI() (Vercel AI), and ToMastra() (Mastra).

For example, to connect to model my-model in a LlamaIndex agent:

import { blModel } from "@blaxel/sdk";

const model = await blModel("my-model").ToLlamaIndex();

Connect to tools

The Blaxel SDK provides a helper to connect to pre-built or custom tool servers (MCP servers) hosted on Blaxel from your code, so you don't have to manage the connection to the server yourself. Credentials remain stored securely on Blaxel. The following function retrieves all the tools discoverable in the tool server.

await blTools(['Tool-Server-name-on-Blaxel']).To...()

As with models, convert the retrieved tools to the format of the framework you want to use with the .To...() function. Available frameworks are ToLangChain() (LangChain), ToLlamaIndex() (LlamaIndex), ToVercelAI() (Vercel AI), and ToMastra() (Mastra).
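For example, to retrieve the tools from the blaxel-search tool server in the format expected by a LangChain agent:

import { blTools } from "@blaxel/sdk";

// Retrieve every tool exposed by the "blaxel-search" tool server,
// converted to LangChain's tool format
const tools = await blTools(["blaxel-search"]).ToLangChain();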

You can develop agents by mixing tools defined locally in your agent and tools defined as remote servers. Keeping tools separate prevents monolithic designs, which makes maintenance easier in the long run. Let's look at a practical example combining remote and local tools. The code below uses two tools:

  1. blaxel-search: A remote tool server on Blaxel providing web search functionality (learn how to create your own MCP servers here)
  2. weather: A local tool that accepts a city parameter and returns a mock weather response (always “sunny”)

import { blModel, blTools, logger } from '@blaxel/sdk';
import { streamText, tool } from 'ai';
import { z } from 'zod';

interface Stream {
  write: (data: string) => void;
  end: () => void;
}

export default async function agent(input: string, stream: Stream): Promise<void> {
  const response = streamText({
    experimental_telemetry: { isEnabled: true },
    // Load model API dynamically from Blaxel:
    model: await blModel("gpt-4o-mini").ToVercelAI(),
    tools: {
      // Load tools dynamically from Blaxel:
      ...await blTools(['blaxel-search']).ToVercelAI(),
      // And here's an example of a tool defined locally for Vercel AI:
      "weather": tool({
        description: "Get the weather in a specific city",
        parameters: z.object({
          city: z.string(),
        }),
        execute: async (args: { city: string }) => {
          logger.debug("TOOLCALLING: local weather", args);
          return `The weather in ${args.city} is sunny`;
        },
      }),
    },
    system: "You are an agent that will give the weather when a city is provided, and also do a quick search about this city.",
    messages: [
      { role: 'user', content: input }
    ],
    maxSteps: 5,
  });

  for await (const delta of response.textStream) {
    stream.write(delta);
  }
  stream.end();
}

Connect to another agent (multi-agent chaining)

Rather than using a “quick and dirty” approach where you would combine all your agents and capabilities into a single deployment, Blaxel provides a structured development paradigm based on two key principles:

  • Agents can grow significantly in complexity. Monolithic architectures make long-term maintenance difficult.
  • Individual agents should be reusable across multiple projects.

Blaxel lets you organize your software with a microservice architecture for agent chaining, allowing you to call one agent from another using blAgent().run() rather than combining all functionality into a single codebase.

import { blAgent } from "@blaxel/sdk";

const myFirstAgentResponse = await blAgent("firstAgent").run(input);
const mySecondAgentResponse = await blAgent("secondAgent").run(myFirstAgentResponse);

Instrumentation

Instrumentation happens automatically when workloads run on Blaxel. To enable telemetry, simply import the SDK at your project's root level.

import "@blaxel/sdk";

When agents and tools are deployed on Blaxel, request logging and tracing happen automatically.

To add your own custom logs that you can view in the Blaxel Console, use the Blaxel logger.

import { logger } from "@blaxel/sdk";

logger.info("Hello, world!");

Blaxel agent template file structure

Overview

package.json            # Mandatory. This file is the standard package.json file; it defines the entrypoint of the project and its dependencies.
blaxel.toml             # This file lists configurations dedicated to Blaxel to customize the deployment. It is not mandatory.
tsconfig.json           # This file is the standard tsconfig.json file, only needed if you use TypeScript.
.blaxel                 # This folder allows you to define custom resources using the Blaxel API specifications. These resources will be deployed along with your agent.
└── blaxel-search.yaml  # Here, blaxel-search is a sandbox Web search tool we provide so you can develop your first agent. It has a low rate limit, so we recommend you use a dedicated MCP server for production.
src/
├── index.ts            # This file is the standard entrypoint of the project. It is used to start the server and create an endpoint bound to the agent.ts file.
└── agent.ts            # This file is the main file of your agent. It is loaded from index.ts. In the template, all the agent logic is implemented here.

package.json

Here the most notable entries are the scripts. They are used by the bl serve and bl deploy commands.

{
  "name": "name",
  "version": "1.0.0",
  "description": "<no value>",
  "keywords": [],
  "license": "MIT",
  "author": "cdrappier",
  "scripts": {
    "start": "tsx src/index.ts",
    "prod": "node dist/index.js",
    "dev": "tsx watch src/index.ts",
    "build": "tsc"
  },
  "dependencies": {
    "@ai-sdk/openai": "^1.2.5",
    "@blaxel/sdk": "0.1.1-preview.9",
    "ai": "^4.1.61",
    "fastify": "^5.2.1",
    "zod": "^3.24.2"
  },
  "devDependencies": {
    "@types/express": "^5.0.1",
    "@types/node": "^22.13.11",
    "tsx": "^4.19.3",
    "typescript": "^5.8.2"
  }
}

Depending on what you do, not all of the scripts are required. With TypeScript, all four of them are used.

  • start: starts the server locally through the TypeScript command, avoiding the need to build the project while developing.
  • build: builds the project. It is done automatically when deploying.
  • prod: starts the server from the dist folder (used in production); the project needs to have been built first.
  • dev: same as start, but with hot reload. It's useful when developing locally, as each file change is reflected immediately.

The remaining fields in package.json follow standard JavaScript/TypeScript project conventions. Feel free to add any dependencies you need, but keep in mind that devDependencies are only used during the build process and are removed afterwards.

blaxel.toml

This file is used to configure the deployment of the agent on Blaxel. It’s not mandatory, but it allows you to customize the deployment.

name = "my-agent"
workspace = "my-workspace"
type = "agent"

agents = []
functions = ["blaxel-search"]
models = ["gpt-4o-mini"]

[env]
DEFAULT_CITY = "San Francisco"

The name, workspace, and type fields are optional and serve as default values. Any bl command run in the folder will use these defaults rather than prompting you for input.

The agents, functions, and models fields are also optional. They specify which resources to deploy with the agent. These resources are preloaded during build, eliminating runtime dependencies on the Blaxel control plane and dramatically improving performance.

The env section defines environment variables that the agent can access via the SDK. Note that these are NOT secrets.
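For instance, with the blaxel.toml above, your code can read DEFAULT_CITY at runtime. A minimal sketch, assuming the variables declared under [env] are surfaced as standard environment variables (check the SDK documentation for a dedicated accessor):

import { logger } from "@blaxel/sdk";

// Assumption: variables declared under [env] are exposed as regular environment variables at runtime
const defaultCity = process.env.DEFAULT_CITY ?? "San Francisco";
logger.info(`Using default city: ${defaultCity}`);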

Deploy an agent

Learn how to deploy your custom AI agents on Blaxel as a serverless endpoint.