This tutorial walks you through using CopilotKit to create complete copilots that leverage Blaxel agents in your frontend. Turn your MCP servers, models, and agents hosted on Blaxel into CopilotKit CoAgents that provide full user interaction.
This tutorial is based on a Python LangGraph agent.

Requirements

Step 1: Create a LangGraph agent

You can quickly initialize a new project from scratch by using CLI command bl create-agent-app.
bl create-agent-app myagent
This creates a pre-scaffolded local repository where you can add your code. You can choose the base agentic framework for the template. Then go to your agent directory and install CopilotKit:
cd myagent
uv add copilotkit
Develop your LangGraph agent, using Blaxel SDK to connect to Blaxel AI gateway for model APIs and MCP servers, and making sure to add a memory checkpointer. You can copy-paste the following code snippet into agent.py to get started:
agent.py

from typing import AsyncGenerator

from blaxel.langgraph import bl_model, bl_tools
from langchain.tools import tool
from langchain_core.messages import AIMessageChunk
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

@tool
def weather(city: str) -> str:
    """Get the weather in a given city"""
    return f"The weather in {city} is sunny"

async def langgraph_agent():
    prompt = (
        "You are a helpful assistant that can answer questions and help with tasks."
    )
    tools = await bl_tools(["blaxel-search"]) + [weather]
    model = await bl_model("sandbox-openai")
    # The memory checkpointer keeps conversation state across turns, which CopilotKit relies on
    agent = create_react_agent(
        model=model, tools=tools, prompt=prompt, checkpointer=MemorySaver()
    )
    return agent

async def agent(input: str) -> AsyncGenerator[str, None]:
    graph = await langgraph_agent()
    messages = {"messages": [("user", input)]}
    # A thread_id is required once a checkpointer is attached to the graph
    config = {"configurable": {"thread_id": "default"}}
    async for chunk in graph.astream(messages, config, stream_mode=["updates", "messages"]):
        type_, stream_chunk = chunk
        # Stream the agent's response, filtering out tool output
        if (
            type_ == "messages"
            and len(stream_chunk) > 0
            and isinstance(stream_chunk[0], AIMessageChunk)
        ):
            msg = stream_chunk[0]
            if msg.content:
                if not msg.tool_calls:
                    yield msg.content
        # This shows that a call has been made to a tool, useful if you want to display the tool call in your interface
        if type_ == "updates":
            if "tools" in stream_chunk:
                for msg in stream_chunk["tools"]["messages"]:
                    yield f"Tool call: {msg.name}\n"
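The stream-filtering logic above can be exercised without Blaxel by mocking the `(type, payload)` tuples that `astream` yields. A minimal sketch — `FakeChunk` and the sample stream are hypothetical stand-ins for LangGraph's real output, not part of any SDK:

```python
import asyncio
from dataclasses import dataclass, field

# Hypothetical stand-in for langchain's AIMessageChunk
@dataclass
class FakeChunk:
    content: str
    tool_calls: list = field(default_factory=list)
    name: str = ""

async def fake_stream():
    # Mimics the (type, payload) tuples yielded by graph.astream(...)
    yield ("messages", [FakeChunk(content="Hello")])
    yield ("updates", {"tools": {"messages": [FakeChunk(content="", name="weather")]}})
    yield ("messages", [FakeChunk(content=" world")])

async def run_filter():
    out = []
    async for type_, stream_chunk in fake_stream():
        # Keep assistant text, skipping chunks produced by tool calls
        if type_ == "messages" and stream_chunk and isinstance(stream_chunk[0], FakeChunk):
            msg = stream_chunk[0]
            if msg.content and not msg.tool_calls:
                out.append(msg.content)
        # Surface tool invocations separately
        if type_ == "updates" and "tools" in stream_chunk:
            for msg in stream_chunk["tools"]["messages"]:
                out.append(f"Tool call: {msg.name}\n")
    return out

print(asyncio.run(run_filter()))  # → ['Hello', 'Tool call: weather\n', ' world']
```

The same two-branch shape applies to the real agent: "messages" chunks carry assistant text, "updates" chunks carry graph-node results such as tool outputs.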

Step 2: Use CopilotKit integration to serve your agent

Next, use CopilotKit’s FastAPI integration to serve your LangGraph agent. You can directly modify main.py from the scaffolded directory by copy-pasting the following code.
main.py

import os
from contextlib import asynccontextmanager
from logging import getLogger

from copilotkit import CopilotKitRemoteEndpoint, LangGraphAgent
from copilotkit.integrations.fastapi import add_fastapi_endpoint
from fastapi import FastAPI
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor

from .agent import langgraph_agent
from .server.error import init_error_handlers
from .server.middleware import init_middleware
from .server.router import router

logger = getLogger(__name__)

@asynccontextmanager
async def lifespan(app: FastAPI):
    logger.info(f"Server running on port {os.getenv('BL_SERVER_PORT', 80)}")
    try:
        # Initialize the graph
        graph = await langgraph_agent()

        # Initialize the SDK
        sdk = await get_sdk(graph)

        # Store in app state
        app.state.sdk = sdk

        # Add CopilotKit endpoint
        add_fastapi_endpoint(app, sdk, "/copilotkit", use_thread_pool=False)

        yield
        logger.info("Server shutting down")
    except Exception as e:
        logger.error(f"Error during startup: {str(e)}", exc_info=True)
        raise

# Create the SDK after the graph is initialized
async def get_sdk(graph):
    sdk = CopilotKitRemoteEndpoint(
        agents=[
            LangGraphAgent(
                name="sample_agent",
                description="An agent that can provide weather information and handle other conversational tasks",
                graph=graph,
            )
        ],
    )
    return sdk

app = FastAPI(lifespan=lifespan)
init_error_handlers(app)
init_middleware(app)
app.include_router(router)

FastAPIInstrumentor.instrument_app(app, exclude_spans=["receive", "send"])
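The lifespan pattern used in main.py — initialize on startup, yield while serving, clean up on shutdown — is standard `asynccontextmanager` usage. A framework-free sketch of the same flow (names like `init_graph` are illustrative, not part of the Blaxel or CopilotKit APIs):

```python
import asyncio
from contextlib import asynccontextmanager

events = []

async def init_graph():
    # Stands in for `await langgraph_agent()` in main.py
    events.append("graph initialized")
    return object()

@asynccontextmanager
async def lifespan(app):
    # Startup: build the graph and attach it to app state, as main.py does
    app["graph"] = await init_graph()
    yield
    # Shutdown: runs when the context exits
    events.append("server shutting down")

async def main():
    app = {}  # stands in for the FastAPI app and its .state
    async with lifespan(app):
        events.append("serving requests")

asyncio.run(main())
print(events)  # → ['graph initialized', 'serving requests', 'server shutting down']
```

FastAPI drives the real context manager for you: everything before `yield` runs at startup, everything after it at shutdown.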

Step 3: Setup CopilotKit

CopilotKit maintains documentation on deploying CoAgents; we are now at Step 4 of that guide. The rest of this page highlights the differences required to integrate with a Blaxel agent, using the “Self-Hosted Copilot Runtime” and “Self hosted (FastAPI)” options for the code snippets where available.

Install CopilotKit

Make sure to have the latest packages for CopilotKit installed into your frontend.

npm install @copilotkit/react-ui @copilotkit/react-core

Install Copilot Runtime

Copilot Runtime is a production-ready proxy for your LangGraph agents. In your frontend, go ahead and install it.

npm install @copilotkit/runtime class-validator

Step 4: Plug your agent to CopilotKit in your front-end

You have two options regarding hosting of the agent:
  • local hosting
  • hosting on Blaxel

Local hosting

Run the following command at the root of your agent folder to serve the agent locally:
bl serve
The agent will be available at http://localhost:1338/copilotkit by default. Now let’s set up a Copilot Runtime endpoint in your application and point your frontend to it. The following tutorial demonstrates integration with a Next.js application. Check out CopilotKit’s documentation (step 6) for other frameworks. Create the following route file:
app/api/copilotkit/route.ts
import {
  CopilotRuntime,
  ExperimentalEmptyAdapter,
  copilotRuntimeNextJSAppRouterEndpoint,
} from "@copilotkit/runtime";
import { NextRequest } from "next/server";
 
// You can use any service adapter here for multi-agent support.
const serviceAdapter = new ExperimentalEmptyAdapter();
 
const runtime = new CopilotRuntime({
  remoteEndpoints: [
    { url: "http://localhost:1338/copilotkit" },
  ],
});
 
export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter,
    endpoint: "/api/copilotkit",
  });
 
  return handleRequest(req);
};

Make sure to use the correct port when pointing to CopilotRuntime. By default, CopilotKit’s documentation binds to port 8000, but the default port for a Blaxel agent is 1338.
You can now follow the rest of CopilotKit’s documentation on how to set up a Copilot in your frontend application (step 8). Make sure to adapt the agent name if you changed it.

Hosting on Blaxel

Deploy the agent

Run the following command at the root of your agent folder to deploy the agent to Blaxel:
bl deploy
Retrieve the base invocation URL for the agent. It should look like the following; you will append the /copilotkit endpoint to it.
Query agent

POST https://run.blaxel.ai/{YOUR-WORKSPACE}/agents/{YOUR-AGENT}

Create an API key

Then, create an API key, either for your profile or for a service account in your workspace. Store that API key for the next step.

Integrate with CopilotKit

Now let’s set up a Copilot Runtime endpoint in your application and point your frontend to it. The following tutorial demonstrates integration with a Next.js application. Check out CopilotKit’s documentation (step 6) for other frameworks. Create the following route file, making sure to replace https://run.blaxel.ai/{YOUR-WORKSPACE}/agents/{YOUR-AGENT}/copilotkit and <API_KEY / TOKEN>.
app/api/copilotkit/route.ts
import {
  CopilotRuntime,
  ExperimentalEmptyAdapter,
  copilotRuntimeNextJSAppRouterEndpoint,
} from "@copilotkit/runtime";
import { NextRequest } from "next/server";
 
// You can use any service adapter here for multi-agent support.
const serviceAdapter = new ExperimentalEmptyAdapter();
 
const runtime = new CopilotRuntime({
  remoteEndpoints: [
    {
      url: "https://run.blaxel.ai/{YOUR-WORKSPACE}/agents/{YOUR-AGENT}/copilotkit",
      onBeforeRequest: ({ ctx }) => {
        return {
          headers: {
            "X-Blaxel-Authorization":
              "Bearer <API_KEY / TOKEN>",
          },
        };
      },
    },
  ],
});
 
export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter,
    endpoint: "/api/copilotkit",
  });
 
  return handleRequest(req);
};

Make sure to call the /copilotkit endpoint on your agent (if left as the default), or the actual endpoint name you used.
You can now follow the rest of CopilotKit’s documentation on how to set up a Copilot in your frontend application (step 8). Make sure to adapt the agent name if you changed it.