Whenever you deploy a workload on Blaxel, an inference endpoint is generated on Global Agentics Network, the infrastructure powerhouse that hosts it.

The inference API URL depends on the type of workload (agent, model API, MCP server) you are calling. For an agent:

POST https://run.blaxel.ai/{YOUR-WORKSPACE}/agents/{YOUR-AGENT}

Here is the full request, including the input payload:

curl -X POST "https://run.blaxel.ai/{your-workspace}/agents/{your-agent}" \
-H 'Content-Type: application/json' \
-H "X-Blaxel-Authorization: Bearer <YOUR_API_KEY>" \
-d '{"inputs":"Hello, world!"}'
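The same call can be made from code. A minimal Python sketch that assembles the URL, headers, and body shown above; the workspace name, agent name, and API key are placeholders, and sending the request requires the third-party `requests` package:

```python
def build_agent_request(workspace, agent, api_key, inputs):
    """Assemble the URL, headers, and JSON body for an agent inference call."""
    url = f"https://run.blaxel.ai/{workspace}/agents/{agent}"
    headers = {
        "Content-Type": "application/json",
        "X-Blaxel-Authorization": f"Bearer {api_key}",
    }
    return url, headers, {"inputs": inputs}

url, headers, body = build_agent_request(
    "my-workspace", "my-agent", "<YOUR_API_KEY>", "Hello, world!"
)
# To actually send it (requires `pip install requests`):
# import requests
# response = requests.post(url, headers=headers, json=body)
# print(response.json())
```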

Connect to MCP servers

MCP servers (Model Context Protocol) expose a toolkit of capabilities to agents. You can interact with these servers through Blaxel's WebSocket transport implementation on the server's global endpoint.
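Over that WebSocket connection, MCP messages are JSON-RPC 2.0 frames. A minimal sketch of building the standard `tools/list` and `tools/call` requests from the MCP specification; the tool name and arguments below are hypothetical, and the exact Blaxel endpoint URL and authentication headers should be taken from your workspace settings:

```python
import itertools
import json

# JSON-RPC request IDs must be unique within a connection.
_ids = itertools.count(1)

def mcp_request(method, params=None):
    """Serialize an MCP JSON-RPC 2.0 request frame as a string."""
    frame = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        frame["params"] = params
    return json.dumps(frame)

# Ask the server which tools it exposes:
list_tools = mcp_request("tools/list")
# Invoke one of them (tool name and arguments are placeholders):
call_tool = mcp_request("tools/call", {"name": "my-tool", "arguments": {"q": "hello"}})
```

Each frame would be sent as a text message over the WebSocket, and the server replies with a JSON-RPC response carrying the matching `id`.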

Connect to an MCP server

Learn how to run invocation requests on your MCP server.

Manage sessions

To maintain multi-turn conversations, pass a thread ID on request headers. Your client needs to generate this ID and send it in a header that your agent code can retrieve (e.g. X-Blaxel-Thread-Id). Without a thread ID, the agent won't maintain or use any conversation memory when processing the request.

This is only available for agent requests.

Query agent with thread ID
curl -X POST "https://run.blaxel.ai/{your-workspace}/agents/{your-agent}" \
-H 'Content-Type: application/json' \
-H "X-Blaxel-Authorization: Bearer <YOUR_API_KEY>" \
-H "X-Blaxel-Thread-Id: <THREAD_ID>" \
-d '{"inputs":"Hello, world!"}'
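In code, the client typically generates the thread ID once per conversation and reuses it on every turn. A minimal Python sketch; using a UUID is just one option (any string that is stable across the conversation works), and the API key is a placeholder:

```python
import uuid

def thread_headers(api_key, thread_id):
    """Headers for a multi-turn agent request, keyed by a stable thread ID."""
    return {
        "Content-Type": "application/json",
        "X-Blaxel-Authorization": f"Bearer {api_key}",
        "X-Blaxel-Thread-Id": thread_id,
    }

# Generate one ID per conversation, then reuse it for every turn
# so the agent can associate the requests with the same memory.
thread_id = str(uuid.uuid4())
turn_1 = thread_headers("<YOUR_API_KEY>", thread_id)
turn_2 = thread_headers("<YOUR_API_KEY>", thread_id)
```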

Product documentation

Read our product guide on querying an agent.