Query an agent
Make inference requests on your agents.
Agent deployments on Blaxel expose an inference endpoint that external consumers can use to request an inference execution. Inference requests are routed on the Global Inference Network according to the deployment policies associated with your agent deployment.
Inference endpoints
Whenever you deploy an agent on Blaxel, an inference endpoint is generated on the Global Inference Network.
The inference URL follows this general pattern (the placeholders in braces are illustrative; substitute your own workspace and agent names):
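```
https://run.blaxel.ai/{your-workspace}/agents/{your-agent}
```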
Endpoint authentication
By default, agents deployed on Blaxel are not public. All inference requests must be authenticated with a bearer token.
Authentication and authorization for inference requests are evaluated by the Global Inference Network, based on the access permissions granted in your workspace.
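For example, the token is passed in a standard bearer Authorization header (YOUR_API_KEY is a placeholder for a valid API key or access token from your workspace):

```
Authorization: Bearer YOUR_API_KEY
```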
Manage sessions
To simulate multi-turn conversations, include a thread ID in your agent request header. You'll need to generate this ID yourself and pass it using either the X-Blaxel-Thread-Id or Thread-Id header. Without a thread ID, the agent won't maintain or use any conversation memory when processing the request.
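A minimal sketch using curl, assuming the endpoint pattern above (the workspace, agent name, token, and JSON body are placeholders; the thread ID can be any unique string you generate, such as a UUID):

```shell
curl "https://run.blaxel.ai/your-workspace/agents/your-agent" \
  -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "X-Blaxel-Thread-Id: 3f6a2c1e-9b7d-4e52-a0c8-1d2e3f4a5b6c" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "What was my previous question?"}'
```

Reusing the same thread ID across requests keeps them in the same conversation.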
Make an inference request
Blaxel API
Make a POST request to the inference endpoint of the agent deployment you want to query, making sure to fill in your authentication token:
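A sketch assuming the endpoint pattern above and a JSON body with an inputs field (adjust the payload to whatever schema your agent expects):

```shell
curl "https://run.blaxel.ai/your-workspace/agents/your-agent" \
  -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "Hello, agent!"}'
```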
Read about the API parameters in the reference.
Blaxel CLI
The following command makes a default POST request to the agent:
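A sketch assuming an agent named your-agent; the --data payload is illustrative:

```shell
bl run agent your-agent --data '{"inputs": "Hello, agent!"}'
```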
Read about the CLI parameters in the reference.
Blaxel console
Inference requests can be made from the Blaxel console, on the agent deployment's Playground page.