Deploy and run your agents

Use the th deploy command to deploy your agent configuration as a streaming API. Each agent gets its own unique API endpoint you can use to interact with it.

When you run th deploy, the Toolhouse CLI reads your configuration file (th file) and generates an Agents API. The API endpoint is formatted as https://agents.toolhouse.ai/$AGENT_ID, where $AGENT_ID is a unique GUID assigned to your agent. It will be the same value as the id field in your th file.
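
For example, the endpoint is just that GUID appended to the Agents API base URL. The snippet below is only a sketch: the GUID is a placeholder, and exporting AGENT_ID is simply a convention used by the curl examples on this page.

# Sketch: export the agent ID so the examples below can reference it.
# The GUID is a placeholder; copy the id field from your own th file.
export AGENT_ID="0b1c2d3e-4f56-7890-abcd-ef0123456789"
# Your agent's endpoint is then:
echo "https://agents.toolhouse.ai/$AGENT_ID"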

Calling your agent

You can call your agent by making a POST request to its endpoint.

curl -XPOST https://agents.toolhouse.ai/$AGENT_ID

If your agent is marked as public: true in your th file, you can invoke the Agents API using a simple POST request. No authentication is required.

For agents marked as public: false, you need to authenticate your requests by including your Toolhouse API Key as a Bearer token in the Authorization header.

curl -XPOST \
  https://agents.toolhouse.ai/$AGENT_ID \
  -H 'Authorization: Bearer YOUR_TOOLHOUSE_API_KEY'

Handling the response from the agent

When you make a POST request to the Agents API, it executes the agent and streams the response back to the client.

The response will contain an X-Toolhouse-Run-ID header with a unique ID for the execution run. You can think of this ID as an identifier of the current context, including the initial message, any MCP server call, and the response from the agent.
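
As a sketch of how you might capture the run ID while still streaming the output, you can ask curl to dump the response headers to a file and read the ID from there. The Authorization header is only needed for private agents, and the header extraction shown is just one possible approach.

# Stream the agent's response and save the response headers to a file.
# -N disables output buffering so the stream prints as it arrives;
# -D writes the response headers to headers.txt.
curl -N -D headers.txt -XPOST \
  https://agents.toolhouse.ai/$AGENT_ID \
  -H 'Authorization: Bearer YOUR_TOOLHOUSE_API_KEY'

# Extract the run ID from the saved headers (header names are case-insensitive).
RUN_ID=$(grep -i '^x-toolhouse-run-id:' headers.txt | awk '{print $2}' | tr -d '\r')
echo "Run ID: $RUN_ID"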

Update or redeploy your agent

Any changes to your th file require redeployment to take effect. Simply run th deploy again to update your Agents API.

If you wish to test your configuration without deploying it, use the th run command. See Test agents before deploying.
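
A typical edit-test-deploy loop might look like this:

# Test the updated configuration locally without deploying it
th run
# Deploy the changes so they take effect on your Agents API endpoint
th deploy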

Continuing an interaction with an agent

The X-Toolhouse-Run-ID header returned in the initial response from the POST call lets you continue the interaction with the agent while keeping a reference to the current context. Using a Run ID is particularly useful for conversational agents, because it allows them to retain the content and history from previous messages.

To continue a conversation, you can use the Run ID in a PUT request to the same agent endpoint.

# Initial request to execute the agent, assuming $AGENT_ID finds products
curl -v -XPOST https://agents.toolhouse.ai/$AGENT_ID
# Headers will contain x-toolhouse-run-id: $RUN_ID
# Agent will stream the response

# Continuing the conversation using the X-Toolhouse-Run-ID
curl -XPUT https://agents.toolhouse.ai/$AGENT_ID/$RUN_ID \
--json '{
  "message": "thank you, now find products similar to an iPhone"
}'
# Headers will contain x-toolhouse-run-id: $RUN_ID
# Agent will stream a new response using the current context

The response to the PUT request also includes an X-Toolhouse-Run-ID header containing the run ID to use in subsequent requests.

If the agent is private, you will need to pass a Toolhouse API Key as a Bearer token in your PUT requests to continue the interaction with your agent.
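
For example, a follow-up request to a private agent looks like the PUT call above, with the Authorization header added (a sketch only; the message body is illustrative):

# Continuing a conversation with a private agent (sketch)
curl -XPUT https://agents.toolhouse.ai/$AGENT_ID/$RUN_ID \
  -H 'Authorization: Bearer YOUR_TOOLHOUSE_API_KEY' \
  --json '{
    "message": "your follow-up message here"
  }'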