Quick start (Python)
In this guide you'll learn how to set up and use Toolhouse with just a few lines of code. For this example, we'll build an assistant that can run the code it generates. We'll do so by using the Code Interpreter tool from Toolhouse.
Sign up
You can sign up for free by going to the Toolhouse sign up page. You can skip this step if you've already signed up for Toolhouse.
Create an API Key
You will need to generate an API Key in order to use Toolhouse with your code.
Go to the API Keys section in Toolhouse.
You'll find a field that says API Key Name. Give your API Key a name, then click Generate.
Copy the API Key and save it where you save your secrets. In this guide, we'll assume you have a .env file. Save the API Key as TOOLHOUSE_API_KEY in your environment file. This allows Toolhouse to pick up its value directly in your code:
TOOLHOUSE_API_KEY=<Your API Key value>
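If you want Python to load the .env file itself, the python-dotenv package is the usual choice. As a self-contained illustration of what such a loader does, here's a minimal standard-library sketch; load_env is a hypothetical helper for this guide, not part of the Toolhouse SDK:

```python
import os

def load_env(path=".env"):
    """Minimal .env loader (illustrative; python-dotenv does this robustly).

    Reads KEY=value lines, skipping comments and blanks, and only sets
    variables that aren't already present in the environment.
    """
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```

After calling load_env(), os.environ["TOOLHOUSE_API_KEY"] holds your key and Toolhouse can pick it up.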
Pick and choose your MCP servers
The Toolhouse MCP server is pre-installed on your account, containing the fundamental stack to build an agent (knowledge, memory, execution, and output). You can select the servers you want using Bundles.
You can expand or restrict the MCP servers your agent can access:
Go to the Bundles page.
Click on the "Create Bundle" button at the top right corner of the screen.
Give your bundle a name and select all the MCP servers you want to add.
TypeScript/JavaScript library
If you're not a Python developer but want to use Toolhouse, you can use the Toolhouse SDK for TypeScript.
Add Toolhouse to your code
Install the Toolhouse SDK in your project:
pip install toolhouse
Then, simply instantiate the SDK and pass the tool definitions in your LLM call. Toolhouse encapsulates the logic you usually need to pass the tool output back to the LLM, so you won't have to write that boilerplate yourself.
# Make sure you've also installed the OpenAI SDK through: pip install openai
import os

from openai import OpenAI
from toolhouse import Toolhouse

# Let's set our API Keys by reading them from the environment.
# Please remember to use a safer system to store your API keys
# after finishing the quick start.
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
th = Toolhouse(api_key=os.environ.get("TOOLHOUSE_API_KEY"),
               provider="openai")
# Define the OpenAI model we want to use
MODEL = 'gpt-4o-mini'
messages = [{
"role": "user",
"content":
"Generate FizzBuzz code."
"Execute it to show me the results up to 10."
}]
response = client.chat.completions.create(
model=MODEL,
messages=messages,
# Passes Code Execution as a tool
tools=th.get_tools()
)
# Runs the Code Execution tool, gets the result,
# and appends it to the context
messages += th.run_tools(response)
response = client.chat.completions.create(
model=MODEL,
messages=messages,
tools=th.get_tools()
)
# Prints the response with the answer
print(response.choices[0].message.content)
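To make the round trip concrete, here's a standard-library sketch of the pattern that run_tools encapsulates: the assistant requests a tool call, we execute it locally, and we append the result as a tool message so the next completion can use it. None of the names below (fake_run_tools, the toy code_interpreter) come from the Toolhouse SDK; they only mimic the OpenAI-style message flow.

```python
import json

def fake_run_tools(assistant_message, local_tools):
    """Execute each requested tool call and build the follow-up messages."""
    new_messages = [assistant_message]
    for call in assistant_message.get("tool_calls", []):
        fn = local_tools[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        result = fn(**args)
        # Tool results are appended as "tool" role messages,
        # linked to the request via tool_call_id.
        new_messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": str(result),
        })
    return new_messages

# A toy "code interpreter" that just evaluates a Python expression.
tools = {"code_interpreter": lambda code: eval(code)}

# A mock assistant message asking to run some code.
assistant_message = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_1",
        "function": {"name": "code_interpreter",
                     "arguments": json.dumps({"code": "1 + 2"})},
    }],
}

messages = fake_run_tools(assistant_message, tools)
print(messages[-1]["content"])  # prints "3"
```

In the real flow, th.run_tools(response) does this for you against the tools you enabled in Toolhouse, which is why the example above simply appends its return value to messages before the second completion call.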