Quick start

In this guide you'll learn how to implement and use Toolhouse with just a few lines of code. For the purpose of this example, we'll build an assistant that can run the code it generates. We'll do so by using the Code Interpreter tool from the Tool Store.

Sign up

You can sign up for free by going to the Toolhouse sign up page. You can skip this step if you already signed up for Toolhouse.

Create an API Key

You will need to generate an API Key in order to use Toolhouse with your code.

  1. Go to the API Keys section in Toolhouse.

  2. You'll find a field labeled API Key Name. Give your API Key a name, then click Generate.

  3. Copy the API Key and save it where you save your secrets. In this guide, we'll assume you have a .env file. Save the API Key as TOOLHOUSE_API_KEY in your environment file. This allows Toolhouse to pick up its value directly in your code.

    TOOLHOUSE_API_KEY=<Your API Key value>
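Your code can then read this value from the environment. If you'd rather not pull in a dependency like python-dotenv for the quick start, a minimal loader is easy to sketch (`load_env` is a hypothetical helper for illustration, not part of the Toolhouse SDK):

```python
import os

def load_env(path=".env"):
    # Hypothetical helper (not part of Toolhouse): reads KEY=VALUE
    # lines from a .env file into os.environ, skipping comments and
    # blank lines, without overwriting variables that are already set.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```

In production, prefer a battle-tested option such as python-dotenv or your platform's secret manager.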

Add a tool

In Toolhouse, tools are function definitions. Tools can be local (meaning they run on your infrastructure, such as your cloud environment or just your laptop) or cloud (meaning they are executed in the Toolhouse Cloud, like the tools you can find in the Tool Store).

  1. Go to the Tool Store.

  2. Find the Code Interpreter tool and click Install.

That's it! You just enabled a fully sandboxed Code Interpreter that can work with any LLM you wish to use.

Add Toolhouse to your code

Install the Toolhouse SDK in your project:

pip install toolhouse

Then, simply instantiate the SDK and pass the tool definitions in your LLM call. Toolhouse encapsulates the logic needed to pass the tool output back to the LLM, so you won't have to write that boilerplate yourself.

import os
# 👋 Make sure you've also installed the OpenAI SDK through: pip install openai
from openai import OpenAI
from toolhouse import Toolhouse

# Let's set our API Keys.
# Please remember to use a safer system to store your API Keys
# after finishing the quick start.
client = OpenAI(api_key='YOUR_OPENAI_API_KEY')
th = Toolhouse(access_token=os.environ['TOOLHOUSE_API_KEY'],
               provider="openai")

# Define the OpenAI model we want to use
MODEL = 'gpt-4o-mini'

messages = [{
    "role": "user",
    "content":
        "Generate FizzBuzz code."
        "Execute it to show me the results up to 10."
}]

response = client.chat.completions.create(
  model=MODEL,
  messages=messages,
  # Passes Code Execution as a tool
  tools=th.get_tools()
)

# Runs the Code Execution tool, gets the result, 
# and appends it to the context
messages += th.run_tools(response)

response = client.chat.completions.create(
  model=MODEL,
  messages=messages,
  tools=th.get_tools()
)
# Prints the response with the answer
print(response.choices[0].message.content)
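Under the hood, run_tools handles the tool-call round trip for you: it executes each tool call the model requested and appends the results to the conversation as tool messages. Conceptually, it behaves something like this simplified sketch (our illustration only, not the actual SDK implementation; the real method also runs cloud tools for you):

```python
def run_tools_sketch(response, tool_functions):
    # Simplified illustration of the tool-call round trip.
    # tool_functions maps a tool name to a callable that takes the
    # model's JSON arguments and returns a result.
    new_messages = []
    message = response.choices[0].message
    if message.tool_calls:
        # Keep the assistant message that requested the tool calls.
        new_messages.append(message)
        for call in message.tool_calls:
            fn = tool_functions[call.function.name]
            result = fn(call.function.arguments)
            # Package each result as a "tool" message tied back to
            # its originating call via tool_call_id.
            new_messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": str(result),
            })
    return new_messages
```

This is why the example above makes a second chat.completions.create call: the tool results appended to messages give the model the context it needs to produce the final answer.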
