Working with your local tools

Toolhouse gives you the flexibility to use your local tools alongside any tools you install from the Tool Store. When you run your local tools with Toolhouse, it handles the boilerplate logic required to execute tools and convert their results into valid completion objects.

To execute your local tools, follow these two steps:

Decorate your existing functions

In your code, add the register_local_tool() decorator to the function that returns the response to the LLM. Specify the name of the tool as the decorator's parameter.

  • Your function's signature should accept all the arguments the LLM will pass, both required and optional; give optional arguments default values (a sketch follows the example below). Toolhouse may throw an exception if the function signature does not match the arguments the LLM sends.

  • Your function should always return a string. Toolhouse will throw an exception if the function does not return a string or string-like value.

Ensure that the arguments you need from the LLM are mapped in the function signature:

import requests
from toolhouse import Toolhouse
from anthropic import Anthropic

client = Anthropic(api_key="YOUR_API_KEY")
MODEL = "claude-3-5-sonnet-20240620"

th = Toolhouse(provider="anthropic")

# The parameter must match the name of the tool in your tool definition
@th.register_local_tool("get_current_weather")
def get_weather_forecast(
    # These arguments must match the names of the parameters
    # in your tool definition
    latitude: float,
    longitude: float) -> str:

    url = f"https://api.open-meteo.com/v1/forecast?latitude={latitude}&longitude={longitude}&hourly=temperature_2m&forecast_days=1"

    response = requests.get(url)

    if response.status_code == 200:
        return response.text
    else:
        return f"Error: {response.status_code} - {response.text}"

Pass your local tools

Define your local tools in a variable, for example my_local_tools.

Append your tools to the list returned by th.get_tools(), then use th.run_tools() to run both your local and cloud tools.

my_local_tools = [
    {
        "name": "get_current_weather",
        "description": "Retrieves the current weather for the location you specify.",
        "input_schema": {
            "type": "object",
            "properties": {
                "latitude": {
                    "type": "number",
                    "description": "The latitude of the location."
                },
                "longitude": {
                    "type": "number",
                    "description": "The longitude of the location."
                }
            },
            "required": [
                "latitude",
                "longitude"
            ]
        }
    }
]

messages = [
  {
    "role": "user",
    "content": "What's the weather in Oakland, CA?",
  }
]

response = client.messages.create(
  model=MODEL,
  messages=messages,
  max_tokens=1000,
  tools=th.get_tools() + my_local_tools,
)

# Runs your local tool, gets the result, 
# and appends it to the context
tool_run = th.run_tools(response)
messages = messages + tool_run

response = client.messages.create(
  model=MODEL,
  messages=messages,
  max_tokens=1000,
  # Pass the same tool list so the tool calls in the context stay valid
  tools=th.get_tools() + my_local_tools,
)

print(response.content[0].text)

Toolhouse always executes tools in the order chosen by the LLM. If your LLM supports parallel tool use, tools are still executed one by one, in the order the LLM specifies.

Overriding cloud tools

Your local tools will always have priority over cloud tools. In other words, if your local tool has the same name as a cloud tool, Toolhouse will execute your local tool.

You can use this override behavior to bypass cloud execution when you need to. This is useful when you want to cache or memoize a tool's results, or when you want to disable a tool for a specific completion call.
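
For example, assuming a hypothetical cloud tool named scraper that fetches a page's contents, a minimal sketch of an override that memoizes its results could look like this:

from functools import lru_cache

import requests

# Hypothetical: "scraper" stands in for the name of a cloud tool you
# have installed. Registering a local tool with the same name makes
# Toolhouse run this function instead of the cloud version.
@th.register_local_tool("scraper")
@lru_cache(maxsize=128)  # memoize: repeated URLs skip the network call
def cached_scraper(url: str) -> str:
    response = requests.get(url)
    if response.status_code == 200:
        return response.text
    return f"Error: {response.status_code}"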

Treat tool definitions as prompts

Describe your tools the way you would describe the behavior of your assistant. Ensure each tool description contains precise, detailed instructions. This is particularly important for models with fewer parameters, which tend to be less accurate in generating valid tool calls from your user prompts.
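
For example, compare a terse description with one that reads like an instruction to the assistant (both hypothetical):

# Vague: the model has to guess when and how to call the tool
vague_description = "Gets weather."

# Precise: spells out when to call the tool and how to build its arguments
precise_description = (
    "Retrieves the current hourly temperature for a location. Call "
    "this tool whenever the user asks about current weather. Convert "
    "the user's location to latitude and longitude before calling; "
    "never guess coordinates you are unsure about."
)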
