Function calling
A growing number of chat models, such as those from OpenAI and Google (Gemini), have a function-calling API that lets you describe functions and their arguments, and have the model return a JSON object with a function to invoke and the inputs to that function. Function calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally.
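For example, an OpenAI-style tool call comes back looking roughly like this (an illustrative Python dict, not from a real API call; the id is hypothetical and the exact fields vary by provider):

# Rough shape of a single OpenAI-style tool call
tool_call = {
    "id": "call_abc123",  # hypothetical ID
    "type": "function",
    "function": {
        "name": "Multiply",
        "arguments": '{"a": 3, "b": 12}',  # note: arguments arrive as a JSON string
    },
}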
LangChain comes with a number of utilities to make function-calling easy. Namely, it comes with:
- simple syntax for binding functions to models
- converters for formatting various types of objects to the expected function schemas
- output parsers for extracting the function invocations from API responses
- chains for getting structured outputs from a model, built on top of function calling
We'll focus here on the first two points. For a detailed guide on output parsing, check out the OpenAI Tools output parsers; for structured output chains, check out the Structured output guide.
Before getting started, make sure you have langchain-core installed.
%pip install -qU langchain-core langchain-openai
import getpass
import os
Binding functions
A number of models implement helper methods that will take care of formatting and binding different function-like objects to the model. Let's take a look at how we might take the following Pydantic function schema and get different models to invoke it:
from langchain_core.pydantic_v1 import BaseModel, Field
# Note that the docstrings here are crucial, as they will be passed along
# to the model along with the class name.
class Multiply(BaseModel):
    """Multiply two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")
Below we walk through the same example with each provider: OpenAI, Fireworks, Mistral, and Together.

OpenAI
Set up dependencies and API keys:
%pip install -qU langchain-openai
os.environ["OPENAI_API_KEY"] = getpass.getpass()
We can use the ChatOpenAI.bind_tools() method to handle converting Multiply to an OpenAI function and binding it to the model (i.e., passing it in each time the model is invoked).
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
llm_with_tools = llm.bind_tools([Multiply])
llm_with_tools.invoke("what's 3 * 12")
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Q8ZQ97Qrj5zalugSkYMGV1Uo', 'function': {'arguments': '{"a":3,"b":12}', 'name': 'Multiply'}, 'type': 'function'}]})
We can add a tool parser to extract the tool calls from the generated message to JSON:
from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser
tool_chain = llm_with_tools | JsonOutputToolsParser()
tool_chain.invoke("what's 3 * 12")
[{'type': 'Multiply', 'args': {'a': 3, 'b': 12}}]
Or back to the original Pydantic class:
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
tool_chain = llm_with_tools | PydanticToolsParser(tools=[Multiply])
tool_chain.invoke("what's 3 * 12")
[Multiply(a=3, b=12)]
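Since the parser hands back real Multiply instances, nothing stops us from executing the call ourselves. A minimal sketch (the execution step is ours, not part of the parser):

# Sketch: evaluate each parsed tool call by hand
calls = tool_chain.invoke("what's 3 * 12")
print([call.a * call.b for call in calls])  # -> [36]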
If we want to force the model to use a tool (and to use it only once), we can set the tool_choice argument:
llm_with_multiply = llm.bind_tools([Multiply], tool_choice="Multiply")
llm_with_multiply.invoke(
    "make up some numbers if you really want but I'm not forcing you"
)
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_f3DApOzb60iYjTfOhVFhDRMI', 'function': {'arguments': '{"a":5,"b":10}', 'name': 'Multiply'}, 'type': 'function'}]})
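Forcing the tool pairs nicely with parsing when you want exactly one structured object back. A sketch, assuming the parser's first_tool_only option (which returns the first tool call instead of a list):

# Sketch: forced tool + parser -> a single Multiply instance
extraction_chain = llm_with_multiply | PydanticToolsParser(
    tools=[Multiply], first_tool_only=True
)
extraction_chain.invoke("i have 5 bags with 10 apples each")  # e.g. Multiply(a=5, b=10)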
For more, see the ChatOpenAI API reference.
Fireworks

Install dependencies and set API keys:
%pip install -qU langchain-fireworks
os.environ["FIREWORKS_API_KEY"] = getpass.getpass()
We can use the ChatFireworks.bind_tools() method to handle converting Multiply to a valid function schema and binding it to the model (i.e., passing it in each time the model is invoked).
from langchain_fireworks import ChatFireworks
llm = ChatFireworks(model="accounts/fireworks/models/firefunction-v1", temperature=0)
llm_with_tools = llm.bind_tools([Multiply])
llm_with_tools.invoke("what's 3 * 12")
AIMessage(content='Three multiplied by twelve is 36.')
If our model isn't using the tool, as is the case here, we can force tool usage by specifying tool_choice="any" or by passing the name of the tool we want used:
llm_with_tools = llm.bind_tools([Multiply], tool_choice="Multiply")
llm_with_tools.invoke("what's 3 * 12")
AIMessage(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_qIP2bJugb67LGvc6Zhwkvfqc', 'type': 'function', 'function': {'name': 'Multiply', 'arguments': '{"a": 3, "b": 12}'}}]})
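The tool_choice="any" variant mentioned above looks like this; it forces the model to call some bound tool without naming which one:

# Require that some tool is called, without pinning a specific one
llm_with_any_tool = llm.bind_tools([Multiply], tool_choice="any")
llm_with_any_tool.invoke("what's 3 * 12")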
We can add a tool parser to extract the tool calls from the generated message to JSON:
from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser
tool_chain = llm_with_tools | JsonOutputToolsParser()
tool_chain.invoke("what's 3 * 12")
[{'type': 'Multiply', 'args': {'a': 3, 'b': 12}}]
Or back to the original Pydantic class:
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
tool_chain = llm_with_tools | PydanticToolsParser(tools=[Multiply])
tool_chain.invoke("what's 3 * 12")
[Multiply(a=3, b=12)]
For more, see the ChatFireworks reference.
Mistral

Install dependencies and set API keys:
%pip install -qU langchain-mistralai
os.environ["MISTRAL_API_KEY"] = getpass.getpass()
We can use the ChatMistralAI.bind_tools() method to handle converting Multiply to a valid function schema and binding it to the model (i.e., passing it in each time the model is invoked).
from langchain_mistralai import ChatMistralAI
llm = ChatMistralAI(model="mistral-large-latest", temperature=0)
llm_with_tools = llm.bind_tools([Multiply])
llm_with_tools.invoke("what's 3 * 12")
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'null', 'type': <ToolType.function: 'function'>, 'function': {'name': 'Multiply', 'arguments': '{"a": 3, "b": 12}'}}]})
We can add a tool parser to extract the tool calls from the generated message to JSON:
from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser
tool_chain = llm_with_tools | JsonOutputToolsParser()
tool_chain.invoke("what's 3 * 12")
[{'type': 'Multiply', 'args': {'a': 3, 'b': 12}}]
Or back to the original Pydantic class:
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
tool_chain = llm_with_tools | PydanticToolsParser(tools=[Multiply])
tool_chain.invoke("what's 3 * 12")
[Multiply(a=3, b=12)]
We can force tool usage by specifying tool_choice="any":
llm_with_tools = llm.bind_tools([Multiply], tool_choice="any")
llm_with_tools.invoke("I don't even want you to use the tool")
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'null', 'type': <ToolType.function: 'function'>, 'function': {'name': 'Multiply', 'arguments': '{"a": 5, "b": 7}'}}]})
For more, see the ChatMistralAI API reference.
Together

Since Together AI's API is a drop-in replacement for OpenAI's, we can just use the OpenAI integration.
Install dependencies and set API keys:
%pip install -qU langchain-openai
os.environ["TOGETHER_API_KEY"] = getpass.getpass()
We can use the ChatOpenAI.bind_tools() method to handle converting Multiply to a valid function schema and binding it to the model (i.e., passing it in each time the model is invoked).
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
)
llm_with_tools = llm.bind_tools([Multiply])
llm_with_tools.invoke("what's 3 * 12")
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_4tc61dp0478zafqe33hfriee', 'function': {'arguments': '{"a":3,"b":12}', 'name': 'Multiply'}, 'type': 'function'}]})
We can add a tool parser to extract the tool calls from the generated message to JSON:
from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser
tool_chain = llm_with_tools | JsonOutputToolsParser()
tool_chain.invoke("what's 3 * 12")
[{'type': 'Multiply', 'args': {'a': 3, 'b': 12}}]
Or back to the original Pydantic class:
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
tool_chain = llm_with_tools | PydanticToolsParser(tools=[Multiply])
tool_chain.invoke("what's 3 * 12")
[Multiply(a=3, b=12)]
If we want to force the model to use a tool (and to use it only once), we can set the tool_choice argument:
llm_with_multiply = llm.bind_tools([Multiply], tool_choice="Multiply")
llm_with_multiply.invoke(
    "make up some numbers if you really want but I'm not forcing you"
)
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6k6d0gr3jhqil2kqf7sgeusl', 'function': {'arguments': '{"a":5,"b":7}', 'name': 'Multiply'}, 'type': 'function'}]})
For more, see the ChatOpenAI API reference.
Defining function schemas
If you need to access function schemas directly, LangChain has a built-in converter that can turn Python functions, Pydantic classes, and LangChain Tools into the OpenAI-format JSON schema:
Python function
import json
from langchain_core.utils.function_calling import convert_to_openai_tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers together.

    Args:
        a: First integer
        b: Second integer
    """
    return a * b
print(json.dumps(convert_to_openai_tool(multiply), indent=2))
{
"type": "function",
"function": {
"name": "multiply",
"description": "Multiply two integers together.",
"parameters": {
"type": "object",
"properties": {
"a": {
"type": "integer",
"description": "First integer"
},
"b": {
"type": "integer",
"description": "Second integer"
}
},
"required": [
"a",
"b"
]
}
}
}
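This is the same conversion bind_tools() applies under the hood, so (assuming your provider's bind_tools accepts arbitrary function-like objects, as ChatOpenAI's does) you can pass the plain function straight in:

# The function's signature and docstring become the tool schema, as above
llm_with_tools = llm.bind_tools([multiply])
llm_with_tools.invoke("what's 3 * 12")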
Pydantic class
from langchain_core.pydantic_v1 import BaseModel, Field
class multiply(BaseModel):
    """Multiply two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")
print(json.dumps(convert_to_openai_tool(multiply), indent=2))
{
"type": "function",
"function": {
"name": "multiply",
"description": "Multiply two integers together.",
"parameters": {
"type": "object",
"properties": {
"a": {
"description": "First integer",
"type": "integer"
},
"b": {
"description": "Second integer",
"type": "integer"
}
},
"required": [
"a",
"b"
]
}
}
}
LangChain Tool
from typing import Any, Type
from langchain_core.tools import BaseTool
class MultiplySchema(BaseModel):
    """Multiply tool schema."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


class Multiply(BaseTool):
    args_schema: Type[BaseModel] = MultiplySchema
    name: str = "multiply"
    description: str = "Multiply two integers together."

    def _run(self, a: int, b: int, **kwargs: Any) -> Any:
        return a * b
# Note: we're passing in a Multiply instance, not the class itself.
print(json.dumps(convert_to_openai_tool(Multiply()), indent=2))
{
"type": "function",
"function": {
"name": "multiply",
"description": "Multiply two integers together.",
"parameters": {
"type": "object",
"properties": {
"a": {
"description": "First integer",
"type": "integer"
},
"b": {
"description": "Second integer",
"type": "integer"
}
},
"required": [
"a",
"b"
]
}
}
}
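For simple cases, the @tool decorator from langchain_core.tools is a terser way to build a BaseTool; the resulting tool converts the same way, with the schema inferred from the function's signature and docstring (a sketch):

from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers together."""
    return a * b


print(json.dumps(convert_to_openai_tool(multiply), indent=2))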
Next steps
- Output parsing: See OpenAI Tools output parsers and OpenAI Functions output parsers to learn about parsing function-calling API responses into various formats.
- Structured output chains: Some models have constructors that handle creating a structured output chain for you.
- Tool use: See these guides to learn how to construct chains and agents that actually execute the tools the model invokes.