ChatAnthropic

This notebook covers how to get started with Anthropic chat models.

Setup

For setup instructions, please see the Installation and Environment Setup sections of the Anthropic Platform page.

%pip install -qU langchain-anthropic

Environment Setup

We’ll need to get an Anthropic API key and set the ANTHROPIC_API_KEY environment variable:

import os
from getpass import getpass

os.environ["ANTHROPIC_API_KEY"] = getpass()
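
If you would rather not be prompted when the key is already set, one small variation (this guard is just an illustrative convenience, not part of the original example):

import os
from getpass import getpass

# Only prompt for the key if it is not already set in the environment.
if "ANTHROPIC_API_KEY" not in os.environ:
    os.environ["ANTHROPIC_API_KEY"] = getpass()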

The code in this guide assumes that ANTHROPIC_API_KEY is set in your environment. If you would rather pass the API key explicitly and choose a different model, you can do so when constructing the client:

from langchain_anthropic import ChatAnthropic

chat = ChatAnthropic(
    temperature=0,
    anthropic_api_key="YOUR_API_KEY",
    model_name="claude-3-opus-20240229",
)

In these demos we will use the Claude 3 Opus model. You can also use the launch version of the Claude 3 Sonnet model by passing model_name="claude-3-sonnet-20240229".
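
For example, a minimal sketch of instantiating the Sonnet launch version instead (the variable name chat_sonnet is just illustrative):

from langchain_anthropic import ChatAnthropic

# Same constructor as above; only the model_name changes.
chat_sonnet = ChatAnthropic(temperature=0, model_name="claude-3-sonnet-20240229")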

See the model comparison doc for an overview of the available models.

from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

chat = ChatAnthropic(temperature=0, model_name="claude-3-opus-20240229")

system = (
    "You are a helpful assistant that translates {input_language} to {output_language}."
)
human = "{text}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])

chain = prompt | chat
chain.invoke(
    {
        "input_language": "English",
        "output_language": "Korean",
        "text": "I love Python",
    }
)
AIMessage(content='μ €λŠ” νŒŒμ΄μ¬μ„ μ‚¬λž‘ν•©λ‹ˆλ‹€.\n\nTranslation:\nI love Python.')
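
Because prompt | chat produces a standard LangChain Runnable, you can also send several inputs in a single call with batch. A minimal sketch, reusing the chain above (the example inputs are illustrative):

chain.batch(
    [
        {"input_language": "English", "output_language": "Korean", "text": "I love Python"},
        {"input_language": "English", "output_language": "German", "text": "I love Python"},
    ]
)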

ChatAnthropic also supports async and streaming functionality:

chat = ChatAnthropic(temperature=0, model_name="claude-3-opus-20240229")
prompt = ChatPromptTemplate.from_messages([("human", "Tell me a joke about {topic}")])
chain = prompt | chat
await chain.ainvoke({"topic": "bear"})
AIMessage(content='Sure, here\'s a joke about a bear:\n\nA bear walks into a bar and says to the bartender, "I\'ll have a pint of beer and a.......... packet of peanuts."\n\nThe bartender asks, "Why the big pause?"\n\nThe bear replies, "I don\'t know, I\'ve always had them!"')
chat = ChatAnthropic(temperature=0.3, model_name="claude-3-opus-20240229")
prompt = ChatPromptTemplate.from_messages(
    [("human", "Give me a list of famous tourist attractions in Japan")]
)
chain = prompt | chat
for chunk in chain.stream({}):
    print(chunk.content, end="", flush=True)
Here is a list of famous tourist attractions in Japan:

1. Tokyo Skytree (Tokyo)
2. Senso-ji Temple (Tokyo)
3. Meiji Shrine (Tokyo)
4. Tokyo DisneySea (Urayasu, Chiba)
5. Fushimi Inari Taisha (Kyoto)
6. Kinkaku-ji (Golden Pavilion) (Kyoto)
7. Kiyomizu-dera (Kyoto)
8. Nijo Castle (Kyoto)
9. Osaka Castle (Osaka)
10. Dotonbori (Osaka)
11. Hiroshima Peace Memorial Park (Hiroshima)
12. Itsukushima Shrine (Miyajima Island, Hiroshima)
13. Himeji Castle (Himeji)
14. Todai-ji Temple (Nara)
15. Nara Park (Nara)
16. Mount Fuji (Shizuoka and Yamanashi Prefectures)
17.
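
The async and streaming interfaces can also be combined. As a rough sketch, astream yields chunks you can iterate over with async for, using the same chain as above:

async for chunk in chain.astream({}):
    print(chunk.content, end="", flush=True)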

Multimodal

Anthropic’s Claude 3 models accept both image and text inputs. You can use this as follows:

# open ../../../static/img/brand/wordmark.png as base64 str
import base64
from pathlib import Path

from IPython.display import HTML

img_path = Path("../../../static/img/brand/wordmark.png")
img_base64 = base64.b64encode(img_path.read_bytes()).decode("utf-8")

# display b64 image in notebook
HTML(f'<img src="data:image/png;base64,{img_base64}">')
from langchain_core.messages import HumanMessage

chat = ChatAnthropic(model="claude-3-opus-20240229")
messages = [
    HumanMessage(
        content=[
            {
                "type": "image_url",
                "image_url": {
                    # langchain logo
                    "url": f"data:image/png;base64,{img_base64}",  # noqa: E501
                },
            },
            {"type": "text", "text": "What is this logo for?"},
        ]
    )
]
chat.invoke(messages)
AIMessage(content='This logo is for LangChain, which appears to be some kind of software or technology platform based on the name and minimalist design style of the logo featuring a silhouette of a bird (likely an eagle or hawk) and the company name in a simple, modern font.')
