# ChatWriter
This notebook provides a quick overview for getting started with Writer chat models.
Writer has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the Writer docs.
## Overview

### Integration details
| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
|---|---|---|---|---|---|---|
| ChatWriter | langchain-community | ❌ | ❌ | ❌ | ❌ | ❌ |
### Model features
| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
|---|---|---|---|---|---|---|---|---|---|
| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
## Setup
To access Writer models you'll need to create a Writer account, get an API key, and install the `writer-sdk` and `langchain-community` packages.
### Credentials
Head to Writer AI Studio to sign up for a Writer account and generate an API key. Once you've done this, set the `WRITER_API_KEY` environment variable:
```python
import getpass
import os

if not os.environ.get("WRITER_API_KEY"):
    os.environ["WRITER_API_KEY"] = getpass.getpass("Enter your Writer API key: ")
```
### Installation
The LangChain Writer integration lives in the `langchain-community` package:
```python
%pip install -qU langchain-community writer-sdk
```
## Instantiation
Now we can instantiate our model object and generate chat completions:
```python
from langchain_community.chat_models.writer import ChatWriter
from writerai import AsyncWriter, Writer

llm = ChatWriter(
    client=Writer(),
    async_client=AsyncWriter(),
    model="palmyra-x-004",
    temperature=0.7,
    max_tokens=1000,
    # other params...
)
```
## Invocation
```python
messages = [
    (
        "system",
        "You are a helpful assistant that writes poems about the Python programming language.",
    ),
    ("human", "Write a poem about Python."),
]
ai_msg = llm.invoke(messages)
print(ai_msg.content)
```
## Streaming
```python
ai_stream = llm.stream(messages)

for chunk in ai_stream:
    print(chunk.content, end="")
```
## Chaining
We can chain our model with a prompt template like so:
```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that writes poems about the {input_language} programming language.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "Java",
        "input": "Write a poem about Java.",
    }
)
```
## Tool calling
Writer supports tool calling, which lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool.
### ChatWriter.bind_tools()
With `ChatWriter.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to tool schemas, which look like:
```python
{
    "name": "...",
    "description": "...",
    "parameters": {...},  # JSONSchema
}
```

and passed in every model invocation.
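As a rough sketch of that conversion, you can build a schema of the same shape from any Pydantic class yourself (this uses a hypothetical `GetStockPrice` tool and pydantic v2's `model_json_schema`; LangChain's internal conversion helper may differ in its exact output):

```python
from pydantic import BaseModel, Field


class GetStockPrice(BaseModel):  # hypothetical tool, for illustration only
    """Get the latest price for a stock ticker"""

    ticker: str = Field(..., description="The ticker symbol, e.g. AAPL")


# Assemble a tool schema of the {"name", "description", "parameters"} shape
# shown above, using the model's own docstring and generated JSONSchema.
tool_schema = {
    "name": GetStockPrice.__name__,
    "description": GetStockPrice.__doc__,
    "parameters": GetStockPrice.model_json_schema(),
}
```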
```python
from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm_with_tools = llm.bind_tools([GetWeather])
ai_msg = llm_with_tools.invoke(
    "what is the weather like in New York City",
)
```
### AIMessage.tool_calls
Notice that the AIMessage has a `tool_calls` attribute. This contains tool calls in a standardized `ToolCall` format that is model-provider agnostic.
```python
ai_msg.tool_calls
```
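For reference, each entry in `tool_calls` is a plain dict with `name`, `args`, `id`, and `type` keys. The values below are illustrative, not actual model output:

```python
# Sketch of the standardized ToolCall shape (illustrative values, not real output).
example_tool_call = {
    "name": "GetWeather",                         # which tool the model chose
    "args": {"location": "New York City, NY"},    # parsed arguments as a dict
    "id": "call_abc123",                          # provider-assigned call id (made up here)
    "type": "tool_call",
}
```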
For more on binding tools and tool call outputs, head to the tool calling docs.
## API reference
For detailed documentation of all Writer features, head to our API reference.
## Related
- Chat model conceptual guide
- Chat model how-to guides