Arize integration guide
Arize AI provides a machine learning observability platform that helps data scientists and machine learning engineers monitor, explain, and improve their models in production. You can integrate Arize with Anyscale Private Endpoints for enhanced LLM observability.
Step 0: Install dependencies
pip install "arize[AutoEmbeddings]"
pip install openai==1.3.2
pip install "langchain>=0.0.341"
Step 1: Set up Arize client
To log data from an app that uses Anyscale Private Endpoints, set up an ArizeCallbackHandler. Create an account or log in to Arize, then retrieve your API_KEY and SPACE_KEY from the console and paste them into the code below:
from langchain.callbacks.arize_callback import ArizeCallbackHandler

ARIZE_API_KEY = "YOUR_ARIZE_API_KEY"
ARIZE_SPACE_KEY = "YOUR_ARIZE_SPACE_KEY"

arize_callback = ArizeCallbackHandler(
    model_id="anyscale-private-endpoints-example",
    SPACE_KEY=ARIZE_SPACE_KEY,
    API_KEY=ARIZE_API_KEY,
)
Step 2: Initialize chat models with Anyscale Endpoints
To interact with various models through Anyscale Private Endpoints, initialize chat models using the LangChain library. This step involves creating a list of initial messages to set the context and using the ChatAnyscale
class to prepare for communication with each available model.
For a quick demo, you can paste your API base and key directly into the code; for development, follow best practices and load them from the environment instead of hard-coding them.
from langchain.chat_models import ChatAnyscale
from langchain.schema import SystemMessage, HumanMessage

ANYSCALE_BASE_URL = "ANYSCALE_BASE_URL"
ANYSCALE_API_KEY = "ANYSCALE_API_KEY"

# Messages for initiating the chat context
messages = [
    SystemMessage(
        content="You are a helpful AI that shares everything you know."
    ),
    HumanMessage(
        content="Come up with a HEX code that captures November."
    ),
]

# Retrieve the available models and create a ChatAnyscale instance for each one
chats = {
    model: ChatAnyscale(
        anyscale_api_base=ANYSCALE_BASE_URL,
        anyscale_api_key=ANYSCALE_API_KEY,
        model_name=model,
    )
    for model in ChatAnyscale.get_available_models(
        anyscale_api_base=ANYSCALE_BASE_URL,
        anyscale_api_key=ANYSCALE_API_KEY,
    )
}
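As an alternative to pasting credentials into the script, you can read them from environment variables. The following is a minimal sketch, assuming ANYSCALE_BASE_URL and ANYSCALE_API_KEY are the environment variable names you chose; the helper name is illustrative, not part of the LangChain or Anyscale APIs:

```python
import os

def load_anyscale_credentials():
    """Read the Anyscale endpoint base URL and API key from the environment.

    Raises a clear error if either variable is missing, so a misconfiguration
    surfaces early instead of as an opaque authentication failure later.
    """
    try:
        base_url = os.environ["ANYSCALE_BASE_URL"]
        api_key = os.environ["ANYSCALE_API_KEY"]
    except KeyError as missing:
        raise RuntimeError(
            f"Set the {missing.args[0]} environment variable"
        ) from missing
    return base_url, api_key
```

The returned pair can then be passed to ChatAnyscale in place of the hard-coded strings.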
Step 3: Log predictions and monitor with Arize
After setting up the chat instances, send the prompt messages to the chat model endpoints and log the responses through Arize AI. The ArizeCallbackHandler
captures each model's response and sends it to the Arize console. This allows you to monitor the model's performance and ensure the quality of predictions in production.
for model_name, endpoint in chats.items():
    response = endpoint.predict_messages(messages, callbacks=[arize_callback])
    print(model_name, "\n", response.content)
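When querying several model endpoints in one loop, a single failing endpoint would otherwise abort the remaining calls and their Arize logging. One common pattern is to catch per-model errors and continue; the sketch below is generic, with `endpoints` and `send` standing in for the `chats` mapping and the `predict_messages` call above:

```python
def query_all(endpoints, send):
    """Call send(endpoint) for each model, collecting results and errors
    separately so one failing endpoint doesn't stop the remaining calls."""
    results, errors = {}, {}
    for model_name, endpoint in endpoints.items():
        try:
            results[model_name] = send(endpoint)
        except Exception as exc:  # a real app may narrow this to API errors
            errors[model_name] = exc
    return results, errors
```

After the loop, `errors` can be inspected or logged while `results` still holds every successful response.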