Weights & Biases integration guide
This version of the Anyscale docs is deprecated. Go to the latest version for up-to-date information.
Weights & Biases (WandB) is a machine learning platform that provides tools for experiment tracking, model optimization, and dataset versioning. When integrated with Anyscale Private Endpoints, WandB Weave can securely connect to Anyscale's managed Ray clusters, enabling seamless scaling of machine learning experiments while maintaining data privacy and security through protected network communication.
Step 0: Install dependencies
pip install openai==0.28.0
pip install "tiktoken>=0.5.1"
pip install "wandb>=0.16.0"
pip install "weave>=0.30.0"
Step 1: Set up Weights & Biases
For a quick demo, you can paste your API base and key directly into the script, but for development, follow best practices for setting your API base and key and keep them out of source code.
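One common approach, for example, is to read both values from environment variables so they never appear in source code. A minimal sketch (the fallback placeholders mirror the demo values below and are assumptions, not a fixed convention):

```python
import os

# Read credentials from the environment, falling back to the demo
# placeholders so the script still runs for a quick test. The
# environment variable names here are illustrative assumptions.
ANYSCALE_BASE_URL = os.environ.get("ANYSCALE_BASE_URL", "ANYSCALE_BASE_URL")
ANYSCALE_API_KEY = os.environ.get("ANYSCALE_API_KEY", "ANYSCALE_API_KEY")
```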
import wandb
from weave.monitoring import init_monitor
wandb.login()
ANYSCALE_BASE_URL = "ANYSCALE_BASE_URL"
ANYSCALE_API_KEY = "ANYSCALE_API_KEY"
# WandB settings
WB_ENTITY = "WB_ENTITY" # A WandB username to send runs to.
WB_PROJECT = "anyscale_private_endpoint"
STREAM_NAME = "anyscale_logs"
# Initialize monitor with WandB entity, project, and stream name.
m = init_monitor(f"{WB_ENTITY}/{WB_PROJECT}/{STREAM_NAME}")
Step 2: Sample logging to WandB stream table
Now that you've set up WandB, you can log sample data to the stream table and view it at http://weave.wandb.ai.
from weave.monitoring import openai
# Making an API call to get a response
response = openai.ChatCompletion.create(
api_base=ANYSCALE_BASE_URL,
api_key=ANYSCALE_API_KEY,
model="meta-llama/Llama-2-70b-chat-hf",
messages=[
{
"role": "user",
"content": "Why doesn't a baker's dozen apply to eggs?"
},
])
print(response['choices'][0]['message']['content'])
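The returned object is dict-like, which is why the print statement indexes into `choices`, `message`, and `content`. A sketch of the fields that access pattern relies on, using made-up placeholder values rather than real model output:

```python
# Illustrative shape of a ChatCompletion response in openai 0.28.x.
# The content string and token counts below are invented placeholders.
sample_response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "Example assistant reply goes here.",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 18, "completion_tokens": 42, "total_tokens": 60},
}

# Same access pattern as the print statement above.
print(sample_response["choices"][0]["message"]["content"])
```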
Advanced: tracking parameters
With Weave, you can track specific parameters and attributes in your logged records for a more detailed analysis. This example tracks the "system prompt" separately from the "prompt template" and the "equation" parameter.
# Define the system prompt and the template
system_prompt = "Always write in bullet points."
prompt_template = 'Solve the following equation step by step: {equation}'
params = {'equation': '4 * (3 - 1)'}
# Make the API call while tracking additional attributes.
openai.ChatCompletion.create(
api_base=ANYSCALE_BASE_URL,
api_key=ANYSCALE_API_KEY,
model="meta-llama/Llama-2-70b-chat-hf",
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": prompt_template.format(**params)},
],
# Track additional attributes for the logged record
monitor_attributes={
'system_prompt': system_prompt,
'prompt_template': prompt_template,
'params': params
}
)
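The user message sent to the model is produced by plain Python string formatting. Expanding the template by hand, with the same `prompt_template` and `params` as above, shows exactly what the model receives:

```python
prompt_template = 'Solve the following equation step by step: {equation}'
params = {'equation': '4 * (3 - 1)'}

# Expand the template exactly as in the messages list above.
user_message = prompt_template.format(**params)
print(user_message)
# → Solve the following equation step by step: 4 * (3 - 1)
```

Because the template and its parameters are logged as separate attributes, you can later filter or group runs in Weave by `params` or `prompt_template` without re-parsing the expanded message.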