Migrate from OpenAI

Introduction

To make migration from OpenAI to Anyscale as easy as possible, the two workflows are deliberately similar. However, a small number of things must change to complete the migration.

These notes cover migrating Python code that uses the official OpenAI Python library. Although untested in other languages, the same changes should work in any language that respects the standard environment variables or lets you set these parameters directly in code.

The four steps in order are:

  1. Setting the OPENAI_BASE_URL environment variable
  2. Setting the OPENAI_API_KEY environment variable
  3. Changing the model name in the code
  4. Adjusting any parameters to the API calls

With these four steps, migration should take only a few minutes.

Setting the OPENAI_BASE_URL environment variable

The OpenAI Python library supports an environment variable that specifies the base URL for API calls. Set it to point at Anyscale Endpoints. For example, in bash:

export OPENAI_BASE_URL='https://api.endpoints.anyscale.com/v1'

How you set environment variables varies based on the deployment environment.
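If environment variables are inconvenient in your deployment, you can also set the base URL directly in code. A minimal sketch, assuming the OpenAI Python library v1 or later, which accepts base_url as a constructor argument:

import openai

# Point the client at Anyscale Endpoints instead of the OpenAI API.
# The API key is still read from OPENAI_API_KEY (see the next step).
client = openai.OpenAI(base_url='https://api.endpoints.anyscale.com/v1')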

Setting the OPENAI_API_KEY environment variable

You also need a key generated from Anyscale Endpoints. Once you log in, you can create a key at https://app.endpoints.anyscale.com/credentials.

Once you have the key, add it to your environment. For example, in bash:

export OPENAI_API_KEY=esecret_...
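As with the base URL, you can pass the key directly to the client instead of relying on the environment variable. A minimal sketch, where ANYSCALE_API_KEY is a hypothetical variable name of your own choosing, not one the library reads automatically:

import os
import openai

# Pass both values explicitly rather than relying on OPENAI_BASE_URL
# and OPENAI_API_KEY. ANYSCALE_API_KEY is an example name for wherever
# your deployment stores the key.
client = openai.OpenAI(
  base_url = 'https://api.endpoints.anyscale.com/v1',
  api_key = os.environ['ANYSCALE_API_KEY'],
)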

Changing the model you are using

Use the chat completions API. In the code that calls it, specify a different model name. For example, in the code below, replace gpt-3.5-turbo with meta-llama/Llama-2-70b-chat-hf.

import openai

client = openai.OpenAI()
client.chat.completions.create(
  model = 'gpt-3.5-turbo', # Note: this is optional and may not be declared
  messages = message_history,
  stream = True
)

# Now change that to:

client = openai.OpenAI()
client.chat.completions.create(
  # Here we use the 70b model (recommended), but you can also use 7b and 13b
  model = 'meta-llama/Llama-2-70b-chat-hf', 
  messages = message_history,
  stream = True
)
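Because these examples pass stream=True, create returns an iterable of chunks rather than a single response. A complete sketch of consuming the stream, with message_history filled in as a placeholder conversation:

import openai

client = openai.OpenAI()  # Reads OPENAI_BASE_URL and OPENAI_API_KEY from the environment
message_history = [{'role': 'user', 'content': 'Say hello in one sentence.'}]

stream = client.chat.completions.create(
  model = 'meta-llama/Llama-2-70b-chat-hf',
  messages = message_history,
  stream = True
)

# Each chunk carries a delta with the next piece of the assistant's reply.
for chunk in stream:
  delta = chunk.choices[0].delta.content
  if delta:
    print(delta, end='', flush=True)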

Check any parameters to the create call that you might need to change

In the preceding code, you modified the create call. Anyscale supports most, but not all, of the optional parameters to the create call listed in the OpenAI API reference. The most commonly used parameters, such as stream, top_p, and temperature, are supported.

Here is a list of the parameters to the create call and their support status.

Parameter         | Endpoints support status
------------------|-------------------------
model             | Supported
messages          | Supported
temperature       | Supported
top_p             | Supported
stream            | Supported
max_tokens        | Supported
stop              | Supported
frequency_penalty | Supported
presence_penalty  | Supported
n                 | Supported
logprobs          | Supported*
top_logprobs      | Supported
response_format   | Supported
tools             | Supported
tool_choice       | Supported
logit_bias        | Supported
functions         | Deprecated by OpenAI
function_call     | Deprecated by OpenAI
user              | Not supported*

*: Not supported for meta-llama/Llama-2-70b-chat-hf and meta-llama/Llama-2-13b-chat-hf.
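For example, a create call that uses several of the supported parameters above needs no changes beyond the model name. A sketch:

import openai

client = openai.OpenAI()
response = client.chat.completions.create(
  model = 'meta-llama/Llama-2-70b-chat-hf',
  messages = [{'role': 'user', 'content': 'Name three colors.'}],
  temperature = 0.7,  # Supported
  top_p = 0.9,        # Supported
  max_tokens = 100,   # Supported
  stop = ['\n\n']     # Supported
)
print(response.choices[0].message.content)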

Anyscale also supports some additional parameters:

Parameter | Description
----------|------------
schema    | Define the JSON schema for JSON mode.
top_k     | The number of highest-probability vocabulary tokens to keep for top-k filtering.
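These extensions aren't arguments of the OpenAI Python client, so one way to send them is through extra_body, which the library (v1 and later) forwards verbatim in the request body. A sketch, with the exact parameter handling left as an assumption to verify against the Endpoints API reference:

import openai

client = openai.OpenAI()
response = client.chat.completions.create(
  model = 'meta-llama/Llama-2-70b-chat-hf',
  messages = [{'role': 'user', 'content': 'Pick a number from 1 to 10.'}],
  # top_k is an Anyscale extension, not an OpenAI client argument,
  # so it travels in extra_body and is merged into the request payload.
  extra_body = {'top_k': 40}
)
print(response.choices[0].message.content)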