Playground

Playground is a web-based interface that allows you to query deployed LLM services. You can use Playground to query models, view responses, and experiment with prompts and parameters. It is a great way to test how your models perform in different scenarios.

How to Use

  1. Service: Select an Anyscale Service or an Other service to query LLM responses from.
  2. Model: Choose a model to query on the selected service.
  3. System prompt: An optional prompt to provide context to the model.
  4. Parameters: Additional values to configure the model's behavior.
  5. API code: A code snippet to query the model programmatically.

Sections

Service

A Service is any service that has OpenAI-compatible APIs for Chat and Models.
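In this context, "OpenAI-compatible" means the service exposes the standard /v1/models and /v1/chat/completions endpoints. The sketch below shows both calls against a hypothetical service URL and API key.

```python
# A minimal sketch of the two OpenAI-style endpoints Playground relies on.
# The service URL and API key below are hypothetical placeholders.
import requests

BASE_URL = "https://my-llm-service.example.com/v1"
HEADERS = {"Authorization": "Bearer MY_API_KEY"}

# Models API: list the models the service serves.
models = requests.get(f"{BASE_URL}/models", headers=HEADERS).json()
model_id = models["data"][0]["id"]

# Chat API: send a chat completion request to one of those models.
chat = requests.post(
    f"{BASE_URL}/chat/completions",
    headers=HEADERS,
    json={
        "model": model_id,
        "messages": [{"role": "user", "content": "Hello!"}],
    },
).json()
print(chat["choices"][0]["message"]["content"])
```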

Anyscale Services

Any LLM service that is deployed using a template (for example, https://console.anyscale.com/v2/template-preview/endpoints_v2) and meets the requirements below appears for selection:

  1. The service has a Running status
  2. (Token) Authentication is disabled

Other

Any service that supports OpenAI's Chat and Models APIs can be added to Playground for querying.

To add a service:

  1. Press + Add
  2. Provide the service's URL and your API key for the service

To remove a service:

  1. Press the edit (pencil) icon
  2. Press Delete

Alternatively, you may clear cookies and site data, or log out, to remove all services.

info

The service's credentials are stored in your browser's storage and are not shared with Anyscale.

Parameters

Parameters are additional values that can be used to configure the model's behavior.

The following parameters are available on Playground:

To use default values, check the ☐ Use defaults box.
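For reference, each parameter set in Playground corresponds to a field on the chat completion request. The sketch below uses a few common OpenAI-compatible fields (temperature, top_p, max_tokens) as hypothetical examples; the exact parameters Playground exposes may differ.

```python
# Hypothetical sketch: parameters set in Playground become fields on the chat
# completion request body. temperature, top_p, and max_tokens are common
# OpenAI-compatible examples; the exact set Playground exposes may differ.
import requests

payload = {
    "model": "my-model",  # placeholder model ID
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,   # sampling randomness
    "top_p": 0.95,        # nucleus sampling cutoff
    "max_tokens": 256,    # cap on generated tokens
}
response = requests.post(
    "https://my-llm-service.example.com/v1/chat/completions",  # placeholder URL
    headers={"Authorization": "Bearer MY_API_KEY"},            # placeholder key
    json=payload,
)
print(response.json()["choices"][0]["message"]["content"])
```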

API Code

API code allows you to copy a code snippet to query the model programmatically.

The following languages/libraries are supported:

  • curl
  • Python
  • OpenAI Python SDK
  • Node.js
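For example, a snippet generated for the OpenAI Python SDK looks roughly like the sketch below; the service URL, API key, and model name are placeholders to replace with your own values.

```python
# A hypothetical example of the kind of snippet API code produces for the
# OpenAI Python SDK. Replace the URL, API key, and model name with your own.
from openai import OpenAI

client = OpenAI(
    base_url="https://my-llm-service.example.com/v1",  # your service's URL
    api_key="MY_API_KEY",                              # your service's API key
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Ray Serve?"},
    ],
)
print(response.choices[0].message.content)
```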

Other modes

info

These modes are available for Anyscale Services only.

JSON mode

JSON mode allows you to query models that were deployed with JSON mode enabled. By providing a JSON response schema along with your prompt, you will receive a JSON response from the model.


To use:

  1. Select a model that was deployed with JSON mode enabled
  2. Add the word json somewhere in the System prompt (for example, You are a helpful assistant outputting in JSON)
  3. Enable JSON mode in Response format
  4. Provide a JSON response schema in the input box
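Outside of Playground, an equivalent request looks roughly like the sketch below. It assumes the service follows the OpenAI response_format convention, and the nested schema field is an assumption about how the service accepts the response schema; check your service's API reference.

```python
# Hypothetical sketch of a JSON mode request. It assumes the service follows
# the OpenAI response_format convention; the nested "schema" field is an
# assumption about how the service accepts a response schema.
from openai import OpenAI

client = OpenAI(
    base_url="https://my-llm-service.example.com/v1",  # placeholder URL
    api_key="MY_API_KEY",                              # placeholder key
)

response = client.chat.completions.create(
    model="my-json-mode-model",  # placeholder: a model deployed with JSON mode enabled
    messages=[
        # The word "json" must appear in the system prompt.
        {"role": "system", "content": "You are a helpful assistant outputting in JSON."},
        {"role": "user", "content": "Describe today's weather."},
    ],
    response_format={
        "type": "json_object",
        "schema": {  # hypothetical: the JSON response schema you want back
            "type": "object",
            "properties": {
                "summary": {"type": "string"},
                "temperature_c": {"type": "number"},
            },
            "required": ["summary"],
        },
    },
)
print(response.choices[0].message.content)  # a JSON string matching the schema
```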

Vision-language models

Vision-language models allow you to attach an image to your prompt and receive a response that is based on both the image and the prompt.

To use:

  1. Select a deployed vision-language model
  2. Press the add image icon

warning

Currently, vision-language models do not support multi-turn conversations. You will not be able to further query the model based on its response.
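Programmatically, an equivalent single-turn request attaches the image with the OpenAI-style multimodal message format, as in the hypothetical sketch below.

```python
# Hypothetical sketch of a single-turn vision-language request using the
# OpenAI-style multimodal message format. All names and URLs are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://my-llm-service.example.com/v1",  # placeholder URL
    api_key="MY_API_KEY",                              # placeholder key
)

response = client.chat.completions.create(
    model="my-vision-language-model",  # placeholder: a deployed vision-language model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```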