
Integration Patterns

Data Scientists and Engineers have an array of third-party tools and libraries that help in their daily work. This guide covers integrations with Weights and Biases, MLflow (including MLflow hosted by Databricks), and Datadog.

Each of these tools has a different pattern for integrating with Anyscale and Ray. These details should assist in developing your own integrations.

Fundamentally there are two integration types:

  • Code-level integrations, in which you integrate with a particular tool by modifying your code.
  • Service-level integrations, in which you integrate with a particular tool by setting some configuration that will automatically log information.

Code-level integrations

Most of the code that data scientists and ML engineers use comes from third-party libraries and is imported and leveraged from within the Python application. Many integrations with third-party tools are no different. With an API token in hand, all it takes for most integrations is to (see the sketch after this list):

  • set your token in a runtime environment variable
  • include the third-party integration as a dependency
  • use logging statements or other integration hooks in your code
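
In the abstract, that pattern looks like the following minimal sketch. The package name example_tool and the variable EXAMPLE_TOOL_API_KEY are placeholders for whichever tool you are integrating, not real names; the Weights and Biases example below follows this same shape.

import os
import ray

# Install the dependency and pass the API token via the runtime environment.
# "example_tool" and "EXAMPLE_TOOL_API_KEY" are hypothetical placeholders.
ray.init("anyscale://integrations",
         runtime_env={"pip": ["example_tool"],
                      "env_vars": {"EXAMPLE_TOOL_API_KEY": os.environ["EXAMPLE_TOOL_API_KEY"]},
                      "working_dir": "."})

@ray.remote
def task():
    import example_tool                  # resolved on the cluster, installed via runtime_env
    example_tool.log({"loss": 0.1})      # logging call provided by the tool (placeholder)

ray.get(task.remote())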

Weights and Biases

Weights and Biases is a suite of tools for Machine Learning practitioners. Its integration is code-level: you use the logging calls and integrations provided by Weights and Biases directly in your code. You'll need an API key, and beyond that the integration is trivial:

  • Create an account
  • pip install wandb within your local environment
  • Include wandb in your Ray/Anyscale environment, either in the runtime environment or the cluster environment
  • Use wandb.log() to send logging events to Weights and Biases

Example: logging to Weights and Biases from a Ray task

This example passes the local environment's WANDB_API_KEY to Anyscale in the runtime environment declaration. It also uses the runtime environment to install wandb.

import ray
import os
import time

@ray.remote
def log_to_wandb():
    # Import wandb inside the task so it resolves on the cluster,
    # where it is installed via the runtime environment.
    import wandb

    wandb.init(project="my-test-project", entity="YOURUSERNAMEHERE")
    wandb.config = {
        "learning_rate": 0.001,
        "epochs": 100,
        "batch_size": 128
    }
    for i in range(100):
        wandb.log({"loss": i})
        time.sleep(1)

# Pass the local WANDB_API_KEY through to the cluster and install wandb there.
ray.init("anyscale://integrations",
         runtime_env={"pip": ["wandb"],
                      "env_vars": {"WANDB_API_KEY": os.environ["WANDB_API_KEY"]},
                      "working_dir": ".",
                      "excludes": ["tests", "yello*"]})

ray.get(log_to_wandb.remote())

Ray Tune integration

https://docs.wandb.ai/guides/integrations/other/ray-tune#wandb_mixin

There is also a simple integration built into Ray Tune. Simply decorate your training function with @wandb_mixin and then pass the keyword arguments for wandb.init() under the "wandb" key of the config you give to tune.run(). Here's an example of how to use it:

import ray
import os
from ray import tune
from ray.tune.integration.wandb import wandb_mixin
import wandb

@wandb_mixin
def train_fn(config):
    for i in range(10):
        loss = config["a"] + config["b"]
        wandb.log({"loss": loss})
        tune.report(loss=loss, done=True)

# Install wandb and Ray Tune on the cluster and pass the API key through.
ray.init("anyscale://integrations",
         runtime_env={"pip": ["wandb", "ray[tune]"],
                      "env_vars": {"WANDB_API_KEY": os.environ["WANDB_API_KEY"]},
                      "excludes": ["tests", "yello*"],
                      "working_dir": "."})

tune.run(
    train_fn,
    config={
        # define search space here
        "a": tune.choice([1, 2, 3]),
        "b": tune.choice([4, 5, 6]),
        # wandb configuration
        "wandb": {
            "project": "A_PROJECT_IN_WANDB",
            "entity": "YOURUSERNAME",
        }
    })

MLflow and Anyscale

MLflow provides management of Machine Learning models, experiment metrics, and logs. Including calls to MLflow in your code is similar to Weights and Biases.

In order for the MLflow client library to log metrics and register models with MLflow, provide one or more environment variables to Anyscale.

If you have created your own MLflow server in your cloud account, then you can configure your Anyscale applications to track to it. Here's a ray.init() call that initializes an environment for tracking to MLflow:

ray.init("anyscale://integrations",
runtime_env={"pip":["mlflow"],
"env_vars":{"MLFLOW_TRACKING_URI":'YOUR_MLFLOW_TRACKING_URI'},
"excludes":["tests", "yello*"],
"working_dir"="."})

MLflow Hosted by Databricks

If you have a Databricks account, then include a hostname, token, and experiment name from Databricks and MLflow will log to your Databricks instance. For example:

ray.init("anyscale://integrations",
project_dir=".",
runtime_env={"pip":["mlflow"],
"env_vars":{"MLFLOW_TRACKING_URI":'YOUR_MLFLOW_TRACKING_URI',
"DATABRICKS_HOST":"http://databricks....",
"DATABRICKS_TOKEN":"YOURDATABRICKSTOKEN",
"MLFLOW_EXPERIMENT_NAME":"/Users/xxx@yyy.com/first-experiment"},
"excludes":["tests", "yello*"],
"working_dir"="."})

Here's an example of a task that logs some parameters and metrics to Databricks's MLflow:

@ray.remote
def logging_task():
    # mlflow is installed on the cluster via the runtime environment above.
    import mlflow

    with mlflow.start_run():
        alpha = "ALPHA"
        l1_ratio = "L1"
        rmse = 0.211
        r2 = 0.122
        mae = 30
        mlflow.log_param("alpha", alpha)
        mlflow.log_param("l1_ratio", l1_ratio)
        mlflow.log_metric("rmse", rmse)
        mlflow.log_metric("r2", r2)
        mlflow.log_metric("mae", mae)
    return "Done"

print(ray.get(logging_task.remote()))
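
The same pattern covers model registration. Here's a minimal sketch, assuming scikit-learn has also been added to the runtime environment's pip dependencies and that the registry lives on the tracking server configured above; the model and the registered model name are placeholders.

@ray.remote
def register_model_task():
    import mlflow
    import mlflow.sklearn
    from sklearn.linear_model import LinearRegression

    # Train a trivial placeholder model.
    model = LinearRegression().fit([[0.0], [1.0]], [0.0, 1.0])

    with mlflow.start_run():
        # Log the model as an artifact and register it under a placeholder name.
        mlflow.sklearn.log_model(model, "model",
                                 registered_model_name="example-registered-model")

ray.get(register_model_task.remote())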

Service-level Integrations

Datadog

Datadog is a popular platform for general application monitoring and analytics.

To use Datadog, the image backing your cluster nodes must have the Datadog agent installed. Fortunately, Datadog provides a stable installation script: all you need to do is copy the recommended installation command into the "post-build commands" of a Cluster Environment, and then use that environment when launching clusters.

Integration using an Agent

Copy this package name into the "Debian packages" section of your Cluster Environment (the installation script below is downloaded with curl):

curl

And this into your "post-build commands," using your API key.

DD_AGENT_MAJOR_VERSION=7 DD_INSTALL_ONLY=true DD_API_KEY={YOUR_API_KEY_HERE} DD_SITE="datadoghq.com" bash -c "$(curl -L https://s3.amazonaws.com/dd-agent/scripts/install_script.sh)"
echo "sudo service datadog-agent start" >> ~/.bashrc

The first line ensures that the Datadog Agent is available on each node that Ray provisions. The second line appends a command that starts the agent to the .bashrc file, which runs when the cluster launches.

Once the agent is installed and running, and depending on your Datadog plan, you'll see system metrics and logs flowing to Datadog from your cluster nodes while they are running.
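
Beyond the metrics the agent collects automatically, you can also emit custom metrics from your Ray tasks through the agent's local DogStatsD endpoint. The sketch below is a minimal example, assuming the agent is running on each node with the default DogStatsD port (8125) and that the datadog package has been added to your runtime environment's pip dependencies; the metric name is a placeholder.

import ray

@ray.remote
def emit_custom_metric():
    # The datadog package must be installed on the cluster (for example via runtime_env).
    from datadog import initialize, statsd

    # Talk to the Datadog agent running locally on the node (default DogStatsD port).
    initialize(statsd_host="localhost", statsd_port=8125)

    # "ray.example.custom_metric" is a placeholder metric name.
    statsd.gauge("ray.example.custom_metric", 42, tags=["source:ray-task"])

ray.get(emit_custom_metric.remote())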