CI/CD with Anyscale Jobs and Services

Continuous integration (CI) and continuous deployment (CD) for machine learning systems enables the automatic execution of workloads for developing, deploying, monitoring, and maintaining your applications. An automated pipeline may trigger a cascade of workflows in response to a variety of events such as fresh data, performance regressions, or code updates.

This guide outlines the steps for integrating your existing CI/CD pipeline with Anyscale Jobs and Services.

Continuous integration with Anyscale Jobs

Anyscale Jobs automate machine learning workloads, including tasks like data processing, batch embedding generation, or model fine-tuning. Submitting Jobs provides automatic failure handling, email alerts, and log management.

To integrate Anyscale Jobs into your CI pipeline:

  1. Authenticate with Anyscale and your chosen cloud storage provider.
  2. Include the necessary CLI or Python SDK commands within the action steps of the pipeline.
  3. Store the outputs, like processed data or models, in cloud storage so that subsequent Jobs can then retrieve and process these artifacts.
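
To make the steps above concrete, a minimal Job configuration file might look like the following sketch. The field names reflect one version of the Anyscale Job config schema and may differ across releases; the entrypoint script and bucket path are hypothetical:

```yaml
# deploy/jobs/workloads.yaml (hypothetical sketch)
name: data-processing-job
# Command to run on the cluster; writes outputs to cloud storage
# so downstream Jobs can pick them up.
entrypoint: python process_data.py --output s3://example-bucket/processed/
working_dir: .
# Retry the Job automatically on failure.
max_retries: 2
```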
tip

When using an orchestration framework that employs Directed Acyclic Graphs (DAGs), like Airflow or Prefect, it may be helpful to use the --wait flag with Anyscale Job submissions to block the CLI command until the Job succeeds. Consider the implications of blocking a process in terms of pipeline efficiency and resource usage.

Continuous deployment with Anyscale Services

Anyscale Services allow you to deploy and monitor Ray Serve applications in production. They provide scalability, fault tolerance, and high availability with zero-downtime upgrades, even under heavy load.

The process is similar to Anyscale Jobs. To deploy Anyscale Services automatically:

  1. Authenticate with Anyscale and your chosen cloud storage provider.
  2. Connect to cloud storage to retrieve artifacts and store outputs.
  3. Include the necessary CLI or Python SDK commands within the action steps of the pipeline. During rollouts, you can configure the way traffic shifts from the Service to the upgraded version.
  4. Monitor the Service through the Service detail page, Ray Dashboard, or Grafana.
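
As a sketch, a minimal Service configuration might look like the following. The field names follow one version of the Anyscale Service config schema and may differ across releases; the module and entrypoint names are hypothetical:

```yaml
# deploy/services/serve_model.yaml (hypothetical sketch)
name: model-serving
applications:
  # Import path to the Ray Serve application to deploy.
  - import_path: serve_model:entrypoint
```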

Integrating with CI/CD tools

The Anyscale CLI and Python SDK serve as integration points for Jobs and Services with your orchestration tools, such as:

  • GitHub Actions: Use Anyscale CLI commands within action steps triggered by repository events, like pushes to main. These commands execute in GitHub-hosted or self-hosted runners.
  • Prefect: Use the Anyscale Prefect integration to run Prefect workloads on Anyscale compute infrastructure.
  • Airflow: Incorporate Anyscale CLI commands or SDK calls within tasks or DAGs.

Example: GitHub Actions

tip

View the comprehensive MadeWithML example for a complete tutorial on CI/CD with Anyscale Jobs and Services.

GitHub Actions allow you to define CI/CD workflows triggered by specific events like pull requests or pushes. You define these workflows in the .github/workflows directory in your repository.

Follow these steps to trigger an Anyscale workload:

  1. Create a workflow file in your repository at .github/workflows/NAME.yaml.
  2. Set up Anyscale authentication and your cloud service provider credentials so the workflow can access the right resources and store results.
  3. Set up dependencies to use during execution.
  4. Use Anyscale CLI commands in the steps of your workflow to submit Jobs or deploy Services:
    anyscale jobs submit deploy/jobs/workloads.yaml --wait
    anyscale service rollout --service-config-file deploy/services/serve_model.yaml
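
Putting these steps together, a minimal workflow file might look like the following sketch. The trigger, secret name, and config path are assumptions for illustration:

```yaml
# .github/workflows/anyscale.yaml (hypothetical sketch)
name: anyscale-workloads
on:
  push:
    branches: [main]
jobs:
  workloads:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Submit Anyscale Job
        run: |
          # Authenticate with Anyscale using a repository secret.
          export ANYSCALE_CLI_TOKEN=${{ secrets.ANYSCALE_CLI_TOKEN }}
          pip install anyscale
          # Block until the Job succeeds or fails.
          anyscale jobs submit deploy/jobs/workloads.yaml --wait
```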

Submitting an Anyscale Job

The following is a snippet from a sample workflow demonstrating how to submit an Anyscale Job and connect to an AWS S3 bucket. See the complete workloads.yaml on GitHub.

    # Run workloads
    - name: Workloads
      run: |
        export ANYSCALE_CLI_TOKEN=${{ secrets.ANYSCALE_CLI_TOKEN }}
        anyscale jobs submit deploy/jobs/workloads.yaml --wait

    # Read results from S3
    - name: Read results from S3
      run: |
        mkdir results
        aws s3 cp s3://madewithml/${{ github.actor }}/results/ results/ --recursive
        python .github/workflows/json_to_md.py results/training_results.json results/training_results.md
        python .github/workflows/json_to_md.py results/evaluation_results.json results/evaluation_results.md
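
The json_to_md.py helper above is not shown in this guide. A minimal version, assuming the results files are flat JSON objects of metric names and values, could look like this:

```python
import json


def json_to_markdown(data: dict) -> str:
    """Render a flat dict of metrics as a two-column Markdown table."""
    lines = ["| Metric | Value |", "| --- | --- |"]
    for key, value in data.items():
        lines.append(f"| {key} | {value} |")
    return "\n".join(lines)


def convert(input_path: str, output_path: str) -> None:
    """Read a JSON results file and write its Markdown rendering."""
    with open(input_path) as f:
        data = json.load(f)
    with open(output_path, "w") as f:
        f.write(json_to_markdown(data))
```

A thin command-line wrapper around convert() would accept the input and output paths as arguments, as in the workflow step above.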

Deploying an Anyscale Service

The following is a snippet from a sample workflow demonstrating how to roll out an Anyscale Service. See the complete serve.yaml on GitHub.

    # Serve model
    - name: Serve model
      run: |
        export ANYSCALE_CLI_TOKEN=${{ secrets.ANYSCALE_CLI_TOKEN }}
        anyscale service rollout --service-config-file deploy/services/serve_model.yaml