
Job API Reference

Customer-hosted cloud features

note

Some features are only available on customer-hosted clouds. Reach out to support@anyscale.com for info.

Job CLI

anyscale job submit

Usage

anyscale job submit [OPTIONS] [ENTRYPOINT]...

Submit a job.

The job config can be specified in one of the following ways:

  • Job config file can be specified as a single positional argument. E.g. anyscale job submit config.yaml.

  • Job config can also be specified with command-line arguments. In this case, the entrypoint should be specified as the positional arguments starting with --. Other arguments can be specified with command-line flags. E.g.

    • anyscale job submit -- python main.py: submit a job with the entrypoint python main.py.

    • anyscale job submit --name my-job -- python main.py: submit a job with the name my-job and the entrypoint python main.py.

  • [Experimental] If you want to specify a config file and override some arguments with command-line flags, use the --config-file flag (a minimal example config file is sketched after this list). E.g.

    • anyscale job submit --config-file config.yaml: submit a job with the config in config.yaml.

    • anyscale job submit --config-file config.yaml -- python main.py: submit a job with the config in config.yaml and override the entrypoint with python main.py.
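
For reference, a minimal job config file might look like the following sketch. The field names mirror the JobConfig model documented below; the values shown here are illustrative.

name: my-job
entrypoint: python main.py
working_dir: .
max_retries: 3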

Either containerfile or image-uri should be used; specifying both will result in an error.

By default, this command submits the job asynchronously and exits. To wait for the job to complete, use the --wait flag.

Options

  • -n/--name: Name of the job.
  • -w/--wait: Block this CLI command and print logs until the job finishes.
  • --config-file/-f: Path to a YAML config file to use for this job. Command-line flags will overwrite values read from the file.
  • --compute-config: Named compute configuration to use for the job. This defaults to the compute configuration of the workspace.
  • --image-uri: Container image to use for this job. When running in a workspace, this defaults to the image of the workspace.
  • --registry-login-secret: Name or identifier of the secret containing credentials to authenticate to the docker registry hosting the image. This can only be used when 'image_uri' is specified and the image is not hosted on Anyscale.
  • --containerfile: Path to a containerfile to build the image to use for the job.
  • --env: Environment variables to set for the job. The format is 'key=value'. This argument can be specified multiple times. When the same key is also specified in the config file, the value from the command-line flag will overwrite the value from the config file.
  • --working-dir: Path to a local directory or a remote URI to a .zip file (S3, GS, HTTP) that will be the working directory for the job. The files in the directory will be automatically uploaded to cloud storage. When running in a workspace, this defaults to the current working directory.
  • -e/--exclude: File pattern to exclude when uploading local directories. This argument can be specified multiple times and the patterns will be appended to the 'excludes' list in the config file (if any).
  • -r/--requirements: Path to a requirements.txt file containing dependencies for the job. These will be installed on top of the image. When running in a workspace, this defaults to the workspace dependencies.
  • --py-module: Python modules to be available for import in the Ray workers. Each entry must be a path to a local directory.
  • --cloud: The Anyscale Cloud to run this workload on. If not provided, the organization default will be used (or, if running in a workspace, the cloud of the workspace).
  • --project: Named project to use for the job. If not provided, the default project for the cloud will be used (or, if running in a workspace, the project of the workspace).
  • --ray-version: The Ray version (X.Y.Z) of the image specified by --image-uri. This is only used when --image-uri is provided. If not provided, the latest Ray version will be used.
  • --max-retries: Maximum number of retries to attempt before failing the entire job.
  • --timeout-s/--timeout/-t: The timeout in seconds for each job run. Set to None for no limit.

Examples

$ anyscale job submit --name my-job --wait -- python main.py
(anyscale +1.0s) Submitting job with config JobConfig(name='my-job', image_uri=None, compute_config=None, env_vars=None, py_modules=None, cloud=None, project=None, ray_version=None, job_queue_config=None).
(anyscale +1.7s) Uploading local dir '.' to cloud storage.
(anyscale +2.6s) Including workspace-managed pip dependencies.
(anyscale +3.2s) Job 'my-job' submitted, ID: 'prodjob_6ntzknwk1i9b1uw1zk1gp9dbhe'.
(anyscale +3.2s) View the job in the UI: https://console.anyscale.com/jobs/prodjob_6ntzknwk1i9b1uw1zk1gp9dbhe
(anyscale +3.2s) Waiting for the job to run. Interrupting this command will not cancel the job.
(anyscale +3.5s) Waiting for job 'prodjob_6ntzknwk1i9b1uw1zk1gp9dbhe' to reach target state SUCCEEDED, currently in state: STARTING
(anyscale +1m19.7s) Job 'prodjob_6ntzknwk1i9b1uw1zk1gp9dbhe' transitioned from STARTING to SUCCEEDED
(anyscale +1m19.7s) Job 'prodjob_6ntzknwk1i9b1uw1zk1gp9dbhe' reached target state, exiting
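
The flags documented above can also be combined with a config file. For example, the following illustrative command overrides an environment variable and the entrypoint from config.yaml (output omitted; LOG_LEVEL is a hypothetical variable):

$ anyscale job submit --config-file config.yaml --env LOG_LEVEL=debug -- python main.py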

anyscale job status

Usage

anyscale job status [OPTIONS]

Query the status of a job.

To specify the job by name, use the --name flag. To specify the job by id, use the --id flag. Either name or id should be used; specifying both will result in an error.

If the job is specified by name and there are multiple jobs with the specified name, the status of the most recently created job will be returned.

Options

  • --id/--job-id: Unique ID of the job.
  • --name/-n: Name of the job.
  • --cloud: The Anyscale Cloud to run this workload on. If not provided, the organization default will be used (or, if running in a workspace, the cloud of the workspace).
  • --project: Named project to use for the job. If not provided, the default project for the cloud will be used (or, if running in a workspace, the project of the workspace).
  • --json/-j: Output the status in a structured JSON format.
  • --verbose/-v: Include verbose details in the status.

Examples

$ anyscale job status -n my-job
id: prodjob_6ntzknwk1i9b1uw1zk1gp9dbhe
name: my-job
state: STARTING
runs:
- name: raysubmit_ynxBVGT1SmzndiXL
state: SUCCEEDED

anyscale job terminate

Usage

anyscale job terminate [OPTIONS]

Terminate a job.

To specify the job by name, use the --name flag. To specify the job by id, use the --id flag. Either name or id should be used; specifying both will result in an error.

If the job is specified by name and there are multiple jobs with the specified name, the most recently created job will be terminated.

Options

  • --id/--job-id: Unique ID of the job.
  • --name/-n: Name of the job.
  • --cloud: The Anyscale Cloud to run this workload on. If not provided, the organization default will be used (or, if running in a workspace, the cloud of the workspace).
  • --project: Named project to use for the job. If not provided, the default project for the cloud will be used (or, if running in a workspace, the project of the workspace).

Examples

$ anyscale job terminate -n my-job
(anyscale +5.4s) Marked job 'my-job' for termination
(anyscale +5.4s) Query the status of the job with `anyscale job status --name my-job`.

anyscale job archive

Usage

anyscale job archive [OPTIONS]

Archive a job.

To specify the job by name, use the --name flag. To specify the job by id, use the --id flag. Either name or id should be used; specifying both will result in an error.

If the job is specified by name and there are multiple jobs with the specified name, the most recently created job will be archived.

Options

  • --id/--job-id: Unique ID of the job.
  • --name/-n: Name of the job.
  • --cloud: The Anyscale Cloud to run this workload on. If not provided, the organization default will be used (or, if running in a workspace, the cloud of the workspace).
  • --project: Named project to use for the job. If not provided, the default project for the cloud will be used (or, if running in a workspace, the project of the workspace).

Examples

$ anyscale job archive -n my-job
(anyscale +8.5s) Job prodjob_vzq2pvkzyz3c1jw55kl76h4dk1 is successfully archived.

anyscale job logs

Usage

anyscale job logs [OPTIONS]

Print the logs of a job.

By default, logs are printed from the latest job attempt.

Options

  • --id/--job-id: Unique ID of the job.
  • --name/-n: Name of the job.
  • --run: Name of the job run.
  • --cloud: The Anyscale Cloud to run this workload on. If not provided, the organization default will be used (or, if running in a workspace, the cloud of the workspace).
  • --project: Named project to use for the job. If not provided, the default project for the cloud will be used (or, if running in a workspace, the project of the workspace).
  • --head: Used with --max-lines to get max-lines lines from the head of the log.
  • --tail: Used with --max-lines to get max-lines lines from the tail of the log.
  • --max-lines: Used with --head or --tail to limit the number of lines output.
  • --follow/-f: Whether to follow the log.
  • --all-attempts: DEPRECATED: Listing logs from all attempts is no longer supported; instead, fetch logs for a specific run with --run.

Examples

$ anyscale job logs -n my-job
2024-08-23 20:31:10,913 INFO job_manager.py:531 -- Runtime env is setting up.
hello world
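
To limit output to the end of the log and keep streaming new lines, the --tail, --max-lines, and --follow flags documented above can be combined; an illustrative command (output omitted):

$ anyscale job logs -n my-job --tail --max-lines 100 --follow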

anyscale job wait

Usage

anyscale job wait [OPTIONS]

Wait for a job to enter a specific state (default: SUCCEEDED).

To specify the job by name, use the --name flag. To specify the job by id, use the --id flag.

If the job reaches the target state, the command will exit successfully.

If the job reaches a terminal state other than the target state, the command will exit with an error.

If the command reaches the timeout, the command will exit with an error but job execution will continue.

Options

  • --id/--job-id: Unique ID of the job.
  • --name/-n: Name of the job.
  • --cloud: The Anyscale Cloud to run this workload on. If not provided, the organization default will be used (or, if running in a workspace, the cloud of the workspace).
  • --project: Named project to use for the job. If not provided, the default project for the cloud will be used (or, if running in a workspace, the project of the workspace).
  • --state/-s: The state to wait for this job to enter.
  • --timeout-s/--timeout/-t: The timeout in seconds after which this command will exit.

Examples

$ anyscale job wait -n my-job
(anyscale +5.7s) Waiting for job 'my-job' to reach target state SUCCEEDED, currently in state: STARTING
(anyscale +1m34.2s) Job 'my-job' transitioned from STARTING to SUCCEEDED
(anyscale +1m34.2s) Job 'my-job' reached target state, exiting
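
To wait for a state other than the default, or to bound the wait time, pass --state and --timeout-s; an illustrative command (output omitted):

$ anyscale job wait -n my-job --state RUNNING --timeout-s 600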

anyscale job list

Usage

anyscale job list [OPTIONS]

Display information about existing jobs.

Options

  • --name/-n: Filter by job name.
  • --id/--job-id: Filter by job id.
  • --project-id: Filter by project id.
  • --include-all-users: Include jobs not created by current user.
  • --include-archived: List archived jobs as well as unarchived jobs. If not provided, defaults to listing only unarchived jobs.
  • --max-items: Max items to show in list.

Examples

$ anyscale job list -n my-job
View your Jobs in the UI at https://console.anyscale.com/jobs
JOBS:
NAME ID COST PROJECT NAME CLUSTER NAME CURRENT STATE CREATOR ENTRYPOINT
my-job prodjob_s9x4uzc5jnkt5z53g4tujb3y2e 0 default cluster_for_prodjob_s9x4uzc5jnkt5z53g4tujb3y2e SUCCESS doc@anyscale.com python main.py

Job SDK

anyscale.job.submit

Submit a job.

Returns the id of the submitted job.

Arguments

  • config (JobConfig): The config options defining the job.

Returns: str

Examples

import anyscale
from anyscale.job.models import JobConfig

anyscale.job.submit(
    JobConfig(
        name="my-job",
        entrypoint="python main.py",
        working_dir=".",
    ),
)
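
Because submit returns the job ID, a common pattern is to submit and then block on completion with anyscale.job.wait. A minimal sketch using only the calls documented on this page:

import anyscale
from anyscale.job.models import JobConfig

# Submit the job and capture the returned job ID.
job_id: str = anyscale.job.submit(
    JobConfig(
        name="my-job",
        entrypoint="python main.py",
        working_dir=".",
    ),
)

# Block until the job reaches the default target state (SUCCEEDED).
anyscale.job.wait(id=job_id, timeout_s=1800)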

anyscale.job.status

Get the status of a job.

Arguments

  • name (str | None) = None: Name of the job.
  • id (str | None) = None: Unique ID of the job.
  • cloud (str | None) = None: The Anyscale Cloud to run this workload on. If not provided, the organization default will be used (or, if running in a workspace, the cloud of the workspace).
  • project (str | None) = None: Named project to use for the job. If not provided, the default project for the cloud will be used (or, if running in a workspace, the project of the workspace).

Returns: JobStatus

Examples

import anyscale
from anyscale.job.models import JobStatus

status: JobStatus = anyscale.job.status(name="my-job")

anyscale.job.terminate

Terminate a job.

This command is asynchronous, so it always returns immediately.

Returns the id of the terminated job.

Arguments

  • name (str | None) = None: Name of the job.
  • id (str | None) = None: Unique ID of the job.
  • cloud (str | None) = None: The Anyscale Cloud to run this workload on. If not provided, the organization default will be used (or, if running in a workspace, the cloud of the workspace).
  • project (str | None) = None: Named project to use for the job. If not provided, the default project for the cloud will be used (or, if running in a workspace, the project of the workspace).

Returns: str

Examples

import anyscale

anyscale.job.terminate(name="my-job")
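
Because termination is asynchronous, a follow-up status call can be used to confirm the job is winding down; a minimal sketch:

import anyscale

anyscale.job.terminate(name="my-job")

# Termination is asynchronous; check progress with a follow-up status call.
print(anyscale.job.status(name="my-job").state)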

anyscale.job.archive

Archive a job.

This command is asynchronous, so it always returns immediately.

Returns the id of the archived job.

Arguments

  • name (str | None) = None: Name of the job.
  • id (str | None) = None: Unique ID of the job.
  • cloud (str | None) = None: The Anyscale Cloud to run this workload on. If not provided, the organization default will be used (or, if running in a workspace, the cloud of the workspace).
  • project (str | None) = None: Named project to use for the job. If not provided, the default project for the cloud will be used (or, if running in a workspace, the project of the workspace).

Returns: str

Examples

import anyscale

anyscale.job.archive(name="my-job")

anyscale.job.get_logs

Query the logs for a job run.

Arguments

  • id (str | None) = None: Unique ID of the job.
  • name (str | None) = None: Name of the job.
  • cloud (str | None) = None: The Anyscale Cloud to run this workload on. If not provided, the organization default will be used (or, if running in a workspace, the cloud of the workspace).
  • project (str | None) = None: Named project to use for the job. If not provided, the default project for the cloud will be used (or, if running in a workspace, the project of the workspace).
  • run (str | None) = None: The name of the run to query. Names can be found in the JobStatus. If not provided, the last job run will be used.
  • mode (str | JobLogMode) = TAIL: The mode of log fetching to be used. Supported modes can be found in JobLogMode. If not provided, JobLogMode.TAIL will be used.
  • max_lines (int | None) = None: The number of log lines to be fetched. If not provided, the complete log will be fetched.

Returns: str

Examples

import anyscale

anyscale.job.get_logs(name="my-job", run="job-run-name")
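
The mode and max_lines arguments can be combined to fetch only part of the log. For example, a sketch that reads the first 100 lines of the most recent run (assuming JobLogMode is importable from anyscale.job.models like the other models on this page):

import anyscale
from anyscale.job.models import JobLogMode

# Fetch the first 100 lines of the most recent run's log.
head_of_log: str = anyscale.job.get_logs(
    name="my-job",
    mode=JobLogMode.HEAD,
    max_lines=100,
)
print(head_of_log)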

anyscale.job.wait

"Wait for a job to enter a specific state.

Arguments

  • name (str | None) = None: Name of the job.
  • id (str | None) = None: Unique ID of the job.
  • cloud (str | None) = None: The Anyscale Cloud to run this workload on. If not provided, the organization default will be used (or, if running in a workspace, the cloud of the workspace).
  • project (str | None) = None: Named project to use for the job. If not provided, the default project for the cloud will be used (or, if running in a workspace, the project of the workspace).
  • state (JobState | str) = SUCCEEDED: Target state of the job.
  • timeout_s (float) = 1800: Number of seconds to wait before timing out. This timeout does not affect job execution.

Examples

import anyscale

anyscale.job.wait(name="my-job", timeout_s=180)
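
To wait for a state other than the default, pass the target state. A sketch that blocks until the job starts running (assuming JobState is importable from anyscale.job.models like the other models on this page):

import anyscale
from anyscale.job.models import JobState

# Block until the job reaches RUNNING, or time out after 5 minutes.
anyscale.job.wait(name="my-job", state=JobState.RUNNING, timeout_s=300)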

Job Models

JobQueueSpec

Options defining a job queue.

When the first job with a given job queue spec is submitted, the job queue will be created. Subsequent jobs with the same job queue spec will reuse the queue instead of creating another one.

Jobs can also target an existing queue using the name parameter.

Fields

  • idle_timeout_s (int): Time in seconds that the job queue cluster is kept running while no jobs are running.
  • name (str): Name of the job queue that can be used to target it when submitting future jobs. The name of a job queue must be unique within a project.
  • execution_mode (JobQueueExecutionMode): Execution mode of the jobs submitted into the queue (one of: FIFO, LIFO, PRIORITY).
  • compute_config (str | None): The name of an existing compute config that will be used to create the job queue cluster. If not specified, the compute config of the associated job will be used.
  • max_concurrency (int): Max number of jobs that can run concurrently. Defaults to 1, meaning only one job can run at a given time.

Python Methods

def to_dict(self) -> Dict[str, Any]
"""Return a dictionary representation of the model."""

Examples

job_queue_spec:
  # Unique name that can be used to target this queue by other jobs.
  name: my-job-queue
  execution_mode: FIFO
  # Name of a compute config that will be used to create a cluster to execute jobs in this queue.
  # Must match the compute config of the job if specified.
  compute_config: my-compute-config:1
  max_concurrency: 5
  idle_timeout_s: 3600
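
The same queue can be defined from Python by nesting a JobQueueSpec inside a JobConfig's job_queue_config. A sketch using the fields documented above (assuming these models are importable from anyscale.job.models like the others on this page):

import anyscale
from anyscale.job.models import (
    JobConfig,
    JobQueueConfig,
    JobQueueExecutionMode,
    JobQueueSpec,
)

anyscale.job.submit(
    JobConfig(
        name="my-job",
        entrypoint="python main.py",
        job_queue_config=JobQueueConfig(
            # Create the queue on first submission; later jobs with the same spec reuse it.
            job_queue_spec=JobQueueSpec(
                name="my-job-queue",
                execution_mode=JobQueueExecutionMode.FIFO,
                max_concurrency=5,
                idle_timeout_s=3600,
            ),
        ),
    ),
)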

JobQueueConfig

Configuration options for a job related to using a job queue for scheduling and execution.

Fields

  • priority (int | None): Job's relative priority (only relevant for Job Queues of type PRIORITY). Valid values range from 0 (highest) to +inf (lowest). Default value is None
  • target_job_queue_name (str | None): The name of an existing job queue to schedule this job in. If this is provided, job_queue_spec cannot be.
  • job_queue_spec (JobQueueSpec | None): Configuration options defining a job queue to be created for the job if needed. If this is provided, target_job_queue_name cannot be.

Python Methods

def __init__(self, **fields) -> JobQueueConfig
"""Construct a model with the provided field values set."""

def options(self, **fields) -> JobQueueConfig
"""Return a copy of the model with the provided field values overwritten."""

def to_dict(self) -> Dict[str, Any]
"""Return a dictionary representation of the model."""

Examples

# An example configuration that creates a job queue if one does not exist with the provided options.
job_queue_config:
  # Priority of the job (only relevant if the execution_mode is "PRIORITY").
  priority: 100
  # Specification of the target job queue (will be created if it does not exist).
  job_queue_spec:
    name: my-job-queue
    compute_config: my-compute-config:1
    idle_timeout_s: 3600

# An example config that targets an existing job queue by name.
job_queue_config:
  # Priority of the job (only relevant if the execution_mode is "PRIORITY").
  priority: 100
  # Name of the job queue this job should be added to (specified in `JobQueueSpec.name`
  # when the queue was created).
  target_job_queue_name: my-new-queue
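
Targeting an existing queue from Python looks similar; a sketch assuming a queue named my-job-queue was already created with a JobQueueSpec (same import assumption as the sketch above):

import anyscale
from anyscale.job.models import JobConfig, JobQueueConfig

anyscale.job.submit(
    JobConfig(
        name="my-job",
        entrypoint="python main.py",
        job_queue_config=JobQueueConfig(
            # Only relevant if the queue's execution_mode is PRIORITY.
            priority=100,
            target_job_queue_name="my-job-queue",
        ),
    ),
)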

JobQueueExecutionMode

An enumeration.

Values

  • FIFO: Executes jobs in chronological order ('first in, first out')
  • LIFO: Executes jobs in reversed chronological order ('last in, first out')
  • PRIORITY: Executes jobs in ascending order of their priority values, with 0 being the highest priority

JobConfig

Configuration options for a job.

Fields

  • name (str | None): Name of the job. Multiple jobs can be submitted with the same name.
  • image_uri (str | None): URI of an existing image. Exclusive with containerfile.
  • containerfile (str | None): The file path to a containerfile that will be built into an image before running the workload. Exclusive with image_uri.
  • compute_config (ComputeConfig | Dict | str | None): The name of an existing registered compute config or an inlined ComputeConfig object.
  • working_dir (str | None): Directory that will be used as the working directory for the application. If a local directory is provided, it will be uploaded to cloud storage automatically. When running inside a workspace, this defaults to the current working directory ('.').
  • excludes (List[str] | None): A list of file path globs that will be excluded when uploading local files for working_dir.
  • requirements (str | List[str] | None): A list of pip requirements or a path to a requirements.txt file for the workload. When running inside a workspace, this defaults to the workspace-tracked requirements.
  • env_vars (Dict[str, str] | None): A dictionary of environment variables that will be set for the workload.
  • py_modules (List[str] | None): A list of local directories or remote URIs that will be uploaded and added to the Python path.
  • cloud (str | None): The Anyscale Cloud to run this workload on. If not provided, the organization default will be used (or, if running in a workspace, the cloud of the workspace).
  • project (str | None): The project for the workload. If not provided, the default project for the cloud will be used (or, if running in a workspace, the project of the workspace).
  • registry_login_secret (str | None): A name or identifier of the secret containing credentials to authenticate to the docker registry hosting the image. This can only be used when 'image_uri' is specified and the image is not hosted on Anyscale.
  • ray_version (str | None): The Ray version (X.Y.Z) of the image specified by either an image URI or a containerfile. If not provided, the latest Ray version will be used.
  • entrypoint (str): Command that will be run to execute the job, e.g., python main.py.
  • max_retries (int): Maximum number of times the job will be retried before being marked failed. Defaults to 1.
  • job_queue_config (JobQueueConfig | None): The job's configuration related to scheduling and execution using job queues.
  • timeout_s (int | None): The timeout in seconds for each job run. Set to None for no limit.

Python Methods

def __init__(self, **fields) -> JobConfig
"""Construct a model with the provided field values set."""

def options(self, **fields) -> JobConfig
"""Return a copy of the model with the provided field values overwritten."""

def to_dict(self) -> Dict[str, Any]
"""Return a dictionary representation of the model."""

Examples

name: my-job
entrypoint: python main.py
image_uri: anyscale/image/my-image:1 # (Optional) Exclusive with `containerfile`.
containerfile: /path/to/Dockerfile # (Optional) Exclusive with `image_uri`.
compute_config: my-compute-config:1 # (Optional) An inline dictionary can also be provided.
working_dir: /path/to/working_dir # (Optional) Defaults to `.`.
excludes: # (Optional) List of files to exclude from being packaged up for the job.
  - .git
  - .env
  - .DS_Store
  - __pycache__
requirements: # (Optional) List of requirements to install. Can also be a path to a requirements.txt.
  - emoji==1.2.0
  - numpy==1.19.5
env_vars: # (Optional) Dictionary of environment variables to set in the job.
  MY_ENV_VAR: my_value
  ANOTHER_ENV_VAR: another_value
py_modules: # (Optional) A list of local directories or remote URIs that will be added to the Python path.
  - /path/to/my_module
  - s3://my_bucket/my_module
cloud: anyscale-prod # (Optional) The name of the Anyscale Cloud.
project: my-project # (Optional) The name of the Anyscale Project.
max_retries: 3 # (Optional) Maximum number of times the job will be retried before being marked failed. Defaults to `1`.
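
Because options() returns a copy of the model with selected fields overwritten, a base JobConfig can be reused with per-run overrides; a short sketch (the compute config name is illustrative):

from anyscale.job.models import JobConfig

# Shared base configuration.
base_config = JobConfig(
    name="my-job",
    entrypoint="python main.py",
    working_dir=".",
)

# Copy of the base config with a different compute config and more retries.
large_run = base_config.options(
    compute_config="my-compute-config:1",
    max_retries=3,
)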

JobState

Current state of a job.

Values

  • STARTING: The job is being started and is not yet running.
  • RUNNING: The job is running. A job will have state RUNNING if a job run fails and there are remaining retries.
  • FAILED: The job did not finish running or the entrypoint returned an exit code other than 0 after retrying up to max_retries times.
  • SUCCEEDED: The job finished running and its entrypoint returned exit code 0.
  • UNKNOWN: The CLI/SDK received an unexpected state from the API server. In most cases, this means you need to update the CLI.

JobStatus

Current status of a job.

Fields

  • id (str): Unique ID of the job (generated when the job is first submitted).
  • name (str): Name of the job. Multiple jobs can be submitted with the same name.
  • state (str | JobState): Current state of the job.
  • config (JobConfig): Configuration of the job.
  • runs (List[JobRunStatus]): List of job run states.

Python Methods

def to_dict(self) -> Dict[str, Any]
"""Return a dictionary representation of the model."""

Examples

import anyscale
from anyscale.job.models import JobStatus
status: JobStatus = anyscale.job.status(name="my-job")
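
The returned status exposes the fields documented above, so the overall state and the individual runs can be inspected programmatically; a minimal sketch:

import anyscale
from anyscale.job.models import JobStatus

status: JobStatus = anyscale.job.status(name="my-job")

# Print the overall job state and the state of each run.
print(status.state)
for run in status.runs:
    print(run.name, run.state)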

JobRunStatus

Current status of an individual job run.

Fields

  • name (str): Name of the job run.
  • state (str | JobRunState): Current state of the job run.

Python Methods

def to_dict(self) -> Dict[str, Any]
"""Return a dictionary representation of the model."""

Examples

from typing import List

import anyscale
from anyscale.job.models import JobRunStatus

run_statuses: List[JobRunStatus] = anyscale.job.status(name="my-job").runs

JobRunState

Current state of an individual job run.

Values

  • STARTING: The job run is being started and is not yet running.
  • RUNNING: The job run is running.
  • FAILED: The job run did not finish running or the entrypoint returned an exit code other than 0.
  • SUCCEEDED: The job run finished running and its entrypoint returned exit code 0.
  • UNKNOWN: The CLI/SDK received an unexpected state from the API server. In most cases, this means you need to update the CLI.

JobLogMode

Mode to use for getting job logs.

Values

  • HEAD: Fetch logs from the start of the job's log.
  • TAIL: Fetch logs from the end of the job's log.

Job SDK Legacy

The AnyscaleSDK class must be constructed in order to make calls to the legacy SDK. Constructing this class creates an authenticated client for using the SDK.

  • auth_token (Optional String): Authentication token used to verify you have permissions to access Anyscale. If not provided, permissions default to the credentials set for your current user. Credentials can be set by following the instructions on this page: https://console.anyscale.com/credentials

Example

from anyscale import AnyscaleSDK

sdk = AnyscaleSDK()

create_job Legacy

warning

This command is deprecated. Upgrade to anyscale.job.submit.

Create a Production Job

Parameters

  • create_production_job (CreateProductionJob)

Returns ProductionjobResponse

get_production_job Legacy

warning

This command is deprecated. Upgrade to anyscale.job.status.

Get a Production Job

Parameters

  • production_job_id (str): Defaults to null.

Returns ProductionjobResponse

get_session_for_job Legacy

warning

This command is deprecated. Upgrade to anyscale.job.status.

Get Session for Production Job

Parameters

  • production_job_id (str): Defaults to null.

Returns SessionResponse

terminate_job Legacy

warning

This command is deprecated. Upgrade to anyscale.job.terminate.

Terminate a Production Job

Parameters

  • production_job_id (str): Defaults to null.

Returns ProductionjobResponse

fetch_job_logs Legacy

warning

This command is deprecated. Upgrade to anyscale.job.get_logs.

Retrieves logs for a Job.

This function may take several minutes if the Cluster this Job ran on has been terminated.

Returns the log output as a string.

Raises an Exception if fetching logs fails.

  • job_id (String): ID of the Job.

Example

from anyscale import AnyscaleSDK

sdk = AnyscaleSDK(auth_token="sss_YourAuthToken")

job_logs = sdk.fetch_job_logs(job_id="job_id")

print(job_logs)

fetch_production_job_logs Legacy

warning

This command is deprecated. Upgrade to anyscale.job.get_logs.

Retrieves logs for a Production Job.

This function may take several minutes if the Cluster this Production Job ran on has been terminated.

Returns the log output as a string.

Raises an Exception if fetching logs fails.

  • job_id (String): ID of the Job.

Example

from anyscale import AnyscaleSDK

sdk = AnyscaleSDK(auth_token="sss_YourAuthToken")

job_logs = sdk.fetch_production_job_logs(job_id="production_job_id")

print(job_logs)

get_job_logs_download Legacy

warning

This command is deprecated. Upgrade to anyscale.job.get_logs.

Parameters

  • job_id (str): Defaults to null.
  • all_logs (optional bool): Whether to grab all logs. Defaults to true.

Returns LogdownloadresultResponse

get_job_logs_stream Legacy

warning

This command is deprecated. Upgrade to anyscale.job.get_logs.

Parameters

  • job_id (str): Defaults to null.

Returns LogstreamResponse

search_jobs Legacy

Limited support

This command is not actively maintained. Use with caution.

DEPRECATED: This API is now deprecated. Use list_production_jobs instead.

Parameters

  • jobs_query (JobsQuery)

Returns JobListResponse

list_production_jobs Legacy

Limited support

This command is not actively maintained. Use with caution.

Parameters

  • project_id (optional str): project_id to filter by. Defaults to null.
  • name (optional str): name to filter by. Defaults to null.
  • state_filter (List[HaJobStates]): A list of session states to filter by. Defaults to [].
  • creator_id (optional str): filter by creator id. Defaults to null.
  • paging_token (optional str): Defaults to null.
  • count (optional int): Defaults to null.

Returns ProductionjobListResponse

Job Models Legacy

BaseJobStatus Legacy

An enumeration.

Possible Values: ['RUNNING', 'COMPLETED', 'PENDING', 'STOPPED', 'SUCCEEDED', 'FAILED', 'UNKNOWN']

CreateClusterComputeConfig Legacy

Configuration of compute resources to use for launching a Cluster. Used when creating a cluster compute.

  • cloud_id (str): The ID of the Anyscale cloud to use for launching Clusters. [default to null]
  • max_workers (int): Desired limit on total running workers for this Cluster. [optional] [default to null]
  • region (str): Deprecated! When creating a cluster compute, a region does not have to be provided. Instead we will use the value from the cloud. [optional] [default to USE_CLOUD]
  • allowed_azs (List[str]): The availability zones that sessions are allowed to be launched in, e.g. "us-west-2a". If not specified or "any" is provided as the option, any AZ may be used. If "any" is provided, it must be the only item in the list. [optional] [default to null]
  • head_node_type (ComputeNodeType): Node configuration to use for the head node. [default to null]
  • worker_node_types (List[WorkerNodeType]): A list of node types to use for worker nodes. [optional] [default to null]
  • aws_advanced_configurations_json (object): [DEPRECATED: use advanced_configurations_json instead] The advanced configuration JSON that we pass directly to AWS APIs when launching an instance. We may do some validation on this JSON and reject it if it uses a configuration that Anyscale does not support. [optional] [default to null]
  • gcp_advanced_configurations_json (object): [DEPRECATED: use advanced_configurations_json instead] The advanced configuration JSON that we pass directly to GCP APIs when launching an instance. We may do some validation on this JSON and reject it if it uses a configuration that Anyscale does not support. [optional] [default to null]
  • advanced_configurations_json (object): Advanced configurations for this compute node type to pass to the cloud provider when launching this instance. [optional] [default to null]
  • maximum_uptime_minutes (int): If set to a positive number, Anyscale will terminate the cluster this many minutes after cluster start. [optional] [default to null]
  • auto_select_worker_config (bool): If set to true, worker node groups will automatically be selected based on workload. [optional] [default to false]
  • flags (object): A set of advanced cluster-level flags that can be used to configure a particular workload. [optional] [default to null]
  • idle_termination_minutes (int): If set to a positive number, Anyscale will terminate the cluster this many minutes after the cluster is idle. Idle time is defined as the time during which a Cluster is not running a user command or a Ray driver. Time spent running commands on Jupyter or ssh is still considered 'idle'. To disable, set this field to 0. [optional] [default to 120]

CreateJobQueueConfig Legacy

Specifies configuration of the job being added to a Job Queue

  • priority (int): Job's relative priority (only relevant for Job Queues of type PRIORITY). Valid values range from 0 (highest) to +inf (lowest). Default value is None. [optional] [default to null]
  • target_job_queue_id (str): Identifier of the existing Job Queue this job should be added to. Note, only one of `target_job_queue_id`, `target_job_queue_name`, or `job_queue_spec` could be provided. [optional] [default to null]
  • target_job_queue_name (str): User-provided name (identifier) of the existing Job Queue this job should be added to. Note, only one of `target_job_queue_id`, `target_job_queue_name`, or `job_queue_spec` could be provided. [optional] [default to null]
  • job_queue_spec (JobQueueSpec): Spec of the Job Queue definition that should be created and associated with this job. Note, only one of `target_job_queue_id`, `target_job_queue_name`, or `job_queue_spec` could be provided. [optional] [default to null]

CreateProductionJob Legacy

  • name (str): Name of the job. [default to null]
  • description (str): Description of the job. [optional] [default to null]
  • project_id (str): Id of the project this job will start clusters in. [optional] [default to null]
  • config (CreateProductionJobConfig): [default to null]
  • job_queue_config (CreateJobQueueConfig): Configuration specifying the semantics of execution using job queues. [optional] [default to null]

CreateProductionJobConfig Legacy

  • entrypoint (str): A script that will be run to start your job. This command will be run in the root directory of the specified runtime env, e.g. 'python script.py'. [optional] [default to ]
  • ray_serve_config (object): The Ray Serve config to use for this Production service. This config defines your Ray Serve application, and will be passed directly to Ray Serve. You can learn more about Ray Serve config files here: https://docs.ray.io/en/latest/serve/production-guide/config.html [optional] [default to null]
  • runtime_env (RayRuntimeEnvConfig): A Ray runtime env JSON. Your entrypoint will be run in the environment specified by this runtime env. [optional] [default to null]
  • build_id (str): The id of the cluster env build. This id will determine the docker image your job is run on. [default to null]
  • compute_config_id (str): The id of the compute configuration that you want to use. This id will specify the resources required for your job. [optional] [default to null]
  • compute_config (CreateClusterComputeConfig): One-off compute that the cluster will use. [optional] [default to null]
  • max_retries (int): The number of retries this job will attempt on failure. Set to None to set infinite retries. [optional] [default to 5]
  • timeout_s (int): The timeout in seconds for each job run. Set to None for no limit. [optional] [default to null]
  • runtime_env_config (RayRuntimeEnvConfig): DEPRECATED: Use runtime_env. [optional] [default to null]

HaJobGoalStates Legacy

An enumeration.

Possible Values: ['SCHEDULED', 'RUNNING', 'TERMINATED', 'SUCCESS']

HaJobStates Legacy

An enumeration.

Possible Values: ['PENDING', 'AWAITING_CLUSTER_START', 'UPDATING', 'RUNNING', 'SUCCESS', 'ERRORED', 'TERMINATED', 'CLEANING_UP', 'BROKEN', 'OUT_OF_RETRIES', 'RESTARTING']

Job Legacy

  • id (str): Server assigned unique identifier. [default to null]
  • ray_session_name (str): Name of the Session provided from Ray. [default to null]
  • ray_job_id (str): ID of the Job provided from Ray. [default to null]
  • name (str): Name of this Job. [optional] [default to null]
  • status (JobStatus): Status of this Job's execution. [default to null]
  • created_at (datetime): Time at which this Job was created. [default to null]
  • finished_at (datetime): Time at which this Job finished. If absent, this Job is still running. [optional] [default to null]
  • ray_job_submission_id (str): ID of the submitted Ray Job that this Job corresponds to. [optional] [default to null]
  • cluster_id (str): ID of the Anyscale Cluster this Job is on. [default to null]
  • namespace_id (str): ID of the Anyscale Namespace this Job is using. [optional] [default to DEPRECATED_NAMESPACE_ID]
  • runtime_environment_id (str): ID of the Anyscale Runtime Environment this Job is using. [default to null]
  • project_id (str): ID of the Project this Job belongs to. [optional] [default to null]
  • creator_id (str): ID of the user who created this Job. [default to null]

JobListResponse Legacy

A list response from the API. Contains a field "results" which has the contents of the response.

  • results (List[Job]): [default to null]
  • metadata (ListResponseMetadata): [optional] [default to null]

JobQueueConfig Legacy

Captures job's configuration in the context of its scheduling & execution via Job Queues

  • priority (int): Job's relative priority (only relevant for Job Queues of type PRIORITY). Valid values range from 0 (highest) to +inf (lowest). Default value is None. [optional] [default to null]

JobQueueExecutionMode Legacy

An enumeration.

Possible Values: ['FIFO', 'LIFO', 'PRIORITY']

JobQueueSpec Legacy

Specifies definition of the Job Queue to be created

  • job_queue_name (str): Optional user-provided identifier of the queue that could be subsequently used to reference the queue when submitting jobs. Note that the name has to be unique within the project. [optional] [default to null]
  • execution_mode (JobQueueExecutionMode): Execution mode of the jobs submitted into the queue (one of: FIFO, LIFO, PRIORITY). [optional] [default to null]
  • compute_config_id (str): The id of the compute configuration that will be used to create the cluster associated with the queue. Defaults to the default compute config in the given project. [optional] [default to null]
  • cluster_environment_build_id (str): The id of the cluster environment build that will be used to create the cluster associated with the queue. [optional] [default to null]
  • max_concurrency (int): Max number of jobs to be run concurrently. Defaults to 1, i.e. running no more than 1 job at a time. [optional] [default to 1]
  • idle_timeout_sec (int): Max period of time the queue will accept new jobs before being sealed off and its associated cluster being shut down. [default to null]

JobRunType Legacy

An enumeration.

Possible Values: ['INTERACTIVE_SESSION', 'RUN', 'RAY_SUBMIT']

JobStatus Legacy

An enumeration.

Possible Values: ['RUNNING', 'COMPLETED', 'PENDING', 'STOPPED', 'SUCCEEDED', 'FAILED', 'UNKNOWN']

JobsSortField Legacy

An enumeration.

Possible Values: ['STATUS', 'CREATED_AT', 'FINISHED_AT', 'NAME', 'ID', 'COST']

ProductionJob Legacy

Model of a Production Job for use in the SDK.

  • id (str): The id of this job. [default to null]
  • name (str): Name of the job. [default to null]
  • description (str): Description of the job. [optional] [default to null]
  • created_at (datetime): The time this job was created. [default to null]
  • creator_id (str): The id of the user who created this job. [default to null]
  • config (ProductionJobConfig): The config that was used to create this job. [default to null]
  • job_queue_config (JobQueueConfig): Job Queue configuration of this job (if applicable). [optional] [default to null]
  • state (ProductionJobStateTransition): The current state of this job. [default to null]
  • project_id (str): Id of the project this job will start clusters in. [default to null]
  • last_job_run_id (str): The id of the last job run. [optional] [default to null]
  • schedule_id (str): If the job was launched via a Scheduled job, this will contain the id of that schedule. [optional] [default to null]
  • job_queue_id (str): Id of the job queue this job is being enqueued to. [optional] [default to null]

ProductionJobConfig Legacy

  • entrypoint (str): A script that will be run to start your job. This command will be run in the root directory of the specified runtime env, e.g. 'python script.py'. [optional] [default to ]
  • ray_serve_config (object): The Ray Serve config to use for this Production service. This config defines your Ray Serve application, and will be passed directly to Ray Serve. You can learn more about Ray Serve config files here: https://docs.ray.io/en/latest/serve/production-guide/config.html [optional] [default to null]
  • runtime_env (RayRuntimeEnvConfig): A Ray runtime env JSON. Your entrypoint will be run in the environment specified by this runtime env. [optional] [default to null]
  • build_id (str): The id of the cluster env build. This id will determine the docker image your job is run on. [default to null]
  • compute_config_id (str): The id of the compute configuration that you want to use. This id will specify the resources required for your job. [default to null]
  • compute_config (CreateClusterComputeConfig): One-off compute that the cluster will use. [optional] [default to null]
  • max_retries (int): The number of retries this job will attempt on failure. Set to None to set infinite retries. [optional] [default to 5]
  • timeout_s (int): The timeout in seconds for each job run. Set to None for no limit. [optional] [default to null]
  • runtime_env_config (RayRuntimeEnvConfig): DEPRECATED: Use runtime_env. [optional] [default to null]

ProductionJobStateTransition Legacy

  • id (str): The id of this job state transition. [default to null]
  • state_transitioned_at (datetime): The last time the state of this job was updated. This includes updates to the state and to the goal state. [default to null]
  • current_state (HaJobStates): The current state of the job. [default to null]
  • goal_state (HaJobGoalStates): The goal state of the job. [optional] [default to null]
  • error (str): An error message that occurred in this job state transition. [optional] [default to null]
  • operation_message (str): The logging message for this job state transition. [optional] [default to null]
  • cluster_id (str): The id of the cluster the job is running on. [optional] [default to null]

ProductionjobListResponse Legacy

A list response from the API. Contains a field "results" which has the contents of the response.

  • results (List[ProductionJob]): [default to null]
  • metadata (ListResponseMetadata): [optional] [default to null]

ProductionjobResponse Legacy

A response from the API. Contains a field "result" which has the contents of the response.

  • result (ProductionJob): [default to null]

RayRuntimeEnvConfig Legacy

A runtime env config. Can be used to start a production job.

  • working_dir (str): The working directory that your code will run in. Must be a remote URI like an s3 or git path. [optional] [default to null]
  • py_modules (List[str]): Python modules that will be installed along with your runtime env. These must be remote URIs. [optional] [default to null]
  • pip (List[str]): A list of pip packages to install. [optional] [default to null]
  • conda (object): Union[Dict[str, Any], str]: Either the conda YAML config or the name of a local conda env (e.g., "pytorch_p36"). [optional] [default to null]
  • env_vars (Dict[str, str]): Environment variables to set. [optional] [default to null]
  • config (object): Config for the runtime environment. Can be used to set setup_timeout_seconds, the timeout of runtime environment creation. [optional] [default to null]

SortByClauseJobsSortField Legacy

This model is used in the backend to represent the SQL ORDER BY clauses.

  • sort_field (JobsSortField): [default to null]
  • sort_order (SortOrder): [default to null]