# Dataiku Orchestrator
Enterprise
| Image | $DATAOPS_DATAIKU_RUNNER_IMAGE |
| --- | --- |
| Feature Status | Public Preview |
The Dataiku orchestrator integrates and runs Dataiku Scenarios as part of a DataOps pipeline.
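Conceptually, the runner authenticates against the Dataiku instance and triggers the configured Scenario, failing the pipeline job if the Scenario run fails. The following is a minimal sketch of that behavior using Dataiku's official `dataikuapi` client (`pip install dataiku-api-client`); the actual internals of `$DATAOPS_DATAIKU_RUNNER_IMAGE` are not documented here, so treat this as illustrative only:

```python
# Illustrative sketch of what the orchestrator does conceptually; the real
# runner's implementation may differ.
import os

import dataikuapi

# The same four values the job supplies via its `variables:` block,
# assumed here to be available as environment variables.
client = dataikuapi.DSSClient(
    os.environ["DATAIKU_URL"],
    os.environ["DATAIKU_API_KEY"],
)

project = client.get_project(os.environ["DATAIKU_PROJECT_KEY"])
scenario = project.get_scenario(os.environ["DATAIKU_SCENARIO_ID"])

# Trigger the Scenario and block until it completes. run_and_wait() raises
# on a failed run, which would surface as a failed pipeline job.
scenario.run_and_wait()
```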
## Usage
This YAML script demonstrates how to structure a typical Dataiku job run inside a DataOps pipeline.
pipelines/includes/local_includes/dataiku-jobs/my_dataiku_job.yml
"Dataiku job":
extends:
- .agent_tag
stage: Datascience Processing
image: $DATAOPS_DATAIKU_RUNNER_IMAGE
variables:
DATAIKU_API_KEY: DATAOPS_VAULT(DATAIKU.API_KEY)
DATAIKU_URL: DATAOPS_VAULT(DATAIKU.URL)
DATAIKU_PROJECT_KEY: DATAOPS_VAULT(DATAIKU.PROJECT_KEY)
DATAIKU_SCENARIO_ID: DATAOPS_VAULT(DATAIKU.SCENARIO_ID)
script:
- /dataops
icon: ${DATAIKU_ICON}
## Supported parameters
| Parameter | Required/Default | Description |
| --- | --- | --- |
| DATAIKU_URL | REQUIRED | URL of the Dataiku instance |
| DATAIKU_API_KEY | REQUIRED | Dataiku API access key (see the Dataiku Authentication docs) |
| DATAIKU_PROJECT_KEY | REQUIRED | Key of the Dataiku project that contains the Scenario to trigger from DataOps.live |
| DATAIKU_SCENARIO_ID | REQUIRED | ID of the Dataiku Scenario to trigger from DataOps.live |
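Both the project key and the Scenario ID are visible in the Dataiku UI; if in doubt, you can also enumerate them with the same `dataikuapi` client. A minimal sketch, assuming an API key with read access (the URL and key below are placeholders, and the return shape of `list_scenarios()` shown here is the classic dict form, which can vary across client versions):

```python
import dataikuapi

# Placeholder instance URL and API key; substitute your own.
client = dataikuapi.DSSClient("https://dss.example.com", "my-api-key")

for project_key in client.list_project_keys():
    project = client.get_project(project_key)
    # Each listed scenario exposes an "id" field, which is the value
    # DATAIKU_SCENARIO_ID expects (not the human-readable name).
    for scenario in project.list_scenarios():
        print(project_key, scenario["id"], scenario["name"])
```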
## Example jobs
This YAML script demonstrates a typical Dataiku use case: triggering a pre-existing Dataiku Scenario.
pipelines/includes/local_includes/dataiku_scenario.yml
"Dataiku job":
extends:
- .agent_tag
stage: Datascience Processing
image: $DATAOPS_DATAIKU_RUNNER_IMAGE
variables:
DATAIKU_API_KEY: DATAOPS_VAULT(DATAIKU.API_KEY)
DATAIKU_URL: DATAOPS_VAULT(DATAIKU.URL)
DATAIKU_PROJECT_KEY: DATAOPS_VAULT(DATAIKU.PROJECT_KEY)
DATAIKU_SCENARIO_ID: DATAOPS_VAULT(DATAIKU.SCENARIO_ID)
script:
- /dataops
icon: ${DATAIKU_ICON}