The Matillion Orchestrator is a pre-set orchestrator that triggers the start of a Matillion job as part of a DataOps pipeline, making it possible to integrate all existing Matillion jobs into a DataOps pipeline. It drives Matillion jobs through the Matillion ETL API.
The orchestrator's workflow is as follows:
- It posts a request to the specified Matillion instance
- It then polls the status and fetches progress from the Matillion instance
- Finally, it propagates the Matillion execution status to the DataOps pipeline
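The three steps above can be sketched as a simple post-then-poll loop. This is an illustrative sketch only, not the orchestrator's actual implementation: the endpoint paths follow the public Matillion ETL v1 REST API, and the `id`/`state` response field names, status values, and helper names are assumptions.

```python
import base64
import json
import time
import urllib.request


def build_run_url(server: str, group: str, project: str, version: str, job: str) -> str:
    """Build the assumed Matillion ETL v1 job-run endpoint for an instance."""
    return (
        f"http://{server}/rest/v1/group/name/{group}"
        f"/project/name/{project}/version/name/{version}"
        f"/job/name/{job}/run"
    )


def pipeline_status(matillion_state: str) -> int:
    """Map a (assumed) Matillion task state to a pipeline exit code: 0 = success."""
    return 0 if matillion_state == "SUCCESS" else 1


def run_and_poll(server, group, project, version, job, user, password, interval=10):
    """POST a run request, poll the task until it finishes, return an exit code."""
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {"Authorization": f"Basic {auth}"}

    # 1. Post a request to the specified Matillion instance.
    req = urllib.request.Request(
        build_run_url(server, group, project, version, job),
        data=b"", headers=headers, method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        task_id = json.load(resp)["id"]  # "id" field name is an assumption

    # 2. Poll the status and fetch progress from the Matillion instance.
    task_url = f"http://{server}/rest/v1/task/id/{task_id}"
    while True:
        with urllib.request.urlopen(
            urllib.request.Request(task_url, headers=headers)
        ) as resp:
            task = json.load(resp)
        if task.get("state") in ("SUCCESS", "FAILED", "CANCELLED"):
            break
        time.sleep(interval)

    # 3. Propagate the Matillion execution status to the DataOps pipeline.
    return pipeline_status(task["state"])
```

A non-zero return value from `pipeline_status` is what lets the surrounding pipeline job fail when the Matillion job does, which is the behavior the recommendation below relies on.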
A minimal job definition looks like this:

```yaml
"My Matillion Job":
  stage: "My Stage"
```
We recommend that you configure the DataOps pipeline to continue running only if the Matillion job is successful, ensuring that the pipeline run does not transform any out-of-date data.
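One way to express this, assuming a GitLab-CI-style pipeline configuration where a failed job stops later stages by default (the job and stage names here are illustrative):

```yaml
"My Matillion Job":
  stage: "My Stage"
  # allow_failure defaults to false, so a failed Matillion job
  # stops the pipeline here

"Transform Data":
  stage: "Transformation"  # runs only if the earlier stages succeeded
```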
| Parameter | Required/Default | Description |
|-----------|------------------|-------------|
| | REQUIRED, but by default pulled from the DataOps Vault | Username to access the Matillion ETL API |
| | REQUIRED, but by default pulled from the DataOps Vault | Password to access the Matillion ETL API |
| | REQUIRED | Currently only supports |
| | REQUIRED | The server IP address or domain name of the Matillion ETL instance |
| | REQUIRED | Name of the Matillion group of the job to be executed |
| | REQUIRED | Name of the Matillion project of the job to be executed |
| | REQUIRED | Name of the Matillion job to execute |
| | | Version for Matillion ETL |
| | None | If set, the credentials are fetched from the DataOps Vault and exposed in the environment |
| | | If set, overrides the default vault path for the username |
| | | If set, overrides the default vault path for the password |
This example demonstrates what a typical pipeline job looks like:

```yaml
"Start Matillion Job":
  stage: "Bulk Ingestion"
```
Host Dependencies (and Resources)