DataOps Orchestration

DataOps.live helps you easily create data pipelines to build your modern data products and data apps. The data product platform is made up of many apps providing specialized capabilities, ranging from data ingestion and data transformation through data quality to data observability and governance. Orchestrators are your building blocks for orchestrating all these applications to create high-value data products.

DataOps pipelines are made up of a directed acyclic graph (DAG) of jobs. All DataOps jobs use an orchestrator image providing the necessary capabilities to interact with other apps.
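As a minimal sketch of such a DAG, the fragment below defines two jobs in separate stages; the job names, stage names, and image references are illustrative placeholders, not the platform's actual values:

```yaml
# Hypothetical two-job pipeline DAG; names and image tags are illustrative.
Ingest Data:
  image: example-ingestion-orchestrator   # placeholder orchestrator image
  stage: Data Ingestion
  script:
    - /dataops

Transform Data:
  image: example-transform-orchestrator   # placeholder orchestrator image
  stage: Data Transformation              # runs after the ingestion stage
  script:
    - /dataops
```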

The most commonly used orchestrators are listed below.

Orchestrators overview

Most DataOps orchestrators perform a single main action, configured using job variables. Therefore, the job's script block only needs to call the /dataops entry point, e.g.:

My Job:
  ...
  script:
    - /dataops

You can insert additional script lines before and after the /dataops call to perform custom setup and teardown actions, respectively.
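For example, a job could wrap the /dataops call like this; the echo commands stand in for whatever setup and teardown a real job needs:

```yaml
My Job:
  ...
  script:
    - echo "Preparing workspace..."   # custom setup before the main action
    - /dataops                        # orchestrator's main action
    - echo "Cleaning up..."           # custom teardown after the main action
```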

However, a few orchestrators are more flexible, providing a set of utilities that support whatever actions the job developer requires. Jobs using these flexible orchestrators do not typically call the /dataops entry point (although it is still available if needed) but instead define a sequence of custom script actions, e.g.:

My Job:
  ...
  script:
    - echo "My job starting..."
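A fuller sketch of such a flexible job might chain several custom commands; the script name here is hypothetical, standing in for any utility or script the orchestrator image makes available:

```yaml
My Job:
  ...
  script:
    - echo "My job starting..."
    - python my_custom_script.py   # hypothetical custom action
    - echo "My job finished."
```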

List of orchestrators