
Modelling and Transformation Orchestrator

Type: Pre-Set
Image: $DATAOPS_TRANSFORM_RUNNER_IMAGE

The Modelling and Transformation (MATE) Orchestrator is a pre-set orchestrator responsible for taking the models in the /dataops/modelling directory and executing them in a Snowflake Data Warehouse by first compiling them to SQL and then running the resultant SQL statements.

Multiple operations are possible within the Modelling and Transformation Engine (MATE). To trigger the required operation, set the parameter TRANSFORM_ACTION to one of the supported values.

Usage

The Modelling and Transformation Orchestrator MUST always be used together with the DataOps Reference Project, which provides, among other things, the .modelling_and_transformation_base job.

pipelines/includes/local_includes/mate_jobs/build_all_models.yml
"Build all Models":
extends:
- .modelling_and_transformation_base
- .agent_tag
stage: "Data Transformation"
image: $DATAOPS_TRANSFORM_RUNNER_IMAGE
variables:
TRANSFORM_ACTION: RUN
script:
- /dataops
icon: ${TRANSFORM_ICON}

Supported Parameters

| Parameter | Required/Default | Description |
|---|---|---|
| TRANSFORM_ACTION | REQUIRED | Must be one of RUN, COMPILE, SNAPSHOT, DOCS, TEST, OPERATION, RENDER, or SEED |
| TRANSFORM_MODEL_SELECTOR | Optional, defaults to blank | The scope of the project to execute: the name of a model or a tag selector, e.g. person or tag:curation |
| TRANSFORM_OPERATION_NAME | REQUIRED if TRANSFORM_ACTION is set to OPERATION | The macro/operation to execute |
| TRANSFORM_OPERATION_ARGS | Optional, defaults to {} | A YAML string representing the macro arguments, e.g. {arg1: value1, arg2: 345} (see the sketch after this table) |
| FULL_REFRESH | Optional, defaults to blank | If set, forces incremental models to be fully refreshed |
| TRANSFORM_PROJECT_PATH | REQUIRED, defaults to $CI_PROJECT_DIR/templates/modelling | The directory in the project structure where the base of the Modelling and Transformation project is located |
| TRANSFORM_FORCE_DEPS | REQUIRED, defaults to False | Whether to force a refresh of external transformation libraries before execution |
| DATABASE_PROFILE | REQUIRED, defaults to snowflake_operations | Which dbt profile to use |
| DATABASE_TARGET | REQUIRED, defaults to other | Which dbt profile target to use |
| TRANSFORM_DEBUG_TIMING | Optional, defaults to blank | When set, saves performance profiling information to a file timing.log in the TRANSFORM_PROJECT_PATH directory. To view the report, use the snakeviz tool with the command snakeviz timing.log |
| TRANSFORM_PARTIAL_PARSE | Optional, defaults to blank | When set, disables partial parsing in the project. See the section on partial parsing for more information |
| TRANSFORM_EXTRA_PARAMS_BEFORE | Optional, defaults to blank | Additional command-line arguments added to the beginning of the transform orchestrator command |
| TRANSFORM_EXTRA_PARAMS_AFTER | Optional, defaults to blank | Additional command-line arguments added to the end of the transform orchestrator command |
| DATAOPS_REMOVE_RENDERED_TEMPLATES | Optional, defaults to blank | If set, the system removes any templates found after processing. This allows files of the format <modelname>.template.yml to be used without creating extra models in the project |
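
As a minimal sketch of passing arguments to a macro, the following OPERATION job combines parameters already described above. The file name, job name, and argument values are placeholders; the argument string follows the {arg1: value1, arg2: 345} format shown in the table:

pipelines/includes/local_includes/mate_jobs/run_my_macro_with_args.yml
"Run My Macro with Args":
  extends:
    - .modelling_and_transformation_base
    - .agent_tag
  stage: "Additional Configuration"
  image: $DATAOPS_TRANSFORM_RUNNER_IMAGE
  variables:
    TRANSFORM_ACTION: OPERATION
    TRANSFORM_OPERATION_NAME: my_macro
    # YAML string of macro arguments, quoted so the braces pass through unchanged
    TRANSFORM_OPERATION_ARGS: "{arg1: value1, arg2: 345}"
  script:
    - /dataops
  icon: ${TRANSFORM_ICON}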

The following sections expand on several of these supported parameters:

Test Reporting

The test reporting feature breaks down into the following categories:

1. Enable Test Reporting

The Transform Orchestrator generates a test report when running a TEST job. To surface this report in the DataOps platform, the job must declare the report as an artifact, as in lines 11 to 14 of the following example:

pipelines/includes/local_includes/mate_jobs/my_test_job.yml
"My Test Job":
extends:
- .modelling_and_transformation_base
- .agent_tag
stage: "Transformation Testing"
image: $DATAOPS_TRANSFORM_RUNNER_IMAGE
variables:
TRANSFORM_ACTION: TEST
script:
- /dataops
artifacts:
when: always
reports:
junit: $CI_PROJECT_DIR/report.xml
icon: ${TRANSFORM_ICON}

Once the job has run successfully, the test results will be added to the pipeline's Tests tab.

[Screenshot: the pipeline's Tests tab]

2. Test States

A test result (or state) falls into one of the following categories (an illustrative test definition follows the table):

| State | Description | Example |
|---|---|---|
| PASS | The test executed and passed | The target fulfilled its specified condition, such as the correct number of rows present in a database table |
| FAIL | The test executed and failed | The target did not meet its specified condition, such as an incorrect number of rows present in a table |
| ERROR | An error occurred while executing the test | An invalid column or table name specified in a table/dataset |
| SKIPPED | The test was skipped due to an unfulfilled condition | The test did not execute because of an unfulfilled condition in a dataset/table |
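
For context, here is a minimal dbt-style test definition of the kind MATE executes; the model name, column, and file path are hypothetical. Its tests could produce the states above, depending on the data:

dataops/modelling/models/curation/person.yml (hypothetical path and model)
version: 2

models:
  - name: person
    columns:
      - name: person_id
        tests:
          - not_null          # FAILs (or ERRORs) if the column or table is invalid or contains NULLs
          - unique:
              config:
                severity: warn   # reported as a warning; see TREAT_TEST_WARNS_AS_FAILED below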

3. Test Report Control

The following parameters control the generation of test reports (an illustrative job follows the table):

| Parameter | Required/Optional | Description |
|---|---|---|
| JSON_PATH | Optional, defaults to $TRANSFORM_PROJECT_PATH/target/run_results.json | The path to the JSON results generated by the Transform Orchestrator |
| REPORT_NAME | Optional, defaults to report.xml | The name of the generated report |
| REPORT_DIR | Optional, defaults to $CI_PROJECT_DIR | The path where the generated report is saved. Note: this directory must already exist before the tests run |
| TREAT_TEST_ERRORS_AS_FAILED | Optional, defaults to FALSE | If enabled, reports a test error as FAIL. See Test States above for more information |
| TREAT_TEST_WARNS_AS_FAILED | Optional, defaults to FALSE | If enabled, reports a test warning as FAIL. See Test States above for more information |
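
The following sketch shows a TEST job overriding these parameters alongside the artifacts block. The job name and report name are placeholders, and TRUE is assumed here as the enabling value for the two flags; note that the junit artifact path must match REPORT_DIR and REPORT_NAME:

pipelines/includes/local_includes/mate_jobs/my_strict_test_job.yml
"My Strict Test Job":
  extends:
    - .modelling_and_transformation_base
    - .agent_tag
  stage: "Transformation Testing"
  image: $DATAOPS_TRANSFORM_RUNNER_IMAGE
  variables:
    TRANSFORM_ACTION: TEST
    REPORT_NAME: strict_report.xml      # placeholder report name
    TREAT_TEST_ERRORS_AS_FAILED: TRUE   # report test errors as FAIL
    TREAT_TEST_WARNS_AS_FAILED: TRUE    # report test warnings as FAIL
  script:
    - /dataops
  artifacts:
    when: always
    reports:
      junit: $CI_PROJECT_DIR/strict_report.xml   # must match REPORT_DIR/REPORT_NAME
  icon: ${TESTING_ICON}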

Partial Parse

Partial parsing can improve the performance characteristics of DataOps pipeline runs by limiting the number of files a pipeline must parse every time it runs. Here, "parsing" means reading files in a project from disk and capturing ref() and config() method calls. These method calls are used to determine the following:

  • The shape of the dbt DAG (Directed Acyclic Graph)
  • The supplied resource configurations

There is no need to re-parse these files if partial parsing is enabled and the files are unchanged between job requests. The Transform Orchestrator can use the parsed representation from the last job request. However, if a file has changed between invocations, then the orchestrator will re-parse the file and update the parsed node cache accordingly.

Partial parsing is enabled by default. To disable it, set TRANSFORM_PARTIAL_PARSE to 1, as in the sketch below.
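
As a minimal sketch (the job name is a placeholder), partial parsing can be disabled for a single job by setting the variable in that job:

pipelines/includes/local_includes/mate_jobs/build_all_models_full_parse.yml
"Build all Models (full parse)":
  extends:
    - .modelling_and_transformation_base
    - .agent_tag
  stage: "Data Transformation"
  image: $DATAOPS_TRANSFORM_RUNNER_IMAGE
  variables:
    TRANSFORM_ACTION: RUN
    TRANSFORM_PARTIAL_PARSE: 1   # setting this disables partial parsing for this job
  script:
    - /dataops
  icon: ${TRANSFORM_ICON}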

To use partial parsing in a DataOps project, enable caching in the .modelling_and_transformation_base job. Override the settings from the reference project base job by creating a definition in your project as follows:

pipelines/includes/local_includes/mate_jobs/base_job_modelling_and_transformation.yml
.modelling_and_transformation_base:
  image: $DATAOPS_TRANSFORM_RUNNER_IMAGE
  variables:
    DATAOPS_TEMPLATES_DIR: /tmp/local_config
    DATAOPS_SECONDARY_TEMPLATES_DIR: $CI_PROJECT_DIR/dataops/modelling
  cache:
    key: $CI_PIPELINE_ID
    paths:
      - dataops/modelling/target/
  icon: ${TRANSFORM_ICON}

Example Jobs

The examples below extend the base job .modelling_and_transformation_base to simplify the MATE job definition. See the reference project base job for full details about these examples.

For ease of reading, the examples below are summarized as follows:

  1. Build All Models
  2. Build a Directory of Models
  3. Build Tagged Models
  4. Test all Models
  5. Build Models by Running a Macro
  6. Run a Macro using the SOLE Admin Role

1. Build All Models

Build all the models in your project:

pipelines/includes/local_includes/mate_jobs/build_all_models.yml
"Build all Models":
extends:
- .modelling_and_transformation_base
- .agent_tag
stage: "Data Transformation"
image: $DATAOPS_TRANSFORM_RUNNER_IMAGE
variables:
TRANSFORM_ACTION: RUN
script:
- /dataops
icon: ${TRANSFORM_ICON}

2. Build a Directory of Models

Build all the models in the divisions/finance directory:

pipelines/includes/local_includes/mate_jobs/build_finance_models.yml
"Build Finance Models":
  extends:
    - .modelling_and_transformation_base
    - .agent_tag
  stage: "Data Transformation"
  image: $DATAOPS_TRANSFORM_RUNNER_IMAGE
  variables:
    TRANSFORM_ACTION: RUN
    TRANSFORM_MODEL_SELECTOR: divisions/finance
  script:
    - /dataops
  icon: ${TRANSFORM_ICON}

3. Build Tagged Models

Build all the models tagged finance:

pipelines/includes/local_includes/mate_jobs/build_tagged_models.yml
"Build Tagged Models":
  extends:
    - .modelling_and_transformation_base
    - .agent_tag
  stage: "Data Transformation"
  image: $DATAOPS_TRANSFORM_RUNNER_IMAGE
  variables:
    TRANSFORM_ACTION: RUN
    TRANSFORM_MODEL_SELECTOR: tag:finance
  script:
    - /dataops
  icon: ${TRANSFORM_ICON}

4. Test all Models

The TRANSFORM_MODEL_SELECTOR variable works the same way with TEST as it does with RUN.

pipelines/includes/local_includes/mate_jobs/test_all_models.yml
"Test all Models": 
extends:
- .modelling_and_transformation_base
- .agent_tag
stage: "Transformation Testing"
image: $DATAOPS_TRANSFORM_RUNNER_IMAGE
variables:
TRANSFORM_ACTION: TEST
script:
- /dataops
artifacts:
when: always
reports:
junit: $CI_PROJECT_DIR/report.xml
icon: ${TESTING_ICON}

5. Build Models by Running a Macro

Rather than building/testing all or part of the MATE models, a MATE job can also execute a standalone macro as its primary operation.

pipelines/includes/local_includes/mate_jobs/my_mate_job.yml
"Run My Macro":
  extends:
    - .modelling_and_transformation_base
    - .agent_tag
  stage: "Additional Configuration"
  image: $DATAOPS_TRANSFORM_RUNNER_IMAGE
  variables:
    TRANSFORM_ACTION: OPERATION
    TRANSFORM_OPERATION_NAME: my_macro
  script:
    - /dataops
  icon: ${TRANSFORM_ICON}

6. Run a Macro using the SOLE Admin Role

Setting DATABASE_PROFILE and DATABASE_TARGET to the values snowflake_master and default, respectively (as per the example below) will execute the macro using the higher privileges that SOLE uses.

pipelines/includes/local_includes/mate_jobs/my_mate_job.yml
"Run My Macro as Admin":
extends:
- .modelling_and_transformation_base
- .agent_tag
stage: "Additional Configuration"
image: $DATAOPS_TRANSFORM_RUNNER_IMAGE
variables:
TRANSFORM_ACTION: OPERATION
TRANSFORM_OPERATION_NAME: my_macro
DATABASE_PROFILE: snowflake_master
DATABASE_TARGET: default
script:
- /dataops
icon: ${TRANSFORM_ICON}

Project Resources

None

Host Dependencies (and Resources)

None