Orchestrators Release Notes (April 25, 2022)
v5.4.6 (5-latest | 5-stable)
We have added new features to the existing DataOps Orchestrators' functionality. They are divided into the following categories:
Feature Extended Snowflake Object Management
We have expanded the DataOps Snowflake Object Lifecycle Engine (SOLE) to support the following managed objects, parameters, and grants:
- Sequence and Materialized View Objects can now be managed through SOLE.
- We've further enhanced SOLE's capabilities to manage Snowflake objects by adding support for the following parameters (a configuration sketch illustrating some of them follows this list):
  - insert_only and show_initial_rows for the DataOps Stream Object
  - user_task_managed_initial_warehouse_size for the DataOps Task Object
  - rsa_public_key and rsa_public_key_2 for the DataOps User Object
  - max_concurrency_level and statement_queued_timeout_in_seconds for the DataOps Warehouse Object
  - directory for the DataOps Stage Object
- New and existing DataOps Network Policies can now be managed through SOLE.
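As a rough illustration of some of these new parameters, here is a hedged SOLE configuration sketch. The object and database names are invented, and the nesting of streams under databases and schemas, as well as the top-level warehouses block, are assumptions about the configuration layout; check the SOLE reference documentation for the exact schema.

```yaml
# Sketch only: names are hypothetical and the file layout is assumed.
databases:
  SALES_DB:
    schemas:
      RAW:
        streams:
          ORDER_CHANGES:
            on_table: ORDERS
            insert_only: true        # new Stream parameter
            show_initial_rows: true  # new Stream parameter

warehouses:
  TRANSFORM_WH:
    warehouse_size: XSMALL
    max_concurrency_level: 4                   # new Warehouse parameter
    statement_queued_timeout_in_seconds: 600   # new Warehouse parameter
```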
Feature Enhanced Logging
We have developed richer log files to enable faster debugging and troubleshooting. The SOLE log files now include information about:
Hooks Logging
A hook is a placeholder or interface that lets you insert a custom function before or after key stages in the SOLE process. With this upgrade, the SOLE log files capture more detail about the hooks you use in your pipeline architecture, giving you better visibility and easier debugging.
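For orientation only, here is a purely illustrative sketch of what a hook definition could look like in a SOLE project. The keys shown (hooks, pre_hooks, post_hooks, command) and the script names are assumptions rather than the documented schema; refer to the SOLE documentation for the real syntax.

```yaml
# Illustrative only: key names and structure are assumptions, not the documented SOLE schema.
hooks:
  pre_hooks:
    - command: "snowsql -f ./scripts/prepare_environment.sql"   # hypothetical step before SOLE applies changes
  post_hooks:
    - command: "snowsql -f ./scripts/validate_grants.sql"       # hypothetical step after SOLE completes
```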
Here are the Before and After Hook logging examples:
Destroy Job Logging
Running the SOLE Destroy Job in debug mode now logs every object being deleted, plus a summary of the objects that were not deleted and the reason each of them was skipped.
Here are the Before and After Destroy Job logging examples:
Missing Warehouse Warning
SOLE fails if the Snowflake user it connects with has no associated warehouse (default or specified). From this upgrade onwards, SOLE logs a warning to flag this configuration issue before the pipeline fails.
SOLE-Specific Debugging
When debugging, you can use the new DATAOPS_SOLE_DEBUG parameter to get SOLE-specific debug output without SOLE revealing credentials in its log files.
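As an illustration, here is a minimal sketch of enabling this in a project's pipeline configuration. Setting it as a pipeline variable and using "1" as the enabling value are assumptions; check the SOLE reference for the exact convention.

```yaml
# Sketch only: the variable name comes from this release note, but setting it as a
# pipeline variable and using "1" as the enabling value are assumptions.
variables:
  DATAOPS_SOLE_DEBUG: "1"   # turn on SOLE-specific debug logging
```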
Feature Increased Flexibility
This upgrade makes SOLE more flexible in the following areas:
Externally-Defined Namespacing
Namespacing is now easier to customize: you can opt to use your own namespacing format by including namespace: external in your SOLE configuration files. No prefixes or suffixes will be added to your Snowflake objects, and SOLE will continue to manage the complete object lifecycle.
For more information, see the DataOps Namespace and Environment Management reference document.
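As an illustration, here is a minimal sketch of an external namespace configuration. The namespace: external key is taken from this release; its exact placement and the surrounding databases block are assumptions, and the database name is invented.

```yaml
# Sketch only: key placement and the databases block are assumed; names are hypothetical.
namespace: external          # use object names exactly as written, with no prefixes or suffixes

databases:
  ANALYTICS:                 # created as "ANALYTICS", not "<prefix>_ANALYTICS"
    comment: "Managed by SOLE with externally-defined namespacing"
```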
Here are several examples of SOLE log files, logging namespacing details:
- External Namespace Configuration
- SOLE Log Entry for Role Creation
- SOLE Log Entry for Role Removal
Choosing a Warehouse
By default, SOLE uses the warehouse linked to the Snowflake user credentials it connects with, and it fails if that user has no default warehouse attached to its account. From this upgrade onwards, you can specify the warehouse you want SOLE to use, so the default warehouse attached to your user account is no longer the only option.
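For instance, a hedged sketch of pointing SOLE at a specific warehouse through a pipeline variable; the variable name DATAOPS_SOLE_WAREHOUSE and the warehouse name are illustrative assumptions, so check the SOLE reference for the exact setting to use.

```yaml
# Sketch only: the variable name and warehouse name below are illustrative assumptions.
variables:
  DATAOPS_SOLE_WAREHOUSE: TRANSFORM_WH   # use this warehouse instead of the user's default
```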