
Procedure

You can provide configuration to the Snowflake Object Lifecycle Engine (SOLE) for the following procedure operations:

  • Manage the lifecycle of new and existing procedures
  • Manage grants of a procedure

Usage

note

We have introduced SOLE for Data Products, a new framework for SOLE that helps you build an ecosystem of data products. The major difference is how you define Snowflake objects in the configuration file: rather than a grouped collection of objects, SOLE for Data Products uses modular, self-describing, and explicit object definitions.
Learn more about SOLE for Data Products, currently available as a public preview.

We have also introduced Data Products as an extra layer on top of the data product platform capabilities, making data products easier to manage. Learn more about Data Products, currently available as a private preview.

databases:
  <database-name>:
    schemas:
      <schema-name>:
        procedures:
          <procedure-name>:
            <configuration-key>: <value>
            grants:
              <privilege>:
                - <role-name>
                - <role-name>

Supported parameters

The engine supports the parameters listed below.

| Configuration Key | Required/Optional | Data Types and Values | Description |
| --- | --- | --- | --- |
| return_type | Required | String | The return type of the procedure |
| statement | Required | String. See the statement parameter section | Specifies the JavaScript/Java/Python/SQL source code of the procedure |
| arguments | Optional | Object. See the arguments parameters section | List of the arguments for the procedure |
| comment | Optional | String | Specifies a comment for the procedure |
| deleted | Optional | Boolean: True enables deletion prevention, False does nothing | Specifies which objects can be deleted |
| execute_as | Optional | String | Sets the execution context - see caller's rights and owner's rights |
| external_access_integration | Optional | List of String | Names the external access integrations whose network rules and secrets allow handler code to reach external locations, such as an external REST API |
| grants | Optional | Map. See Supported procedure grants to roles | List of privileges and the roles they are granted to on the current procedure |
| language | Optional | String: javascript (default), sql, python | Specifies the language of the stored procedure code |
| manage_mode | Optional | String: all (default), none, grants | Configures which properties to manage for the procedure. See Changing Manage Mode before changing the value |
| null_input_behavior | Optional | String | Specifies the behavior of the procedure when called with null inputs |
| return_behavior | Optional | String | Specifies the behavior of the procedure when returning results |
| runtime_version | Optional (Required for Python procedures) | String | Specifies the Python runtime version as a string, e.g. runtime_version: "3.8" |
| packages | Optional | List of String | Packages required by the Java/Python procedure code |
| handler | Optional | String | The handler method for the Java/Python procedure |
| imports | Optional | List of String. See the imports parameter section | Implementation files to import for the Java/Python procedure |
warning

During subsequent pipeline runs, you might observe force-replacement behavior in the PLAN and APPLY phases when parameters such as return_behavior, null_input_behavior, statement, arguments, or return_type change.

arguments parameters

You can specify the arguments for the procedure.

List multiple arguments in object format under the arguments parameter; each argument supports the following parameters:

| Configuration Key | Required/Optional | Data Types and Values | Description |
| --- | --- | --- | --- |
| type | Required | String | Data type of the argument |
| default | Optional | String, Boolean, Null, Number (Float or Integer) | The default value for the argument |
warning

Using the PUT or GET commands in a SQL statement throws an error due to recent updates in Snowflake releases.

Example

arguments:
  ARG_1:
    type: VARCHAR
  ARG_2:
    type: DATE

statement parameter

You can define the statement parameter by inputting raw SQL/JavaScript/Java/Python code directly, or by referencing a file containing the desired code with the !dataops.include directive. Ensure the referenced file is stored under the $CI_PROJECT_DIR directory.

This code is used to create the procedure's logic that executes complex operations or tasks within the Snowflake environment.

SQL procedure with statement

dataops/snowflake/databases.template.yml
databases:
  DATABASE_1:
    schemas:
      SCHEMA_1:
        procedures:
          SQL_PROCEDURE:
            comment: "return 1 in SQL"
            language: sql
            return_type: INT
            execute_as: CALLER
            statement: SELECT 1 AS x;

JavaScript procedure with statement

dataops/snowflake/databases.template.yml
databases:
  DATABASE_1:
    schemas:
      SCHEMA_1:
        procedures:
          JAVASCRIPT_PROCEDURE:
            comment: "return 1 in JavaScript"
            language: javascript
            return_type: FLOAT
            execute_as: CALLER
            statement: >-
              var x = 1;
              return x;

Python procedure with statement

dataops/snowflake/databases.template.yml
databases:
  DATABASE:
    schemas:
      SCHEMA:
        procedures:
          PYTHON_PROCEDURE:
            comment: "return 1 in Python"
            language: python
            runtime_version: "3.8"
            handler: example_handler
            packages: ["snowflake-snowpark-python"]
            arguments:
              x:
                type: INT
            return_type: INT
            execute_as: CALLER
            statement: >-
              def example_handler(session, x):
                  return x

Python procedure with code uploaded from a stage

In a complex application where numerous dependency files are uploaded to a stage, you can simplify procedure creation by accessing these files through a Python handler with imports. The handler parameter specifies which function to call, while the imports parameter makes the staged module available to it.

dataops/snowflake/databases.template.yml
databases:
  DATABASE:
    schemas:
      SCHEMA:
        procedures:
          PYTHON_PROCEDURE:
            comment: "procedure using imports"
            language: python
            runtime_version: "3.8"
            handler: main
            imports: ['@"DATAOPS_PROD"."SAMPLES"."INTERNAL_STAGE"/project.zip']
            packages: ["snowflake-snowpark-python"]
            arguments:
              x:
                type: INT
            return_type: INT
            execute_as: CALLER
            statement: >-
              def main(session, x):
                  return imported_function(session, x)

imports parameter

This parameter specifies the implementation files to import for a Java/Python procedure, where the implementation lives in a binary Python package or JAR files. When importing a Python module, you must also set the handler and packages keys in the DDL.

Supported procedure grants to roles

Following are the privileges you can grant to roles in the procedure definition:

  • USAGE
  • OWNERSHIP
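
Tying this to the configuration schema above, a minimal sketch of procedure grants; the database, schema, procedure, and role names are illustrative:

```yaml
databases:
  DATABASE_1:
    schemas:
      SCHEMA_1:
        procedures:
          SQL_PROCEDURE:
            language: sql
            return_type: INT
            statement: SELECT 1 AS x;
            grants:
              # Both supported privileges; role names are hypothetical.
              USAGE:
                - ANALYST_ROLE
                - REPORTER_ROLE
              OWNERSHIP:
                - ADMIN_ROLE
```

Each privilege maps to a list of roles, so granting the same privilege to several roles needs only additional list entries.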

Examples

JavaScript procedure

dataops/snowflake/databases.template.yml
databases:
  DATABASE_1:
    schemas:
      SCHEMA_1:
        procedures:
          JAVASCRIPT_PROCEDURE:
            comment: "Procedure using JavaScript language"
            return_type: VARCHAR
            language: javascript
            execute_as: CALLER
            null_input_behavior: "RETURNS NULL ON NULL INPUT"
            statement: "return 1"
            arguments:
              ARG_1:
                type: VARCHAR
              ARG_2:
                type: DATE

SQL procedure

dataops/snowflake/databases.template.yml
databases:
  DATABASE_1:
    schemas:
      SCHEMA_1:
        procedures:
          SQL_PROCEDURE:
            comment: "Procedure using SQL language"
            language: sql
            return_type: VARCHAR
            execute_as: CALLER
            statement: >-
              DECLARE
                radius_of_circle FLOAT;
                area_of_circle FLOAT;
              BEGIN
                radius_of_circle := 3;
                area_of_circle := pi() * radius_of_circle * radius_of_circle;
                RETURN area_of_circle;
              END;

Python procedure

dataops/snowflake/databases.template.yml
databases:
  DATABASE:
    schemas:
      SCHEMA:
        procedures:
          PYTHON_PROCEDURE:
            comment: "Procedure using Python language"
            return_type: VARCHAR
            language: python
            runtime_version: "3.8"
            packages:
              [
                "snowflake-snowpark-python",
                "numpy",
                "pandas",
                "scikit-learn",
                "scikit-image",
              ]
            handler: run
            execute_as: CALLER
            null_input_behavior: "RETURNS NULL ON NULL INPUT"
            statement: |
              def run(session):
                  result = 1
                  return result

Procedure having $ character in the statement

warning

Due to a known limitation that can lead to an error, you must escape the $ character if it appears in the statement parameter of the procedure definition: write ${"$"} instead of $.

dataops/snowflake/databases.template.yml
databases:
  DATABASE_1:
    schemas:
      SCHEMA_1:
        procedures:
          JAVASCRIPT_PROCEDURE:
            comment: "Procedure using JavaScript language"
            return_type: VARCHAR
            language: javascript
            execute_as: CALLER
            null_input_behavior: "RETURNS NULL ON NULL INPUT"
            statement: '
              const application = "DataOps.live";
              const message = `This is ${"$"}{application}`;
              return message;
            '
            arguments:
              ARG_1:
                type: VARCHAR
              ARG_2:
                type: DATE

Examples of the !dataops.include directive in configuration files

You can specify the statement in a file external to the YAML configuration. The file path of the directive must be relative to the git repository root ($CI_PROJECT_DIR).

The correct way to specify that path is !dataops.include common/files/test.sql.

Examples that will not lead to the desired result are:

  • !dataops.include /test.sql - note the leading /
  • !dataops.include /home/user/workspace/common/files/test.sql - attempting to use the full path to the workspace
  • !dataops.include test.sql - will look for the file at $CI_PROJECT_DIR/test.sql but not at $CI_PROJECT_DIR/dataops/snowflake/test.sql
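
The resolution rule above can be sanity-checked outside a pipeline. This sketch uses a temporary directory as a stand-in for $CI_PROJECT_DIR (the repository root a runner would provide):

```shell
# Stand-in for the repository root the runner exposes as CI_PROJECT_DIR.
CI_PROJECT_DIR=$(mktemp -d)

# Lay out the include file exactly where the directive expects it.
mkdir -p "$CI_PROJECT_DIR/common/files"
printf 'select 1;\n' > "$CI_PROJECT_DIR/common/files/test.sql"

# The directive path is joined to the repository root...
include_path="common/files/test.sql"
test -f "$CI_PROJECT_DIR/$include_path" && echo "resolves from repo root"

# ...so a leading slash (an absolute path) does not resolve.
test -f "/test.sql" || echo "leading slash does not resolve"
```

The same logic explains the other failing forms: a workstation-absolute path and a bare filename both point outside `$CI_PROJECT_DIR/common/files`.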

SQL procedure

common/files/test.sql
declare
  radius_of_circle float;
  area_of_circle float;
begin
  radius_of_circle := 3;
  area_of_circle := pi() * radius_of_circle * radius_of_circle;
  return area_of_circle;
end;
dataops/snowflake/databases.template.yml
databases:
  DATAOPS_TEST_DATABASE_1_{{ env.DATAOPS_ENV_NAME }}:
    schemas:
      SCHEMA_1:
        procedures:
          TEST_PROCEDURE:
            return_type: VARCHAR(16777216)
            statement: !dataops.include common/files/test.sql
            arguments: {}
            execute_as: OWNER
            language: SQL
            manage_mode: all

JavaScript procedure

common/files/test.js
var my_sql_command = "select * from table1";
var statement1 = snowflake.createStatement({ sqlText: my_sql_command });
var result_set1 = statement1.execute();
// Loop through the results, processing one row at a time...
while (result_set1.next()) {
  var column1 = result_set1.getColumnValue(1);
  var column2 = result_set1.getColumnValue(2);
}
return 0.0; // Replace with something more useful.
dataops/snowflake/databases.template.yml
databases:
  DATABASE_1:
    schemas:
      SCHEMA_1:
        procedures:
          TEST_PROCEDURE:
            return_type: VARCHAR(16777216)
            statement: !dataops.include common/files/test.js
            arguments: {}
            execute_as: OWNER
            language: JAVASCRIPT
            manage_mode: all