How to Manage Log Output

As an administrator or team member, you use log messages to analyze your instances and diagnose problems while running jobs and pipelines.

Configuring logging in DataOps

Logs are generated by the DataOps runner while processing jobs and pipelines. When running pipelines, the log output varies based on what the jobs in your pipelines are doing and what you have defined in the runner's config.toml file of your built-in or local image.

You can change the runner's behavior by modifying the config.toml settings. This file's [[runners]] section has a few configuration parameters you can adjust, including output_limit, which defines the maximum build log size in kilobytes. Logs are captured according to what you have specified in the config file.
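For context, the relevant part of a runner's config.toml might look like the following sketch. The runner name, URL, and concurrency values here are placeholders, not real settings:

```toml
# Illustrative config.toml fragment for a DataOps runner (values are placeholders)
concurrent = 1

[[runners]]
  name = "my-dataops-runner"            # hypothetical runner name
  url = "https://app.dataops.example"   # placeholder URL
  executor = "docker"
  output_limit = 4096                   # maximum build log size in KB (default: 4 MB)
```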

Setting log output limits

Set the log output limit from the config.toml file of your DataOps runner. The default maximum log size defined in the file is 4096 KB (4 MB). To increase this value:

  1. Open the config.toml file of your runner.
  2. Set the value of the output_limit parameter in kilobytes according to your needs. For example, to allow logs of up to 50 MB:
output_limit = 51200

We do not recommend letting your logs exceed 50 MB. For anything larger, collect the logs in artifacts as described in Writing log output to artifacts.

  3. Restart your runner to apply the new limit.

Writing log output to artifacts

Job artifacts are files and directories generated by a job as it runs. They capture the results of the jobs in a pipeline. If you have lots of logs, you may want to collect the output into a file and add it as an artifact. Let's look at how to create the artifact in more detail.

  1. Open the <job>.yml file.

  2. Add the artifacts keyword to this file.

    For instance, the following YAML config file shows how to set up an artifact file to collect the logs of a job called my reporting job:

    my reporting job:
      artifacts:
        name: ${CI_JOB_ID}
        when: always
        paths:
          - $CI_PROJECT_DIR/${CI_JOB_ID}_output.log
        expires_in: 1 week

    In this example, the following details are relevant:

    • A job called my reporting job runs and uses the variable ${CI_JOB_ID} to generate an artifact file named using the job identifier, which is unique within the instance
    • The when: always keyword makes the job upload this artifact every time the pipeline runs, whether the job succeeds or fails; if when is omitted, artifacts are stored only when the job succeeds
    • The paths keyword adds the file <CI_JOB_ID>_output.log, located in the project root directory CI_PROJECT_DIR, to the job artifacts
    • The expires_in keyword keeps the artifact for one week before it is marked for deletion
  3. Once the pipeline runs, navigate to CI/CD > Jobs to download the artifact file, or download it from the job execution page.
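For the artifact to contain anything useful, the job's script must write its log output to the file being collected. A minimal sketch, where run_report is a hypothetical placeholder for the job's real commands:

```yaml
# Sketch: redirect the job's output into the file collected as an artifact.
# run_report is a hypothetical placeholder for your job's real commands.
my reporting job:
  script:
    - run_report > $CI_PROJECT_DIR/${CI_JOB_ID}_output.log 2>&1
  artifacts:
    name: ${CI_JOB_ID}
    when: always
    paths:
      - $CI_PROJECT_DIR/${CI_JOB_ID}_output.log
```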

For more information about artifacts, see job artifacts.

The default size of artifacts is 100 MB. If you have lots of logs and need a larger size, contact your DataOps administrator.


To maintain a good service level for all our customers, artifact support is subject to a fair use policy. We will contact you if you are storing an excessive number of artifacts.

Setting artifact retention period

Use the expires_in keyword in the <pipeline>-ci.yml file to specify how long job artifacts are stored before they are deleted. If no value is set, job artifacts are deleted after the default expiration time of 30 days.

  1. Open the <job>.yml file.

  2. In the artifacts section, add a value to the expires_in keyword followed by a time unit.

    If no unit is provided, the time is in seconds. Valid time units include: seconds, sec, mins, min, hrs, day, mos, yrs, week, and never.

my reporting job:
  artifacts:
    expires_in: 2 hrs 20 min
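Other duration formats should work the same way, given the units listed above. A few illustrative (hypothetical) jobs:

```yaml
# Hypothetical jobs showing other valid expires_in formats
hourly job:
  artifacts:
    expires_in: 3600          # no unit given, so interpreted as 3600 seconds (1 hour)

long lived job:
  artifacts:
    expires_in: 6 mos 1 day   # units can be combined

permanent job:
  artifacts:
    expires_in: never         # never marked for deletion
```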

The value set in expires_in doesn't affect artifacts from the latest successful job unless you turn off this functionality for your project at Settings > CI/CD > Artifacts > Keep artifacts for the most recent successful job.

If you disable this option, the latest artifacts do not immediately expire. A new pipeline must run before the latest artifacts expire and are deleted.


It is not possible to set a retention period for pipeline artifacts. Artifacts of the latest pipeline are kept forever, while artifacts of previous pipelines are deleted 7 days after their creation date.