
Please keep these aspects in mind when designing your CI/CD architecture around Liquibase Enterprise.

Packager Process

The Deploy Packager process performs a Backup and Restore operation on the REF database, so only one Packaging operation should run at a time per REF database. There can be negative consequences for the REF database if multiple Backup and Restore operations run concurrently.

If you have a project with multiple pipelines and a separate REF database per pipeline, the separate pipelines may run Packager simultaneously.
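If pipelines must share a single REF database, serialize their Packager runs in your CI/CD tool, for example with a build blocker or a concurrency group. As a minimal sketch, assuming the competing jobs run on the same build agent, a shell wrapper around flock can provide the same guarantee (the lock-file path and the packager arguments are placeholders for your own setup):

    # Allow only one Packager run at a time against a shared REF database.
    # The lock-file path is a placeholder; use any path visible to all competing jobs.
    (
        flock --exclusive --timeout 3600 200 || { echo "Timed out waiting for REF packager lock"; exit 1; }
        hammer groovy deployPackager.groovy pipeline=main commitPrefix="[skip ci]" scm=true
    ) 200>/var/lock/ref-packager.lock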

Forecast Process

The Forecast operation does not make any updates to the database and does not place a lock on the DATABASECHANGELOG table. It is safe for multiple nodes to run Forecast operations in parallel.
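For example, several jobs can run a forecast like the following against the same project at the same time without conflicting (placeholders follow the same convention as the deploy command below):

hammer forecast <dbdef> --labels=<labels>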

Deploy Process

The Deploy operation sets a lock on the DATABASECHANGELOG table of the environment specified in the datical.project file. If multiple Deploy processes are triggered in parallel for the same DaticalDB project, each process waits until the lock has been released.

By default, the lock wait settings are:

  • ChangeLogLockWaitTimeInMinutes = 5 (number of minutes to wait for the changelog lock to become available)

  • ChangeLogLockPollRate = 10 (number of seconds to wait between checks when locked)

If you wish to change the wait time, include this argument in the hammer deploy command:

hammer deploy <dbdef> --labels=<labels> --vmargs "-Dliquibase.changeLogLockWaitTimeInMinutes=10"
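The poll rate can be overridden the same way. For example, to wait up to 10 minutes while checking every 5 seconds (values shown are illustrative):

hammer deploy <dbdef> --labels=<labels> --vmargs "-Dliquibase.changeLogLockWaitTimeInMinutes=10 -Dliquibase.changeLogLockPollRate=5"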

Setting up an additional pipeline for long-running scripts

There may be instances where teams are deploying scripts that have long run times. It is possible to set up a separate pipeline with a separate schema for the DATABASECHANGELOG table so that these long-running scripts do not block other changes in the main pipeline.

Items to consider

  1. Do not use this pipeline if there are dependencies between the long-running script and other changes in the pipeline.

  2. If the long-running scripts pipeline shares a REF database with another pipeline, the Packager operation must not run in both pipelines simultaneously. Ensure that your CI/CD configuration makes this impossible, either with a build blocker or some other mechanism.

  3. There needs to be a way for developers to select the long-running scripts pipeline. This will most likely be done via branch naming conventions: for example, a dedicated long-running branch name (e.g. branch2), or any branch whose name contains the text “pipeline2” (see the sketch below).
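As a minimal sketch of the “contains” convention in shell, assuming your CI tool exposes the branch name in a BRANCH_NAME variable (the variable name and the “pipeline2” marker are assumptions; step 8 of the Instructions shows an Azure DevOps variant):

    # Route branches whose name contains "pipeline2" to the long-running scripts pipeline.
    if [[ "$BRANCH_NAME" == *pipeline2* ]]; then
        PIPELINE=pipeline2
    else
        PIPELINE=main
    fi
    echo "Selected pipeline: $PIPELINE"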

Instructions

Setting up an additional pipeline will involve instructions specific to your implementation and CI/CD tooling, but in general follow these steps:

  1. In the datical.project file, the best practice is to create a separate REF database for the new pipeline, but it is possible to share a REF database between two pipelines.

    1. If you are sharing a REF database, the best practice is to set up a separate dbDef, e.g. REF_PIPELINE2.

    2. Set the appropriate label on any new dbDef. The label should be the name of the pipeline, e.g. pipeline2.

    3. Set the appropriate contexts on the dbDef, e.g. REF,REF_PIPELINE2.

  2. For each of the databases in the pipeline, add new dbDefs. They can point to the same databases as the existing dbDefs, but give them unique names, e.g. DEV_PIPELINE2, QA_PIPELINE2, Production_PIPELINE2.

    1. Note: if you are copying from existing dbDefs, be sure to clear out the dbDefsId values.

    2. Set the appropriate label on any new dbDef. The label should be the name of the pipeline, e.g. pipeline2.

    3. Set the appropriate contexts on the dbDef, e.g. DEV_PIPELINE2, QA_PIPELINE2, Production_PIPELINE2.

  3. In the datical.project file, add a new plan/pipeline, e.g. Pipeline2, with the desired databases. These should be the new dbDefs, e.g. REF_PIPELINE2, DEV_PIPELINE2, etc.

    1. Note: if you are copying from existing plans, be sure to clear out the plansId values.

  4. Add a new tracking schema, e.g. LIQUIBASE2, to the REF database and to all other databases in the pipeline.
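    How you create the tracking schema is platform-specific. As an illustrative sketch for PostgreSQL (host, user, and database names are placeholders; on Oracle a schema is created by creating a user):

        psql -h <host> -U <admin_user> -d <database> -c "CREATE SCHEMA liquibase2;"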

  5. In the datical.project file, update the trackingSchema value to be parameterized, e.g. trackingSchema="${TRACKING_SCHEMA}".

  6. In the changelog/changelog.xml file, add the TRACKING_SCHEMA property values. These go at the top of the changelog, before the first <changeSet> tag.

    <property context="REF1" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE"/>
    <property context="REF_PIPELINE2" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE2"/>
    
    <property context="DEV" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE"/>
    <property context="DEV_PIPELINE2" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE2"/>
    
    <property context="QA" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE"/>
    <property context="QA_PIPELINE2" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE2"/>
    
    <property context="Production" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE"/>
    <property context="Production_PIPELINE2" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE2"/>
  7. In the deployPackager.properties file, add entries for the new pipeline.

    # pipeline2 pipeline
    pipeline2.sqlScmLastImportID=<set with commitID>
    pipeline2.sqlScmBranch=branch2 # Not needed when using directory scmBranchHandling
  8. Configure your CI/CD tool to trigger Packager for the correct pipeline based on the branch name structure, e.g. (Azure DevOps variables shown):

    if [[ "$(Build.SourceBranchName)" == branch2 ]]; then
        hammer groovy deployPackager.groovy pipeline=pipeline2 commitPrefix="[skip ci]" scm=true labels=$(Build.BuildId),pipeline2,$(Build.SourceBranchName)
    else
        hammer groovy deployPackager.groovy pipeline=main commitPrefix="[skip ci]" scm=true labels=$(Build.BuildId),main,$(Build.SourceBranchName)
    fi
    if [ $? -ne 0 ]; then exit 1; fi
