Running Liquibase Enterprise in Parallel
Please keep these aspects in mind when designing your CI/CD architecture around Liquibase Enterprise.
Packager Process
The Deploy Packager process performs a Backup and Restore operation on the REF database, so only one Packager operation should run at a time per REF database. Running multiple Backup and Restore operations concurrently can have negative consequences for the REF database.
If you have a project with multiple pipelines and a separate REF database per pipeline, the separate pipelines may run Packager simultaneously.
Forecast Process
The Forecast operation does not make any updates to the database and does not place a lock on the DATABASECHANGELOG table. It is safe for multiple nodes to run Forecast operations in parallel.
Deploy Process
The Deploy operation sets a lock on the environment’s DATABASECHANGELOG table that is specified in the datical.project file. If multiple Deploy processes are triggered for the same DaticalDB project in parallel, the processes will wait until the lock has been released.
By default, the lock wait settings are:
ChangeLogLockWaitTimeInMinutes = 5 (number of minutes to wait for the changelog lock to become available)
ChangeLogLockPollRate = 10 (number of seconds to wait between checks when locked)
To change the wait time, include this argument in the hammer deploy command:
hammer deploy <dbdef> --labels=<labels> --vmargs "-Dliquibase.changeLogLockWaitTimeInMinutes=10"
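The poll rate can be adjusted the same way. As a sketch, and assuming liquibase.changeLogLockPollRate is the corresponding property name in your version (confirm against your documentation), both settings can be passed together:
hammer deploy <dbdef> --labels=<labels> --vmargs "-Dliquibase.changeLogLockWaitTimeInMinutes=10 -Dliquibase.changeLogLockPollRate=30"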
Setting up an additional pipeline for long-running scripts
Teams sometimes need to deploy scripts with long running times. You can set up a separate pipeline with a separate schema for the DATABASECHANGELOG table so that these long-running scripts do not block other changes in the main pipeline.
Items to consider
Do not use this pipeline if there are dependencies between the long-running script and other changes in the pipeline.
If the long-running scripts pipeline shares a REF database with the main pipeline, the Packager operation must not run on both pipelines simultaneously. Ensure that your CI/CD configuration makes this impossible, either with a build blocker or some other mechanism.
There needs to be a way for developers to select the long-running scripts pipeline. This is typically done via branch naming conventions: for example, a dedicated branch such as branch2, or any branch whose name contains the text "pipeline2" (a CI/CD sketch appears in the Instructions below).
Instructions
The exact steps for setting up an additional pipeline depend on your implementation and CI/CD tooling, but in general, follow these steps:
In the datical.project file, the best practice is to create a separate REF database for the new pipeline, but it is possible to share a REF database between two pipelines.
If you are sharing a REF database, the best practice is to set up a separate dbDef, e.g. REF_PIPELINE2.
Set the appropriate label on any new dbDef. The label should be the name of the pipeline, e.g. pipeline2.
Set the appropriate contexts on the dbDef, e.g. REF,REF_PIPELINE2.
For each of the databases in the pipeline, add new dbDefs. They can point to the same databases as your existing dbDefs, but give them unique names, e.g. DEV_PIPELINE2, QA_PIPELINE2, Production_PIPELINE2.
Note: if you are copying from existing dbDefs, be sure to clear out the dbDefsId values.
Set the appropriate label on each new dbDef. The label should be the name of the pipeline, e.g. pipeline2.
Set the appropriate contexts on each dbDef, e.g. DEV_PIPELINE2, QA_PIPELINE2, Production_PIPELINE2.
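For illustration only, new dbDef entries in datical.project might look like the sketch below. Only the name, labels, contexts, and dbDefsId attributes discussed above are shown; the remaining connection attributes vary by database platform, so copy a working dbDef from your own project and adjust it rather than using this fragment as-is.
<!-- sketch only: connection attributes omitted; dbDefsId left empty per the note above -->
<dbDefs name="REF_PIPELINE2" labels="pipeline2" contexts="REF,REF_PIPELINE2" dbDefsId=""/>
<dbDefs name="DEV_PIPELINE2" labels="pipeline2" contexts="DEV_PIPELINE2" dbDefsId=""/>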
In the datical.project file, add a new plan/pipeline, e.g. Pipeline2, with the desired databases. These should be the new dbDefs, e.g. REF_PIPELINE2, DEV_PIPELINE2, etc.
Note: if you are copying from existing plans, be sure to clear out the plansId values.
Add a new Tracking Schema, e.g. LIQUIBASE2, to the REF database and all other databases in the pipeline.
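How the schema is created depends on your database platform. For example, on SQL Server or PostgreSQL a statement like the following would work, while on Oracle a schema corresponds to a user and the DDL differs:
CREATE SCHEMA LIQUIBASE2;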
In the datical.project file, update the trackingSchema value to be parameterized, e.g. trackingSchema="${TRACKING_SCHEMA}".
In the changelog/changelog.xml file, add the TRACKING_SCHEMA property values. These go at the top of the changelog, before the first <changeSet> tag.
<property context="REF1" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE"/> <property context="REF_PIPELINE2" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE2"/> <property context="DEV" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE"/> <property context="DEV_PIPELINE2" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE2"/> <property context="QA" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE"/> <property context="QA_PIPELINE2" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE2"/> <property context="Production" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE"/> <property context="Production_PIPELINE2" labels="" name="TRACKING_SCHEMA" value="LIQUIBASE2"/>
In the deployPackager.properties file, add an entry for the new pipeline.
# Pipeline2 pipeline
Pipeline2.sqlScmLastImportID=<set with commitID>
# The next line is not needed if using directory scmBranchHandling
Pipeline2.sqlScmBranch=branch2
You will need to configure your CI/CD tool to trigger Packager for the correct pipeline based on your branch naming structure, e.g. routing any branch whose name contains "pipeline2" to the long-running scripts pipeline.
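As a sketch, a CI shell step could route on the branch name before invoking Packager. The GIT_BRANCH variable and the Packager invocation shown here are assumptions; substitute the branch variable and the Packager command your own jobs use:
# Hypothetical routing logic in a CI shell step
if [[ "${GIT_BRANCH}" == *pipeline2* ]]; then
  hammer groovy deployPackager.groovy pipeline=Pipeline2   # long-running scripts pipeline
else
  hammer groovy deployPackager.groovy pipeline=Pipeline1   # main pipeline
fi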
Your CI/CD configuration will also need to run Deployments for the second pipeline from separate agents so that they can run in parallel with the main pipeline.
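Using the hammer deploy form shown earlier, and assuming the dbDef names and pipeline2 label from the steps above, a deployment on the second pipeline would look like:
hammer deploy DEV_PIPELINE2 --labels=pipeline2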