

Overview


Below are suggestions that can improve packager performance.  Which suggestions are most relevant depends on which stages of packaging, and which packaging folders and script types, are showing longer times in the packagerReport.html.  The first step is to look in your packager reports to see which stage(s) of packaging are taking the most time.

...

First, run deployPackager and obtain timing data on the complete workflow from the packagerReport.html.  You may want to run packager multiple times, with different types of scripts committed in different folders.  The timing will likely differ between script types because packager uses a different process for each: for example, the ddl folder (when using the default ddl packageMethod=convert) versus the sql_direct or ddl_direct folder (or any folder with any name that uses packageMethod=direct or packageMethod=ddl_direct) versus the function/procedure/etc. stored logic folders (which use packageMethod=storedlogic).
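
As a rough sketch, a packager run might be invoked as follows (the pipeline name and label value are placeholders, and the exact invocation and report location depend on your installation and version):

```
# Run packager for one pipeline; timing data for each stage is written to
# packagerReport.html in the project's report output area (location may vary).
# "MyPipeline" and the label value are placeholders.
hammer groovy deployPackager.groovy pipeline=MyPipeline labels="current"
```

Comparing the stage timings across runs that package different script types will show where the time is going.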

...

2.  If packaging scripts in the "ddl" folder (when using Fixed Folders) or packageMethod=convert (when using Flexible Folders) is slow, consider using the optional "ddlExcludeList" property in the deployPackager.properties file to exclude certain object types from the snapshots that are used for before and after comparisons with ddl scripts.  If you are packaging stored logic objects in their recommended corresponding packaging folders (such as packaging function scripts in the "function" folder, and packaging procedure scripts in the "procedure" folder) then you do NOT need to have those object types in the snapshots used by the convert method when packaging ddl scripts in the "ddl" folder.  Excluding stored logic object types from the snapshots used in the "ddl" folder can improve performance, especially if you have a lot of stored logic objects in your database. 

Please see "ddlExcludeList" in this document → Using the required deployPackager.properties file
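
For example, if function and procedure scripts are packaged in their own stored logic folders, an entry along these lines could exclude those object types from the ddl snapshots (the object-type names shown are assumptions; see the linked document for the exact values ddlExcludeList accepts):

```
# deployPackager.properties — exclude stored logic object types from the
# snapshots used for before/after comparisons in ddl (convert) packaging.
# The object-type names below are illustrative; see the ddlExcludeList docs.
ddlExcludeList=function,procedure,package,trigger
```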

3.  Use backup "on_demand" to create static backups, such as a nightly backup that can be used by packaging jobs as needed.  You would have two different automation jobs.  One job is your existing packager job that processes your new sql scripts.  The other job will be a job that runs packager in preview mode each night to only create the backup file.  Please see this document for the appropriate settings for each job → Managing Database Backup and Restore

  • If you want to use static on demand backups with Oracle or SQL Server, we recommend running Datical DB versions 5.6 or higher.  
  • If you want to use static on demand backups with Postgres, we recommend running Datical DB versions 6.12 or higher.
  • Note that if you use backup on_demand, you will NOT get the full benefits of using the schemaName property in the metadata.properties file:
    • If you are using backup mode "always", the schemaName property limits the schema list to the relevant schemas for three aspects of packager: backup, restore, and snapshot.
    • However, if you are using backup mode "on_demand", the schemaName property limits the schema list to the relevant schemas for only one aspect of packager: snapshot.  When you set backup to "on_demand", all of the schemas in the project are backed up and restored; the schemas for backup/restore are NOT limited based on the schemaName property.
  • Troubleshooting performance of on_demand static backups:  If you have implemented static/on_demand backups but have not seen a significant improvement in how long your packager jobs take to run, check your configuration.  Setting databaseBackupMode=on_demand while still using createDatabaseBackup=true in your main packager jobs that process scripts is an unusual configuration.  Packager will work that way, but it will still create a new backup file for each packager job that processes scripts, so you would NOT get the performance benefit of the more typical configuration: creating a nightly backup separately and re-using that backup file when processing scripts (to avoid running backup each time).  To optimize on_demand backups, do NOT use createDatabaseBackup=true with your main packaging job that processes scripts (assuming the backup file was already created and is in place).
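
The two-job setup described above can be sketched as follows (property names are taken from this page; see Managing Database Backup and Restore for the complete settings for each job):

```
# deployPackager.properties (shared by both jobs):
databaseBackupMode=on_demand

# Nightly backup job: run packager in preview mode with
# createDatabaseBackup=true so it only (re)creates the static backup file.
# Main packaging job: process new sql scripts WITHOUT
# createDatabaseBackup=true, re-using the existing backup file.
```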

4.  For Oracle, you can set the "parallel" property in the deployPackager.properties file for Oracle backup/restore as described in these pages:
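
For example, a deployPackager.properties entry along these lines (the value 4 is illustrative; see the linked pages for guidance on choosing a value for your environment):

```
# deployPackager.properties — degree of parallelism for Oracle
# backup/restore; the value shown is illustrative.
parallel=4
```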

5.  Change your Row Count setting to "approximate" or "disabled" (because "approximate" is significantly faster than "exact") → Settings for Collecting Row Counts

  • If you use rules that relate to row count, set this to "approximate".  If you do not use rules relating to row count, set this to "disabled".

There are different ways you can set the row count option; use the method you prefer:

6.  Use the optional Limited Forecast, which is typically faster than Full Forecast; please see this page →  Limited Forecast

  • Caveat: This could have an impact on Rules enforcement if using Rules.

7.  Run packager as a different user than the schema owner (so packager drops the schema and re-creates it, instead of clearing out each object).


8.  For Oracle/SQL Server/DB2, change the Stored Logic Validity Check to "local" (the default), "limited", or "disabled", depending on which features you actively use.

  • Please see the notes in these pages:
  • Performance recommendations:
    • If you do not review or use the information in the Stored Logic Validity Check section of your deploy reports, set storedLogicValidityCheck="disabled" to avoid a possible performance slowdown for a feature you are not actively using.
    • If you review and use the Stored Logic Validity Check information in your deploy reports but do not use the storedLogicValidityAction=FAIL option, we recommend setting storedLogicValidityCheck="limited".
    • If you review and use the Stored Logic Validity Check information in your deploy reports and have also enabled the storedLogicValidityAction=FAIL option, we recommend setting storedLogicValidityCheck="local".
    • Although storedLogicValidityCheck="global" is an available setting and is the most comprehensive, if performance timing is an important consideration it may be better to use a smaller scope such as "local" or "limited".
  • Additional notes:
    • The stored logic validity check runs during the deploy section of packager.
    • For Oracle, we recommend running Datical DB 6.12 or higher due to a performance improvement with the "limited" option for Oracle.
    • If you use the "limited", "local", or "global" setting and package multiple scripts in the same packaging run in the ddl folder (or with the convert packaging method), we recommend running Datical DB 6.12 or higher, because the stored logic validity check will no longer be repeated redundantly for each script.
  • There are different ways you can set the stored logic validity check level; use the method you prefer.  The value is represented as storedLogicValidityCheck="disabled", storedLogicValidityCheck="limited", or storedLogicValidityCheck="local".

9.  Packaging ddl scripts from the sql_direct folder (packageMethod=direct) or from the ddl_direct folder (packageMethod=ddl_direct) is typically faster than from the ddl folder (packageMethod=convert).  You could also opt to set packageMethod=ddl_direct for your ddl folder for better performance.

  • Caveat: If you are not using SQL Parser for Oracle, then only sqlrules apply in the sql_direct and ddl_direct folders (or in any folder with any name that uses packageMethod=direct or packageMethod=ddl_direct).  Other types of rules and forecast modeling do NOT apply to changes in the direct/ddl_direct/sql_direct folders if you are not using SQL Parser for Oracle.
  • If you are using Oracle with a recent version of Liquibase Enterprise/Datical DB 7.x, you could consider using SQL Parser for Oracle to add forecast modeling and forecast rules.
    • When you enable the SQL Parser for Oracle project setting, SQL Parser is applicable by default to the data_dml folder, the sql_direct folder (packageMethod=direct), and the sql folder (packageMethod=sqlfile).
    • You could also opt to set packageMethod=ddl_direct for your ddl folder using flexible folder configuration so that folder also uses SQL Parser.  Using SQL Parser with packageMethod=ddl_direct or packageMethod=direct for ddl would be faster than using packageMethod=convert (the default for ddl).  You can change the packageMethod for the ddl folder in the metadata.properties file for that folder.  Note that packageMethod=ddl_direct causes the folder to be processed near the beginning of the folder order, while packageMethod=direct causes it to be processed near the end of the folder order.
    • If your DML scripts are quite large, for performance reasons you could disable SQL Parser for any folder where large DML scripts may be packaged.  You can disable SQL Parser at the folder level by setting disableSqlParser=true in the metadata.properties file for that folder.  Note that you only need to set disableSqlParser=true for DML in Datical versions 7.5 and below; it is already set by default for the DATA_DML folder in versions 7.6 and higher.
    • There were improvements to SQL Parser for Oracle in versions 6.15, 7.6, 7.8, and 7.12.  We recommend upgrading to a recent 7.x version if you are using SQL Parser for Oracle.
    • Please see these pages:
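
The folder-level settings mentioned above can be sketched as follows; they go in the metadata.properties file inside each packaging folder (the folder paths shown are examples):

```
# ddl/metadata.properties — use ddl_direct instead of the default
# convert method for the ddl folder:
packageMethod=ddl_direct

# data_dml/metadata.properties — disable SQL Parser where very large
# DML scripts are packaged (only needed in versions 7.5 and below;
# 7.6 and higher disable it for DATA_DML by default):
disableSqlParser=true
```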


10.  Having the build agent and the database in close proximity can help performance.

...

14.  Consider increasing the amount of RAM used by Datical using the Xmx setting.  See the instructions here: Increase the amount of RAM used by Liquibase Enterprise
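
As a sketch, Xmx is the standard JVM maximum-heap argument; in an Eclipse-based client it is typically placed after -vmargs in the launcher .ini file (the exact file name, location, and an appropriate heap size are assumptions here — follow the linked instructions for your installation):

```
-vmargs
-Xmx8g
```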


15.  Upgrade to a current version of Liquibase Enterprise/Datical:

  • There were performance improvements during the forecast stage for those who run deployPackager on Windows clients or Windows agents in Datical DB version 6.8 (and higher).
  • There were performance improvements for those who use the Stored Logic Validity Check project setting in Datical DB version 6.12 (and higher).
  • There were performance improvements for the Status, statusDetails, and Pipeline Status operations in Datical DB version 6.14 (and higher).  Areas where you may notice performance improvements:
    • Status/Pipeline Status operations in the Datical DB GUI
    • 'status' & 'statusDetails' commands in the CLI
    • Operations which run status implicitly (CLI & GUI), such as deploy, rollback, deployPackager, convert SQL, and changeLogSync.
  • There were performance improvements specifically for multi-database/multi-catalog configurations of SQL Server projects in Datical DB version 6.16 (and higher).
  • There is a fix for the DATAPUMP API Oracle backup and restore in Datical DB version 6.16 (and higher) to better handle running multiple packager jobs concurrently.
  • There is a new cleanup command for packager in Datical DB versions 7.3 (and higher).  The cleanup command can be run after you have had to manually interrupt a packager build job midway.  It unblocks subsequent packager jobs after a manual interruption by clearing the locks on the DATABASECHANGELOGLOCK and DATICAL_SPERRORLOG tables and restoring REF.  Please see this page for more details: How To: Use ReleaseLocks Command and Packager with Cleanup Option
  • With Datical DB version 7.6 (and higher), there is a new feature that prevents continuing to run packager jobs after a backup error or restore error.  Please see the "Recovering from a Backup or Restore Failure" section here for more details: https://datical-cs.atlassian.net/wiki/spaces/DDOC/pages/896570174/Managing+Database+Backup+and+Restore#ManagingDatabaseBackupandRestore-RecoveringfromaBackuporRestoreFailure
  • There were improvements for SQL Parser for Oracle in versions 6.15, 7.6, and 7.8.  If you are using SQL Parser for Oracle, we recommend running a recent 7.x version.
  • There were improvements for memory utilization of SQL scripts that produce a high-volume output in 7.11 (and higher).
  • There were improvements for SQL Parser for Oracle in version 7.12 (and higher). 
  • There were improvements for Limited Forecast in version 7.13 (and higher):
    • Limited Forecast will only profile tables impacted by the changesets to be forecasted or deployed
    • Limited Forecast will only profile the schema impacted by the changesets to be forecasted or deployed in multi-schema projects
  • Significant performance improvements for Forecast profiling in version 7.14 (and higher):
    • Faster forecasting of Views and Materialized Views
    • Faster profiling for tables, columns, and views in multi-schema projects
    • Use multiple connections (maximum of 10 connections) to profile schemas simultaneously in Oracle multi-schema projects.  Note that with 7.14 (and higher) in Oracle projects with multiple schemas, you may notice higher CPU utilization due to multiple connections being used for Oracle forecast profiling.


16.  Although not specifically about packager, it may also be useful to check the items on these pages, which may improve Deploy performance:

...