This section describes how to set up a Datical project when your development uses multiple branches, with developers working on multiple releases at the same time. For example, you may have a branch called "release/1.0" and a parallel branch called "release/2.0".
Step-by-step guide
You will need to create multiple pipelines in your Datical project. For example, you might call them "pipeline1" and "pipeline2".
- Each pipeline will have its own REF database. You may also have dedicated DEV and QA databases for each pipeline. For example:
- "pipeline1" will look like this: REF1 → DEV1 → QA1 → STAGE → PROD
- "pipeline2" will look like this: REF2 → DEV2 → QA2 → STAGE → PROD
- Notice that STAGE and PROD are where the pipelines merge.
- You will use these labels for your database environments:
| DB Environment in Datical project | Label expression for the DB Environment | Notes |
|---|---|---|
| REF1 | pipeline1 | All pipeline1 changes are deployed here |
| DEV1 | pipeline1 AND !abandoned AND !hold | |
| QA1 | pipeline1 AND !abandoned AND !hold | |
| REF2 | pipeline2 | All pipeline2 changes are deployed here |
| DEV2 | pipeline2 AND !abandoned AND !hold | |
| QA2 | pipeline2 AND !abandoned AND !hold | |
| STAGE | pipeline1 AND !abandoned AND !hold, pipeline2 AND !abandoned AND !hold | Both pipeline1 and pipeline2 changes are deployed here |
| PROD | pipeline1 AND !abandoned AND !hold, pipeline2 AND !abandoned AND !hold | Both pipeline1 and pipeline2 changes are deployed here |
- Based on the table above, when you package your scripts in REF1, the resulting changesets will automatically receive the "pipeline1" label.
- Similarly, when you package scripts in REF2, the resulting changesets will automatically receive the "pipeline2" label.
- Here is an example of how your changelog.xml might look. Note how each changeset carries a label for its pipeline:
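A minimal sketch of what that changelog might contain, assuming the Liquibase-style changeSet format that Datical changelogs use; the changeset IDs, authors, and SQL file names are hypothetical:

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.5.xsd">

    <!-- Packaged in REF1: carries the pipeline1 label plus the release label from the build -->
    <changeSet id="20171002120000" author="deploy_packager" labels="pipeline1,release/oct2017">
        <sqlFile path="sql_code/add_customer_table.sql" relativeToChangelogFile="true"/>
    </changeSet>

    <!-- Also packaged in REF1, but later placed on hold: the "!hold" expressions skip it -->
    <changeSet id="20171003120000" author="deploy_packager" labels="pipeline1,release/oct2017,hold">
        <sqlFile path="sql_code/add_customer_index.sql" relativeToChangelogFile="true"/>
    </changeSet>

    <!-- Packaged in REF2: carries the pipeline2 label plus its release label -->
    <changeSet id="20171101120000" author="deploy_packager" labels="pipeline2,release/nov2017">
        <sqlFile path="sql_code/add_orders_table.sql" relativeToChangelogFile="true"/>
    </changeSet>

</databaseChangeLog>
```

With labels like these, the expressions in the table above select the right subset for each environment: DEV1 and QA1 pick up the first changeset (the second is excluded by !hold), DEV2 and QA2 pick up the third, and STAGE and PROD pick up both pipelines.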
- You artifact your Datical project and publish it to your artifact repository, such as Artifactory, Nexus, or the artifact repository native to your deploy tool. Every time you package a new script, you create a new Datical artifact.
- When you deploy your database changes using your deploy tool, Datical will automatically know which changes to deploy to each target environment. For example, with the list of changes shown in the example above:
- Deployment to DEV1 and QA1 will receive all changesets that have the label "pipeline1". Any pipeline1 changeset that also has the label "abandoned" or "hold" will not be deployed there.
- Deployment to DEV2 and QA2 will receive all changesets that have the label "pipeline2". Any pipeline2 changeset that also has the label "abandoned" or "hold" will not be deployed there.
- Deployment to STAGE and PROD will receive all changesets that have the label "pipeline1" or "pipeline2". Any pipeline1 or pipeline2 changeset that also has the label "abandoned" or "hold" will not be deployed there.
- OBSERVATION:
- Note that changes going to pipeline1 also have the label "release/oct2017". This label was added by the packaging job and provided at build time by Jenkins or Bamboo.
- Note that changes going to pipeline2 also have the label "release/nov2017". This label was added by the packaging job and provided at build time by Jenkins or Bamboo.
FAQ:
- How does Datical know what labels to deploy to DEV1, DEV2, QA1 and QA2?
- You artifact the entire Datical project, which includes all changesets (in changelog.xml). The Datical project also includes the environment definitions, with the labels and label expressions assigned to each environment. So when you pull down an artifact and deploy to a target environment, Datical knows exactly which labels (or label expressions) to apply to that deployment.
- What additional labels do I need to specify at deploy time?
- If you intend to deploy all undeployed changes, there is no need to specify additional labels at deploy time.
- However, if you intend to deploy only specific changes, you will need to specify labels at deploy time. For example, to deploy a single script, you would provide that script's name as a label at deploy time, as in the sketch below.
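A sketch of the mechanism, assuming your packaging job also records each script's file name as a label on its changeset (the file name and ID below are hypothetical):

```xml
<!-- If the script file name is among the changeset's labels, passing that name
     as the deploy-time label selects only this changeset (it must still satisfy
     the target environment's label expression, e.g. pipeline1 AND !abandoned AND !hold). -->
<changeSet id="20171005120000" author="deploy_packager"
           labels="pipeline1,release/oct2017,add_customer_email.sql">
    <sqlFile path="sql_code/add_customer_email.sql" relativeToChangelogFile="true"/>
</changeSet>
```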
- How do I merge pipeline1 changes to pipeline2 so that changes deployed from pipeline1 all the way to PROD can now be deployed (or "backfilled") to pipeline2 (REF2, DEV2 and QA2)?
- See this document on how to backfill your pipeline → How To: Backfill your pipeline