
Overview

As a database script developer, once you have reached a stable state with your .sql scripts, you check them into your company's Source Code Management (SCM) system, such as TFS, SVN, Git Stash, BitBucket, etc.

Once those scripts are checked into the SCM, a job that packages them and deploys them to the REF Database is started, either automatically (CI) or manually.

As part of this job the following actions will happen:

  1. Check out the SQL code from the appropriate branch
  2. Check out the DaticalDB project from the main branch
  3. Run the deployer through the Command Line Interface (CLI): "hammer groovy deployPackager.groovy"
  4. Create an artifact (comprised of the DaticalDB project directory)
  5. After the artifact has been created, the process is the same in each environment (see the sketch after this list):
    1. download the artifact
    2. run the CLI to do a forecast
    3. run the CLI to execute the deployment itself
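
A minimal sketch of step 5, assuming the hammer CLI is on the PATH and that DEV is the name of the target environment (the artifact name, the environment name, and the exact command arguments are assumptions to adapt to your project):

Code Block: Forecast and deploy (bash)
# download the artifact from your artifact repository, then unpack it
unzip ddb-packager-42.zip -d portal_ddb    # artifact name is a placeholder
cd portal_ddb

# run the CLI to do a forecast against the target environment
hammer forecast DEV

# run the CLI to execute the deployment itself
hammer deploy DEV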


Checking your code into SCM

File structure

The file structure is typically as follows:

portal_sql/
└── sql_code/
    ├── ddl/
    ├── ddl_direct/
    ├── view/
    ├── function/
    ├── procedure/
    ├── package/
    ├── packagebody/
    ├── trigger/
    ├── sql/
    ├── sql_direct/
    └── data_dml/

where:

  • portal_sql is the name of your repository
  • sql_code is the name of the directory where the packager looks for new/modified .sql files to package; this directory is referenced in the deployPackager.properties file in your DaticalDB project
  • the sub-directories represent the different kinds of SQL scripts to package
  • the process order for a commit when using fixed folder names is: ddl, ddl_direct, view, function, procedure, package, packagebody, trigger, sql, sql_direct, and finally data_dml (see the example after this list)
  • the process order for a commit when using flexible folders (packageMethod values) is: convert, ddl_direct, storedlogic, sqlfile, direct, and finally data_dml
  • more details can be found at Placing Files in the SCM Repository
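
For example, if a single commit contains sql_code/ddl/createEmployeeTable.sql and sql_code/view/employeeView.sql (hypothetical file names), the packager will process the ddl script before the view script, regardless of alphabetical or commit order.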

metadata.properties

In each directory under the top-level one (sql_code in this example), you can create a configuration file named metadata.properties, where you can pass additional directives to the packager (such as an additional label or context).
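
A minimal sketch of such a file, placed for instance in sql_code/ddl, might look like the following (the directive names labels and contexts are assumptions; check the Deployment Packager documentation for the exact keys your version supports):

Code Block: metadata.properties
# extra directives picked up by the packager for every script in this directory
labels=employee-schema
contexts=dev,qa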

Git

For a complete understanding of Git please refer to the official documentation.

  1. Get the latest version of the code
    1. "git clone" (the first time)
    2. or "git pull" if your local repository already exists
  2. Change to the correct branch
    1. "git checkout branch_name"
  3. Create files in the correct subdirectories (you may have to create the subdirectory first) as required
  4. Add the files to your upcoming commit with "git add <filename> ..." or "git add ." to stage the whole current directory and its children.
  5. Commit the files you have added with:
    git commit -m "Commit message"
    If the commit message contains some text between square brackets, like "Create employee table [JIRA-1234]", a "label" will be automatically created by DaticalDB during the deployer phase. This allows you to add additional labels for specific JIRAs, features, stories, etc., for a more granular deployment process.
  6. If you need to impose an order in which your files are processed (e.g. CREATE the table before the ALTER statement), you can split your git add and git commit commands, as in:
    1. git add createEmployeeTable.sql
    2. git commit -m "Create employee table"
    3. git add AlterEmployeeTable.sql
    4. git commit -m "Add a column to the employee table [JIRA-2345]"
  7. Push your changes to the git server so the packager (and your colleagues) can have access to the changes:
    1. git push
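
Putting the steps above together, a typical session might look like the following (the repository URL, branch name, and file names are placeholders):

Code Block: Typical Git session (bash)
# get the latest version of the code (clone the first time, pull afterwards)
git clone git@github.com:your-org/portal_sql.git
cd portal_sql
git checkout 2017-july

# add a new script in the appropriate subdirectory and commit it;
# the [JIRA-1234] token becomes a DaticalDB label during the deployer phase
git add sql_code/ddl/createEmployeeTable.sql
git commit -m "Create employee table [JIRA-1234]"

# push so the packager (and your colleagues) can see the change
git push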

TFS

Build Systems

You will find below more specific instructions on how to set up your packager job in each major build system. If yours is not explicitly covered below, feel free to use these instructions to create your own process, or contact us to get some help from a team of seasoned professionals.

Jenkins

These instructions are based on a GitHub.com server and a Linux environment.

We use the 2017-july branch for the SQL repo.

Source Code Management

We usually set up our build to use multiple SCMs. To avoid having to deal with passwords (you need to be sure you will not be prompted for a password, or the job will hang), we simply use the SSH key mechanism provided by GitHub.

SQL Repo

We also use the following additional behaviors for that step:

  1. check out to a sub-directory: portal_sql
  2. Check out to specific local branch: 2017-july
  3. (optional) Polling ignores commits from certain users:
    1. the idea behind this is simply to ignore the changes created by the check-in at the end of the packager run. If you choose this option, add your Jenkins user to the "Excluded Users" field


DaticalDB project repo

At the same level as the SQL repo, we have another repo that contains the DaticalDB project (this is where all your DB Refs are stored, as well as your pipeline information, the change sets, etc.). This is the core of DaticalDB.

For this step as well, we use 3 additional behaviors:

  1. check out to a sub-directory: portal_ddb
  2. Don't trigger a build on commit notification (if you have a CI, you do not want to start a new build when the DaticalDB project is modified)
  3. Check out to specific local branch: master (as the changes are cumulative, there is no need for a branch)


Build Triggers

You can use this section to transform your manual job into a CI one by "building when a change is pushed to (insert favorite SCM here)"

Build Environment

Check "Delete workspace before build starts", repo is usually small so the cost of cloning it the whole project is negligible compare t to the benefit of a clean environment


Build

This is where the real work is done for the deployer.

Set up branches

This first step is necessary to match the local and upstream branches. We noticed that without it, the deployer has trouble pushing the files back into the Git repo.


Code Block: Set up branches (bash)
cd portal_ddb
echo "Current Directory: " `pwd` 
git branch --set-upstream-to=origin/master master
git status

cd ../portal_sql
echo "Current Directory: " `pwd` 
git branch --set-upstream-to=origin/2017-july 2017-july
git status

Packager

This is the step that calls the packager itself. A few explanations:

  • If the PATH is not set in the profile of the "jenkins" user, set it in the build step as shown below.
  • Also add the directory containing the DB client (Oracle in this case) to the PATH so the backup and restore commands can be executed
  • cd into the Datical project directory
  • Invoke the groovy deployPackager.groovy script with hammer
    • scm=true indicates to the Deployer that the files to package are to be found in a SCM. Additional configuration is done through the deployPackager.properties file in the DaticalDB project directory


Code Block: Packager (bash)
# specify path to "hammer" - Datical DB's CLI tool
# specify path to Oracle client
export PATH=$PATH:/opt/datical/DaticalDB/repl:/home/datical/app/datical/product/11.2.0/client_1/bin
export ORACLE_HOME=/home/datical/app/datical/product/11.2.0/client_1/
cd portal_ddb

# invoke Datical DB's Deployment Packager
echo
echo === Invoke Deployment Packager
hammer groovy deployPackager.groovy pipeline=Pipeline1 labels="2017-july" scm=true

#
# Pipeline Status to refresh AuditDB
echo
echo ==== Refresh AuditDB ====
hammer --pipeline=Pipeline1 status

echo ==== Creating ddb-packager-${BUILD_NUMBER}.zip ====
zip -q -r ddb-packager-${BUILD_NUMBER}.zip *
mv *.zip ..

echo
echo =====FINISHED====



Post-build Actions

Log Access

In Jenkins, to be able to access the logs and reports, use the "Archive the artifacts" post-build action.
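
For example, to keep the zip produced by the Packager build step above (which moves it to the workspace root), the archive pattern can be as simple as:

Code Block: Archive the artifacts
Files to archive: *.zip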



Artifact Creation

As with a CI build, you may want to create an artifact in Jenkins, or in an external artifact repository such as the ElectricCloud Artifact Repository, IBM CodeStation, etc., that you will use later on for deployment.

IBM uBuild

Atlassian Bamboo

ElectricCloud ElectricCommander

Artifact Repositories

IBM CodeStation

JFrog Artifactory

Jenkins 

  1. First install the Artifactory plugin
    1. Set up the connection in the "Manage Jenkins/Configure System" section
  2. Create a generic repository on the Artifactory server
  3. Select the "Generic-Artifactory Integration" option in your Jenkins build
    1. select your Artifactory server
    2. insert a spec that looks like:

      Code Block
      {
        "files": [
          {
            "pattern": "*.zip",
            "target": "${PROJECT}/${Pipeline}/",
            "flat" : "false"
          }
        ]
      }
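
Here ${PROJECT} and ${Pipeline} are presumably defined as job parameters or environment variables in the build; the *.zip pattern picks up the ddb-packager-${BUILD_NUMBER}.zip created and moved to the workspace root by the Packager build step above.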


ElectricCloud Artifact Repository

Deployment Systems

IBM UDeploy

ElectricCloud ElectricFlow