How To: Load Data with a CSV File

Liquibase Enterprise and CSV Files

Liquibase Enterprise can load and update data using CSV-formatted files.

Note the following about the CSV-based loadData and loadUpdateData refactorings:

  1. The Deploy Report does not show the Generated SQL & Output.

  2. Rollback of CSV data files is not supported.

  3. The file must be in CSV format, and the first line must be a header row listing the column names.

  4. CSV file formats and CSV data files are not currently supported in Packager automation. Files must be placed manually or via external automation in the Resources directory of the Liquibase Project.

(Optional) Oracle Setup Data

The samples below are provided for an Oracle database. To run them, the following Setup Data is required on your database.

  1. Create a dbo schema

  2. Create a dbo.contacts table, e.g.

    CREATE TABLE dbo.contacts ("id" NUMBER NOT NULL, "activeflag" NUMBER(1), "firstname" VARCHAR2(50), "lastname" VARCHAR2(50), "age" NUMBER, "lastcall" TIMESTAMP, CONSTRAINT "PK_CONTACTS" PRIMARY KEY ("id"));
  3. Create a dbo.contacts_seq sequence, e.g.

    CREATE SEQUENCE dbo.contacts_seq START WITH 1 INCREMENT BY 1 NOCACHE NOCYCLE;

Loading Data with a CSV file

Step 1:

Create the CSV file that contains the data you wish to have loaded. The first line needs to be the list of columns to be loaded. Liquibase Enterprise will create the INSERT statements from the columns and data listed in the file.

CSV File and Format

    id,activeflag,firstname,lastname,age,lastcall
    dbo.contacts_seq.nextval,0,Chris,Klackson,32,2025-07-04 12:33:27
    dbo.contacts_seq.nextval,1,Samantha,Sallers,47,2025-07-15 01:01:16
    dbo.contacts_seq.nextval,1,Pete,Prosser,50,2025-07-15 06:45:48
    dbo.contacts_seq.nextval,0,Joeseph,Scala,70,2025-07-28 03:14:32
    dbo.contacts_seq.nextval,0,Umbert,Klassen,31,2025-09-17 22:11:08
    dbo.contacts_seq.nextval,1,Gary,Finer,35,2025-09-17 18:25:09

Step 2:

Place the file in the Resources directory of the Project Repo.

CSV File Path

Path Considerations

If the CSV file is stored in the project's Resources directory ({DaticalDBWorkspace}/{DaticalDBProjectName}/Resources), then you can use a relative path to reference the file.

  • Full Path: /home/kevin/datical/ecomm/Resources/data/contacts.csv

  • Relative Path: data/contacts.csv

It is a good practice to create a data subdirectory in the Resources directory and keep your CSV files there.

Step 3:

Create a changeset in the Changelog/changelog.xml file of the Project Repo.

Changeset

While specifying the columns in the loadData refactoring is not strictly required, providing them gives Liquibase Enterprise the data types, so the best possible SQL is generated.

Note you can include changeset attributes such as labels and contexts.

See Valid Data Types for supported values.
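The changeset sample is not shown in this copy; a minimal sketch of what a loadData changeset could look like is below. The changeset id and author are hypothetical, and the column types are chosen to match the dbo.contacts table above (NUMERIC for the NUMBER(1) flag, COMPUTED for the sequence expression carried in the id column of the CSV file).

```xml
<changeSet id="load-contacts-csv-1" author="kevin">
    <loadData file="data/contacts.csv"
              schemaName="dbo"
              tableName="contacts"
              separator=",">
        <!-- COMPUTED: the CSV value (dbo.contacts_seq.nextval) is an expression, not a literal -->
        <column name="id"         type="COMPUTED"/>
        <column name="activeflag" type="NUMERIC"/>
        <column name="firstname"  type="STRING"/>
        <column name="lastname"   type="STRING"/>
        <column name="age"        type="NUMERIC"/>
        <column name="lastcall"   type="DATETIME"/>
    </loadData>
</changeSet>
```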

Step 4:

The changeset is now eligible to be deployed to the various databases on the pipeline. If you are using artifacts, be sure to package the changelog updates prior to running any deployments.

Generated SQL Code

The CSV file generates the following SQL code. 
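The generated SQL was not captured in this copy. Assuming the sample contacts.csv above, the output would be one INSERT per data row, along these lines (exact formatting may differ):

```sql
INSERT INTO dbo.contacts ("id", "activeflag", "firstname", "lastname", "age", "lastcall")
VALUES (dbo.contacts_seq.nextval, 0, 'Chris', 'Klackson', 32,
        TO_TIMESTAMP('2025-07-04 12:33:27', 'YYYY-MM-DD HH24:MI:SS'));
INSERT INTO dbo.contacts ("id", "activeflag", "firstname", "lastname", "age", "lastcall")
VALUES (dbo.contacts_seq.nextval, 1, 'Samantha', 'Sallers', 47,
        TO_TIMESTAMP('2025-07-15 01:01:16', 'YYYY-MM-DD HH24:MI:SS'));
-- ...one INSERT per remaining row in the file
```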

Note the Deploy Report does not show this Generated SQL & Output.

Updating Data with a CSV file

The previous example works nicely when you're loading data for the first time. But what do you do if you want to use a CSV file to update data? How would that work?
In this case, you'll want to use the loadUpdateData refactoring. It's similar to using loadData with one exception: you need to specify the primary key.
Once specified, Liquibase Enterprise will generate the appropriate "INSERT or UPDATE" SQL to be applied to your database.

Step 1:

Create the CSV file that contains the data you wish to have loaded or updated. The first line needs to be the list of columns. Liquibase Enterprise will create the INSERT or UPDATE statements from the columns and data listed in the file.

CSV File and Format
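The original sample file is not shown in this copy; a hypothetical update file might look like the following. Note that, unlike the load example, the id column holds literal primary key values so existing rows can be matched; ids that do not exist yet (such as 7 below) will be inserted rather than updated.

```
id,activeflag,firstname,lastname,age,lastcall
1,1,Chris,Klackson,33,2025-10-01 09:15:00
2,0,Samantha,Sallers,47,2025-07-15 01:01:16
7,1,Nina,Ortez,28,2025-10-02 14:05:41
```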

Step 2:

Place the file in the Resources directory of the Project Repo.

CSV File Path

Step 3:

Create a changeset in the Changelog/changelog.xml file of the Project Repo.

Changeset

Make sure to specify the primary key column.

While specifying the columns in the loadUpdateData refactoring is not strictly required, providing them gives Liquibase Enterprise the data types, so the best possible SQL is generated.

Note you can include changeset attributes such as labels and contexts.

See Valid Data Types for supported values.
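As in the load example, the changeset sample is not shown in this copy; a sketch of a loadUpdateData changeset, with a hypothetical id, author, and file name, might be:

```xml
<changeSet id="update-contacts-csv-1" author="kevin">
    <!-- primaryKey tells Liquibase which column(s) to match existing rows on -->
    <loadUpdateData file="data/contacts_update.csv"
                    schemaName="dbo"
                    tableName="contacts"
                    primaryKey="id">
        <column name="id"         type="NUMERIC"/>
        <column name="activeflag" type="NUMERIC"/>
        <column name="firstname"  type="STRING"/>
        <column name="lastname"   type="STRING"/>
        <column name="age"        type="NUMERIC"/>
        <column name="lastcall"   type="DATETIME"/>
    </loadUpdateData>
</changeSet>
```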

Step 4:

The changeset is now eligible to be deployed to the various databases on the pipeline. If you are using artifacts, be sure to package the changelog updates prior to running any deployments.

Generated SQL Code

The CSV file generates the following SQL code. 
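The generated SQL was not captured in this copy, and the exact statements vary by platform. Conceptually, each CSV row produces insert-or-update logic equivalent to an Oracle MERGE such as this (illustrative only, using a row from the sample data):

```sql
MERGE INTO dbo.contacts t
USING (SELECT 1 AS pk FROM dual) s
ON (t."id" = s.pk)
WHEN MATCHED THEN
    UPDATE SET t."activeflag" = 1, t."firstname" = 'Chris', t."lastname" = 'Klackson',
               t."age" = 33, t."lastcall" = TO_TIMESTAMP('2025-10-01 09:15:00', 'YYYY-MM-DD HH24:MI:SS')
WHEN NOT MATCHED THEN
    INSERT ("id", "activeflag", "firstname", "lastname", "age", "lastcall")
    VALUES (1, 1, 'Chris', 'Klackson', 33, TO_TIMESTAMP('2025-10-01 09:15:00', 'YYYY-MM-DD HH24:MI:SS'));
```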

Note the Deploy Report does not show this Generated SQL & Output.

General Considerations for Loading Data With a CSV File

Date and Datetime Considerations

Note that date and datetime values must be in one of the following ISO formats.

  • yyyy-MM-dd'T'HH:mm:ss

  • yyyy-MM-dd HH:mm:ss

  • yyyy-MM-dd'T'HH:mm:ss.S


Valid Data Types

When specifying columns in the loadData refactoring, you may choose from the following generic data types.

  • BOOLEAN

  • NUMERIC

  • DATE

  • DATETIME

  • STRING

  • COMPUTED

  • SKIP

Copyright © Datical 2012-2020 - Proprietary and Confidential