
TLDR: I will show you how to set up a continuous deployment process for your SSDT projects. To make that job easier, I made an SSDT Continuous Deployment-enabled project template for Visual Studio 2017, which is also published on the Visual Studio Marketplace. I will guide you through the process of importing your database schema into the project template and adapting it for continuous deployment.

SSDT does a great job of getting your database schema right. It offers a good toolset for retaining, managing, comparing, and deploying your schema. I won't explain the benefits of using SSDT here; you can find out more about SSDT in its documentation.

But everyone was too shy to ask how we really do data manipulation before and after a release. SSDT will generate scripts that modify the schema to its latest definition, but very often you have to think several steps ahead when it comes to adapting your existing data. Where does the static reference data (a.k.a. Code, Lookup, List, Taxonomy, or Reference data) come from? In the end, all that matters is the information in your database and its fitness, not the schema. The schema is just a technicality that allows us to store the data. So maybe a team member will compose and provide a list of scripts that should be executed against the database in a specific order before every release?

Humans make mistakes! And really, I'm too scared of running manual scripts from here and there against production.

You need a repeatable process

A DACPAC is the artifact produced by an SSDT project; it holds the definition of your database.
You shouldn't have to think much about what to do when release time comes. There are a couple of points which can alleviate your anxiety around releasing something to production:

  • A build = a DACPAC = your database definition.
  • Maintain all of your databases by deploying a new DACPAC. No manual changes, if possible!
  • A successful release = a successful successive deployment of the DACPAC across all your environments 🙂

SSDT Repeatable Deployment Process

A production release has a huge chance of being successful after the same DACPAC has already been integrated successfully into the Dev and QA environments.

SSDT Continuous Deployment Project Template

This is a Visual Studio 2017 template which comes with some infrastructure to allow your SSDT project to be ready for Continuous Deployment. Install it from Visual Studio Marketplace.
Once you've created a project using the template, it will look like the following:
SQL Server Data Tools - Continuous Deployment project template
The StaticTables folder holds files containing MERGE statements that define your static reference data. Those files are listed in 2.PostDeployment.StaticTablesList.sql. Pre- and post-deployment updates take place in 1.PreDeployment.Updates.sql and 3.PostDeployment.Updates.sql.

The order of execution during deployment follows the diagram:
SSDT Deployment diagram

A demo with AdventureWorks2016

As prerequisites, you will need Visual Studio 2017, SQL Server Management Studio, and SQL Server 2016 or higher.
You can find this demo on GitHub.

Create a new project.

The project template will appear under the SQL Server type.
New SSDT CD Project

Create the schema objects

You will probably be eager to use the Import > Database tool, but it's only available for empty projects. If you feel more comfortable with it, you can run it in an empty project and just copy the files over.
Here, we'll use Schema Compare in order to import the schema of AdventureWorks.
Schema Compare
I decided to delete some full-text objects as they were not needed for the demo.

Static data

Now that we have the schema imported, we need to script out the static tables.
But before we can do so, we have to deploy the database project, because we'll use the stored procedure sp_generate_merge.
Execute the following:

EXEC sp_generate_merge @schema = 'Person', @table_name = 'CountryRegion' 
EXEC sp_generate_merge @schema = 'Sales', @table_name = 'Currency'
EXEC sp_generate_merge @schema = 'Sales', @table_name = 'CountryRegionCurrency' 
EXEC sp_generate_merge @schema = 'Person', @table_name = 'ContactType'

Create a script file of type "Not in build" in the StaticTables folder for each of those tables and copy in the MERGE statements.
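For reference, sp_generate_merge emits one MERGE statement per table. Here is a trimmed, hedged sketch of what the Person.ContactType file might contain; the real generated script includes every column and every row of the table:

```sql
-- Hedged sketch of a generated static-data script; the actual
-- sp_generate_merge output covers all columns and all rows.
SET IDENTITY_INSERT Person.ContactType ON;

MERGE INTO Person.ContactType AS [Target]
USING (VALUES
    (1, N'Accounting Manager'),
    (2, N'Assistant Sales Agent')
) AS [Source] ([ContactTypeID], [Name])
ON [Target].[ContactTypeID] = [Source].[ContactTypeID]
WHEN MATCHED THEN
    UPDATE SET [Target].[Name] = [Source].[Name]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([ContactTypeID], [Name])
    VALUES ([Source].[ContactTypeID], [Source].[Name])
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;

SET IDENTITY_INSERT Person.ContactType OFF;
```

The upsert-plus-delete shape is what keeps the table in sync with the source-controlled definition on every deployment.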
Then we have to reference those scripts in 2.PostDeployment.StaticTablesList.sql. The order DOES matter so start with the tables sitting at the end of your relationship chain:

:r .\StaticTables\Person.CountryRegion.sql
:r .\StaticTables\Sales.Currency.sql
:r .\StaticTables\Sales.CountryRegionCurrency.sql
:r .\StaticTables\Person.ContactType.sql

Pre/Post-Deployment Updates

All statements in these files are executed via sp_execute_script, which tracks the execution of each snippet and ensures a snippet won't be executed twice as long as its content stays the same. Note that even a small change to the statement changes the hash of the code snippet, so it is always good practice to implement your scripts in an idempotent, repeatable way.

Note that if you want to implement a pre-deployment update, you will need to have deployed the project at least once; otherwise sp_execute_script won't exist in the database.
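To make the idempotency point concrete: since even a whitespace edit changes a snippet's hash and causes it to run again, write the body so that a re-run is harmless. A hedged sketch (the IsObsolete column is hypothetical, invented purely for illustration):

```sql
-- Hedged sketch of an idempotent snippet. IsObsolete is a
-- hypothetical column, used here only for illustration.
EXEC sp_execute_script @sql = '
UPDATE Sales.Currency
SET IsObsolete = 1
WHERE CurrencyCode = ''DEM''
  AND IsObsolete = 0  -- guard: re-running changes nothing
', @author = 'Radoslav Gatev'
```

The guard predicate makes the statement a no-op on any execution after the first, so an accidental hash change cannot corrupt the data.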

For the sake of the example, I added a column IsManager of type BIT to Person.ContactType, defaulting to 0 (false). Put the following update in 3.PostDeployment.Updates.sql:

EXEC sp_execute_script @sql = '
UPDATE Person.ContactType
SET IsManager = 1
WHERE Name LIKE ''%manager%''
', @author = 'Radoslav Gatev'


I always deploy using Publish. In my opinion, Schema Compare is good for visualizing the changes, but it doesn't seem to include the pre/post-deployment scripts, which play a crucial role in our DevOps solution.

Let's review the logs:

SELECT * FROM [dbo].[__MigrationLog]
ORDER BY [StartTime]

Logs of deployment updates

There is Part 2: Creating Azure DevOps pipelines!

Continuous Integration & Continuous Deployment of SSDT Projects: Part 2, Creating Azure DevOps pipelines

Part 1: Make it repeatable and self-sufficient turned out to be a big hit on my blog. The Visual Studio database project template reached 20k installs. I got a number of questions about it, and two of my readers (thanks!) asked me to do a second part about release management of databases in Azure DevOps (formerly Visual Studio Team Services, formerly Visual Studio Online).

I have seen projects that have CI/CD defined for their application code but not for their databases. When they do releases, they trigger the pipelines for the applications and deploy the database manually, because it's tricky, they say. So let me share the way I found to deal with all this complexity.

Before we start, what's new about this project template?

It's been exactly a year, the Visual Studio project template already has 20k installs, and the interest in doing release management of database changes is still pretty high! Thanks, everyone!
That's why I decided to add support for Visual Studio 2019! It's available now on the Visual Studio Marketplace! If you have any ideas on how to make it better, go ahead and suggest them, or even open a pull request on GitHub.

Getting started with Azure DevOps

Azure DevOps is the evolution of VSTS (Visual Studio Team Services). It is the result of years of Microsoft using their own tools and developing a process for building and delivering products efficiently and effectively. An interesting thing is that Azure DevOps is itself developed using Azure DevOps. What a recursion, huh!

I think of Azure DevOps as the union of VSTS (Visual Studio Team Services), TFS (Team Foundation Server), and Azure, with some improvements and a few extras added. What was called VSTS is now Azure DevOps Services, still a cloud offering with a 99.9% SLA. And what was called Team Foundation Server (TFS) is now Azure DevOps Server, the on-premises offering built on a SQL Server back end.

Azure DevOps is split into multiple subservices:

  • Azure Boards - Powerful work tracking with Kanban boards, backlogs, team dashboards, and custom reporting.
  • Azure Repos - a set of version control tools that you can use to manage your code, supporting two types of version control systems: Git and Team Foundation Version Control (TFVC).
  • Azure Pipelines - CI/CD that works with any language, platform, and cloud. Every open source project gets unlimited CI/CD minutes for free.
  • Azure Artifacts - Maven, npm, and NuGet package feeds from public and private sources.
  • Azure Test Plans - All in one planned and exploratory testing solution.

If you want to learn more about the DevOps practices at Microsoft, visit the DevOps Resource Center.
In this post, we are going to focus on Release Management with Azure Pipelines.


Set up your project to use the Continuous Deployment template which is based on the standard SSDT (SQL Server Data Tools) project. Follow the steps from Part 1: Make SSDT projects repeatable and self-sufficient, if you haven't done so.

Create an Azure DevOps project if you haven't done so. You can keep your source code either internally in Azure Repos or externally (GitHub, Subversion, Bitbucket Cloud, or external Git).

You will need access to an active Azure subscription.

Let's create a build pipeline

As promised, we'll be looking at pipelines in this post. The first part of the chain is the definition of a build pipeline. After the code has been committed to Azure Repos, a build can be triggered. Of course, there are always options to schedule it or to manually trigger it.

Go to Pipelines in your project and create a new Build pipeline. Then tell it where your source code is. Choose an empty template.

Your build pipeline will look like this:

A build pipeline for the SSDT project

Let's review each task individually.

Visual Studio Build

This task builds the whole solution. If your database project is configured properly, it produces a DACPAC file, which becomes the main artifact containing the definition of the entire database.
You can pass MSBuild arguments and change the build platform and configuration. I've exposed variables for Platform and Configuration in order to make the definition reusable: Platform is set to any cpu and Configuration to release.

More info about Visual Studio Build Task.

Copy Files to the staging directory

This will copy the DACPAC file into the artifacts staging directory.

More info about Copy Files task.

Publish Artifact

This produces the end result of the build which will be consumed by the Release pipeline.

The configuration of Publish Build Artifacts.

More info about Publish Build Artifacts task.

DacPac Schema Compare

As its name suggests, this task shows the schema changes of your database between two builds and outputs the results to the build summary. This task comes as a part of Colin's ALM Corner Build & Release Tools, which you have to install from the marketplace.

You must enable the Allow Scripts to Access OAuth Token option, which can be found in Build Options, as you can see below:

Click on the agent job which is named "Phase 1" in my case. Then you will be able to check this checkbox.

If you don't, you will get an error stating that the authentication token cannot be found.

You can see how this task is configured below. Compiled DacPac Path is used to find the dacpac produced by the current build. Drop Name is the name of the artifact produced by the previous build.

Go ahead and trigger a build!

You will find two reports in the build summary generated by this task, so you can see what is going to be executed against your real database (assuming no one has messed with it manually):

Schema Change Summary and Change Script

Let's create a release pipeline

Go to Pipelines > Releases and create a new empty Release pipeline.

Set artifacts

Add a new artifact - choose Build as an artifact source type and select the Build pipeline producing it.

Go and edit the first stage of your release pipeline. By the end of this part, it will look like this:

The definition of a Stage named Dev

Resource Template Deployment

We are going to deploy against an Azure SQL Database in this post. So, let's go straight to the Azure portal and create it? No! Because that's not DevOps! Infrastructure as code, a key DevOps practice, says that you manage and provision everything through machine-readable definition files rather than physical hardware configuration or interactive configuration tools.

We need an ARM (Azure Resource Manager) template for a database. Let's take a look at one:

{
  "$schema": "",
  "contentVersion": "",
  "parameters": {
    "sqlDatabaseName": {
      "type": "string",
      "metadata": {
        "description": "The name of the database"
      }
    },
    "sqlAdministratorLogin": {
      "type": "string",
      "metadata": {
        "description": "The admin user of the SQL Server"
      }
    },
    "sqlAdministratorLoginPassword": {
      "type": "securestring",
      "metadata": {
        "description": "The password of the admin user of the SQL Server"
      }
    }
  },
  "variables": {
    "sqlserverName": "[concat('sqlserver', uniqueString(resourceGroup().id))]"
  },
  "resources": [
    {
      "name": "[variables('sqlserverName')]",
      "type": "Microsoft.Sql/servers",
      "location": "[resourceGroup().location]",
      "tags": {
        "displayName": "SqlServer"
      },
      "apiVersion": "2014-04-01",
      "properties": {
        "administratorLogin": "[parameters('sqlAdministratorLogin')]",
        "administratorLoginPassword": "[parameters('sqlAdministratorLoginPassword')]",
        "version": "12.0"
      },
      "resources": [
        {
          "name": "[parameters('sqlDatabaseName')]",
          "type": "databases",
          "location": "[resourceGroup().location]",
          "tags": {
            "displayName": "Database"
          },
          "apiVersion": "2015-01-01",
          "dependsOn": [
            "[variables('sqlserverName')]"
          ],
          "properties": {
            "edition": "Basic",
            "collation": "SQL_Latin1_General_CP1_CI_AS",
            "maxSizeBytes": "1073741824",
            "requestedServiceObjectiveName": "Basic"
          }
        },
        {
          "type": "firewallrules",
          "apiVersion": "2014-04-01",
          "dependsOn": [
            "[variables('sqlserverName')]"
          ],
          "location": "[resourceGroup().location]",
          "name": "AllowAllWindowsAzureIps",
          "properties": {
            "endIpAddress": "",
            "startIpAddress": ""
          }
        }
      ]
    }
  ],
  "outputs": {
    "SqlServerFQDN": {
      "type": "string",
      "value": "[reference(concat('Microsoft.Sql/servers/', variables('sqlserverName'))).fullyQualifiedDomainName]"
    }
  }
}

The template above accepts three input parameters: sqlDatabaseName, sqlAdministratorLogin, and sqlAdministratorLoginPassword. It creates a SQL server containing a single database; the location of the resources comes from the resource group. Finally, it outputs the FQDN of the server so that the tasks following the resource template deployment can use it.

By no means is this a perfect resource template definition, but it's simple enough and it works. For the sake of simplicity, I have it uploaded to a gist on GitHub, which is easily referenced by URL upon deployment. Ultimately, you will have more projects than just the database in your CI/CD pipelines, so you will probably want to consider using one or more Resource Group projects in Visual Studio: building and validating them, exposing them as a separate artifact, and eventually deploying them.

Go to your first Stage in the Release Pipeline and add Azure Resource Group Deployment task. You will have to create a Service connection of type Azure Resource Manager which will grant Azure DevOps the rights to access your Azure subscription.

It will look like this:

The configuration of the Azure Resource Group Deployment task.

You may have noticed that I've exposed some variables, like $(ResourceGroup) and $(DatabaseName). That's because I prefer to make these things reusable, so you can copy them across environments or even create Task groups.

You have to be careful with the Deployment mode. Choose Incremental: it leaves unchanged any resources that exist in the resource group but are not specified in the template, as opposed to Complete mode, which deletes resources not present in your templates.

More info about Azure Resource Group Deployment Task.

ARM Outputs

I found ARM Outputs pretty useful because it lets you use the output values of an ARM template as variables in Azure Pipelines. It's the ARM template that forms your resource names based on input parameters, and it also has direct access to Microsoft Azure.

Install this extension from Azure DevOps Marketplace.

The configuration of the ARM Outputs task

Azure SQL Database Deployment

And finally, this is the task that executes the actual database deployment. You give it a DACPAC file and it deploys all your database changes using SqlPackage.exe under the hood. Apart from the essential parameters, you can pass additional ones like /p:IgnoreAnsiNulls=True, /p:IgnoreComments=True, /p:BlockOnPossibleDataLoss=false, or /p:DropObjectsNotInSource=true, to name a few.
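For context, the task's settings roughly translate to a SqlPackage.exe publish call like the following sketch (server, database, and credential values are placeholders, not values from this demo):

```
# Hedged sketch of the publish command performed under the hood.
# All target values below are placeholders.
SqlPackage.exe /Action:Publish \
    /SourceFile:"Database.dacpac" \
    /TargetServerName:"yourserver.database.windows.net" \
    /TargetDatabaseName:"YourDatabase" \
    /TargetUser:"sqladmin" /TargetPassword:"<password>" \
    /p:BlockOnPossibleDataLoss=false
```

Seeing the underlying command makes it easier to reason about which /p: properties you might want to pass through the task's additional-arguments field.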

I am using the same variables as in the ARM template deployment.

The configuration of the Azure SQL Database Deployment task.

More info about Azure SQL Database Deployment Task.

Set up environments

Having this reusable definition, we can go ahead and clone the stage in order to define multiple environments. Add as many as you would like. Not only that: you can create the relationships between them, with pre- and post-deployment conditions, approvals, and other options.

A release with three environments (stages) - Dev, UAT, and Production.
Pipeline variables across environments (stages).

I hope this is helpful for you! I would love to know more about how you typically deploy your databases in your organization.
