
How to: Deploy a SQL Server database using Octopus Deploy

Databases can be one of the trickiest components to deal with when trying to automate your deployments. In this post, I'm going to walk you through one approach to tackling automated SQL Server database deployments. This isn't the only way, but it's a way that has worked well for me for years.

Update: you might also like to check out a third-party tool called ReadyRoll, which can produce packages for Octopus Deploy.

Goals

Perhaps you have an existing database, and you want to automate deployment of changes to the database from now on. Or, perhaps this is a completely new database for a new application, and you want to do things right from the start. In either case, there are a number of goals we should try to aim for:

  1. We want it to be simple
  2. We want it to be repeatable
  3. We want to use the same process for dev, QA and production deployments of our changes
  4. We don't want to become too dependent on Octopus Deploy or other tools

Number 4 might sound surprising coming from me, but actually, one of the goals of Octopus Deploy is that the packages you create ought to be useful on their own, without depending on Octopus. That is why Octopus uses standard conventions like web.config files, appSettings, XML transforms and PowerShell. In the worst case, you could rename a NuGet package to a .ZIP, extract the files manually, invoke the scripts manually, and you are deployed. Octopus just exists to make it easier.

Getting the database under control: creating scripts

A couple of years ago I blogged about the philosophy behind how I approach database migrations, and I'll be following that here. Before we can even begin to think about automated deployment, we need to get the database under control. We're going to be using a change script approach to managing deployments.

For example, imagine that Sally wants to add a column to a table. To do this, she might use the designer in SQL Management Studio to add the column and generate a script (there's a button in Management Studio to do that). Or she might use a tool like Red Gate SQL Compare to help create the script. Or, she may know T-SQL well enough to write it by hand.

The process used to create the script is irrelevant. The important point is that there will be a script. The script describes how to get the database schema (and data) from one state to another. That script should go in source control.

For example:

alter table dbo.Customer 
add PhoneNumber varchar(20)

This would get saved as a file on disk, named something like Script0091 - Add phone number to customer.sql. Notice the number in the name; that's because migration scripts always need to be run in a specific order (if one script added a column and the next one renamed it, it wouldn't make sense to run them out of order).
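Over time, the Scripts folder in source control might look something like this (the names below are purely illustrative):

Script0001 - Create customer table.sql
Script0002 - Add orders table.sql
Script0091 - Add phone number to customer.sql
Script0092 - Rename phone number column.sql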

The idea is that the scripts describe how the database gets from one version to another. No one changes the database without writing a script and checking it into source control - not even DBAs! :-)

By doing this, you'll already be halfway to having a more maintainable database:

  1. It's in source control, so you'll get a much better view over the history of your database
  2. The scripts are sequential, so you can take any old database and easily upgrade it to the latest version by running the scripts that haven't been run yet
  3. The scripts you run in QA will be the exact same scripts you run in production

Automating the execution

The next step is to run those scripts automatically. You have a few different options here, and again, none of this is Octopus Deploy specific - you ought to be able to handle database deployments without depending on Octopus Deploy.

One option is to have a PowerShell script that takes the scripts, sorts them, and passes them to the SQLCMD utility to be executed.
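As a rough sketch of that option (the server and database names here are placeholders), the script might look like this - note that, unlike the tools mentioned below, it doesn't track which scripts have already been run:

# Run every .sql file in the Scripts folder, in name order, using the sqlcmd utility
$scripts = Get-ChildItem -Path .\Scripts -Filter *.sql | Sort-Object Name
foreach ($script in $scripts)
{
    Write-Host "Running $($script.Name)"
    # -E uses Windows authentication; -b makes sqlcmd exit with a non-zero code on error
    & sqlcmd -S "(local)\SQL2012" -d "SampleDb" -E -b -i $script.FullName
    if ($LastExitCode -ne 0) { throw "$($script.Name) failed" }
}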

Another option is to use open source tools like DbUp or Tarantino, or even commercial tools like SSW SQL Deploy. I'll use DbUp in this example, because I think it's the simplest tool.

Using DbUp to run the scripts

DbUp is a class library that you call from a console app, so I'll create a console app in Visual Studio:

[Screenshot: Create console app]

Next, I'll add the DbUp NuGet package:

[Screenshot: Install DbUp]
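Equivalently, from the NuGet Package Manager Console:

Install-Package DbUp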

Next, I'll add a Scripts folder, and add my SQL scripts. On each script file, I'm going to set the Build Action to Embedded Resource so that the scripts are embedded in the assembly:

[Screenshot: Add scripts]

I'll then replace the Main() method in Program.cs with:

// Requires: using System; using System.Configuration; using System.Reflection; using DbUp;
// (plus a reference to the System.Configuration assembly)
static int Main(string[] args)
{
    var connectionString = ConfigurationManager.ConnectionStrings["DatabaseConnection"].ConnectionString;

    // Build an upgrader that discovers the .sql scripts embedded in this assembly
    var upgrader =
        DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .LogToConsole()
            .Build();

    var result = upgrader.PerformUpgrade();

    if (!result.Successful)
    {
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine(result.Error);
        Console.ResetColor();
        return -1; // non-zero exit code lets calling scripts detect the failure
    }

    Console.ForegroundColor = ConsoleColor.Green;
    Console.WriteLine("Success!");
    Console.ResetColor();
    return 0;
}

Notice that the connection string comes from the connectionStrings section of my app.config file. In my configuration file, I've added this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <connectionStrings>
    <add 
      name="DatabaseConnection" 
      connectionString="Server=(local)\SQL2012;Database=SampleDb;Trusted_connection=true" />
  </connectionStrings>
</configuration>

At this point, I've automated my database changes - I can run my application from the command line to execute the scripts:

[Screenshot: First run]
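In this case, that just means running the compiled console app from its output directory (I'm assuming the project is named OctoSample.Database, the name used for the executable later in this post):

.\OctoSample.Database.exe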

The next time I run the application, DbUp will detect that it has already been run:

[Screenshot: Second run]

It does this by using a SchemaVersions table to track what scripts have been run. You can customize this behavior using the DbUp API, but I think it's a good default.

[Screenshot: Schema versions table]
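If you do want to customize it - for example, to journal to a different table - newer versions of DbUp expose a JournalToSqlTable option. A sketch, with an illustrative table name (check the DbUp documentation for the version you're using):

var upgrader =
    DeployChanges.To
        .SqlDatabase(connectionString)
        .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
        .JournalToSqlTable("dbo", "DeploymentJournal") // instead of the default dbo.SchemaVersions
        .LogToConsole()
        .Build();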

This also makes for a nice developer experience - other developers on my team can get latest, and run the console app to update their own local copy of the database. We don't need to use shared databases anymore to stay in sync.

Packaging the changes for Octopus Deploy

Let's review what we have so far. We have a set of scripts that describe how the database needs to be changed. We have a console app which runs those scripts. The console app gets its connection string from a configuration file. None of this depends on Octopus to run, and if we had to, we could run the scripts by hand (or ask the DBA to run them for us). All that is left to do is to package everything into a NuGet package so that Octopus can run them.

I'll start by adding a NuSpec file that describes my package:

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <id>OctoSample.Database</id>
    <title>Octopus Sample - Database Scripts</title>
    <version>1.0.0</version>
    <authors>OctopusDeploy</authors>
    <owners>OctopusDeploy</owners>
    <licenseUrl>http://octopusdeploy.com</licenseUrl>
    <projectUrl>http://octopusdeploy.com</projectUrl>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Database deployment scripts for the sample application.</description>
  </metadata>
</package>

Next, I'll add a really simple Deploy.ps1 PowerShell script that will be executed automatically by Octopus; it runs the console app and fails the deployment if the app returns a non-zero exit code:

& .\OctoSample.Database.exe | Write-Host
if ($LastExitCode -ne 0) { throw "Database upgrade failed with exit code $LastExitCode" }

Note: remember to set Copy to Output Directory = Copy if newer on the properties of Deploy.ps1 to ensure it is copied to the output directory

Finally, I'll install the OctoPack NuGet package, which will help me to create the final package for Octopus:

Install-Package OctoPack

At this point, if I change my build configuration to Release mode, I'll get a NuGet package in my bin directory:

[Screenshot: NuGet package]
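On a build server, the usual equivalent is to pass the RunOctoPack property to MSBuild (depending on your OctoPack version, the property may be required for the package to be created at all):

msbuild OctoSample.Database.csproj /t:Build /p:Configuration=Release /p:RunOctoPack=true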

Inspecting the package, I can see that it contains all of my scripts and the executable to run them:

[Screenshot: NuGet contents]

Deploying the package with Octopus

The final step is to have Octopus deploy the package. First, I'll create a Step, and choose the machine it's going to run on:

[Screenshot: Create step in Octopus]

Next, under Variables, I'll configure a variable named DatabaseConnection (the same name used in the app.config connectionStrings section) with the appropriate connection string for each environment.

[Screenshot: Create variables]

Octopus will automatically update the connectionStrings section of the app.config file with these settings depending on which environment I deploy to.

Now I can deploy the changes to Staging:

[Screenshot: Deploy release to staging]

Inspecting the output logs, I see:

[Screenshot: Deployment log]

And there you have it - automated database deployment with Octopus Deploy.

Summary

In this post I demonstrated one technique for achieving automated database deployments using Octopus Deploy. There are plenty of other solutions; if you are using Entity Framework or NHibernate, these tools have migration support built in, but the core approach will be the same.
