What's new in Octopus Deploy 2.6

Octopus Deploy 2.6 is now in pre-release! For those who like to live on the edge, you can Download the Octopus Deploy 2.6 pre-release. And who wouldn't want to live on the edge when this release contains so many new features? Here are the highlights:

  • Lifecycles to control promotion and automate deployments
  • Automatic release creation from NuGet push
  • Running steps in parallel
  • Up to 5x faster package uploads
  • Skipping offline machines


Lifecycles

This heading just does not have enough fanfare for this feature. Imagine balloons springing out, trumpeters trumpeting, and confetti cannons making a mess everywhere.

Okay, I will stop, but I do love this feature!

Lifecycles main

Lifecycles let you specify and control the progression of deployments to environments. Not only can you order environments for deployment, you can also:

  • set environments to auto deploy when they are eligible for deployment
  • gate your workflow to be sure that N QA environments have been deployed to before moving on
  • group multiple environments into a single stage
  • deploy to more than one environment at a time

Yep, deploy to more than one environment at a time!

A Lifecycle consists of phases and retention policies. Let's start with phases.

Lifecycle Phases

Lifecycle phases

A Lifecycle can consist of many phases, and a phase can consist of many environments. Each phase lets you add environments to it, and you can gate each phase to be sure that N environments have been deployed to before the next phase becomes eligible for deployment.

Lifecycle automagic

When selecting which environments to add to a phase, you can choose whether each will be a manual release or an automatic release. Environments set to automatic release will begin deploying as soon as their phase in the deployment chain is reached.

Lifecycles and Retention Policies

Lifecycles have their own retention policies. Each Lifecycle has an overall retention policy that every phase inherits; however, you can override this per phase.

Lifecycle Retention Policy 1

This means that for development, where you might have 1300 releases a week, you can set a very strict retention policy that deletes all but the last 3 releases, while for production you can keep everything forever. Or somewhere in the middle if your needs aren't so extreme.

Lifecycles and Projects

Lifecycles are assigned to projects via the Process screen.

Lifecycles project process

You might notice that your Projects Overview screen has had a bit of an overhaul.

Lifecycles project overview

It now displays your most recent releases, shows where each one is at per environment, and provides any deployment or promotion buttons, letting you see the current and previous deployments at a glance. Solid green is your most recent deployment, medium faded green your previous deployment, and light faded green all others.

Lifecycles, Releases, Deployments, and Promotions

Lifecycles releases

The release page now gives you a graphical tree of where a deployment currently is: which phase it's in and what's deploying. You may notice a few things here. The deploy/promote button got a bit smarter: it knows what's next in the chain. It also allows you to deploy to any environment that has already been deployed to.

Lifecycles two at once

You can now release to multiple environments at the click of one button. Yep.

Lifecycles multiple environments

Or you can use this select box, and choose them all!

Lifecycles smart promote

And when you have finished a deployment, the promote button knows what's next and gives you the option to promote to the next environment.

Lifecycle deployment

The deployment screen got a little simpler too, making it much easier to find the deploy now button. But don't worry, everything is still available under Advanced, and if you are so inclined you can tell it to always show the advanced settings.

Lifecycles and Blocking Deployments

If you have a bad release that has just done some bad stuff (tm) to your QA server, you might want to block that release from being promoted further down the chain until the issue is resolved. Now you can block a deployment.

block deployment

Step 1: block the deployment for reasons.

show blocked deployment

On your release screen you can see that the promote button has vanished and your Lifecycle deployment tree has a red icon for those environments it cannot promote to.


You will also now see a warning marker on your overview, and all promotion buttons for that release are gone. At this point you can only deploy to environments that have already been deployed to. Unblocking the release when the issue is resolved restores full access to deploy it.

Lifecycles and Automatic Release Creation

Yes, there is still more!

Lifecycles automagic settings

On a project's process screen, you can define the name of a package that, when pushed or uploaded to the internal repository, will automatically create a release for you.

lifecycles automagic create

And if the first environment in your Lifecycle is set up to deploy automatically, this means you can push a NuGet package to the internal repository and have it automatically create a release and deploy it! We are looking at you, TFS users!
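As a sketch of what that push looks like (the package name, server URL, and API key here are placeholders you'd replace with your own):

```shell
# Push the package to the Octopus internal repository; if its name matches
# the one configured on the process screen, a release is created automatically.
nuget push OctoWeb.1.0.1.nupkg -Source http://your-octopus-server/nuget/packages -ApiKey API-XXXXXXXXXXXXXXXX
```

From there, the auto-deploy setting on the first phase takes care of the rest.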

As you can see, Lifecycles is a feature that has its hands in many areas of Octopus. It's a large feature, and one we are very proud of. We have used this opportunity to listen to your feedback and suggestions on UserVoice and add in more value. We really hope you like it as much as we do!

Run Steps in Parallel

Another feature in 2.6 lets you set up multiple project steps to run in parallel.

Project step trigger

You can configure a project step to run in parallel with the previous step.

project step together

The process page has been updated to show these steps grouped together. If they run on the same machine, they will still queue unless you configure the project to allow multiple steps to run in parallel.

Retention Policies have moved

As seen above, retention policies have moved into Lifecycles. You will no longer find a Retention Policy tab under Configuration, and retention policies can no longer be set for Project Groups. This leaves the retention policy for the internal package repository.

repository retention settings

This has been moved to where the packages live, under Library -> Packages.

Package Upload Streaming

In 2.6, when Octopus downloads a package and sends it to the Tentacles, it now does so via streaming. We have seen up to a 5x speed increase. This also alleviates some of the memory overhead Tentacle used for storing package chunks before saving them.

SNI Support

As well as fixing up some issues with SSL bindings, we added SNI support.

SNI support

Skip Offline Machines

When deploying to very large environments, offline machines can get in your way. We now give you the ability to continue with the deployment but skip the offline machines.

show offline machines

We now show offline machines (displayed in red) on the deploy screen. This allows you to go back and check the machine's connection, or you can use the "ignore offline machines" feature.

ignore offline machines

This will automatically select all machines except the offline ones.

This ends the tour of what's new in 2.6. We have only mentioned the big features in this release, but there were quite a few smaller changes and bug fixes, so please check the release notes for more details on these smaller items. We hope you are as excited about Lifecycles as we are!

Download the Octopus Deploy 2.6 Pre-release now!

Invoking an executable from PowerShell with a dynamic number of parameters

Calling an executable from PowerShell is easy - most of the time, you just put an & in front. To illustrate, let's take this C# executable:

static void Main(string[] args)
{
    for (int i = 0; i < args.Length; i++)
    {
        Console.WriteLine("[" + i + "] = '" + args[i] + "'");
    }
}

If we call it like this:

& .\Argsy.exe arg1 "argument 2"

We get:

[0] = 'arg1'
[1] = 'argument 2'

PowerShell variables can also be passed to arguments:

$myvariable = "argument 2"
& .\Argsy.exe arg1 $myvariable

# Output:
[0] = 'arg1'
[1] = 'argument 2'

Note that the value of $myvariable contained a space, but PowerShell was smart enough to pass the whole value as a single argument.

This gets tricky when you want to conditionally or dynamically add arguments. For example, you might be tempted to try this:

$args = ""
$environments = @("My Environment", "Production")
foreach ($environment in $environments) {
    $args += "--environment "
    $args += $environment + " "
}

& .\Argsy.exe $args

However, you'll be disappointed with the output:

[0] = '--environment My Environment --environment Production '

The right way

The way to do this instead is to create an array. You can still use the += syntax in PowerShell to build the array:

$args = @() # Empty array
$environments = @("My Environment", "Production")
foreach ($environment in $environments) {
    $args += "--environment"
    $args += $environment
}

& .\Argsy.exe $args

Which outputs what we'd expect:

[0] = '--environment'
[1] = 'My Environment'
[2] = '--environment'
[3] = 'Production'

You can also mix regular strings with arrays:

& .\Argsy.exe arg1 "argument 2" $args

# Output:
[0] = 'arg1'
[1] = 'argument 2'
[2] = '--environment'
[3] = 'My Environment'
[4] = '--environment'
[5] = 'Production'

Edge case

There is a very odd edge case to what I said above about passing a single string with all the arguments. Take this example, which is similar to the one above:

$args = "--project Foo --environment My Environment --environment Production"
& .\Argsy.exe $args

# Output: 
[0] = '--project Foo --environment My Environment --environment Production'

To make it work as intended, just put quotes around the first argument, and the behaviour changes completely! (The backticks are PowerShell's escape characters.)

$args = "`"--project`" Foo --environment My Environment --environment Production"
& .\Argsy.exe $args

# Output: 
[0] = '--project'
[1] = 'Foo'
[2] = '--environment'
[3] = 'My'
[4] = 'Environment'
[5] = '--environment'
[6] = 'Production'

The behaviour doesn't change if the quotes are around anything other than the first argument:

$args = "--project `"Foo`" --environment My Environment --environment Production"
& .\Argsy.exe $args

# Output: 
[0] = '--project Foo --environment My Environment --environment Production'

Ahh, PowerShell. Always full of surprises!

Dynamically setting TeamCity version numbers based on the current branch

When you are using TeamCity to build a project with multiple branches, it's desirable to have different build numbers depending on the branch. For example, instead of simple TeamCity build numbers like 15, 16, and so on, you might have:

  • Branch master: 1.6.15
  • Branch release-1.5: 1.5.15 (major/minor build from branch name)
  • Branch develop: 2.0.15 (different minor build)
  • Branch feature-rainbows: 2.0.15-rainbows (feature branch as a tag)

Here's how it looks:

TeamCity builds with build numbers based on the branch

Handling a branching workflow like GitFlow, and using these version formats, turns out to be pretty easy with TeamCity, and in this blog post I'll show you how. Your own versioning strategy is likely to be different, but hopefully this post will get you started.


First, there are two built-in TeamCity parameters that we care about:

  • build.counter - this is the auto-incrementing build counter (15 and 16 above)
  • build.number - this is the full build number. By default it is %build.counter%, but it can be more complicated

The format of build.number and the value of build.counter are defined in the TeamCity UI:

Build number and build counter in TeamCity

However, you can also set it dynamically during the build, using service messages. That is, your build script can write the following text to stdout:

##teamcity[buildNumber '1.1.15']

This will override the build number, and the new value will then be passed to the rest of the steps in the build.

Putting it together

Depending on whether the branch name is master or develop, we will use different major/minor build numbers. To do this, we're going to define two parameters in TeamCity. These need to be "system" parameters in TeamCity so that they are available to build scripts.

Adding the major/minor build number parameters

To dynamically set the build number based on the branch name, I'm going to add a PowerShell script step as the first build step in my build:

Using a PowerShell script build step to set the build number

Finally, here's the PowerShell script:

# These are project build parameters in TeamCity
# Depending on the branch, we will use different major/minor versions
$majorMinorVersionMaster = "%system.MajorMinorVersion.Master%"
$majorMinorVersionDevelop = "%system.MajorMinorVersion.Develop%"

# TeamCity's auto-incrementing build counter; ensures each build is unique
$buildCounter = "%build.counter%"

# This gets the name of the current Git branch.
$branch = "%teamcity.build.branch%"

# Sometimes the branch will be a full path, e.g., 'refs/heads/master'.
# If so we'll base our logic just on the last part.
if ($branch.Contains("/")) {
    $branch = $branch.substring($branch.lastIndexOf("/")).trim("/")
}

Write-Host "Branch: $branch"

if ($branch -eq "master") {
    $buildNumber = "${majorMinorVersionMaster}.${buildCounter}"
} elseif ($branch -eq "develop") {
    $buildNumber = "${majorMinorVersionDevelop}.${buildCounter}"
} elseif ($branch -match "release-.*") {
    # Release branches carry their own major/minor version in the name
    $specificRelease = ($branch -replace 'release-(.*)','$1')
    $buildNumber = "${specificRelease}.${buildCounter}"
} else {
    # If the branch starts with "feature-", just use the feature name as a tag
    $branch = $branch.replace("feature-", "")
    $buildNumber = "${majorMinorVersionDevelop}.${buildCounter}-${branch}"
}

Write-Host "##teamcity[buildNumber '$buildNumber']"

Now that %build.number% is based on the branch, your TeamCity build has a consistent build number that can then be used in the rest of your build steps. If you are using OctoPack, for example, the build number can be used as the value of the OctoPackPackageVersion MSBuild parameter so that your NuGet packages match the build number.
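As a sketch (the solution name here is just a placeholder), a later build step could pass the branch-aware build number through to OctoPack like so:

```shell
msbuild MySolution.sln /t:Build /p:RunOctoPack=true /p:OctoPackPackageVersion=%build.number%
```

TeamCity expands %build.number% before the step runs, so the NuGet packages OctoPack produces will carry the same version as the build.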

Azure VM extension for Octopus Deploy

Today ScottGu announced that the Octopus Deploy Tentacle agent is now available as an extension for Azure VMs:

Octopus simplifies the deployment of ASP.NET web applications, Windows Services and other applications by automatically configuring IIS, installing services and making configuration changes. Octopus integration of Azure was one of the top requested features on Azure UserVoice and with this integration we will simplify the deployment and configuration of octopus on the VM.

Of course, even before this extension, you could always install Tentacle either manually or automatically via scripts. The extension just puts a pretty UI around that. Under the hood, the extension uses our open source PowerShell DSC resource for Tentacles.

The extension on Azure

Why Tentacles on Azure VMs?

There are many different ways to host applications on Microsoft Azure: websites, cloud services, or as regular .NET applications running on a virtual machine.

When you provision a VM on Azure, out of the box you get a running operating system, a remote desktop connection, and a PowerShell remoting connection. And that's about it. If you want to deploy, configure and re-deploy applications on the machine, you'll either need to do it manually, or write custom scripts to copy files, update configuration files, and so on.

Of course, these are all problems that Octopus Deploy solves, and solves well. By adding the Tentacle agent to your Azure VM, you can immediately start deploying to it just like any other machine in Octopus.

For more information on using the extension, or adding the extension from the command line via PowerShell, check out our documentation.

Docker on Windows and Octopus Deploy

Today, the Gu announced that Microsoft is partnering with Docker to bring Docker to Windows.

Microsoft and Docker are integrating the open-source Docker Engine with the next release of Windows Server. This release of Windows Server will include new container isolation technology, and support running both .NET and other application types (Node.js, Java, C++, etc) within these containers. Developers and organizations will be able to use Docker to create distributed, container-based applications for Windows Server that leverage the Docker ecosystem of users, applications and tools.

How exciting! I've spent the last few hours drilling into Docker and what this announcement might mean for the future of .NET application deployments. Here are my thoughts so far.

Containers vs. virtual machines

Apart from Scott's post I can't find much information about the container support in Windows Server, so I'll prefix this by saying that this is all speculation, purely on the assumption that they'll work similarly to Linux containers.

Once upon a time, you'd have a single physical server, running IIS with a hundred websites. Now, with the rise of virtualization and cloud computing, we tend to have a single physical server, running dozens of VM's, each of which runs a single application.

Why do we do it? It's really about isolation. Each application can run on different operating systems, have different system libraries, different patches, different Windows features (e.g., IIS installed), different versions of the .NET runtime, and so on. More importantly, if one application fails so badly that the OS crashes, or the OS needs to restart for an update, the other applications aren't affected.

In the past, we'd start to build an application on one version of the .NET framework (say, 3.5), only to be told there's no way anyone is putting 3.5 on the production server because there are 49 other applications on that server using 3.0 that might break, and it will take forever to test them all. Virtualization has saved us from these restrictions.

From a deployment automation perspective, a build server compiles code, and produces a package ready to be deployed. The Octopus Deploy server pushes that package to a remote agent, the Tentacle, to deploy it.

Deployment today with Octopus on virtual machines

So, isolation is great. But the major downside is that a single physical server is effectively running many copies of the same OS kernel. Which is a real shame, since that OS is a server-class OS designed for multitasking. In fact, assuming you run one main application per virtual machine, your physical box is actually running more OS's than primary applications!

Containers are similar, but different: there's just one kernel, but the containers remain relatively isolated from one another. There's plenty of debate about just how secure containers are compared to virtual machines, so VMs might always be preferred when completely different customers share the same hardware. However, assuming a basic level of trust exists, containers are a great middle ground.

The What is Docker page provides a nice overview of why containers are different to virtual machines. I've not seen much about how the containers in Windows Server will work, but for this post I'll assume they'll be pretty similar.

Where Docker fits

Docker provides a layer on top of these containers that makes it easier to build images to run in containers, and to share those images. Docker images are defined using a text-based Dockerfile, which specifies:

  • A base OS image to start from
  • Commands to prepare/build the image
  • Commands to call when the image is "run"

For a Windows Dockerfile, I imagine it will look something like:

  • Start with Windows Server 2014 SP1 base image
  • Install .NET 4.5.1
  • Install IIS with ASP.NET enabled
  • Copy the DLL's, CSS, JS etc. files for your ASP.NET web application
  • Configure IIS application pools etc. and start the web site

Since it's just a small text file, your Dockerfile can be committed to source control. From the command line, you then build an "image" (i.e., execute the Dockerfile), which will download all the binaries and create a disk image that can be executed later. You can then run instances of that image on different machines, or share it with others via Docker's Hub.
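For reference, on Linux today that workflow looks roughly like this (the image name, tag, and ports are invented for illustration):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t mycompany/webapp:1.0 .

# Run an instance of the image, mapping container port 80 to host port 8080
docker run -d -p 8080:80 mycompany/webapp:1.0

# Share the image with others via Docker Hub
docker push mycompany/webapp:1.0
```

Presumably the Windows Server tooling will expose an equivalent build/run/push cycle.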

The big advantage of Docker and using containers like this isn't just in memory/CPU savings, but in making it more likely that the application you're testing in your test environment will actually work in production, because it will be configured exactly the same way - it is exactly the same image. This is a really good thing, taking building your binaries once to the extreme.

What it means for Octopus

First up, remember that Octopus is a deployment automation tool, and we're especially geared for teams that are constantly building new versions of the same application. E.g., a team building an in-house web application on two-week sprints, deploying a new release of the application every two weeks.

With that in mind, there are a few different ways that Docker and containers might be used with Octopus.

Approach 1: Docker is an infrastructure concern

This is perhaps the most basic approach. The infrastructure team would maintain Dockerfiles, and build images from them and deploy them when new servers are provisioned. This would guarantee that no matter which hosting provider they used, the servers would have a common baseline - the same system libraries, service packs, OS features enabled, and so on.

Instead of including the application as part of the image, the image would simply include our Tentacle service. The result would look similar to how Octopus works now, and in fact would require no changes to Octopus.

Octopus/Tentacle in a world of Docker

This has the benefit of making application deployments fast - we're just pushing the application binaries around, not whole images. And it still means the applications are isolated from each other, almost as if they were in virtual machines, without the overhead. However, it does allow for cruft to build up in the images over time, so it might not be a very "pure" use of Docker.

Approach 2: Build a new image per deployment

This approach is quite different. Instead of having lots of copies of Tentacle, we'd just need one on the physical server. On deployment, we'd create new images and run them in Docker.

  1. Build server builds the code, runs unit tests, etc. and creates a NuGet package
  2. Included in the package is a Dockerfile containing instructions to build the image
  3. During deployment, Octopus pushes that NuGet package to the remote machine
  4. Tentacle runs docker build to create an image
  5. Tentacle stops the instance if it is running, then starts the new instance using the new image

The downside of this is that since we're building a different image each time, we're losing the consistency aspect of Docker; each web server might end up with a slightly different configuration depending on what the latest version of various libraries was at the time.

On the upside, we do gain some flexibility. Each application might have different web.config settings etc., and Octopus could change these values prior to the files being put in the image.

Approach 3: Image per release

A better approach might be to build the Docker image earlier in the process, like at the end of the build, or when a Release is first created in Octopus.

Docker images in Octopus

  1. Build server builds the code, runs unit tests, etc.
  2. Build server (or maybe Octopus) runs docker build and creates an image
  3. The image is pushed, either to Octopus or to Docker Hub
  4. Octopus deploys that image to the remote machine
  5. Tentacle stops the instance if it is running, then starts the new instance using the new image

This approach seems to align best with Docker, and provides much more consistency between environments - production will be the same as UAT, because it's the exact same image running in production as was running in UAT.

There's one catch: how will we handle configuration changes? For example, how will we deal with different connection strings or API keys in UAT vs. production? Keep in mind that these values tend to change at a different rate than the application binaries or other files that would be snapshotted in the image.

In the Docker world, these settings seem to be handled by passing environment variables to docker run when the instance of the image is started. And while Node or Java developers might be conditioned to use environment variables, .NET developers rarely use them for configuration - we expect to get settings from web.config or app.config.
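In other words, under this model the environment-specific settings are supplied at run time rather than baked into the image; something like the following (the variable names and values are invented for the example):

```shell
# Start the container, passing environment-specific configuration
# as environment variables instead of baking it into the image
docker run -d -p 8080:80 \
  -e "ConnectionString=Server=uat-sql;Database=App;Integrated Security=true" \
  -e "ApiKey=UAT-KEY-HERE" \
  mycompany/webapp:1.0
```

A .NET application would then need code (or a startup script) to read those variables and rewrite its web.config accordingly.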

There's some other complexity too; at the moment, when deploying a web application, Octopus deploys the new version side-by-side with the old one, configures it, and then switches the IIS bindings, reducing the overall downtime on the machine. With Docker, we'd need to stop the old instance, start the new one, then configure it. Unless we build a new image with different configuration each time (approach #2), downtime is going to be tricky to manage.

Would Octopus still add value?

Yes, of course! :-)

Docker makes it extremely easy to package an application and all the dependencies needed to run it, and the containers provided by the OS make for great isolation. Octopus isn't about the mechanics of a single application/machine deployment (Tentacle helps with that, but that's not the core of Octopus). Octopus is about the whole orchestration.

Where Octopus provides value is for deployments that involve more than a single machine, or more than a single application. For example, prior to deploying your new web application image to Docker, you might want to backup the database. Then deploy it to just one machine, and pause for manual verification, before moving on to the rest of the web servers. Finally, deploy another Docker image for a different application. The order of those steps is important, and some run in parallel while some are blocking. Octopus will provide those high-level orchestration abilities, no matter whether you're deploying NuGet packages, Azure cloud packages, or Docker images.

Future of Azure cloud service projects?

Speaking of Azure cloud packages, will they even be relevant anymore?

There's some similarity here. With Azure, there are web sites (just push some files, and it's hosted for you on existing VMs), or you can provision entire VMs and manage them yourself. And then in the middle there are cloud services - web and worker roles - which involve provisioning a fresh VM every deployment, and rely on the application and OS settings being packaged together. To be honest, in a world of Docker on Windows, it's hard to see there being any use for these kinds of packages.


This is a very exciting change for Windows, and it means that some of the other changes we're seeing in Windows start to fit together. Docker leans heavily on other tools in the Linux ecosystem, like package managers, to configure the actual images. In the Windows world, that didn't exist until very recently with OneGet. PowerShell DSC will also be important, although I do feel that the syntax is still too complicated for it to gain real adoption.

How will Octopus fit with Docker? Time will tell, but as you can see we have a few different approaches we could take, with #3 being the most likely (#1 being supported already). As the next Windows Server with Docker gets closer to shipping we'll keep a close eye on it.

SSL 3.0 "POODLE" and Octopus Deploy

There's a newly discovered security vulnerability named POODLE:

The attack described above requires an SSL 3.0 connection to be established, so disabling the SSL 3.0 protocol in the client or in the server (or both) will completely avoid it. If either side supports only SSL 3.0, then all hope is gone, and a serious update required to avoid insecure encryption. If SSL 3.0 is neither disabled nor the only possible protocol version, then the attack is possible if the client uses a downgrade dance for interoperability.

As discussed in our post on Heartbleed and Octopus Deploy, we use the .NET framework's SslStream class to set up a secure connection whenever the Octopus Deploy server and Tentacle deployment agents communicate.

When creating an SslStream, you specify the protocols to use. .NET 4.0 supports SSL 2.0, 3.0, and TLS 1.0. .NET 4.5 supports SSL 2.0, 3.0, and TLS 1.0, 1.1 and 1.2.

Interestingly, the default protocol value (in both .NET 4.0 and 4.5) is Tls | Ssl3. In other words, TLS 1.0 is preferred, but if the client/server only supports SSL 3.0, then it will fall back to that. As discussed in the paper, this is a problem even if your client/server support TLS, since an attacker could force a downgrade.

But there's good news - in Octopus, when we construct our SslStream, we're specific about the protocol to use: we limit the connection to TLS 1.0 (Octopus runs on .NET 4.0, so we can't do TLS 1.1/1.2 yet). Since we control both the client and server, we don't need to worry about falling back to SSL 3.0, so we don't allow it.

We've actually been doing this for a long time now; in January 2013 we published an open source project called Halibut, which was a prototype that eventually morphed into the communication stack we use between Octopus and Tentacle. Even back then we were specific about only supporting TLS:

ssl.AuthenticateAsServer(serverCertificate, true, SslProtocols.Tls, false);

Things are a little different with the Octopus web portal (the HTML web front end used to manage your Octopus server). The portal is hosted on top of HTTP.sys, the kernel-mode driver behind IIS. Out of the box the portal uses HTTP, but you can configure your web portal to be available over HTTPS if you prefer.

From what I understand, IIS and HTTP.sys use whatever protocols are supported by SChannel, which means they'll allow SSL 3.0. It looks like a registry change is necessary to disable SSL 3.0 in SChannel in order to prevent IIS/HTTP.sys from using it.
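The registry change commonly cited for this is along the following lines (I'm assuming a standard SChannel setup here; back up the registry first, and note that a reboot is required before the change takes effect):

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server" /v Enabled /t REG_DWORD /d 0 /f
```

This disables SSL 3.0 for server-side connections only; a sibling Client key controls outbound connections.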

Microsoft also have a security advisory that uses Group Policy to disable SSL 3.0, but it seems focussed on Internet Explorer and not IIS.

TL;DR: Octopus/Tentacle communication isn't affected by POODLE. The web portal (if you expose it over HTTPS) might be, just as any other site you serve over HTTPS using IIS might be.

As always, Troy Hunt has a good write up on the POODLE bug.

Deploy ASP.NET applications to Azure Web Sites

Lately we have been getting more and more people wanting to deploy their Azure Web Sites from Octopus. The problem is that there is currently no out-of-the-box functionality for this.

Since there currently isn't a built-in way to do it, one of our users has created a step template (available on the Octopus Library site together with a bunch of other useful step templates) that runs a PowerShell script using Web Deploy to deploy your application to Azure. With that in mind, I thought I'd write a small(ish) blog post stepping through how to get your ASP.NET application set up and ready to be deployed to Azure using this step template.

For the purposes of this blog post, I will create a demo ASP.NET MVC application in Visual Studio 2013.

Create your ASP.NET application

First, let's select our project type and give it a name.

Create New Project

Then specify the template to use. I will just use the provided MVC template, and I will leave the Host in the cloud checkbox unchecked as I want Octopus Deploy to handle my deployments.

Specify Web Template

Once the project has been created, press F5 to run your shiny new ASP.NET MVC application.

Web site up and running

Nothing too exciting, but it gives us a starting point from which we can get our deployment set up and running.

Create the NuGet package

As Octopus Deploy uses NuGet packages when deploying your applications, we have created a little utility called OctoPack that creates a NuGet package from the output files produced when you build your project.

Side note: the NuGet package that OctoPack creates is slightly different to the NuGet packages that you install from the NuGet Gallery. Our NuGet packages are just a bunch of files and folders that make up the structure of your application.
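OctoPack will generate a NuSpec file automatically if your project doesn't contain one, but you can add your own to the project root to control the package metadata. A minimal sketch (the id, version, author and description values here are illustrative assumptions):

```
<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <!-- Illustrative values; match the id to your project name -->
    <id>OctoWeb</id>
    <version>1.0.0</version>
    <authors>Your Name</authors>
    <description>The OctoWeb ASP.NET MVC application, packaged for Octopus Deploy.</description>
  </metadata>
</package>
```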

Add OctoPack NuGet package to your project

To add OctoPack to our project, right-click your solution and select Manage NuGet Packages for Solution, search for octopack and click the Install button.

Install OctoPack

Select the projects to install OctoPack in. In my case, I only have one project, so I select it and click Ok.

Select project to install OctoPack in

OctoPack Installed

Build project and generate NuGet package from the command line using MSBuild

Now that OctoPack is installed, we can tell it to generate a NuGet package for us when we build our solution from the command line.

From a command prompt enter:

C:\Code\OctoWeb\OctoWeb>msbuild OctoWeb.sln /t:build /p:RunOctoPack=true

If everything worked, you should see output similar to the below:

Microsoft (R) Build Engine version 12.0.30723.0
[Microsoft .NET Framework, version 4.0.30319.34014]
Copyright (C) Microsoft Corporation. All rights reserved.

Building the projects in this solution one at a time. To enable parallel build, please add the "
/m" switch.
Build started 23/09/2014 3:25:24 PM.
Project "C:\Code\OctoWeb\OctoWeb\OctoWeb.sln" on node 1 (build target(s)).
  Building solution configuration "Debug|Any CPU".
Project "C:\Code\OctoWeb\OctoWeb\OctoWeb.sln" (1) is building "C:\Code\OctoWeb\OctoWeb\OctoWeb\
OctoWeb.csproj" (2) on node 1 (default targets).
  OctoWeb -> C:\Code\OctoWeb\OctoWeb\OctoWeb\bin\OctoWeb.dll
  OctoPack: Get version info from assembly: C:\Code\OctoWeb\OctoWeb\OctoWeb\bin\OctoWeb.dll
  Using package version:
  OctoPack: Written files: 101
  OctoPack: A NuSpec file named 'OctoWeb.nuspec' was not found in the project root, so the file
   will be generated automatically. However, you should consider creating your own NuSpec file  
  so that you can customize the description properly.
  OctoPack: Packaging an ASP.NET web application
  OctoPack: Add content files
  OctoPack: Add binary files to the bin folder
  OctoPack: Attempting to build package from 'OctoWeb.nuspec'.
  OctoPack: Successfully created package 'C:\Code\OctoWeb\OctoWeb\OctoWeb\obj\octopacked\OctoWe
  OctoPack: Copy file: C:\Code\OctoWeb\OctoWeb\OctoWeb\obj\octopacked\OctoWeb.
  OctoPack: OctoPack successful
Done Building Project "C:\Code\OctoWeb\OctoWeb\OctoWeb\OctoWeb.csproj" (default targets).

Done Building Project "C:\Code\OctoWeb\OctoWeb\OctoWeb.sln" (build target(s)).

Build succeeded.
    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:01.86

And if you take a look in the folder where OctoPack says it created the NuGet package, you should see that it is, in fact, there.

OctoPack NuGet package created

And if you open the generated NuGet package in NuGet Package Explorer, you should see that OctoPack has packed up your web site as it will be deployed to your web server.

NuGet Package Explorer

If you want OctoPack to copy the created NuGet package to a local folder or file share, you can use the following call to MSBuild:

C:\Code\OctoWeb\OctoWeb>msbuild OctoWeb.sln /t:build /p:RunOctoPack=true /p:OctoPackPublishPackageToFileShare=C:\NuGet

or, to publish to the built-in repository in Octopus:

C:\Code\OctoWeb\OctoWeb>msbuild OctoWeb.sln /t:build /p:RunOctoPack=true /p:OctoPackPublishPackageToHttp=http://your-octopus-server/nuget/packages /p:OctoPackPublishApiKey=API-ABCDEFGMYAPIKEY

Modify .csproj to generate NuGet package when building your project

If you want to generate the NuGet package and publish it to a local file share every time you build your solution, you can modify your .csproj file and add the OctoPack properties to the main PropertyGroup (the one that already contains the Configuration and Platform elements):

    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
    <RunOctoPack>true</RunOctoPack>
    <OctoPackPublishPackageToFileShare>C:\NuGet</OctoPackPublishPackageToFileShare>

or, if you want to publish the NuGet package to a local file share when a debug build is performed and to the built-in repository only when a release build is performed, add the properties to the configuration-specific property groups instead:

    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
      <RunOctoPack>true</RunOctoPack>
      <OctoPackPublishPackageToFileShare>C:\NuGet</OctoPackPublishPackageToFileShare>
    </PropertyGroup>
    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
      <RunOctoPack>true</RunOctoPack>
      <OctoPackPublishPackageToHttp>http://your-octopus-server/nuget/packages</OctoPackPublishPackageToHttp>
      <OctoPackPublishApiKey>API-ABCDEFGMYAPIKEY</OctoPackPublishApiKey>
    </PropertyGroup>

Setting up your new Azure Web Site

Now we will set up the Azure Web Site that we will be deploying to.

Log in to the Azure Management Portal and create a new web site.

Create New Azure Web Site

Once the web site has been created,

Azure Web Site created

click it to access the settings for the web site.

Download the publish profile

From the start page we will download the publish profile settings file to get the values we need to set up our deployment process within Octopus Deploy. Click the Download the publish profile link to download the required settings file.

Download Publish Profile

Now that we've got everything outside of Octopus Deploy set up, we can move over to your Octopus server to get our project and deployment process ready to deploy your new Azure Web Site.

Add Web Deploy step template to Octopus

The first thing we need to do, now that we have our web site packaged up into a NuGet package, is to import the Web Deploy - Publish Website (MSDeploy) step template from the Octopus Library site.

Get the 'Web Deploy - Publish Website (MSDeploy)' step template from the Octopus Library

Once at the Octopus Library site, search for web deploy.

Web Deploy Step Template

Click the result that is returned and then click the big, green Copy to clipboard button.

Web Deploy Step Template details

Import the step template into Octopus Deploy

Log in to your Octopus server and go to Library -> Step Templates. Once on the Step Templates tab, click the Import link.

Import Step Template

This will display the Import dialog. Paste the Web Deploy step template you copied from the Octopus Library site into the text area provided.


Click the Import button and the step template will be imported.


Great, now we're ready to setup our Project and Deployment process to start deploying our ASP.NET MVC application to Azure Web Sites.

Setting up your Project in Octopus Deploy

The next thing to do is to set up our new Project and define the Deployment Process that we will use to get our ASP.NET MVC application deployed to our Azure Web Site.

Create our Project

Go to Projects -> All Projects, then click the Add Project button on the Project Group where you want your project to go.

Create New Octopus Project

Create New Octopus Project Details

Octopus Project

Congratulations, you've got yourself a shiny new project! ;)

Define your Project variables

To make it easy to set up your project to deploy to multiple environments, we will create project variables that can be scoped to environments, roles and machines.

Open the Variables tab on the project site, then add variables for website name, publish URL, user name and password. We will make the password variable a Sensitive Variable so that we can keep it secret.

Open up the Publish Profile file that you downloaded earlier, and grab the publishUrl, msdeploySite, userName and userPWD values from the publish profile for Web Deploy (e.g. <publishProfile profileName="octowebdemo - Web Deploy">), then fill in the values in the appropriate variables. Then click Save.
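For reference, the Web Deploy section of a downloaded publish profile file looks roughly like the following; the attribute values here are illustrative placeholders, so copy the real values from your own file:

```
<publishProfile profileName="octowebdemo - Web Deploy"
                publishMethod="MSDeploy"
                publishUrl="octowebdemo.scm.azurewebsites.net:443"
                msdeploySite="octowebdemo"
                userName="$octowebdemo"
                userPWD="your-password-here"
                destinationAppUrl="http://octowebdemo.azurewebsites.net">
  <databases />
</publishProfile>
```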

Octopus Project Variables

For this demo, I won't scope the variables as I will only have one environment, one machine and one role.

Define your deployment process

Alright, now we're down to the business end of this whole process.

Now we get to specify the deployment process for our project. It will consist of two steps: a NuGet package step and our imported PowerShell step for Web Deploy.

Optionally, you can add another step to 'warm up' the website once it's been deployed. It just so happens that the Library site has a step template for just that: search for Test URL and import the step template that is returned.

Open the Process tab on the Project page.

Deployment Process tab

Add the 'Deploy a NuGet package' step

Click the Add Step button.

Select the Deploy a NuGet package step.

NuGet Package step

NuGet Package step setup

Fill in the necessary details, specifying your web application's NuGet package from the NuGet feed where it has been published (in my case, a local folder on disk).

NuGet Package step completed

Click Save.

Project deployment process with 1 step

Add the 'Web Deploy - Publish Website (MSDeploy)' step

Now it's time to add our Web Deploy step. Click the Add Step button again and select the Web Deploy - Publish Website (MSDeploy) step.

Web Deploy step

Fill in the necessary details, using variable bindings to specify the Azure specific details.

Web Deploy step details completed

The Package Step Name required is the name of the step that extracted the files contained in your NuGet package; it is used to locate the files on disk that need to be uploaded to your Azure Web Site.

Click Save.

Project deployment process with 2 steps

That's it, we're now ready to create a release and deploy it to Azure.

Create a release

To create a release, click the Create Release button at the top of the Project page.

Create a Release

On the release page, you can choose to specify a different version number from the one Octopus Deploy pre-populates for you (based on the Project setting you have chosen), which version of the NuGet package for your web application to use (I only have one, so I will use that) and any release notes that should be included with the release.

Release details completed

Click Save. This will take you to the Release Overview page.

Release Overview

Now we want to deploy this release to our Azure Web Site, so click the Deploy this release button. Select the environment to deploy to. In my case, I only have my Dev environment set up, so I will choose that.

Deploy Release to Dev

Deploy Release to Dev details

On the deployment page, you can choose to schedule the release for a later date/time, and which steps to deploy (and to what machines).

I will stick with the defaults and just click the Deploy Release button.

Deploy progressed

Once the deploy has completed, open a browser and browse to the URL of your web application running in Azure.

Deploy completed

Azure Web Site running

Congratulations, you have just deployed your Azure Web Site from Octopus Deploy using Web Deploy!

Updating and redeploying the application to Azure

Now that we have our application deploying to Azure from Octopus, let's make some modifications to the application and then deploy the new version to Azure.

I will update the application name, and some of the colors used.

Modified web site running

Recreate your NuGet package

When recreating the NuGet package from the command line, instead of using the version stored in the [assembly: AssemblyVersion] attribute in the AssemblyInfo.cs file, I will override it by passing the OctoPackPackageVersion parameter to MSBuild.

C:\Code\OctoWeb\OctoWeb>msbuild OctoWeb.sln /t:build /p:RunOctoPack=true /p:OctoPackPackageVersion=

The end result should look similar to this:

  OctoPack: Attempting to build package from 'OctoWeb.nuspec'.
  OctoPack: Successfully created package 'C:\Code\OctoWeb\OctoWeb\OctoWeb\obj\octopacked\OctoWeb.'.

Copy the new NuGet package to the location where Octopus can access it.

Create a new release in Octopus Deploy

In Octopus, go back to the Releases tab and click Create Release. Octopus should now pick up the latest package (v1.0.0.1).

Create a new release

Click Save.

Deploy latest version to Azure Web Sites

All that is left now is to deploy the new release to Azure.

Click Deploy this release, pick the environment to deploy to and finally click Deploy Release.

Create new release completed

Deploy new release in progress

This deploy should be much faster than the initial deploy, as it will only upload files that have changed. And once the deploy has finished, the Azure Web Site should be updated with the changes that were made.

Deploy new release completed

Modified Azure Web Site running

Installing an MSI with Octopus Deploy

Octopus Deploy lets you deploy a wide range of software. Part of the reason for this is that Octopus supports scripting as part of the deployment process, allowing for virtually unlimited flexibility. The Octopus Deploy Library lets us, and the community, expand on the capabilities of Octopus Deploy.

Our latest addition to the library is the new Microsoft Software Installer (MSI) Step template. If you're using MSI installers in your project and need to deploy one to one of your Tentacles, this script will help you do just that.

How it works

The MSI step template will install, repair or remove MSI files on the file system. Running the step builds the command to invoke the installer with the given MSI and the appropriate arguments. Logs of the installation are written to disk, then recorded in the Octopus log after the installation is complete. To use the step template, Windows Installer 3.0 must be present on the target system. The target MSI must also support quiet (no user interface) installation. This version of the script does not support MSIs that require machine restarts, and will always run installations with the norestart flag.
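Under the hood, the command the step builds looks roughly like the following msiexec invocation; the MSI path and log file location here are illustrative assumptions, not the step's exact output:

```
msiexec /i "C:\Octopus\Applications\MyApp\installer.msi" /quiet /norestart /l*v "C:\Octopus\Logs\installer.log"
```

The /quiet switch suppresses the user interface, /norestart prevents machine restarts, and /l*v writes a verbose log to the given file.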


If your build process generates an MSI installer, then to use it with Octopus Deploy it must be bundled inside an Octopus Deploy NuGet package. To bundle the installer, you'll need to run the octo.exe pack command. This command calls NuGet under the hood and generates a nuspec automatically. You'll just need a directory containing the files you want to package; in this case, just the MSI. The resulting NuGet package will look something like the following.

Inside the package

octo.exe pack uses a number of command-line arguments to avoid needing a nuspec file. The minimum possible usage only requires specifying the package id, like so: octo pack --id=MyCompany.MyApp. The full list of arguments is below.

Usage: octo pack [<options>]

Where [<options>] is any of:
  --id=VALUE               The ID of the package; e.g. MyCompany.MyApp
  --overwrite              [Optional] Allow an existing package file of the same ID/version to be overwritten
  --include=VALUE          [Optional, Multiple] Add a file pattern to include, relative to the base path e.g. /bin/*.dll - if none are specified, defaults to **
  --basePath=VALUE         [Optional] The root folder containing files and folders to pack; defaults to '.'
  --outFolder=VALUE        [Optional] The folder into which the generated NUPKG file will be written; defaults to '.'
  --version=VALUE          [Optional] The version of the package; must be a valid SemVer; defaults to a timestamp-based version
  --author=VALUE           [Optional, Multiple] Add an author to the package metadata; defaults to the current user
  --title=VALUE            [Optional] The title of the package
  --description=VALUE      [Optional] A description of the package; defaults to a generic description
  --releaseNotes=VALUE     [Optional] Release notes for this version of the package
  --releaseNotesFile=VALUE [Optional] A file containing release notes for this version of the package


To use the step template in your local Octopus Deploy instance, you'll need to import it from the Octopus Deploy Library. In the library, hit the big green Copy to clipboard button and paste it into the import window under Library > Step templates > Import.

Once it's in the library, you can add it as a new step in your project's deployment process. Note that you'll still need a package deployment step to get the MSI onto your server before the installer step can run. The MSI step has three custom properties: the path of the MSI, the installer action, and any installer properties. Usually you'll just need to specify the location of the MSI to be installed, which can be built using an Octopus variable such as #{Octopus.Action[Step 1].Output.Package.InstallationDirectoryPath}\installer.msi. Note that the variable includes the name of the step that extracts the installer, so Step 1 will have to be replaced with your step name.


After the step has been saved, your project is now ready to deploy MSI files.

Domain does DevOps

I was stoked to come across this article in ITNews: Domain does DevOps. Domain is one of Australia's largest property buying and renting websites.

This week Domain Group technology director Paul McManus revealed the company has embraced DevOps to such a degree that it was able to push 150 small changes into production in a single month, up from eight to ten under its former model. At the 'build' end of the cycle, the team uses a set of products developed in Australia - Atlassian's Bamboo for build orchestration and Octopus Deploy for deployment orchestration. Read the full article at ITNews

The Domain.com.au delivery pipeline

You can read a lot more detail about how their deployment pipeline works on their technical blog.

I have to say that we were not convinced that Octopus Deploy would be suitable for deployments in an auto-scaling environment on AWS (it seems to be more directed at Microsoft Azure when it comes to cloud) but it has done the job brilliantly for us so far. One of the best things is that it has been built API-first, which means that anything you can do from the Octopus dashboard you can also do with the API. It is also a very polished product and we haven’t really had any issue with it for deployment of our new micro-services on AWS or of our legacy applications on-premise.

You just have to watch the video of their Release Train.

To get the train running, I added a snippet of PowerShell code to Octopus based roughly on this. It pulls in some Octopus variables, such as project, environment and release number, to create a tweet, which goes out via the @DomainTrain account.
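Their exact script isn't shown, but a minimal sketch of the idea, using Octopus's built-in $OctopusParameters dictionary from a PowerShell step (the Send-Tweet helper is a hypothetical placeholder for whatever posts to Twitter):

```
# Pull deployment details from Octopus system variables
$project     = $OctopusParameters["Octopus.Project.Name"]
$environment = $OctopusParameters["Octopus.Environment.Name"]
$release     = $OctopusParameters["Octopus.Release.Number"]

$tweet = "Choo choo! $project $release is rolling out to $environment"

# Send-Tweet is a hypothetical helper; Domain's script posts via @DomainTrain
Send-Tweet -Message $tweet
```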

The Domain Train

PS: If you are in Sydney, they are hiring!

We're hiring: Support Engineer (x2, US-based)

Right now our full-time team are all based in Australia. For product development, it makes no difference. But it does make providing support in US time zones difficult. 5:00 AM support calls are tough, and we're probably not in the best frame of mind to diagnose production issues at that time in the morning.

Traditionally, support at Octopus has been a reactive position: people try our software, and if they hit a problem, they reach out and we provide support. My goal is to grow our support capability beyond reactive and into proactive support.

With that in mind, we're currently hiring for two US-based, full time support team members. If you know Octopus, and live in the US or a US-friendly time zone, why not join us? Help us eliminate remote desktop from production deployments!

If you agree with us that support is one of the most important jobs in the company, that support staff should be consulted on feature design and product changes, and you are really driven to impress and delight customers, then we'd love to have you: Support Engineer

(We're also hiring for a Brisbane-based test engineer)