Installing an MSI with Octopus Deploy

Octopus Deploy lets you deploy a wide range of software. Part of the reason is that Octopus supports scripting as part of the deployment process, allowing for virtually unlimited flexibility. The Octopus Deploy Library lets us, and the community, expand on the capabilities of Octopus Deploy.

Our latest addition to the library is the new Microsoft Software Installer (MSI) Step template. If you're using MSI installers in your project and need to deploy one to one of your Tentacles, this script will help you do just that.

How it works

The MSI step template will install, repair or remove MSI files on the file system. Running the step builds the command to invoke Windows Installer with the given MSI and the appropriate arguments. Logs of the installation are written to disk, then recorded in the Octopus log after the installation is complete. To use the step template, Windows Installer 3.0 must be present on the target system, and the target MSI must support quiet (no user interface) installation. This version of the script also doesn't support MSIs that require machine restarts, and will always run installations with the /norestart flag.
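The generated invocation looks roughly like the following. This is only a sketch (the real step template is PowerShell, and the paths here are illustrative), but it shows how the action, quiet mode, restart suppression and logging flags fit together:

```shell
# Illustrative paths; /i = install, /x = uninstall, /f = repair
MSI_PATH='C:\Deploy\installer.msi'
LOG_PATH='C:\Deploy\installer.log'
ACTION='/i'

# Build the Windows Installer command line: quiet install, no restart,
# verbose log written to disk for later capture into the Octopus log
CMD="msiexec ${ACTION} \"${MSI_PATH}\" /quiet /norestart /l*v \"${LOG_PATH}\""
echo "$CMD"
```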

Deployment

If your build process generates an MSI installer, then to use it with Octopus Deploy it must be bundled inside an Octopus Deploy NuGet package. To bundle the installer, you'll need to run the octo.exe pack command. This command calls NuGet under the hood and generates a nuspec automatically. You'll just need a directory containing the files you want to package, in this case just the MSI. The resulting NuGet package will look something like the following.

Inside the package

Octo pack uses a number of command line arguments to avoid needing a nuspec file. The minimum possible usage only requires specifying the package id, like so: octo pack --id=MyCompany.MyApp. The full list of arguments is shown below.

Usage: octo pack [<options>]

Where [<options>] is any of:
  --id=VALUE               The ID of the package; e.g. MyCompany.MyApp
  --overwrite              [Optional] Allow an existing package file of the same ID/version to be overwritten
  --include=VALUE          [Optional, Multiple] Add a file pattern to include, relative to the base path e.g. /bin/*.dll - if none are specified, defaults to **
  --basePath=VALUE         [Optional] The root folder containing files and folders to pack; defaults to '.'
  --outFolder=VALUE        [Optional] The folder into which the generated NUPKG file will be written; defaults to '.'
  --version=VALUE          [Optional] The version of the package; must be a valid SemVer; defaults to a timestamp-based version
  --author=VALUE           [Optional, Multiple] Add an author to the package metadata; defaults to the current user
  --title=VALUE            [Optional] The title of the package
  --description=VALUE      [Optional] A description of the package; defaults to a generic description
  --releaseNotes=VALUE     [Optional] Release notes for this version of the package
  --releaseNotesFile=VALUE [Optional] A file containing release notes for this version of the package
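Packing an MSI might look like this (the id, version and folder names are examples, and octo.exe needs to be on your PATH):

```shell
# Hypothetical layout: a folder containing only the MSI to package
mkdir -p installer-output
touch installer-output/MyApp.msi

# The pack command itself (illustrative arguments):
# octo pack --id=MyCompany.MyApp.Installer --version=1.2.3 \
#           --basePath=installer-output --outFolder=packages
```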

Configuration

To use the step template in your local Octopus Deploy instance, you'll need to import it from the Octopus Deploy Library. In the library hit the big green Copy to clipboard button and paste it into the import window under Library > Step templates > Import.

Once it's in the library, you can add it as a new step in your project's deployment process. Note that you'll still need a package deployment step to get the MSI onto your server before the installer step can run. The MSI step has three custom properties: the path of the MSI, the installer action, and any installer properties. Usually you'll just need to specify the location of the MSI to be installed, which can be built using an Octopus variable: #{Octopus.Action[Step 1].Output.Package.InstallationDirectoryPath}\installer.msi. Note that the variable includes the name of the step that extracts the installer, so Step 1 will have to be replaced with your step name.

Settings

After the step has been saved, your project is ready to deploy MSI files.

Domain does DevOps

I was stoked to come across this article in ITNews: Domain does DevOps. Domain is one of Australia's largest property buying and renting websites.

This week Domain Group technology director Paul McManus revealed the company has embraced DevOps to such a degree that it was able to push 150 small changes into production in a single month, up from eight to ten under its former model. At the 'build' end of the cycle, the team uses a set of products developed in Australia - Atlassian's Bamboo for build orchestration and Octopus Deploy for deployment orchestration. Read the full article at ITNews

The Domain.com.au delivery pipeline

You can read a lot more detail about how their deployment pipeline works on their technical blog.

I have to say that we were not convinced that Octopus Deploy would be suitable for deployments in an auto-scaling environment on AWS (it seems to be more directed at Microsoft Azure when it comes to cloud) but it has done the job brilliantly for us so far. One of the best things is that it has been built API-first which means that anything you can do from the Octopus dashboard you can also do with the API. It is also a very polished product and we haven’t really had any issues with it for deployment of our new micro-services on AWS or of our legacy applications on-premise.

You just have to watch the video of their Release Train.

To get the train running I added a snippet of PowerShell code to Octopus based roughly on this. It pulls in some Octopus variables, such as project, environment and release number, to create a tweet, which goes out via the @DomainTrain account.

The Domain Train

PS: If you are in Sydney, they are hiring!

We're hiring: Support Engineer (x2, US-based)

Right now our full-time team are all based in Australia. For product development, it makes no difference. But it does make providing support in US time zones difficult. 5:00 AM support calls are tough, and we're probably not in the best frame of mind to diagnose production issues at that time in the morning.

Traditionally, support at Octopus has been a reactive position - people try our software, and if they hit a problem, they reach out and we provide support. My goal is to grow our support capability beyond reactive and into proactive support.

With that in mind, we're currently hiring for two US-based, full time support team members. If you know Octopus, and live in the US or a US-friendly time zone, why not join us? Help us eliminate remote desktop from production deployments!

If you agree with us that support is one of the most important jobs in the company, that support staff should be consulted on feature design and product changes, and you are really driven to impress and delight customers, then we'd love to have you. Support Engineer

(We're also hiring for a Brisbane-based test engineer)

Encrypting connection strings in Web.config

When specifying a connection string, it's usually preferable to use Windows Authentication because you don't need to put passwords in your Web.config file. However, this doesn't work in every scenario, so if you have to use a password in your connection string then it's a good idea to encrypt it. Step Templates are a great way to enhance the native capabilities of Octopus Deploy. So to make encrypting the connection string section easier, I created a new template to do just that.

You can grab the new IIS Website - Encrypt Web.Config Connection Strings template from the Octopus Deploy Library. Under the covers, this template uses the aspnet_regiis tool to encrypt the Web.config. Adding the step to your deployment process will take you to the step details page for configuration.

Configure Encrypt Connection Strings

This template takes a single parameter called Website directory that you typically set to the InstallationDirectoryPath for the package deployment step. Note that you must specify the name of the package deployment step as part of the variable.

#{Octopus.Action[MyDeployPackageStep].Output.InstallationDirectoryPath}
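Under the hood, the command the template runs is along these lines. This is a sketch only: the site path is an example, and on a real server you'd invoke the aspnet_regiis.exe that matches your .NET Framework version:

```shell
# -pef encrypts a named config section at a physical (file system) path;
# the path here is a typical Octopus installation directory, as an example
SITE_PATH='C:\Octopus\Applications\Production\MyWebsite\1.0.0'
CMD="aspnet_regiis.exe -pef connectionStrings \"${SITE_PATH}\""
echo "$CMD"
```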

This step is designed to run after a package has been deployed to a web server. Unfortunately, this means that if you use the IIS web site and application pool feature, there is a slight window in which the Web.config will not be encrypted. To be completely safe, it would be ideal to apply the encryption before repointing IIS to the new website. To work around this issue, turn off the feature and use a custom PowerShell script to update IIS.

After you've added the step to your deployment process, your next deployment will have a nicely encrypted connection string section.

Deploy a PHP Site to IIS with Octopus

As you have probably read by now, my background pre-Octopus was PHP in a LAMP environment, doing system administration duties and manually releasing and deploying PHP sites. So when a customer recently asked if they could deploy a PHP site to IIS, Paul in his wisdom asked me to write up a blog post about just that.

It's all about the packages

Octopus, as we know, relies on NuGet packages to deploy files. So to start, we need to put all of our PHP files into a NuGet package. Lucky for us, the NuGet Package Explorer has this covered. Adding files and creating a NuGet package really couldn't be easier.

NuGet Explorer

Development Folder

Once my package was created, and uploaded to the internal NuGet repository in Octopus, I was ready to go!

NuGet Repository

Creating the Project in Octopus

I then created a new project, and selected the NuGet Package step.

Package Deployment Step

As you can see, I chose the Custom Install Directory feature, as I had a pre-existing site set up and in this instance wanted to always deploy to the same location. But since we are using IIS, you could choose the other IIS options instead. I have also added a couple of other Octopus features to show that they can be used with .php files: I made the Custom Install Directory use a variable, and I created a test config file containing variables that will need replacing when it deploys to Production.

Test Config File

So I have used the Substitute Variables in Files feature and defined my config.php file.

Test Variables

Here are all three variables: one for my Custom Installation Directory, and the two variables in the config.php file.

Time to Deploy

Now that my package is chosen as a step, and my variables are configured and scoped, it is time to test a release. I guess I should create one!

Creating a Release

Deployment time!

Deployment Time

And here is the deployment log for the package:

Deployment Log

Results on Server

And we are done, Octopus has deployed my PHP files. So let's have a look at the results. Here is my lovely phpinfo() page.

PHPINFO display

And my files are deployed in the correct location on my IIS Server.

Files on Disk

We also cannot forget about my config.php, which has had its variables nicely substituted.

Config.php Contents

Complete Success

So my deployment of PHP files onto an IIS Server was a complete success. I was able to utilise everything that Octopus already did; the only thing I had to do manually was create my package, but it really is a case of 'find a folder and add'. Yes, this deployment was on an IIS Server, and mostly PHP is run on Linux, but maybe that deployment reality really isn't too far from our future.

Previous deployments

A small feature that we added to 2.5.4 is the ability to easily view previous deployments when looking at a project overview. By default, you just see the latest deployment per environment:

Previous tab

You can click the Previous tab to expand and show the previous successful deployments to each environment:

Previous deployments

The goal is to make it easier to find the deployment to roll back to. This means that we only show successful deployments, and we only show deployments for a release other than the one currently deployed (e.g., if it took 4 attempts to deploy 3.18, you'd still see 3.17 as the previous deployment).

Similarly, when viewing deployments, we've added a list of previous deployments to that environment:

Previous deployments to an environment

Again, the goal is to quickly be able to help you roll back. This list actually shows not just the previous deployments, but any future deployments too (in case you are viewing an old deployment).

It's a minor change, but sometimes it's the small changes that really help. Happy deployments!

Introducing Vanessa

Vanessa Love

If you've contacted support lately, it's quite likely that you received a reply from Vanessa Love. In fact it's difficult to imagine how we handled the support volume without her. Welcome Vanessa!

Vanessa started her professional life in the world of PHP programming back in 2004, working mostly on small to medium business portals. Growing a team of developers in a small business led to her becoming the point of contact between customers and programmers, which turned out to be something she enjoyed.

With a gap needing to be filled, she moved into assisting the systems administrator of a Linux server farm and became the point of call for server issues and website setups. After helping customers with everything from domain purchases to complex configuration issues, the only logical next step was to move into the world of high-level technical support.

Switching from LAMP to Windows has been a fun challenge and she promises not to ask you if you tried turning it on and off again!

Vanessa is @fly401 on Twitter

5 Remote Desktop Alternatives

If you build applications to run on Windows servers, and you are involved in deployments, it's quite likely that you'll be spending time in remote desktop.

In the olden days, when ships were made of wood and men of steel, we'd have a couple of servers and run as many applications as we could on them. An IIS server with a dozen sites or applications wasn't just common, it was the standard.

Nowadays, of course, cloud. Virtualization means that instead of one server running many applications, we have one server running many virtual servers, each with a single application. This means we're seldom in just a single remote desktop session at a time.

The following tools help you more easily manage multiple remote desktop sessions at once.

1. Remote Desktop Connection Manager

It's free, and it's from Microsoft. What's not to love?

RDCMan

It can save credentials if you like, and is great for sharing connections between teammates. The only feature it lacks is that it can't save credentials for a remote desktop gateway. That's why we switched to...

2. mRemoteNG

An open source fork of mRemote, this is the tool that we currently use. The Octopus team is distributed, so we save the mRemoteNG settings file in Dropbox so that everyone on the team can use it to easily connect to any of our VMs.

mRemoteNG

3. RoyalTS

RoyalTS is a very nice looking commercial alternative, and has a killer feature: a button that lets you click "Start" remotely. I'm not sure who forgot to tell the UX team on Windows that people don't normally run Windows Server 2012 on tablets, but I'm sure they had a reason for making it nigh impossible to launch programs over remote desktop. Never fear, RoyalTS is here.

RoyalTS

4. Terminals

Another open source tabbed session manager, but looks to be actively developed. And the source code is in C#!

Terminals

5. Octopus Deploy!

OK, it's a shameless plug :-)

Octopus Deploy dashboard

Octopus Deploy is a remote desktop alternative in the same way that TeamCity/Team Build is a Visual Studio alternative.

Remote desktop tools are essential for diagnostics and some configuration tasks; there's no denying it. That said, our entire raison d'être here at Octopus Deploy is to make it so that a typical deployment involves no remote desktop whatsoever. Through better visibility, accountability and reliability, our goal is to reduce the time you spend in remote desktop sessions.

What's your experience with the tools above, and what did I miss?

RFC: Linux Deployments

Currently our highest-voted UserVoice idea is to add support for Linux deployments in Octopus. We are going to implement this by adding first-class support for servers running SSH, which will map very closely to the way Windows deployments with Octopus work today.

octopenguin

For this purpose, we will be introducing a new term in Octopus: agentless machines. An agentless machine will not be running a Tentacle; instead, it will use a different method of communication, e.g. SSH.

Our goal with this RFC is to make sure that the way we implement this feature will be suitable for the widest range of customers.

Introducing agentless machines

Setting up a new agentless machine, e.g. a Linux server running SSH, in Octopus will work the same way as when adding a new machine running a Tentacle.

Adding an agentless machine

Agentless machines are configured by choosing SSH rather than Listening or Polling as the communication style.

Adding an agentless machine

Environment with an agentless machine

Agentless machines appear on the Environments page just like regular Tentacles, showing their location and status (online/offline).

Environment with an agentless machine

Health checking an agentless machine

Typical Octopus tasks like health checks, ad-hoc scripts and so on run across all appropriate machines, including both Tentacle and agentless machines if both styles are being used.

Health check an agentless machine

Authentication

Our aim is to support the following authentication types for SSH target machines:

Authentication Types

Password

Password

Key without passphrase

Key without passphrase

Key with passphrase

Key with passphrase

Private keys will be stored in the Octopus database as encrypted properties.

Network topology

Connections to agentless machines won't be made directly from the Octopus Server; instead, one or more Tentacles will be used to make outbound connections to the machine. We're planning to add a hidden, "shadow" Tentacle running in a low-privilege process on the Octopus Server itself as a convenient default, but using specific Tentacles to handle different network topologies is also a feature we're considering.

Octopus footprint on agentless machines

Octopus will upload compressed packages to the target machine before any deployment takes place, so we require some local storage on the target machine, which will live under ~/.tentacle/. We will also extract the packages to a default location, as we do on a Tentacle machine, e.g. ~/.tentacle/apps/{environment}/{project}/{package}/{version}, and we will also support custom installation locations to move the files elsewhere.

Package acquisition

Because a Tentacle machine is required for SSH deployments, package acquisition for these deployments will change slightly from the way Windows deployments with Octopus work today.

The Tentacle machine will extract the NuGet package and create a .tar.gz tarball that will then be uploaded to the target machines.

The Tentacle machine can be co-located with the target machines to optimize bandwidth usage, i.e. Octopus uploads the package to the Tentacle, which in turn will send the package to the target machines.
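The acquisition step can be sketched as follows (the file names are illustrative, and the real Tentacle does this internally):

```shell
# 1. Simulate the contents of an extracted NuGet package
mkdir -p extracted/MyApp.1.0.0
echo 'hello' > extracted/MyApp.1.0.0/readme.txt  # placeholder file

# 2. Repack as a gzipped tarball ready for upload to the target machine
tar -czf MyApp.1.0.0.tar.gz -C extracted MyApp.1.0.0

# 3. The upload itself would then happen over SCP, e.g.:
# scp MyApp.1.0.0.tar.gz user@target:~/.tentacle/
```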

Deployment

Package deployment steps will run entirely via a single shell session on the target machine.

  1. We will check and ensure the Octopus scripts are up-to-date
  2. Package and supporting deployment files will be uploaded via SCP
  3. Deployment orchestration script will be executed
  4. Default installation directory will be created if it doesn't exist
  5. tar file will be unpacked
  6. predeploy will run
  7. If a custom installation directory has been specified
    • If the option to purge the directory before deployment is true, we purge the custom installation directory
    • Copy the extracted files to the custom directory
  8. deploy will run
  9. postdeploy will run
  10. Run retention policies to clean up old deployments
  11. Delete the Octopus variables file (to ensure sensitive variables aren't left on the server)
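Steps 4 to 9 of the sequence above might look roughly like this inside the orchestration script (heavily simplified: the directory layout and script names follow the RFC, the paths are examples, and error handling, custom installation directories and variable cleanup are omitted):

```shell
set -e
# Example default installation directory per the RFC's layout
APP_DIR="$HOME/.tentacle/apps/Production/MyApp/MyApp/1.0.0"

# 4. Create the default installation directory if it doesn't exist
mkdir -p "$APP_DIR"

# 5. Unpack the uploaded tarball into it (commented out here, since no
#    real package has been uploaded):
# tar -xzf "$HOME/.tentacle/MyApp.1.0.0.tar.gz" -C "$APP_DIR"

cd "$APP_DIR"

# 6, 8, 9. Run the user's convention scripts when they are present
for script in predeploy deploy postdeploy; do
  if [ -f "$script" ]; then
    bash "$script"
  fi
done
```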

Deployment scripts

The main deployment orchestration script will be written in bash, as this is the least common denominator amongst *nix distributions. This script will look for predeploy/deploy/postdeploy scripts that can be created by the user, and execute them if they are present.

The predeploy/deploy/postdeploy scripts can be written in the user's preferred scripting language (but the user will have to ensure that it is installed on the server that the deployment runs on).

  • predeploy
    • tasks to run before deployment, e.g. config transformations needed for your application.
  • deploy
    • tasks required for the actual deployment of your application.
  • postdeploy
    • tasks to run after deployment, e.g. cleaning up any temp files created during the deployment of your application.

The working directory will be the default installation directory for the predeploy script, and either the default or custom installation directory for the deploy and postdeploy scripts.
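As a concrete (entirely hypothetical) example, a predeploy script that activates an environment-specific config file before the files are copied to their final location might be as simple as:

```shell
# Set up a sample package root (normally this is the unpacked tarball);
# the file names are made up for illustration
mkdir -p pkg-root
echo 'production settings' > pkg-root/config.production.php

# --- predeploy (written in bash here, but any installed scripting
# language works) ---
# Swap in the environment-specific config before deploy copies files
# to the custom installation directory
cp pkg-root/config.production.php pkg-root/config.php
```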

Environment variables for deployments

Octopus has a more sophisticated variable system and syntax than Linux environment variables can support. Having to map between names like Octopus.Action[Install website].Status.Code and valid POSIX equivalents seems uncomfortable and error-prone. Large Octopus deployments also tend to carry a lot of variables, so we're uneasy about dumping these arbitrarily into the environment in which the deployment script runs.

Instead of setting environment vars directly, deployment scripts will have access to a tentacle command that can be used to retrieve values that they require. E.g. to retrieve the custom installation directory used by the deployment the user can call the tentacle command like so:

DEST=$(tentacle get Octopus.Action.Package.CustomInstallationDirectory)

This assigns the custom installation directory to the shell variable DEST (subsequently available to the script as $DEST).

Values with embedded spaces and so-on can be supported using " quotes.

Though we're unlikely to implement it in the first version of the command, we're considering some more sophisticated features like iteration:

for ACTION in $(tentacle get "Octopus.Action[*]")
do
    echo "The status of $ACTION was $(tentacle get "Octopus.Action[$ACTION].Status.Code")"
done

This highlights the kinds of opportunities we see to make writing deployment scripts more enjoyable.

Other features of the tentacle command

Using the tentacle helper will also provide consistent access to the commands supported using PowerShell cmdlets on Windows machines.

Setting output variables

Output variables can be sent to the Octopus server using tentacle set:

tentacle set ActiveUsers 3

Or:

ps -af | tentacle set RunningProcesses

Collecting artifacts

Files from the target machine can be collected as Octopus artifacts using tentacle collect:

tentacle collect InstallLog.txt

Running tools

Where we (or others) provide helper scripts that themselves need access to variables, paths and so-on, these can be invoked using tentacle exec:

tentacle exec xmlconfig Web.config

Deployment features

Features like XML configuration transformations/appsettings support will run on the target machine.

Supporting Octopus scripts and executables will be part of the default folder structure on the target machine, i.e. ~/.tentacle/tools/, in this folder we can also include helper apps using Mono for supporting .NET-specific conventions like XML configuration transformation/appsettings.

We can also include different scripting/executable options to support other deployment features.

Retention policies

Once a deployment has completed, we will apply the retention policy that has been specified for the project, just like we do with Windows deployments.

The user can specify keeping a number of days' worth of deployments, or a specific number of deployments. If either of these has been specified, we will remove any files that do not fall within the specified retention policy.

System requirements

Linux distributions can vary significantly in their default configuration and available packages. We're aiming to choose a widely-supported baseline that makes it possible to deploy with Octopus to practically any current Linux distribution.

The fundamental assumptions we will make about a target machine are:

  • It's accessible using SSH and SCP
  • The user's login shell is Bash 4+
  • tar is available

The platforms that we ourselves plan to build and test against are:

  • Amazon Linux AMI 2014.03
  • Ubuntu Server 12.04 LTS

We will do our best to remain distro-agnostic, but if you're able to choose one of these options for your own servers you'll be helping us provide efficient testing and support.

Outstanding Questions

  1. Management of platform-specific paths
    • where apps are being deployed to both Windows and Linux servers, paths like "Custom Installation Directory" will need to be specified separately for Linux and Windows. Can we make this experience better?
  2. Naming of the deploy scripts
    • predeploy/deploy/postdeploy, or
    • pre-deploy/deploy/post-deploy, or
    • pre_deploy/deploy/post_deploy?
  3. Customisation of paths where we will upload packages and extract packages by default
    • is it necessary to configure this via Octopus, or can locations like ~/.tentacle/apps be linked by an administrator to other locations as needed?
  4. Writing out variables like we do in PowerShell
    • In PowerShell we first encrypt them with DPAPI; is there a similar standard crypto function available on Linux?

We need your help!

What we would really appreciate is for customers who are already using SSH in Octopus, or that would want to start using it, to give us feedback on our plan of how to implement SSH deployments in Octopus.

Whether it be improvements to the suggested implementation above or if we've made assumptions that just will not work, then please let us know in the comments below.

RFC: Lifecycles

Lifecycles are a new concept in Octopus that will allow us to tackle a number of related suggestions that we've been longing to solve:

  • Automatically promoting between environments (triggers)
  • Marking a release as bad (so it cannot be deployed any more)
  • Preventing production deployments until test deployments are complete (gates)

Lifecycle example

Lifecycles and phases

A lifecycle is made up of a number of phases, each of which specifies triggers and rules around promotion. The simplest lifecycle, which would ship out of the box and be the default, would simply be:

Phase 1: Anything Goes
  • Allow manual deployment to: all environments

In other words, this lifecycle simply says "Releases can be deployed to any environment, in any order". It's total chaos!

A custom lifecycle might split the world into pre-production and production phases:

Phase 1: Pre-Production
  • Automatically deploy to: Development
  • Allow manual deployment to: UAT1, UAT2, UAT3, Staging
  • Minimum environments before promotion: 3
Phase 2: Production
  • Automatically deploy to:
  • Allow manual deployment to: Production
  • Minimum environments before promotion:

Finally, a more structured lifecycle might look like this:

Phase 1: Development
  • Automatically deploy to: Development
  • Allow manual deployment to:
  • Minimum environments before promotion: 1
Phase 2: Test
  • Automatically deploy to:
  • Allow manual deployment to: UAT1, UAT2, UAT3
  • Minimum environments before promotion: 2
Phase 3: Staging
  • Automatically deploy to:
  • Allow manual deployment to: Staging
  • Minimum environments before promotion: 1
Phase 4: Production
  • Automatically deploy to:
  • Allow manual deployment to: Production
  • Minimum environments before promotion: 1

Note that the Test phase unlocks 3 different test environments, and users must deploy to at least 2 of them before the release enters the Staging phase.

Assumptions

We're making a few assumptions with this feature, in order to keep it simple.

First, progression through phases is always linear - you start in phase 1, then go to 2, then 3, and so on. You cannot skip a phase, and there is no branching.

Second, the environments that can be deployed to are cumulative as you get further into the lifecycle. For example, if the release is in phase 3 (Staging) in the third example above, you can deploy to Development, UAT1/2/3, or Staging, just not production.

Automatic promotion

Since each phase can be configured to deploy to one or more environments, you can use this option to automate promotion between environments. For example, upon successful deployment to a development environment, you might automatically promote to a test environment.

Keep in mind that you can mix this feature with the existing manual intervention steps system to pause for approval before/after a deployment, and prior to promotion.

Automatic release creation

When you assign a lifecycle to a project, you'll also be able to configure the project to create releases as soon as a new NuGet package is detected.

Create releases automatically

For now, I think this will be limited only to our built-in NuGet repository (not for packages in external feeds).

When combined with the features above, this is very exciting - from the push of a NuGet package we can create and deploy releases with no external integration.

Flag a problem

Normally, we assume that if a release is deployed successfully, it's ready to be promoted. Just like now, you can use manual steps to force a review/approval as an explicit step at the end of a deployment.

However, sometimes a deployment looks good and gets approved, and only later do you discover a problem - perhaps a terrible bug that deletes customer data. If that happens, you can flag a problem with the deployment:

Flag a problem

When a problem is flagged, the deployment doesn't count towards progress through the lifecycle - if we flag a problem with the Staging deployment, we won't be able to promote to Production, even if Staging was successful.

Scenarios enabled

I want to automate the promotion of deployments from Development all the way to Production, just by pushing a NuGet package

  1. Use the "Automatically create a release" option
  2. In each phase of the pipeline, set the 'automatically deploy to' environments such that the release automatically progresses through the pipeline

Prevent Production deployments unless you have deployed to Staging

Simply put them in different phases, and don't unlock the Production environment unless there's a successful Staging deployment.

Prevent production deployments even if staging was successful, if we later find a problem with the application

Use the "Flag a problem" feature to prevent the release from progressing to the next phase, or revert it to the previous phase, in the lifecycle.

Lifecycles will consume project groups

Currently, project groups in Octopus are used to organize collections of projects, as well as limit which environments they can deploy to, and to set the retention policy.

When lifecycles are introduced, it's via lifecycles that you'll control which environments a project can be deployed to, and the retention policy to apply. Project groups will just be left to organize collections of projects, and nothing more.

So, what do you think? Is this a feature that will be useful to you? What would your lifecycle look like?