Deploy a PHP Site to IIS with Octopus

As you have probably read by now, my background pre-Octopus was PHP in a LAMP environment, doing system administration duties and manually releasing and deploying PHP sites. So when a customer recently asked if they could deploy a PHP site to IIS, Paul in his wisdom asked me to write a blog post about just that.

It's all about the packages

Octopus, as we know, relies on NuGet packages to deploy files. So to start, we need to put all of our PHP files into a NuGet package. Lucky for us, the NuGet Package Explorer has this covered. Adding files and creating a NuGet package really couldn't be easier.
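If you prefer scripting the packaging over the Explorer UI, the metadata the Explorer generates boils down to a small .nuspec like this (the id, version and author below are hypothetical):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>AcmePhpSite</id>
    <version>1.0.0</version>
    <authors>OctopusDemo</authors>
    <description>PHP site files packaged for deployment with Octopus</description>
  </metadata>
</package>
```

The site files themselves are then added alongside this metadata when the package is packed.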

NuGet Explorer

Development Folder

Once my package was created, and uploaded to the internal NuGet repository in Octopus, I was ready to go!

NuGet Repository

Creating the Project in Octopus

I then created a new project, and selected the NuGet Package step.

Package Deployment Step

As you can see, I chose the Custom Install Directory feature, as I had a pre-existing site set up and in this instance I always want to deploy to the same location. But since we are using IIS, you could choose any of the other IIS options. I have also added a couple of other Octopus features to show that they can be used with .php files: I made the Custom Install Directory use a variable, and I created a test config file with variables that will need replacing when it deploys to Production.

Test Config File

So I have used the Substitute Variables in Files feature and defined my config.php file.
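The config file contents aren't shown in full here, but a file of this kind typically carries Octopus #{...} placeholders; the variable names below are hypothetical:

```php
<?php
// Hypothetical config.php: Octopus replaces the #{...} placeholders with
// environment-scoped variable values during deployment.
$databaseHost = '#{DatabaseHost}';
$databaseName = '#{DatabaseName}';
```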

Test Variables

That gives three variables in total: one for my Custom Installation Directory, and the two used in the config.php file.

Time to Deploy

Now that my package is chosen as a step, and my variables are configured and scoped, it is time to test a release. I guess I should create one!

Creating a Release

Deployment time!

Deployment Time

And here is the deployment log for the package:

Deployment Log

Results on Server

And we are done: Octopus has deployed my PHP files. So let's have a look at the results. Here is my lovely phpinfo() page.

PHPINFO display

And my files are deployed in the correct location on my IIS Server.

Files on Disk

We also cannot forget about my config.php, which has had its variables nicely substituted.

Config.php Contents

Complete Success

So my deployment of PHP files onto an IIS Server was a complete success. I was able to utilise everything that Octopus already did; the only thing I had to do manually was create my package, but that really is a case of 'find a folder and add'. Yes, this deployment was to an IIS Server, and mostly PHP is run on Linux, but maybe that deployment reality really isn't too far from our future.

Previous deployments

A small feature that we added to 2.5.4 is the ability to easily view previous deployments when looking at a project overview. By default, you just see the latest deployment per environment:

Previous tab

You can click the Previous tab to expand and show the previous successful deployments to each environment:

Previous deployments

The goal is to make it easier to find the deployment to roll back to. This means that we only show successful deployments, and we only show deployments for a release other than the one currently deployed (e.g., if it took 4 attempts to deploy 3.18, you'd still see 3.17 as the previous deployment).

Similarly, when viewing deployments, we've added a list of previous deployments to that environment:

Previous deployments to an environment

Again, the goal is to quickly be able to help you roll back. This list actually shows not just the previous deployments, but any future deployments too (in case you are viewing an old deployment).

It's a minor change, but sometimes it's the small changes that really help. Happy deployments!

Introducing Vanessa

Vanessa Love

If you've contacted support lately, it's quite likely that you received a reply from Vanessa Love. In fact it's difficult to imagine how we handled the support volume without her. Welcome Vanessa!

Vanessa started her professional life in the world of PHP programming back in 2004, working mostly on small to medium business portals. Growing a team of developers in a small business led to her becoming the point of contact between customers and the programmers, which turned out to be something she enjoyed.

When a gap needed to be filled she moved into assisting the Systems Administrator of a Linux server farm, and became the first point of call for server issues and website setups. Having helped customers with everything from domain purchases to complex configuration issues, the only logical next step was to move into the world of high-level technical support.

Switching from LAMP to Windows has been a fun challenge and she promises not to ask you if you tried turning it on and off again!

Vanessa is @fly401 on Twitter

5 Remote Desktop Alternatives

If you build applications to run on Windows servers, and you are involved in deployments, it's quite likely that you'll be spending time in remote desktop.

In the olden days, when ships were made of wood and men of steel, we'd have a couple of servers and run as many applications as we could on them. An IIS server with a dozen sites or applications wasn't just common, it was the standard.

Nowadays, of course, cloud. Virtualization means that instead of one server running many applications, we have one server running many virtual servers, each with a single application. This means we're seldom working in just a single remote desktop session at a time.

The following tools help you more easily manage multiple remote desktop sessions at once.

1. Remote Desktop Connection Manager

It's free, and it's from Microsoft. What's not to love?

RDCMan

It can save credentials if you like, and is great for sharing connections between teammates. The only feature it lacks is that it can't save credentials for a remote desktop gateway. That's why we switched to...

2. mRemoteNG

An open source fork of mRemote, this is the tool that we currently use. The Octopus team is distributed, so we keep the mRemoteNG settings file in Dropbox so that everyone on the team can use it to easily connect to any of our VMs.

mRemoteNG

3. RoyalTS

RoyalTS is a very nice looking commercial alternative, and has a killer feature: a button that lets you click "Start" remotely. I'm not sure who forgot to tell the UX team on Windows that people don't normally run Windows Server 2012 on tablets, but I'm sure they had a reason for making it nigh impossible to launch programs over remote desktop. Never fear, RoyalTS is here.

RoyalTS

4. Terminals

Another open source tabbed session manager, and this one looks to be actively developed. And the source code is in C#!

Terminals

5. Octopus Deploy!

OK, it's a shameless plug :-)

Octopus Deploy dashboard

Octopus Deploy is a remote desktop alternative in the same way that TeamCity/Team Build is a Visual Studio alternative.

Remote desktop tools are essential for diagnostics and some configuration tasks; there's no denying it. That said, our entire raison d'être here at Octopus Deploy is to make it so that a typical deployment involves no remote desktop whatsoever. Through better visibility, accountability and reliability, our goal is to reduce the time you spend in remote desktop sessions.

What's your experience with the tools above, and what did I miss?

RFC: Linux Deployments

Currently our highest voted Uservoice idea is to add support for Linux deployments in Octopus. We are going to implement this by adding first-class support for servers running SSH that will map very closely to the way that Windows deployments with Octopus work today.

octopenguin

For the purposes of this feature, we will be introducing a new term in Octopus: agentless machines. An agentless machine will not be running a Tentacle; instead it will use a different method of communication, e.g. SSH.

Our goal with this RFC is to make sure that the way we implement this feature will be suitable for the widest range of customers.

Introducing agentless machines

Setting up a new agentless machine, e.g. a Linux server running SSH, in Octopus will work the same way as when adding a new machine running a Tentacle.

Adding an agentless machine

Agentless machines are configured by choosing SSH rather than Listening or Polling as the communication style.

Adding an agentless machine

Environment with an agentless machine

Agentless machines appear on the Environments page just like regular Tentacles, showing their location and status (online/offline).

Environment with an agentless machine

Health checking an agentless machine

Typical Octopus tasks like health checks, ad-hoc scripts and so on run across all appropriate machines, including both Tentacle and agentless machines if both styles are being used.

Health check an agentless machine

Authentication

Our aim is to support the following authentication types for SSH target machines:

Authentication Types

Password

Password

Key without passphrase

Key without passphrase

Key with passphrase

Key with passphrase

Private keys will be stored in the Octopus database as encrypted properties.

Network topology

Connections to agentless machines won't be made directly from the Octopus Server; instead, one or more Tentacles will be used to make outbound connections to the machine. We're planning to add a hidden, "shadow" Tentacle running in a low-privilege process on the Octopus Server itself as a convenient default, but using specific Tentacles to handle different network topologies is also a feature we're considering.

Octopus footprint on agentless machines

Octopus will upload compressed packages to the target machine before any deployment takes place, so we require some local storage on the target machine; this will live under ~/.tentacle/. We will also extract the packages to a default location as we do on a Tentacle machine, e.g. ~/.tentacle/apps/{environment}/{project}/{package}/{version}, and we will support custom installation locations for moving the files elsewhere.

Package acquisition

Because a Tentacle machine is required for doing SSH deployments, the package acquisition for these deployments will change slightly from the way Windows deployments with Octopus work today.

The Tentacle machine will extract the NuGet package and create a .tar.gz tarball, which will then be uploaded to the target machines.

The Tentacle machine can be co-located with the target machines to optimize bandwidth usage, i.e. Octopus uploads the package to the Tentacle, which in turn will send the package to the target machines.
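The acquisition step can be pictured as a small shell sketch (illustrative paths and names only; the real Tentacle does this internally):

```shell
# Sketch of the proposed acquisition flow: the Tentacle extracts the
# NuGet package (a zip archive), repacks it as a gzipped tarball, and
# would then upload it over SCP. Paths and names are illustrative.
WORK=$(mktemp -d)
mkdir -p "$WORK/extracted"
echo '<?php phpinfo();' > "$WORK/extracted/index.php"   # stand-in for extracted package contents
tar -czf "$WORK/Acme.Web.1.0.0.tar.gz" -C "$WORK/extracted" .
# The upload to each target machine would then be something like:
# scp "$WORK/Acme.Web.1.0.0.tar.gz" deploy@target:~/.tentacle/packages/
```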

Deployment

Package deployment steps will run entirely via a single shell session on the target machine.

  1. We will check and ensure the Octopus scripts are up-to-date
  2. Package and supporting deployment files will be uploaded via SCP
  3. Deployment orchestration script will be executed
  4. Default installation directory will be created if it doesn't exist
  5. tar file will be unpacked
  6. predeploy will run
  7. If a custom installation directory has been specified
    • If the option to purge the directory before deployment is true, we purge the custom installation directory
    • Copy the extracted files to the custom directory
  8. deploy will run
  9. postdeploy will run
  10. Run retention policies to clean up old deployments
  11. Delete the Octopus variables file (to ensure sensitive variables aren't left on the server)
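To make steps 4-9 concrete, here is a bash sketch of the unpack-and-run-hooks portion. Everything here is hypothetical, including the fixture tarball it builds for itself; the real orchestration script will differ:

```shell
#!/bin/bash
# Sketch of steps 4-9 above. Paths follow the proposed ~/.tentacle layout
# but everything here is illustrative, not the real implementation.
set -e
BASE=$(mktemp -d)   # stand-in for ~/.tentacle
DEST="$BASE/apps/Production/Acme.Web/1.0.0"

# Fixture: a fake uploaded tarball containing the app and a deploy hook
SRC=$(mktemp -d)
echo '<?php phpinfo();' > "$SRC/index.php"
printf '#!/bin/bash\necho deployed > deploy.log\n' > "$SRC/deploy"
chmod +x "$SRC/deploy"
mkdir -p "$BASE/packages"
tar -czf "$BASE/packages/Acme.Web.1.0.0.tar.gz" -C "$SRC" .

mkdir -p "$DEST"                                           # step 4: default install dir
tar -xzf "$BASE/packages/Acme.Web.1.0.0.tar.gz" -C "$DEST" # step 5: unpack the tar file
cd "$DEST"
[ -x ./predeploy ]  && ./predeploy                         # step 6
[ -x ./deploy ]     && ./deploy                            # step 8
[ -x ./postdeploy ] && ./postdeploy                        # step 9
echo "deployed to $DEST"
```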

Deployment scripts

The main deployment orchestration script will be written in bash, as this is the least common denominator amongst *nix distributions. This script will look for predeploy/deploy/postdeploy scripts, that can be created by the user, and execute them if they are present.

The predeploy/deploy/postdeploy scripts can be written in the user's preferred scripting language (but the user has to ensure that it is installed on the server that the deployment runs on).

  • predeploy
    • tasks required to run pre-deployment, e.g. config transformations needed for your application.
  • deploy
    • tasks required for the actual deployment of your application.
  • postdeploy
    • tasks required to run post-deployment, e.g. cleaning up any temp files created during the deployment of your application.

The working directory will be the default installation directory for the predeploy script, and either the default or custom installation directory for the deploy and postdeploy scripts.

Environment variables for deployments

Octopus has a more sophisticated variable system and syntax than Linux environment variables can support. Having to map between names like Octopus.Action[Install website].Status.Code and valid POSIX equivalents seems uncomfortable and error-prone. Large Octopus deployments also tend to carry a lot of variables, so we're uneasy about dumping these arbitrarily into the environment in which the deployment script runs.

Instead of setting environment variables directly, deployment scripts will have access to a tentacle command that can be used to retrieve the values they require. For example, to retrieve the custom installation directory used by the deployment, the user can call the tentacle command like so:

DEST=$(tentacle get Octopus.Action.Package.CustomInstallationDirectory)

This assigns the custom installation directory to a shell variable named DEST (subsequently available to the script as $DEST).

Values with embedded spaces and so on can be supported using double quotes.
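Since the proposed tentacle command doesn't exist yet, the sketch below stubs it out just to show the quoting pattern:

```shell
# Stub standing in for the proposed tentacle command (illustration only);
# it returns a value containing a space.
tentacle() { echo "/srv/acme web site"; }

# Double quotes keep the embedded-space value intact as a single word.
DEST="$(tentacle get Octopus.Action.Package.CustomInstallationDirectory)"
echo "Deploying to: $DEST"
```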

Though we're unlikely to implement it in the first version of the command, we're considering some more sophisticated features like iteration:

for ACTION in $(tentacle get "Octopus.Action[*]")
do
    echo "The status of $ACTION was $(tentacle get "Octopus.Action[$ACTION].Status.Code")"
done

This highlights the kinds of opportunities we see to make writing deployment scripts more enjoyable.

Other features of the tentacle command

Using the tentacle helper will also provide consistent access to the commands supported using PowerShell cmdlets on Windows machines.

Setting output variables

Output variables can be sent to the Octopus server using tentacle set:

tentacle set ActiveUsers 3

Or:

ps -af | tentacle set RunningProcesses

Collecting artifacts

Files from the target machine can be collected as Octopus artifacts using tentacle collect:

tentacle collect InstallLog.txt

Running tools

Where we (or others) provide helper scripts that themselves need access to variables, paths and so-on, these can be invoked using tentacle exec:

tentacle exec xmlconfig Web.config

Deployment features

Features like XML configuration transformations/appsettings support will run on the target machine.

Supporting Octopus scripts and executables will be part of the default folder structure on the target machine, i.e. ~/.tentacle/tools/. In this folder we can also include helper apps using Mono to support .NET-specific conventions like XML configuration transformation/appsettings.

We can also include different scripting/executable options to support other deployment features.

Retention policies

Once a deployment has completed, we will apply the retention policy that has been specified for the project, just like we do with Windows deployments.

The user can specify keeping a number of days' worth of deployments, or a specific number of deployments. If either of these has been specified, we will remove any files that do not fall within the specified retention policy.

System requirements

Linux distributions can vary significantly in their default configuration and available packages. We're aiming to choose a widely-supported baseline that makes it possible to deploy with Octopus to practically any current Linux distribution.

The fundamental assumptions we will make about a target machine are:

  • It's accessible using SSH and SCP
  • The user's login shell is Bash 4+
  • tar is available

The platforms that we ourselves plan to build and test against are:

  • Amazon Linux AMI 2014.03
  • Ubuntu Server 12.04 LTS

We will do our best to remain distro-agnostic, but if you're able to choose one of these options for your own servers you'll be helping us provide efficient testing and support.

Outstanding Questions

  1. Management of platform-specific paths
    • where apps are being deployed to both Windows and Linux servers, paths like "Custom Installation Directory" will need to be specified separately for Linux and Windows. Can we make this experience better?
  2. Naming of the deploy scripts
    • predeploy/deploy/postdeploy, or
    • pre-deploy/deploy/post-deploy, or
    • pre_deploy/deploy/post_deploy?
  3. Customisation of paths where we will upload packages and extract packages by default
    • is it necessary to configure this via Octopus, or can locations like ~/.tentacle/apps be linked by an administrator to other locations as needed?
  4. Writing out variables like we do in PowerShell
    • In PowerShell we first encrypt them with DPAPI, is there a similar standard crypto function available on Linux?

We need your help!

What we would really appreciate is feedback on this plan from customers who are already using SSH with Octopus, or who would like to start using it.

Whether it's improvements to the suggested implementation above, or assumptions we've made that just will not work, please let us know in the comments below.

RFC: Lifecycles

Lifecycles are a new concept in Octopus that will allow us to tackle a number of related suggestions that we've been longing to solve:

  • Automatically promoting between environments (triggers)
  • Marking a release as bad (so it cannot be deployed any more)
  • Preventing production deployments until test deployments are complete (gates)

Lifecycle example

Lifecycles and phases

A lifecycle is made up of a number of phases, each of which specifies triggers and rules around promotion. The simplest lifecycle, which would ship out of the box and be the default, would simply be:

Phase 1: Anything Goes
  • Allow manual deployment to: all environments

In other words, this lifecycle simply says "Releases can be deployed to any environment, in any order". It's total chaos!

A custom lifecycle might split the world into pre-production and production phases:

Phase 1: Pre-Production
  • Automatically deploy to: Development
  • Allow manual deployment to: UAT1, UAT2, UAT3, Staging
  • Minimum environments before promotion: 3
Phase 2: Production
  • Automatically deploy to:
  • Allow manual deployment to: Production
  • Minimum environments before promotion:

Finally, a more structured lifecycle might look like this:

Phase 1: Development
  • Automatically deploy to: Development
  • Allow manual deployment to:
  • Minimum environments before promotion: 1
Phase 2: Test
  • Automatically deploy to:
  • Allow manual deployment to: UAT1, UAT2, UAT3
  • Minimum environments before promotion: 2
Phase 3: Staging
  • Automatically deploy to:
  • Allow manual deployment to: Staging
  • Minimum environments before promotion: 1
Phase 4: Production
  • Automatically deploy to:
  • Allow manual deployment to: Production
  • Minimum environments before promotion: 1

Note that the Test phase unlocks 3 different test environments, and users must deploy to at least 2 of them before the release enters the Staging phase.

Assumptions

We're making a few assumptions with this feature, in order to keep it simple.

First, progression through phases is always linear - you start in phase 1, then go to 2, then 3, and so on. You cannot skip a phase, and there is no branching.

Second, the environments that can be deployed to are cumulative as you get further into the lifecycle. For example, if the release is in phase 3 (Staging) in the third example above, you can deploy to Development, UAT1/2/3, or Staging, just not production.

Automatic promotion

Since each phase can be configured to deploy to one or more environments, you can use this option to automate promotion between environments. For example, upon successful deployment to a development environment, you might automatically promote to a test environment.

Keep in mind that you can mix this feature with the existing manual intervention steps system to pause for approval before/after a deployment, and prior to promotion.

Automatic release creation

When you assign a lifecycle to a project, you'll also be able to configure the project to create releases as soon as a new NuGet package is detected.

Create releases automatically

For now, I think this will be limited only to our built-in NuGet repository (not for packages in external feeds).

When combined with the features above, this is very exciting - from the push of a NuGet package we can create and deploy releases with no external integration.

Flag a problem

Normally, we assume that if a release is deployed successfully, it's ready to be promoted. Just like now, you can use manual steps to force a review/approval as an explicit step at the end of a deployment.

However, sometimes a deployment looks good and gets approved, and only later do you discover a problem - perhaps a terrible bug that deletes customer data. If that happens, you can flag a problem with the deployment:

Flag a problem

When a problem is flagged, the deployment doesn't count towards progress through the lifecycle - if we flag a problem with the Staging deployment, we won't be able to promote to Production, even if Staging was successful.

Scenarios enabled

I want to automate the promotion of deployments from Development all the way to Production, just by pushing a NuGet package

  1. Use the "Automatically create a release" option
  2. In each phase of the pipeline, set the 'automatically deploy to' environments such that the release automatically progresses through the pipeline

Prevent Production deployments unless you have deployed to Staging

Simply put them in different phases, and don't unlock the Production environment unless there's a successful Staging deployment.

Prevent production deployments even if staging was successful, if we later find a problem with the application

Use the "Flag a problem" feature to prevent the release from progressing to the next phase, or revert it to the previous phase, in the lifecycle.

Lifecycles will consume project groups

Currently, project groups in Octopus are used to organize collections of projects, to limit which environments those projects can deploy to, and to set the retention policy.

When lifecycles are introduced, it's via lifecycles that you'll control which environments a project can be deployed to, and the retention policy to apply. Project groups will just be left to organize collections of projects, and nothing more.

So, what do you think? Is this a feature that will be useful to you? What would your lifecycle look like?

Cleaning temporary ASP.NET files

The vast majority of ASP.NET websites and web applications make use of dynamic compilation to compile certain parts of the application. Assemblies created by dynamic compilation for ASP.NET websites are stored in the Temporary ASP.NET files directory. These temporary files will build up over time and have to be removed manually. Copies of a website bin directory are also stored in this folder as part of Shadow Copying. Many users of Octopus Deploy also use continuous integration or release far more frequently than they otherwise would. This in turn means that these temporary files can accumulate quite quickly.

Instead of leaving it to manual process, there are a few ways that we can clean up after a deployment. To clean out this directory you can use the File System - Clean ASP.NET Temp Files PowerShell script template from the Octopus Deploy Library. This script will let you clean the temporary files directory as a step in the deployment process.

How to use the script

After importing the step from the library and adding the step to your project you can configure the framework version and days to keep.

Step details

The step only requires two parameters: Framework version and Days to keep. By default, it will clean site directories under the Temporary ASP.NET files directory older than 30 days.

Framework Version

Specifying All will clean out the temp files for each installed version of the framework. If you need to target a specific version, you can specify the bit-ness, the version, or both.

Specifying only a bit-ness value will match all versions. Here you can only specify one of the following two options.

 - Framework
 - Framework64

Specifying only a version will match that version regardless of bit-ness (both 32 and 64 bit). The version must always start with v.

 - v1.1.4322
 - v2.0.50727

A fully specified framework version will match only that path.

 - Framework\v4.0.30319
 - Framework64\v4.0.30319

Days to keep

If the last write time of the site directory is older than the specified number of days to keep, it will be deleted. The directory is created on application startup.
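The retention rule boils down to something like the following shell sketch under assumed paths, shown for illustration only; the actual library step is a PowerShell script that also skips directories containing locked files:

```shell
# Delete depth-2 (code generation) directories whose last write time is
# older than N days. Sketch only; the real step is PowerShell.
DAYS=30
TEMP_ROOT=$(mktemp -d)   # stands in for the Temporary ASP.NET Files folder
mkdir -p "$TEMP_ROOT/root/oldsite" "$TEMP_ROOT/root/newsite"
touch -d "40 days ago" "$TEMP_ROOT/root/oldsite"   # simulate a stale site directory
find "$TEMP_ROOT" -mindepth 2 -maxdepth 2 -type d -mtime +"$DAYS" -exec rm -rf {} +
```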

How does the script work

The directory structure under Temporary ASP.NET files consists of directories that map to application names (the virtual path name or root). Under each of those there is a code generation directory for each version of each website. The script will identify and delete only code generation directories. The code generation directories are also what Days to keep uses for retention.

Deleting temporary ASP.NET files is a safe operation, but keep in mind that multiple websites can use this folder, and websites that are currently in use may be recycled.

You can execute the script before or after a deployment. However, it is recommended that you run the script before a deployment as the previous deployment may still be in use even after the deployment is finished. Any code generation directories that contain locked files will be skipped.

Are there any other solutions

If you have multiple websites, or need to avoid as much downtime as possible, a custom compilation directory can be used to isolate each site's code generation directories. You can specify a custom temporary files directory with the tempDirectory attribute on the compilation tag in your web.config.

<configuration> 
    <system.web>    
        <compilation tempDirectory="C:\TempAspNetFiles\">
        </compilation>
    </system.web>
</configuration> 

When to worry about this?

You should only have to worry about this if you're doing frequent deployments, or you need highly robust deployments. Other factors to consider are how many sites you deploy, the size of the bin directory, how often deployments are made, and how much disk space you have.

Introducing Daniel

Daniel Little

This post is long overdue. A couple of months ago, Daniel Little joined our team. In his first few weeks he built the scheduled deployments feature that shipped as part of 2.5. Welcome Daniel!

Daniel has been working professionally as a software engineer on the .NET stack since 2010. Over the years he's worked on a wide range of projects: developing enterprise CMS solutions and custom-built .NET applications, and automating their deployments. He also has a passion for software architecture and functional programming.

Daniel is a strong advocate of automated deployments, and has been using Octopus Deploy for over a year since introducing it on a number of projects.

Daniel is @lavinski on Twitter and blogs at lavinski.me

RFC: Branching

In the next version of Octopus we're planning to improve our support for people working on different branches of code. We're still in the planning stage on this one, so it's a good time to share what we're doing and get your comments.

Each time you create a release in Octopus, you choose the versions of each package to include, defaulting to the latest:

When different teams work on different branches at the same time, it can make the create release page difficult to use. For example, suppose that the team published packages in the following order:

Acme.Web.1.9.3.nupkg
Acme.Web.1.9.4.nupkg
Acme.Web.1.9.5.nupkg
Acme.Web.2.3.1.nupkg
Acme.Web.2.3.2.nupkg
Acme.Web.1.9.6.nupkg
Acme.Web.1.9.7.nupkg
Acme.Web.2.3.3.nupkg
Acme.Web.2.3.4.nupkg

When viewing the create release page, since Octopus defaults to the latest version, it means people have to hunt to select the right package version - not a great experience.

To help with this, we're introducing branches. You'll define a branch like this:

Defining a simple branch

A branch has a simple name, and then a set of rules that decide which versions should appear when creating a release. In the Step name field, you can choose which steps to apply the rule to - one, many or all steps in the project. The Version range field uses the NuGet versioning syntax to specify the range of versions to include. The Tag field can contain a regex, which is matched against the SemVer pre-release tag.
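As a concrete (hypothetical) example, a rule pinning a servicing branch to the 1.9.x packages listed earlier might be:

```
Branch name:    1.9-servicing
Step name:      (all steps in the project)
Version range:  [1.9,2.0)    # NuGet range syntax: 1.9.0 <= version < 2.0.0
Tag:            (empty)      # no pre-release tag expected
```

With this branch selected, the create release page would default Acme.Web to 1.9.7 rather than 2.3.4.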

On the create release page, you can select which branch to use, and it will control which packages appear:

Selecting a branch

A more complicated branch definition might be:

More complex branch

Here, the project has multiple steps. For the Acme.Util step, we have one rule, and we have a different rule for the other steps. We're using tags to only show packages with the -vnext SemVer tag.

Our goal with this feature is to keep it simple - branches really only matter from the perspective of the release creation page (and Octo.exe create-release). If you need different environments or deployment steps depending on the branch, it's probably best to clone a project.

So, what do you think? Would such a feature be useful to you? Let us know in the box below.

Octopus vs. Build Servers

In my previous life, I used CruiseControl.NET, and later TeamCity, as my deployment automation tool. The goal was to automate deployments, and since my build server was already compiling the code and running tests, it felt natural to make it deploy too.

I extended the build process by adding custom scripts that would deploy the software. Sometimes they'd be short scripts running RoboCopy and some XML transforms. Other times they'd be reams and reams of PowerShell with agents listening on remote machines; it all depended on how complicated the actual deployment process was. It was these experiences that led to Octopus being built.

That said, it's a question that still comes up once a week: why should I use Octopus when I already have a CI server?

In fact, I asked it myself when writing some documentation on integrating Atlassian Bamboo with Octopus Deploy. Bamboo even has deployment concepts baked in; why would a Bamboo user need Octopus?

Here's why:

Different focus

Build servers usually contain a number of built-in tasks with a focus on build. TeamCity and Bamboo come with a bunch of build runner types that are handy for building: they can call the build engines of many platforms (such as MSBuild), can deal with build-time dependencies (NuGet, Maven), have many unit test runners (NUnit, MSTest), and can run and report on code quality and consistency (code coverage, FXCop).

When you look at the task runner list in your build server from the perspective of deployments, you'll notice a difference. Where's the built-in task to configure an IIS website? Or install a Windows service? Change a configuration file? Or any of the other 54 (as of today) deployment-oriented tasks that are available in the Octopus library?

Even in Bamboo, which has a whole deployment feature built-in, the deployment tasks available are pretty much limited to running scripts:

Deployment tasks in Bamboo

The reality is, when deploying from a CI server, the closest thing to built-in deployment tasks you'll usually find is the ability to run a script. Which leads to the next issue:

Remote deployments

When a build server executes a job, the build agent isn't usually the same as the machines that are ultimately being deployed to. This is because unlike builds, deployments involve coordinating activities across many machines.

When a deployment touches multiple machines, it's up to you to figure out how to perform the entire deployment remotely, using tools like XCOPY, SSH or PowerShell remoting over the network. You can then wrap this in a for-loop to iterate over all the machines involved.
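That do-it-yourself pattern usually ends up looking something like the sketch below. The server names and robocopy arguments here are purely illustrative; a real version would also need credentials, firewall access, error handling, and log collection for every machine:

```python
# A sketch of the "for-loop over machines" pattern: build the remote copy
# command for each target server. The machine names, package path, and
# robocopy flags are hypothetical.
servers = ["web01", "web02", "app01"]

def build_deploy_command(server, package_path):
    """Return the shell command that would push a package to one server."""
    return ["robocopy", package_path, rf"\\{server}\c$\inetpub\wwwroot", "/MIR"]

commands = [build_deploy_command(s, r"C:\build\output") for s in servers]
for cmd in commands:
    print(" ".join(cmd))
```

Each of those commands is a separate network operation against a separate machine, and every one of them can fail independently.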

This brings up a number of challenges:

1. Do you have the right permissions on all of those machines?
2. Can you make all the necessary firewall changes?
3. What if the machines aren't on the same Active Directory domain?
4. How will you get the logs back? If something goes wrong, how will you diagnose it?
5. Is the connection secure?

I lost so much time solving these problems on different projects in the past, and it's precisely why we invested so much time in designing the Tentacle agent system:

  • Tentacles don't need to be on the same AD domain
  • We use SSL and two-way trust for security
  • Tentacles can either listen or poll - it's up to you
  • When Tentacles run a command, they do so as a local user, so you can perform a number of activities that might not be possible via the network

Automatic parallelization

Related to the above, in all but the simplest deployments, there are usually multiple machines involved in a deployment: perhaps a couple of web servers, or a fleet of application servers.

A good deployment automation tool solves the problem of running all of the deployment activities in parallel across all of those machines. In Octopus, you register machines and tag them with a role, then specify the packages to be deployed to machines in a given role. Octopus ensures the packages are on each machine, then deploys them, either in parallel or as a rolling deployment.
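As a rough illustration of the role-based, parallel model (a sketch under invented names, not how Octopus is actually implemented):

```python
# A toy role-based deployment: find every machine tagged with a role and
# deploy to all of them in parallel. The registry and deploy() body are
# placeholders for illustration.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical machine registry: name -> roles.
machines = {
    "web01": ["web-server"],
    "web02": ["web-server"],
    "db01": ["db-server"],
}

def deploy(machine, package):
    # Placeholder for pushing the package and running install steps on one machine.
    return f"Deployed {package} to {machine}"

def deploy_to_role(role, package):
    """Deploy the package to every machine tagged with the role, in parallel."""
    targets = [name for name, roles in machines.items() if role in roles]
    with ThreadPoolExecutor(max_workers=len(targets)) as pool:
        return list(pool.map(lambda m: deploy(m, package), targets))

results = deploy_to_role("web-server", "MyApp.1.0.0")
```

With real machines, each of those parallel branches produces its own stream of log output, which is exactly why a flat, interleaved log stops being readable.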

The distinction is obvious when you compare the flat build log that comes from a CI tool like Bamboo or Jenkins:

Flat build log from Bamboo

Against the hierarchical deployment log that comes out of Octopus:

Deployment log in Octopus

Octopus isn't using the nesting for display purposes (like TeamCity does): we use it because each of those steps is running at the same time. There simply is no other way to display it. If you have a script that needs to run on 10 machines, you'll see the log messages as it executes on all 10 simultaneously, without them being jumbled together.

A customer last Friday used Octopus to deploy an application to over six hundred machines. Doing that from a build server would have meant writing all of the coordination scripts themselves, and the logs would have been impossible to decipher.

Configuration

All build servers have some level of build parameter/variable support. But the ability to scope them is very limited. If you have multiple target servers, and need to use different values for a setting for each one, how do you manage this?

In Octopus, variables can be scoped in different ways, and passwords and other sensitive settings are handled securely. As you would expect, we snapshot variables with each release, and audit the changes made. Best of all, you can change configuration (which is really an operational concern) without having to rebuild the application.
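To make the scoping idea concrete, here's a toy resolver. The data model is invented for illustration (it is not Octopus's actual one): a variable can carry an environment scope, and a scoped value beats an unscoped default.

```python
# Hypothetical variable set: same name, different values per environment,
# plus an unscoped default.
variables = [
    {"name": "ConnectionString", "value": "Server=prod-db", "scope": "Production"},
    {"name": "ConnectionString", "value": "Server=test-db", "scope": "Test"},
    {"name": "LogLevel", "value": "Warning", "scope": None},  # unscoped default
]

def resolve(name, environment):
    """Return the value scoped to this environment, or the unscoped default."""
    scoped = [v for v in variables if v["name"] == name and v["scope"] == environment]
    if scoped:
        return scoped[0]["value"]
    unscoped = [v for v in variables if v["name"] == name and v["scope"] is None]
    return unscoped[0]["value"] if unscoped else None
```

So `resolve("ConnectionString", "Test")` picks the Test value, while `resolve("LogLevel", "Test")` falls back to the default. Most build servers give you one flat bag of parameters per build configuration and leave this resolution logic to you.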

Deployment can be messy

When your build server encounters an error - a failed unit test, or code that won't compile - what does it do? Ideally, it fails fast. You wouldn't want the CI server to pause and ask what you want to do next.

When you're performing a rolling deployment of a web site across 10 web servers and server #7 fails, failing fast is probably the last thing you want. Perhaps you'd like to investigate the problem and retry, or skip that machine and continue with the rest. That's why Octopus has guided failures.
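Guided failure essentially turns the error handler into a question. A toy sketch of the idea (the names are invented, and where this sketch calls a `decide()` callback, Octopus prompts a human through the web UI):

```python
# A rolling deployment that, instead of failing fast, asks decide() what
# to do when one machine fails: "retry" or "skip".
def rolling_deploy(servers, deploy, decide):
    succeeded, skipped = [], []
    for server in servers:
        while True:
            try:
                deploy(server)
                succeeded.append(server)
                break
            except RuntimeError as error:
                if decide(server, error) == "retry":
                    continue  # try this machine again
                skipped.append(server)
                break  # move on to the next machine
    return succeeded, skipped

# Simulate server #7 failing on its first attempt only.
attempts = {}
def flaky_deploy(server):
    attempts[server] = attempts.get(server, 0) + 1
    if server == "web07" and attempts[server] == 1:
        raise RuntimeError("disk full")

ok, skipped = rolling_deploy(
    [f"web{n:02d}" for n in range(1, 11)],
    flaky_deploy,
    lambda server, error: "retry",  # the operator chooses to retry
)
```

Here the operator's "retry" choice lets the deployment recover and finish all 10 machines; a fail-fast build job would have stopped dead at #7.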

In fact, human intervention might be something you require for all deployments, not just when something goes wrong. In Octopus, you can add manual intervention steps to a deployment, which can run at the start (for approval), at the end (for verification), or even half-way through (for that legacy component that just has to be configured manually).

Summary

All of these issues - the built-in tasks available, remote execution and infrastructure problems, parallelism, and failure modes - point to a conclusion: build and deployment are fundamentally different beasts. In fact, the only characteristics they really share are that they both sometimes involve scripting, and that the team needs visibility into them.

As developers tasked with automating a deployment, this distinction isn't obvious at first. Build and CI servers have been around a long time, and we're familiar with them, so it's natural to imagine extending them to deployment. Even though we've been working on Octopus for years, when designing new features I still find myself looking at CI tools. It's only when you're far down the rabbit hole, writing custom scripts to coordinate an increasingly complicated deployment, that these differences become painfully obvious.

Build and deployment are different, but equally important. The rule of "best tool for the job" should apply to both of them. Our goal is to focus on being the best deployment tool; we'll leave the problem of building to the build servers :-)