We're hiring: Support Engineer (x2, US-based)

Right now our full-time team are all based in Australia. For product development, it makes no difference. But it does make providing support in US time zones difficult. 5:00 AM support calls are tough, and we're probably not in the best frame of mind to diagnose production issues at that time in the morning.

Traditionally, support at Octopus has been a reactive position - people try our software, and if they hit a problem, they reach out and we provide support. My goal is to grow our support capability beyond reactive and into proactive support.

With that in mind, we're currently hiring two US-based, full-time support team members. If you know Octopus, and live in the US or a US-friendly time zone, why not join us? Help us eliminate remote desktop from production deployments!

If you agree with us that support is one of the most important jobs in the company, that support staff should be consulted on feature design and product changes, and you are really driven to impress and delight customers, then we'd love to have you.

Support Engineer

(We're also hiring for a Brisbane-based test engineer)

Encrypting connection strings in Web.config

When specifying a connection string, it's usually preferable to use Windows Authentication because you don't need to put passwords in your Web.config file. However, this doesn't work in every scenario, and if you have to use a password in your connection string, it's a good idea to encrypt it. Step Templates are a great way to enhance the native capabilities of Octopus Deploy, so to make encrypting the connection strings section easier, I created a new template to do just that.

You can grab the new IIS Website - Encrypt Web.Config Connection Strings template from the Octopus Deploy Library. Under the covers, this template uses the aspnet_regiis tool to encrypt the Web.config. Adding the step to your deployment process will take you to the step details page for configuration.
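For reference, the kind of command the template wraps looks roughly like this (the framework and site paths are illustrative): `-pef` encrypts the named configuration section for the Web.config found at the given physical directory.

```
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -pef "connectionStrings" "C:\inetpub\wwwroot\MySite"
```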

Configure Encrypt Connection Strings

This template takes a single parameter called Website directory, which you typically set to the InstallationDirectoryPath output variable of the package deployment step, e.g. #{Octopus.Action[Deploy Package].Output.Package.InstallationDirectoryPath}. Note that you must specify the name of your own package deployment step as part of the variable.


This step is designed to run after a package has been deployed to a web server. Unfortunately, this means that if you use the IIS web site and application pool feature, there is a slight window in which the Web.config will not be encrypted. To be completely safe, it would be ideal to apply the encryption before repointing IIS at the new website. To work around this issue, turn off the feature and use a custom PowerShell script to update IIS instead.

After you've added the step to your deployment process, your next deployment will have a nicely encrypted connection string section.

Deploy a PHP Site to IIS with Octopus

As you have probably read by now, my background pre-Octopus was PHP in a LAMP environment, doing system administration duties and manually releasing and deploying PHP sites. So when a customer recently asked if they could deploy a PHP site to IIS, Paul in his wisdom asked me to write up a blog post about just that.

It's all about the packages

Octopus, as we know, relies on NuGet packages to deploy files. So to start, we need to put all of our PHP files into a NuGet package. Luckily for us, the NuGet Package Explorer has this covered. Adding files and creating a NuGet package really couldn't be easier.
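If you'd rather script it than click through the Package Explorer, a minimal .nuspec along these lines can be packed with nuget pack (the id, authors and file layout here are illustrative, not from the post):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyPhpSite</id>
    <version>1.0.0</version>
    <authors>Octopus Deploy</authors>
    <description>PHP site packaged for deployment</description>
  </metadata>
  <files>
    <!-- Everything under the site folder goes into the package root -->
    <file src="site\**\*.*" />
  </files>
</package>
```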

NuGet Explorer

Development Folder

Once my package was created, and uploaded to the internal NuGet repository in Octopus, I was ready to go!

NuGet Repository

Creating the Project in Octopus

I then created a new project, and selected the NuGet Package step.

Package Deployment Step

As you can see, I chose the Custom Install Directory feature, as I had a pre-existing site set up and in this instance wanted to always deploy to the same location. Since we are using IIS, you could also choose the other IIS options. I have also added a couple of other Octopus features to show that they can be used with .php files: I made the Custom Install Directory use a variable, and I created a test config file with variables that will need replacing when it deploys to Production.

Test Config File

So I have used the Substitute Variables in Files feature and defined my config.php file.

Test Variables

Here are all three variables: one for my Custom Installation Directory, and the two used in my config.php file.

Time to Deploy

Now that my package is chosen as a step, and my variables are configured and scoped, it is time to test a release. I guess I should create one!

Creating a Release

Deployment time!

Deployment Time

And here is the deployment log for the package:

Deployment Log

Results on Server

And we are done: Octopus has deployed my PHP files. So let's have a look at the results. Here is my lovely phpinfo() page.

PHPINFO display

And my files are deployed in the correct location on my IIS Server.

Files on Disk

We also cannot forget about my config.php, which has had its variables nicely substituted.

Config.php Contents

Complete Success

So my deployment of PHP files onto an IIS server was a complete success. I was able to utilise everything that Octopus already does; the only thing I had to do manually was create my package, and that really is a case of 'find a folder and add'. Yes, this deployment was to an IIS server, and PHP mostly runs on Linux, but maybe that deployment reality isn't too far from our future.

Previous deployments

A small feature that we added in 2.5.4 is the ability to easily view previous deployments when looking at a project overview. By default, you just see the latest deployment per environment:

Previous tab

You can click the Previous tab to expand and show the previous successful deployments to each environment:

Previous deployments

The goal is to make it easier to find the deployment to roll back to. This means that we only show successful deployments, and we only show deployments for a release other than the one currently deployed (e.g., if it took 4 attempts to deploy 3.18, you'd still see 3.17 as the previous deployment).

Similarly, when viewing deployments, we've added a list of previous deployments to that environment:

Previous deployments to an environment

Again, the goal is to help you roll back quickly. This list actually shows not just the previous deployments, but any future deployments too (in case you are viewing an old deployment).

It's a minor change, but sometimes it's the small changes that really help. Happy deployments!

Introducing Vanessa

Vanessa Love

If you've contacted support lately, it's quite likely that you received a reply from Vanessa Love. In fact it's difficult to imagine how we handled the support volume without her. Welcome Vanessa!

Vanessa started her professional life in the world of PHP programming back in 2004, working mostly on small to medium business portals. Growing a team of developers in a small business led to her becoming the point of contact between customers and programmers, which turned out to be something she enjoyed.

When a gap needed filling, she moved into assisting the Systems Administrator of a Linux server farm, becoming the point of call for server issues and website setups. After helping customers with everything from domain purchases to complex configuration issues, the only logical next step was to move into the world of high-level technical support.

Switching from LAMP to Windows has been a fun challenge, and she promises not to ask you if you've tried turning it off and on again!

Vanessa is @fly401 on Twitter

5 Remote Desktop Alternatives

If you build applications to run on Windows servers, and you are involved in deployments, it's quite likely that you'll be spending time in remote desktop.

In the olden days, when ships were made of wood and men of steel, we'd have a couple of servers and run as many applications as we could on them. An IIS server with a dozen sites or applications wasn't just common, it was the standard.

Nowadays, of course, cloud. Virtualization means that instead of one server running many applications, we have one server running many virtual servers, each with a single application. This means we're seldom working in just a single remote desktop session at a time.

The following tools help you manage multiple remote desktop sessions more easily.

1. Remote Desktop Connection Manager

It's free, and it's from Microsoft. What's not to love?


It can save credentials if you like, and is great for sharing connections between teammates. The only feature it lacks: it can't save credentials for a remote desktop gateway. That's why we switched to...

2. mRemoteNG

An open source fork of mRemote, this is the tool that we currently use. The Octopus team is distributed, so we save the mRemoteNG settings file in Dropbox so that everyone on the team can use it to easily connect to any of our VMs.


3. RoyalTS

RoyalTS is a very nice looking commercial alternative, and has a killer feature: a button that lets you click "Start" remotely. I'm not sure who forgot to tell the UX team on Windows that people don't normally run Windows Server 2012 on tablets, but I'm sure they had a reason for making it nigh impossible to launch programs over remote desktop. Never fear, RoyalTS is here.


4. Terminals

Another open source tabbed session manager, and this one looks to be actively developed. And the source code is in C#!


5. Octopus Deploy!

OK, it's a shameless plug :-)

Octopus Deploy dashboard

Octopus Deploy is a remote desktop alternative in the same way that TeamCity/Team Build is a Visual Studio alternative.

Remote desktop tools are essential for diagnostics and some configuration tasks; there's no denying it. That said, our entire raison d'être here at Octopus Deploy is to make it so that a typical deployment involves no remote desktop whatsoever. Through better visibility, accountability and reliability, our goal is to reduce the time you spend in remote desktop sessions.

What's your experience with the tools above, and what did I miss?

RFC: Linux Deployments

Currently our highest-voted UserVoice idea is to add support for Linux deployments in Octopus. We are going to implement this by adding first-class support for servers running SSH, which will map very closely to the way that Windows deployments with Octopus work today.


For this purpose, we will be introducing a new term in Octopus: agentless machines. An agentless machine will not run a Tentacle; instead, it will use a different method of communication, e.g. SSH.

Our goal with this RFC is to make sure that the way we implement this feature will be suitable for the widest range of customers.

Introducing agentless machines

Setting up a new agentless machine, e.g. a Linux server running SSH, in Octopus will work the same way as when adding a new machine running a Tentacle.

Adding an agentless machine

Agentless machines are configured by choosing SSH rather than Listening or Polling as the communication style.

Adding an agentless machine

Environment with an agentless machine

Agentless machines appear on the Environments page just like regular Tentacles, showing their location and status (online/offline).

Environment with an agentless machine

Health checking an agentless machine

Typical Octopus tasks like health checks, ad-hoc scripts and so on run across all appropriate machines, including both Tentacle and agentless machines if both styles are in use.

Health check an agentless machine


Our aim is to support the following authentication types for SSH target machines:

Authentication Types



Key without passphrase

Key with passphrase

Private keys will be stored in the Octopus database as encrypted properties.

Network topology

Connections to agentless machines won't be made directly from the Octopus Server; instead, one or more Tentacles will be used to make outbound connections to the machine. We're planning to add a hidden, "shadow" Tentacle running in a low-privilege process on the Octopus Server itself as a convenient default, but using specific Tentacles to handle different network topologies is also a feature we're considering.

Octopus footprint on agentless machines

Octopus will upload compressed packages to the target machine before any deployment takes place, so we require some local storage on the target machine; this will live under ~/.tentacle/. We will also extract packages to a default location like we do on a Tentacle machine, e.g. ~/.tentacle/apps/{environment}/{project}/{package}/{version}, and we will support custom installation locations to move the files elsewhere.

Package acquisition

Because a Tentacle machine is required for SSH deployments, package acquisition for these deployments will change slightly from the way Windows deployments with Octopus work today.

The Tentacle machine will extract the NuGet package and create a .tar.gz tarball, which will then be uploaded to the target machines.

The Tentacle machine can be co-located with the target machines to optimize bandwidth usage, i.e. Octopus uploads the package to the Tentacle, which in turn will send the package to the target machines.
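Since a NuGet package is just a zip archive, the repackaging step amounts to extracting it and re-compressing the contents as a tarball. A rough sketch of that idea (the paths and file names are illustrative, not the actual Tentacle implementation):

```shell
# Stand-in for the directory produced by extracting the .nupkg
STAGING=$(mktemp -d)
mkdir -p "$STAGING/site"
echo '<?php phpinfo();' > "$STAGING/site/index.php"

# Create the compressed tarball that would be sent to each target machine
tar -czf package.tar.gz -C "$STAGING" .

# Verify the archive round-trips as a target machine would see it
DEST=$(mktemp -d)
tar -xzf package.tar.gz -C "$DEST"
ls "$DEST/site"
```

Using `-C` keeps the archive paths relative, so the package unpacks cleanly into whatever installation directory the target uses.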


Package deployment steps will run entirely via a single shell session on the target machine.

  1. We will check and ensure the Octopus scripts are up-to-date
  2. Package and supporting deployment files will be uploaded via SCP
  3. Deployment orchestration script will be executed
  4. Default installation directory will be created if it doesn't exist
  5. tar file will be unpacked
  6. predeploy will run
  7. If a custom installation directory has been specified
    • If the option to purge the directory before deployment is true, we purge the custom installation directory
    • Copy the extracted files to the custom directory
  8. deploy will run
  9. postdeploy will run
  10. Run retention policies to clean up old deployments
  11. Delete the Octopus variables file (to ensure sensitive variables aren't left on the server)
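The middle of that sequence (steps 4-9, ignoring custom installation directories) can be sketched in Bash; the function name, paths, and hook invocation details here are illustrative, not the real orchestration script:

```shell
deploy_package() {
    local tarball=$1 install_dir=$2

    mkdir -p "$install_dir"                # step 4: create the installation directory
    tar -xzf "$tarball" -C "$install_dir"  # step 5: unpack the tar file

    cd "$install_dir" || return 1
    # steps 6, 8, 9: run the user-supplied hooks if they exist
    for hook in predeploy deploy postdeploy; do
        if [ -f "$hook" ]; then sh "$hook"; fi
    done
}

# Example with a throwaway package containing only a deploy hook
WORK=$(mktemp -d)
echo 'echo deployed' > "$WORK/deploy"
tar -czf "$WORK/pkg.tar.gz" -C "$WORK" deploy
deploy_package "$WORK/pkg.tar.gz" "$WORK/app"
```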

Deployment scripts

The main deployment orchestration script will be written in Bash, as this is the least common denominator amongst *nix distributions. This script will look for predeploy/deploy/postdeploy scripts, which can be created by the user, and execute them if they are present.

The predeploy/deploy/postdeploy scripts can be written in the user's preferred scripting language (but the user has to ensure that it is installed on the server the deployment runs on).

  • predeploy
    • tasks required to run pre-deployment, e.g. config transformations needed for your application.
  • deploy
    • tasks required for the actual deployment of your application.
  • postdeploy
    • tasks required to run post-deployment, e.g. cleaning up any temp files created during the deployment of your application.

The working directory will be the default installation directory for the predeploy script, and either the default or custom installation directory for the deploy and postdeploy scripts.

Environment variables for deployments

Octopus has a more sophisticated variable system and syntax than Linux environment variables can support. Having to map between names like Octopus.Action[Install website].Status.Code and valid POSIX equivalents seems uncomfortable and error-prone. Large Octopus deployments also tend to carry a lot of variables, so we're uneasy about dumping these arbitrarily into the environment in which the deployment script runs.

Instead of setting environment vars directly, deployment scripts will have access to a tentacle command that can be used to retrieve values that they require. E.g. to retrieve the custom installation directory used by the deployment the user can call the tentacle command like so:

DEST=$(tentacle get Octopus.Action.Package.CustomInstallationDirectory)

This assigns the custom installation directory to a shell variable DEST (subsequently available to the script as $DEST).

Values with embedded spaces and so on can be supported using " quotes.

Though we're unlikely to implement it in the first version of the command, we're considering some more sophisticated features like iteration:

for ACTION in $(tentacle get "Octopus.Action[*]"); do
    echo "The status of $ACTION was $(tentacle get "Octopus.Action[$ACTION].Status.Code")"
done

This highlights the kinds of opportunities we see to make writing deployment scripts more enjoyable.

Other features of the tentacle command

Using the tentacle helper will also provide consistent access to the commands supported using PowerShell cmdlets on Windows machines.

Setting output variables

Output variables can be sent to the Octopus server using tentacle set:

tentacle set ActiveUsers 3


Output from another command can also be piped in:

ps -af | tentacle set RunningProcesses

Collecting artifacts

Files from the target machine can be collected as Octopus artifacts using tentacle collect:

tentacle collect InstallLog.txt

Running tools

Where we (or others) provide helper scripts that themselves need access to variables, paths and so-on, these can be invoked using tentacle exec:

tentacle exec xmlconfig Web.config

Deployment features

Features like XML configuration transformations/appsettings support will run on the target machine.

Supporting Octopus scripts and executables will be part of the default folder structure on the target machine, i.e. ~/.tentacle/tools/. In this folder we can also include Mono-based helper apps to support .NET-specific conventions like XML configuration transformation/appsettings.

We can also include different scripting/executable options to support other deployment features.

Retention policies

Once a deployment has completed, we will apply the retention policy that has been specified for the project, just like we do with Windows deployments.

The user can specify keeping a number of days' worth of deployments, or a specific number of deployments. If either of these has been specified, we will remove any files that do not fall within the specified retention policy.
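A day-based policy could be implemented with little more than find. The sketch below uses an illustrative layout and cutoff; the actual Tentacle retention logic may differ:

```shell
# Stand-in for ~/.tentacle/apps with one stale and one fresh deployment
APPS=$(mktemp -d)
DAYS=30
mkdir -p "$APPS/old-deploy" "$APPS/new-deploy"
touch -d '40 days ago' "$APPS/old-deploy"   # backdate the stale deployment (GNU touch)

# Delete top-level deployment directories not written to within $DAYS days
find "$APPS" -mindepth 1 -maxdepth 1 -type d -mtime +"$DAYS" -exec rm -rf {} +

ls "$APPS"
```

A count-based policy would instead sort the deployment directories by modification time and delete all but the newest N.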

System requirements

Linux distributions can vary significantly in their default configuration and available packages. We're aiming to choose a widely-supported baseline that makes it possible to deploy with Octopus to practically any current Linux distribution.

The fundamental assumptions we will make about a target machine are:

  • It's accessible using SSH and SCP
  • The user's login shell is Bash 4+
  • tar is available

The platforms that we ourselves plan to build and test against are:

  • Amazon Linux AMI 2014.03
  • Ubuntu Server 12.04 LTS

We will do our best to remain distro-agnostic, but if you're able to choose one of these options for your own servers you'll be helping us provide efficient testing and support.

Outstanding Questions

  1. Management of platform-specific paths
    • where apps are being deployed to both Windows and Linux servers, paths like "Custom Installation Directory" will need to be specified separately for Linux and Windows. Can we make this experience better?
  2. Naming of the deploy scripts
    • predeploy/deploy/postdeploy, or
    • pre-deploy/deploy/post-deploy, or
    • pre_deploy/deploy/post_deploy?
  3. Customisation of paths where we will upload packages and extract packages by default
    • is it necessary to configure this via Octopus, or can locations like ~/.tentacle/apps be linked by an administrator to other locations as needed?
  4. Writing out variables like we do in PowerShell
    • In PowerShell we first encrypt them with DPAPI, is there a similar standard crypto function available on Linux?

We need your help!

What we would really appreciate is for customers who are already deploying over SSH, or who would want to start, to give us feedback on our plan for implementing SSH deployments in Octopus.

Whether it be improvements to the suggested implementation above, or assumptions we've made that just will not work, please let us know in the comments below.

RFC: Lifecycles

Lifecycles are a new concept in Octopus that will allow us to tackle a number of related suggestions that we've been longing to solve:

  • Automatically promoting between environments (triggers)
  • Marking a release as bad (so it cannot be deployed any more)
  • Preventing production deployments until test deployments are complete (gates)

Lifecycle example

Lifecycles and phases

A lifecycle is made up of a number of phases, each of which specifies triggers and rules around promotion. The simplest lifecycle, which would ship out of the box and be the default, would simply be:

Phase 1: Anything Goes
  • Allow manual deployment to: all environments

In other words, this lifecycle simply says "Releases can be deployed to any environment, in any order". It's total chaos!

A custom lifecycle might split the world into pre-production and production phases:

Phase 1: Pre-Production
  • Automatically deploy to: Development
  • Allow manual deployment to: UAT1, UAT2, UAT3, Staging
  • Minimum environments before promotion: 3
Phase 2: Production
  • Automatically deploy to:
  • Allow manual deployment to: Production
  • Minimum environments before promotion:

Finally, a more structured lifecycle might look like this:

Phase 1: Development
  • Automatically deploy to: Development
  • Allow manual deployment to:
  • Minimum environments before promotion: 1
Phase 2: Test
  • Automatically deploy to:
  • Allow manual deployment to: UAT1, UAT2, UAT3
  • Minimum environments before promotion: 2
Phase 3: Staging
  • Automatically deploy to:
  • Allow manual deployment to: Staging
  • Minimum environments before promotion: 1
Phase 4: Production
  • Automatically deploy to:
  • Allow manual deployment to: Production
  • Minimum environments before promotion: 1

Note that the Test phase unlocks 3 different test environments, and users must deploy to at least 2 of them before the release enters the Staging phase.


We're making a few assumptions with this feature, in order to keep it simple.

First, progression through phases is always linear - you start in phase 1, then go to 2, then 3, and so on. You cannot skip a phase, and there is no branching.

Second, the environments that can be deployed to are cumulative as you get further into the lifecycle. For example, if the release is in phase 3 (Staging) in the third example above, you can deploy to Development, UAT1/2/3, or Staging - just not Production.

Automatic promotion

Since each phase can be configured to deploy to one or more environments, you can use this option to automate promotion between environments. For example, upon successful deployment to a development environment, you might automatically promote to a test environment.

Keep in mind that you can mix this feature with the existing manual intervention steps system to pause for approval before/after a deployment, and prior to promotion.

Automatic release creation

When you assign a lifecycle to a project, you'll also be able to configure the project to create releases as soon as a new NuGet package is detected.

Create releases automatically

For now, I think this will be limited only to our built-in NuGet repository (not for packages in external feeds).

When combined with the features above, this is very exciting - from the push of a NuGet package we can create and deploy releases with no external integration.

Flag a problem

Normally, we assume that if a release is deployed successfully, it's ready to be promoted. Just like now, you can use manual steps to force a review/approval as an explicit step at the end of a deployment.

However, sometimes a deployment looks good and gets approved, and only later do you discover a problem - perhaps a terrible bug that deletes customer data. If that happens, you can flag a problem with the deployment:

Flag a problem

When a problem is flagged, the deployment doesn't count towards progress through the lifecycle - if we flag a problem with the Staging deployment, we won't be able to promote to Production, even if Staging was successful.

Scenarios enabled

I want to automate the promotion of deployments from Development all the way to Production, just by pushing a NuGet package

  1. Use the "Automatically create a release" option
  2. In each phase of the pipeline, set the 'automatically deploy to' environments such that the release automatically progresses through the pipeline

Prevent Production deployments unless you have deployed to Staging

Simply put them in different phases, and don't unlock the Production environment unless there's a successful Staging deployment.

Prevent production deployments even if staging was successful, if we later find a problem with the application

Use the "Flag a problem" feature to prevent the release from progressing to the next phase, or revert it to the previous phase, in the lifecycle.

Lifecycles will consume project groups

Currently, project groups in Octopus are used to organize collections of projects, as well as to limit which environments they can deploy to and to set the retention policy.

When lifecycles are introduced, it's via lifecycles that you'll control which environments a project can be deployed to, and the retention policy to apply. Project groups will just be left to organize collections of projects, and nothing more.

So, what do you think? Is this a feature that will be useful to you? What would your lifecycle look like?

Cleaning temporary ASP.NET files

The vast majority of ASP.NET websites and web applications make use of dynamic compilation to compile certain parts of the application. Assemblies created by dynamic compilation for ASP.NET websites are stored in the Temporary ASP.NET files directory. These temporary files build up over time and have to be removed manually. Copies of a website's bin directory are also stored in this folder as part of shadow copying. Many users of Octopus Deploy also use continuous integration, or release far more frequently than they otherwise would. This in turn means that these temporary files can accumulate quite quickly.

Instead of leaving it to a manual process, there are a few ways to clean up after a deployment. To clean out this directory you can use the File System - Clean ASP.NET Temp Files PowerShell script template from the Octopus Deploy Library. This script lets you clean the temporary files directory as a step in the deployment process.

How to use the script

After importing the step from the library and adding it to your project, you can configure the framework version and days to keep.

Step details

The step requires only two parameters: Framework version and Days to keep. By default, it will clean site directories under the Temporary ASP.NET files directory that are older than 30 days.

Framework Version

Specifying All will clean out the temp files for each installed version of the framework. If you need to target a specific version, you can specify the bitness, the version, or both.

Specifying only a bitness value will match all versions. There are only two valid options:

 - Framework
 - Framework64

Specifying only a version will match that version regardless of bit-ness (both 32 and 64 bit). The version must always start with v.

 - v1.1.4322
 - v2.0.50727

A fully specified framework version will match only that path.

 - Framework\v4.0.30319
 - Framework64\v4.0.30319
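The matching rules above can be sketched as a small shell function. This is illustrative only - the actual template is a PowerShell script from the Library - but it shows how each form of the parameter maps onto the temp-file directory layout:

```shell
# Map a "Framework version" parameter to a path pattern under
# the Temporary ASP.NET Files root (sketch, not the real template).
match_dirs() {
    case $1 in
        Framework|Framework64) printf '%s\n' "$1/v*" ;;          # bitness only: all versions
        v*)                    printf '%s\n' "Framework*/$1" ;;  # version only: both bitnesses
        *)                     printf '%s\n' "$1" ;;             # fully specified path
    esac
}

match_dirs Framework64
match_dirs v2.0.50727
```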

Days to keep

If the last write time of the site directory is older than the specified number of days to keep, it will be deleted. (The directory is recreated on application startup.)

How does the script work?

The directory structure under Temporary ASP.NET files consists of directories that map to application names (the virtual path name or root). Under each of those, there is a code generation directory for each version of each website. The script will identify and delete only the code generation directories, and these are also what Days to keep uses for retention.

Deleting temporary ASP.NET files is a safe operation, but keep in mind that multiple websites can use this folder, and websites that are currently in use may be recycled.

You can execute the script before or after a deployment. However, it is recommended that you run it before a deployment, as the previous deployment may still be in use even after the new deployment is finished. Any code generation directories that contain locked files will be skipped.

Are there any other solutions?

If you have multiple websites, or need to avoid as much downtime as possible, a custom compilation directory can be used to isolate each site's code generation directories. You can specify a custom temporary files directory with the tempDirectory attribute on the compilation tag in your web.config.

        <system.web>
          <compilation tempDirectory="C:\TempAspNetFiles\" />
        </system.web>

When to worry about this?

You should only have to worry about this if you're doing frequent deployments or you need highly robust deployments. Other factors to consider are how many sites you deploy, the size of the bin directory, how often deployments are made, and how much disk space you have.

Introducing Daniel

Daniel Little

This post is long overdue. A couple of months ago, Daniel Little joined our team. In his first few weeks he built the scheduled deployments feature that shipped as part of 2.5. Welcome Daniel!

Daniel has been working professionally as a software engineer on the .NET stack since 2010. He's worked on a wide range of projects over the years, automating the deployments of (and developing) enterprise CMS solutions, plus a range of custom-built .NET applications. He also has a passion for software architecture and functional programming.

Daniel is a strong advocate of automated deployments, and has been using Octopus Deploy for over a year since introducing it on a number of projects.

Daniel is @lavinski on Twitter and blogs at lavinski.me