5 Remote Desktop Alternatives

If you build applications to run on Windows servers, and you are involved in deployments, it's quite likely that you'll be spending time in remote desktop.

In the olden days, when ships were made of wood and men of steel, we'd have a couple of servers and run as many applications as we could on them. An IIS server with a dozen sites or applications wasn't just common, it was the standard.

Nowadays, of course, cloud. Virtualization means that instead of one server running many applications, we have one server running many virtual servers, each with a single application. This means we're seldom in just one remote desktop session at a time.

The following tools help you more easily manage multiple remote desktop sessions at once.

1. Remote Desktop Connection Manager

It's free, and it's from Microsoft. What's not to love?


It can save credentials if you like, and it's great for sharing connections between teammates. The one feature it lacks: it can't save credentials for a remote desktop gateway. That's why we switched to...

2. mRemoteNG

An open source fork of mRemote, this is the tool that we currently use. The Octopus team is distributed, so we keep the mRemoteNG settings file in Dropbox so that everyone on the team can use it to connect easily to any of our VMs.


3. RoyalTS

RoyalTS is a very nice looking commercial alternative, and has a killer feature: a button that lets you click "Start" remotely. I'm not sure who forgot to tell the UX team on Windows that people don't normally run Windows Server 2012 on tablets, but I'm sure they had a reason for making it nigh impossible to launch programs over remote desktop. Never fear, RoyalTS is here.


4. Terminals

Another open source tabbed session manager, and this one looks to be actively developed. And the source code is in C#!


5. Octopus Deploy!

OK, it's a shameless plug :-)

Octopus Deploy dashboard

Octopus Deploy is a remote desktop alternative in the same way that TeamCity/Team Build is a Visual Studio alternative.

Remote desktop tools are essential for diagnostics and some configuration tasks; there's no denying it. That said, our entire raison d'être here at Octopus Deploy is to make it so that a typical deployment involves no remote desktop whatsoever. Through better visibility, accountability and reliability, our goal is to reduce the time you spend in remote desktop sessions.

What's your experience with the tools above, and what did I miss?

RFC: Linux Deployments

Currently our highest-voted UserVoice idea is to add support for Linux deployments in Octopus. We are going to implement this by adding first-class support for servers running SSH, which will map very closely to the way that Windows deployments with Octopus work today.


For this purpose we will be introducing a new term in Octopus: agentless machines. An agentless machine will not run a Tentacle; instead it will use a different method of communication, e.g. SSH.

Our goal with this RFC is to make sure that the way we implement this feature will be suitable for the widest range of customers.

Introducing agentless machines

Setting up a new agentless machine, e.g. a Linux server running SSH, in Octopus will work the same way as when adding a new machine running a Tentacle.

Adding an agentless machine

Agentless machines are configured by choosing SSH rather than Listening or Polling as the communication style.

Adding an agentless machine

Environment with an agentless machine

Agentless machines appear on the Environments page just like regular Tentacles, showing their location and status (online/offline).

Environment with an agentless machine

Health checking an agentless machine

Typical Octopus tasks like health checks, ad-hoc scripts and so on run across all appropriate machines, including both Tentacle and agentless machines if both styles are being used.

Health check an agentless machine


Our aim is to support the following authentication types for SSH target machines:

Authentication Types



  • Key without passphrase
  • Key with passphrase

Private keys will be stored in the Octopus database as encrypted properties.

Network topology

Connections to agentless machines won't be made directly from the Octopus Server; instead, one or more Tentacles will be used to make outbound connections to the machine. We're planning to add a hidden, "shadow" Tentacle running in a low-privilege process on the Octopus Server itself as a convenient default, but using specific Tentacles to handle different network topologies is also a feature we're considering.

Octopus footprint on agentless machines

Octopus will upload compressed packages to the target machine before any deployment takes place, so we require some local storage on the target machine; this will live under ~/.tentacle/. We will also extract the packages to a default location, as we do on a Tentacle machine, e.g. ~/.tentacle/apps/{environment}/{project}/{package}/{version}, and we will also support custom installation locations to move the files elsewhere.
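As a rough sketch of that default layout, using hypothetical values for the environment, project, package and version:

```shell
# Hypothetical deployment values; the path template comes from the RFC above.
TENTACLE_HOME="${TENTACLE_HOME:-$HOME/.tentacle}"
ENVIRONMENT="Production"
PROJECT="Acme.Web"
PACKAGE="Acme.Web"
VERSION="1.0.0"

# Default extraction location under the Tentacle home directory.
DEST="$TENTACLE_HOME/apps/$ENVIRONMENT/$PROJECT/$PACKAGE/$VERSION"
mkdir -p "$DEST"
printf '%s\n' "$DEST"
```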

Package acquisition

Because a Tentacle machine is required for SSH deployments, package acquisition for these deployments will change slightly from the way Windows deployments with Octopus work today.

The Tentacle machine will extract the NuGet package and create a .tar.gz tarball that will then be uploaded to the target machines.

The Tentacle machine can be co-located with the target machines to optimize bandwidth usage, i.e. Octopus uploads the package to the Tentacle, which in turn will send the package to the target machines.


Package deployment steps will run entirely via a single shell session on the target machine.

  1. Check that the Octopus scripts on the machine are up to date
  2. Upload the package and supporting deployment files via SCP
  3. Execute the deployment orchestration script
  4. Create the default installation directory if it doesn't exist
  5. Unpack the tar file
  6. Run predeploy
  7. If a custom installation directory has been specified:
    • If the option to purge the directory before deployment is set, purge the custom installation directory
    • Copy the extracted files to the custom directory
  8. Run deploy
  9. Run postdeploy
  10. Run retention policies to clean up old deployments
  11. Delete the Octopus variables file (to ensure sensitive variables aren't left on the server)
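The core of the sequence above can be sketched as a single shell function. This is an illustrative simplification, not the actual orchestration script: the hook names follow the RFC, while the variables-file name and the custom-directory handling are reduced to their essentials.

```shell
# Illustrative sketch of the unpack/hook/cleanup steps; not the real script.
run_deployment() {
  pkg_tgz="$1"   # tarball produced by the Tentacle and uploaded via SCP
  dest="$2"      # default installation directory

  mkdir -p "$dest"                       # create the default install dir
  tar -xzf "$pkg_tgz" -C "$dest"         # unpack the tar file
  cd "$dest" || return 1

  if [ -f predeploy ]; then bash predeploy; fi    # predeploy hook, if present

  # Optional custom installation directory (purge first if requested).
  if [ -n "$CUSTOM_DIR" ]; then
    if [ "$PURGE" = "true" ]; then rm -rf "$CUSTOM_DIR"; fi
    mkdir -p "$CUSTOM_DIR"
    cp -R . "$CUSTOM_DIR"
    cd "$CUSTOM_DIR" || return 1
  fi

  if [ -f deploy ]; then bash deploy; fi          # deploy hook
  if [ -f postdeploy ]; then bash postdeploy; fi  # postdeploy hook

  rm -f octopus.variables   # hypothetical name: don't leave sensitive values behind
}
```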

Deployment scripts

The main deployment orchestration script will be written in bash, as this is the least common denominator amongst *nix distributions. This script will look for predeploy/deploy/postdeploy scripts, which can be created by the user, and execute them if they are present.

The predeploy/deploy/postdeploy scripts can be written in the user's preferred scripting language (though the user will have to ensure that it is installed on the server the deployment runs on).

  • predeploy
    • tasks to run before deployment, e.g. config transformations needed for your application.
  • deploy
    • tasks for the actual deployment of your application.
  • postdeploy
    • tasks to run after deployment, e.g. cleaning up any temp files created during the deployment of your application.

The working directory will be the default installation directory for the predeploy script, and either the default or custom installation directory for the deploy and postdeploy scripts.

Environment variables for deployments

Octopus has a more sophisticated variable system and syntax than Linux environment variables can support. Having to map between names like Octopus.Action[Install website].Status.Code and valid POSIX equivalents seems uncomfortable and error-prone. Large Octopus deployments also tend to carry a lot of variables, so we're uneasy about dumping these arbitrarily into the environment in which the deployment script runs.

Instead of setting environment variables directly, deployment scripts will have access to a tentacle command that can be used to retrieve the values they require. For example, to retrieve the custom installation directory used by the deployment, the user can call the tentacle command like so:

DEST=$(tentacle get Octopus.Action.Package.CustomInstallationDirectory)

This assigns the custom installation directory to a shell variable DEST (subsequently available to the script as $DEST).

Values with embedded spaces and so on can be supported using " quotes.
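To illustrate the quoting, here's a sketch using a stand-in tentacle function; the proposed command doesn't exist yet, so both the function body and the value it returns are purely hypothetical.

```shell
# Stand-in for the proposed tentacle command; the returned path is made up.
tentacle() { printf '%s\n' '/opt/acme/web app'; }

# Double quotes around the command substitution preserve the embedded space.
DEST="$(tentacle get Octopus.Action.Package.CustomInstallationDirectory)"
printf '%s\n' "$DEST"
```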

Though we're unlikely to implement it in the first version of the command, we're considering some more sophisticated features like iteration:

for ACTION in $(tentacle get "Octopus.Action[*]"); do
    echo "The status of $ACTION was $(tentacle get "Octopus.Action[$ACTION].Status.Code")"
done

This highlights the kinds of opportunities we see to make writing deployment scripts more enjoyable.

Other features of the tentacle command

Using the tentacle helper will also provide consistent access to the commands supported using PowerShell cmdlets on Windows machines.

Setting output variables

Output variables can be sent to the Octopus server using tentacle set:

tentacle set ActiveUsers 3


ps -af | tentacle set RunningProcesses

Collecting artifacts

Files from the target machine can be collected as Octopus artifacts using tentacle collect:

tentacle collect InstallLog.txt

Running tools

Where we (or others) provide helper scripts that themselves need access to variables, paths and so-on, these can be invoked using tentacle exec:

tentacle exec xmlconfig Web.config

Deployment features

Features like XML configuration transformations/appsettings support will run on the target machine.

Supporting Octopus scripts and executables will be part of the default folder structure on the target machine, i.e. ~/.tentacle/tools/. In this folder we can also include helper apps using Mono to support .NET-specific conventions like XML configuration transformation/appsettings.

We can also include different scripting/executable options to support other deployment features.

Retention policies

Once a deployment has completed, we will apply the retention policy that has been specified for the project, just like we do with Windows deployments.

The user can specify keeping a number of days' worth of deployments, or a specific number of deployments. If either of these has been specified, we will remove any files that do not fall within the specified retention policy.
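A sketch of the "specific number of deployments" case, assuming one directory per deployment (as in the ~/.tentacle/apps layout above) and directory names without spaces; the real retention policy logic lives in Octopus:

```shell
# Keep only the newest $2 deployment directories under $1; delete the rest.
# Illustrative only - assumes one directory per deployment, no spaces in names.
keep_last() {
  dir="$1"
  keep="$2"
  ls -1t "$dir" | tail -n +"$((keep + 1))" | while read -r old; do
    rm -rf "${dir:?}/$old"
  done
}
```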

System requirements

Linux distributions can vary significantly in their default configuration and available packages. We're aiming to choose a widely-supported baseline that makes it possible to deploy with Octopus to practically any current Linux distribution.

The fundamental assumptions we will make about a target machine are:

  • It's accessible using SSH and SCP
  • The user's login shell is Bash 4+
  • tar is available
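A pre-flight check for these assumptions might look like the sketch below (run from an SSH session on the target; the real health check will be richer than this):

```shell
# Check the baseline assumptions: tar exists, and a Bash 4+ is available.
preflight() {
  command -v tar >/dev/null 2>&1 || { echo "tar not found"; return 1; }
  command -v bash >/dev/null 2>&1 || { echo "bash not found"; return 1; }
  # Ask bash itself for its major version, so this works from any shell.
  major=$(bash -c 'printf %s "${BASH_VERSINFO[0]}"')
  [ "$major" -ge 4 ] || { echo "Bash 4+ required, found $major"; return 1; }
  echo "ok"
}
```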

The platforms that we ourselves plan to build and test against are:

  • Amazon Linux AMI 2014.03
  • Ubuntu Server 12.04 LTS

We will do our best to remain distro-agnostic, but if you're able to choose one of these options for your own servers you'll be helping us provide efficient testing and support.

Outstanding Questions

  1. Management of platform-specific paths
    • where apps are being deployed to both Windows and Linux servers, paths like "Custom Installation Directory" will need to be specified separately for Linux and Windows. Can we make this experience better?
  2. Naming of the deploy scripts
    • predeploy/deploy/postdeploy, or
    • pre-deploy/deploy/post-deploy, or
    • pre_deploy/deploy/post_deploy?
  3. Customisation of paths where we will upload packages and extract packages by default
    • is it necessary to configure this via Octopus, or can locations like ~/.tentacle/apps be linked by an administrator to other locations as needed?
  4. Writing out variables like we do in PowerShell
    • In PowerShell we first encrypt them with DPAPI, is there a similar standard crypto function available on Linux?

We need your help!

What we would really appreciate is for customers who are already using SSH with Octopus, or who would like to start, to give us feedback on our plan for implementing SSH deployments in Octopus.

Whether you have improvements to the suggested implementation above, or we've made assumptions that just won't work, please let us know in the comments below.

RFC: Lifecycles

Lifecycles are a new concept in Octopus that will allow us to tackle a number of related suggestions that we've been longing to solve:

  • Automatically promoting between environments (triggers)
  • Marking a release as bad (so it cannot be deployed any more)
  • Preventing production deployments until test deployments are complete (gates)

Lifecycle example

Lifecycles and phases

A lifecycle is made up of a number of phases, each of which specifies triggers and rules around promotion. The simplest lifecycle, which would ship out of the box and be the default, would simply be:

Phase 1: Anything Goes
  • Allow manual deployment to: all environments

In other words, this lifecycle simply says "Releases can be deployed to any environment, in any order". It's total chaos!

A custom lifecycle might split the world into pre-production and production phases:

Phase 1: Pre-Production
  • Automatically deploy to: Development
  • Allow manual deployment to: UAT1, UAT2, UAT3, Staging
  • Minimum environments before promotion: 3
Phase 2: Production
  • Automatically deploy to:
  • Allow manual deployment to: Production
  • Minimum environments before promotion:

Finally, a more structured lifecycle might look like this:

Phase 1: Development
  • Automatically deploy to: Development
  • Allow manual deployment to:
  • Minimum environments before promotion: 1
Phase 2: Test
  • Automatically deploy to:
  • Allow manual deployment to: UAT1, UAT2, UAT3
  • Minimum environments before promotion: 2
Phase 3: Staging
  • Automatically deploy to:
  • Allow manual deployment to: Staging
  • Minimum environments before promotion: 1
Phase 4: Production
  • Automatically deploy to:
  • Allow manual deployment to: Production
  • Minimum environments before promotion: 1

Note that the Test phase unlocks 3 different test environments, and users must deploy to at least 2 of them before the release enters the Staging phase.


We're making a few assumptions with this feature, in order to keep it simple.

First, progression through phases is always linear - you start in phase 1, then go to 2, then 3, and so on. You cannot skip a phase, and there is no branching.

Second, the environments that can be deployed to are cumulative as you get further into the lifecycle. For example, if the release is in phase 3 (Staging) in the third example above, you can deploy to Development, UAT1/2/3, or Staging, just not Production.
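To make the cumulative rule concrete, here's a sketch using the phases from the third example above (the function itself is hypothetical, not part of Octopus):

```shell
# Environments unlocked once a release reaches phase $1 (1-4), per the
# "more structured lifecycle" example above. Environments are cumulative.
allowed_envs() {
  current="$1"
  for entry in "1 Development" "2 UAT1 UAT2 UAT3" "3 Staging" "4 Production"; do
    set -- $entry            # split "phase env env ..." on whitespace
    phase="$1"; shift
    if [ "$phase" -le "$current" ]; then echo "$@"; fi
  done
}
```

Calling allowed_envs 3 lists Development, the UAT environments and Staging, but not Production.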

Automatic promotion

Since each phase can be configured to deploy to one or more environments, you can use this option to automate promotion between environments. For example, upon successful deployment to a development environment, you might automatically promote to a test environment.

Keep in mind that you can mix this feature with the existing manual intervention steps system to pause for approval before/after a deployment, and prior to promotion.

Automatic release creation

When you assign a lifecycle to a project, you'll also be able to configure the project to create releases as soon as a new NuGet package is detected.

Create releases automatically

For now, I think this will be limited only to our built-in NuGet repository (not for packages in external feeds).

When combined with the features above, this is very exciting - from the push of a NuGet package we can create and deploy releases with no external integration.

Flag a problem

Normally, we assume that if a release is deployed successfully, it's ready to be promoted. Just like now, you can use manual steps to force a review/approval as an explicit step at the end of a deployment.

However, sometimes a deployment looks good and gets approved, and only later do you discover a problem - perhaps a terrible bug that deletes customer data. If that happens, you can flag a problem with the deployment:

Flag a problem

When a problem is flagged, the deployment doesn't count towards progress through the lifecycle - if we flag a problem with the Staging deployment, we won't be able to promote to Production, even if Staging was successful.

Scenarios enabled

I want to automate the promotion of deployments from Development all the way to Production, just by pushing a NuGet package

  1. Use the "Automatically create a release" option
  2. In each phase of the pipeline, set the 'automatically deploy to' environments such that the release automatically progresses through the pipeline

Prevent Production deployments unless you have deployed to Staging

Simply put them in different phases, and don't unlock the Production environment unless there's a successful Staging deployment.

Prevent production deployments even if staging was successful, if we later find a problem with the application

Use the "Flag a problem" feature to prevent the release from progressing to the next phase, or revert it to the previous phase, in the lifecycle.

Lifecycles will consume project groups

Currently, project groups in Octopus are used to organize collections of projects, as well as limit which environments they can deploy to, and to set the retention policy.

When lifecycles are introduced, it's via lifecycles that you'll control which environments a project can be deployed to, and the retention policy to apply. Project groups will just be left to organize collections of projects, and nothing more.

So, what do you think? Is this a feature that will be useful to you? What would your lifecycle look like?

Cleaning temporary ASP.NET files

The vast majority of ASP.NET websites and web applications make use of dynamic compilation to compile certain parts of the application. Assemblies created by dynamic compilation for ASP.NET websites are stored in the Temporary ASP.NET files directory. These temporary files build up over time and have to be removed manually. Copies of a website's bin directory are also stored in this folder as part of shadow copying. Many users of Octopus Deploy also use continuous integration, or release far more frequently than they otherwise would. This in turn means that these temporary files can accumulate quite quickly.

Instead of leaving it to a manual process, there are a few ways to clean up after a deployment. To clean out this directory you can use the File System - Clean ASP.NET Temp Files PowerShell script template from the Octopus Deploy Library. This script lets you clean the temporary files directory as a step in the deployment process.

How to use the script

After importing the step from the library and adding the step to your project you can configure the framework version and days to keep.

Step details

The step only requires two parameters: Framework version and Days to keep. By default, it will clean site directories under the Temporary ASP.NET files directory that are older than 30 days.

Framework Version

Specifying All will clean out the temp files for each installed version of the framework. If you need to target a specific version, you can specify the bitness, the version, or both.

Specifying only a bitness value will match all versions. Only one of the following two values is valid:

 - Framework
 - Framework64

Specifying only a version will match that version regardless of bitness (both 32- and 64-bit). The version must always start with v.

 - v1.1.4322
 - v2.0.50727

A fully specified framework version will match only that path.

 - Framework\v4.0.30319
 - Framework64\v4.0.30319
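The three matching rules can be sketched as below. This is only an illustration of the rules, not the actual step template (which is PowerShell); the sketch uses forward slashes in paths for portability.

```shell
# Does a framework-version spec match a given Framework[64]/vX.Y.Z path?
# Sketch of the matching rules above; not the actual PowerShell template.
matches_spec() {
  spec="$1"; path="$2"    # e.g. matches_spec v2.0.50727 Framework64/v2.0.50727
  case "$spec" in
    Framework|Framework64) [ "${path%%/*}" = "$spec" ] ;;  # bitness only
    v*)                    [ "${path#*/}" = "$spec" ] ;;   # version only
    *)                     [ "$path" = "$spec" ] ;;        # fully specified
  esac
}
```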

Days to keep

If the last write time of the site directory is older than the specified number of days to keep, it will be deleted. The directory is created on application startup.
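In shell terms, the rule is equivalent to something like this sketch (the real step template is PowerShell, and it also skips directories containing locked files):

```shell
# Delete immediate subdirectories of $1 whose mtime is older than $2 days.
# Illustrative equivalent of the "Days to keep" rule; not the actual template.
clean_old_dirs() {
  find "$1" -mindepth 1 -maxdepth 1 -type d -mtime +"$2" -exec rm -rf {} +
}
```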

How does the script work

The directory structure under Temporary ASP.NET files consists of directories that map to application names (the virtual path name, or root). Under each of those is a code generation directory for each version of each website. The script will identify and delete only code generation directories. The code generation directories are also what Days to keep uses for retention.

Deleting temporary ASP.NET files is a safe operation, but keep in mind that multiple websites can use this folder, and that websites currently in use may be recycled.

You can execute the script before or after a deployment. However, it is recommended that you run the script before a deployment, as the previous deployment may still be in use even after the new deployment is finished. Any code generation directories that contain locked files will be skipped.

Are there any other solutions?

If you have multiple websites, or need to minimize downtime, configuring a custom compilation directory can be used to isolate each site's code generation directories. You can specify a custom temporary files directory with the tempDirectory attribute on the compilation tag in your web.config.

        <compilation tempDirectory="C:\TempAspNetFiles\">

When to worry about this?

You should only have to worry about this if you're doing frequent deployments, or if you need highly robust deployments. Other factors to consider are how many sites you deploy, the size of the bin directory, how often deployments are made, and how much disk space you have.

Introducing Daniel

Daniel Little

This post is long overdue. A couple of months ago, Daniel Little joined our team. In his first few weeks he built the scheduled deployments feature that shipped as part of 2.5. Welcome Daniel!

Daniel has been working professionally as a software engineer on the .NET stack since 2010. He's worked on a wide range of projects over the years: automating the deployments of, and developing, enterprise CMS solutions, plus a range of custom-built .NET applications. He also has a passion for software architecture and functional programming.

Daniel is a strong advocate of automated deployments, and has been using Octopus Deploy for over a year since introducing it on a number of projects.

Daniel is @lavinski on Twitter and blogs at lavinski.me

RFC: Branching

In the next version of Octopus we're planning to improve our support for people working on different branches of code. We're still in the planning stage on this one, so it's a good time to share what we're doing and get your comments.

Each time you create a release in Octopus, you choose the versions of each package to include, defaulting to the latest:

When different teams work on different branches at the same time, it can make the create release page difficult to use. For example, suppose that the team published packages in the following order:


When viewing the create release page, since Octopus defaults to the latest version, it means people have to hunt to select the right package version - not a great experience.

To help with this, we're introducing branches. You'll define a branch like this:

Defining a simple branch

A branch has a simple name, and a set of rules that decide which versions should appear when creating a release. In the Step name field, you can choose which steps the rule applies to - one, many, or all steps in the project. The Version range field uses the NuGet versioning syntax to specify the range of versions to include. The Tag field can contain a regular expression, which is matched against the SemVer pre-release tag.
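For example, a tag rule of vnext might narrow the version list like this sketch, where the version numbers are made up and a fixed-string grep stands in for the real regular-expression matching:

```shell
# Filter package versions whose SemVer pre-release tag contains "vnext".
# Versions are illustrative; grep stands in for the real regex matching.
versions="2.3.0
2.3.1-vnext0001
2.4.0-beta1
2.3.2-vnext0002"

matching=$(printf '%s\n' "$versions" | grep -e '-vnext')
printf '%s\n' "$matching"
```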

On the create release page, you can select which branch to use, and it will control which packages appear:

Selecting a branch

A more complicated branch definition might be:

More complex branch

Here, the project has multiple steps. For the Acme.Util step, we have one rule, and we have a different rule for the other steps. We're using tags to show only packages with the -vnext SemVer tag.

Our goal with this feature is to keep it simple - branches really only matter from the perspective of the release creation page (and Octo.exe create-release). If you need different environments or deployment steps depending on the branch, it's probably best to clone a project.

So, what do you think? Would such a feature be useful to you? Let us know in the box below.

Octopus vs. Build Servers

In my previous life, I used CruiseControl.NET, and later TeamCity, as my deployment automation tool. The goal was to automate deployments, and since my build server was already compiling the code and running tests, it felt natural to make it deploy too.

I extended the build process by adding custom scripts that would deploy the software. Sometimes they'd be short scripts running RoboCopy and some XML transforms. Other times they'd be reams and reams of PowerShell with agents listening on remote machines; it all depended on how complicated the actual deployment process was. It was these experiences that led to Octopus being built.

That said, it's a question that still comes up once a week: why should I use Octopus when I already have a CI server?

In fact, I asked it myself when writing some documentation on integrating Atlassian Bamboo with Octopus Deploy. Bamboo even has deployment concepts baked in; why would a Bamboo user need Octopus?

Here's why:

Different focus

Build servers usually contain a number of built-in tasks with a focus on build. TeamCity and Bamboo come with a bunch of build runner types that are handy for building: they can call the build engines of many platforms (such as MSBuild), can deal with build-time dependencies (NuGet, Maven), have many unit test runners (NUnit, MSTest), and can run and report on code quality and consistency (code coverage, FXCop).

When you look at the task runner list in your build server from the perspective of deployments, you'll notice a difference. Where's the built-in task to configure an IIS website? Or install a Windows service? Change a configuration file? Or the other 54 (as of today) deployment-oriented tasks that are available in the Octopus library?

Even in Bamboo, which has a whole deployment feature built-in, the deployment tasks available are pretty much limited to running scripts:

Deployment tasks in Bamboo

The reality is, when deploying from a CI server, the closest thing to built-in deployment tasks you'll usually find is the ability to run a script. Which leads to the next issue:

Remote deployments

When a build server executes a job, the build agent isn't usually the same as the machines that are ultimately being deployed to. This is because unlike builds, deployments involve coordinating activities across many machines.

When a deployment touches multiple machines, it's up to you to figure out how to perform the entire deployment remotely, using tools like XCOPY, SSH or PowerShell remoting over the network. You can then wrap this in a for-loop to iterate over all the machines involved.

This brings up a number of challenges:

  1. Do you have the right permissions on all of those machines?
  2. Can you make all the necessary firewall changes?
  3. What if the machines aren't on the same Active Directory domain?
  4. How will you get the logs back? If something goes wrong, how will you diagnose it?
  5. Is the connection secure?

I lost so much time solving these problems on different projects in the past, and it's precisely why we invested so much time in designing the Tentacle agent system:

  • Tentacles don't need to be on the same AD domain
  • We use SSL and two-way trust for security
  • Tentacles can either listen or poll - it's up to you
  • When Tentacles run a command, they do so as a local user, so you can perform a number of activities that might not be possible via the network

Automatic parallelization

Related to the above, in all but the simplest deployments, there are usually multiple machines involved in a deployment: perhaps a couple of web servers, or a fleet of application servers.

A good deployment automation tool solves the problem of running all of the deployment activities in parallel across all of those machines. In Octopus, you register machines and tag them with a role, then you specify packages to be deployed to machines in a given role. Octopus ensures the packages are on the machine, then deploys them, either in parallel or rolling.
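The difference between the two modes can be sketched with plain shell job control; the machine names and the deploy_to stand-in below are hypothetical, not Octopus internals:

```shell
machines="web01 web02 web03"              # hypothetical members of a "web" role

deploy_to() { echo "deploying to $1"; }   # stand-in for the real per-machine work

# Parallel: start every deployment at once, then wait for all of them.
for m in $machines; do deploy_to "$m" & done
wait

# Rolling: one machine at a time, stopping at the first failure.
for m in $machines; do deploy_to "$m" || break; done
```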

The distinction is obvious when you compare the flat build log that comes from a CI tool like Bamboo or Jenkins:

Flat build log from Bamboo

Against the hierarchical deployment log that comes out of Octopus:

Deployment log in Octopus

Octopus isn't using the nesting for display purposes (like TeamCity): we use it because each of those steps is running at the same time. There simply is no other way to display it. If you have a script that needs to run on 10 machines, you'll see the log messages as it executes on all 10 simultaneously, without them being jumbled together.

A customer last Friday used Octopus to deploy an application to over six hundred machines. Doing this with a build server would have meant writing all the coordination scripts themselves, and the logs would have been impossible to decipher.


Variable management

All build servers have some level of build parameter/variable support, but the ability to scope them is very limited. If you have multiple target servers, and need different values for a setting on each one, how do you manage this?

In Octopus, we manage variables with different scopes, and can handle passwords and other sensitive settings securely. And as you would expect, we take snapshots between releases, and audit the changes made. Best of all, you can change configuration (which is really an operational concern) without having to rebuild the application.

Deployment can be messy

When your build server encounters an error - a failed unit test, or bad code, what does it do? Ideally, it fails fast. You wouldn't want the CI server to pause, and ask what you want to do next.

When you're performing a rolling deployment of a web site across 10 web servers, and server #7 fails, fail fast is probably the last thing you want. Perhaps you'd like to investigate the problem and retry, or skip that machine and continue with the rest. That's why Octopus has guided failures.

In fact, a human intervening might be something that you require for all deployments, not just when something goes wrong. In Octopus, you can add manual intervention steps to a deployment, which can run at the start (for approval), at the end (for verification), or even half-way through (some legacy component that just has to be configured manually).


All of these issues - the built-in tasks available, remote execution and infrastructure problems, parallelism, and failure modes - point to a conclusion: build and deployment are fundamentally different beasts. In fact, the only characteristics that they really share is that they both sometimes involve scripting, and the team needs visibility into them.

As developers tasked with automating a deployment, this distinction isn't obvious at first. Build and CI servers have been around a long time, and we're familiar with them, so it's natural to imagine how they can be extended to deployment. Even though we've been working on Octopus for years, when designing new features for Octopus I still find myself looking at CI tools. It's only when you are very far down the rabbit hole, with custom scripts coordinating an increasingly complicated deployment, that these differences become painfully obvious.

Build and deployment are different, but equally important. The rule of "best tool for the job" should apply to both of them. Our goal is to focus on being the best deployment tool; we'll leave the problem of building to the build servers :-)

Step template contest winners

To help fill the Octopus Deploy library, we ran a short competition.

All you had to do was submit a pull request for a step template; if it was accepted, you went into the draw. At the end of June I would pick 3 winners at random, and we'd mail a mug to each of them.

To select the 3 winners, I took the list of contributors to the GitHub repository, and wrote a short PowerShell script to select 3 at random. I put the script in the Script Console on our demo server.

Script console

The script ran, and you can view the output to see who won. Isn't it great to have a central, persistent, audited place from which to run PowerShell scripts? :-)
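The script itself isn't reproduced here, but a random draw like this takes only a few lines of PowerShell. The sketch below is illustrative, not the actual script that ran, and the contributor names are placeholders:

```powershell
# Placeholder list standing in for the real contributors to the library repository
$contributors = "alice", "bob", "carol", "dave", "erin", "frank"

# Get-Random with -Count picks that many distinct items from the pipeline input
$winners = $contributors | Get-Random -Count 3

$winners | ForEach-Object { Write-Output "Winner: $_" }
```

Because `Get-Random -Count` never picks the same pipeline item twice, there's no need to de-duplicate the result.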

For everyone who submitted a template but didn't win, we'll have something for you too. Happy deployments!

What's new in Octopus 2.5

Available as a pre-release, Octopus Deploy 2.5 is on the shelves (so to speak). Closing out just under 50 GitHub issues, let's see what's under the hood.

Scheduled Deployments

Are you one of those systems administrators who has to get up at ungodly hours of the night to run a deployment? Well, this is the feature for you. You can now stay in bed while a scheduled deployment runs for you.

Scheduled Deployments

As you can see, you can now schedule when deployments run.

Scheduled Deployments

When you submit, the task page shows that the deployment is Queued and when it will run.

Scheduled Deployments

You can also view the queued deployment on the project dashboard.

Organised task page, including filtering

Previously, it was pretty hard to find a specific task on the task page, which just offered a list of all tasks by date. Not any more. You can now filter the list by environment, project, or activity type. All active tasks are grouped at the top of the page, with a filterable, searchable list of completed tasks below.

Filtered Task List

Octo.exe now shows progress

You can now pass the --progress argument to Octo.exe when deploying, and it will write the deployment log to the console as the deployment runs.

Octo.exe showing progress
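An invocation with the new flag might look like the following. The server URL, API key, project name, and version are placeholders, and the flag names are from memory, so check Octo.exe's built-in help for the exact options on your version:

```powershell
# Hypothetical example - substitute your own server, API key, project and version
Octo.exe deploy-release --server http://your-octopus/ --apiKey API-XXXXXXXX `
    --project "Web Site" --version 1.2.3 --deployto Production `
    --progress
```

Without --progress, Octo.exe queues the deployment and returns; with it, the deployment log streams to the console so your build server's log captures the whole deployment.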

Step Template updates

The first of the step template changes: you can now view where a step template has been used, and whether it is up to date in the projects it has been used in.

Step template usage

We also updated the step template variables. They now allow for custom field types and typed parameters.

Step template variables

Deployments no longer attempted on out-of-date Tentacles

Previously, a deployment would continue even if a Tentacle's version did not match the Octopus Server's. Now the deployment stops with an error message instead.

Out of date tentacle

Breaking Changes!

There are two breaking changes in this release to be noted.

Breaking Change: Internal NuGet Repository no longer watched

For those of you who use the built-in NuGet repository: we no longer watch the folder for changes, as this was causing some locking issues. If you use the API or nuget.exe to push files to the repository, this change should not affect you. If you are using XCOPY, it will: using the API, nuget.exe, or an external feed will be the only options. We will, however, refresh the repository's index on every restart of the Octopus Server service.
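For example, rather than XCOPYing a .nupkg file into the folder, you could push it with nuget.exe. The feed URL and API key below are placeholders - check your Octopus Server for the built-in repository's actual push endpoint:

```powershell
# Push a package to the repository over the API instead of copying files.
# The -Source URL and -ApiKey value are hypothetical placeholders.
nuget.exe push MyApp.1.0.0.nupkg -Source http://your-octopus/nuget/packages -ApiKey API-XXXXXXXX
```

Pushing through the API means the repository index is updated immediately, without relying on a folder watcher or a service restart.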

Breaking Change: Unique SQUID required for all machines used in a deployment

If a deployment detects two or more machines with the same SQUID, it will be stopped until the duplicate machines have been removed from the deployment targets.

Check it out now!

We have updated the Octopus Live Demo server to 2.5, so go and take a peek!

Featured Step Template: HTTP - Test URL

Whether it's to warm up a web server or simply to smoke test a fresh deployment, hitting a URL and verifying the result is a pretty common thing to see in the deployment processes for web apps.

While it's always been possible to write a PowerShell step for this with Octopus Deploy, Sporting Solutions have been kind enough to package this up in an easy-to-reuse template for the Octopus Deploy Library.

Since this is the first time we've featured a contributed step template on the Octopus Deploy blog, let's walk through how to bring this into your Octopus projects.

Importing the template

The first thing to do is visit the library and find the template. You can find the HTTP - Test URL template here.

Library view

The library itself gives a quick overview of the template so that you can determine whether it suits your requirements; the important thing on this page, unsurprisingly given its prominence, is the big green "Copy to Clipboard" button.

Press it now to retrieve the JSON document describing the template.

Next, sign in to your Octopus Deploy server, and visit Library > Step templates. At the top right of the page there's an Import link - click it, and paste in the JSON you just copied from the website.

Import dialog with JSON included

Once you've hit Import the template will be displayed - you can now tweak the parameters or script if you like, but otherwise, the template's ready to use in your deployment process.

Template in library

Using the template

Assuming you have a project with a deployment process that installs a web app, the first step is to open the Project > Process tab, and select the Add step button. At the bottom of the list that is shown, you'll see the name of the template that was just imported.

Process view

When you click the name of the template to create a new step, you'll be presented with input fields for each of the template's parameters. In the case of HTTP - Test URL, that's the URL to test, the status code that's expected, and a timeout value.

Step view

Step template parameters like the URL in this example work just like regular Octopus step parameters - they can be bound using #{Variable} substitution, and each parameter itself is exposed as a variable that can be used elsewhere, for example in PowerShell scripts.
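Under the hood, a step like this boils down to a short PowerShell script. The sketch below is illustrative of the idea, not the template's actual script, and the variable names are made up:

```powershell
$url = "http://example.org/"   # parameter: the URL to test
$expectedStatus = 200          # parameter: the expected HTTP status code
$timeoutSeconds = 60           # parameter: the request timeout

# Note: in Windows PowerShell, Invoke-WebRequest throws on 4xx/5xx responses,
# so a 404 would land in the catch block rather than the status comparison.
try {
    $response = Invoke-WebRequest -Uri $url -UseBasicParsing -TimeoutSec $timeoutSeconds
    $status = [int]$response.StatusCode
} catch {
    throw "Request to $url failed: $_"
}

if ($status -ne $expectedStatus) {
    throw "Expected status $expectedStatus from $url but got $status"
}

Write-Output "OK: $url returned $status"
```

Because each parameter is also exposed as a variable, the URL could equally be bound to something like #{WebSiteUrl}, defined differently per environment.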

The results!

Finally - and this is the most exciting part :) - when you create and deploy a new release of your project, the step will run, and the results will be printed to the deployment log.


We really like the HTTP - Test URL template because, though it's very simple, it's a great example of what can be done with step templates and the Octopus Deploy Library. If you haven't browsed the library yet, you might be surprised to find out just how many templates are already available.

If you're already writing step templates and want to share those with the Octopus community, it's not too late to enter our library competition; if you submit a template and it's accepted before the end of June, you could be the proud owner of an Octopus Deploy mug!

Happy deployments!