Roadmap published

When I was first building Octopus in my spare time, I used Trello to manage all the work items. I could keep most of the small issues in my head, so Trello was mainly used for the bigger issues. It made for a good way for everyone to see what I planned to work on - a nice public roadmap.

When things started getting serious, Trello wasn't scaling, so we switched to GitHub. GitHub tracks the issues we're working on in the near future, but we no longer have a nice way to view the really big features coming in the next few months. Talking to some people at the ALM Forum last week, it became clear that people missed being able to see our public roadmap. I missed it too.

So, without further ado, I present to you: the public roadmap for Octopus Deploy (for the next 5 months). If there's a feature there that you really want, make sure you vote for it on UserVoice!

Heartbleed and Octopus Deploy

The Heartbleed bug in OpenSSL has been big news this past week. The best overview of the issue I've seen so far has been by Troy Hunt: Everything you need to know about the Heartbleed SSL bug.

There are two places where SSL is used in Octopus Deploy:

  • For Octopus server/Tentacle communication. We use the SslStream class that is built into the .NET framework, which relies on SChannel rather than OpenSSL.
  • For the Octopus server web interface, which we allow you to host over HTTPS. This uses HTTP.sys, the HTTP server component built into Windows (the same one IIS uses), which again relies on SChannel rather than OpenSSL.

You can read more about SChannel and IIS in the context of Heartbleed. Suffice it to say, there's nothing in Octopus that relies on OpenSSL. Since this is an implementation issue in OpenSSL and not a core problem of the SSL protocol, and no one has reported issues in Microsoft's implementation of SSL in SChannel, there doesn't seem to be any risk of Heartbleed in Octopus Deploy.

Structured Logging with Seq

On the first day of a new project, many teams install a CI server to build and test their application, and a deployment server like Octopus to get new versions into their test and production environments.

What's missing from this setup is a way for the team to monitor the application once it gets there. Seq is a project from Nick's company that just hit 1.0 and aims to fill this gap, improving visibility smoothly all the way from development to production.

There are powerful log collection tools out there already, but few are fitted to the Windows and .NET toolchain well enough that they can be installed in a few minutes on "day 1". Seq comes with an MSI for Windows Server, and NuGet packages that integrate with the popular .NET logging frameworks.

If you're writing structured logs with a framework like p&p's SLAB or Serilog, then Seq has some features that make it especially nice to filter and correlate events, too.

Structured logging with Seq

My favorite thing about Seq is that it makes collecting events about what happens in an application as easy as writing to a log. It means that during development, you can use structured logging to log information about what is happening, without worrying too much about how you're going to use it later. Then just turn on Seq and suddenly those logs become far more useful and accessible than log files on disk. It's a very worthwhile tool.

Seq comes in several supported professional editions as well as a free developer edition you can download from the Seq website. Hopefully you'll find structured logging and Seq a useful tool in developing and operating your applications!

Automatically provisioning Amazon EC2 instances with Tentacle installed

The Tentacle agent used by Octopus to automate deployments has long supported configuration via command line. It's possible to automatically download the MSI, install it, configure the Tentacle instance, and even register it with an Octopus server, all via the command line.

When provisioning a Windows server via Amazon EC2, you can pass a PowerShell script as User data. This will get executed when the EC2 instance starts for the first time:

Provisioning the EC2 instance

I've put together an example script based on our instructions on Automating Tentacle installation. You can find the script below. Paste it between the <powershell> elements in the User data field, then wait for the machine to start. Once it has started, the machine will register itself with your Octopus server:

It's alive!

If you hit any problems, remote to the machine and look at the following two files:

  • C:\TentacleInstallLog.txt
  • C:\Program Files\Amazon\Ec2ConfigService\Logs\Ec2ConfigLog.txt

Here's the PowerShell script:
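It looks roughly like this. This is a sketch: the download URL, server URL, API key, environment name and role name are all placeholders for your own values, and the exact Tentacle.exe arguments may vary between Octopus versions, so check the Tentacle command-line documentation against your installed version:

```powershell
# Sketch only: the download URL, server URL, API key, environment and role
# below are placeholders - substitute your own values before using this.
Start-Transcript -Path "C:\TentacleInstallLog.txt"

# Download and silently install the Tentacle MSI
$tentacleMsi = "C:\Octopus.Tentacle.msi"
(New-Object System.Net.WebClient).DownloadFile(
    "http://your-download-location/Octopus.Tentacle.msi", $tentacleMsi)
Start-Process "msiexec" -ArgumentList "/i $tentacleMsi /quiet" -Wait

# Create, configure and register a new Tentacle instance
$tentacle = "${env:ProgramFiles}\Octopus Deploy\Tentacle\Tentacle.exe"
& $tentacle create-instance --instance "Tentacle" --config "C:\Octopus\Tentacle.config" --console
& $tentacle new-certificate --instance "Tentacle" --console
& $tentacle configure --instance "Tentacle" --home "C:\Octopus" --app "C:\Octopus\Applications" --port 10933 --console
& $tentacle register-with --instance "Tentacle" --server "http://YOUR-OCTOPUS-SERVER" --apiKey "API-XXXXXXXXXXXXXXXXXXXXXXXXXX" --environment "Production" --role "web-server" --console
& $tentacle service --instance "Tentacle" --install --start --console

# Allow the Octopus server to reach the Tentacle listen port
netsh advfirewall firewall add rule name="Octopus Tentacle" dir=in action=allow protocol=TCP localport=10933

Stop-Transcript
```

The Start-Transcript call at the top is what produces the C:\TentacleInstallLog.txt file, which is handy for troubleshooting.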

When you set up your EC2 instance and assign a security group to it, make sure the Tentacle listen port that you specify (10933 by default) allows TCP traffic. The script automatically adds the exception to Windows Firewall, but you need to do the same for the AWS firewall.
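If you manage security groups from the command line, the AWS CLI can add that rule; the group name and CIDR range here are placeholders for your own values:

```powershell
# Allow inbound TCP on the Tentacle listen port (10933) for a security group.
# "my-security-group" and the CIDR range are placeholders.
aws ec2 authorize-security-group-ingress --group-name my-security-group --protocol tcp --port 10933 --cidr 203.0.113.0/24
```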

For more information on EC2 instance provisioning and User Data, see the Amazon EC2 documentation.

My quest to reclaim the backlog

I've just spent the last 9 hours taming our backlog, which was awesome fun. Not! For inspiration, I re-watched this Business of Software video by Intercom.io's Des Traynor on the subject of managing feature requests. Highly recommended:

Backlog fatigue

Before I started, we had around 280 open issues in our GitHub issues list, some of them assigned to a backlog milestone, others posted by users which weren't assigned to a milestone. Most of these were suggestions, or features that we thought would be great to do someday, so we just threw them into GitHub. When suggestions came up in discussions or on the forums, we'd say:

Thanks! I can see why that would be a useful idea. I've added it to the backlog.

It builds up so quickly! And I'll be honest: aside from items in the current milestone, I just avoided looking at any of the other issues - there were just too many. The problem with backlogs is that once they reach a certain size, they suddenly become infinite. Once something has been in the backlog for months, and it gets skipped over every time you do an iteration plan, it's just never going to get done, so it gets bumped and bumped again. Yet every time you plan the next iteration, there they are, just another thing to read and skip over.

I really wanted to have a high-level roadmap of where we're going for the next 6 months, as well as a detailed list of things we're doing in the next few weeks, but making sense of any of that was flat out impossible when the backlog was in this state!

Reclaiming the backlog

I took a deep breath and decided: I will hide from my backlog no longer! I made a plan and drew a sharp line between things we are absolutely going to do, and suggestions. Here are the rules I came up with:

  • If it's something we absolutely intend to do in the next two months, it is in GitHub issues (the backlog)
  • Everything else goes to UserVoice

9 hours later, I emerged, tired and weary, but ultimately victorious:

  • There are 21 open issues in our current milestone, planned to be done next week for pre-release
  • There are 56 open issues in the next milestone
  • There's a small handful of unassigned issues that I'm waiting on replies to
  • That's it. No more!

Items that were so small that I could fix them in the code faster than responding to them got fixed and pushed right away. For everything else, the slow part was either searching for an existing matching item, or copying and pasting the suggestion into UserVoice and then typing a personalised reply to say it was being moved.

In fact, I even changed the way we use milestones. Previously we had a milestone for the current iteration (e.g., 2.4) and a milestone for the backlog. Now, we just have a 2.4 and a 2.5; anything else will be on UserVoice.

Conquering the fear

I was actually quite scared of doing this. Suggestions are hugely important to us and they really do drive the direction of the product, and I didn't want anyone to think their suggestion isn't appreciated, or that we don't care about what they need from the product. We do care.

Yet the truth is, we already have enough suggestions that, if we just picked the most popular ones, we'd have at least 6 months of work ahead of us, even before looking at the suggestions with just one or two votes. It would be dishonest to keep everything in the backlog, as if it were all going to get done next week.

Why UserVoice?

Instead of having 200 suggestions in GitHub, we now have 200 suggestions on UserVoice. Does it really make any difference? Well, I think it does:

  1. The Issues list implies an item will be done soon. UserVoice implies it might be done. I think it's more honest.
  2. UserVoice issues can be voted on. That makes it easier for us to see what's important.
  3. I need to read every issue on GitHub every time we plan a milestone. By contrast, with UserVoice, I can just read them when they come in, or only when they start to become popular.

Now, I wrote last week about how votes aren't everything. A lot of the items on UserVoice have pretty good existing workarounds, even some of the highly voted ones. We're still going to give extra priority to things that we think solve a real pain point and don't have a good workaround.

Balancing strategy vs. demand

Some companies do a terrible job of managing suggestions. Not to name names, but Microsoft has a tendency to leave items on Connect for years, only to close them as "by design" when they are clearly bugs. We don't want to do that - every iteration we need to deliver improvements that people want and need.

Of course, we also have a vision for where the product needs to go in the long term, and it's important to balance that with suggestions. We've put together a high-level roadmap for the next 6 months, which I'll publish this week. Each item in the roadmap also exists on UserVoice. Whether they get any votes or not, we will do them, but if some seem very popular we'll let that influence when they get done.

Ideally, we release new versions of Octopus every 2-3 weeks, and I'd like to see each release contain a healthy mix of:

  • Bug fixes (we'll always aim to do them first)
  • Community suggestions
  • Our roadmap items

We're hiring!

We're lucky to have a new developer starting in a couple of weeks (I'll announce it then), but we're looking to grow even more. If you're an Awesome .NET Developer, get in touch! I think we have enough exciting work for you to do :-)

Octopus Deploy vs. Puppet/Chef

On the support site, Dmitry asks:

Hi, how can we argue that Octopus Deploy is a way much better than Puppet or similar tools?

I've been kicking our system administrators for several years, trying to make them to setup automatic deployments of services. In the end I've given up. Then I've heard about Octopus Deployment and did everything myself using Octopus. We're quite happy with Octopus, we've bought a license and I don't think we need anything else. But recently our system administrators have told me that they're going to setup automatic deployments using Puppet and that it looks like we'll have to abandon our Octopus deployments and move to Puppet.

Octopus vs. Puppet

I'm not going to argue that Octopus is "better" than Puppet or Chef; instead, I'm going to argue that Octopus and Puppet are different. In fact, I don't think there's any reason to choose.

Puppet and Chef are configuration management tools. They work by describing the state that a system should be in, and they take steps to ensure systems are in that state.

For example, imagine you were managing 30 web servers. Rather than using remote desktop to configure them all, installing Windows components, enabling features, and so on, you could use Puppet or Chef to specify:

  • All web servers should have IIS configured with the Windows Authentication module enabled
  • MSMQ should be installed and running
  • A folder at C:\Logs should exist and be writable by a given user account

Octopus Deploy, by contrast, is a release automation tool. Octopus doesn't work on "desired state"; rather, it's an orchestration tool that runs steps in a very specific order. It's a lot closer to tools like Nolio or UrbanDeploy (sans the price tag or army of consultants required to install it :-)) than it is to Puppet or Chef.

Octopus is ideal for deployments that look like this:

  1. Redirect load balancer to a "down for maintenance" site
  2. Remove web servers from load balancer
  3. Stop application servers
  4. Backup and upgrade the database
  5. Start application servers
  6. Add web servers back to load balancer

Or this:

  1. Send a message to put the application into "read-only" mode
  2. Apply database migrations
  3. Rolling deployment for all web servers:
    1. Remove from load balancer
    2. Update web application to latest version
    3. Add back to load balancer
  4. Notify and pause while someone inspects the system to ensure everything is OK
  5. Send a message to re-enable write mode

Puppet/Chef work well if you can describe your automation in terms of a checklist, and order/time isn't important. Octopus works well when order matters. You can't update the web application before migrating the database, for example, or there will be chaos.

Now, you could have Octopus use PowerShell to perform your infrastructure provisioning and configuration management tasks, and many people do. But for infrastructure automation and provisioning of new systems, I'd agree that Puppet, if you already have the skills, is the better choice: its workflows are designed for exactly that.

Conclusion: use both

If your system administrators find that tools like Puppet work best for them, then by all means they should use Puppet. It's a great tool designed for the kinds of automation that system administrators might need. When it comes to frequently deploying updates to the applications you are building, or allowing QA to self-service deployments, Octopus is a much better fit for that scenario.

Use Puppet to provision the infrastructure and ensure everything is ready, and use Octopus for the continuous delivery of your applications on top of those systems.

If the development team prefer one tool, and system administrators prefer another, it's probably because the different tools are better suited to their specific scenarios, goals and constraints. One isn't better than the other; they are just designed for different things, and for true productivity we should leverage that.


What do you think are the major differences between the two sets of tools? When would you use Puppet/Chef, and when would you use Octopus? Let's help Dmitry out.

Oh, one last thing: if you use Octopus, you don't have to learn Ruby and dress like a hipster. Neener neener! :-)

Feature prioritization: do votes trump all?

On our UserVoice site, feature requests and suggestions usually sit somewhere between two extremes:

  1. Enable me to do something that is currently impossible to do
  2. Make it slightly easier to do something that is already currently possible (if annoying) to do

As a software publisher, we have limited resources, and we can't act on every single suggestion. Prioritising which suggestions to act on is important, but it can often seem random, especially when items with very few votes are implemented before items with many votes. Should prioritising features be driven purely by votes? Or should it be a judgement call?

When deciding which suggestions to act on, I think that there are a number of factors to consider:

  • How many people want the feature, and how much do they want it (votes)?
  • Are there acceptable workarounds?
  • How difficult or far from ideal are the workarounds?
  • How difficult is the feature to implement?
  • What are the ongoing costs of supporting the feature?
  • How long will we keep benefiting from the feature?

Three examples

To illustrate, here are three suggestions on the Octopus Deploy UserVoice site:

  1. Schedule deployment of a release (18 votes)
  2. Select multiple environments to deploy to (40 votes)
  3. Bring back support for Internet Explorer 8 (3 votes)

When you deploy a release in Octopus, the deployment currently executes immediately. The first suggestion would mean that users could specify a time for the deployment to start. It might look something like this:

Choosing when to deploy

The current workaround would be either to come into the office at 2 AM, or to have a scheduled task (which may or may not actually run at 2 AM) that uses our API or command line tool to trigger a deployment.
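That scheduled-task workaround might look something like this. It's a sketch: the paths, server URL, API key, project and environment names are placeholders, and the exact octo.exe flags depend on the version you have installed:

```powershell
# Sketch: wrap an octo.exe call in a batch file, then schedule it for 2 AM.
# All paths, URLs, keys and names here are placeholders.
Set-Content C:\Deploy\deploy-acme.cmd `
  'C:\Tools\octo.exe deploy-release --server=http://your-octopus-server --apikey=API-XXXXXXXX --project="Acme Web" --deployto=Production'
schtasks /create /tn "Deploy Acme at 2AM" /sc once /st 02:00 /tr "C:\Deploy\deploy-acme.cmd"
```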

The second suggestion is to make it so that instead of selecting one environment to deploy to, we'd allow multiple to be selected:

Selecting multiple environments to deploy to

The workaround for this is simply to deploy to one environment, come back to this screen, choose the next environment, and deploy again; or to use our API or command line tool and specify multiple environments to deploy to.
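With the command line tool, the multi-environment case is a single call, since the environment flag can be repeated. Again, the path, server URL, API key, project, version and environment names below are placeholders:

```powershell
# Deploy the same release to two environments in one octo.exe call.
# The path, server URL, API key, project and environment names are placeholders.
C:\Tools\octo.exe deploy-release --server=http://your-octopus-server --apikey=API-XXXXXXXX `
  --project="Acme Web" --version=1.3.1 --deployto=Staging --deployto=UAT
```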

The third suggestion is self-explanatory - bringing back support for IE8. Currently we only support IE9 or above.

Which of these should we do first?

The first two suggestions are probably about the same difficulty - either of them could be done pretty quickly. They are also very isolated features that aren't going to affect many other features in the product. Once we build them, and add some tests to make sure they work, these are the kinds of features we can forget about. I like these kinds of features.

"Multiple environments" has more votes than "Scheduled deployments". But the workaround for "Scheduled deployments" is a lot less ideal than the workaround for "Multiple environments". So on balance, we'd probably implement "Scheduled deployments" first, even though it has half the votes.

What about IE 8 support?

This is an interesting one. First, it's very difficult; we use a number of libraries that don't support IE8 nicely either, so we'd have to fix that. Then, we'd have to test that all (or most) of the application was usable before we could say that we "support" it.

But a feature like this comes with ongoing costs: any time we implement a new feature, we're also going to have to test it with IE 8. Unlike the other two, we can never "forget" about this feature. And it's a lot easier to write an integration test ensuring that a scheduled deployment happens at the right time than it is to automate browser acceptance testing.

The workarounds for IE8 are also interesting: they exist (upgrade IE, or get a different browser), but for reasons largely driven by cost (or politics), they aren't deemed acceptable for all customers, easy though they are. If I'm being honest, I find myself having less sympathy when a workaround exists but is rejected for these kinds of reasons than when the workaround is just difficult (coming into work at 2 AM, for example).

And then there's the fact that IE8 market share is shrinking; we could spend a week adding support for it, plus additional time testing each feature we add/responding to bug reports about IE8 problems, and then in 6 months no one will need it anyway.

For this reason, I decided to mark the IE8 suggestion as declined. It's not just lower priority than the other two features; it's actually a negative feature due to the ongoing support costs and eventual obsolescence. I could leave it open to avoid hurting the suggester's feelings, but the reality is, even if it had 50 votes, we're just never going to do it.

I'd love to know: do you agree with this reasoning? What other factors should be considered when prioritising features? Should popularity of a suggestion be the key driver?

Explore a live Octopus Deploy server

It can be difficult to get a sense of how a product like Octopus Deploy works without being able to interact with it. We have ten short videos that show how to set things up, but that's not quite the same as being able to click around and explore a real server.

To help, we've created a Live Demo server:

View the live demo server

The demo server runs the latest version of Octopus, and has two projects which are deployed to seven Tentacles. You can sign in as a guest to view the server and browse around. We've made the Guest account a read-only Octopus Administrator; you can view anything, but you can't actually change the system.

Octopus dashboard

There's also a demo TeamCity server which is configured to compile code and deploy to Octopus. Every hour it triggers a build and deploy, and every week it promotes a release to the Acceptance environment.

The demo projects highlight a few different features, including a rolling web application deployment, library variable sets, and configuring a Windows Service. I hope you'll find it a useful reference server!

What's new in Octopus 2.3

We just shipped a pre-release of Octopus Deploy 2.3. Since 2.0 shipped, we've been getting into a nice groove, releasing new builds every couple of weeks. If you look at our release history, you can see that our last release was just two weeks ago, Octopus 2.2.

Below are the highlights of 2.3. I think you'll agree that we've been pretty busy in that short period of time!

Prompting for variables at deployment time

Sometimes you need to provide additional information to Octopus when you queue a deployment. Prompted variables allow you to define a variable, along with a label and help text, whose value will be provided by the user when creating the deployment.

Adding a prompted variable

When you click the Prompt link, you'll be able to configure details for the prompt:

Configuring the prompt settings

At deployment time, the prompt will appear on the create deployment page. If you provided a default value for the variable, this will be the default value in the text box:

Prompt values appear at deployment creation time

Note that prompted variables can be scoped like other variables; so you could use fixed values for one environment, and a prompted value for production deployments. Prompted variables can also be marked sensitive, in which case a password box will appear.

Prompted variables can be used in scripts and configuration just like any other variable:

Referencing prompted variables
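In a PowerShell deployment script, for example, a prompted variable is read the same way as any other variable, via the $OctopusParameters collection. "MaintenanceMessage" here is a hypothetical prompted variable name:

```powershell
# Read a prompted variable from the Octopus variable collection in a
# deployment script. "MaintenanceMessage" is a hypothetical variable name.
$message = $OctopusParameters["MaintenanceMessage"]
Write-Output "Displaying maintenance banner: $message"
```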

And sensitive prompted variables will be masked just like other variables.

Task output when using prompted variables

Deploy to specific machines

Sometimes you might add a new machine to an environment, and you need to redeploy a release to that machine, but don't want to affect the other machines in the environment. In Octopus 2.3 you can now select specific machines to deploy to:

Deploying to specific machines

Task output "Interesting" mode

Previously, while a task was running you had to click "Expand All" continuously to keep seeing output as new log nodes were added (unlike a build server that logs output sequentially, Octopus does a lot of things in parallel, so the log output is hierarchical and multiple nodes produce log messages at the same time).

We've now made the Expand All/Errors/None links "sticky" - if you expand all, and new nodes are added, they'll be automatically expanded too. We've also created a new mode, called Interesting mode, which automatically expands nodes that are either running or have failed. This is the default mode, and it makes for a nice experience - as soon as you view task output, you automatically see the things that are probably of most interest to you.

Interesting mode

Audit log filtering

Audit logs can now be filtered by person, project or date range:

Filtering the audit log

Custom expressions in package IDs and feeds

This one is easier to explain with pictures. You can now define steps in your deployment process like this:

Defining a step with bound package details

With variables like this:

Variables for package binding

And this will still work:

Creating a release

And so will this:

Viewing the release

This makes certain workflows like using different feeds for different environments much easier, but I'll blog about that later.

Why is my deployment queued?

Sometimes when you execute a deployment, it may stay in the "Queued" state. The reason is usually that another deployment is already running for that environment/project combination, but it can be hard to work out why.

To help, we now show a list of tasks that the current task is waiting for before it can be executed:

Stuck in the queue

Template-based file transformations

Nick already blogged about this feature. I think it's pretty cool!

The Cancelator

In previous versions of Octopus, the Cancel button on a running task was more of a suggestion than a command. For example, let's say I had this script:

Write-Output "Sleeping for 1 second..."
Start-Sleep 1000
Write-Output "Done!"

Oops! Start-Sleep assumes I mean seconds, by default, not milliseconds. Now I'll be waiting forever for my deployment to complete. Argh!
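(For the record, the fix is to be explicit about the unit:

```powershell
Write-Output "Sleeping for 1 second..."
Start-Sleep -Milliseconds 1000   # or equivalently: Start-Sleep -Seconds 1
Write-Output "Done!"
```

But suppose the buggy version is already running.)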

In prior versions, clicking Cancel on the task wouldn't help - Tentacle would still wait for the script to complete before cancelling the rest of the actions. But in Octopus 2.3, we'll now terminate the running PowerShell process:

Cancel a PowerShell script

This is a much nicer experience because it means cancel now actually works when you have a hanging task. On the other hand, you'll have to be a little more careful when using it!

Better task output and dashboard performance

The dashboard and task output page got a lot of attention. Previously, the task output would freeze up with a few hundred lines of output, and the dashboard would freeze intermittently when you had many projects/environments. Both of these issues have now been addressed and they should feel much snappier!

We also addressed some other known bugs and other performance issues this release. Check it out and let me know if you hit any problems. Happy deployments!

Variable substitution in files with Octopus 2.3

Octopus has rich support for variables that can be scoped to specific machines, environments and roles. These can also be substituted into XML App.config and Web.config files at deployment time, by matching variable names against the keys of <appSettings> entries and the names of <connectionStrings> entries.
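For example, an Octopus variable named DefaultConnection would replace the value of a matching connection string entry; the names and connection string here are purely illustrative:

```xml
<!-- An Octopus variable named "DefaultConnection" replaces this value
     at deployment time. Names and values here are illustrative. -->
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Server=localhost;Database=App;Trusted_Connection=True" />
</connectionStrings>
```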

In Octopus 2.3, we're extending this to other kinds of files, using the same variable syntax that works throughout the application.

Our sample app is a feed monitoring service that keeps a list of target servers in a JSON file, Config.json:

{
  "Servers": [{
    "Name": "localhost",
    "Port": 1567
  }]
}

When the app is deployed, we'd like to configure the list based on the target deployment environment. The servers in the UAT environment are called FOREXUAT01 and FOREXUAT02.

Variables

We've implicitly set up a ServerEndpoints collection with two items in it, one for each of the servers in the UAT environment.

We can iterate through these in a template file that we'll run at deployment time. So that it doesn't get in the way of development, we'll call it Config.json.template.

{
  "Servers": [
    #{each server in ServerEndpoints}
      {
        "Name": "#{server.Name}",
        "Port": #{server.Port}
      }#{unless Octopus.Template.Each.Last},#{/unless}
    #{/each}
  ]
}

You're probably familiar with the simple form of the #{Variable} syntax. This example also uses a few newer features added in Octopus 2.0, including #{each} and #{unless}.

With Config.json.template included in our app's NuGet package, the next step is to enable the deployment feature.

Enabling the feature

The feature is simple to configure, accepting a list of files that Octopus will perform variable substitution in.

Configuring substitutions

Since we used a different template name from the target file, we'll also add a snippet of PowerShell to replace the original Config.json with the contents of the template file.

PostDeploy
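A PostDeploy.ps1 along these lines is all it takes; this is a sketch that assumes both files sit in the root of the deployed package:

```powershell
# PostDeploy.ps1 (sketch): overwrite the development-time Config.json with
# the transformed template after variable substitution has run.
Copy-Item -Path "Config.json.template" -Destination "Config.json" -Force
```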

The template will now be run as part of our deployment process.

Execution

And the resulting JSON file is (with whitespace cleaned up a little):

{
  "Servers": [
    {
      "Name": "forexuat01.local",
      "Port": 1566
    },
    {
      "Name": "forexuat02.local",
      "Port": 1566
    }
  ]
}

Happy deployments!