June Community Roundup

Published on: 29 Jun 2015 in community by Damian Maclennan

It's been a busy month for all of our friends out there in internetland! Here's a summary in case you've missed any of it.

First up was Steve Fenton's eBook "Exploring Octopus Deploy: Deployment Automation for .NET". You've missed his free offer now, but it's a good read if you need a walkthrough of a 2.6 installation and project setup. Nice work Steve!

Next up is Manimaran Chandrasekaran's blog series on all things Octopus Deploy. Start here with his Octopus 3.0 roundup and keep reading. Thanks for all those posts Mani! Some good stuff in there!

Of course our good friends at Domain haven't been idle. Jason Brown, the resident "Cloudy Ops Poet", has a couple of great Octopus Deploy-related posts: one on doing Canary Deployments and another on using PoshServer to trigger scripts with webhooks from GitHub. Worth a read; I had no idea somebody had written a web server in PowerShell!

Just as I finished writing this, Jason also Tweeted this...

Friday deployments are no longer scary!

This post on using Gulp, TFS Build and Octopus by Chris Eelmaa was brought to our attention. It's worth a read if you're working with Gulp as part of your build process. I know of a few other people doing similar things, but Chris is the first person to blog about it!

Finally, if you're using Splunk in your organization, Brisbane local Matthew Erbs has written an add-on to stream in your Octopus Deploy logs and get some metrics. Very cool Matt!

Don't forget, if you're writing a blog post, doing a user group talk or anything like that, you can Tweet us @OctopusDeploy and we'll let the world know.

DDD Perth

We're going to be Silver sponsors of the DDD Perth conference. If you're in Perth you definitely want to be there; I love the DDD conference format and always have a good time.

Until next time

Phew, that's a lot! Keep an eye out for 3.0 hitting the full release milestone next month; we'd love any feedback on the pre-release if you'd like to play, and you've got a good bit of blog reading to catch up on!

The Octopus Deploy 3.0 time saver: delta compression

Published on: 19 Jun 2015 by Shane

One of the cool new features in Octopus Deploy 3.0 is delta compression. A typical scenario in Octopus Deploy is frequent deployments of small changes to packages. For example, you might push some code updates but all of your libraries are unchanged. In the past Octopus Deploy would upload the entire package to each Tentacle, regardless of how little had changed. With the introduction of delta compression, only the changes to your package will be uploaded to each Tentacle. That sounds great in theory but I was curious what kind of benefits delta compression would yield in practice.
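To make the idea concrete, here's a toy block-level delta in Python. This is purely illustrative: Octopus uses a real binary-delta algorithm, and the block size and function names here are invented.

```python
# Toy block-level delta (illustrative only; not the actual implementation).
BLOCK = 4

def make_delta(old: bytes, new: bytes):
    """Collect only the blocks of `new` that differ from `old`."""
    delta = []
    for i in range(0, len(new), BLOCK):
        chunk = new[i:i + BLOCK]
        if old[i:i + BLOCK] != chunk:
            delta.append((i, chunk))
    return delta

def apply_delta(old: bytes, delta, new_len: int):
    """Rebuild the new package from the old bytes plus the delta."""
    out = bytearray(old[:new_len].ljust(new_len, b"\0"))
    for i, chunk in delta:
        out[i:i + len(chunk)] = chunk
    return bytes(out)

old = b"library-code-v1 app-code-v1"
new = b"library-code-v1 app-code-v2"
delta = make_delta(old, new)
print(delta)  # only the changed tail is shipped to the Tentacle
```

Only the blocks that changed travel over the wire; the Tentacle reassembles the new package from its cached copy of the old one plus the delta.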

For this experiment I set up 2.6 and 3.0 servers with 20 Tentacles each. The Tentacles were located on the other side of the world so that network conditions would slow package acquisition, making it a significant part of the deployment. I performed 50 deployments of 110MB packages. Each package contained 10MB of changes.

The results:

Deployment times

2.6 took on average 7 minutes to complete a deployment and had a total run time of 6 hours 7 minutes.

3.0 took 6 minutes 23 seconds to perform the first deployment and then averaged 1 minute 30 seconds to calculate deltas and push them to the Tentacles, for a total run time of 1 hour 26 minutes.
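As a rough sanity check, the totals follow from the per-deployment averages (approximate, since the measured totals also include other deployment work):

```python
deployments = 50

# 2.6 ships the full 110MB package on every deployment
v26_total_minutes = deployments * 7                          # 350 min, ~5.8 h

# 3.0 ships the full package once, then deltas for the remaining 49
v30_total_minutes = (6 + 23 / 60) + (deployments - 1) * 1.5  # ~80 min, ~1.3 h

print(v26_total_minutes, round(v30_total_minutes, 1))
```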

That is a pretty amazing result. With Octopus Deploy 3.0 you will spend less time waiting for deployments. Delta compression also saves your bandwidth and frees your Octopus Server and Tentacle resources for more deployments!

Octopus 3.0 pre-release is here

So, the big news we've all been waiting for: the Octopus 3.0 pre-release is now available to the public!

We know we took a little longer than we thought with this one; it's a HUGE release for us with a lot of changes. This post will give you an outline of the major ones.

For the TL;DR, read the next bit carefully.

This is a pre-release, which means it's not the officially supported version of Octopus Deploy (that's a few more weeks away). What that means is: download it, install it, use it, and help us improve it by giving us feedback and letting us know about any bugs you might find. But it's probably not perfect, and it's not the final release.

What that also means is that we're taking feedback and bug reports differently.

This is important.

For Octopus Deploy 3.0 pre-release support, feedback and bug reports use our new Community Forum. We're posting a lot there so it's the place to get all the latest information. We've got some How To posts there which will eventually become blog posts or part of the documentation. You can also subscribe to the pre-release forum to get an email when there's new activity. We'll release most builds on the Downloads Page, but we may ship some interim builds directly to the forum.

For 2.* support, use our regular support channels (support@octopusdeploy.com and our support forum).

OK, on with the details. You can read on, or watch the video of our webinar for a more visual walkthrough of what's new.


In 3.0, we took our time because there were a few things in the architecture that we wanted to change for performance and supportability.

SQL Server

This is the big one you've all been waiting for. As of 3.0 we've moved away from RavenDB and on to SQL Server as a data store. We have a few blog posts explaining the why and how of this change which you can read. The upshot is a faster product, one that scales better, and something that is easier for us to support. This comes at the slight expense of our install experience: we chose not to bundle an embedded SQL Server instance, but we also know that a SQL Server install is a pretty familiar thing to our customers. We support SQL Server 2008 and upwards, from Express edition all the way to Enterprise, as well as Azure SQL instances.

Halibut, our new communications stack

In Octopus 2.* we used a library we called Pipefish, an actor framework similar to Akka.NET, for our communications. With 3.0 we've replaced that with something we call Halibut, which has simplified our codebase and given us something that is easier to trace and debug, and therefore easier to support. It's open source too, if you'd like to go and see how it works.


As well as the gains we're seeing in development cost and supportability from these two changes, they also make Octopus a whole lot faster and more scalable. We'll share some more metrics with you in the coming weeks as we push it some more.


With 3.0 we've also changed our Tentacle architecture. The Tentacle used to contain all the code to deploy packages, run our conventions (e.g. changing configuration files) and do tasks like IIS configuration. This made it very tightly coupled to the Octopus version. We've now broken that into two pieces: Tentacle is an agent which doesn't do much other than securely transfer files and run commands, and all the logic is in Calamari. Newer Calamari packages get pushed to the Tentacle as part of the health check process, which means fewer Tentacle upgrades. Calamari is also open source. What that means for you is that you could fork and tweak it for your environment and run some of your own deployment code.
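The health-check push described above boils down to a version comparison: the server only ships its bundled Calamari when the Tentacle's copy is older. A hedged sketch (the version format and function names here are my own, not the product's):

```python
# Illustrative pseudologic for "push newer Calamari on health check".
def parse(version: str):
    return tuple(int(part) for part in version.split("."))

def needs_calamari_update(server_version: str, tentacle_version: str) -> bool:
    # Push only when the server's bundled Calamari is strictly newer
    return parse(server_version) > parse(tentacle_version)

print(needs_calamari_update("3.0.2", "3.0.1"))  # True: ship the update
print(needs_calamari_update("3.0.2", "3.0.2"))  # False: Tentacle is current
```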

New Features

We've finally added the Delta Compression library into Octopus for package transfer. Depending on the contents of your NuGet packages, you should see a dramatic reduction in the amount of data you need to transfer to the Tentacle between versions. This is going to be a great saving in time and network utilization for a lot of customers.

Deployment Targets

We've talked a little about Deployment Targets in the past, but we've made the concept of a "machine" a little more generic.

Offline Deployments

Offline Deployment is a new Deployment Target, and one that a lot of people have been asking for. If you're one of our customers who currently has a custom installation script for deploying to totally isolated environments, you can now add an Offline Deployment target. When you deploy to it, we'll package up the NuGet packages, a copy of Calamari, a JSON file containing the release variables, and a bootstrapping script that can be run on the target machine. There's a "getting started" post here you can read if you'd like to use it.

This was one of the drivers for separating Calamari from Tentacle. Now we can just bundle up all our deployment tools as part of this package and you can take them with you!
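As a rough sketch of what such a self-contained bundle holds (the file names and layout below are illustrative, not the actual structure Octopus produces):

```python
# Assemble a hypothetical offline-deployment drop folder.
import json
import os
import tempfile

bundle = tempfile.mkdtemp(prefix="offline-drop-")
os.makedirs(os.path.join(bundle, "Packages"))   # the NuGet packages to deploy
os.makedirs(os.path.join(bundle, "Calamari"))   # a copy of the deployment tooling

# Release variables travel with the bundle as JSON
with open(os.path.join(bundle, "Variables.json"), "w") as f:
    json.dump({"Environment": "Airgapped", "ConnectionString": "..."}, f)

# A bootstrapping script runs Calamari against the packages on the target
with open(os.path.join(bundle, "Deploy.ps1"), "w") as f:
    f.write("# invoke Calamari with Packages/ and Variables.json\n")

print(sorted(os.listdir(bundle)))
```

Everything the deployment needs is in one folder, so it can be carried into an isolated network and run there.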

Azure Web sites

Previously we had a script in our Library which used Web Deploy to push the contents of a package to an Azure Website. This is now built in as a Deployment Target.

Cloud Services

We've also revamped our support for Cloud Services. As part of this you can now perform any conventions on your package (variables, config transforms etc) because we'll unpack your .cspkg, run conventions, then re-pack before pushing to Azure.

For details on our Azure support, have a read of this post.

SSH Deployments

We've had SSH and Linux support on the radar for ages now, and it's here! To do this, we made Calamari compatible with Mono, so you'll need that on your target machine. We did this so that we could maintain a single codebase for these tools and Linux support didn't become a second-class citizen. Also, now that Tentacle is just an agent, our architecture for the two platforms is very similar. To read more about our SSH support, read this post, and for a walkthrough of setting up your own, we have another.


The Migrator feature is part of the Octopus Server manager and also available from the command line. It is a tool which will:

  • Import your 2.6 backup into 3.0
  • Export your 3.0 project configuration into a directory of JSON files
  • Import 3.0 JSON files into Octopus Deploy

This will help you bring your existing Octopus configuration into 3.0. It will help you transition a 3.0 POC or Pilot into a production environment. With some scripting you could also build something to export projects out of a staging Octopus instance and push them to a production one.

With some scripting you can also... wait for it... commit your entire Octopus Deploy configuration into Git (or the VCS of your choice)!

A couple of things you won't see

You might notice that Octopus 3.0 currently looks just like 2.6. That will change by the time we're at the final release. We're spending the next few weeks giving it a bit of a look and feel change, but we wanted to get the functionality out there for you to test.

You might also notice that FTP is missing as a deployment target. FTP was another item that was very hard for us to support. The component we used worked in some environments and not others. Due to the very low number of people using FTP steps (and we think a lot of them are for Azure websites anyway), we made the decision not to port that feature to 3.0. You can still achieve the same goal with a PowerShell step, and we might look at providing one in the library.

If this one is an issue for you, please let us know and we'll help you out.

The other thing you may encounter: because we've totally revamped our Cloud Services support, the migrator won't bring across the old Azure Web Role configuration. You'll need to do a bit of a manual port if you're using that feature in 2.6.

So let's get started

As I said at the beginning of the post, the Community Forum is the place for more information, feedback, bug reports and discussion. Any Octopus 2.* support should be via the normal channels.

Once we're out of pre-release this will change, and we're hoping that the Community Forum stays around as a place to discuss more general things. So go and grab the bits and get started!

A quick note on licensing

A few people have asked this. If you have a valid maintenance agreement with us (i.e. you have bought Octopus in the last 12 months OR have renewed your maintenance) then 3.0 is a FREE upgrade. If your maintenance has expired, you'll want to renew that to be able to upgrade.

Happy Deployments!

Announcing Octopus Deploy 3.0 pre-release

Published on: 2 Jun 2015 by Damian Maclennan

The news you've all been waiting for.....

Octopus 3.0 is ready for you to try!

As we've said previously, we ended up making a lot of changes for this version, so we took a little longer to release something that was ready to use. Octopus 3.0 is faster, more stable, and has loads of good features in it, and we hear you… you want to get your hands on it. So here's how it's going to happen.

Register for a webinar

If you'd like to be among the very first to get a look, come along to one of our webinars early next week. We're repeating it three times so we should have everybody's timezone covered. In the webinar we'll show you what's new, a few things that are different about 3.0 and how you can get started with it. The link to register is http://octopusdeploy.com/webinars

Watch the blog

A couple of days after the webinar we'll do a blog post and share the link with the world. So don't worry if you can't make the webinar, you won't have long to wait. We just want to make sure we have answers to the most common questions so we're staging the release announcement just a little.

So get ready!

So for everybody that has been waiting anxiously... you'll get your hands on a pre-release next week. You'll be able to bring a 2.6 backup across to it, you'll have SQL Server, offline deployments, SSH and Linux support, new Azure Website and Cloud Service support and a much faster, much better Octopus experience. We can't wait!

How Octopus 3.0 blows 2.6 out of the water

Published on: 28 May 2015 by Shane

At Octopus Deploy we have been working diligently to create an update that will put a smile on your face. In version 2.x we experienced some growing pains as our beloved users put Octopus through its paces. One of our goals for Octopus 3.0 was to provide a major performance improvement. In order to achieve this, we have:

  • Re-written our persistence layer and replaced Raven with SQL Server
  • Re-written our communication layer, swapping out Pipefish for Halibut
  • Made vast improvements to logging and orchestration
  • Started testing and measuring large scale deployments

Yesterday, as a part of our bug-bash, we performed a complex deployment on Octopus 2.6 and 3.0 to compare the performance. The deployment involved:

  • 200 NuGet feeds
  • 50 steps with 3 child package steps
  • Packages ranging from 10MB to 1000MB
  • 5 variable sets with 5000 variables each

We are pleased with the results and hope that you will be too. The 3.0 deployment finished in 2 hours 41 minutes compared to 4 hours 18 minutes for 2.6. You will notice in the charts below that 3.0 CPU and memory usage drops off much sooner than 2.6:

CPU usage

Octopus Server 2.6 hammers the CPU, staying around 100% for over two hours. In comparison, Octopus Server 3.0 spikes at the beginning of its deployment (the 50 minute mark on the chart) and then stays under 20% for the rest of the deployment.

Memory usage

What?! Yes, that is almost 3.5GB used by Octopus Server 2.6 compared to under 500MB for Octopus Server 3.0.

In Octopus 3.0 you can expect a marked performance improvement, especially for large and complicated deployment scenarios. We will continue to investigate ways we can improve Octopus and measure our improvements so that you can be confident Octopus Deploy can handle any deployment you throw at it. Also, lots of cool new features. Coming soon!

Octopus integration with TFS Build vNext

If you're an Octopus Deploy user who uses Team Foundation Server or Visual Studio Online, you're probably very familiar with OctoPack. It's a tool that hooks into your MSBuild process to package your project ready for use with Octopus.

Right now, the CI process for a project in TFS involves using OctoPack to package and push your application, then Automatic Release Creation to create a new release as soon as you push to Octopus, and finally a Lifecycle that will automatically deploy that release to an environment. There's a walkthrough on our Training Videos page if you want to learn more.

This scenario works really well for 90% of projects, but it blurs the line between the build and deploy stages of your process. These stages should really be separate. For example, if your build succeeds but half your integration tests fail, would you want a deployment to happen?

If you're sufficiently dedicated, you can modify build process templates to add additional steps, but nobody wants to work with this:

Process Build Template

Team Build vNext

Thankfully, Microsoft has put a lot of effort into replacing Team Build. If you're a Visual Studio Online user, or you've played with one of the TFS 2015 Prerelease versions, you might have seen the new Build.Preview option in the menu (it's going to be combined into the existing Build option very soon).

Build Preview

If you want to know how it works, Chris Patterson gave a great walkthrough at //build/ this year, and Microsoft has made some information available online.

As a long-time TFS user, I'm very confident you'll want to move to the new Build system as soon as you can.

Octopus Integration

The new structure of Team Build gives us a great opportunity to integrate better with your build process. To that end, we've created a new, public OctoTFS repository on GitHub.

It currently contains two options for creating a Release in Octopus as an independent step in your build definition. Both of them let you separate the build and deploy phases really nicely. Note that you'll still have to package and push your NuGet package to the Octopus Server (or another NuGet repo) - these steps just create releases. You can still use OctoPack for packaging and pushing.

The integration I'm most excited about is the custom Build Step. It gives you a really nice UI and even includes release notes pulled from changesets and work items - something we get asked for a lot.

Unfortunately, because you need to upload the package to TFS/VSO, it won't be available to use until the new build system hits RTM. That shouldn't be too far away. At least this way you'll be able to use it from day one of RTM instead of having to wait!

Custom Build Step

Release Notes

The other option is a PowerShell script you can include in your project. This one you can use right now, and it works nearly as well (no release notes yet). It's not quite as nice to work with, but it does the job for now.

Powershell Step


We will continue to work on these integrations so they're useful and easy to use for as many people as possible. Our priority is always going to be on the core product though, so we'll improve and add when we can.

Of course the OctoTFS repository is open source, and we will be accepting pull requests, so if you see a bug, a potential improvement, or even a completely new integration option, we'd love your contribution!

Other TFS Integrations

I anticipate we'll start building a few more integrations with TFS in the future. Microsoft is starting to open up some awesome opportunities with the REST API, Service Hooks, and even Extensions. Octopus dashboard inside TFS anyone?

Octopus 3.0: Migrator RFC

Published on: 21 May 2015 in Octopus Features by Paul Stovell

Octopus 2.6 uses RavenDB, while Octopus 3.0 uses SQL Server. Of course, this means that we need some way to help you migrate data from 2.6 to 3.0. Since we're putting so much effort into data migration, we may as well make it a feature: it's also going to be a general Octopus 3.0 exporter and importer.

The migrator tool will support the following four scenarios:

  • Importing data from 2.6 into a 3.0 server
  • Exporting data from a 3.0 server, and importing it back into a 3.0 server
  • Splitting data from a 3.0 server into two or more
  • Merging data from multiple 3.0 servers into one

The data that we'll import and export is limited to "configuration"-style data (i.e., projects, environments, machines, deployment processes, variable sets, teams, NuGet feeds, and so on). We don't plan to support exporting historical data (i.e., releases, deployments, and audit events), but we will eventually import historical data from 2.6 backups.


Exporter UI in Octopus 3.0

The exporter tool exports data as JSON files inside a directory.

Exported directory

What can you do with this export?

  • Commit it to a Git repository
    We're making the JSON as friendly and predictable as possible, so that if you commit multiple exports, the only differences that appear are actual changes, and the diffs are easy to read.
  • Transfer it to a new Octopus server
    You can delete files you don't want to import (e.g., if you're transferring one project, just delete everything except the files for that project) and then import it using the Importer.

While the JSON files contain IDs, when importing we actually use names to determine whether something already exists. This means you can export from multiple Octopus servers, combine the exports, and then import them into a single Octopus server.

If you use sensitive variables, these will be encrypted in the JSON using a password (sensitive variables are normally stored in SQL, encrypted with your master key; the exporter will decrypt them, then encrypt them with this new password). As long as you use the same password between exports, you'll get the same output, so they won't appear as changed in your diff tool.
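A small sketch of why that works (purely illustrative, not Octopus's actual scheme): if the key and the IV are both derived deterministically from the password and the plaintext, identical input always produces identical ciphertext, so repeated exports diff cleanly.

```python
# Deterministic password-based encryption sketch: same input, same bytes out.
import hashlib
import hmac

def export_secret(plaintext: bytes, password: bytes) -> bytes:
    key = hashlib.pbkdf2_hmac("sha256", password, b"export-salt", 10_000)
    # A synthetic IV derived from the plaintext keeps the output deterministic
    iv = hmac.new(key, plaintext, "sha256").digest()[:16]
    keystream = b""
    block = 0
    while len(keystream) < len(plaintext):
        keystream += hmac.new(key, iv + block.to_bytes(4, "big"), "sha256").digest()
        block += 1
    body = bytes(p ^ s for p, s in zip(plaintext, keystream))
    return iv + body

first = export_secret(b"db-password-123", b"export password")
second = export_secret(b"db-password-123", b"export password")
print(first == second)  # same password, same variable: byte-identical output
```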


Importer in Octopus 3.0

The importer tool can either take an exported directory and the password used to export it, or an Octopus 2.6 backup file (.octobak) and the Octopus master key. It will then import the data.

You'll get a chance to preview the changes first, and you can tell the tool to either:

  • Overwrite documents if they already exist in the destination (e.g., if a project with the same name already exists, overwrite it)
  • Skip documents if they already exist in the destination (e.g., if a project with the same name already exists, do nothing)
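The name matching plus the overwrite/skip choice above can be sketched roughly like this (illustrative only, not the importer's actual code):

```python
# Merge exported documents into a destination server, keyed by Name not Id.
def import_documents(existing, incoming, overwrite):
    docs = {d["Name"]: d for d in existing}
    for d in incoming:
        if overwrite or d["Name"] not in docs:
            docs[d["Name"]] = d  # overwrite mode wins on a name clash
    return sorted(docs.values(), key=lambda d: d["Name"])

dest = [{"Id": "projects-1", "Name": "Web"}]
export = [{"Id": "projects-7", "Name": "Web"}, {"Id": "projects-2", "Name": "API"}]

print(import_documents(dest, export, overwrite=True))   # Web replaced, API added
print(import_documents(dest, export, overwrite=False))  # Web kept, API added
```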

The importer wraps all data changes in a SQL transaction; if any problems are discovered during the import, the transaction will be rolled back and nothing will be imported.

Is history important?

As I mentioned, one feature we are leaving out of our upcoming public beta of 3.0 is exporting and importing deployment history (even from 2.6): releases, deployments, tasks, artifacts, and audit events won't be exported or imported. Those are much more complicated, so we plan to ship this tool as-is, then add history later.

We definitely plan to extend the importer to import history from 2.6 backups before 3.0 ships. What we're not sure about is whether it's important to import and export history in 3.0; we figure the vast majority of people only care about project configuration data, not history. We'd love to hear some scenarios where 3.0 import of history might be useful.

PS: Like the new admin tool look?

Octopus Deploy High Availability

Sometime in the near future (post 3.0) we're going to be releasing a High Availability edition of Octopus Deploy. We have customers using our product to automate the deployment of applications for hundreds of developers, to many hundreds of machines. For these customers, Octopus Deploy is a critical piece of their development infrastructure and downtime has a significant cost.

Currently, the architecture of Octopus Deploy doesn't give us a good high availability story. We ship by default with an embedded RavenDB and have a lot of dependencies on the file system for message persistence. We do support moving to an external RavenDB cluster, but our networking and messaging stack means that having more than one Octopus Server just wouldn't work.

With 3.0, our support for SQL Server means that we can support a SQL Server cluster. This is fantastic news for a lot of people who already have the infrastructure and support for SQL Server in a clustered environment. However, we're doing extra work to make the Octopus Deploy server cluster-aware in its communications and task management.

We've had various requests for different topologies to support in a high availability scenario, so in this post I'm going to talk about what we are and aren't planning to support, and what we'll probably support in the future.

So here goes.

The main scenario we want to support is a highly available and load balanced configuration with a SQL Server cluster and multiple Octopus Servers behind a load balancer.

Octopus HA Scenario

This gives you fault tolerance at the Octopus Server level: a machine failure would be picked up by the load balancer health check, and all traffic would be routed to the other nodes. SQL Server availability will be taken care of by SQL Server and Windows Clustering services.

You'll notice that as well as the SQL Server infrastructure, Octopus also needs a file share for packages, artifacts and log files.

Scenarios that don't make sense to us

We have some customers who want to keep data centers separate, either to keep their Production and Development environments segregated for security reasons, or because they have multiple regions and want to save on bandwidth or latency.

We think that the most manageable way to achieve this is via Octopus to Tentacle communication, routed via VPN if necessary. This scenario is secure (any system is only as secure as its weakest link, and the TLS security we use for Tentacle communication is not the weakest link in nearly all scenarios) and will perform well from a latency point of view.

For customers with multiple data centers in different regions who are concerned about transferring large NuGet packages to many Tentacles, we would recommend replicating or proxying your NuGet server into each region and, through DNS aliasing, having the Tentacles fetch packages from geographically local package repositories. An even simpler solution is to have two separate but identical NuGet repositories and make the Feed ID a variable.
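The "Feed ID as a variable" idea could look roughly like this (all region names and URLs below are hypothetical):

```python
# One deployment process, two regions: the feed is a variable whose value
# is scoped per region, so each Tentacle pulls from a nearby mirror.
scoped_feed = {
    "Sydney":   "https://nuget.sydney.example.com/feed",
    "Virginia": "https://nuget.virginia.example.com/feed",
}

def resolve_feed(scoped_values, region):
    # Octopus would resolve the scoped variable value at deployment time
    return scoped_values[region]

print(resolve_feed(scoped_feed, "Sydney"))
```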

Some of the requested configurations that we don't plan on supporting are:

Bad Octopus HA Scenario

In this instance, Octopus Servers in different data centers share a common database and fileshare.

Independent Octopus Servers and Environments having local SQL Servers and Fileshares which are replicated.

Both of these won't work properly, as each Octopus Deploy server wouldn't be able to see Tentacles in the remote environments, meaning that health checks and deployment tasks will fail. Additionally, we believe that the latency of remote SQL Server communications and a replicated or shared filesystem will result in performance that is worse than Octopus to Tentacle communications in those environments. These configurations also don't cater to any security concerns customers have about visibility between environments.

Some alternative solutions

While not strictly High Availability solutions, for those customers who really want to have segregated environments, we do have two options which may work for them.

Migrator Tool

For 3.0 we needed to build a way to migrate a 2.6 RavenDB database to 3.0. What started as a database tool is turning into a general-purpose Octopus Deploy migration and data replication feature. With it, you could transfer project configuration and releases to an upstream Octopus Server.

Relay Tentacles

This isn't something we have now, but it's potentially on our radar for 3.1 or 3.2: the concept of a Tentacle that can act as a relay, gateway or proxy to other Tentacles on a private network.

It would look something like this:

Octopus Relay

When you add a machine to an Environment, you'll be able to specify how the Octopus Server can route to it, and any task (deployments and health checks) would go via this relay.

A few closing thoughts

Octopus Deploy High Availability won't be available for all editions; it's something we'll release and sell as a new licensing tier and price point. To ensure we can support customers on this edition, we'll very likely provide some consulting and solution architecture to make sure it is running in a supported configuration, giving us and our customers assurance that their Octopus Deploy infrastructure remains highly available.

How to deploy from Minecraft with Octopus Deploy

Published on: 8 May 2015 by Shane

Imagine a world where deployments were simple. You could push a button in your favorite video game and your latest release was deployed to production. Some may scoff: "My spreadsheets, RDP and manual config file edits could never be replaced by a virtual button!" Allow me to present OctoCraft Deploy:

On the Minecraft side all you need is Bukkit and a copy of Minecraft. Bukkit allows the creation of custom Minecraft plugins. OctoCraft Deploy is just a Minecraft plugin that interacts with the Octopus Deploy API.

Octopus Deploy is API first. Anything you can do through the UI can be accomplished through the API. Even with Java. OctoCraft Deploy calls the Octopus Deploy API to create a release, deploy that release to an environment and then monitor the state of the deployment. You can find full API doco here.

For example, to create a release:

Create a method to post to the API.

public HttpResponse Post(String path, String json) throws ClientProtocolException, IOException {
    HttpClient client = new DefaultHttpClient();
    HttpPost post = new HttpPost(url + path);
    // Authenticate every request with the Octopus API key header
    post.setHeader(API_KEY_HEADER, apiKey);

    post.setEntity(new StringEntity(json, ContentType.TEXT_PLAIN));
    return client.execute(post);
}

POJO that will be serialized to the post request.

public class ReleasePost {
    private String projectId;
    private String version;

    public ReleasePost(String projectId, String version) {
        this.projectId = projectId;
        this.version = version;
    }

    public String getProjectId() {
        return projectId;
    }

    public String getVersion() {
        return version;
    }
}

Do all the hard work of creating a release.

private Release createRelease(Project project) throws ClientProtocolException, IOException {
    ReleasePost releasePost = new ReleasePost(project.getId(), "1.i");
    String json = objectMapper.writeValueAsString(releasePost);
    HttpResponse response = webClient.Post(RELEASES_PATH, json);

    String content = EntityUtils.toString(response.getEntity());
    return objectMapper.readValue(content, Release.class);
}

The full source is on GitHub.

Introducing Dalmiro Grañas

Dalmiro Grañas

My name is Dalmiro Grañas and I'm from Buenos Aires, Argentina. I joined Octopus last October as a support engineer. If you've asked for an API Script on our support forums anytime since then, odds are that I was the one who wrote it.

Prior to Octopus, I was the guy who was supposed to do manual deployments for the company's most critical app (time and expenses management). In my second week at that job I realized that wasn't my thing, so I learned PowerShell to automate the entire process. Later on, the company's internal DevOps team recruited me and gave me the task of implementing Octopus Deploy across the company while teaching 200+ teams how to use it. Most of these teams' deployment strategies involved handing over spreadsheets of manual instructions to the company's sysadmins, so you can guess how hard that battle was.

Even though my main background is on Windows Server Administration (and automation around it), as years went by I've leaned more and more towards development (.NET) while helping other developers to use the right tools for the job. Hopefully here at Octopus I'll be able to help many devs get the most out of our tool so they never have to suffer awful deployments like I did!