Octopus 3.0: Migrator RFC

Published on: 21 May 2015 in Octopus Features by Paul Stovell

Octopus 2.6 uses RavenDB, while Octopus 3.0 uses SQL Server. Of course, this means that we need some way to help you migrate data from 2.6 to 3.0. Since we're putting so much effort into data migration, we may as well make it a feature: it's also going to be a general Octopus 3.0 exporter and importer.

The migrator tool will support the following four scenarios:

  • Importing data from 2.6 into a 3.0 server
  • Exporting data from a 3.0 server, and importing it back into a 3.0 server
  • Splitting data from a 3.0 server into two or more
  • Merging data from multiple 3.0 servers into one

The data that we'll import and export is limited to "configuration"-style data (i.e., projects, environments, machines, deployment processes, variable sets, teams, NuGet feeds, and so on). We don't plan to support exporting historical data (i.e., releases, deployments, and audit events), but we will eventually import historical data from 2.6 backups.

Exporting

Exporter UI in Octopus 3.0

The exporter tool exports data as JSON files inside a directory.

Exported directory

What can you do with this export?

  • Commit it to a Git repository
    We're making the JSON as friendly and predictable as possible, so that if you commit multiple exports, the only differences that appear are actual changes that have been made, and the diffs will be easy to read.
  • Transfer it to a new Octopus server
    You can delete files you don't want to import (e.g., if you're transferring one project, just delete everything except the files for that project) and then import it using the Importer.

While the JSON files contain IDs, the importer actually uses names to determine whether something already exists. This means you can export from multiple Octopus servers, combine the exports, and then import them into a single Octopus server.

If you use sensitive variables, these will be encrypted in the JSON using a password (note that sensitive variables are normally stored in SQL Server encrypted with your master key; the exporter will decrypt them, then re-encrypt them with this new password). Assuming you use the same password between exports, you'll get the same output, so they won't appear as changed in your diff tool.
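
To make "same password in, same output out" concrete: it implies deterministic encryption, which means avoiding a random IV. Here's a minimal sketch of one way to achieve that (this is illustrative only, not our actual implementation; the class name, salt, and parameters are assumptions): derive the key from the password with PBKDF2, and derive the IV from the plaintext itself, SIV-style.

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class SensitiveValueCipher {
    // Hypothetical fixed salt; a real implementation would store it alongside the export
    private static final byte[] SALT = "octopus-export".getBytes(StandardCharsets.UTF_8);

    public static String encrypt(String plaintext, char[] password) throws Exception {
        // Derive an AES key from the export password
        SecretKeyFactory kdf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] keyBytes = kdf.generateSecret(new PBEKeySpec(password, SALT, 10000, 256)).getEncoded();

        // Derive the IV from the plaintext (SIV-style) instead of randomizing it,
        // so the same value and password always produce the same ciphertext
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(keyBytes, "HmacSHA256"));
        byte[] iv = Arrays.copyOf(mac.doFinal(plaintext.getBytes(StandardCharsets.UTF_8)), 16);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(keyBytes, "AES"), new IvParameterSpec(iv));
        return Base64.getEncoder().encodeToString(cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8)));
    }
}

The tradeoff of any deterministic scheme is that identical values encrypt to identical output; that's exactly what keeps diffs stable between exports, but it's worth keeping in mind.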

Importing

Importer in Octopus 3.0

The importer tool can either take an exported directory and the password used to export it, or an Octopus 2.6 backup file (.octobak) and the Octopus master key. It will then import the data.

You'll get a chance to preview the changes first, and you can tell the tool to either:

  • Overwrite documents if they already exist in the destination (e.g., if a project with the same name already exists, overwrite it)
  • Skip documents if they already exist in the destination (e.g., if a project with the same name already exists, do nothing)

The importer wraps all data changes in a SQL transaction; if any problems are discovered during the import, the transaction will be rolled back and nothing will be imported.
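
To make that behaviour concrete, here's a minimal sketch of an import loop in the spirit of the tool (this is not the actual importer code or schema; the table layout, column names, and helpers are illustrative): match on name rather than ID, honour the overwrite/skip choice, and wrap the whole import in a single transaction.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Map;

public class ConfigurationImporter {
    private final String connectionString;

    public ConfigurationImporter(String connectionString) {
        this.connectionString = connectionString;
    }

    // Imports documents (name -> JSON) into an illustrative table with Id, Name and Json columns
    public void importDocuments(String table, Map<String, String> documents, boolean overwrite) throws Exception {
        try (Connection connection = DriverManager.getConnection(connectionString)) {
            connection.setAutoCommit(false); // one transaction for the whole import
            try {
                for (Map.Entry<String, String> doc : documents.entrySet()) {
                    // Match on Name, not Id, so documents exported from another
                    // server line up with the ones that already exist here
                    String existingId = findIdByName(connection, table, doc.getKey());
                    if (existingId == null) {
                        execute(connection, "INSERT INTO " + table + " (Id, Name, Json) VALUES (NEWID(), ?, ?)",
                                doc.getKey(), doc.getValue());
                    } else if (overwrite) {
                        execute(connection, "UPDATE " + table + " SET Json = ? WHERE Id = ?",
                                doc.getValue(), existingId);
                    }
                    // else: skip - a document with this name already exists
                }
                connection.commit(); // all or nothing
            } catch (Exception e) {
                connection.rollback(); // any failure leaves the destination untouched
                throw e;
            }
        }
    }

    private String findIdByName(Connection c, String table, String name) throws Exception {
        try (PreparedStatement s = c.prepareStatement("SELECT Id FROM " + table + " WHERE Name = ?")) {
            s.setString(1, name);
            try (ResultSet r = s.executeQuery()) {
                return r.next() ? r.getString("Id") : null;
            }
        }
    }

    private void execute(Connection c, String sql, String... args) throws Exception {
        try (PreparedStatement s = c.prepareStatement(sql)) {
            for (int i = 0; i < args.length; i++) s.setString(i + 1, args[i]);
            s.executeUpdate();
        }
    }
}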

Is history important?

As I mentioned, one feature we're currently leaving out for our upcoming public beta of 3.0 is exporting and importing deployment history (even from 2.6): releases, deployments, tasks, artifacts, and audit events won't be exported or imported. Those are much more complicated, so we plan to ship the tool as-is, then add history later.

We definitely plan to extend the importer to import history from 2.6 backups before 3.0 ships. What we're not sure about is whether it's important to import and export history in 3.0; we figure the vast majority of people only care about project configuration data, not history. We'd love to hear some scenarios where importing history in 3.0 might be useful.

PS: Like the new admin tool look?

Octopus Deploy High Availability

Sometime in the near future (post-3.0) we're going to release a High Availability edition of Octopus Deploy. We have customers using our product to automate the deployment of applications for hundreds of developers, to many hundreds of machines. For these customers, Octopus Deploy is a critical piece of their development infrastructure, and downtime has a significant cost.

With the current architecture of Octopus Deploy, we don't have a good high availability story. We ship by default with an embedded RavenDB and have a lot of dependencies on the file system for message persistence. We do support moving to an external RavenDB cluster, but our networking and messaging stack means that running more than one Octopus Server just wouldn't work.

With 3.0, our support for SQL Server means that we can support a SQL Server cluster. This is fantastic news for the many people who already have the infrastructure and expertise to run SQL Server in a clustered environment. However, we're doing extra work to make the Octopus Deploy server cluster-aware in its communications and task management.

We've had various requests for different topologies to support in a high availability scenario, so in this post I'm going to talk about what we are and aren't planning to support, and what we'll probably support in the future.

So here goes.

The main scenario we want to support is a highly available and load balanced configuration with a SQL Server cluster and multiple Octopus Servers behind a load balancer.

Octopus HA Scenario

This will give you fault tolerance at the Octopus Server level: a failing machine would be picked up by the load balancer's health check, and all traffic would be routed to the other nodes. SQL Server availability will be taken care of by SQL Server and Windows clustering services.

You'll notice that as well as the SQL Server infrastructure, Octopus also needs a file share for packages, artifacts and log files.

Scenarios that don't make sense to us

We have some customers who want to keep data centers separate, either to keep their Production and Development environments segregated for security reasons, or because they have multiple regions and want to save on bandwidth or latency.

We think that the most manageable way to achieve this is via Octopus-to-Tentacle communication, routed via VPN if necessary. This scenario is secure (and by secure, I mean that any system is only as secure as its weakest link, and the TLS security we use for Tentacle communication is not the weakest link in nearly all scenarios), and it will perform well from a latency point of view.

For customers with multiple data centers in different regions who are concerned about transferring large NuGet packages to many Tentacles, we recommend replicating or proxying your NuGet server into each region and, through DNS aliasing, having the Tentacles fetch packages from geographically local package repositories. An even simpler solution is to have two separate but identical NuGet repositories and make the Feed ID a variable.

Some of the requested configurations that we don't plan on supporting are:

Bad Octopus HA Scenario

In the first, Octopus Servers in different data centers share a common database and file share.

In the second, independent Octopus Servers and environments have local SQL Servers and file shares which are replicated.

Neither of these will work properly, as each Octopus Deploy server wouldn't be able to see Tentacles in the remote environments, meaning that health checks and deployment tasks would fail. Additionally, we believe that the latency of remote SQL Server communications and a replicated or shared filesystem would result in performance that is worse than Octopus-to-Tentacle communication in those environments. These configurations also don't address the security concerns customers have about visibility between environments.

Some alternative solutions

While not strictly High Availability solutions, the following two options may work for those customers who really want segregated environments.

Migrator Tool

For 3.0 we needed to build a way to migrate a 2.6 RavenDB database to 3.0. What started as a database tool is turning into a general-purpose Octopus Deploy migration and data replication feature. With it, it would be possible to transfer project configuration and releases to an upstream Octopus Server.

Relay Tentacles

This isn't something we have now, but it's potentially on our radar for 3.1 or 3.2: a Tentacle that can act as a relay, gateway, or proxy to other Tentacles on a private network.

It would look something like this:

Octopus Relay

When you add a machine to an environment, you'll be able to specify how the Octopus server can route to it, and any task (deployments and health checks) would go via this relay.

A few closing thoughts

Octopus Deploy High Availability won't be available for all editions; it's something we'll release and sell as a new licensing tier and price point. To ensure we can support customers on this edition, we'll very likely provide some consulting and solution architecture services to make sure it's running in a supported configuration, giving both us and our customers assurance that their Octopus Deploy infrastructure remains highly available.

How to deploy from Minecraft with Octopus Deploy

Published on: 8 May 2015 by Shane

Imagine a world where deployments were simple. You could push a button in your favorite video game and your latest release was deployed to production. Some may scoff: "My spreadsheets, RDP and manual config file edits could never be replaced by a virtual button!" Allow me to present OctoCraft Deploy:

On the Minecraft side all you need is Bukkit and a copy of Minecraft. Bukkit allows the creation of custom Minecraft plugins. OctoCraft Deploy is just a Minecraft plugin that interacts with the Octopus Deploy API.

Octopus Deploy is API first. Anything you can do through the UI can be accomplished through the API. Even with Java. OctoCraft Deploy calls the Octopus Deploy API to create a release, deploy that release to an environment and then monitor the state of the deployment. You can find full API doco here.

For example, to create a release:

Create a method to post to the API.

public HttpResponse Post(String path, String json) throws ClientProtocolException, IOException {
    HttpClient client = HttpClientBuilder.create().build();
    HttpPost post = new HttpPost(url + path);
    // Authenticate with the Octopus API key header
    post.setHeader(API_KEY_HEADER, apiKey);

    // Send the body as JSON
    post.setEntity(new StringEntity(json, ContentType.APPLICATION_JSON));
    return client.execute(post);
}

A POJO that will be serialized as the body of the POST request.

public class ReleasePost {
    private String projectId;
    private String version;

    public ReleasePost(String projectId, String version) {
        this.projectId = projectId;
        this.version = version;
    }

    @JsonProperty("ProjectId")
    public String getProjectId() {
        return projectId;
    }

    @JsonProperty("Version")
    public String getVersion() {
        return version;
    }
}

Do all the hard work of creating a release.

private Release createRelease(Project project) throws ClientProtocolException, IOException {
    // Build the release request and POST it to the releases endpoint
    // (the version number is hard-coded for this example)
    ReleasePost releasePost = new ReleasePost(project.getId(), "1.0");
    String json = objectMapper.writeValueAsString(releasePost);
    HttpResponse response = webClient.Post(RELEASES_PATH, json);

    // Deserialize the response body into a Release
    String content = EntityUtils.toString(response.getEntity());
    return objectMapper.readValue(content, Release.class);
}

The full source on GitHub.

Introducing Dalmiro Grañas

Dalmiro Grañas

My name is Dalmiro Grañas and I'm from Buenos Aires, Argentina. I joined Octopus last October as a support engineer. If you've asked for an API script on our support forums anytime since then, odds are that I was the one who wrote it.

Prior to Octopus, I was the guy who was supposed to do manual deployments for the company's most critical app (time and expenses management). In my second week at that job I realized that wasn't my thing, so I learned PowerShell to automate the entire process. Later on, the company's internal DevOps team recruited me, and I was tasked with implementing Octopus Deploy across the company while teaching 200+ teams how to use it. Most of these teams' deployment strategies involved handing spreadsheets of manual instructions over to the company's sysadmins, so you can guess how hard that battle was.

Even though my main background is in Windows Server administration (and automation around it), over the years I've leaned more and more towards .NET development while helping other developers use the right tools for the job. Hopefully here at Octopus I'll be able to help many devs get the most out of our tool so they never have to suffer awful deployments like I did!

Introducing Michael Richardson

Michael Richardson

I have been a professional software engineer since 2003, and an amateur one long before that. I’ve mostly worked freelance across many industries, and I certainly believe in the ability of software to change the world.

I’ve found there are a few reliable predictors of successful delivery, but perhaps none better than the management (i.e. automation!) of releases and environments. Personally, I have spent far too many hours of my life coercing various unsuited products into performing deployments.

I believe Octopus Deploy can allow teams to spend less time shaving the Yak, and more time making the world a better place.

So I joined the Octopus Deploy team, as Senior Software Developer. To help save the world, one deployment at a time.

Ping me on Twitter or LinkedIn

April 2015 Community Roundup

Published on: 30 Apr 2015 in community by Damian Maclennan

While we've been a bit quiet here lately, heads down finishing up 3.0, the great folks in our community haven't!

Here's a summary of some good things that have come across my radar this month.

Deployment Tracking in Raygun. Our friends at Raygun have implemented a deployment tracking feature, so you'll know whether your new release has fixed everything or has inadvertently made things worse. They have a great step-by-step guide to integrating Octopus Deploy and Raygun so your deployments are tagged for you. Great stuff! If you're using Raygun you definitely want to look at that, and if you're not using Raygun... go check it out, and tell them we sent you!

Again from the unstoppable Jason Brown at Domain, more on automating AWS Infrastructure with Octopus Deploy and his Robot Army 2.5.

While we're on infrastructure management, check out this great post from the DevOpsGuys on PowerShell DSC and Octopus.

We've had a few people talking more about SQL deployments this month. Firstly, Redgate's Tugberk Ugurlu (who contributed the SQL Release scripts to our library) has written a step-by-step post about them. Or, if you're using Visual Studio tooling in your environment, you might want to read Colin Svingen's post on deploying dacpacs.

We've had a few people ask on Twitter lately about deploying DotNetNuke sites with Octopus. We can all now thank Darrell Tunnell, who has written up a fantastic guide to doing just that. Fantastic stuff!

Lastly, I've not seen a blog post about this anywhere, but there is a Node script out there on npm to create and deploy releases using the Octopus API. If Node's your thing, you might want to check it out.

Don't forget: if you've published a blog post, run a user group talk or a workshop, or done anything like that and would like us to let the world know, send us a tweet or an email and we can give you a shout out!

Vulnerability in HTTP.sys Could Allow Remote Code Execution

Published on: 16 Apr 2015 by Paul Stovell

It may not have a cool code name, but this is a very severe problem:

Vulnerability in HTTP.sys Could Allow Remote Code Execution

A remote code execution vulnerability exists in the HTTP protocol stack (HTTP.sys) that is caused when HTTP.sys improperly parses specially crafted HTTP requests. An attacker who successfully exploited this vulnerability could execute arbitrary code in the context of the System account.

As far as this applies to Octopus Deploy:

  1. The Octopus server/web portal uses HTTP.sys (as does IIS), so you'll need to ensure this patch is installed on your Octopus server.
  2. The Tentacle agent software does not use HTTP.sys.
  3. If you are deploying applications to IIS, or self-hosted web applications built with frameworks like Nancy self-hosting or Web API self-hosting (which build on HttpListener, which ultimately builds on HTTP.sys), you should patch those servers too.

Calamari: open sourcing Tentacle deployments

Published on: 13 Apr 2015 by Paul Stovell

I wrote recently about the new deployment targets coming as part of Octopus 3.x, and I explained that we'd be turning Tentacle into a "shell":

From 3.0 onwards, think of Tentacle as a Windows-oriented version of SSH, except with support for both polling and listening modes. All it knows how to do is to execute PowerShell scripts, or transfer files. Everything else - the config transform stuff, the IIS stuff - will be PowerShell scripts that Octopus will ensure exist on the Octopus server prior to deployments executing.

That led to a logical conclusion:

Now that we're decoupling Tentacle the communication channel from Tentacle the deployment engine, we gain a new possibility: all the scripts and deployment logic that Tentacle executes during a deployment can now be open sourced.

Making this happen has been our main focus over the last couple of months, and has manifested as Calamari, the open-source, convention-driven deployment runner:

Calamari GitHub repository

Since we started, we've believed that deployment pipelines take a significant time investment and become critical to how teams operate. For that reason, it's very important to avoid vendor lock-in. Octopus has always been built on open/standard technologies, like NuGet and PowerShell, to minimize your dependencies on Octopus - open sourcing Calamari and all the conventions we use during a deployment is a natural evolution of that goal.

Calamari is a console application with a number of commands, for example:

Calamari deploy-package --package MyPackage.nupkg --variables Variables.json

The package is a NuGet package, and the variables JSON file looks like this:

{
    "MyVariableA": "My value A",
    "MyVariableB": "My value B"
}

Deployments now work like this:

  1. Octopus acquires packages and generates the variables file
  2. These are pushed to Tentacle
  3. Tentacle is told to run a PowerShell script which just invokes Calamari
  4. Calamari takes care of the deployment, and terminates

Now that Calamari is open source, it might help to answer any questions you had about what happens during a deployment on a Tentacle. For example, did you ever wonder what order conventions run in?

Conventions

Or maybe you always wondered how Tentacle (now Calamari) calls PowerShell, and passes variables to it?
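
If you'd rather not dig through the source straight away, the essential technique is to generate a bootstrap script that puts the variables in scope and then dot-sources the deployment script. Here's a rough sketch of that idea (not Calamari's actual code; the class name and parameters are assumptions, and the real bootstrapping does considerably more):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

public class PowerShellBootstrapper {
    // Runs a deployment script with variables in scope by generating a bootstrap
    // script that builds an $OctopusParameters hashtable, then dot-sources the real script
    public static int run(Path deploymentScript, Map<String, String> variables)
            throws IOException, InterruptedException {
        StringBuilder bootstrap = new StringBuilder("$OctopusParameters = @{}\n");
        for (Map.Entry<String, String> v : variables.entrySet()) {
            // Single-quoted PowerShell strings escape ' by doubling it
            bootstrap.append(String.format("$OctopusParameters['%s'] = '%s'%n",
                    v.getKey().replace("'", "''"), v.getValue().replace("'", "''")));
        }
        // Dot-source the deployment script so it sees $OctopusParameters
        bootstrap.append(String.format(". '%s'%n", deploymentScript));

        Path bootstrapFile = Files.createTempFile("bootstrap", ".ps1");
        Files.write(bootstrapFile, bootstrap.toString().getBytes(StandardCharsets.UTF_8));

        Process process = new ProcessBuilder(
                "powershell.exe", "-NoProfile", "-ExecutionPolicy", "Unrestricted",
                "-File", bootstrapFile.toString())
                .inheritIO()
                .start();
        return process.waitFor(); // a non-zero exit code fails the deployment step
    }
}

Dot-sourcing runs the deployment script in the bootstrap's scope, so the variables defined above are visible to it; the exit code is then reported back to the server to decide whether the step succeeded.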

Calamari is published under the Apache license, and we'll continue to work on it in the open. One of my favourite features of this architecture is that you can fork the project, make your own changes, and then tell your Octopus 3.0 server to use your own Calamari package.

March 2015 Community Roundup

Published on: 27 Mar 2015 by Damian Maclennan

There has been so much goodness from our friends out there in the community this month.

First up is Gregor Suttie blogging about "One Week using Octopus Deploy" and his need for Blue/Green Deployments.

David Roberts snuck this gem in right at the end of February: OctoPygmy, a Chrome extension that customizes the dashboard, filtering parts of the Octopus Deploy UI by project group, environment, or role. We think it's a cool idea, and if you're managing a large environment it's worth a look.

Over on the OPTEN blog, Tim Pickin has been running a series on their Octopus Odyssey, automating the deployment of a complex multi-tenant web application. Some good lessons and ideas in there.

You might remember Karoline Klever and her blog series from the last newsletter. She's done it again with the Octopus Deploy Lab, a set of exercises developed for a workshop to help get Octopus neophytes up and running.

Are you in London? If so, get along to the Zimbra Social Usergroup on the 1st of April to hear from Rich Else how the Macmillan Cancer Support community used agile, continuous delivery, and Octopus Deploy to upgrade their online community platform.

Also while we're in London: Ed Andersen recently wrote a good post on the state of .NET web app cloud deployments.

Next up, our friend Jason at Domain recently used Octopus Deploy to help with a rather tricky AWS migration. MacGyver would be proud.

Last but not least, if you're in Chicago in April, get along to the Chicago Code Camp on the 18th. It looks like a fantastic program and Ian Paullin will be presenting an Intro to Octopus Deploy. So if you know anybody who needs that sort of knowledge, send them along.

Introducing Damian Brady

Published on: 26 Mar 2015 by Damian Brady
Damian Brady

My name is Damian Brady (Damo to avoid mixups with DamianM), and I joined Octopus Deploy very recently at the end of February 2015 as a Solution Architect. Prior to Octopus, I worked as a developer, business analyst, and trainer in a range of companies. Most recently, I spent 4 years being exposed to many varied teams and industries as a consultant at SSW. I cut a lot of code, but I also spent a lot of time introducing teams to better ways to work.

I became aware of Octopus less than a year ago, and it’s fair to say I was hooked straight away. It became a go-to tool for any development team I was involved with. I introduced it to clients, wrote internal tools that used its API, and spoke about it, repeatedly, at conferences. Joining the team at Octopus was a natural (albeit challenging) step.

I’m here primarily to help the development community get the most out of Octopus and improve their DevOps process. I’ve always loved pushing people to take that next step to get better software out faster, and this is a great opportunity to do that.

I’m also a Microsoft Visual Studio ALM MVP, so I’ll be spending a bit of time making sure there’s the best possible ALM story for Octopus and Microsoft devs.