
Automation: From Ansible to SaltStack

This is a guest blog post by Karolis Pabijanskas, a SysAdmin at Tesonet.

 

Automation is crucial for surviving scale. When your infrastructure comprises more than a few hosts, automation starts playing an increasingly important role in your day-to-day operations. Thanks to automation, the infrastructure becomes predictable, reproducible and coherent. And the workflow is faster and more agile – you finally have time to tackle all the projects on your hands. Every infrastructure has a tipping point where surviving without automation becomes nearly impossible. Where this tipping point lies depends largely on your infrastructure, but it arrives sooner or later.

However, automation itself isn't immune to scale. Rolling out an infrastructure-wide change may have taken you minutes or seconds when you started, but eventually that timeframe grows to hours and maybe even days, depending on the size of the change. You may apply every known fix and optimisation to your configuration management software to make it run faster, but at some point you hit a wall – your automation isn't scaling with you anymore, and the time has come for a change.


Selecting automation software

Does the above scenario sound familiar? Or maybe you can already see it approaching on the horizon? Well, this is exactly what we've experienced in one of our SysAdmin teams here at Tesonet. We started out with Ansible, an automation tool maintained by Red Hat Inc., and things were good – everything was automated and we could focus on making our infrastructure better.

But as our infrastructure grew, we started hitting wall after wall. Small infrastructure-wide changes were still relatively quick to implement, but bigger ones could take hours to roll out. Eventually, it reached a point where pushing out some changes could take over a day! This is, of course, unacceptable – we need to be able to react to changes quickly and reliably, and a critical fix that takes over a day to deploy just doesn't work. In fact, even an hour is too long for us. We realised that, sadly, Ansible just doesn't cut it for us anymore.

Now don’t get me wrong – we LOVE Ansible. It’s the friendliest automation tool around. Quick to learn, easy to be productive in, simple to deploy and manage. In my opinion, it’s the perfect automation tool for small-scale infrastructures. But eventually, you’ll hit limits, and while there are ways of scaling Ansible past many of those limits, none of them is easy.

So we looked around for a new configuration management tool. While by no means an exhaustive list, here are some of the key requirements we had for our new tool:

  • Has to be VERY fast – should be able to deploy changes very quickly even at scale
  • Easy to scale up as our infrastructure grows
  • Flexible in how we can approach configuration management – i.e. we can bend the tool to our will rather than vice-versa
  • No bottlenecks anytime soon – we can focus on our infrastructure for the foreseeable future and not our tooling

So we tried many tools, and of them all, SaltStack really did seem to fit our requirements nearly perfectly.


SaltStack

Some key details about SaltStack are:

  • It’s VERY fast;
  • It’s basically just a Message Queue (ZeroMQ) – you may already imagine all the possible benefits this can bring;
  • Easy to scale with a syndicated set-up, which basically involves intermediate masters that handle most of the load for a subset of your infrastructure (there's a small configuration sketch right after this list);
  • Allows you to delegate most of the actual work to the target host itself – the master only provides it with various configuration variables and templates;
  • Can be used in a master-slave type of set-up, a masterless set-up over SSH, and many other ways in between;
  • It really is just a big Lego – you can bend it to your will and not the other way around.
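
For the curious, the wiring behind a syndicated set-up is surprisingly small. Here's a hedged sketch of the two relevant configuration pieces (the hostname is made up, and the syndic node also runs the salt-syndic service alongside its own salt-master):

    # /etc/salt/master on the master of masters:
    # tell it that some of its "minions" are actually lower-level masters
    order_masters: True

    # /etc/salt/master on each intermediate (syndic) master, which handles
    # its own subset of minions and forwards results upwards:
    syndic_master: master-of-masters.example.com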

You may have noticed me using some weird vocabulary above, like “syndicated set-up”. Well, that's because SaltStack has some of the weirdest vocabulary of any automation system. So here are a few of the terms SaltStack uses, explained so that everything makes more sense to you (note that these are compared to their Ansible equivalents, but if you've used any automation tool in the past, all of these should make sense to you):

  • Grains – values the host computes about itself. Same as Host Facts in Ansible;
  • Pillars – weirdest word in any configuration management system. Basically variables you define about the host. Same as Ansible Host/Group Variables;
  • External Pillars – Special modules that allow you to get variables about the host from an external source (an API, a database, etc.). These are chainable and easily pluggable;
  • Minion – End server controlled by a Salt Master. Basically a Slave in your typical Master-Slave set-up;
  • State – Think of it as an Ansible Playbook. A collection of tasks to run (there's a small example right after this list);
  • Orchestration Runner – As I mentioned previously, Salt executes most of the work on the end host. As such, there's no easy way to delegate tasks to a different host during a run (unless you allow minions to control each other – yes, there are options for this). An orchestration runner is a special type of state execution, where the state gets applied on the master, but the master can then trigger states on any other host in whatever order you wish. This really allows you to implement an equivalent of Ansible's delegate_to parameter (you'll see a small example of this later, when we get to rebooting minions).
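
To tie the vocabulary together, here's a tiny, hypothetical state that uses a grain and a pillar value (the file name and the webserver:port pillar key are made-up examples, not anything from our actual set-up):

    # webserver/init.sls – hypothetical state
    install_nginx:
      pkg.installed:
        - name: nginx

    # Grains (grains['id']) and pillars (pillar.get) are resolved when the state
    # is rendered; the minion then executes the resulting instructions locally.
    nginx_listen_config:
      file.managed:
        - name: /etc/nginx/conf.d/listen.conf
        - contents: |
            # rendered for {{ grains['id'] }}
            listen {{ salt['pillar.get']('webserver:port', 80) }};
        - require:
          - pkg: install_nginx

You'd apply it to minions with something like salt 'web*' state.apply webserver.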

But even though SaltStack fits us very well, it's definitely not without its quirks. Some things just work in a very odd way, especially for us Ansible users. So what follows is a list of some of the weird things you may encounter at the beginning of your SaltStack journey, taken from the perspective of Ansible users. Note that these assume you're an intermediate to senior System Administrator, and also note that this isn't an exhaustive list – far from it!

 

Language itself

Without going into too much detail, it's worth mentioning that Salt itself uses a chain of renderers to compute the final instructions the minion will execute. This is actually a very powerful concept – Salt states and templates can be written in almost any language (for example, we use Python directly a lot), and if there's no renderer for the language you want, you're free to write one. The default, however, is Jinja followed by YAML. This means that you really have to be very careful with your Jinja, as it must produce syntactically correct YAML. You normally also use Jinja to pull various variables into your states and render them into the produced YAML. So really be careful how you generate your YAML – it can be a big source of bugs and issues!
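
As a small, hypothetical illustration of the default Jinja-then-YAML chain (the pillar key and file names are made up): whatever the Jinja stage emits still has to parse as YAML.

    # nginx/sites.sls – hypothetical; Jinja runs first, YAML parses the result
    {% set sites = salt['pillar.get']('nginx:sites', ['default']) %}

    {% for site in sites %}
    nginx_site_{{ site }}:
      file.managed:
        - name: /etc/nginx/sites-enabled/{{ site }}
        - source: salt://nginx/files/site.conf.jinja
        - template: jinja
    {% endfor %}

    {# A stray indent inside the loop, or an unquoted value containing ':' or '#',
       breaks the YAML stage even though the Jinja stage is perfectly happy. #}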

You also must ensure that every task generated into the final set of execution instructions has a unique name. This is because under the hood these are just Pythonic dictionaries, which do have to have unique key names. Luckily, Salt does track where it includes tasks from, so if you do need to call the same task from different places in your code – just include it from a separate file.
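
For example, instead of pasting the same task into two states (which would collide on the task name if both states end up in the same run), keep it in its own file and include it. File names here are hypothetical:

    # common/admin_user.sls – the shared task lives in exactly one place
    deploy_admin_user:
      user.present:
        - name: admin

    # web/init.sls and db/init.sls can then both pull it in safely with:
    #
    #   include:
    #     - common.admin_user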


State application behaviour

One of the weird things you'll quickly notice is how batching works in SaltStack. In Ansible, batches (the serial keyword) control groups of hosts that states get applied to – a new group does not get touched until the previous one is completed. In SaltStack, however, batching controls the number of hosts that the state runs on at once. This means that if certain hosts finish the run early, SaltStack will start the run on new hosts to keep the batch size the same. This speeds runs up significantly, but it does mean that you have to be more careful with states that you may still consider risky (though, of course, you have testing and staging for this).
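
For reference, batching is set per command. A hedged example (the target pattern and state name are made up):

    # Keep roughly 10% of the targeted minions running the state at any one time;
    # Salt tops the batch back up as minions finish, rather than waiting for the
    # whole group to complete.
    salt --batch-size 10% 'web*' state.apply nginx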

Restarting the salt-minion service during a state run is also not simple. By default, SaltStack doesn't send any response back to the master until the minion completes all of the tasks it was given. This means that if you want to receive any output, you cannot simply restart the minion in the middle of the state run. What's worse, Salt doesn't prevent you from doing so! So make sure you check out the SaltStack FAQ for the proper way of handling this.
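
The gist of that pattern (a hedged sketch adapted from the FAQ; the managed config file is a made-up example) is to restart the minion in a detached background command, so the state run can still report back:

    salt_minion_config:
      file.managed:
        - name: /etc/salt/minion.d/tuning.conf
        - source: salt://salt/files/tuning.conf

    # bg: True detaches the restart, so the minion can answer the master about
    # this state run before the service actually bounces.
    restart_salt_minion:
      cmd.run:
        - name: 'salt-call --local service.restart salt-minion'
        - bg: True
        - onchanges:
          - file: salt_minion_config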

And, as you probably guessed, rebooting the minion is even more difficult. Not only will you not receive any response back about how the state run went, but if the reboot happens in the middle of the run, the remaining tasks won't run either. To get around this, you have to use the Orchestrate Runner, which allows you to apply a state that reboots the minion, wait on the master for the minion to come back online, and then continue on with your work.
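
A hedged sketch of what that looks like as an orchestration state, run on the master (the minion ID and state names are made up):

    # orch/reboot_db.sls – hypothetical orchestration state
    reboot_db01:
      salt.function:
        - name: system.reboot
        - tgt: 'db01'

    # Block on the master until db01 reconnects and fires its start event.
    wait_for_db01:
      salt.wait_for_event:
        - name: salt/minion/db01/start
        - id_list:
          - db01
        - timeout: 600
        - require:
          - salt: reboot_db01

    # Only then continue with the remaining work on the rebooted host.
    finish_db01_setup:
      salt.state:
        - tgt: 'db01'
        - sls:
          - db.post_reboot
        - require:
          - salt: wait_for_db01

You'd kick it off from the master with salt-run state.orchestrate orch.reboot_db.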

 

Minion data from external sources

At scale, you are likely to have your minion data defined somewhere outside of your configuration management tool of choice. It can be a database, an API, or any other system. With SaltStack, this system will likely need to provide two pieces of information about each minion:

  • What this server is (a web server, a database, etc.)
  • Information about the details of this minion (its IP address, IP addresses of other peers it needs to know about, etc.)

SaltStack makes a separate request (via the External Pillar system) to your backend for each piece of information, and a separate request for each minion. While you don’t necessarily have to get both pieces of information for each minion, you can imagine that a state-run on many minions concurrently can easily cause a denial of service to your backend.

My general suggestion is, as long as your requirements allow it, to keep a local cache that you pre-fill with your minion information. Something like Redis, pre-filled via Cron, works perfectly, and also allows your automation system to survive outages in your external configuration store.
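
As a rough illustration of that idea: the external pillar interface itself is tiny – Salt calls a function named ext_pillar(minion_id, pillar, ...) and expects a dictionary back. The module below is a hedged sketch (the module name, Redis key layout and the cron-filled JSON format are all assumptions):

    # redis_cache.py – hypothetical external pillar, dropped into the master's
    # extension_modules/pillar directory and enabled via the ext_pillar config key.
    import json

    import redis


    def ext_pillar(minion_id, pillar, host='127.0.0.1', port=6379, db=0):
        '''
        Return pillar data for this minion from a local Redis cache.
        A cron job is assumed to pre-fill keys such as 'minion:<id>' with JSON
        blobs exported from the real inventory backend.
        '''
        conn = redis.StrictRedis(host=host, port=port, db=db)
        raw = conn.get('minion:{0}'.format(minion_id))
        if raw is None:
            # Missing cache entry: return an empty pillar rather than failing
            # the whole pillar render for this minion.
            return {}
        return json.loads(raw)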

 

Pillar Encryption

SaltStack uses GnuPG for pillar encryption. It's also defined as an external pillar, so pillars are not encrypted by default until you enable it. While GnuPG is familiar to most System Administrators, few use it at scale.

In Salt, you encrypt each pillar individually. This can easily lead to a scenario where any single host has tens or hundreds of pillars to decrypt. Combine this with infrastructure-wide changes on many minions at once, and you can end up with thousands or tens of thousands of pillars to decrypt simultaneously – it actually becomes easy to DoS your GnuPG agent.
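
To make this concrete, here's what a pillar file looks like with Salt's stock GPG renderer (however the decryption is wired up in your set-up, the shape is the same): every secret is its own PGP message, and every one of them is a separate round trip through gpg-agent. The keys are hypothetical and the ciphertext is elided:

    #!jinja|yaml|gpg

    # Each value is an individual PGP message – and an individual decryption –
    # every time this pillar is rendered for a minion.
    secrets:
      db_password: |
        -----BEGIN PGP MESSAGE-----
        ...ciphertext elided...
        -----END PGP MESSAGE-----
      api_token: |
        -----BEGIN PGP MESSAGE-----
        ...ciphertext elided...
        -----END PGP MESSAGE-----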

The reason for this is that the GnuPG agent is single-threaded and, by default, also has a limited amount of secure memory allocated for decryption. While you can spread the load across different masters (each master only decrypts variables for the minions directly under it), the GnuPG agent can easily become the bottleneck in your SaltStack set-up. This is bad because Salt state runs will start to time out while waiting on the GnuPG system.

So if you can, we highly suggest running HashiCorp Vault instead of GnuPG as your encrypted pillar system (it's the most popular alternative). But if you need to stick to GnuPG, then make sure you run the latest GnuPG and libgcrypt possible (Debian 10 ships suitable versions). Once you have the correct versions, enable the ability for secure memory allocations to be increased (see the gpg-agent man page or the GnuPG bug tracker for details), and, if possible, try to batch-encrypt variables into a single PGP message, so there is only one decryption to process per host. The flexibility of the External Pillar system allows you to achieve this in many different ways, so be sure to experiment!
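
For the secure memory part, the knob in question is gpg-agent's auto-expand-secmem option, available in recent GnuPG releases with a new enough libgcrypt. A hedged sketch – the value below is an assumption, so double-check the man page for the exact semantics on your version:

    # gpg-agent.conf for the user running the salt-master (hedged sketch).
    # Lets the agent grow its secure memory pool instead of failing once the
    # initial allocation is exhausted.
    auto-expand-secmem 0x100000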

 

A few other tips
  • Read the SaltStack code, and not just the docs. SaltStack really tries to do a lot in many different ways, and not everything is fully documented.
  • For most of the issues you may have with SaltStack, check out their bug trackers. Chances are you're not the first one, and there are solutions or, at least, workarounds.
  • Understand that Salt can be buggy at times due to how many different things it tries to do and the different approaches it allows. Often the bugs aren't even Salt's own fault, but the fault of external libraries used by various Salt modules.

I've only scratched the surface of our SaltStack migration in this article. There are many other things to talk about, such as the Salt event system, which is really the core of SaltStack's behaviour and the true source of its power. Make sure you experiment with the various ways you can accomplish your tasks and fulfil your goals. SaltStack is a wonderfully flexible, albeit very complex, system, and the results you get will be directly proportional to the time you invest.

I hope some of the issues and behaviours mentioned in this article will help make your initial foray into the SaltStack world smoother and easier. There’s a lot more to learn, so make sure you enjoy the journey!