Let's Encrypt Support Added

tl;dr: Support for automated Let's Encrypt SSL certificates has been added to both Nginx and Apache webservers.

You are now able to spin up a server with Nginx or Apache and choose to generate your valid (and free!) SSL certificates using Let's Encrypt. The certs are automatically renewed once a month.

What is Let's Encrypt?

Let's Encrypt is a new SSL certificate provider taking the industry by storm.

It used to be that you would need to pay at least $10/year for a single-domain certificate. Some registrars like Namecheap would provide a free certificate for your domain's first year, but would require you to purchase a certificate once the free one expired.

Thanks to Let's Encrypt you are now able to create SSL certificates that are accepted by all major web browsers. Best of all, they are completely free.

However, there are some caveats to keep in mind:

The certificates are only valid for 90 days.

Usually certificates from your traditional vendors will last at least one year.

With Let's Encrypt you must renew your certificate at least once every 90 days. If you do not renew before 90 days, your certificate will expire and will no longer be valid.

This is actually a good decision, as it forces developers to implement auto-renewal into their devops workflow. Every one of us has scrambled a year after purchasing our certificate because we simply forgot to keep track and oops! it's expired now.
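In practice, auto-renewal can be as small as a single cron job. A minimal sketch, assuming the `certbot` client and an Nginx server managed by systemd (PuPHPet wires this up for you; this entry is only to illustrate the idea):

```shell
# m h dom mon dow  -- run on the 1st of each month at 03:00.
# `certbot renew` only replaces certificates nearing expiry, and the
# reload makes Nginx pick up the new files.
0 3 1 * *  certbot renew --quiet && systemctl reload nginx
```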

No wildcard certificates available

Many registrars offer wildcard certificates for purchase. This means instead of purchasing one certificate for each of sub1.bar.com, sub2.bar.com and sub3.bar.com, you can purchase a single certificate that is valid for *.bar.com.

Let's Encrypt does not currently offer this, and probably never will. For each unique domain and subdomain you wish to protect you must request a new certificate. Thankfully this is a painless, streamlined process, as long as you stay within their rate limits.
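Concretely, that means one client invocation per name, or one invocation listing every name explicitly. A command sketch, assuming the `certbot` client (domains and webroot path are illustrative):

```shell
# One certificate per subdomain...
certbot certonly --webroot -w /var/www/bar -d sub1.bar.com
certbot certonly --webroot -w /var/www/bar -d sub2.bar.com

# ...or one certificate listing each name explicitly as a
# subject alternative name (up to 100 names per certificate):
certbot certonly --webroot -w /var/www/bar \
    -d sub1.bar.com -d sub2.bar.com -d sub3.bar.com
```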

No extended validation

Let's Encrypt does not currently offer the cute green bar visible on PayPal or your bank's website.

This type of certificate requires the vendor to manually verify your information and cannot be completely automated.

If you are dealing with data that requires extended validation certificates you are still required to use one of the traditional vendors.

Rate limits

Let's Encrypt enforces fairly generous rate limits. You can view full details here but the tl;dr of it is:

  • 20 certificate issuances per domain per 7 days,
  • 5 certificates per unique set of FQDNs per 7 days,
  • 500 registrations per IP per 3 hours,
  • 300 pending authorizations per account per week, and
  • 100 subject alternative names per certificate

If you have a fairly small number of domains and subdomains you should fall well within the rate limits. If you need certificates for a large pool of domains, you may run into them.

As of right now the only solution is to simply move the domains over in small chunks as the rate limit clears for you.

Challenges Integrating into PuPHPet

Adding Let's Encrypt into the existing PuPHPet workflow posed some challenges that I had not anticipated.

I will assume that the most common scenario will be users creating completely new manifests for new servers.

Pointing DNS so domain name resolves

Let's Encrypt requires that the domain you are creating a certificate for be publicly accessible. This means you cannot create a certificate for a development environment (http://puphpet.local), or for a non-resolving domain.

If you are deploying to any host like Rackspace or Digital Ocean, you should already have updated your nameservers to point to the proper place. Newly registered domains may take several hours to be visible to the internet.

However, once your nameservers are updated you still need to tell your host which server the domain will reside on. This means that before spinning up your new server, your host must already have an A Record in place for your domain, and this record must point to the correct IP address.

How do you know your new server's IP address before spinning it up? You don't!

There are two possible solutions to this problem:

Update the A Record after your server spins up

This means the initial attempt will fail. Let's Encrypt will not be able to connect to your domain to verify your ownership of it and will refuse to provide a certificate. This will cause vagrant up to fail.

Once the server is up and has an IP address, you can then update the A Record to point to the new server and run vagrant provision.
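The whole dance looks roughly like this (a sketch; the interface name and commands assume a typical Vagrant project):

```shell
vagrant up                            # fails at the Let's Encrypt step -- expected
vagrant ssh -c "ip addr show eth0"    # or read the IP from your host's dashboard
# ...update the A Record to point at that IP and wait for DNS to propagate...
vagrant provision                     # Puppet re-runs; cert generation succeeds
```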

Floating IP address

Your host may provide a feature called a floating IP address. Simply put, it is a publicly available IP address that belongs to you and that you can assign to any server within your infrastructure.

If you create a floating IP and set your A Record to point to it then you can actually assign the IP address to the new server right after it is created and visible in your host's dashboard.

There is usually a 2-to-3-minute gap between when the server becomes available to have a floating IP assigned and when the Let's Encrypt Puppet module runs and the domain needs to resolve to the server, so you have a nice cushion to set things up in time!

Nginx/Apache vhosts pointing to certificate before they are generated

The recommended way to run Let's Encrypt is to use its included Apache plugin, or to install the Nginx plugin. These handle everything that needs to be done by creating temporary vhost configs and removing them once the cert is generated.

Unfortunately, the way Puppet works, the certs must already exist by the time the Nginx and Apache vhosts are set up, since each vhost points to where it expects the certs to be located.

On initial vagrant up, the certs will not be generated in time, failing the build.

The solution is to use the Let's Encrypt included standalone server option to listen on port 80 and generate the certs.
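In standalone mode the client briefly runs its own web server on port 80 to answer the ACME challenge, so no vhost has to exist yet. A command sketch, assuming the `certbot` client (domain illustrative):

```shell
certbot certonly --standalone -d example.com
```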

Webserver integration/port conflicts

Needing to run Let's Encrypt using its standalone server option means that it must run and generate certs before Nginx or Apache are even running because they lock down port 80.

Attempting to run the process after Nginx or Apache are installed, but before vhosts are configured, will fail because port 80 is already in use and Let's Encrypt cannot listen on it.

On subsequent cert renewals Let's Encrypt creates a temporary directory within your target vhost webroots to verify domain ownership, and removes this directory once the certificate has been renewed and downloaded. Neither Nginx nor Apache needs to be turned off during the renewal process, and with luck you will never notice a renewal happened unless you check the certificate's issued date!
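Renewals use the webroot plugin instead of the standalone server: the challenge file lands inside the live docroot and is served by the already-running web server. A sketch, assuming the `certbot` client (paths illustrative):

```shell
certbot certonly --webroot -w /var/www/example.com -d example.com
# The CA verifies ownership by fetching:
#   http://example.com/.well-known/acme-challenge/<token>
```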

Let's Encrypt offering free certificates that require constant renewal ushers in a new era for encrypting everything by default.

Once you remove the cost and have a machine handle the process for you, enforcing SSL encryption for all your public websites is no longer a hassle, but simply something you do for a new domain, as simple as choosing to install Nginx or PHP.

There is far more detail on Let's Encrypt than I could possibly write about here. I encourage you to learn more about this awesome new service and show your support by using it to create your next SSL certificates!

Have fun, learn something!

Juan Treminio

Multi-Machine Support Added

tl;dr: You can spin up multiple machines with a single vagrant up. Unfortunately, old configs are no longer drag/drop compatible!

Since its creation, PuPHPet has had a well-defined purpose: Provide an interface for developers to quickly and easily configure, launch and distribute highly customized virtual machines.

The project has more than its fair share of bugs. In fact, it has the unenviable position of having to react to upstream changes in outside dependencies that constantly break existing, working configs. That said, I do believe that so far it has accomplished its goal quite well!

One "vagrant up", Limitless Machines

One thing that has constantly remained on my wishlist feature-wise has been the ability to spin up multiple machines with a single vagrant up command.

Until very recently, PuPHPet has generated a config for a single machine. If you wanted to create another identical machine, you would need to copy the whole unzipped directory, delete the .vagrant directory if it existed within, and then run vagrant up in the new directory. Not many steps, really, but now you have identical directories for identical machines, with double the number of Puppet module files littering your hard drive.

Now, however, you can define multiple machines in a single config file!

For some time you have been able to install Nginx or Apache and add as many virtualhosts as you'd like. I took the same concept and applied it to defining as many machines as you'd like.

Thankfully, Vagrant has had native support for multiple machines for some time now.
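Under the hood this maps to Vagrant's `config.vm.define` blocks. A minimal sketch (not PuPHPet's actual generated Vagrantfile; boxes, names and IPs are illustrative):

```ruby
Vagrant.configure("2") do |config|
  # Each define block is an independent machine; `vagrant up` boots
  # all of them, while `vagrant up web` boots just one.
  config.vm.define "web" do |web|
    web.vm.box = "ubuntu/trusty64"
    web.vm.network "private_network", ip: "192.168.56.101"
  end

  config.vm.define "db" do |db|
    db.vm.box = "ubuntu/trusty64"
    db.vm.network "private_network", ip: "192.168.56.102"
  end
end
```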

With this recent update, you can create identical machines that you can deploy to any provider you would like, including locally, Rackspace, Digital Ocean, AWS, etc.

Unfortunately, due to limitations of Vagrant, I was unable to support choosing a different provider per machine from a single config. For example, you cannot currently spin up one machine on AWS, another on Rackspace, and another on Linode. All machines must be deployed to the same provider. [0]

One Provider, Multiple Datacenters

You are limited to a single provider, yes, but if the provider offers multiple datacenters you are free to deploy to as many datacenters as your wallet allows.

Take Digital Ocean, for example. You can currently deploy to the following datacenters:

New York 1, New York 2, New York 3, San Francisco 1, Amsterdam 1, Amsterdam 2, Amsterdam 3, Singapore 1, London 1, Frankfurt 1

Now you can create as many machines as you want, from one config, and choose to deploy each machine to a different (or same) datacenter.

Aside from that, you can also customize memory and CPU per machine.

Cool! So What Is This For?

Adding multi-machine support to PuPHPet introduces a massive benefit: all of a sudden you can easily and quickly create a highly redundant, scalable, high-availability application!

For instance, you can create one config that spins up 10 machines, spread across the globe. These machines handle your application exclusively. Spin up another 3 machines to house your replicated databases.

Then, you can either spin up another machine to house your static files or do it The Right Way and use S3.

Lastly, you can install HAProxy to distribute the load between each machine, and to handle the case when one machine stops responding.

With little effort you have created a very powerful network of machines to make your application faster and more stable.

The Thing About HAProxy...

HAProxy support is not currently implemented within the GUI. I have been working on a solution to add HAProxy and have been using it privately. It works well.

However, even if you create a server that houses HAProxy, you still need to tell HAProxy itself about the IP addresses of the machines you want it to forward traffic to. Currently, this would need to be handled manually - you spin up your app/database servers, grab the IP address of the machines, add those to your HAProxy config and then spin that server up.

If you add new servers you need to add those IPs to the HAProxy config and vagrant provision.
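The manual part is that each backend IP is hard-coded. A minimal haproxy.cfg sketch (names and IPs illustrative) showing what has to be edited whenever a machine is added:

```
frontend www
    bind *:80
    default_backend app_servers

backend app_servers
    balance roundrobin
    server app1 192.168.56.101:80 check
    server app2 192.168.56.102:80 check
```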

This works OK, but no one likes doing manual tasks that can easily be automated.

One solution I've come up with is using Jenkins to spin up new machines, grab the new IP addresses, add them to an existing HAProxy config and then provision that machine.

If I cannot figure out a more elegant way that does not require a build server, I will eventually write a tutorial on how to set this up.

Broken Backward Compatibility

Unfortunately with this new feature comes one small downside: configs generated before this change are no longer fully compatible with the GUI. If you drag/drop your config.yaml file, everything will be populated as before, except no machines will be defined.

All you need to do is simply choose your provider and add your machine(s) as usual. That's it!

Have fun, learn something!

Juan Treminio

[0] Truthfully, it is rather easy to support multiple providers for multiple machines in a single config. The problem is that it requires passing the --provider flag via the command line. I was not happy with the final implementation, however, and decided to stick with one config, one provider for now.

Symlink Support in Windows and Virtualbox

tl;dr: Setting up a VM with symlinks inside shared folders is now pretty easy!

I recently purchased a Windows machine, with one of the primary goals being to increase PuPHPet's support for the Windows OS. On the roadmap is Hyper-V support, but right now I am picking low-hanging fruit.

Line Endings

The first thing I fixed was Windows' pesky line endings messing up shell scripts. Bash would fall apart when attempting to run scripts that contained \r\n. This usually happened when a Windows user edited a file using an editor that had not been configured to always use \n. A simple call to sed fixed this fairly quickly.
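The fix can be sketched in two lines (GNU sed; the filename is illustrative):

```shell
# Create a script with Windows (CRLF) line endings, then strip the
# trailing \r from every line -- the same transformation PuPHPet
# applies to its shell scripts before they run.
printf 'echo hello\r\necho world\r\n' > script.sh
sed -i 's/\r$//' script.sh
```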

The next thing had been a thorn in my side for some time. Unfortunately, not having a Windows machine handy forced me to keep putting this off, until now.

Some packages like Apache on CentOS attempt to create symlinks in the /var/www directory. Since Windows doesn't have native support for Linux-style symlinks, this would fail and break provisioning.

The solution was not very easy to find, but thanks to some very helpful people I was finally able to get this working properly.

The first step is installing Polsedit - User Policies Editor. When you open it, look for Create symbolic links.

Polsedit "SeCreateSymbolicLinkPrivilege"

Double click the row, click Add User or Group... and look for your username in the list. Closing the app will automatically save your choices, and you'll need to reboot your machine.

You will only need to do this one time.

After rebooting, the only step you must keep in mind is to always run Vagrant via "Run as administrator". You can accomplish this by running cmd.exe (or, preferably, Cygwin) with administrator privileges.

That's it! The code that runs the magic is here.
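For the curious, the relevant VirtualBox tweak boils down to roughly this in the Vagrantfile (a sketch; the share name `v-root` must match your synced folder's name and is illustrative here):

```ruby
config.vm.provider "virtualbox" do |vb|
  # Allow symlink creation inside the named shared folder.
  vb.customize [
    "setextradata", :id,
    "VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"
  ]
end
```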

In short:

  • Download Polsedit and add your user to the SeCreateSymbolicLinkPrivilege section
  • Always run cmd.exe with "Run as administrator".

Here's some proof!

Symlinks working on Windows host

Have fun, learn something!

Juan Treminio