Ubuntu 16.04, CentOS 7, PHP 7.1 support added, Debian and Ubuntu 12.04 support dropped


  • Ubuntu 16.04 64-bit and 32-bit support added
  • CentOS 7 support added
  • Ubuntu 12.04 support dropped
  • Debian support dropped
  • PHP 7.1 support added
  • Major Puppet code refactor
  • All boxes upgraded

New OS support

Ubuntu 16.04 and CentOS 7 both came out several months ago. I held off on adding support for them to PuPHPet.com until recently because I needed the repos for all the packages PuPHPet uses to catch up with the new releases.

Now that the repos have caught up, I took the plunge, created the base boxes, and updated the Puppet code to work with the new versions.

Ubuntu 16.04 is the only distro on PuPHPet.com with a 32-bit box.

The Ubuntu 16.04 64-bit box has also been made the default selection for local VMs.

Dropped OS support

With the new OS support, the number of distros on PuPHPet.com ballooned, as did the amount of work required for maintenance and testing.

With that in mind, I decided to drop support for Ubuntu 12.04 and Debian.

Ubuntu 12.04 was an easy choice - it is now 4 years old and 2 versions behind the stable LTS. You should really consider upgrading to at least 14.04. Repos with up-to-date packages for this distro were becoming harder and harder to find, with most package maintainers having moved on to 14.04 long ago.

Dropping Debian was not as easy a choice as Ubuntu 12.04, but it was still necessary. While Debian is a fine, stable system, it is far too hard to keep up to date with the other distros.

For example, most updated packages come from DotDeb, which does not support beta versions of software like PHP 7.1 (at the time of this writing). It also goes down occasionally, and I had resorted to keeping a mirror of it on my own servers.

For Apache 2.4, PuPHPet depended on a single source, d7031.de - the only repo I have been able to find that offers Apache 2.4 for Debian Wheezy. It also recently went down for upwards of a week, breaking all new Debian PuPHPet boxes.

Attempting to upgrade Apache from the official Debian repos upgrades the whole system to Jessie, which is not acceptable.

Some packages simply would not work with Debian Wheezy at all, due to severely outdated dependencies that did not have updated versions available for the distro.

Aside from that, having to support 3 distinctly different distros, each with their own way of handling config files, was simply too much work.

Upgrade your boxes!

If you have previously created an Ubuntu 14.04 or CentOS 6 box, you must upgrade your box via Vagrant.

The easiest way is to simply remove the downloaded box from your system and let Vagrant download the new versions. The commands are:

vagrant box remove puphpet/ubuntu1404-x64
vagrant box remove puphpet/centos65-x64

The next time you run vagrant up, the new version will be downloaded.

PHP 7.1 support added

PHP 7.1 is now in RC status and is more or less stable (NOT FOR PRODUCTION!).

You may now spin up a new box and test it out for yourself. All currently supported distros have PHP 7.1 support!

The default PHP version has been updated to 7.0 in the GUI.

MySQL 5.5 support dropped

MySQL 5.5 is old and is not found in the official MySQL repos. Upgrade to 5.6 or 5.7.

Major refactoring of Puppet code

For you Puppet-heads interested in this sort of thing, some major reshuffling and rewriting has occurred.

Previously the Puppet code was split between the PuPHPet repo itself, and the puppet-puphpet repo. All Puppet code has now been moved into puppet-puphpet.

This is a huge step toward proper versioning support, since the GUI no longer needs to be kept in lockstep with the puppet-puphpet repo. If you know the git commit of the puppet-puphpet repo you can simply check it out yourself. There is still some work that needs to be done, but this is a major part of the effort.

In addition, all my Puppet code has been rewritten to be, imho, a bit easier to understand and work with. I am still not as comfortable with immutable Puppet code as I am with PHP, but I feel I am maturing as a Puppet developer and my code is becoming a bit better over time. I am also in the very early stages of adding tests to my Puppet code.

All 3rd party modules have also been updated, mostly for CentOS 7 support.

With big changes comes the very real possibility of new bugs. I have tested just about everything, but time has shown me, again and again, that I miss things and people run into them. Please forgive any bugs you encounter, and open a new GitHub issue!

PuPHPet has become more popular than I ever imagined it would be. I honestly believed that maybe 10 people would find the project useful, and now it has over 3,600 GitHub stars and the boxes have millions and millions of downloads. That is humbling and scary!

That said, PuPHPet remains and will continue to be a completely open-sourced MIT-licensed project. If you see something, fix something!

Have fun, learn something!

Juan Treminio

Let's Encrypt Support Added

tl;dr: Support for automated Let's Encrypt SSL certificates has been added to both Nginx and Apache webservers.

You are now able to spin up a server with Nginx or Apache and choose to generate your valid (and free!) SSL certificates using Let's Encrypt. The certs are automatically renewed once a month.

What is Let's Encrypt?

Let's Encrypt is a new SSL certificate provider taking the industry by storm.

It used to be that you would need to pay at least $10/year for a single-domain certificate. Some registrars like Namecheap would provide a free certificate for your domain's first year, but would require you purchase a certificate once the free one expired.

Thanks to Let's Encrypt you are now able to create SSL certificates that are accepted by all major web browsers. Best of all, they are completely free.

However, there are some caveats to keep in mind:

The certificates are only valid for 90 days.

Usually certificates from your traditional vendors will last at least one year.

With Let's Encrypt you must renew your certificate at least once every 90 days. If you do not renew before 90 days, your certificate will expire and will no longer be valid.

This is actually a good decision, as it forces developers to implement auto-renewal into their devops workflow. Every one of us has scrambled a year after purchasing our certificate because we simply forgot to keep track and oops! it's expired now.
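As a rough sketch of what that auto-renewal looks like (the client name, binary path, schedule, and reload command here are my assumptions for illustration, not PuPHPet's exact generated job), a monthly cron entry could be:

```crontab
# Hypothetical monthly renewal job - runs at 03:00 on the 1st of each month.
# "renew" only re-issues certificates nearing expiry; the reload makes the
# webserver pick up the fresh cert afterwards.
0 3 1 * * /usr/bin/letsencrypt renew --quiet && service nginx reload
```

Since certificates last 90 days and this fires monthly, a single missed run still leaves plenty of margin before expiry.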

No wildcard certificates available

Many registrars offer wildcard certificates for purchase. This means instead of purchasing one certificate for each of sub1.bar.com, sub2.bar.com and sub3.bar.com, you can purchase a single certificate that is valid for *.bar.com.

Let's Encrypt does not currently offer this, and probably never will. For each unique domain and subdomain you wish to protect you must request a new certificate. Thankfully this is a painless, streamlined process, as long as you stay within their rate limits.
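In practice, the lack of wildcards just means listing each subdomain explicitly on one certificate as subject alternative names. A hedged sketch (client name and flags assumed; the domains are the examples from above):

```shell
# One certificate covering the bare domain plus each subdomain explicitly,
# via repeated -d flags, instead of a single *.bar.com wildcard:
letsencrypt certonly -d bar.com -d sub1.bar.com -d sub2.bar.com -d sub3.bar.com
```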

No extended validation

Let's Encrypt does not currently offer the cute green bar visible at PayPal or your bank.

This type of certificate requires the vendor to manually verify your information and cannot be completely automated.

If you are dealing with data that requires extended validation certificates you are still required to use one of the traditional vendors.

Rate limits

Let's Encrypt enforces fairly generous rate limits. You can view full details here but the tl;dr of it is:

  • 20 certificate issuances per domain per 7 days,
  • 5 certificates per unique set of FQDNs per 7 days,
  • 500 registrations per IP per 3 hours,
  • 300 pending authorizations per account per week, and
  • 100 subject alternative names per certificate

If you have a fairly small number of domains and subdomains you should fall well within the rate limits. If you have a large pool of domains that need certificates, you may run into them.

As of right now the only solution is to simply move the domains over in small chunks as the rate limit clears for you.

Challenges Integrating into PuPHPet

Adding Let's Encrypt into the existing PuPHPet workflow posed some challenges that I had not anticipated.

I will assume that the most common scenario will be users creating completely new manifests for new servers.

Pointing DNS so domain name resolves

Let's Encrypt requires that the domain you are creating a certificate for be publicly accessible. This means you cannot create a certificate for a development environment (http://puphpet.local), or for a non-resolving domain.

If you are deploying to any host like Rackspace or Digital Ocean, you should already have updated your nameservers to point to the proper place. Newly registered domains may take several hours to be visible to the internet.

However, once your nameservers are updated you still need to tell your host which server the domain will reside on. This means that before spinning up your new server, your host must already have an A Record in place for your domain, and this record must point to the correct IP address.

How do you know your new server's IP address before spinning it up? You don't!

There are two possible solutions to this problem:

Update the A Record after your server spins up

This means the initial attempt will fail. Let's Encrypt will not be able to connect to your domain to verify your ownership of it and will refuse to provide a certificate. This will cause vagrant up to fail.

Once the server is up and has an IP address, you can then update the A Record to point to the new server and run vagrant provision.
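In command terms, this first solution looks roughly like the following (a sketch of the flow, not exact output):

```shell
# First attempt: no A Record points at the new machine yet, so the
# Let's Encrypt ownership challenge fails and provisioning aborts.
vagrant up

# After updating the A Record in your host's dashboard to the new
# server's IP (and waiting for DNS to propagate), re-run provisioning:
vagrant provision
```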

Floating IP address

Your host may provide a feature called a floating IP address. Simply put, it is a publicly available IP address that belongs to you and that you can assign to any server within your infrastructure.

If you create a floating IP and set your A Record to point to it then you can actually assign the IP address to the new server right after it is created and visible in your host's dashboard.

There is usually a 2 to 3 minute gap between when the server is available to have a floating IP assigned to it, and when the Let's Encrypt Puppet module is run and the domain needs to resolve to the server, so you have a nice cushion to be able to set things up in time!

Nginx/Apache vhosts pointing to certificates before they are generated

The recommended way to run Let's Encrypt is to use its included Apache plugin, or to install the Nginx plugin. These handle everything that needs to be done by creating temporary vhost configs and removing them once the cert is generated.

Unfortunately, the way Puppet works, the certs must already exist by the time the Nginx and Apache vhosts are set up, since the vhosts point to where the certs are expected to be located.

On initial vagrant up, the certs will not be generated in time, failing the build.

The solution is to use the Let's Encrypt included standalone server option to listen on port 80 and generate the certs.
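Run by hand, standalone mode looks something like this (the client name and flags are my assumptions for illustration; PuPHPet wires this up for you through Puppet):

```shell
# The standalone authenticator starts its own temporary webserver on
# port 80 to answer the ownership challenge, then exits - which is why
# it must run before Nginx/Apache claim the port.
letsencrypt certonly --standalone -d example.com
```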

Webserver integration/port conflicts

Needing to run Let's Encrypt in its standalone server mode means it must run and generate certs before Nginx or Apache are even running, because once started they lock down port 80.

Attempting to run the process after Nginx or Apache are installed, but before vhosts are configured, will fail because port 80 is already in use and Let's Encrypt cannot listen on it.

On subsequent cert renewals, Let's Encrypt creates a temporary directory within your target vhost webroots to verify domain ownership, and removes it once the certificate has been renewed and downloaded. Neither Nginx nor Apache needs to be turned off during the renewal process, and with luck you will never even notice a renewal happened unless you check the certificate's issue date!
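That renewal behavior corresponds to the client's webroot mode. A hedged sketch of a manual equivalent (the path and flags are assumptions, not PuPHPet's exact invocation):

```shell
# Webroot mode drops the challenge file under the live vhost's docroot
# (inside .well-known/acme-challenge/), so the webserver keeps serving
# traffic on port 80 while ownership is verified.
letsencrypt certonly --webroot -w /var/www/example.com -d example.com
```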

Let's Encrypt offering free certificates that require constant renewal ushers in a new era for encrypting everything by default.

Once you remove the cost and let a machine handle the process for you, enforcing SSL encryption for all your public websites is no longer a hassle - it is simply something you do for a new domain, as easy as choosing to install Nginx or PHP.

There is far more detail on Let's Encrypt than I could possibly cover here. I encourage you to learn more about this awesome new service and show your support by using it to create your next SSL certificates!

Have fun, learn something!

Juan Treminio

Multi-Machine Support Added

tl;dr: You can spin up multiple machines with a single vagrant up. Unfortunately, old configs are no longer drag/drop compatible!

Since its creation, PuPHPet has had a well-defined purpose: Provide an interface for developers to quickly and easily configure, launch and distribute highly customized virtual machines.

The project has more than its fair share of bugs. In fact, it has the unenviable position of having to react to upstream changes in outside dependencies that constantly break existing, working configs. That said, I do believe that so far it has accomplished its goal quite well!

One "vagrant up", Limitless Machines

One thing that has constantly remained on my wishlist feature-wise has been the ability to spin up multiple machines with a single vagrant up command.

Until very recently, PuPHPet has generated a config for a single machine. If you wanted to create another identical machine, you would need to copy the whole unzipped directory, delete the .vagrant directory if it existed within, and then run vagrant up in the new directory. Not many steps, really, but now you have identical directories for identical machines, with double the number of Puppet module files littering your hard drive.

Now, however, you can define multiple machines in a single config file!

For some time you have been able to install Nginx or Apache and add as many virtualhosts as you'd like. I took the same concept and applied it to defining as many machines as you'd like.

Thankfully, Vagrant has had native support for multiple machines for some time now.
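Under the hood this maps onto Vagrant's multi-machine syntax. A minimal illustrative Vagrantfile (the box name and IPs are made up; PuPHPet generates the real thing from your config.yaml):

```ruby
Vagrant.configure("2") do |config|
  # Each "define" block is a separate machine with its own settings.
  config.vm.define "web" do |web|
    web.vm.box = "puphpet/ubuntu1604-x64"
    web.vm.network "private_network", ip: "192.168.56.101"
  end

  config.vm.define "db" do |db|
    db.vm.box = "puphpet/ubuntu1604-x64"
    db.vm.network "private_network", ip: "192.168.56.102"
  end
end
```

With this in place, vagrant up brings up both machines, while vagrant up web targets just one.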

With this recent update, you can create identical machines that you can deploy to any provider you would like, including locally, Rackspace, Digital Ocean, AWS, etc.

Unfortunately, due to limitations of Vagrant, I was unable to support choosing a different provider for each machine from a single config. For example, you cannot currently spin up one machine on AWS, another on Rackspace, and another on Linode. All machines must be deployed to the same provider. [0]

One Provider, Multiple Datacenters

You are limited to a single provider, yes, but if the provider offers multiple datacenters you are free to deploy to as many datacenters as your wallet allows.

Take Digital Ocean, for example. You can currently deploy to the following datacenters:

New York 1, New York 2, New York 3, San Francisco 1, Amsterdam 1, Amsterdam 2, Amsterdam 3, Singapore 1, London 1, Frankfurt 1

Now you can create as many machines as you want, from one config, and choose to deploy each machine to a different (or same) datacenter.

Aside from that, you can also customize memory and CPU per machine.

Cool! So What Is This For?

Adding multiple-machine support to PuPHPet introduces a massive benefit: all of a sudden you can easily and quickly create a highly redundant, scalable, high-availability application!

For instance, you can create one config that spins up 10 machines, spread across the globe. These machines handle your application exclusively. Spin up another 3 machines to house your replicated databases.

Then, you can either spin up another machine to house your static files or do it The Right Way and use S3.

Lastly, you can install HAProxy to distribute the load between each machine, and to handle the case when one machine stops responding.

With little effort you have created a very powerful network of machines to make your application faster and more stable.

The Thing About HAProxy...

HAProxy support is not currently implemented within the GUI. I have been working on a solution to add HAProxy and have been using it privately. It works well.

However, even if you create a server that houses HAProxy, you still need to tell HAProxy itself about the IP addresses of the machines you want it to forward traffic to. Currently, this would need to be handled manually - you spin up your app/database servers, grab the IP address of the machines, add those to your HAProxy config and then spin that server up.

If you add new servers you need to add those IPs to the HAProxy config and vagrant provision.
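To make that manual step concrete, the part of the HAProxy config that needs editing is the backend's server list. A hypothetical fragment (backend name and IPs invented for illustration):

```haproxy
# Each app server must be listed by IP; adding a machine means adding a
# "server" line here, then re-running vagrant provision on the HAProxy box.
backend app_servers
    balance roundrobin
    server app1 192.168.56.101:80 check
    server app2 192.168.56.102:80 check
```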

This works ok, but no one likes to do manual tasks that can be easily automated.

One solution I've come up with is using Jenkins to spin up new machines, grab the new IP addresses, add them to an existing HAProxy config and then provision that machine.

If I cannot figure out a more elegant way that does not require a build server, I will eventually write a tutorial on how to set this up.

Broken Backward Compatibility

Unfortunately with this new feature comes one small downside: configs generated before this change are no longer fully compatible with the GUI. If you drag/drop your config.yaml file, everything will be populated as before, except no machines will be defined.

All you need to do is simply choose your provider and add your machine(s) as usual. That's it!

Have fun, learn something!

Juan Treminio

[0] Truthfully, it is rather easy to add multiple providers to multiple machines in a single config. The problem is that it requires passing the --provider flag via the command line. I was not happy with the final implementation, and decided to stick with one config, one provider for now.