New Server Setup with Caching!

A while back, when rebuilding my file server, I had already bought into the VM (Virtual Machine) craze that’s sweeping the IT world. I knew I couldn’t afford both a dedicated VM server and a dedicated file server, so I had to consolidate the two functions into a single box. This has worked quite well for me, and with Proxmox I’ve managed to move vital core functions like web, mail, DNS, and more over to VMs. Installing extra packages like Samba and NFS, along with a custom filesystem layout based on LVM/mdadm for RAID1 and RAID6 arrays, has also worked out very well for its file serving role.
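For anyone curious, the storage layout looks roughly like the following. This is just a sketch, not my exact configuration; the device names, array sizes, and volume names are all illustrative:

    # RAID1 mirror for OS/VM storage (device names are illustrative)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # RAID6 array for bulk file storage
    mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[c-h]1

    # Layer LVM on top so volumes can be resized or carved up later
    pvcreate /dev/md0 /dev/md1
    vgcreate vg_mirror /dev/md0
    vgcreate vg_bulk /dev/md1
    lvcreate -L 100G -n vms vg_mirror
    lvcreate -l 100%FREE -n storage vg_bulk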

My main concern with moving so much onto this one machine, however, was the massive single point of failure should anything happen to it. The original web services setup relied very heavily on this machine, and performance was likewise a concern. The host machine is an 8-core Opteron 6212 at 2.6GHz with 32GB of RAM. That’s more than enough for my simple setup, but I still had my concerns.

The original web services layout used my pair of Dell 1850s as two web nodes, a VM running Zen Load Balancer to split traffic between them, a VM running MySQL, a VM providing shared NFS storage for docroots, and a final VM which mirrored the 1850 web nodes in case I ever needed to run everything from a single machine. This worked out great, not only in theory, but in practice.
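The shared-storage VM simply exports the docroots over NFS to the web nodes. A minimal /etc/exports entry for that would look something like this (the path and subnet here are placeholders, not my actual values):

    # Export the shared docroots read-write to the web node subnet
    /srv/docroots  192.168.1.0/24(rw,sync,no_root_squash)

Each web node then mounts it with something like "mount -t nfs storage:/srv/docroots /var/www", again with hypothetical names.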

The sites I host, this one included, were powered by this setup for the past couple of months without a second thought. However, as I learned more about the technologies in use at my place of work, I wanted to test some of them myself to become better familiar with them. One of these was Varnish Cache. We use it at work for a major set of sites, and the more I learned about it, the more I thought it could help me and my sites. So I read some documentation, created yet another VM, and set out to figure out how best to use it.

Thanks to trial and error, the desire to offload some VM work, and plain boredom, I ended up with a mostly new setup. This meant reconfiguring each of the 1850s from RAID1 to RAID0 for pure speed, since redundancy isn’t required for them: I have content backups, and VMs that can be brought back online in the event of a failure. It also meant repurposing one of them from a web node into the database node.

Because of these seemingly minor changes, I was immediately able to free up the resources that two VMs had been using: the load balancer VM and the database VM.

Next I turned my attention back to Varnish and the setup of my very own CDN (Content Delivery Network). Varnish needs a backend server to pull content from so it can cache that content for client requests. After a bit of research, and some trials between Lighttpd and Nginx, I decided the best backend would be Nginx.
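The heart of the Varnish side is just pointing it at the Nginx backend. A minimal default.vcl sketch looks something like this; the address is hypothetical, and a real config would add logic for cookies and cache invalidation on top of it:

    # Tell Varnish where the Nginx backend lives (address is illustrative)
    backend default {
        .host = "192.168.1.10";
        .port = "80";
    }

With just that in place, Varnish serves cache hits itself and only bothers Nginx on a miss.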

So here I am now with the current setup: one physical web node, one physical database node, one caching content delivery node as a VM, and one VM acting as shared storage for them all. This setup has significantly decreased both page load times and the load on each of the servers. Site load times went from an average of 2000-8000ms (2-8 seconds) down to just 500-1200ms (half a second to 1.2 seconds).

The CDN server is currently set up in conjunction with WP Super Cache via its CDN options, and a mirror of the static content (uploads, CSS, JavaScript, themes) is all served from the CDN. This gives me both PHP page caching and static file caching.
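On the Nginx side, serving that static mirror with long cache lifetimes is little more than an expires rule. Something along these lines works; the hostname and paths are placeholders rather than my actual config:

    server {
        listen 80;
        server_name cdn.example.com;      # placeholder hostname
        root /srv/www/static;             # mirror of uploads, CSS, JS, themes

        # Let browsers and Varnish cache static assets for a month
        location ~* \.(css|js|png|jpe?g|gif|ico)$ {
            expires 30d;
            add_header Cache-Control "public";
        }
    }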

I’d call this little project a success thus far, but being the type that can’t ever leave anything alone, I’m still going to play with it until I’m satisfied.
