It’s been quite a long time since I last updated this website in any publicly facing capacity. A year, in fact. A year spent in a mostly unproductive and indecisive state. All that has changed within the past few months. After the drive array on my VM host, Tokyo, ended up in a critical state, it was finally time to rebuild. And rebuild I did. I purchased and assembled the hardware, then installed and tested various other virtualization solutions, but I always returned to Proxmox. It’s a solid, if slightly dated, platform that lets me work on the host in my own way. I treasure that greatly. And since I use its built-in backup tools, migrating the virtual machines from the old host was an absolute breeze.
I didn’t just build a new virtualization host; I also rebuilt my former virtualization host into a pure file server, despite the hardware being complete overkill for such a task. I replaced all the faulty drives with a proper set better suited to RAID array usage, and restored the data from my backup node.
Despite all these changes, the one thing that has remained constant is how I provision my network. New virtual machines have been spun up here and there to take the place of a faulty node, to host a new service, or simply to serve as a test box. The roles these nodes perform aren’t really of any concern. What is of concern, however, is how much time it takes to deploy a new machine.
Up front, Proxmox is not as agile as more modern solutions such as OpenStack, AWS, or really any of the newer “DevOps”-style platforms. Creating a new machine is still very much a manual process: creating the machine itself, assigning CPU/disk/network/etc. resources, installing from the ISO, and so on. From that point forward, though, I have a handy Bash script which initially provisions the node for me. Okay, “provision” is sort of the wrong term here. The script installs a couple of additional repos, installs a base set of packages, and then runs a system-wide update of all existing packages before finally rebooting the server (because there’s always a new kernel in that update). After that it’s a 100% manual process to get everything set up for whatever role the VM will perform.
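The gist of that kind of bootstrap script, as a minimal sketch: the repo and package names below are illustrative placeholders, not my actual list, and it prints the plan instead of executing it so nothing destructive runs by accident.

```shell
#!/usr/bin/env bash
# Hypothetical one-shot bootstrap sketch (CentOS-flavored; the repo and
# package names are placeholders, not the contents of my real script).
set -euo pipefail

provision() {
  # echo each step instead of running it; drop the echoes to go live
  echo "yum -y install epel-release"          # add an extra repository
  echo "yum -y install vim-enhanced wget ntp" # base package set
  echo "yum -y update"                        # system-wide package update
  echo "shutdown -r now"                      # there is always a new kernel
}

provision
```

Everything after this point, of course, is still done by hand — which is exactly the problem.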
Over the years I’ve heard of various tools to automate the deployment of your infrastructure, and hell, over the past decade I’ve used these kinds of tools as part of my job. Puppet was the first tool I actually tried in my home environment, and partly… okay, mostly due to a lack of motivation, I never got it working. So I gave up on the idea, at least for personal use.
Recently, however, we’ve been having meetings at work about a decision to switch our deployment and automation tooling over to Chef. This, coupled with an email that contained some documentation on Chef, sparked my curiosity. Quick fact about me: curiosity is all the motivation I need to dive head first into something, provided it resonates with me. And boy, did this ever, and I can’t explain why. It just did.
If we sidetrack a bit: Ruby was not on my list of languages to learn. Ever. Much like curiosity motivates me into projects, a language needs to speak to me before I want to write code in it. I’ll be the first to admit I’m not the best programmer in the world, very far from it, but I’ve been programming to some degree since the mid-1990s and in a multitude of languages. The most prominent of these is Perl, in which I’ve spent the most time and written the majority of my projects.
Now, when I say a language speaks to me, I simply mean one my brain can easily transition to, and one that is easily typed. And by typed I don’t mean types as in ints, strings, arrays, etc. I simply mean typed, with a keyboard. Because of my love affair with Perl, C, and even PHP, whose syntaxes are all very similar, Ruby feels much different, and thus I find it hard to work with.
This is why what I said earlier, about it never being on my list of languages to learn, was a statement of personal feeling rather than anything negative about the language itself. In fact, as I’ve learned from reading, Ruby is actually an interesting and powerful language with some really neat concepts not all that different from Perl’s. So to some degree I’m still hesitant about the language, but to other degrees it’s something I can perhaps learn to live with. I guess you could say that by choosing Chef as my automation platform, I really don’t have much of a choice. I mean… I could learn just enough Ruby to shell out to Perl and do all the heavy lifting there, but that’s far from proper, isn’t it? Not to mention something I think any serious company would frown upon!
Now, continuing on our sidetracked path, you can’t talk about development without talking about source code management. Because at the end of the day your interactions with Chef are through code, and code in a proper development environment needs some form of revision control; otherwise people’s sanity is at risk (or their jobs…).
Being a UNIX guy as long as I have, and having programmed for just as long if not longer, I’ve had my dealings with source code control. I think my first experience was with Subversion, and then, when I switched to a FreeBSD-based world in the late 1990s, CVS. Both tools were pretty easy to start working with, though I’ll admit that outside of personal curiosity I never used them in a “real” capacity.
I’ve heard of Git over the years, but having been out of active code development both personally and professionally, there was no desire to pick up any knowledge of it… much like my feelings on Ruby. And also much like Ruby, Chef has taught me to re-embrace source code management, and these days there’s really no option other than Git. If you’re serious about development in some capacity or another, professional or not, you have a GitHub account. A few days ago I created mine.
So for the past three weeks I have thrown myself, and my environment at home, completely head first into learning these tools. While there’s an added benefit that it may provide better positioning inside my company, that’s really not the goal here, although it could certainly help. My motivation is simply to build a better solution for myself and, if by chance I write something worthwhile, to release it on GitHub for the benefit of others.
As of right now I have two Chef servers: one with 13 nodes attached, and another with a couple of nodes used purely for development testing. My main Chef server serves both a Production and a Stage environment, with Stage running newer and less stable code than Production. The other Chef server is strictly tied to a couple of nodes I use for testing new theories and ideas — basically an environment I’m perfectly okay with breaking, without having to rush like mad to fix it.
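For the curious, Chef environments are defined in a small Ruby DSL, which is how the Production/Stage split works. A hypothetical Stage definition that pins a cookbook to a newer version than Production might look something like this (the cookbook name, versions, and attributes are invented for illustration):

```ruby
# environments/stage.rb -- hypothetical example
name "stage"
description "Newer, less stable code than production"

# Production might pin this same cookbook to "~> 1.1.0"
cookbook "webserver", "~> 1.2.0"

default_attributes(
  "webserver" => { "listen_port" => 8080 }
)
```

Nodes assigned to the stage environment then resolve cookbook versions against these constraints instead of Production’s.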
I keep mentioning code, but as discussed, the main issue is getting nodes provisioned and deployed as quickly and as uniformly as possible. Well, that’s the idea behind these tools: using code to provision a node. My tools at work don’t function this way, nor does anything else I’ve used in the past. It was quite a weird (but neat) concept to comprehend at first, but Chef makes it so easy to get started that it turns out to make perfect sense once you spend a few minutes with it. Beyond perfect sense, actually. Because now you’re taking an obviously logical task — “Hey, I’ve got a new node which needs to be a webserver. In order to be said webserver, I need these packages installed, this configuration file changed to include this or that, and this shared content file system mounted to serve our application to the world!” — and implementing it in a programming language, which by its very nature is driven by nothing more than logic.
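To make that concrete, the webserver thought above maps almost one-to-one onto Chef’s recipe DSL. A hypothetical sketch — the package, template, and mount details are invented for illustration, not my actual cookbook:

```ruby
# recipes/default.rb -- hypothetical webserver recipe
package "httpd"                         # "these packages installed here"

template "/etc/httpd/conf/httpd.conf" do
  source "httpd.conf.erb"               # "changed to include this or that"
  notifies :reload, "service[httpd]"
end

mount "/var/www/shared" do              # "shared content file system mounted"
  device "fileserver:/export/www"
  fstype "nfs"
  action [:mount, :enable]
end

service "httpd" do
  action [:enable, :start]
end
```

Each resource declares the state the node should be in, and Chef converges the node toward it on every run — which is the whole trick.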
This is different from my experience with the other tools I’ve used, in that those tools are really dumb. Sure, they work, and work quite well, but they’re extremely basic and static. In one piece of software I attach a package to a server, then push that package to the server, where it installs based on how it was set up when it was first added to the tool. This can mean a 5-year-old package built on a 5-year-old system going onto a brand-new system, which sometimes requires intervention by hand. Another tool I use is a very basic (but very complex under the hood) key/value/template website, where we input values for keys that generate configs for things like Apache, MySQL, etc. Like I said, though, these tools work. They work quite well, but they’re extremely rigid at the same time. If something goes wrong, and it does, we’re stuck. We need to log in to a server and hack crap together to get things working. Far from ideal.
Whether or not Chef will turn out to be a better solution in that scenario is hard to say right now. What I do know, from what little experience I now have with Chef, is that problems arising from out-of-date packages or configurations can be prevented, or at least worked around more easily — and in a more open and flexible manner that benefits the environment as a whole.
Outside of my professional role, I’m the administrator of a monster of a home network that has grown out of control. Chef has so far helped me take back that control. Granted, I’m still working on a lot of things and have an absolute ton left to learn, but it’s been an amazing little adventure so far. A perfect example of how awesome this has been as a personal tool comes back to my earlier Bash script for “provisioning” a node… to be later provisioned by hand.
Using that old method, and keeping with the webserver example, it would take around an hour or more to get a new web node ready for traffic. Just about anywhere, that’s completely unacceptable — especially when your old web node is dead or crashing non-stop, as was the case for me. My old web node was constantly crashing due to a kernel panic, so I had to rebuild it, and I had to rebuild it fast! Thankfully, in adopting these new tools I had already written a Chef cookbook to handle my web server deployment and configuration management. It had been working quite well in testing, and it was even running against the live webserver node to manage configuration changes when there were some, but it had never yet been tested on a fresh server.
I imaged the new node from the ISO, set up the IP and hostname, and then let Chef take over from there. Thirteen minutes later (yep, 13) I had a fully functional and up-to-date web node. I updated my firewall rules to point traffic at it, and well, this site is being served off that new node. Awesome! Well, to me it is, anyway…
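For anyone wondering what “letting Chef take over” looks like in practice, it’s essentially a single knife bootstrap run from the workstation. A hypothetical invocation — the hostname, user, and role name here are placeholders:

```
# Hypothetical example; the host, SSH user, and role are placeholders.
knife bootstrap web01.example.com -x admin --sudo \
  -N web01 -r 'role[webserver]'
```

That one command installs the Chef client on the target, registers the node with the Chef server, and kicks off the first converge against the assigned run list.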
Obviously I’m still learning and working, and I hope some day for this process to be 100% automated from start to finish — including the creation of the VM itself, along with its IP and hostname. Proxmox may or may not be able to support that, but I’m okay either way. I’ve already saved myself 50 minutes or more on deploying a node, plus the headache of trying to remember every last detail of provisioning one. I enjoy the fact that with Chef I only need to remember it once, put it into my codebase, and I’m done. I assign it to a node when needed, and let it go do its thing.
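If Proxmox does end up covering that last mile for me, it would likely be through its qm command-line tool, which can already create VMs non-interactively. A hypothetical sketch — the VM ID, sizes, storage, and bridge names are placeholders for whatever the host actually uses:

```
# Create VM 105 with 2 cores, 2 GB RAM, a 32 GB disk, and a bridged NIC.
# IDs, storage pool, and bridge are placeholders, not my real config.
qm create 105 --name web02 --cores 2 --memory 2048 \
  --net0 virtio,bridge=vmbr0
qm set 105 --scsi0 local-lvm:32
qm start 105
```

Wrap that in a script that also sets the IP and hostname, hand the result to knife bootstrap, and the loop would be closed.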
So a lot has been written, but what, if anything, has actually been said? Well, rants and ramblings are generally just a bunch of bullshit opinion from an author who may or may not know what he’s talking about, and I can say that’s the case here to some degree. I really don’t know what I’m talking about when it comes to Chef, or Ruby, or even Git. You know what? I’m quite fine with that. I’m still new to these tools and still learning to utilize their potential, but my point is simply that for the first time in a very long time I have renewed vigor in my chosen career path. That rarely happens, and I’m grateful it has happened to me.