
Intel NUC as home server

It’s nice when your server fits in your mailbox.

I’ve always liked to have a home server hanging around for email, file sharing, and the like. Over the years this has taken the form of a beefy desktop computer, a PowerPC-based Mac mini, an embedded Linux-based router, and most recently a beat-up old laptop. All had their challenges, power consumption and fan noise being the two main ones, though the PowerPC machine and the router also couldn’t run all the software I needed. I limped along on my busted laptop for as long as I could but decided it was nearing the end of its useful life. It was time to go shopping for something that would last me a while.

The embedded idea still appealed to me for the two main reasons I mentioned above: power consumption and noise. I wanted something that sips electricity and stays quiet yet still provides enough computing power to do what I need. After reading up on some online reviews, I went with the Intel NUC.

Intel’s NUC (“Next Unit of Computing”) systems are small-form-factor x86_64 machines about half the size of a brick. They have plenty of ports: HDMI, USB 3.0, and even a Thunderbolt port. They come with your choice of Intel Core processor, whether an i3, i5, or i7. Memory can be boosted to 32 GB, and they accept newer SSDs; some models fit 2.5″ laptop drives as well. The hardest part about making the jump to a NUC was simply deciphering which Intel model had which options. Sometimes having too many choices isn’t a good thing, I suppose.

I went with the Intel NUC model xx. It seemed a good balance of price and performance, with an i5 processor, room for a 2.5″ drive, and support for 32 GB of memory. I bought it along with a 1 TB SSD and two 16 GB memory sticks to round it out, then anxiously awaited delivery.

The memory, it turns out, was in high demand. Amazon showed I wouldn’t get it for two weeks after everything else shipped. This was unacceptable, so I splurged on the “priority shipping” option for $20 more. Within two days, I had the memory in my hands, having been shipped overnight from Tokyo. Man, I love living in the future!

Next it was time to install the software. Since my goal was to run multiple things on this box, I was looking to put a hypervisor on it. VMware was overkill for me (and not open source). Red Hat Enterprise Virtualization (RHEV) seemed appealing but was not free. That left Citrix’s XenServer, which had gotten favorable reviews in some online NUC how-tos I had read, so XenServer it was.

Downloading and installing XenServer was a piece of cake: I simply grabbed the ISO, wrote it to a flash drive, and soon had it up and running with surprisingly little hassle. Xen comes with a Windows-based management tool, which doesn’t fit my all-things-Linux worldview, but I soon found comparable Linux-based tools to do the job. Before I knew it I had a configured Xen host with a CentOS guest running on it. I was home free!
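For anyone following along, writing the ISO to a flash drive is a one-liner. This is just a sketch: the ISO filename is a placeholder for whatever you downloaded, and /dev/sdX stands in for your flash drive’s device node, which you must identify yourself.

```shell
# Identify the flash drive first -- dd will destroy whatever is on the target.
lsblk

# Placeholder names: substitute your actual ISO file and device node.
sudo dd if=xenserver-install.iso of=/dev/sdX bs=4M status=progress

# Flush buffers before unplugging the drive.
sync
```

Double-check the device node against the drive’s size in the lsblk output; pointing dd at the wrong disk is an unrecoverable mistake.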

Except I wasn’t. I wanted to take advantage of all the fancy ports the NUC gave me, but try as I might I could not get my guest OS connected to them. It turns out that XenServer does not support USB device passthrough, which means I could not assign the host’s USB ports to guests the way I’d gotten used to doing with tools like Oracle’s VirtualBox. Bummer! I futzed around for a week trying to find a way past this limitation but finally had to pull the plug on XenServer. As easy as it was to get going, it clearly wasn’t going to meet my needs.

Where to turn next? I love working with VirtualBox on my work and home laptops and considered putting it on my shiny new NUC. Still, VirtualBox doesn’t have the bells and whistles needed to properly manage multiple VMs at once the way XenServer, VMware, and RHEV do, and I didn’t want to put what is mainly a desktop VM tool on a server. The principle of the thing, you know.

I knew about Red Hat’s excellent policy of making its software open source, and figured there was an open source project upstream of RHEV the same way Fedora feeds Red Hat Enterprise Linux. Sure enough, I discovered oVirt, Red Hat’s open source testbed for RHEV. Bingo!

oVirt seemed to do all I needed: it’s open source, offers a nice web-based management interface, and supports the device passthrough that XenServer lacks. The catch, though, is that it’s not that easy to install.

oVirt at the time of this writing is at version 4.0. Version 4 made some changes from version 3.6, a key one being that it no longer supports running the management engine directly on the host OS (the “all-in-one” setup). oVirt 4.0’s answer is to download a pre-built oVirt VM to manage the host, which seemed a logical way to go. However, the VM that Red Hat provides is set to use 16 GB of RAM all by itself! I appreciate that oVirt (and RHEV) are aimed at the enterprise market (thus the “Enterprise” in the name), but having the hypervisor overhead eat half of the memory on my little box would never do. I skipped oVirt 4.0 in favor of oVirt 3.6, which still allows the oVirt tools to run on the bare-metal OS. I highly recommend anyone trying to run oVirt on a home server do the same, or else you’ll be facing several weeks of maddening configuration, at best.
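The 3.6 install itself boils down to a few commands on CentOS 7. The repo URL and package names below are from memory of the 3.6 documentation, so treat this as a sketch and verify against the oVirt release notes before running it.

```shell
# Enable the oVirt 3.6 repository (URL as I recall it from the 3.6 docs).
sudo yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm

# Install the management engine on the host OS itself.
sudo yum install -y ovirt-engine

# Run the interactive setup wizard; it walks through DB, networking,
# and admin-password configuration.
sudo engine-setup
```

When engine-setup finishes, it prints the URL of the web-based management interface, which you log into as the admin user you configured.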

With oVirt 3.6 installed on a CentOS 7 host OS, I was able to get started building out my VM host. I export filesystems via NFS from the host OS both to the guests and to oVirt itself for its storage domains. The host OS does only what is needed to support oVirt and nothing more; I want it as lean as possible so more resources are available to the VMs. oVirt’s interface is fairly sophisticated, but after a little time it becomes easy to navigate.
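One gotcha worth noting with the NFS exports: oVirt accesses its storage domains as the vdsm user (UID 36) and kvm group (GID 36), so the exported directories have to be writable by that identity. The paths below are examples, not my actual layout; squashing all access to 36:36 is one common way to satisfy oVirt on a single-host setup.

```
# /etc/exports -- example paths; oVirt's vdsm user is UID 36, kvm group GID 36
/exports/data    *(rw,anonuid=36,anongid=36,all_squash)
/exports/iso     *(rw,anonuid=36,anongid=36,all_squash)
```

After editing the file, `chown 36:36` the directories and run `exportfs -ra` to publish the exports before attaching them as storage domains in the oVirt UI.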

There’s still a bit of work to be done, namely configuring a host-based video server for one of my VMs, but overall it’s doing what I wanted. I am enjoying having a pint-sized server that gives me a platform for easily testing new software.