A while back, I decided to build my own home lab whitebox (the Blue Bad Boy) with ESXi 4.1 U1. I've been running Workstation on my laptop with 4 GB of memory for some years, but the limitations of that setup are obvious. At work we do have a number of test servers to play around with, but you still have to be a bit more careful there than you would in a home setup.
Once the decision was made, about a million questions followed. I wanted a setup similar to our production environment, one that could do all the enterprise features such as HA, vMotion, FT, etc. Furthermore, there should be sufficient capacity to run a View 4.6 installation and a vCloud Director setup, both of which require a number of infrastructure components.
So should it be one or two physical servers, and what about a NAS box? The full-blown setup, it turned out, would be way too expensive for my budget. So I decided to go with one physical box, with the option to expand with a NAS box later on. vMotion and the like can still be tested with two virtual ESXi hosts and nested VMs, as sketched below.
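For reference, the tweak most commonly cited at the time for nesting ESXi 4.x consists of two config lines. This is a sketch based on community posts, not anything officially supported, so verify it against your own build:

    # On the physical ESXi 4.1 host (Tech Support Mode): allow virtual
    # hardware virtualization, then reboot the host
    echo 'vhv.allow = "TRUE"' >> /etc/vmware/config

and, in the .vmx file of each virtual ESXi VM, so that it can power on its own nested VMs:

    monitor_control.restrict_backdoor = "TRUE"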
There are quite a number of good blog posts and web sites about building home labs. I was leaning towards replicating the BabyDragon setup, but two things held me back: 1) the motherboard was about double the price in Denmark (if you buy from the States, they will slaughter you with extra VAT and import taxes), and 2) a number of people have already done this setup, so it just seemed a bit too easy.
I ended up leaning towards a setup posted by VMwArune, which included a really nice Intel server motherboard with a dual-port GigE NIC.
Hardware parts
Motherboard
The motherboard is an Intel Server Board S3420GPV, which is on the HCL. The form factor is ATX, and it sports an integrated dual-port Intel NIC (also on the HCL) - so there is no need to inject custom drivers or to buy additional Intel NICs (which are relatively expensive). Up to six SATA disks, no SAS. Max 16 GB of unbuffered ECC memory. Socket 1156. One internal USB port for the ESXi dongle. Unfortunately, it does not have KVM over IP like the Supermicro X8SIL-F board does.
CPU
For the CPU, I chose the Intel Xeon X3440 (on the HCL), a 2.53 GHz quad-core processor with hyperthreading. The X3430 was somewhat cheaper but does not have hyperthreading, and the X3450 was a bit more expensive with the clock frequency as the only difference. (I'm not totally sure it will support FT, though...)
Memory
16 GB (4 x 4 GB) of unbuffered ECC DDR3 memory (KVR1333D3E9S/4G). The motherboard officially supports only the more expensive ECC server memory (registered or unbuffered ECC), which was a bit of a drawback. However, I did run it for a couple of days with regular non-parity, non-ECC desktop memory and it worked fine.
Hard drive
I really wanted to get a 128 GB SSD and then a 7200 RPM spindle with more capacity. But SSDs are quite expensive, and as I may go for a NAS later, I did not want to spend too much on storage up front. I decided to go with a Samsung F3 1 TB 7200 RPM.
USB dongle
1 x 4 GB regular Kingston DataTraveler for installing ESXi on.
Power supply
From what I understand, these whitebox home labs do not require much power, so I chose a 430 W Corsair CX power supply. Not much to say about that.
Chassis
For the chassis I chose a Cooler Master Elite 430 Black. I guess it could be any ATX-compatible chassis, but this one is not too big, it is very affordable - and it has a nice glass pane on the side. After I bought it, I saw that there is an even smaller ATX chassis, the Elite 360, but it only has room for one or two disks.
Ethernet Switch
I wanted a manageable GigE switch with VLAN tagging support. The HP ProCurve 1810G series (8 ports) seemed to deliver just that - and, again, it is affordable.
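On the ESXi side, the VLAN tags are consumed by creating one port group per VLAN on the virtual switch. A minimal sketch from the console - the port group name and VLAN ID are just examples:

    # List the current vSwitch and port group layout
    esxcfg-vswitch -l
    # Create a port group "VLAN10" on vSwitch0 and tag it with VLAN ID 10
    esxcfg-vswitch -A "VLAN10" vSwitch0
    esxcfg-vswitch -v 10 -p "VLAN10" vSwitch0

The corresponding switch port then has to be configured as a tagged member of that VLAN in the ProCurve web interface.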
Pimping
Just to spice it up a bit - and because the chassis already holds a blue LED 120 mm fan - I have installed a Revoltec Kold Katode twin set (blue light, of course).
Initial experiences
I had to go through somewhat of a troubleshooting phase before I had ESXi 4.1 Update 1 properly up and running. I was experiencing some very strange errors during the install, as I couldn't get past the Welcome screen. If I tried ESX classic (v4.1, v4.0), it would hang in different places while loading drivers. I updated the BIOS - that didn't help. I tried unplugging USB devices (the CD-ROM is external). Then I found out that the mainboard only supports ECC memory, and I had bought non-ECC memory, so I was pretty sure the memory was at fault. But while returning the memory, I also bought a new cheap USB keyboard, as I had seen some posts where people had USB keyboard issues. And lo and behold - as soon as I changed the keyboard (I was using a Logitech G510 gaming keyboard to begin with), the installation went through cleanly. And that was even with 4 GB of non-ECC DDR3 memory from my other desktop.
Anyway, the beast is now up and running and everything works like a charm. And it's very quiet. I'd seen posts from late 2010 where people couldn't get the second NIC to work - but it's been working fine for me.
Price
I've ordered all the parts in Denmark, but I'll convert the prices to euros so they make more sense. The total for the whole setup, including the HP switch, is about 925 EUR (~1,332 USD), so it's actually not that bad.
Samsung F3 1 TB 7200 RPM 32 MB SATA2: 58 EUR
Intel Server Board S3420GPV (ATX, Intel 3420): 160 EUR
Intel Xeon X3440 2.53 GHz 8 MB LGA1156 (boxed): 208 EUR
16 GB Kingston unbuffered ECC memory: 253 EUR
Cooler Master Elite 430 Black (no PSU): 49 EUR
Corsair CX 430 W CMPSU-430CXEU (120 mm fan): 47 EUR
Kingston DataTraveler I 4 GB Gen2 (yellow): 17 EUR
HP ProCurve 1810G-8 switch: 97 EUR
Twin cathode lights: 10 EUR
Shipping: ~27 EUR
Total: ~925 EUR
nice setup
glad to see some home labbers mixing it up
still amazed what we can do for the prices I'm seeing
good luck & looking forward to seeing what you're doing with the lab
@dboftlp
Will the onboard RAID work with ESXi if I want to run two 1 TB disks in RAID-1? I have poked around, and some have had success and others not so much. Thanks for the reply. :)
According to the Intel documentation, the onboard RAID is software based; see http://download.intel.com/support/motherboards/server/s3420gp/sb/s3420gp_tps_r2_4.pdf, pages 15 and 70.
At vm-help.com (http://www.vm-help.com/esx40i/esx40_whitebox_HCL.php) it is stated that software RAID is not supported. Beyond that, I've seen a number of posts where it hasn't worked - so I'm fairly certain that it will not. Jakob
Thanks for posting this. I am setting up with similar hardware. Have you been able to use the local SATA disks as VMFS - were you able to add a SATA disk as a datastore without issues?
Yes, I have. ESXi is installed on a 4 GB USB dongle and boots from that. A 1 TB disk is attached via SATA, and this disk is added as a VMFS volume. Works like a charm.
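If you prefer the console over the vSphere Client, the volume can also be created with vmkfstools. A rough sketch - the device path is a placeholder, so triple-check which disk you are targeting first:

    # Identify the local SATA disk (device names are host-specific)
    esxcfg-scsidevs -l
    # Create a VMFS3 volume labelled "datastore1" on partition 1 of the disk
    # (the vSphere Client normally handles the partitioning for you)
    vmkfstools -C vmfs3 -S datastore1 /vmfs/devices/disks/<device>:1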
Hi, did you get the second NIC on the S3420 running? I've heard that ESXi only recognizes one.
Yes, both NICs worked with an out-of-the-box installation; no tweaks were necessary. I have also tested that seamless NIC failover works by disabling first one and then the other. I also read some of those posts; I believe it was an earlier build of 4.1 that had that issue.
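For anyone who wants to verify this on their own box, a quick sketch from the console (assuming Tech Support Mode and the default vSwitch naming):

    # Both onboard ports should show up here with their link state
    esxcfg-nics -l
    # Attach the second port as an uplink to vSwitch0
    # (the installer typically links vmnic0 already)
    esxcfg-vswitch -L vmnic1 vSwitch0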
I forgot to thank you for your blog. :) And thanks for the quick answer. We now have a problem getting the RAID up and running. We have ESXi 4.1 U1 and two 2 TB HDDs. First we used the onboard Intel Matrix RAID, then we switched to the other option, Intel ESRT2. It would be good if there were a fix for this problem by now. :) Thanks again.
Greetings
LAAN
hi
is FT / VT-d working?