Monday, April 18, 2011

Installing View 4.6 in home lab

After recently finishing my home lab ESXi 4.1 installation (the Blue Bad Boy) I thought I'd put it to good use. I decided to do a full View 4.6 installation with external access over PCoIP via a Security Gateway. After getting it all up and running, I must admit that it was a bit more work than initially expected - but it has been a lot of fun.

In this post I will not go into detailed installation steps; instead I'll try to give an overview of the components I have used (local mode and linked clones not included) and then link to the posts I've used for inspiration.

Components

First of all, a vCenter installation and a domain controller are required. I chose to go with Windows Server 2008 R2, but other than that these are pretty much standard installations.

The main component of the View installation is the Connection Server. Then there is the Security Server, which is basically a subset of the Connection Server's features. After installation it is paired with the Connection Server and configured from the Connection Server's administrative web interface.

I used this excellent guide by Poul Slager to install the Connection Server. I did the same as Poul and installed just one Win7 VM with the View agent on it and added it to a static pool.

A new feature in View 4.6 is that the PCoIP protocol can now also be used from external sources (e.g. from outside the company network), but this requires a Security Server. The Security Server is typically placed in a DMZ, and it is the Security Server that establishes the PCoIP connection directly to the virtual desktop.
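For reference, and as far as I can tell from the View 4.6 documentation, roughly these ports need to be open for external PCoIP access:

  From the internet to the Security Server (DMZ):
    TCP 443  - HTTPS (login and authentication)
    TCP 4172 - PCoIP (session establishment)
    UDP 4172 - PCoIP (display and media traffic)

  From the Security Server to the virtual desktops (inside LAN):
    TCP 4172 and UDP 4172 - PCoIP to the View agent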

At the VMware View blog, there's a post with a 40 minute video explaining the infrastructure and new features of View 4.6.

For the specific configurations for enabling PCoIP from external sources, I used the Setting up PCoIP Remote Access with View 4.6 document.

I experienced a strange error when I first connected to the Security Server from an external source. It worked fine internally, but from the outside I could connect and authenticate, and then the remote connection just showed a black screen for about 10 seconds before the connection closed. In the View desktop's event viewer there was an entry stating: "Closed PCoIP connection doesn't match global value". To fix this I adjusted the configuration in the Connection Server under View Configuration -> Servers and made sure that the external URLs for the Security Server and the Connection Server were identical: the external URL was set to the actual outside URL in both cases, and the PCoIP external IP was set to the outside IP of the ADSL modem in both cases. This solved the issue for me (see screen dumps below).
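To illustrate, with a hypothetical hostname and outside IP (substitute your own), the relevant fields ended up identical on both servers:

  External URL:       https://view.example.com:443
  PCoIP External URL: 203.0.113.10:4172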

Currently, with all the components running, the setup is taking up about 10 GB of memory, so there's still room to load up the ESXi box (it has a total of 16 GB) with more VMs! (see screen dump below).





Networking

For routing and firewalling internally between the infrastructure components, I chose a Vyatta virtual appliance which I downloaded from the VMware Marketplace. By default, this appliance includes three NICs, which suited my requirements for creating an inside LAN, an outside LAN, and a DMZ for the Security Server. On the vSwitch I have created three different VM networks. However, I have not VLAN tagged any of the networks, as only one IP range will leave the physical ports on the switch (the Vyatta router acts as gateway for all the infrastructure components).
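As a sketch of the addressing, and with subnets I have made up for the example (eth0 towards the ADSL modem, eth1 for the inside LAN, eth2 for the DMZ), the Vyatta interface configuration looks something like this:

  configure
  set interfaces ethernet eth0 address 192.168.0.2/24
  set interfaces ethernet eth1 address 10.0.0.1/24
  set interfaces ethernet eth2 address 10.0.1.1/24
  commit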

The learning curve for the Vyatta is quite steep in my opinion. I have spent my fair share of hours trying to figure out the logic of the NAT, DNAT, and the firewall rules. For configuration I have been using a mix of the web GUI and the CLI. The CLI is actually quite nice when you get used to it (TAB is your friend).
Remember to save your configuration to disk before rebooting or you will lose all your changes (I learned this a couple of times ;-)). So: type 'configure' to enter configuration mode, 'commit' when you're done, 'exit' to leave configuration mode, and 'save config.boot' to save the configuration to disk. Default credentials for the Vyatta are user: vyatta, password: vyatta.
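So a typical session looks something like this (the firewall rule here is just a placeholder for whatever change you are making):

  configure
  set firewall name OUTSIDE-IN rule 10 action accept
  commit
  save config.boot
  exit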

To get started and set up the Vyatta, I used the Quick Start Guide which you can get at vyatta.org. At the site there is also a quick start video which is useful.

And then for firewall configuration etc. I used this guide which worked surprisingly well.

The basic principle for the router in this setup is that you want to allow all traffic from the inside LAN and the DMZ out to the internet. You also want your inside LAN to be able to access the DMZ. All traffic from the outside entering the gateway NIC on the router should be dropped; however, access on port 4172 should be allowed from any address on the internet, but directed only to the Security Server. And then only the Security Server's IP is allowed to open connections on the same port to the inside LAN. So to 'open up' a port in the firewall, you need both a firewall rule and a DNAT (destination NAT) rule. This last part had me quite confused.
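As a sketch of that last point, here is roughly what the port 4172 part looks like in Vyatta 6.x syntax. The interface name, addresses, and the OUTSIDE-IN rule set name are my own examples (eth0 is the outside interface, 10.0.1.10 is the Security Server), and the NAT syntax differs a bit between Vyatta releases, so check against your own version. Note that DNAT happens before the inbound firewall, so the firewall rule matches the translated inside address:

  # DNAT: direct port 4172 arriving on eth0 to the Security Server
  set service nat rule 10 type destination
  set service nat rule 10 inbound-interface eth0
  set service nat rule 10 protocol tcp
  set service nat rule 10 destination port 4172
  set service nat rule 10 inside-address address 10.0.1.10
  # ...and a matching rule 11 with 'protocol udp' for the PCoIP media traffic

  # Firewall: accept the same traffic in the rule set applied to eth0
  set firewall name OUTSIDE-IN rule 10 action accept
  set firewall name OUTSIDE-IN rule 10 protocol tcp
  set firewall name OUTSIDE-IN rule 10 destination address 10.0.1.10
  set firewall name OUTSIDE-IN rule 10 destination port 4172
  # ...again with a matching rule 11 for udp
  set interfaces ethernet eth0 firewall in name OUTSIDE-IN
  commit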

So, the final setup is currently configured according to the diagram below. The way I use it is to connect to the View desktop, and from there I can open a vSphere client and have full access to the vSphere home lab.



Sunday, April 17, 2011

My ESXi home lab - the Blue Bad Boy

A while back, I decided to build my own home lab whitebox (the Blue Bad Boy) with ESXi 4.1 U1. I've been running Workstation on my laptop with 4 GB of memory for some years, but the limitations of this setup are obvious. At work we do have a number of test servers that you can play around with, but you still have to be a bit more careful than you would in a home setup.

Once the decision was made, about a million questions followed. I wanted a setup that was similar to our production environment and that could do all the enterprise features such as HA, vMotion, FT, etc. Furthermore, there should be sufficient capacity to run a View 4.6 installation and a vCloud Director setup, both of which require a number of infrastructure components.

So should it be one or two physical servers, and what about a NAS box? The full-blown setup, it turned out, would be way too expensive for my budget. So I decided to go with one physical box, with an option to expand with a NAS box later on. vMotion etc. could then be done with two virtual ESXi hosts and nested VMs.

There are quite a number of good blog posts and web sites about building home labs. I was leaning towards replicating the BabyDragon setup, but two things held me back: 1) the motherboard was about double the price in Denmark (if you buy from the States they will slaughter you with extra VAT and import taxes), and 2) a number of people have already done this setup, so it just seemed a bit too easy.

I ended up going with a setup posted by VMwArune, which included a really nice Intel server motherboard with a dual-port GigE NIC.

Hardware parts

Motherboard
The motherboard is an Intel Server Board S3420GPV, which is on the HCL. The form factor is ATX and it sports an integrated dual-port Intel NIC (also on the HCL), so it is not necessary to inject custom drivers or to buy additional Intel NICs (which are relatively expensive). Up to six SATA disks, no SAS. Max 16 GB of unbuffered ECC memory. Socket 1156. One internal USB port for the ESXi dongle. Unfortunately, it does not have KVM over IP like the Supermicro X8SIL-F board does.

CPU
For the CPU, I chose the Intel Xeon X3440 (on the HCL), which is a 2.53 GHz quad core processor with hyperthreading. The X3430 was somewhat cheaper but does not have hyperthreading, and the X3450 was a bit more expensive with the only difference being the clock frequency. (I'm not totally sure it will support FT, though...)

Memory
16 GB (4 x 4 GB) of unbuffered ECC DDR3 memory (KVR1333D3E9S/4G). The motherboard only supports the more expensive ECC server memory (registered or unbuffered ECC), so that was a bit of a drawback. However, I did run it for a couple of days with regular non-parity, non-ECC desktop memory and it worked fine.

Hard drive
I really wanted to get a 128 GB SSD and then a 7200 RPM spindle with more capacity. But SSDs are quite expensive, and as I may be going for a NAS later, I did not want to spend too much on storage up front. I decided to go with a Samsung F3 1 TB 7200 RPM.

USB dongle
1 x 4 GB regular Kingston DataTraveler for installing ESXi on.

Power supply
From what I understand, these whitebox home labs do not require that much power. So I chose a 430 watt Corsair CX power supply. Not much to say about that.

Chassis
For the chassis I chose a Cooler Master Elite 430 Black. I guess it could be any ATX-compatible chassis, but this one was not too big and it is very affordable - and it has a nice glass pane on the side. After I bought it, I saw that there is an even smaller ATX chassis, the Elite 360, but it only has room for one or two disks.

Ethernet Switch
I wanted a manageable GigE switch with VLAN tagging. The HP ProCurve 1810G series (8 ports) switch seemed to deliver just that - and again, affordable.

Pimping
Just to spice it up a bit - and because the chassis already holds a blue LED 120 mm fan - I have installed a Revoltec Kold Katode twin set (blue light...).



Initial experiences

I had to go through somewhat of a troubleshooting phase before I had ESXi 4.1 Update 1 properly up and running. I was experiencing some very strange errors during the install, as I couldn't get past the Welcome screen. If I tried ESX classic (v4.1, v4.0), it would hang in different places while loading drivers. I updated the BIOS, and that didn't help. I tried unplugging USB devices (the CD-ROM is external). Then I found out that the motherboard only supports ECC memory and I had bought non-ECC memory, so I was pretty sure that the memory was at fault. But as I returned the memory, I also bought a new cheap USB keyboard, as I had seen some posts where people had USB keyboard issues. And lo and behold - as soon as I changed the keyboard (I was using a Logitech G510 gaming keyboard to begin with), the installation went through cleanly. And that was even with 4 GB of non-ECC DDR3 memory from my other desktop.

Anyway, the beast is now up and running and everything works like a charm. And it's very quiet. I'd seen posts from late 2010 where people couldn't get the second NIC to work - but it's been working fine for me.


Price

I've ordered all the parts in Denmark, but I'll convert the prices to Euros so they make more sense. The total for the whole setup, including the HP switch, is about 925 EUR (~1,332 USD), so it's actually not that bad.

1TB Samsung 7200rpm 32MB SATA2 58 EUR
Intel Server Board S3420GPV - ATX - Intel 3420 160 EUR
INTEL XEON X3440 2530MHz 8MB LGA1156 BOX 208 EUR
Memory 16 GB Kingston unbuffered ECC 253 EUR
Cooler Master Elite 430 Black no/PSU 49 EUR
430W Corsair CX CMPSU-430CXEU 120mm 47 EUR
KINGSTON DataTraveler I 4GB Gen2 Yellow 17 EUR
HP ProCurve 1810G-8 Switch 97 EUR
Twin katode lights 10 EUR
Shipping ~ 27 EUR
Total 925 EUR