
Monday, February 25, 2013

BL460c G6 automatically powers on when shut down

Multiple times I've experienced that an ESX host automatically boots when you shut it down. This is fairly annoying when you shut it down to, for example, have a memory stick replaced by the hardware guys, only to find that it powers itself back on after a couple of minutes.

I've seen it before but hadn't really spent much energy on it. The other day, however, we had a very consistent example where the blade server powered itself back on every time you shut it down - or powered it off - after a couple of minutes.

Any 'auto power on' features in the iLO and the enclosure OA were disabled, and ASR was disabled in the BIOS as well.

The culprit turned out to be Wake-on-LAN in the BIOS. As soon as this feature was disabled, the blade server stayed powered off. As far as I know, we don't have any devices on the network broadcasting magic packets, but it happened nonetheless. As long as you're not using DPM (Distributed Power Management), it should be safe to turn off the WOL feature.
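For context, a WOL magic packet is just a UDP broadcast containing six 0xFF bytes followed by the target MAC address repeated 16 times, so anything on the subnet that can send one could wake the blade. A minimal Python sketch (the MAC address and broadcast address below are placeholders, not anything from our environment):

```python
import socket

def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
    """Send a Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    payload = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)  # allow broadcast
        sock.sendto(payload, (broadcast, port))

# Placeholder MAC address - replace with the NIC you actually want to wake
send_magic_packet("00:17:a4:77:00:01")
```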





Friday, January 18, 2013

Disabling cores in BIOS for BL460c Gen8

Due mainly to licensing rules imposed by Oracle and Microsoft, there is an increasing demand for either locking VMs to specific hosts (with VM-host affinity rules, for example) or decreasing the number of physical CPUs or logical cores in the ESX hosts.

For HP hardware it is possible to order blade servers with 2, 4, 6, or 8 cores per CPU - at least for the BL460c Gen8. But in my company we like to keep things as standard as possible and avoid having too many different hardware models.

As of Gen8, it is possible to disable a given number of cores in the BIOS. The setting applies to both CPUs at once, so the total core count changes in pairs, from 1 to 8 cores per CPU. As a minimum you can therefore have one core enabled on each CPU; it is not possible to deactivate one of the physical CPUs entirely.
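If you want to double-check what ESX actually sees after changing the setting, the socket/core/thread counts are exposed through the vSphere API. A rough sketch using pyVmomi (which I'm assuming is installed; the vCenter name and credentials are placeholders):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter and credentials
ctx = ssl._create_unverified_context()  # lab-only: skip certificate validation
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    cpu = host.hardware.cpuInfo
    print("%s: %d sockets, %d cores, %d threads"
          % (host.name, cpu.numCpuPackages, cpu.numCpuCores, cpu.numCpuThreads))
Disconnect(si)
```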


Sunday, April 17, 2011

My ESXi home lab - the Blue Bad Boy

A while back, I decided to build my own home lab whitebox (the Blue Bad Boy) with ESXi 4.1 U1. I've been running Workstation on my laptop with 4 GB of memory for some years, but the limitations of this setup are obvious. At work we do have a number of test servers that you can play around with, but you still have to be a bit more careful than you would in a home setup.

Once the decision was made, about a million questions followed. I wanted a setup that was similar to our production environment and could do all the enterprise features such as HA, vMotion, FT, etc. Furthermore, there had to be sufficient capacity to run a View 4.6 installation and a vCloud Director setup, both of which require a number of infrastructure components.

So should it be one or two physical servers, and what about a NAS box? The full-blown setup, it turned out, would be way too expensive for my budget. So I decided to go with one physical box and the option to expand with a NAS box later on. For vMotion etc., this could be done with two virtual ESXi hosts and nested VMs.

There are quite a number of good blog posts and web sites about building home labs. I was leaning towards replicating the BabyDragon setup, but two things held me back: 1) the motherboard was about double the price in Denmark (if you buy from the States they will slaughter you with extra VAT and import taxes), and 2) there were already a number of people who had done this setup, so it just seemed a bit too easy.

I ended up going with a setup posted by VMwArune, which included a really nice Intel server motherboard with a dual-port GigE NIC.

Hardware parts

Motherboard
The motherboard is an Intel Server Board S3420GPV, which is on the HCL. The form factor is ATX and it sports an integrated dual-port Intel NIC (also on the HCL), so it is not necessary to inject custom drivers or to buy additional Intel NICs (which are relatively expensive). Up to six SATA disks, no SAS. Max 16 GB of unbuffered ECC memory. Socket 1156. One internal USB port for the ESXi dongle. Unfortunately, it does not have KVM over IP like the Supermicro X8SIL-F board does.

CPU
For the CPU, I chose the Intel X3440 (on the HCL), which is a 2.53 GHz quad-core processor with hyperthreading. The X3430 was somewhat cheaper but did not have hyperthreading, and the X3450 was a bit more expensive with the only difference being the clock frequency. (I'm not totally sure it will support FT, though...)

Memory
16 GB (4 x 4 GB) of unbuffered ECC DDR3 memory (KVR1333D3E9S/4G). The motherboard only supports the more expensive ECC server memory (registered or unbuffered ECC), so that was a bit of a drawback. However, I did run it for a couple of days with regular non-parity, non-ECC desktop memory and it worked fine.

Hard drive
I really wanted to get a 128 GB SSD and then a 7200 RPM spindle with more capacity. But SSDs are quite expensive, and as I may be going for a NAS later on, I did not want to spend too much on storage up front. I decided to go with a Samsung F3 1 TB 7200 RPM drive.

USB dongle
1 x 4 GB regular Kingston DataTraveler for installing ESXi on.

Power supply
From what I understand, these whitebox home labs do not require that much power. So I chose a 430 watt Corsair CX power supply. Not much to say about that.

Chassis
For the chassis I chose a Cooler Master Elite 430 Black. I guess it could be any ATX-compatible chassis, but this one is not too big and it is very affordable - and it has a nice glass pane on the side. After I bought it, I saw that there are even smaller ATX chassis, like the Elite 360, but it only has room for one or two disks.

Ethernet Switch
I wanted a manageable GigE switch with VLAN tagging support. The HP ProCurve 1810G series (8-port) switch seemed to deliver just that - and, again, it's affordable.

Pimping
Just to spice it up a bit - and because the chassis already holds a blue LED 120 mm fan - I have installed a Revoltec Kold Katode twin set (blue light...).



Initial experiences

I had to go through somewhat of a troubleshooting phase before I had ESXi 4.1 Update 1 properly up and running. I was experiencing some very strange errors during install, as I couldn't get past the Welcome screen. If I tried ESX classic (v4.1, v4.0), it would hang in different places while loading drivers. I updated the BIOS, which didn't help, and I tried unplugging USB devices (the CD-ROM is external). Then I found out that the motherboard only supports ECC memory and I had bought non-ECC memory, so I was pretty sure the memory was at fault. But while returning the memory, I also bought a new, cheap USB keyboard, as I had seen some posts where people had USB keyboard issues. And lo and behold - as soon as I changed the keyboard (I was using a Logitech G510 gaming keyboard to begin with), the installation went through cleanly. And that was even with 4 GB of non-ECC DDR3 memory from my other desktop.

Anyway, the beast is now up and running and everything works like a charm. And it's very quiet. I'd seen posts from late 2010 where people couldn't get the second NIC to work, but it has been working fine for me.


Price

I've ordered all the parts in Denmark, but I'll convert the prices to euros so they make more sense. The total for the whole setup, including the HP switch, is about 925 EUR (~1,332 USD), so it's actually not that bad.

1TB Samsung 7200rpm 32MB SATA2 58 EUR
Intel Server Board S3420GPV - ATX - Intel 3420 160 EUR
INTEL XEON X3440 2530MHz 8MB LGA1156 BOX 208 EUR
Memory 16 GB Kingston unbuffered ECC 253 EUR
Cooler Master Elite 430 Black no/PSU 49 EUR
430W Corsair CX CMPSU-430CXEU 120mm 47 EUR
KINGSTON DataTraveler I 4GB Gen2 Yellow 17 EUR
HP ProCurve 1810G-8 Switch 97 EUR
Twin katode lights 10 EUR
Shipping ~ 27 EUR
Total 925 EUR
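Just to sanity check the numbers (the EUR/USD rate is simply back-calculated from the two totals above, so treat it as approximate):

```python
# Item prices in EUR, as listed above (individually rounded)
prices = [58, 160, 208, 253, 49, 47, 17, 97, 10, 27]

total_eur = sum(prices)          # lands within a euro or two of the stated 925 EUR
usd_per_eur = 1332 / 925.0       # rate implied by the post, roughly 1.44
print(total_eur, round(total_eur * usd_per_eur))
```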



Tuesday, February 8, 2011

ESX 4.1 install error on BL460c G7 - NIC driver fails to load

The HP BL460c G7 is on the VMware HCL for ESX 4.1. However, when trying to install ESX 4.1, there's an error during install - it fails to load drivers for the network adapter ("No network adapters were detected"). It doesn't help to update all firmware to the latest version (even though this should be done in any case...). (Update 2011.02.18: This problem persists in ESX 4.1 U1.)

This is a known error and there's a fix for it. However, it seems strange that the G7 blade has made it to the HCL list...

The problem is that the driver for the integrated NC553i Dual Port FlexFabric 10Gb NIC is not included in VMware's installation ISO for ESX 4.1. There are two ways to solve the issue: one is to load a custom set of drivers for the NIC during installation, and the other is to use an HP VMware install image. If you're using ESXi or a scripted installation of ESX classic, then you have to use the HP image.
(Update 2011.07.19: Custom HBA driver should also be loaded during installation - simply load both ISOs)

Custom NIC driver from VMware can be downloaded here.
Custom HBA driver from VMware can be downloaded here.

(Update 2011.02.18: Apparently, the NIC drivers are updated quite frequently at the moment. Go to this main link and then 'plus out' Driver CDs to find the most recent one.)

HP image for ESX(i) can be downloaded from here.

The custom driver, when downloaded, is in an ISO format. To load it during installation, do the following:

  • Upload the ISO file to where you have the ESX installation image
  • On the Custom Drivers page in the wizard, choose Yes and click on Add. It will tell you to load the driver CD (see picture below)
  • Unmount the ESX installation CD and mount the driver ISO instead. This can be done without interrupting the installation. You will be prompted to verify the custom driver package; click OK (see picture below)
  • That's it. At a later stage in the wizard, you will be prompted to reinsert the installation ISO. Do that when prompted (a quick post-install check is sketched below).
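Once the installation has finished, it's easy to confirm that the FlexFabric ports actually show up. One way (a sketch only - host name and credentials are placeholders, and it assumes the paramiko library plus SSH/Tech Support Mode enabled on the host) is to run esxcfg-nics over SSH:

```python
import paramiko

# Placeholder host and credentials; assumes SSH (Tech Support Mode) is enabled on the ESX host
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("esx01.example.com", username="root", password="secret")

# List the physical NICs - with the custom driver loaded, the 10Gb ports should be listed
stdin, stdout, stderr = client.exec_command("esxcfg-nics -l")
print(stdout.read().decode())
client.close()
```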


Saturday, April 11, 2009

How to enable 64-bit in BIOS on HP server

To be able to run 64-bit VMs on VMware ESX server, the Intel VT technology needs to be enabled in the BIOS. Furthermore, to enable EVC (Enhanced VMotion Compatibility), the No-Execute memory feature should be enabled as well; see below.

1. Enter the BIOS (press F9 during boot)
2. Go to Advanced Options -> Processor Options -> Intel® Virtualization Technology
3. Choose Enable
4. Furthermore, to enable VMware EVC, enable 'No-Execute Memory Protection' (just above Intel VT)
5. Save and exit


NB: All hosts in your cluster should have the same BIOS settings. If not, this can result in VMotion issues.
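If you want to verify the processor features outside the BIOS, you can boot the box into any Linux live environment and look at the CPU flags. A small sketch, assuming Linux and a readable /proc/cpuinfo - note that 'vmx' only shows that the CPU supports Intel VT (it can still appear even if the BIOS toggle is off), while the 'nx' flag typically disappears when No-Execute is disabled in the BIOS:

```python
# Check for VT-x (vmx) and No-Execute (nx) flags in /proc/cpuinfo on a Linux system
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

print("Intel VT-x (vmx):", "present" if "vmx" in flags else "not visible")
print("No-Execute (nx): ", "present" if "nx" in flags else "not visible")
```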