Saturday, December 11, 2010

Online backup for personal computer

For a while now I have been trying to persuade myself to start using an online backup service for my private documents, pictures, emails etc. The thing is that over the years I have, like most others I guess, built up quite a lot of personal data which it would be impossible to recreate should it be lost.
Until now I have backed up data from the laptop to an external USB drive. This method works smoothly with the built-in backup software on the Seagate USB drive, and I will be able to restore data should the disk fail or the laptop be lost. However, for the most part the external disk with the backup data was, and is, lying right next to the laptop. So in case of a break-in, a thief would most likely take both the laptop and the USB disk. And in case of a fire or other such extreme event, both laptop and backup would be lost. The risk is small, but the impact, from my personal perspective, is rather high.

After looking into online backup I found the following reassuring things:
Online backup seems to be a relatively mature market. There are quite a few players, and prices seem competitive. A one-year subscription ranges from 50 to 100 US$.

Security seems to be adequate. Most providers encrypt data before, during, and after upload. Unfortunately, encryption is only protected by a password (at least for the solution that I chose), so it's not the double security approach where you combine something you have (like an RSA token or PKI) with something you know (the password). However, for my chosen solution, I first have to log in, and then I have to provide a strong password, which is not stored at the service provider, to actually restore data.

Among the bigger players are IDrive, Mozy, Carbonite, CrashPlan, and Backblaze. I read a few reviews, and it seems that, apart from some minor differences, they offer pretty much the same service, so one should be pretty safe choosing any one of them.

Here are links to two reviews: from Notebook Review and from Digital Inspiration.

I was leaning towards CrashPlan, mostly because the interface appealed to me and a colleague recommended it. However, I ended up choosing IDrive for the following reasons:
  • It's been around for longer than the other players (you don't want your backup provider to go out of business).
  • Phone support (when trouble hits it's always nice to be able to call someone...)
  • Price was right for my needs (50 US$/year for one computer and 150 GB storage)
  • Online browse and restore of files
  • Continuous backup and file versioning
  • Sufficient security
  • Status reports via email
My experience after a week's use is that maybe it is not the fanciest of interfaces, but it gets the job done. After three or four days I had uploaded my ~25 GB of data over the ADSL connection, and I have successfully tested restoring files from the web interface. And email notifications work.

On a final note, I can mention that I have been using Dropbox for a while to share files between computers and to be able to reach files online. It's limited to 2 GB in the free edition, but it can be highly recommended. It supplements the backup application, but it cannot substitute it, in my opinion.

Tuesday, November 23, 2010

Installing a web server on an Amazon AWS free VM

In the previous post I described how to get a free Linux VM in the Amazon AWS cloud up and running. This post will describe how you can use it for something practical.

Apt-get is not installed on this Micro Instance VM. So at first I tried to do a manual install of Apache by simply uploading the .tar.gz files to the VM via WinSCP and running ./configure. This didn't work, as no C compiler was installed on the system. I went on to look for GCC and got that installed, and then I could install Apache. For some reason it didn't quite work, though. And it's maybe a little too much work just to get a web server up and running...

Then I stumbled upon the yum command, which is similar to apt-get and which is actually pre-installed in the VM and works out of the box.

With Yum, installation is a breeze. Issue the following commands:

#sudo yum install httpd
#sudo chkconfig httpd on
#sudo /etc/init.d/httpd start

The sudo command will not prompt you for a password but will let you execute commands as root. You can't su - root... (alternatively, you can run sudo -i to get a root shell)

If it complains about a missing C compiler, then install it this way:

#sudo yum install gcc

The web server installs its .conf file in /etc/httpd/conf/httpd.conf. The usual test page is displayed until you place an index.html file in the /var/www/html folder.
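To verify that the web server actually responds, a quick sanity check from the VM itself looks something like this (assuming curl is available, which it is on most Amazon Linux images):

```shell
# Request the front page from the local web server;
# you should get the Apache test page (or your own index.html)
curl -s http://localhost/ | head -n 5

# Confirm that httpd is set to start at boot
chkconfig --list httpd
```
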

Free linux cloud VM with Amazon Web Services (AWS)

Recently, Amazon announced that you can get a free Linux VM for one year in their public cloud solution, Amazon Web Services (AWS). They call it a Micro Instance, and it's got something like 1 vCPU, 600 MB of memory, and 10 GB of storage; see specs here. You get full access to the VM via SSH, but there's no console access as such.

So I decided to give it a try.

First, you need to create an AWS account (there's a link on the front page..). They need a valid credit card for that. Then you log into the AWS Management Console, which requires you to register again. They have implemented a rather odd security feature where they call your mobile phone and you have to punch in a PIN to confirm. I must admit that, for testing purposes, this wasn't the smoothest registration process.

Once in the AWS Management Console, you're presented with a number of tabs. The first one is Amazon S3, which is an online file store (I guess a bit like an FTP server). To create your VM, go to the Amazon EC2 tab and click Launch Instance (see below). This process is fairly simple. It is not quite easy, though, to see exactly which one is the free edition, but I just chose the minimum specs available to be on the safe side. Look for something like Linux and Micro Instance.

Firewall rules are easy to configure via the web interface. You can add some pre-defined ports such as mail, web, etc. Port 22 is enabled by default.

A key pair is generated (for authentication purposes), and you can download the .pem file to your local hard drive. They give an example of how to log in via SSH from a console using the generated key. Example:

ssh -i keyname.pem root@<public DNS name>

If you use this command, you will receive a login error, as root cannot log in directly. So just change the 'root' in front of the @ to the 'ec2-user' suggested in the error message.

Once logged in, you can execute commands as root with the sudo command. It will not prompt for a password. Alternatively, use sudo -i to get a root console. But you can't su - root.
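Putting it together, the working login looks like this; the key file name and public DNS name below are of course placeholders for your own values:

```shell
# Log in with the downloaded key as ec2-user (direct root login is disabled)
ssh -i keyname.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

# Once logged in, get a root shell if needed
sudo -i
```
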

If you want to use PuTTY to access the VM directly, you have to convert the .pem file to a .ppk file. This is easily done using this guide.

To use the .ppk file, open PuTTY, go to SSH -> Auth, and browse to the directory where you stored the file. Then you connect to the VM (saving the profile will save you some time at next login..). There's no password.

The same .ppk file can also be used for WinSCP which is handy for uploading files directly to the VM.

As you have a public DNS name, you can create an easier-to-remember CNAME record that points to the generated machine name.

So far so good. Now there's access via SSH. Then I tried to configure a simple web server. I'll describe that in the next post.

Wednesday, November 17, 2010

vMotion between firewalls

Currently, I'm setting up a new VMware cluster as the existing hardware needs to be retired. The new cluster is in another management zone (and in another vCenter). To minimise downtime, I looked at doing vMotion between the two clusters.

What I did was to disconnect one host from vCenter, then add the host to the other vCenter directly by IP address. The host was not added to the newly created cluster, only to the datacenter. Then VMs could be dragged and dropped between the clusters (EVC was enabled).

There were a couple of things that had to be tweaked before it worked.

vMotion had to be done between firewalls. When doing this, there are two important things to remember:

1. Set the default gateway of the vMotion interface (via the vSphere Client)
2. Open port 8000 TCP inbound/outbound in the firewall (see the ESX Configuration Guide, page 150).
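If you prefer the service console over the vSphere Client, the VMkernel default gateway (which vMotion traffic uses) can also be set with esxcfg-route; the gateway address below is just an example value:

```shell
# Show the current VMkernel routing table
esxcfg-route -l

# Set the VMkernel default gateway (example address)
esxcfg-route 192.168.10.1
```
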

Furthermore, I encountered another issue. A number of VMs had a vmxnet NIC (they're some old VMs...). When starting the vMotion, there was a warning that vmxnet is not supported on the target host, which is ESX 4 (source was ESX 3.5). However, after vMotion, the vmxnet NIC still worked. I tried to update VMware Tools and the virtual hardware version to v7, and that also worked; vmxnet is kept as the NIC after the upgrade.

Wednesday, October 13, 2010

Hands-on labs - that I did...

This post is mostly for myself to keep track of the labs that I did at VMworld 2010 and what struck me as useful features...

vSphere 4.1 - new features
  • Storage IO shares and limits. They're not quite there yet with Storage DRS, but this is a first step.
  • DRS feature: possibility to keep one or more VMs bound to a specific host. This is practical in relation to licensing issues where you pay per physical core/socket in a cluster.
  • 8 concurrent vMotions!
  • Better HA info with the new Cluster info link. It aggregates relevant HA info and displays it in a window.
  • Scripting ESXi installation with PXE boot, getting the files over the network from a TFTP server, looked pretty straightforward.

VMware View 4.5 - install and configure
Components: the main View Manager server (not the correct name, I believe..), a Transfer Server for offline clients, the View Agent which installs in the VM, and the View Client to connect to View Manager.

vCloud Director - install and configure
Basically a VM that installs on top of your VI and communicates with vCenter. It works as an administration unit and self-service portal for customers/admins. Installs on RHEL 5 U4 or later, 64-bit. Licensing is done per VM. vShield Manager is required.

Update Manager
Practical for host patching of course... Also relevant for upgrading VMware Tools and HW version. Tools and HW version can be done in a single script.

Thinapp 4.6 - new features
The new feature that IE6 can be packaged easily is very useful. If you package an app on WinXP, it can be run on other OSes, e.g. Win7, without modification. Every time I play around with ThinApp, it strikes me that we should be using it more...

PowerCLI 4.1
One of the first examples is a script that goes through your VMs and finds and deletes all snapshots older than a given date - e.g. older than seven days. This example is spot on and something that we're working on implementing as well.

vShield Zones
One thing that I noted in this lab is that vShield has a built-in load balancer which was easy to configure. This could probably be a good substitute for MS NLB.

Tuesday, October 12, 2010

VMworld 2010 - Hands-on labs, first impressions

There's been a lot of talk about the hands-on labs at this year's VMworld. Here in Copenhagen there are around 240 thin clients, each with two screens, running against two datacenters in Florida and Virginia over a dedicated 100 Mbit transatlantic line. There's also a fail-over possibility to a datacenter in Europe, should an issue occur. The whole solution is based on a number of Lab Manager installations with a custom interface on top, built for the purpose.
So far, I have taken four labs, and I must admit that I'm impressed. It works so well that you're not even thinking about what is going on under the hood. What you actually experience is that as soon as you choose a lab, almost instantly you have two ESX servers and a number of VMs provisioned for you. A VMware employee told me that part of the custom code calculates the most popular labs and pre-deploys a number of these up front to reduce waiting time.
Furthermore, the whole user experience is pretty cool. They've made a GUI for choosing the labs, and on flatscreens around the room you can see statistics such as most popular labs and total number of VMs created and labs completed (see pictures below).

Tuesday, September 28, 2010

Restart of ESX management agents

This is just a post to remember the commands for restarting the management agents on a VMware ESX server:

#service mgmt-vmware restart

#service vmware-vpxa restart (the vCenter agent)

Both of these agents can be restarted without affecting VM operation. Restarting them can be a useful step in troubleshooting if vCenter has trouble connecting to a host or if you experience HA errors.
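To check whether the agents came back up properly, something like the following should work on classic ESX (exact process names can vary a bit between versions):

```shell
# Check the status of the two agents after the restart
service mgmt-vmware status
service vmware-vpxa status

# The host agent should show up as a running vmware-hostd process
ps aux | grep vmware-hostd | grep -v grep
```
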

For restarting mgmt agents in ESXi, this can be done via the console menu interface, see link above.

Friday, September 24, 2010

Console-setup - service console tool for network config in ESX4

As of ESX 4.0 U2, a new tool for configuring the network in the service console (COS) has been introduced. If you're not too comfortable with the CLI, this might come in handy. The tool gives you a numbered menu where you can list and configure NICs, vSwitches, vswifs, etc.

Here's a link to a VMware KB article that presents the tool.

To run the tool, type console-setup in the COS.

Menu entries 1, 2, and 3 will show the output of esxcfg-vswif -l, esxcfg-nics -l, and esxcfg-vswitch -l, respectively.

Menu entry 5 will let you configure your service console without having to remember any of the commands. Pretty neat..

Here's a link on how to do it the old school way.
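For reference, the old-school way boils down to a handful of esxcfg commands; the interface name, port group, and addresses below are just example values:

```shell
# List the current service console interfaces
esxcfg-vswif -l

# List physical NICs and vSwitches
esxcfg-nics -l
esxcfg-vswitch -l

# Add a service console interface on an existing port group (example values)
esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.1.50 -n 255.255.255.0
```
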

Tuesday, August 3, 2010

What does ESX stand for?

Not too long ago, I was doing some general VMware introduction for a number of colleagues from our Chinese branch. One of the guys asked what ESX was short for. And I had actually no idea. It's a product I've been working with for several years, and yet it hadn't occurred to me that ESX was probably more than just a three-letter name.

Today, I saw an article on acronyms on Yellow-Bricks which reveals the secret. Furthermore, there's a link to a video interview with Mike DiPetrillo which elaborates on the matter (about 20 minutes in...)


ESX: Elastic Sky X
GSX: Ground Storm X

The two names were invented by some marketing people hired by VMware. VMware didn't like them too much, so they were shortened to ES and GS, and the X was added just to make it sound more technical!

Thursday, July 8, 2010

Disaster recovery: Procedure in case of site failure

Here's a short example of a procedure for recovering a VMware cluster from a site failure. The example scenario consists of two ESX4 hosts on replicated storage divided between separate locations. There's no automatic failover for storage between sites; manual breaking of the mirror is required.


Log into vCenter and verify whether or not storage is available for the cluster. If storage is unavailable, create an incident ticket for the storage group with priority urgent and with a request to:

“Manually break the mirror for the "Customer X" replicated storage group used by ESXA and ESXB”

The ticket should be followed by a phone call to the storage day/night duty to notify of the situation.


When the mirror has been broken, rescan the remaining hosts in the cluster. This rescan can possibly time out. If this happens, reboot the hosts.

After the rescan/reboot, all shared LUNs will be missing on the hosts. These should be added/mounted manually from the console (step 3). (In ESX4u1 there's a bug in the "add storage" wizard, so it doesn't work from the vSphere client; see this post for more info.)


Putty to each of the hosts and run the following commands:

#esxcfg-volume -l

This will list available volumes. For each volume, run the following command:

#esxcfg-volume -M <label or UUID>

For example:

#esxcfg-volume -M PSAM_REPL_001

See the screendump below for an example:


From the vSphere client, for each of the available hosts, go to Configuration -> Storage and click “Refresh”. Verify that all LUNs appear as before the site failure.


Power on all VMs


Done. In this situation, storage will run from the secondary site. The storage group will be able to reverse the replication seamlessly at a later stage, when the failed site is operational again. This does not require involvement from the VMware group.

Site redundancy with manual breaking of storage mirror

We have just installed a site-redundant cluster for a customer. The cluster consists of two ESX4 hosts on replicated EMC CLARiiON storage. The ESX servers as well as the storage reside in different locations (preferably we would have liked to do it with storage virtualisation and seamless storage failover à la DataCore or SVC, but this was not an option..).

The site redundancy is enabled by using replicated storage. Should the site with the active LUNs fail, then the storage mirror can be broken manually and operation can be resumed on the remaining site.

One thing we discovered was that resignaturing of the LUNs after a broken mirror is no longer necessary, as it was in previous versions. This means that LUNs can be remounted directly without modifications; see the Fibre Channel SAN Configuration Guide, pp. 74-76.
Earlier, you had to first break the mirror, then resignature your LUNs with the advanced setting LVM.EnableResignature, and then add the LUNs. This changed the UUID (and the label of the LUNs for that matter), which meant that all VMs had to be manually re-registered in VirtualCenter. This is a bit time-consuming and not something you want to spend your time on in a disaster scenario.

In vCenter, you can use the "add storage" wizard to remount the LUNs. However, there's a known bug in the software, so it does not work. Instead, it has to be done from the command line with the following commands (rescan the HBAs first; if it hangs, then reboot):

# esxcfg-volume -l (to list available volumes)
# esxcfg-volume -M (to persistently mount volume)
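The two commands above can be combined into a small loop that remounts a set of replica volumes in one go - a sketch only, where the volume labels are example values (only PSAM_REPL_001 is from the actual setup) and labels are assumed to contain no spaces:

```shell
# List the detected snapshot/replica volumes first to see what is available
esxcfg-volume -l

# Persistently mount each replicated volume by its label (example labels)
for label in PSAM_REPL_001 PSAM_REPL_002; do
    esxcfg-volume -M "$label"
done
```
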

See this post for an example site recovery procedure.

Thursday, May 20, 2010

My VMworld session ready for public voting

Update: Unfortunately, my session was not among the lucky winners. Apparently, the world is not ready for exciting service descriptions ;-) Instead, I'll be going to VMworld in CPH as an attendee.

My session has passed the internal review and is now ready for public voting. It is placed under 'Private Cloud - Management' and the title is:

Defining your services and offerings on vSphere


As virtual infrastructures (VI) comprise a complex set of technologies, varying perceptions of virtual infrastructures and virtual servers tend to exist. Ask any VI admin, sales person, or customer, and you will likely get three different answers. As organizations grow, the degree of specialization typically increases, which augments the number of departments that contribute to the service delivery model. A lack of definitions for input, output, and responsibility areas between these interfaces can have negative impacts such as prolonged delivery times and an unclear delivery and pricing model. Another consequence of not defining your services is that someone else will do it for you. This could be the sales department or a solution architect who sells a custom solution due to a lack of existing building blocks. These solutions typically do not scale well, and the technical design tends to be less than optimal. Services, whether it be an ‘ESX operations service’ or a ‘virtual Windows server service’, need to be defined, standardized, and published in a service catalogue. Furthermore, there should be a clear distinction between an internal service and an external customer offering. These matters will be addressed in this session, as well as different examples of how a virtual infrastructure service and a virtual server service can be defined. This session builds on the theoretical framework of the updated ITIL v3, specifically with a focus on Service Design and the Service Catalogue.

Wednesday, March 31, 2010

Identifying your WWN IDs via iLO

For the storage department to be able to zone up one or more LUNs to a given ESX host, they need three pieces of information:

  • ESX host name (FQDN)
  • WWN IDs of the HBAs
  • If it's a new LUN, then the size of the LUN. If you're zoning existing LUNs, they need to know the storage group that the host should be added to (this can be done by providing the hostname of one or two existing hosts that already have that zoning).
The WWN ID can be identified both from the VI client (Configuration -> Storage Adapters) and from the service console. But this can only be done after ESX has been installed.

Sometimes, it can be useful to be able to fetch WWN info before the host has been installed. This way, the storage department can begin zoning right away.

To identify WWN IDs from iLO:

  • Log into iLO either directly or via the blade enclosure
  • Go to the Information tab of your server
  • The WWN ID can be found in the info box for your HBA (see screendump below)

Monday, February 15, 2010

Howto: Installing VMware tools in a Linux VM

Installing VMware Tools in a Linux VM takes a few more steps than on a Windows VM. It is done the following way (tested on VMware Workstation 7 and an Ubuntu Desktop 9.04 VM appliance).
  • install the guest OS (click here to see if guest OS is supported)
  • to exit the GUI and simulate no X server: sudo service gdm stop, then Alt+F1 to get a console
  • right-click the VM and choose Install/Update VMware Tools. This connects the CD-ROM to the VMware Tools ISO file (if the files are not already available, they will be downloaded), but you still need to mount the CD-ROM manually: sudo mount /dev/scd0 /media/cdrom (if the folder doesn't exist, create it first)
  • copy the tar file to the /tmp folder and untar it: tar -xvf VMware-tools-vXX.tar.gz
  • cd to the untarred folder and run sudo ./
  • start the GUI: sudo service gdm start or simply startx
  • verify that VMware Tools is running: sudo ps auxwww | grep vm (look for /usr/bin/vmtoolsd; you will also find the balloon driver vmmemctl). You can also check whether the vmtools startup script has been put into the startup folder /etc/rc0.d/
Link to a VMware KB article on installing VMware Tools (alternatively this KB article)
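The steps above can be sketched as a small shell session; the tarball name is a wildcard since the exact version varies with the Workstation/ESX release:

```shell
# Stop the display manager and work from a text console (Alt+F1)
sudo service gdm stop

# Mount the VMware Tools ISO that was connected to the virtual CD-ROM
sudo mkdir -p /media/cdrom
sudo mount /dev/scd0 /media/cdrom

# Unpack the installer in /tmp and run it
cp /media/cdrom/VMwareTools-*.tar.gz /tmp
cd /tmp
tar -xvf VMwareTools-*.tar.gz
cd vmware-tools-distrib
sudo ./vmware-install.pl

# Bring the GUI back and verify that the tools daemon is running
sudo service gdm start
ps auxwww | grep vmtoolsd | grep -v grep
```
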

Update 2011.10.28: When installing VMware Tools on Red Hat Enterprise Linux 5.6, the installation failed, as it needed gcc and some kernel development packages. I ran the following commands and then reran the tools installation after ./

yum install gcc
yum install kernel-devel
yum install kernel-xen-devel

Thursday, February 11, 2010

Example of an HA error - and a fix

The other day, I got an HA error when trying to add a new host into a cluster. It was weird, as the host was identical to the others - same model, same installation procedure, and everything. In VirtualCenter, the error looked like this:

This piece of information did not help much in relation to troubleshooting.

The only thing that was different with the new host was that it was configured from the service console (COS), as its NICs were DOA. I had used my own guide for this, so I thought I was in good shape ;-).

A more descriptive error was to be found in the VirtualCenter agent log file on the host (/var/log/vmware/vpx/vpxa.log). Grepping for the word "error" gave the following output:

errorcat = "hostipaddrsdiffer",
errotext = "cmd addnote failed for primary node: Host misconfigured. IP address of ... not found on local interface"

Earlier on, I had changed the IP address, as the first one assigned was already in use, but I'd forgotten to change the IP address in the /etc/hosts file. After doing that and restarting the network (service network restart), everything worked fine.
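A quick way to spot this kind of mismatch is to compare the host's configured name and address with what /etc/hosts says - a simple sanity check along these lines:

```shell
# Show the host's idea of its own name and resolved address
hostname
hostname -i

# Check what /etc/hosts maps that name to - the two should agree
grep "$(hostname)" /etc/hosts

# After correcting /etc/hosts, restart the network to apply the change
service network restart
```
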

As a side note, I can mention that it can be pretty confusing manoeuvring through the various log files. Check this post by Eric Siebert for further explanation of VMware log files on VI3.

Wednesday, February 3, 2010

Differences between Windows Server 2008, SP2, and R2

So what are the differences between win2k8, win2k8 SP2, and win2k8 R2? These naming conventions and differences between versions are a constant cause for confusion. So here's the short take:

Win2k8 was first released with SP1. Later on came Win2k8 SP2.
Win2k8 R2 is the new version of the OS that introduces several new features. It has the look and feel of Win7, it is 64-bit only, and Hyper-V Live Migration (~vMotion) is introduced.

There's no SP2 installed on top of Win2k8 R2. R2 is a clean install, or you can upgrade from SP2 to R2. In either case, the SP2 will disappear and it will only be called R2.

The reason for pointing this out is that it was a bit different with Win2k3. There, you installed SP2 and then R2 on top of SP2, and the result was Win2k3 SP2 R2 - so service pack and R2 at the same time.

I found this comparison somewhere and I quite like it (not quite sure how correct it is, though..)

Windows Vista SP1 ~ Windows Server 2008 SP1

Windows Vista SP2 ~ Windows Server 2008 SP2

Windows 7 ~ Windows Server 2008 R2

Thursday, January 14, 2010

Finally the VCP4 certification Welcome Kit arrived

Today, the official VCP4 certification arrived in the mail. It took VMware six months(!) to send the papers. There's no visible sign on the certificate to indicate that it was taken as part of the beta exam - except for the date in the lower left corner (July 16th, 2009), indicating that it was achieved before the official release in August 2009.

As a bonus, a free license for Workstation 7 was included in the package, which is pretty cool.