Thursday, December 13, 2018

Deploy Avere vFXT in Azure

At my current client there is an aim to use Azure Blob for storing large amounts of data and for processing this data in an HPC environment.

A requirement from the users is that they can mount the storage as NFS shares in a POSIX-compliant manner. The problem is that Blob is object storage rather than a file system, so this capability isn't available out-of-the-box.

We have tried different things such as AzCopy v10, Blobfuse (which mounts a Blob container as a file system via FUSE), Rclone, Data Lake Storage Gen2 (currently in tech preview), and we have also looked at Azure Files as an alternative. None of them really fulfills the above requirement.

Avere, at least, claims to be able to solve the problem.

Avere vFXT is essentially a caching layer that you can put in front of either Azure Blob or on-premises storage solutions, and it is meant for high-volume HPC environments (such as Grid Engine or Slurm clusters).

This guide will describe how to deploy Avere vFXT in Azure and connect to Azure Blob storage.

Architecture

Avere consists of one controller VM and three (minimum) cache node VMs that run in a cluster.


  • Controller: Small VM, small disk
  • Cache VMs: Minimum of 3 x 16 vCPUs, 64 GB mem, 1 TB premium SSD disk (4 x 256 GB RAID0)


The controller VM is deployed from the Azure Marketplace and the three cache VMs are deployed via a script that is run from the controller VM.

This MS guide has been used in the process.

MS recommends creating a separate subscription for the deployment. This is not required, but it is a nice-to-have for isolating costs. You do, however, need owner rights on the subscription for part of the installation.

Installation and configuration

To deploy the controller, log on to portal.azure.com and search for:

Avere vFXT for Azure Controller

Click Create and go through the deployment steps, this is pretty standard.

This will deploy the controller VM.

Next, create a new storage account from the Azure portal that we will later connect to Avere as backend storage (also called "core filer").
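If you prefer the CLI over the portal, a minimal sketch for creating the storage account could look like the command below (the account name is just an example; the resource group and location match the values used later in the deploy script):

$ az storage account create \
    --name averedemostorage01 \
    --resource-group CLIENT-Avere-test \
    --location westeurope \
    --sku Standard_LRS \
    --kind StorageV2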

Log in to the Controller with ssh (the user will be the admin user specified during deployment) and run the following steps:

$ az login

This will generate a code and ask you to go to https://microsoft.com/devicelogin and enter the code. When done, return to the SSH session.

Then set the subscription ID. You can find that by searching for "subscriptions" in the Azure Portal.

$ az account set --subscription YOUR_SUBSCRIPTION_ID
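To confirm that the right subscription is now active, you can check the current context:

$ az account show --output table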

Edit the avere-cluster.json file: search for "subscription id", replace it with your own subscription ID and uncomment the line. Save the file.

$ vi /avere-cluster.json
(Click here for quick vi editor guide or use nano)
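If you want to locate the line before opening the editor, a quick grep works too (the exact key text may vary slightly between file versions):

$ grep -n "subscription" /avere-cluster.json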

Create a role for Avere to be able to perform necessary tasks:

$ az role definition create --role-definition /avere-cluster.json
(this is where ownership of the subscription is required; if you don't have it, the command will throw an error)
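To verify that the role was created, you can list it by name (avere-cluster is the role name the deploy script below expects):

$ az role definition list --name avere-cluster --output table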

Next, we need to edit the cluster deployment script. Make a copy of the script first:

$ cd /
$ sudo cp create-cloudbacked-cluster create-cloudbacked-cluster-blob

The original file has 777 permissions, so give the same to the copied file:

$ sudo chmod 777 create-cloudbacked-cluster-blob

Edit the file:

$ vi create-cloudbacked-cluster-blob

Below I have pasted the part of the script that needs editing and filled in example values:

---------------

#!/usr/bin/env bash
set -exu

# Resource groups
# At a minimum specify the resource group.  If the network resources live in a
# different group, specify the network resource group.  Likewise for the storage
# account resource group.
# The resource group below is the one I created when deploying the controller VM
RESOURCE_GROUP=CLIENT-Avere-test

# The network resource group is an existing group where we want to assign IP addresses from
NETWORK_RESOURCE_GROUP=CLIENT_Network_RG
# I did not specify a storage resource group as it is the same as the default resource group above. I added the new storage account to that RG.
#STORAGE_RESOURCE_GROUP=

# eastus, etc.  To list:
# az account list-locations --query '[].name' --output tsv
LOCATION=westeurope

# Your VNET and Subnet names.
# To find network name and subnet name go to Azure Portal -> Virtual Networks -> YOUR_NETWORK -> Subnets
NETWORK=CLIENT_network_name
SUBNET=default

# The preconfigured Azure AD role for use by the vFXT cluster nodes.  Refer to
# the vFXT documentation.
AVERE_CLUSTER_ROLE=avere-cluster

# For cloud (blob) backed storage, provide the storage account name for the data
# to live within.
STORAGE_ACCOUNT=new_storage_account_name

# The cluster name should be unique within the resource group.
CLUSTER_NAME=avere-cluster
# Administrative password for the cluster
ADMIN_PASSWORD=INSERT_PASSWORD_HERE

# Cluster sizing for VM and cache disks.
# D16 is the smaller of the two options
INSTANCE_TYPE=Standard_D16s_v3 # or Standard_E32s_v3
CACHE_SIZE=1024 # or 4096, 8192

# DEBUG="--debug"

# Do not edit below this line

--------------

Save the file and exit.

Run the script:

$  ./create-cloudbacked-cluster-blob

This takes around half an hour to run and spins up the nodes and the management web portal.

The output on screen will show you the IP address of the management server.

The script output (which is also stored in ~/vfxt.log) mentions a warning that you need to create a new encryption key. To do this:
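The same output is written to ~/vfxt.log, so you can follow progress or re-read the warning afterwards with:

$ tail -f ~/vfxt.log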

Log in to the web portal using the IP address:

http://IP_of_management_server

User: admin
Passwd: What was specified in the deploy script above

Go to the Settings tab -> Cloud Encryption Settings

Add a new password, click Generate Key and Download File.

This will download a file. Click Choose File and upload the same file (it's a precaution), then click Activate Key. It will take effect right away.

Make sure you save the key (or certificate) and password as it's needed to access data in a restore or recovery situation.




Next, enable support uploads. This is just a couple of steps; follow this link to do it.

Now the cluster is ready for use and you can mount the Blob storage as an NFS share:

Log in to your client machine (Linux in this example). It should be on the same network as the Avere cluster, or at least be able to reach it.

There are several ways to distribute the client load between the (currently three) deployed cache nodes; this is described here. However, for testing purposes you can also mount directly against the IP address of one of the nodes. The IP address can be found in the web portal under Settings -> Cluster Networks.

From the client, make a new directory and mount the remote share:

$ sudo mkdir /mnt/vfxt
$ sudo mount 172.xx.xx.xx:/msazure /mnt/vfxt

And that's it.
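If you want the mount to come back after a reboot, a minimal /etc/fstab entry could look like the line below (illustrative; plain NFS defaults, adjust mount options to your needs):

172.xx.xx.xx:/msazure   /mnt/vfxt   nfs   hard,proto=tcp   0 0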

Any files you copy there will be cached for 10 minutes and then asynchronously uploaded to Blob.

Note that browsing the files in Storage Explorer will show them in an encrypted, unreadable format, so the files can only be accessed/read via Avere.

Another note: to get true POSIX compliance, including proper ownership of files, a directory service such as Active Directory is required (the boxes have to be domain joined, e.g. via LDAP).








Monday, December 3, 2018

Check external IP address from Linux CLI

Three commands to check the external IP address from the command line interface (CLI) in Linux:

# wget -qO- http://ipecho.net/plain ; echo

or

# curl ipinfo.io/ip

or

# curl icanhazip.com -4



Tuesday, October 2, 2018

Using Iperf3 for bandwidth and throughput tests on Linux

At my current client we had to test the network speed between Azure and a local site.
Initially we used rsync to copy files back and forth, and although it gives an OK indication, it does not show the full line speed, as rsync over SSH encrypts data during transfer (among other things).

Iperf3 is a really simple and easy-to-use tool for testing the bandwidth or line speed between two machines. These can be either Windows or Linux.

Below is how to install and run Iperf3.

The test was done on RHEL 7.5 VMs:

1) Install Iperf3 on both the "client" and the "server":

# sudo yum install iperf3

2) Ensure that TCP traffic is allowed inbound on the "server":

# sudo firewall-cmd --zone=public --add-port=5201/tcp --permanent

# sudo firewall-cmd --reload

If you want to run the test with UDP instead, the following commands should be run:

# sudo firewall-cmd --zone=public --add-port=5201/udp --permanent

# sudo firewall-cmd --reload
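To confirm that the ports are actually open, you can list them afterwards:

# sudo firewall-cmd --zone=public --list-ports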

3) Start Iperf3 on the "server" and put it in listen mode:

# iperf3 -s
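If port 5201 is taken, or you want the server to keep running in the background, iperf3 also has -p and -D flags; the port number below is just an example (remember to open that port in the firewall instead):

# iperf3 -s -p 5202 -D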

4) Start Iperf3 on the "client" with -c and specify the IP of the server:

# iperf3 -c 192.168.1.25
(replace the above IP with the IP of your server)

That's it. The test runs in around a minute and shows the result; see the screen dumps below.

When we ran the test, we could not max out the 1 Gbit line with TCP, so we switched to UDP and increased the packet size with the following command:

# iperf3 -c 192.168.1.26 --bandwidth 10G  --length 8900 --udp -R -t 180

-c specifies to run the command as a client
--bandwidth sets the target bandwidth to 10 Gbit (even though we only have a 1 Gbit line)
--length is the packet size
--udp runs the test with UDP instead of TCP
-R runs the test in reverse, so instead of sending data you are retrieving data. This is useful because you can test both directions without changing the setup
-t is the duration in seconds. We specified 180 seconds to let it run a bit longer.

Run iperf3 --help for more options.

Below is a standard test between two VMs in Azure. Results are shown on both the client and the server.




Tuesday, September 25, 2018

Fixing a corrupt /etc/sudoers file in Linux VM in Azure

I was editing the /etc/sudoers file with nano on a Linux VM (RHEL 7.5) in Azure, trying to disable the password prompt every time I use sudo.

I added the following to the file

root        ALL=(ALL:ALL) ALL
myadminuser     ALL=(ALL:ALL) ALL     NOPASSWD: ALL

Apparently that does not follow the correct syntax, so immediately afterwards I was not able to sudo. Below is the error message:

[myadminuser@MYSERVER ~]$ sudo reboot
>>> /etc/sudoers: syntax error near line 93 <<<
sudo: parse error in /etc/sudoers near line 93
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin


Since you don't have the root password on Azure VMs, you're stuck: the regular user does not have permission to edit the sudoers file, and you can't sudo to root.

You could mount the VM disk to another VM and then edit the file that way, but that is cumbersome.

Fix:

From the Azure portal, start Cloud Shell and choose PowerShell.

Run the following command to make /etc/sudoers editable by the regular user:

az vm run-command invoke --resource-group YOUR_RESOURCE_GROUP --name YOURVM --command-id RunShellScript --scripts "chmod 446 /etc/sudoers"

This gives the regular user permission to edit the file

With nano or vi, undo the changes (I just deleted the NOPASSWD: ALL part):

nano /etc/sudoers (no sudo since you have access)

After editing, run the command below to restore the default permissions on the file.

az vm run-command invoke --resource-group YOUR_RESOURCE_GROUP --name YOURVM --command-id RunShellScript --scripts "chmod 440 /etc/sudoers"

I got the fix from the following link. Note that the syntax has changed a bit.

The useful thing about this command is that you can execute any command as root on your VMs as long as you have access to the Azure portal.

How to edit /etc/sudoers:

To ensure that you don't introduce wrong syntax into the file, use this command to edit it:

visudo

This will open the file in the vi editor, and if you use the wrong syntax you'll get a warning/error.
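You can also check an existing sudoers file for syntax errors without opening an editor at all, using visudo's check mode:

visudo -c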

See this link for a quick guide using vi editor

Update: 2018.11.07: On RHEL 7.5 and with visudo, the below lines work, meaning that with the command:
# sudo su -
you're not prompted for a password

root    ALL=(ALL)       ALL
myadminuser    ALL=(ALL)       NOPASSWD: ALL


Saturday, June 17, 2017

Amazon AWS - first steps after creating an account

After creating an account in Amazon AWS, there are a couple of steps to go through before you start provisioning resources. This is all fairly well described in the AWS documentation, so the info below just summarizes the steps:

What you want to do first is add some additional security to the root user and then create an IAM user with admin rights that will be used going forward. The root user should not be used for daily work.


  1. Log into https://console.aws.amazon.com 
  2. Go to Services -> IAM
  3. Under Security Status it will state that you have already deleted your root access keys. That is because you haven't created any (this is not the same as your account password; access keys are used to e.g. sign programmatic requests via the SDK or REST).
  4. Before enabling multi-factor authentication (MFA), you need a software MFA app. Google Authenticator is a free app for both iPhone and Android. Download this app to your phone.
  5. To enable MFA under IAM, go to: Security Status -> Activate MFA on your root account ->  Manage MFA. This will open a simple wizard. Choose software MFA. A bar code will be presented that should be scanned from the phone. Open Google Authenticator, click the '+' sign and choose 'Scan barcode'. This will add an entry in the app. Type in two consecutive keys in the wizard and that's it. Next time you log in to the account, it will prompt for the six digit key after entering the password.
  6. To create a new user and group for daily use, go to Services -> IAM -> Users -> Add user. This will open a wizard. If you haven't done so already, you'll be prompted to create a group also to place the user in. This group should have full administrative access. Choose the first option in the list, 'AdministratorAccess', this will grant full access
  7. Once the user is created, a direct link to the AWS console will be created that will look something like: https://1562xxxxxxxx.signin.aws.amazon.com/console
  8. To create access keys for the user, go to IAM -> Users -> choose the user -> Security credentials tab -> click Create Access Key. This will let you do a one-time download of the Access Key ID and the Secret Access Key (see the aws configure example after this list)
  9. On the same Security credentials tab, MFA can be enabled for this user by clicking the pencil next to 'Assigned MFA device'. The wizard will be the same as for the root user. When scanning the bar code, a second entry will show up in Google Authenticator, see screen dump below (so one for root account and one for the user)
  10. As a last step you can apply a password policy to your IAM users to make all the check boxes green, see screen dump below.
  11. Done. Now you can log out from your root account and only use the admin user going forward (which should be used for creating further users and groups to do the actual work)
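As a side note, the access keys from step 8 are what you typically feed into the AWS CLI on your own machine. A minimal sketch of that (the CLI must be installed first; the region and output format below are just examples):

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: eu-west-1
Default output format [None]: json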




Thursday, August 20, 2015

Nexus 1000v - will the network fail if the VSMs fail?

At my current client there has been some concern regarding the robustness of the network in relation to the Nexus 1000v. It's a vSphere 5.5 environment running on a Vblock with Nexus 1000v switches (bundled with the Vblock).

The question was whether the network on the ESXi hosts is dependent on the two management 1KV VMs, and whether the network will fail entirely if these two VMs are down.

Furthermore, there was a question of whether all ESXi traffic flows through these two VMs; the management VMs were being perceived as actual switches.

The answer is, for most, probably pretty straightforward, but I decided to verify anyway.

Two notes first:

1) By adding Nexus 1000v to your environment you may gain some benefits, but you also add complexity. Through the looking glass of vCenter, it is simply easier to understand and manage a virtual distributed switch (vDS). Some network admins may disagree, of course.

2) From Googling a bit, and also from general experience, it doesn't seem like that many people are actually using the 1KVs. There is not much info to be found online, and what is there seems a bit outdated.

That said, let's get to it:

The Nexus 1000v infrastructure consists of two parts.

1) Virtual Supervisor Module (VSM). This is a small virtual appliance for management and configuration. You can have one or two. With two VMs, they run in active/passive mode

2) Virtual Ethernet Modules (VEM). These modules are installed/pushed to each of the ESXi hosts in the environment

All configuration of networks/VLANs is done in the VSMs (NX-OS interface) and then pushed to the VEMs. From vCenter it looks like a regular vDS, but you can see in the description that it is a 1KV, see below:


Even if both VSMs should fail, the network will continue to work as before on all ESXi hosts. The network state and configuration is kept separately on each ESXi host in the VEM. However, control is lost and no changes can be made until the VSMs are up and synced again.

As for the other question: no, the VM traffic does not flow through the VSMs; they are only for management. The VM Ethernet traffic flows through the pNICs in the ESXi hosts and on to the network infrastructure, the same as with standard virtual switches and vDSes. This means the VSMs cannot be a bandwidth bottleneck or a single point of failure in that sense.

For documentation: see 45 seconds of this video, from 14:45 to 15:30.

Below are two diagrams that show the overview of VSM and VEM:




Wednesday, July 22, 2015

Cold Migration of Shared Disks (Oracle RAC and NFS clusters)

Certain applications use shared disks (Oracle RAC and NFS clusters, due to clustering features). These VMs can be vMotioned between hosts (for RAC you have to be careful with monster VMs and high loads, as the cluster timeout has a low threshold that can be reached during cut-over), but Storage vMotion is not possible. Migration of the disks has to be done while both virtual machines (or all VMs in the cluster) are shut down (cold migration). The method involves shutting down both the primary and secondary node, removing the shared disk that has to be migrated (without deleting it) from the secondary node, migrating the disk to the new LUN from the primary VM, and then re-adding the disk to the secondary node after the migration is completed (including configuration of the multi-writer flag for the disk). After this, both VMs can be booted.

Note: These are not RDM disks but regular VMDKs with the multi-writer flag set.
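For reference, the multi-writer flag is set as a name/value row under the VM's Configuration Parameters, one row per shared disk, along these lines (the SCSI ID is just an example and must match the disk in question):

Name:  scsi1:0.sharing
Value: multi-writer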

Instruction steps

Steps to migrate shared disks (Oracle RAC and NFS)

  • Identify the two VMs that share disks, note the VM names
  • Identify the disk(s) that should be migrated to new LUN, note the scsi ID for each disk (e.g. SCSI (1:0))
  • Note (mostly for Oracle RAC) if disk is configured in Independent and persistent mode
  • Ensure a maintenance/blackout window is in place
  • Shut down both VMs
  • For secondary VM, go to Edit Settings -> Options -> General -> Configuration Parameters (see screen dump below) and verify if the “multi-writer” flag is set for the disks to be moved
  • While both VMs are shut down, remove the disk(s) from the secondary VM (without deleting it)
  • From primary VM, right click and choose Migrate. Migrate the disk(s) to the new LUN
  • Wait for the process to finish
  • On secondary VM, go to Edit Settings -> Hardware -> Add. Select Hard disk and Use existing hard disk. Browse for the disk in the new location and click add. Make sure the same SCSI ID is used as before
  • For secondary VM, go to Edit Settings -> Options -> General -> Configuration Parameters -> Add row and add the multi-writer flag to each of the re-added disks.
  • (If disk is/was configured in Independent and persistent mode, go to Edit settings -> Hardware -> Mark the disk -> under Mode, check the Independent check-box and verify that the Persistent option is set)
  • Boot the primary VM, boot the secondary VM
  • Ensure that the application is functioning as expected. Done