Thursday, December 13, 2018

Deploy Avere vFXT in Azure

At my current client there is an aim to use Azure Blob storage both for storing large amounts of data and for processing that data in an HPC environment.

A requirement from the users is that they can mount the storage as NFS shares in a POSIX-compliant manner. The problem with this is that Blob is object storage, not a file system, so this capability isn't available out-of-the-box.

We have tried different approaches such as AzCopy v10, Blobfuse (which mounts a Blob container as a virtual file system), Rclone, and Data Lake Storage Gen2 (currently in tech preview), and we have also looked at Azure Files as an alternative. None of them really fulfills the above requirement.

Avere, at least, claims to be able to solve the problem.

Avere vFXT is essentially a cache layer that you can put in front of Azure Blob or on-prem storage solutions, and it is meant for high-volume HPC environments (such as clusters running Grid Engine or Slurm).

This guide will describe how to deploy Avere vFXT in Azure and connect to Azure Blob storage.

Architecture

Avere consists of one controller VM and a cluster of at least three cache node VMs.


  • Controller: small VM, small disk
  • Cache VMs: minimum of 3 x 16 vCPUs, 64 GB memory, 1 TB premium SSD disk each (4 x 256 GB in RAID 0; see the quota check below)
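
Three cache nodes of this size require at least 48 vCPUs of quota in the target region. As a sanity check, you can list the current quota usage with the Azure CLI (a sketch; adjust the location and VM family to your own):

$ az vm list-usage --location westeurope --output table | grep -i DSv3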


The controller VM is deployed from the Azure Marketplace, and the three cache VMs are deployed via a script that is run from the controller VM.

This MS guide has been used in the process.

Microsoft recommends creating a separate subscription for the deployment. This is not required, but it is a nice-to-have in order to isolate costs. You do, however, need to have Owner rights on the subscription for part of the installation.
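
If you are unsure whether you have Owner rights, you can check your role assignments with the Azure CLI (a sketch; replace the e-mail address with your own sign-in):

$ az role assignment list --assignee you@example.com --output table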

Installation and configuration

To deploy the controller, log on to portal.azure.com and search for:

Avere vFXT for Azure Controller

Click Create and go through the deployment steps; this is pretty standard.

This will deploy the controller VM.

Next, create a new storage account from the Azure portal that we will later connect to Avere as backend storage (also called "core filer").
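
If you prefer the CLI over the portal, creating the storage account looks roughly like this (a sketch; the account name is an example and the resource group is the one from the controller deployment):

$ az storage account create \
    --name averebackendstore \
    --resource-group CLIENT-Avere-test \
    --location westeurope \
    --sku Standard_LRS \
    --kind StorageV2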

Log in to the controller with SSH (the user will be the admin user specified during deployment) and run the following steps:

$ az login

This will generate a code and ask you to go to https://microsoft.com/devicelogin and input the code. When done, return to the SSH session.

Then set the subscription ID. You can find that by searching for "subscriptions" in the Azure Portal.
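
You can also list the subscriptions your account has access to directly from the CLI:

$ az account list --output table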

$ az account set --subscription YOUR_SUBSCRIPTION_ID

Edit the avere-cluster.json file: search for "subscription id", replace it with your own subscription ID, and uncomment the line. Save the file.

$ vi /avere-cluster.json
(See a quick vi editor guide if needed, or use nano)
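
Alternatively, a sed one-liner can do the substitution (a sketch that assumes the placeholder in the file is literally "subscription id"; check the file first, as the exact placeholder text may differ):

$ sudo sed -i 's/subscription id/YOUR_SUBSCRIPTION_ID/' /avere-cluster.json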

Create a role for Avere to be able to perform necessary tasks:

$ az role definition create --role-definition /avere-cluster.json
(this is where ownership of the subscription is required; if you don't have it, the command will throw an error)
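
You can verify that the role was created:

$ az role definition list --name avere-cluster --output table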

Next we need to edit the cluster deploy script. Make a copy of the script first:

$ cd /
$ sudo cp create-cloudbacked-cluster create-cloudbacked-cluster-blob

The original file has 777 permissions, so give the same to the copied file:

$ sudo chmod 777 create-cloudbacked-cluster-blob

Edit the file:

$ vi create-cloudbacked-cluster-blob

Below I have pasted the part of the script that needs editing, with example values filled in:

---------------

#!/usr/bin/env bash
set -exu

# Resource groups
# At a minimum specify the resource group.  If the network resources live in a
# different group, specify the network resource group.  Likewise for the storage
# account resource group.
# The resource group below is the one I created when deploying the controller VM
RESOURCE_GROUP=CLIENT-Avere-test

# The network resource group is an existing group from which IP addresses will be assigned
NETWORK_RESOURCE_GROUP=CLIENT_Network_RG
# I did not specify a storage resource group as it is the same as the default resource group above. I added the new storage account to that RG.
#STORAGE_RESOURCE_GROUP=

# eastus, etc.  To list:
# az account list-locations --query '[].name' --output tsv
LOCATION=westeurope

# Your VNET and Subnet names.
# To find network name and subnet name go to Azure Portal -> Virtual Networks -> YOUR_NETWORK -> Subnets
NETWORK=CLIENT_network_name
SUBNET=default

# The preconfigured Azure AD role for use by the vFXT cluster nodes.  Refer to
# the vFXT documentation.
AVERE_CLUSTER_ROLE=avere-cluster

# For cloud (blob) backed storage, provide the storage account name for the data
# to live within.
STORAGE_ACCOUNT=new_storage_account_name

# The cluster name should be unique within the resource group.
CLUSTER_NAME=avere-cluster
# Administrative password for the cluster
ADMIN_PASSWORD=INSERT_PASSWORD_HERE

# Cluster sizing for VM and cache disks.
# D16 is the smaller of the two options
INSTANCE_TYPE=Standard_D16s_v3 # or Standard_E32s_v3
CACHE_SIZE=1024 # or 4096, 8192

# DEBUG="--debug"

# Do not edit below this line

--------------

Save the file and exit.

Run the script:

$ ./create-cloudbacked-cluster-blob

This takes around half an hour to run and spins up the nodes and the management web portal.
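
Since it is a long-running script, it can be handy to follow the log from a second SSH session:

$ tail -f ~/vfxt.log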

The output on screen will show you the IP address of the management server.

The script output (which is also stored in ~/vfxt.log) includes a warning that you need to create a new encryption key. To do this:

Log in to the web portal using the IP address:

http://IP_of_management_server

User: admin
Password: the ADMIN_PASSWORD specified in the deploy script above

Go to the Settings tab -> Cloud Encryption Settings

Add a new password, click Generate Key and Download File.

This will download a file. Click Choose File and upload the same file (it's a precaution), and then click Activate Key. It will take effect right away.

Make sure you save the key (or certificate) and password, as they are needed to access the data in a restore or recovery situation.




Next, enable Support uploads. This is just a couple of steps; follow this link to do it.

Now the cluster is ready for use and you can mount the Blob storage as an NFS share:

Log in to your client machine (Linux in this example). It should be on the same network as the Avere cluster, or at least be able to reach it.

There are several ways to distribute the client load between the (currently three) deployed cache nodes; this is described here. However, for testing purposes you can also mount directly on the IP address of one of the nodes. The IP addresses can be found in the web portal under Settings -> Cluster Networks.

From the client, make a new directory and mount the remote share:

$ sudo mkdir /mnt/vfxt
$ sudo mount 172.xx.xx.xx:/msazure /mnt/vfxt
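
To make the mount persistent across reboots, you can add a line like the following to /etc/fstab (a sketch; the export path is the one used above, and the mount options are examples, so check the Avere documentation for the recommended set):

172.xx.xx.xx:/msazure  /mnt/vfxt  nfs  hard,proto=tcp,mtu=1500,retrans=2  0 0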

And that's it.

Any files you copy there will be cached for 10 minutes and then asynchronously uploaded to Blob.
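
A quick way to test the write-back behavior (the file name is just an example):

$ sudo dd if=/dev/urandom of=/mnt/vfxt/testfile.bin bs=1M count=100

After roughly 10 minutes, the corresponding objects should appear in the storage account.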

Note that browsing the files in Storage Explorer will show them in an encrypted, unreadable format, so the files can only be accessed/read via Avere.

Another note: to achieve true POSIX compliance, including proper ownership of files, a directory service such as Active Directory is required (the machines have to be domain-joined, e.g. via LDAP).







