Wednesday, November 20, 2024

Azure: Creating a route table in Bicep and addressing linter rule warning

I was creating a route table in Bicep using an example from MSFT, see example here.

The example uses the 'contains' function, which throws a warning when deploying, along with a suggestion to simplify the syntax. Below you can see the warning:


The warning links to the Linter rule - use safe access page.

The suggested change can be seen below. It still performs the same function: it checks if the value exists; if it does, it uses it, and if not, it sets the value to null.
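Here's a sketch of both variants (the property names below are hypothetical examples, not the exact code from the MSFT template):

// Triggers the 'use-safe-access' linter warning:
nextHopIpAddress: contains(route.properties, 'nextHopIpAddress') ? route.properties.nextHopIpAddress : null

// Suggested replacement using the safe-dereference operator:
nextHopIpAddress: route.properties.?nextHopIpAddress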


The updated example can be found on Github here (it uses routes for Databricks as an example, but can be replaced with your own routes).

Tuesday, November 5, 2024

Azure: Creating a naming convention for cloud resources

Creating and maintaining a well-functioning naming convention can be surprisingly difficult, and it can lead to many a heated discussion. One of the reasons for this is that there isn't necessarily one right answer; in the end it's about convention and just making a decision and sticking to it.

To help with standardizing resource names in Azure, MSFT has a good article under the Cloud Adoption Framework with suggestions for most of the Azure resources. It's a good starting point:

https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/resource-naming
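As an illustration, a convention can also be encoded as-code so it is applied consistently. Below is a minimal Bicep sketch (the pattern and abbreviations are hypothetical examples, loosely based on the CAF suggestions):

param workload string = 'payroll'
param environment string = 'prod'
param regionShort string = 'weu'
param instance string = '001'

// Produces names like 'vnet-payroll-prod-weu-001' and 'rt-payroll-prod-weu-001'
var vnetName = 'vnet-${workload}-${environment}-${regionShort}-${instance}'
var routeTableName = 'rt-${workload}-${environment}-${regionShort}-${instance}'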

Friday, October 4, 2024

Azure: Create a Workbook for centralized NSG log search - using VNet flow logs

NSG flow logs will be retired in 2025 and replaced by VNet flow logs, see more details here.

This article will describe how to install a new Azure Workbook that enables centralized NSG log search using VNet flow log data. Note that to be able to retrieve and view logs with this workbook, VNet flow logs must be deployed and configured in advance (you can deploy the workbook up front, but it will return empty queries).

The workbook is based on an existing workbook for NSG log search that uses NSG flow log data (see NSG workbook here). The logic is the same in the new workbook; it just retrieves data from another source (or another table in the Log Analytics workspace, to be precise).

First, let's view the final result. Below you can see a screenshot of the workbook:


To install the workbook, do the following:

Download the workbook file from Github.

Go to Azure Portal -> Monitor -> Workbooks -> Click '+New'

Click on the Advanced editor, see below:


Choose Gallery template, then paste in the workbook code and click apply, see below:


That's it, this will create the workbook.

To ensure that you can find the workbook via Portal -> Monitor -> Workbooks, you have to go into settings and choose Azure Monitor as a resource, see below (this might be set by default):



If you link it only to the log analytics workspace, then you have to navigate to the workspace first and then choose the workbook menu to see it.

If done right, you can navigate to Portal -> Monitor -> Workbooks -> Workbooks tab -> Click workbook to view, see below:


Below you can see the main Kusto query that the workbook uses (this is also included in the workbook file itself):


Note that if you run this as a query it won't return any results.

If you want to run a Kusto query directly (that returns a similar result), do the following:

Go to Portal -> Monitor -> Logs -> Choose Kusto mode (instead of simple mode) -> paste in the below query and click run:

NTANetAnalytics
| where SubType == "FlowLog"
| extend Protocol = case(L4Protocol == 'T', "TCP", L4Protocol == 'U', "UDP", L4Protocol)
| extend TimeGenCET = datetime_utc_to_local(TimeGenerated, 'Europe/Stockholm')
| extend TimeGeneratedCET = format_datetime(TimeGenCET,'dd/MM/yyyy [HH:mm:ss]')
| project-rename NSGFL_Version = FaSchemaVersion
| project-rename AllowedOrDenied = FlowStatus
| project TimeGeneratedCET, VNet = split(TargetResourceId,"/")[2], NSG = split(AclGroup,"/")[8], SrcPublicIps, SrcIp, DestIp, DestPort, Protocol, L7Protocol, FlowDirection, AllowedOrDenied, AclRule, NSGFL_Version, FlowEncryption, Source_subnet = split(SrcSubnet,"/")[2], Destination_subnet = split(DestSubnet,"/")[2]  
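
If you want to narrow the result down to a specific host, a filter can be added near the top of the query (the IP below is just a hypothetical example):

NTANetAnalytics
| where SubType == "FlowLog"
| where SrcIp == "10.10.1.4" or DestIp == "10.10.1.4"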

You can see an example below:



Azure: NSG flow logs will be retired in June 2025

On September 26, 2024, MSFT announced that NSG flow logs will be retired. More specifically, no new NSG flow logs can be created after June 30, 2025, and existing NSG flow logs will be retired on September 30, 2027, see link here or screenshot below:


The recommendation is to switch to VNet flow logs, which cover the same ground as NSG flow logs plus some more things, see more info here. It also simplifies the setup somewhat, as there will only be one flow log resource per VNet as opposed to one per NSG.

If you want to read more about how VNet flow logs work, which tables it creates in log analytics workspace, and how to query data (using Kusto queries), I can recommend this great article from cloudtrooper.net.

Depending on how NSG flow logs have currently been configured in your environment, this can be a bit complicated to address and involves multiple steps. At my current client, the NSG flow logs are deployed via an Azure policy (using the DeployIfNotExist effect). Logs are shipped both to a storage account and to a Log Analytics workspace. An Azure workbook is in place which reads data from the LA workspace so that NSG flow logs can be viewed and sorted in an easy manner (specifically, this relates to all traffic in and out of the NSGs: source/dest, port, protocol, etc.).

Before retiring NSG flow logs, a similar setup has to be in place for VNet flow logs. The good news is that both flow log types can run in parallel, as they store data in two different tables in the LA workspace. So VNet flow logs can be set up and verified before NSG flow logs are retired.
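
As a quick sanity check while both are running in parallel, you can compare ingestion into the two tables (the AzureNetworkAnalytics_CL table assumes Traffic Analytics is enabled for the NSG flow logs):

NTANetAnalytics | summarize count() by bin(TimeGenerated, 1h)
AzureNetworkAnalytics_CL | summarize count() by bin(TimeGenerated, 1h)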

There are two main items to be addressed (this will depend on your specific setup). These will be described in separate posts:

  • An Azure policy that deploys VNet flow logs for all VNets (there is a built-in policy, but we had to modify it a bit to allow for multiple subscriptions. Update 2024.10.07: The built-in policy actually works, it's just a bit unclear that it will work across subscriptions since a strongType parameter is used for choosing the network watcher resource)
  • An Azure workbook to view/search through NSG logs using the VNet flow logs data, see article here.

Friday, September 13, 2024

Using Azure Verified Modules (AVM) - Bicep

In January 2024 Microsoft launched a new initiative called Azure Verified Modules (AVM). It's a collection of official, supported Bicep and Terraform modules that make it easier to deploy resources in a standardized fashion as-code.

It's easy to use and fast to get started with. And I was actually surprised at how well it works. One of the reasons for this initiative is that until now there hasn't been a formal, centralized repository for modules or templates, so people have been relying on either their own or some public repo that might not be maintained over time.

The link for AVM is: http://aka.ms/avm

And the getting started guide (which is quite good) is here: http://aka.ms/avm/using

You should have VS Code installed along with the Bicep extension, as well as Azure CLI, see install info here for Windows or here for MacOS.

And then you just follow the guide.

To use the modules you have to have an internet connection from your source. If not, you can download a local copy of all the content and reference the modules locally.

For each resource module there is a basic version and an extended version with more options. You can copy the additional parameters from the extended version into the basic version or start with the extended version and remove the parts you don't need.

The good thing about modules is that most of the code (or the Bicep file) is managed/written by MSFT and you only have to reference the module in your Bicep file and fill in the relevant parameters. Below you can see a file for a basic blob storage account. 
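A minimal sketch of what such a file can look like (the module version should be checked against the AVM index, and the parameter names verified against the module's documentation; the account name is a hypothetical example):

module storageAccount 'br/public:avm/res/storage/storage-account:0.9.1' = {
  name: 'storageAccountDeployment'
  params: {
    name: 'stavmblobdemo001'              // must be globally unique
    location: resourceGroup().location
    skuName: 'Standard_LRS'
    kind: 'StorageV2'
  }
}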


There are several ways to deploy the code, but one is using Azure CLI, see below:

From VS Code, open a terminal and login to Azure:

> az login

If you are using the newest version of Azure CLI, you will be presented with a list of available subscriptions; choose the relevant one (alternatively run: az account set --subscription <sub name>, see more info here).

Navigate to the folder where your Bicep files are located.

Deploy the Bicep with a what-if first (optional):

> az deployment group what-if --resource-group "<resource group name>" --template-file "<bicep file.bicep>"

And to deploy:

> az deployment group create --resource-group "<resource group name>" --template-file "<bicep file.bicep>"

I tested out a few of the modules, and they all worked fine. These are:
  • Blob storage account
  • Private DNS zone (for key vaults)
  • Private endpoint with privateDnsZoneGroup (adds A record in PDNS zone), requires existing PDNS zone for blob storage
  • Simple Windows virtual machine with public IP
  • VNet with one subnet and a VNet peering (requires existing Hub VNet)
The files can be found on GitHub, see link here.






Friday, August 30, 2024

Azure: ARM template for resource health alerts

Azure keeps track of the general resource health of a number of components. This information can be viewed under the resource -> Help -> Resource health. However, this information is available only in the portal. To receive alerts about resource health, they must be configured separately.

Resource health alerts belong, in my opinion, in the category of essential monitoring. You want to know if any of your key components become unavailable or degraded. Once this is set up and running, implementing utilization-based alerts would be the natural next step. See here for an example of a VPN tunnel bandwidth utilization alert.

Resource health alerts can be defined at a per-component-type level. This means that you don't have to have any of the components deployed or specified in advance. If you deploy a health alert for e.g. VPN gateways, then all existing and future VPN gateways will be included in the scope (this also includes ExpressRoute virtual network gateways).

The ARM template in this example (based on this MSFT ARM template) is useful for a connectivity hub where you have components such as VPN gateways, connections, firewalls, and load balancers.

If you want to add or remove components, it is done in the following section of the code:
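As a rough sketch, the section has the following shape (this is a hedged reconstruction; compare with the actual template on GitHub before editing):

"anyOf": [
  { "field": "resourceType", "equals": "Microsoft.Network/virtualNetworkGateways" },
  { "field": "resourceType", "equals": "Microsoft.Network/connections" },
  { "field": "resourceType", "equals": "Microsoft.Network/azureFirewalls" },
  { "field": "resourceType", "equals": "Microsoft.Network/loadBalancers" }
]

Adding or removing a line in the anyOf block adds or removes a component type from the alert scope.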



Once the alert is deployed, it can be viewed in the portal under Monitor -> Alerts -> Alert rules, see below:



The ARM template also deploys an action group which sends the alerts as emails to the address specified in the parameters file.

The files are available on GitHub:

alert-ResourceHealthAlert-weu-001.json

alert-ResourceHealthAlert-weu-001.parameters.json


Azure: ARM template for bandwidth utilization alert for VPN gateways

Part of having a good setup in Azure, and in the Hub specifically, is having proper monitoring configured. This article describes a bandwidth utilization alert for VPN gateways using a static threshold, as well as an action group that sends emails to a specified address.

In addition to having resource health alerts configured on relevant resources (which will let you know if resources become unavailable), it is also beneficial to know if key components are reaching or exceeding their capacity. This can be done using utilization type alerts.

The default setting if you add a tunnel bandwidth alert for a VPN gateway is dynamic thresholds. But unless you have a fairly consistent usage pattern, we've seen that too many alerts are thrown, which generates unnecessary noise (it might be a bit better if sensitivity is changed to low as opposed to the default medium setting).

Instead, it makes sense to configure the alert using a static threshold. What that threshold should be depends on your specific setup, but it could be e.g. 75% or 90% of the SKU capacity.

The VPN SKU capacity is specified in Gbps and the threshold is defined in bytes, so you have to make that conversion. Below are some examples of how to calculate, using the SI/decimal system for bits and the binary system for bytes (though I actually think that MSFT is using the SI system for bytes as well, but it is not overly important in this context).

Examples are for a threshold of 2.5 Gbit for a 3 Gbps gateway and 0.9 Gbit for a 1 Gbps gateway:

  • 1 Gbit = 1,000,000,000 bits
  • 1 byte = 8 bits
  • 1 Gbit = (1,000,000,000 / 8) = 125,000,000 bytes

  • 2.5 Gbit = (125,000,000 * 2.5) = 312,500,000 bytes
  • 0.9 Gbit = (125,000,000 * 0.9) = 112,500,000 bytes

  • 1 kilobyte (KB) = 2^10 bytes = 1,024 bytes
  • 1 megabyte (MB) = 2^20 bytes = 1,024 * 1,024 bytes = 1,048,576 bytes

  • 1 Gbit = (125,000,000 / 1,048,576) = 119.2 megabytes
  • 2.5 Gbit = (312,500,000 / 1,048,576) = 298 megabytes
  • 0.9 Gbit = (112,500,000 / 1,048,576) = 107.3 megabytes
The ARM template deploys alerts for two VPN gateways with two different thresholds. The VPN gateways must be deployed in advance (or you can adjust the template to use just one GW).

Only the parameter files need to be updated with relevant info.

The ARM template is available on Github:



The template contains an action group with two email addresses configured; this can be reduced to just one if needed.
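
For illustration, a comparable static-threshold alert could look like the sketch below in Bicep (this is not the ARM template from this post; the metric and property names follow the Azure Monitor metric alert schema, and the resource names are hypothetical):

param vpnGatewayId string   // resource ID of an existing VPN gateway
param actionGroupId string  // resource ID of an existing action group

resource tunnelBandwidthAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
  name: 'alert-vpngw-tunnelbandwidth'
  location: 'global'
  properties: {
    severity: 2
    enabled: true
    scopes: [vpnGatewayId]
    evaluationFrequency: 'PT5M'
    windowSize: 'PT5M'
    criteria: {
      'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
      allOf: [
        {
          criterionType: 'StaticThresholdCriterion'
          name: 'TunnelBandwidthHigh'
          metricName: 'TunnelAverageBandwidth'   // reported in bytes/s
          operator: 'GreaterThan'
          threshold: 312500000                   // 2.5 Gbit expressed in bytes, per the calculation above
          timeAggregation: 'Average'
        }
      ]
    }
    actions: [
      {
        actionGroupId: actionGroupId
      }
    ]
  }
}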


Tuesday, June 11, 2024

Azure: VNet with VNet peering in Bicep

Here's a Bicep file, including a module, which installs a VNet and a VNet peering to an existing VNet. A VNet peering consists of a local and a remote peering. The remote peering is installed outside the scope of the current deployment (in a different resource group) and so has to be addressed specifically. With ARM templates this is solved by using nested templates. With Bicep it is done using modules (a separate Bicep file which is referenced from the main Bicep file).
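
For illustration, the module reference in the main Bicep file looks roughly like this (the parameter names are hypothetical; see the actual files linked below):

param hubSubscriptionId string
param hubResourceGroupName string
param hubVnetName string
param spokeVnetId string

module hubPeering 'hubPeering_module.bicep' = {
  name: 'hubPeeringDeployment'
  // Deploys the remote peering outside the current deployment's scope,
  // into the resource group that holds the Hub VNet
  scope: resourceGroup(hubSubscriptionId, hubResourceGroupName)
  params: {
    hubVnetName: hubVnetName
    spokeVnetId: spokeVnetId
  }
}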

The Bicep file references an existing VNet (which typically would be a Hub VNet), so this has to be in place beforehand.

To use the files, simply update the parameters file with relevant info and update the path to the module in the Bicep file. If you deploy using Azure Pipelines or Github Actions, make sure that the module file is not placed anywhere it will be deployed directly (this throws an error); it should only be referenced.

The files are available here on Github:

vnetWithPeering.bicep

vnetWithPeering.parameters.json

hubPeering_module.bicep

Monday, June 10, 2024

Formatting JSON files

 If you have a JSON file which is not properly formatted, there are several ways to fix it.

An online tool such as JSON Formatter & Validator can be used.

If using Visual Studio Code, the formatting feature is built in. Just right click anywhere in the JSON file and choose Format Document (or press Shift+Alt+F), see below:



Azure policy: Deny VNet peering to non-approved VNets

 As part of the Azure Landing Zone architecture, there is a policy that denies the creation of VNet peerings to non-allowed VNets, see policy here.

This policy is relevant to apply in a hub-spoke setup where you want to prevent spoke VNets, or spoke landing zones, from creating VNet peerings to anything other than the defined Hub VNets.

At my current client we've been running this policy since December 2023, and it's been working fine.

However, about a month ago some policy evaluation behavior changed (neither the policy itself nor the templates had changed), and now, for certain Bicep files, the policy blocks deployments even when approved VNets are used for the peering. It wasn't for all Bicep files, and ARM templates still worked.

Microsoft Support came up with a minor update to the policy definition that effectively keeps the same rule with slightly different syntax. This works.

We haven't found an explanation yet as to why there was a change in policy evaluation behavior.

But the updated policy can be found here on Github.

Tuesday, June 4, 2024

Azure policy: Auto create DNS records in private DNS zones using multiple zones

There are certain Azure PaaS services that, when used together with private endpoints, require not just one A record in one DNS zone (as is the case for most PaaS services) but multiple A records in multiple private DNS zones (PDNS).

Examples of PaaS services that require this are (see here for full list): 

  • Azure Monitor
  • Power BI
  • Azure Data Explorer
  • Azure Arc
  • Azure Health Data Services

When using private endpoints at scale, MSFT recommends using Azure Policy for the A record creation. I've described this in a previous article, see here.

But the MSFT policy to create DNS records can only handle a single PDNS zone, not multiple PDNS zones.

I found a fix for this on the 'Blog Cloud63' in this post (and here's a direct link to his policy definition), so thanks go to Vincent Misson for that.

He basically expands on the MSFT policy for the resource 'privateDnsZoneGroups' by adding a copy loop to the properties of the resource. The copy loop then goes through an array of PDNS zone resource IDs and adds multiple items under properties. The privateDnsZoneGroups resource is what actually creates the A records in the PDNS zones.

Below you can see a snippet of the code with the copy loop (modified policy):
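A hedged reconstruction of the pattern (parameter names may differ; see the linked policy definition for the exact code):

"properties": {
  "copy": [
    {
      "name": "privateDnsZoneConfigs",
      "count": "[length(parameters('privateDnsZoneIds'))]",
      "input": {
        "name": "[last(split(parameters('privateDnsZoneIds')[copyIndex('privateDnsZoneConfigs')], '/'))]",
        "properties": {
          "privateDnsZoneId": "[parameters('privateDnsZoneIds')[copyIndex('privateDnsZoneConfigs')]]"
        }
      }
    }
  ]
}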


And without the copy loop (default MSFT policy):
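Which, as a sketch, is a single static entry instead of the copy loop:

"properties": {
  "privateDnsZoneConfigs": [
    {
      "name": "privateDnsZone",
      "properties": {
        "privateDnsZoneId": "[parameters('privateDnsZoneId')]"
      }
    }
  ]
}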


I have uploaded my slightly modified version of the policy to Github.

It has been tested with Azure Monitor, specifically with private endpoints for Azure Monitor Private Link Scope (AMPLS) and it works as expected (11 records are created in 5 separate PDNS zones).



Saturday, January 13, 2024

Azure: Subnets with multiple IP address spaces?

We were having a look today at the Azure documentation for virtual networks, and subnets specifically. Both for Bicep and ARM there are two options for specifying the address prefix (address space) of a subnet. The first one is "addressPrefix", which takes a string as input, and the second one is "addressPrefixes", which takes an array, see below. This leads one to expect that you can provide multiple IP address ranges for the subnet in the array, in the same way that it can be done for VNets.
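The two options look like this in a subnet definition (a minimal Bicep sketch; the range is a hypothetical example):

// Option 1: a single string
properties: {
  addressPrefix: '10.0.1.0/24'
}

// Option 2: an array, suggesting that multiple ranges should be possible
properties: {
  addressPrefixes: [
    '10.0.1.0/24'
  ]
}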


If you try to manually add an additional IP range to an existing subnet via the portal, it will show an error.

If you try to deploy multiple ranges via an ARM template, it still throws an error, but we get a bit more information in the error message, see below:


The error states that the subscription is not registered for the following feature:

Microsoft.Network/AllowMultipleAddressPrefixesOnSubnet

This is usually handled under Resource providers for the subscription. However, if you go to Subscriptions -> Resource providers in the Portal, this feature is not there to enable.


It is possible, however, to register the feature via Azure CLI. But when you run the command, the feature goes from "not registered" to "pending" and then just stays like that, never moving to "registered".

It looks like below:



The commands are:

az feature register --namespace Microsoft.Network -n AllowMultipleAddressPrefixesOnSubnet --subscription <subscriptionId>

and:

az feature show --namespace Microsoft.Network -n AllowMultipleAddressPrefixesOnSubnet --subscription <subscriptionId>

It turns out that this feature is available only to MSFT developers and is not available in either public or private preview. There is not much info around this, as I suppose it is not really a sought-after feature.

I found this explanation and also response from MSFT, see link here.

And then another person had the same issue as late as Jan 8, 2024, see link here.