
Tuesday, April 15, 2025

Azure: Troubleshoot connectivity to a key vault with a private endpoint

If you have a key vault that you can't reach, there can be multiple reasons for this. Two of the main ones are DNS issues and firewall blocks.

This post will go over those two issues and show a couple of ways to test for connectivity.

When working in hybrid setups with an on-prem location connected to Azure via either VPN or ExpressRoute, it can happen that you can create and see the key vault (this also goes for e.g. storage accounts and other PaaS services) but get an error when trying to add a secret or other content to it. The error can mention e.g. "the connection to the data plane failed".

To troubleshoot, first ensure that the key vault FQDN resolves to the local IP of the private endpoint.

nslookup myown-keyvault.vault.azure.net

On Linux you can use the command dig to get slightly better lookup details than with nslookup:

dig myown-keyvault.vault.azure.net

This should resolve to a local IP. If it doesn't resolve at all or if it returns a public IP, there is something wrong with the DNS setup.
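If you want to script this check, below is a minimal PowerShell sketch that resolves the FQDN and flags whether the answer is a private (RFC 1918) address. The vault name is a placeholder - replace it with your own:

$fqdn = 'myown-keyvault.vault.azure.net'
# Resolve all A records for the key vault FQDN
$ips = (Resolve-DnsName $fqdn -Type A).IPAddress
foreach ($ip in $ips) {
    # Match the RFC 1918 private ranges 10/8, 172.16/12 and 192.168/16
    if ($ip -match '^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)') {
        "$ip - private IP, resolution via the private endpoint looks correct"
    } else {
        "$ip - public IP, check the private DNS zone and DNS forwarding setup"
    }
}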

More info on troubleshooting DNS can be found here.

If that works, you can check for connectivity from your source. This can be done in a couple of ways and either of them is fine.

From Windows run (tnc is an alias for Test-NetConnection):

tnc myown-keyvault.vault.azure.net -port 443

From Linux run (netcat and nc are the same command):

nc -zv myown-keyvault.vault.azure.net 443

All data to a key vault goes over port 443, so if the name resolves and you have connectivity on that port, you should be good.

Alternatively try:

$(Invoke-WebRequest -UseBasicParsing -Uri https://myown-keyvault.vault.azure.net/healthstatus).Headers

Or from Linux:

curl -i https://myown-keyvault.vault.azure.net/healthstatus

There is more info on troubleshooting behind a firewall here.

Thursday, April 10, 2025

Azure: Adding KQL queries as Resource Graph queries

Using the Resource Graph Explorer is a helpful way to retrieve information about your Azure environment with KQL. If you have certain queries that you run on a frequent basis and want to save, this can be done under Resource Graph queries.

If they are saved as 'shared' queries they can be viewed and run by others. Private queries can only be seen by you.

Resource Graph queries can be added via code and it is also supported by Azure Verified Modules.
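As an example of adding a query via code, here is a hedged sketch using the Az.ResourceGraph module's New-AzResourceGraphQuery cmdlet (the names, resource group, and query below are just examples; if the cmdlet isn't available in your module version, the same Microsoft.ResourceGraph/queries resource can be deployed via the Bicep AVM files mentioned above):

# Requires the Az.ResourceGraph module
New-AzResourceGraphQuery -Name "vnets-overview" `
    -ResourceGroupName "rg-queries" `
    -Location "westeurope" `
    -Description "List all VNets in the environment" `
    -Query 'Resources | where type == "microsoft.network/virtualnetworks" | project id, name'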

The below screenshot shows what it looks like when a few queries have been added:


If you click one of the queries, it will show the query itself in a new window:


And on the Results tab, you can run the query directly:


The Bicep AVM files are available on Github, see link.

The queries can be placed in any resource group.

Here are some examples of more queries from MS.




Tuesday, April 8, 2025

Retrieve custom DNS server settings on VNets using Kusto (KQL)

 You can use Kusto Query Language (KQL) to retrieve information about your Azure environment across subscriptions.

It's a fast tool and very useful in larger environments with many subscriptions and resources.

The following example shows how to list all VNets in the environment including their custom DNS server settings.

Go to Azure Portal -> Azure Resource Graph Explorer

In the query window, add the following KQL:

Resources
| where type == "microsoft.network/virtualnetworks"
| extend dnsServers = properties.dhcpOptions.dnsServers
| project id, name, dnsServers

Click: Run query

The result will look like below (custom DNS is only set on one VNet):



You can download the result as a CSV and import it into Excel for sorting and filtering.
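The same query can also be run from PowerShell with the Az.ResourceGraph module, which is handy if you want to automate the CSV export (the output file name is just an example):

$query = @'
Resources
| where type == "microsoft.network/virtualnetworks"
| extend dnsServers = properties.dhcpOptions.dnsServers
| project id, name, dnsServers
'@

# Search-AzGraph returns 100 rows by default; -First raises the limit
Search-AzGraph -Query $query -First 1000 |
    Export-Csv -Path .\vnet-dns-servers.csv -NoTypeInformation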



Wednesday, November 20, 2024

Azure: Creating a route table in Bicep and addressing linter rule warning

I was creating a route table in Bicep using an example from MSFT, see here.

The example uses the 'contains' function, which throws a warning during deployment along with a suggestion to simplify the syntax. Below you can see the warning:


The warning links to the Linter rule - use safe access page.

The suggested change can be seen below. It performs the same function - it checks if the value exists; if it does, it is used, and if not, the value is set to null. In practice this means replacing an expression like contains(obj, 'key') ? obj.key : null with the safe-dereference form obj.?key.


The updated example can be found on Github here (it uses routes for Databricks as an example, but can be replaced with your own routes).

Friday, October 4, 2024

Azure: Create a Workbook for centralized NSG log search - using VNet flow logs

NSG flow logs will be retired in 2025 (see more details here) and replaced by VNet flow logs.

This article will describe how to install a new Azure Workbook that enables centralized NSG log search using VNet flow log data. Note that to be able to retrieve and view logs with this workbook, VNet flow logs must be deployed and configured in advance (you can deploy the workbook up front, but it will return empty queries).
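If you need to enable VNet flow logs from code, here is a hedged sketch using Az.Network's New-AzNetworkWatcherFlowLog; newer Az.Network versions accept a VNet as the target resource, and all names and IDs below are placeholders for your own environment:

$vnetId = "/subscriptions/<subId>/resourceGroups/rg-net/providers/Microsoft.Network/virtualNetworks/vnet-hub"
$storId = "/subscriptions/<subId>/resourceGroups/rg-logs/providers/Microsoft.Storage/storageAccounts/stflowlogs"
$lawId  = "/subscriptions/<subId>/resourceGroups/rg-logs/providers/Microsoft.OperationalInsights/workspaces/law-central"

# Traffic analytics must be enabled for the NTANetAnalytics table used by the workbook to be populated
New-AzNetworkWatcherFlowLog -NetworkWatcherName "NetworkWatcher_westeurope" `
    -ResourceGroupName "NetworkWatcherRG" -Name "fl-vnet-hub" `
    -TargetResourceId $vnetId -StorageId $storId -Enabled $true `
    -EnableTrafficAnalytics -TrafficAnalyticsWorkspaceId $lawId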

The workbook is based on an existing workbook for NSG log search that uses NSG flow log data (see NSG workbook here). The logic is the same in the new workbook; it just retrieves data from another source (or another table in the Log Analytics workspace, to be precise).

First, let's view the final result. Below you can see a screenshot of the workbook:


To install the workbook, do the following:

Download the workbook file from Github.

Go to Azure Portal -> Monitor -> Workbooks -> Click '+New'

Click on the Advanced editor, see below:


Choose Gallery template, then paste in the workbook code and click apply, see below:


That's it, this will create the workbook.

To ensure that you can find the workbook via Portal -> Monitor -> Workbooks, you have to go into settings and choose Azure Monitor as a resource, see below (this might be set by default):



If you link it only to the log analytics workspace, then you have to navigate to the workspace first and then choose the workbook menu to see it.

If done right, you can navigate to Portal -> Monitor -> Workbooks -> Workbooks tab -> Click workbook to view, see below:


Below you can see the main Kusto query that the workbook uses (this is also included in the workbook file itself):


Note that if you run this as a query it won't return any results.

If you want to run a Kusto query directly (that returns a similar result), do the following:

Go to Portal -> Monitor -> Logs - Choose Kusto mode (instead of simple mode) -> paste in the below query and click run:

NTANetAnalytics
| where SubType == "FlowLog"
| extend Protocol = case(L4Protocol == 'T', "TCP", L4Protocol == 'U', "UDP", L4Protocol)
| extend TimeGenCET = datetime_utc_to_local(TimeGenerated, 'Europe/Stockholm')
| extend TimeGeneratedCET = format_datetime(TimeGenCET,'dd/MM/yyyy [HH:mm:ss]')
| project-rename NSGFL_Version = FaSchemaVersion
| project-rename AllowedOrDenied = FlowStatus
| project TimeGeneratedCET, VNet = split(TargetResourceId,"/")[2], NSG = split(AclGroup,"/")[8], SrcPublicIps, SrcIp, DestIp, DestPort, Protocol, L7Protocol, FlowDirection, AllowedOrDenied, AclRule, NSGFL_Version, FlowEncryption, Source_subnet = split(SrcSubnet,"/")[2], Destination_subnet = split(DestSubnet,"/")[2]  

You can see an example below:



Azure: NSG flow logs will be retired in June 2025

On September 26, 2024, MSFT announced that NSG flow logs will be retired. More specifically, no new NSG flow logs can be created after June 30, 2025, and existing NSG flow logs will be retired on September 30, 2027, see link here or screenshot below:


The recommendation is to switch to VNet flow logs, which cover everything NSG flow logs do plus some more things, see more info here. They also simplify the setup somewhat, as there will only be one flow log resource per VNet as opposed to one per NSG.

If you want to read more about how VNet flow logs work, which tables it creates in log analytics workspace, and how to query data (using Kusto queries), I can recommend this great article from cloudtrooper.net.

Depending on how NSG flow logs have currently been configured in your environment, this can be a bit complicated to address and involves multiple steps. At current client the NSG flow logs are deployed via an Azure policy (using the DeployIfNotExist effect). Logs are shipped both to a storage account and to a log analytics workspace. An Azure workbook is in place which reads data from the LA workspace so that NSG flow logs can be viewed and sorted in an easy manner (specifically this relates to all traffic in and out of the NSGs, source/dest, port, protocol, etc.).

Before retiring NSG flow logs, a similar setup has to be in place for VNet flow logs. The good news is that both these flow log types can run in parallel. They store data in the LA workspace in two different tables. So VNet flow logs can be set up and verified before retiring NSG flow logs.

There are two main items to be addressed (will depend on your specific setup). These will be described in separate posts:

  • An Azure policy that deploys VNet flow logs for all VNets (there is a built-in policy, but we had to modify it a bit to allow for multiple subscriptions. Update 2024.10.07: The built-in policy actually works, it's just a bit unclear that it will work across subscriptions since a strongType parameter is used for choosing the network watcher resource)
  • An Azure workbook to view/search through NSG logs using the VNet flow logs data, see article here.

Friday, August 30, 2024

Azure: ARM template for resource health alerts

 Azure keeps track of the general resource health of a number of components. This information can be viewed under the resource -> Help -> Resource health. However, this information is available only in the portal. To receive alerts around resource health, this must be configured separately.

Resource health alerts belong, in my opinion, in the category of essential monitoring. You want to know if any of your key components become unavailable or degraded. Once this is set up and running, implementing utilization-based alerts would be the natural next step. See here for an example of a VPN tunnel bandwidth utilization alert.

Resource health alerts can be defined on a per component type level. This means that you don't have to have any of the components deployed or specified in advance. If you deploy a health alert for e.g. VPN gateways, then all existing and future VPN gateways will be included in the scope (this will also include ExpressRoute virtual network gateways).

The ARM template in this example (based on this MSFT ARM template) is useful for a connectivity hub where you have components such as VPN gateways, connections, firewalls, and load balancers.

If you want to add or remove components, it is done in the following section of the code:



Once the alert is deployed, it can be viewed in the portal under Monitor -> Alerts -> Alert rules, see below:



The ARM template also deploys an action group which sends the alerts as emails to the specified address in the parameters file.

The files are available on GitHub:

alert-ResourceHealthAlert-weu-001.json

alert-ResourceHealthAlert-weu-001.parameters.json
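To deploy the template and its parameters file from PowerShell, something like the following should work (the resource group name is a placeholder):

New-AzResourceGroupDeployment -ResourceGroupName "rg-monitoring" `
    -TemplateFile ".\alert-ResourceHealthAlert-weu-001.json" `
    -TemplateParameterFile ".\alert-ResourceHealthAlert-weu-001.parameters.json"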


Azure: ARM template for bandwidth utilization alert for VPN gateways

Part of having a good setup in Azure, and in the hub specifically, is to have proper monitoring configured. This article describes a bandwidth utilization alert for VPN gateways using a static threshold, as well as an action group that sends emails to a specified address.

In addition to having resource health alerts configured on relevant resources (which will let you know if resources become unavailable), it is also beneficial to know if key components are reaching or exceeding their capacity. This can be done using utilization type alerts.

The default setting when you add a tunnel bandwidth alert for a VPN gateway is dynamic thresholds. But unless you have a fairly consistent usage pattern, we've seen that too many alerts are thrown, which generates unnecessary noise (it might be a bit better if the sensitivity is changed to low as opposed to the default medium setting).

Instead it makes sense to configure the alert using a static threshold. What that threshold should be depends on your specific setup. But it could be e.g. 75% or 90% of SKU capacity.

The VPN SKU capacity is specified in Gbps and the threshold is defined in bytes, so you have to make that conversion. Below are some examples of how to calculate, using the SI/decimal system for bits and the binary system for bytes (though I actually think that MSFT uses the SI system for bytes as well - but it is not overly important in this context).

Examples are for a threshold of 2.5 Gbit for a 3 Gbps gateway and 0.9 Gbit for a 1 Gbps gateway:

  • 1 Gbit = 1,000,000,000 bits
  • 1 byte = 8 bits
  • 1 Gbit = (1,000,000,000 / 8) = 125,000,000 bytes

  • 2.5 Gbit = (125,000,000 * 2.5) = 312,500,000 bytes
  • 0.9 Gbit = (125,000,000 * 0.9) = 112,500,000 bytes

  • 1 kilobyte (KB) = 2^10 bytes = 1,024 bytes
  • 1 megabyte (MB) = 2^20 bytes = 1,024 * 1,024 bytes = 1,048,576 bytes

  • 1 Gbit = (125,000,000 / 1,048,576) = 119.2 megabytes
  • 2.5 Gbit = (312,500,000 / 1,048,576) = 298 megabytes
  • 0.9 Gbit = (112,500,000 / 1,048,576) = 107.3 megabytes
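The conversion can also be expressed as a small PowerShell helper if you want to calculate thresholds for other SKUs (a sketch based on the SI/decimal math above):

function Convert-GbitToBytes {
    param([double] $Gbit)
    # 1 Gbit = 1,000,000,000 bits; 8 bits per byte
    [long]($Gbit * 1e9 / 8)
}

Convert-GbitToBytes 2.5   # 312500000
Convert-GbitToBytes 0.9   # 112500000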
The ARM template deploys alerts for two VPN gateways with two different thresholds. The VPN gateways must be deployed in advance (or you can adjust the template to use just one GW).

Only the parameters file needs to be updated with relevant info.

The ARM template is available on Github:



The template contains an action group with two email addresses configured; this can be reduced to just one if needed.


Tuesday, June 11, 2024

Azure: VNet with VNet peering in bicep

Here's a bicep file including a module which deploys a VNet and a VNet peering to an existing VNet. A VNet peering consists of a local and a remote peering. The remote peering is created outside the scope of the current deployment (in a different resource group) and so has to be addressed specifically. With ARM templates this is solved by using nested templates. With bicep it is done using modules (a separate bicep file which is referenced from the main bicep file).

The bicep file references an existing VNet (which typically would be a Hub VNet). So this has to be implemented beforehand.

To use the files, simply update the parameters file with relevant info, and update the path to the module in the bicep file. If you deploy using Azure Pipelines or Github Actions, make sure that the module file is not placed anywhere it will be deployed directly (this throws an error); it should only be referenced from the main file.

The files are available here on Github:

vnetWithPeering.bicep

vnetWithPeering.parameters.json

hubPeering_module.bicep
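A deployment sketch using the files above (the resource group name is a placeholder; deploying .bicep files directly requires the Bicep CLI to be available to the Az module):

New-AzResourceGroupDeployment -ResourceGroupName "rg-spoke-001" `
    -TemplateFile ".\vnetWithPeering.bicep" `
    -TemplateParameterFile ".\vnetWithPeering.parameters.json"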

Monday, June 10, 2024

Azure policy: Deny VNet peering to non-approved VNets

 As part of the Azure Landing Zone architecture, there is a policy that denies the creation of VNet peerings to non-allowed VNets, see policy here.

This policy is relevant to apply in a hub-spoke setup where you want to prevent spoke VNets, or spoke landing zones, from creating VNet peerings to anything other than the defined hub VNets.

At current client we've been running this policy since December 2023 and it's been working fine.

However, about a month ago the policy evaluation behavior changed (neither the policy itself nor the templates had changed), and now, for certain bicep files, the policy blocks deployments even when using approved VNets for the peering. It wasn't the case for all bicep files, and ARM templates still worked.

Microsoft Support came up with a minor update to the policy definition that effectively implements the same rule with slightly different syntax. This works.

We haven't found an explanation yet as to why there was a change in policy evaluation behavior.

But the updated policy can be found here on Github.

Tuesday, June 4, 2024

Azure policy: Auto create DNS records in private DNS zones using multiple zones

There are certain Azure PaaS services that, when used together with private endpoints, require not just one A record in one private DNS zone (as is the case for most PaaS services) but multiple A records in multiple private DNS zones (PDNS).

Examples of PaaS services that require this are (see here for full list): 

  • Azure Monitor
  • Power BI
  • Azure Data Explorer
  • Azure Arc
  • Azure Health Data Services

When using private endpoints at scale, MSFT recommends using Azure Policy for the A record creation. I've described this in a previous article, see here.

But the MSFT policy to create DNS records can only handle a single PDNS zone, not multiple PDNS zones.

I found a fix to this on the 'Blog Cloud63' in this post (and here's a direct link to his policy definition), so thanks goes to Vincent Misson for that.

He basically expands on the MSFT policy for the resource 'privateDnsZoneGroups' by adding a copy loop to the properties of the resource. The copy loop then goes through an array of PDNS zone resource IDs and adds multiple items under properties. The privateDnsZoneGroups resource is what actually creates the A record in the PDNS zones.

Below you can see snippet of the code with the copy loop (modified policy):


And without the copy loop (default MSFT policy)


I have uploaded my slightly modified version of the policy to Github.
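If you prefer to add the definition via PowerShell instead of the portal, here is a hedged sketch (the file and policy names are just examples; depending on how the JSON in the repo is structured, you may need to pass the policyRule and parameters sections separately via -Policy and -Parameter):

New-AzPolicyDefinition -Name "dine-pe-dns-multi-zone" `
    -DisplayName "Deploy private DNS zone records for private endpoints (multiple zones)" `
    -Policy ".\policy-definition.json"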

It has been tested with Azure Monitor, specifically with private endpoints for Azure Monitor Private Link Scope (AMPLS) and it works as expected (11 records are created in 5 separate PDNS zones).



Saturday, January 13, 2024

Azure: Subnets with multiple IP address spaces?

We were having a look today at the Azure documentation for virtual networks, and subnets specifically. Both for Bicep and ARM there are two options to specify the address prefix (address space) for a subnet. The first one is "addressPrefix", which takes a string as input, and the second one is "addressPrefixes", which takes an array, see below. This leads one to expect that you can provide multiple IP address ranges for a subnet in the array, in the same way it can be done for VNets.


If you try to manually add an additional IP range to an existing subnet via the portal, it will show an error.

If you try to deploy multiple ranges via an ARM template, it still throws an error but we get a bit more information in the error message, see below:


The error states that the subscription is not registered for the following feature:

Microsoft.Network/AllowMultipleAddressPrefixesOnSubnet

This is usually handled under Resource Providers for the subscription. If you go to subscriptions -> resource providers in the Portal, this feature is not there to enable, though.


It is possible, however, to register the feature via Azure CLI. But when you run the command, the feature goes from "not registered" to "pending" and then just stays like that, never moving to "registered".

It looks like below:



The commands are:

az feature register --namespace Microsoft.Network -n AllowMultipleAddressPrefixesOnSubnet --subscription <subscriptionId>

and:

az feature show --namespace Microsoft.Network -n AllowMultipleAddressPrefixesOnSubnet --subscription <subscriptionId>
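The PowerShell equivalents (Az.Resources module) use the same namespace and feature name:

Register-AzProviderFeature -ProviderNamespace Microsoft.Network `
    -FeatureName AllowMultipleAddressPrefixesOnSubnet

Get-AzProviderFeature -ProviderNamespace Microsoft.Network `
    -FeatureName AllowMultipleAddressPrefixesOnSubnet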

It turns out that this feature is available only to MSFT developers and is not available in either public or private preview. There is not much info around this, as I suppose it is not really a sought-after feature.

I found this explanation and also response from MSFT, see link here.

And then another person had the same issue as late as Jan 8, 2024, see link here.

Tuesday, October 10, 2023

Add custom rule to new NSGs via Azure Policy

For governance or operational reasons, there may be a need to ensure that certain rules are applied to all NSGs that are created within a certain scope.

This can be achieved using Azure Policy with a deployIfNotExist function.

Such a policy has already been created and is ready to use from AzAdvertizer.net, see link here:

I ran a quick test to verify the functionality and it works as expected. At the time of creation of the NSG, the policy kicks in and applies the rule right away.

The policy will let you specify one rule, so for multiple rules, additional assignments can be created.

The policy looks for a suffix (the last part of the name) in the NSG name and only applies the rule if there's a match. You can re-arrange the check and have it look for a prefix instead; I have uploaded an example here on Github (it can be copied in as a new definition via Azure Portal -> Policy -> Definitions).

If you want to apply the rule to all NSGs, then simply remove this check, see marked part below:


The policy is parameterized, so when you create an assignment it will request you to add all relevant parameters. See example below:

Note that some of the parameters, such as destinationPortRange, are arrays. They should be added in the format ["3389"] (for port 3389).


Below is a screenshot of the inbound rule added post NSG deployment:


Since this is a deployIfNotExist policy, the assignment requires a system-assigned managed identity (or a user-assigned managed identity) with Network Contributor permissions. This identity will be created automatically when you create the assignment, provided you have sufficient permissions yourself.
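For reference, here is a hedged sketch of creating such an assignment from PowerShell (newer Az.Resources versions; scope, names, and location are placeholders, and the rule parameters would be passed with -PolicyParameterObject). Unlike in the portal, the Network Contributor role assignment is an explicit step:

$definition = Get-AzPolicyDefinition -Name "<definition name or GUID>"

$assignment = New-AzPolicyAssignment -Name "add-default-nsg-rule" `
    -Scope "/subscriptions/<subscriptionId>" `
    -PolicyDefinition $definition `
    -IdentityType SystemAssigned -Location "westeurope"

# Grant the managed identity Network Contributor on the scope
# (the exact property path for the principal ID can vary between Az versions)
New-AzRoleAssignment -ObjectId $assignment.Identity.PrincipalId `
    -RoleDefinitionName "Network Contributor" `
    -Scope "/subscriptions/<subscriptionId>"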




Tuesday, July 4, 2023

Use Terraform to deploy a simple VM and a VNet to Azure

I have had a look at using Terraform to deploy a simple VM that can be used for troubleshooting. It is based on this Microsoft Quickstart guide. I have removed most of the resources from the file so that only the minimum required resources are deployed. These are:

  • A resource group
  • A Windows 2022 Server
  • A vNIC
  • A public IP (so you can RDP to the box)

It requires that you already have a VNet and subnet running. Preferably, there is also an NSG associated with the subnet that allows port 3389 tcp inbound.

The files are here on Github in the Simple VM folder. There are some getting started instructions in the readme.md file as well. The four files in the folder are all part of the template, these are:
  • main.tf
  • outputs.tf
  • providers.tf
  • variables.tf

The template creates a random admin password and assigns it to the machine. The password itself can be read post deployment by running:

terraform output -json

In addition to this, I have also created a small template that creates a VNet and two subnets and places them in a separate resource group. Files are located here.

I personally prefer to start with smaller building blocks and then combine those later when required.

Saturday, July 1, 2023

Minimum setup for private DNS zone infrastructure in Azure

In this article we will look at the minimum requirements for implementing private DNS zone infrastructure and Private Link in a test setup, to be able to use PaaS services with private endpoints. In this example we will use blob storage, but it can be extended to most other PaaS services that support private endpoints.

The concept of Azure Private Link is a bit complicated and people often get confused around how it works. This article aims to break it down into its simplest setup.

The following components are required:

  • A VNet and a subnet
  • A default NSG (that should be associated with the subnet)
  • An inbound rule on the NSG that allows port 3389 from your external IP
  • A private DNS zone (privatelink.blob.core.windows.net)
  • A VNet link between the private DNS zone and the VNet (it's a configuration on the private DNS zone)
  • A test VM running in the subnet (we need a source with a local IP to test connectivity towards the storage account with the private endpoint)
  • A public IP associated with the VM
  • AzCopy installed on the test VM
  • A storage account with a private endpoint (including an A record in the private DNS zone)

Once it is all set up, we will disable all public access on the storage account and verify that the VM connects to the storage account via its private IP address. Following this, we will copy a file from the VM to the storage account and the other way around (using AzCopy) to prove that it works.

Steps

First deploy a VNet and one subnet, see example below. There are no specific requirements to this part.


Then create a default NSG and associate it with the subnet (this is not strictly required, but it is good practice and will come in handy later):


Create a private DNS zone named: privatelink.blob.core.windows.net


Create a VNet link between the private DNS zone and the VNet. Go to the private DNS zone -> privatelink.blob.core.windows.net -> Virtual Network Links -> click Add

Give the link a name and choose the VNet you just deployed. Don't check the box for auto registration; this is for VMs and not in scope for this test.


Deploy a virtual machine with a public IP (so that you can RDP to it). You can do that via the portal or you can use a test Windows Server 2022 VM ARM template, see more info here (note that this VM does not have a public IP so it will have to be created separately and associated with the NIC post deployment).

Add a rule to the NSG that allows traffic on port 3389 tcp from your external IP address. This is so that you can RDP to the test VM:


RDP to the test VM, download AzCopy v10, and verify that it can run with the command: .\azcopy (this will show the version and some help info).


Create a storage account, and under Networking -> Firewalls and virtual networks, choose "Enabled from selected virtual networks and IP addresses" and add your external IP address. We will change this to 'Disabled' later, but for now we need it to be able to create a blob container and add a file from the Azure portal. In addition, this will let us browse the content of the blob containers.


Still under Networking, choose the Private Endpoint connections tab and add a private endpoint.
Give the private endpoint a name, e.g. pe-<name of storage account>, and choose the VNet and subnet that were created earlier. When it asks about integration with a private DNS zone, choose "No". If you choose yes, it will create a new private DNS zone, but we have already created a zone in a previous step.

Adding a private endpoint creates a vNIC, attaches a local IP address from the subnet via DHCP, and associates that vNIC with the storage account so that it can be accessed internally.


Once the private endpoint is created it can be seen in the Private endpoint connections tab, see above. We need to take a note of the assigned local IP address. To do this, click on the private endpoint link and then go to DNS configuration, see below:


Now we need to create an A record in the private DNS zone we created earlier that points to the IP address we just noted. Go to Private DNS Zones -> privatelink.blob.core.windows.net -> Click "+ Record set". Under name, add the storage account name and under IP address, add the IP address that was recorded in the previous step, see below.
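The A record can also be created from PowerShell with the Az.PrivateDns module (resource group, account name, and IP are placeholders for the values from the previous steps):

New-AzPrivateDnsRecordSet -ResourceGroupName "rg-dns" `
    -ZoneName "privatelink.blob.core.windows.net" `
    -Name "<storageaccountname>" `
    -RecordType A -Ttl 3600 `
    -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -Ipv4Address "10.100.0.8")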



Now we will verify that the test VM can resolve the local IP of the storage account using the regular storage account FQDN (if this doesn't work, it will resolve to a public IP address and you will know that something went wrong).

Jump to the test VM and start a command prompt:

Run: nslookup <storageaccountname>.blob.core.windows.net

The result should be the local IP, see below (the red box in the screenshot shows the result without a private endpoint and the green box shows the result after the private endpoint has been added. In the green box, the local IP 10.100.0.8 correctly shows):


To verify that we can copy a file from the test VM to a blob container and vice versa, first we will create a dummy file on the VM and then a dummy file in the storage account (any txt file will do).

Go to the storage account and create a new container (in this example I call it webcontent, any name will do):


Then go into the container and click Upload to upload a file (in this example it's called getmefile.txt):


For the VM to be able to access files in the storage account, we need a SAS token which will be used with the AzCopy command. Go to the storage account -> Shared access signature -> check Blob and check all three resource types (basically just allow all if you're in doubt as this is only for testing) -> click Generate SAS and connection string.

Then copy or take a note of the 'Blob service SAS URL' value at the bottom of the page:
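If you prefer to generate the SAS from PowerShell, below is a hedged sketch; it outputs the token only, which you then append to the blob URL, and it requires permission to read the storage account keys (names are placeholders):

$ctx = (Get-AzStorageAccount -ResourceGroupName "rg-storage" `
    -Name "<storageaccountname>").Context

# Account SAS for the Blob service, valid for 8 hours
New-AzStorageAccountSASToken -Context $ctx -Service Blob `
    -ResourceType Service,Container,Object `
    -Permission "rwdl" -ExpiryTime (Get-Date).AddHours(8)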


Then jump back to the test VM.

I created a dummy file called index.html and placed it in the C:\users\localadmin folder. I also have AzCopy located in the same folder.

To test that we can copy a file from the VM to the storage account, start a PowerShell window (will also work from a regular command prompt) and run the following command:

.\azcopy copy ".\index.html" "https://<storage account name>.blob.core.windows.net/webcontent/?sv=2022-11-02&ss<link has been shortened>U%3D"

So you use the copy function and then choose a source and a destination. For the source, we choose the index.html file in the current folder. For the destination, we use the Blob service SAS URL but modify it by adding the blob container name and a '/' after the FQDN and before the SAS token info, also see below:


For the next test, we copy the content of the blob container we created earlier including the file we added and place it on the local test VM:

It is basically just changing the source and destination in the AzCopy command (again adding the blob container name after the FQDN):

.\azcopy copy "https://<storage account name>.blob.core.windows.net/webcontent/?sv=2022-11-<link has been shortened>2BU%3D" "C:\Users\localadmin\" --recursive


To ensure that public access is entirely disabled, you can now go to the storage account under Networking -> Firewalls and virtual networks -> Choose Disabled and Save.

With this change you can no longer browse content in the blob containers via the portal.

However, you can re-run the two AzCopy commands above and it will still work.

If you want to verify that the files are available in the blob container, you can access them via the browser (from the test VM) by using the 'Blob service SAS URL' and then modifying it by appending the container name and file name after the FQDN:

https://<storage account name>.blob.core.windows.net/webcontent/getmefile.txt?sv=2022-11-02&s<link has been shortened>2BU%3D

Another way to represent the same URL is:

https://<storage account name>.blob.core.windows.net/<blob container>/<file name><SAS Token>

The reason you cannot just use the FQDN plus the folder and file name is that even if you can technically reach the storage account, i.e. there is no firewall on the storage account blocking access, you still need to present credentials to view the content, which in this case is the SAS token added to the URL.


If you want to get just the SAS token and not the full Blob service SAS URL, it is available under the storage account -> 'Shared access signature' in the same location as the Blob service SAS URL, see below:



With this we have a proven setup where traffic between a VM and a storage account only runs over the Private Link and all traffic is handled via the internal network using only local IP addresses.

Friday, June 30, 2023

Azure Firewall with availability zones and forced tunneling - ARM template

This firewall has a fairly specific configuration that aligns with a set of client requirements. First of all, it's set up for forced tunneling. There is no requirement to configure a default route pointing towards on-prem, as the default route can be advertised via BGP (in a case where you have ExpressRoute or VPN to on-prem configured as well). For this to work, 'propagate gateway routes' must be enabled on the route table associated with the AzureFirewallSubnet, see here for more info.

This setup requires a secondary subnet, AzureFirewallGatewaySubnet to be deployed (with a /26 size) and this subnet must have a default route pointing to the Internet.

In a default setup the firewall will have two public IP addresses, but for security purposes one of them has been removed here. The remaining public IP on the management interface is a technical requirement for internal communication with Microsoft and can't be removed.

The ARM template, referenced below, deploys two resources: the firewall itself and one public IP. Both resources are deployed into three availability zones (AZs) (note that only certain Azure regions support three AZs).

If you want an official firewall w. AZs bicep file from MS, see this link.

ARM template on Github:


Monday, June 12, 2023

Azure: Deploy a key vault with a private endpoint

 This post describes an ARM template that deploys a key vault, with a private endpoint, into an existing VNet and subnet. The key vault does not have to be placed in the same resource group as the VNet.

It requires the following to already be deployed:

  • VNet
  • Subnet
  • Resource group (for the key vault and private endpoint)
  • Private DNS zone (privatelink.vaultcore.azure.net)

The key vault will be configured to use RBAC and will allow ARM templates to retrieve content (in this case it is to allow an ARM template that deploys a VM to retrieve a secret from the key vault), see below:


For the Network settings, all public access is disabled. With a private endpoint you'll be able to create and read secrets from the key vault in the Azure Portal via internal routing. 

The "Allow trusted Microsoft services to bypass the firewall" setting is checked. This allows Azure DevOps, via a pipeline, to deploy an ARM template that deploys a VM that retrieves the secret from the KV which typically is the local admin password for the VM, see below:



The private endpoint (PE) can be found under, Networking -> Private endpoint connections.

A local IP from the specified subnet will be dynamically assigned to the PE.

Note that this template will not create the A record in the private DNS zone for the private endpoint. This is usually automated via Azure Policy, see here for more info under point 3.


The ARM template and parameters file can be found on Github. There's also a README with a bit more info. For more info on the format of the subnet ID string, see here.





Wednesday, June 7, 2023

Azure Policy - Allow only specified IP ranges and regions for VNets

At current client we had a request to apply a policy enforcing that only VNets using specified IP address ranges and regions are allowed. So for example, if you create a VNet in West Europe, it must be within the 10.100.0.0/16 address space, and if in North Europe, it must be within 10.200.0.0/16 or 10.201.0.0/16. Anything outside that must be denied.

There is no built-in policy for this. Simpler versions of the above could be found online, but we couldn't find anything that fit all the requirements.

The closest we got was a slimmed down version of this policy from AzPolicyAdvertizer named "Address space must be pre-allocated for region".

However, it didn't take into account extended VNets. So if you have a VNet consisting of two or more IP address ranges, it will show as non-compliant.

We raised a ticket with Microsoft support and after a few days they came back with an updated policy that works.

The functioning policy can be found here on Github. The only thing that needs to be updated is the content of the "spokeAllocations" array.

Friday, February 3, 2023

Deploy an Azure Firewall with no public IP for data plane with ARM (and forced tunneling)

At current client we have multiple Azure Firewalls running with forced tunneling to on-prem. By default this requires two public IP addresses: one for the data plane (or customer traffic) and the other for service management traffic (exclusively used by the Azure platform).

Since there is no ingress or egress to the internet on these firewalls as all traffic is routed to on-prem, there was a request from the security team to remove the public IP for the data plane (so there will just be one public IP left per firewall).

If deploying from the portal or via PowerShell, this cannot be configured during deployment, but it can be done post deployment by going to the Azure Firewall -> Public IP configuration -> choose the three dots to the right of the public IP -> choose Edit -> and then choose None so that no public IP is associated, see screenshot below for an example.

Note that you cannot choose to delete the public IP directly, and the other option to Manage Public IP also cannot be used to disassociate the public IP (which is perhaps a bit confusing).

Deploy with ARM template

If deploying the firewall using an ARM template, it is possible to configure just one public IP at the time of deployment.

I've put an example template on GitHub that can be used for reference:

fw-no-public-ip.json

fw-no-public-ip.parameters.json

The following resources should be deployed in advance:

  • A resource group
  • A VNet
  • 2 x subnets (AzureFirewallSubnet and AzureFirewallManagementSubnet, both should be /26 in size)
  • A firewall policy

References to the subnets and the policy should be updated in the parameters file before running it.

On the second screenshot below you can see the result post deployment: the FW has no firewall public IP, but it does have one for management traffic. On the first screenshot below you can see that there is an entry for the public IP called "DoesNotExist" (just an example name), but it is not associated with a public IP.




Monday, July 11, 2022

Azure Firewall with Azure policy and IP groups using Bicep

In the previous post I deployed a firewall with some additional components such as VMs, NSGs, and route tables to have a working test setup. In this post it will mainly be the FW, the FW policy, and the IP groups that are deployed.

This post is based on this Quickstart template with only a few modifications. The main reason I do this is to verify that the files work in my environment and to inspect the deployed resources.

I have uploaded the modified files to Github (fw-with-policy-001.bicep and fw-with-policy-001.parameters.json). 

The resources deployed can be seen below:


The two IP groups are used in combination with the policy. The policy is associated with the firewall. There are two rule collection groups defined with a total of three rules, see the two screenshots below: