Friday, December 22, 2023

Add resource locks to individual records in a private DNS zone

It is possible to add resource locks to individual records in private DNS zones. A scenario for this could be a critical or central A record that you don't want changed by mistake, while still allowing ongoing updates to the private DNS zone as part of daily operations.

If you have a centralized private DNS zone setup with Azure Policy handling DNS record creation, and you also use private endpoints (PE), then there is a (perhaps small) risk that A records can be overwritten. This happens if someone creates a new PE and associates it with the same resource: the DeployIfNotExist policy runs on PE creation and replaces the existing record, so the PaaS service will resolve to a new local IP. (Note that if you then delete the new PE, the A record is also deleted, and the original PE no longer has an A record, so it has to be recreated or the A record re-added.)

Adding locks to individual records is described in further detail here. Note that this can currently only be done using PowerShell, not via the Azure portal.

The example from MSFT looks like below:


An actual example is shown below:

# Lock a DNS record set
$lvl = "ReadOnly"
$lnm = "dontChangeMe"
$rsc = "privatelink.blob.core.windows.net/testaccountdelete001"
$rty = "Microsoft.Network/privateDNSZones/A"
$rsg = "rg-dns-conn-001"

New-AzResourceLock -LockLevel $lvl -LockName $lnm -ResourceName $rsc -ResourceType $rty -ResourceGroupName $rsg

Note that the MSFT example uses a regular DNS zone whereas my example uses private DNS zones (the resource type above is Microsoft.Network/privateDNSZones/A rather than Microsoft.Network/dnszones/A).

See example below for when lock is applied:


Once applied, you can see (and edit and delete) the lock in the portal under the private DNS zone -> your private DNS zone (e.g. the one for blob) -> Locks, see below:



The reason that the lock level is set to ReadOnly and not CannotDelete is that CannotDelete would still allow the record to be modified or overwritten, which is exactly what we want to prevent.
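If you later need to change the record, the lock can be listed and removed again with the corresponding cmdlets. A small sketch reusing the variables from the example above:

# View the lock on the record set
Get-AzResourceLock -LockName $lnm -ResourceName $rsc -ResourceType $rty -ResourceGroupName $rsg
# Remove the lock so the record can be changed again
Remove-AzResourceLock -LockName $lnm -ResourceName $rsc -ResourceType $rty -ResourceGroupName $rsg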


Azure policy: Deploy CanNotDelete locks to resource groups

You can use Azure policy to apply resource locks to all resource groups in a defined scope. This can be useful to ensure that critical resources are not deleted by mistake. See a general description of resource locks here. Note that applying locks in the environment can create unforeseen problems, so it's good to proceed with a bit of caution.

The policy in this post is based on a slightly modified version of a policy from AzAdvertizer, which can be found here. The original policy applies locks to a list of resource group names specified in an array.

The modified version, which can be found here on Github, applies locks to all resource groups but excludes the resource groups specified in the array.

Note that the only built-in roles (for the system-assigned managed identity, SAMI) that can apply locks are Owner and User Access Administrator. Owner typically has too many permissions, and User Access Administrator does not have the policy deployment permissions. So a custom RBAC role is required.

To create a custom RBAC role, go to the subscription or management group level where the policy will be applied, then go to Access Control (IAM) -> Roles. From here, click + Add to create a new role. Add the following permissions:

"Microsoft.Resources/deployments/write",
"Microsoft.Authorization/locks/*",
"Microsoft.Authorization/policies/deployIfNotExists/action",
"Microsoft.Resources/checkPolicyCompliance/read"

The full RBAC role in JSON can be found here on Github (make sure to update the management group or subscription under assignableScopes).
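If you prefer to create the custom role with PowerShell instead of the portal, a minimal sketch could look like the following (the role name and assignable scope are examples; it follows the common pattern of cloning an existing role definition object):

# Use an existing built-in role as a template object
$role = Get-AzRoleDefinition -Name "Reader"
$role.Id = $null
$role.Name = "Resource Lock Deployer"
$role.Description = "Can deploy resource locks via DeployIfNotExists policies"
# Replace the actions with the permissions listed above
$role.Actions.Clear()
$role.Actions.Add("Microsoft.Resources/deployments/write")
$role.Actions.Add("Microsoft.Authorization/locks/*")
$role.Actions.Add("Microsoft.Authorization/policies/deployIfNotExists/action")
$role.Actions.Add("Microsoft.Resources/checkPolicyCompliance/read")
# Set the scope where the role can be assigned (management group or subscription)
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/providers/Microsoft.Management/managementGroups/<your management group id>")
New-AzRoleDefinition -Role $role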

When you assign the policy definition, make sure to choose the new RBAC role when creating the SAMI, see below:





Azure policy: Reducing the default evaluation time for auto A record creation for private endpoints

 I have previously written about the use of Azure policy to automatically create A records for private endpoints in centrally managed private DNS zones. You can see more here and here. And the general recommended setup from MSFT is available here.

One of the slightly annoying things about the standard Azure policies is that they use the default evaluation delay (10 minutes) before a DeployIfNotExist policy runs. This means that when you create a private endpoint, it takes around 10 minutes before the A record is created in the private DNS zone. In addition to the added wait time, this creates confusion for users, who don't know whether their deployment worked or not.

This evaluation delay can be reduced by using the optional property for DeployIfNotExist policies called evaluationDelay. This is described in more detail here.

There are different values that can be set for this property, but to get the best effect, I'd recommend using AfterProvisioningSuccess. This will run the policy as soon as the private endpoint has been successfully deployed.

You can see an example of a policy using this property here on Github. And below is an example as well.

Note that this property does not only apply to DNS record creation. It can be used for other DeployIfNotExist policies as well where you want the resources deployed right away.
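If you want to roll this out with PowerShell, the definition can be created from the policy JSON file. A minimal sketch, assuming you have saved the modified policy locally (the name, display name, and file path below are examples):

# The key part of the policy JSON is that "then.details" contains:
#   "evaluationDelay": "AfterProvisioningSuccess"
New-AzPolicyDefinition -Name "dine-pe-dns-record-fast" `
  -DisplayName "Deploy PE DNS record (reduced evaluation delay)" `
  -Policy "C:\policies\pe-dns-evaluationdelay.json" `
  -ManagementGroupName "<your management group id>"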



Monday, December 11, 2023

Azure policy: Auto create DNS records using both subResource and private link resource type

 When using private endpoints at scale, the recommended setup from Microsoft is to use Azure Policy to automatically create the DNS records in the central private DNS zones when the private endpoints are created. The reason for this is that users or owners of the spokes or landing zones do not have permissions to create A records in the central private DNS zones in the Hub.

For most private DNS zones, the regular Azure policy can be used, which checks for the private DNS zone name and the subResource id, see list here. However, there are scenarios where this is not sufficient, for example when a region has to be specified, as with Recovery Services Vaults, see more on that here.

Another example, and the scope of this post, is when there are overlapping subResource values, such as for Synapse Analytics and Cosmos DB (which both use 'sql') or Synapse Studio and storage account Web (which both use 'web'). If multiple policies are created using the same subResource, you don't know in which private DNS zone the A record will be created, and you can experience records being created first in one zone and then in the other, depending on which policy is evaluated first.

To address this, Microsoft has created a policy that, in addition to the subResource, adds a parameter that matches on the private link resource type (also referred to as privateLinkServiceId). The policy can be found here.

The private link resource type is found in the first column in the table of private DNS zones, here. Examples of values are:

  • Microsoft.Synapse/privateLinkHubs
  • Microsoft.Synapse/workspaces
  • Microsoft.DocumentDB/databaseAccounts

For some odd reason, MSFT hardcodes the value of the private link resource type in the policy. I've updated the policy slightly to parameterize that value. The updated policy can be found here on Github.
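You can assign the updated definition via the portal as shown below, or with PowerShell. A rough sketch is shown here; the parameter names are placeholders, so check the definition on Github for the exact names, and note that a DeployIfNotExists assignment needs a managed identity:

$definition = Get-AzPolicyDefinition -Name "<name of the custom definition>"
$params = @{
    privateDnsZoneId     = "/subscriptions/<sub id>/resourceGroups/rg-dns-conn-001/providers/Microsoft.Network/privateDnsZones/privatelink.sql.azuresynapse.net"
    privateLinkServiceId = "Microsoft.Synapse/workspaces"   # the private link resource type to match on
    subresource          = "sql"
}
New-AzPolicyAssignment -Name "pe-dns-synapse-sql" -Scope "/subscriptions/<sub id>" `
  -PolicyDefinition $definition -PolicyParameterObject $params `
  -IdentityType SystemAssigned -Location westeurope   # older Az.Resources versions use -AssignIdentity instead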

Below you can see an example of what it looks like when the policy is assigned in the portal:



Thursday, November 30, 2023

Installing tools on Windows with Chocolatey - a package manager

 Chocolatey is a useful tool to install apps and tools on a fresh laptop or developer VM via command line.

Install Chocolatey

To install Chocolatey, follow below steps:

  • Install PowerShell 7, see link
  • Open PowerShell 7 as administrator
  • Run the following command to install Chocolatey (copied from the official install instructions)
    • Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
  • Close and re-open the PowerShell 7 window, again as administrator
  • Run the following command to verify that the install is successful:
    • choco upgrade all
  • Enable long paths in Windows via PowerShell (command copied from here):
    • New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "LongPathsEnabled" -Value 1 -PropertyType DWORD -Force

Install apps with Chocolatey

Below are some commands for common tools:

  • choco install -y notepadplusplus
  • choco install -y git
  • choco install -y vscode
  • choco install -y 7zip
  • choco install -y kubernetes-cli
  • choco install -y kubernetes-helm
  • choco install -y azure-cli
  • choco install -y az.powershell
  • choco install -y bicep
  • choco install -y terraform
  • choco install -y firefox
You can see a list of available apps here (and you can search as well, there are many).
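If you want to install everything in one go, Chocolatey also accepts multiple package names on a single line, for example:

choco install -y notepadplusplus git vscode 7zip azure-cli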

Here are instructions on initial config for Git (set user.name and user.email).

Friday, October 20, 2023

Git commands - for setup and daily work

This article just collects some of the Git commands that are used on a regular basis.

Git commands for daily work

git clone <https link to repo> (clone a remote repo. Make sure you’re in the correct directory when running)

git pull (sync the latest changes from remote repo)

git checkout -b feature/new_branch_name (create new feature branch)

git checkout main (switch to main branch)

git add -A (pre-commit command to add all new files to local staging, run this before local commit)

git commit -a -m "adding a test file" (commit all files to local branch and add a message)

git push --set-upstream origin feature/testingbranch (publish local branch to remote repo, use the same name as the current feature branch)

git branch -d localBranchName (delete local branch)

git push origin --delete remoteBranchName (delete branch remotely)

git branch -l (list branches)
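Putting the daily commands together, a typical flow for a small change could look like this (the branch name and commit message are just examples):

git checkout -b feature/update_readme
# ...edit files locally...
git add -A
git commit -a -m "update readme"
git push --set-upstream origin feature/update_readme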

Git commands for configuring Git

git config --global user.name "First Last" (user name and email should be set up as a one time config)

git config --global user.email <user@email.com>

git config --global core.longpaths true (fixes an error with long path names)

git config --list (show git config)

git config --list --show-origin (show git config including where variables are defined)

git config --global http.https://dev.azure.com/.proxy https://someaddress (this is a proxy related setting)

git config --global http.https://dev.azure.com/.proxyauthmethod negotiate (this is a proxy related setting)

git config --global http.https://dev.azure.com/.emptyAuth true (this is a proxy related setting)

git config --global --edit (edit the global config file in VI editor)

git config --global --replace-all user.name "username" (replaces current user name)

git config --global --replace-all user.email <user@email.com>

git config --unset-all credential.helper (unset named settings)

git config --global credential.helper <helper name> (set the credential helper type)

   


Tuesday, October 10, 2023

Add custom rule to new NSGs via Azure Policy

 For governance, or operational, reasons there may be a need to ensure that certain rules are applied to all NSGs that are created within a certain scope.

This can be achieved using Azure Policy with the deployIfNotExists effect.

Such a policy has already been created and is ready to use from AzAdvertizer.net, see link here:

I ran a quick test to verify the functionality and it works as expected. At the time of creation of the NSG, the policy kicks in and applies the rule right away.

The policy lets you specify one rule, so for multiple rules, additional assignments can be created.

The policy looks for a suffix (the last part of the name) in the NSG name and only applies the rule if there's a match. You can re-arrange the check and have it look for a prefix instead, I have uploaded an example here on Github (can be copied in as a new definition via Azure Portal -> Policy -> Definitions).

If you want to apply the rule to all NSGs, then simply remove this check, see marked part below:


The policy is parameterized, so when you create an assignment it will request you to add all relevant parameters. See example below:

Note that some of the parameters, such as destinationPortRange, are arrays. They should be added in the format ["3389"] (for port 3389).
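If you prefer to create the assignment with PowerShell rather than the portal, a rough sketch is below. The parameter names are placeholders, so check the definition for the exact names; the array format matches the ["3389"] example above:

$definition = Get-AzPolicyDefinition -Name "<name of the NSG rule definition>"
$params = @{
    ruleName                 = "Allow-RDP-From-Mgmt"
    rulePriority             = 1000
    ruleDirection            = "Inbound"
    ruleAccess               = "Allow"
    ruleProtocol             = "Tcp"
    ruleSourceAddressPrefix  = "10.0.0.0/24"
    ruleDestinationPortRange = @("3389")   # arrays are passed like this
}
New-AzPolicyAssignment -Name "nsg-default-rule" -Scope "/subscriptions/<sub id>" `
  -PolicyDefinition $definition -PolicyParameterObject $params `
  -IdentityType SystemAssigned -Location westeurope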


Below is a screenshot of the inbound rule added post NSG deployment:


Since this is a deployIfNotExist policy, the assignment requires a system-assigned managed identity (or a user-assigned managed identity) with Network Contributor permissions. The identity is created automatically when you create the assignment, provided you have sufficient permissions.




Friday, July 7, 2023

Deploying Terraform in Azure using Github Actions workflow

 This article will summarise the steps I went through to set up Github Actions with a private repo to be able to push Terraform code from VS Code to Github and then to Azure.

It is based on a great article by Guillermo Musumeci and if you want to replicate this setup, that is the guide you should follow, see article.

The overall steps are as follows:

  • Create a Service Principal (SPN) with contributor permissions
  • Create a storage account and a container (this is used to hold the backend state and is created up front. I did this via AZ CLI but any method, such as via the Azure Portal, will do)
  • Create a Private Repo in Github
  • From VS Code, clone the repo to your local machine
  • Add 4 x secrets to Github under your private repo: Repo -> Settings -> Secrets -> Actions -> New Repo Secret (these secrets contain the SPN info so that Github Actions can authenticate towards Azure)
  • Create providers.tf, variables.tf, outputs.tf files according to article mentioned above. And a main.tf file that just contains a resource group to get started
  • Create Github Actions workflow using a simple Terraform YAML pipeline file
These are the base steps. From now on, when you create a pull request (PR), the workflow will start and run terraform init, plan, and apply, and if there are no issues the changes will be deployed. It doesn't wait for you to complete the PR though, but you can do that as the last step.
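The first two bullets can also be done with the Az PowerShell module instead of AZ CLI. A sketch under assumed names (SPN display name, resource group, and storage account are all examples):

# Service principal with Contributor on the subscription
$sp = New-AzADServicePrincipal -DisplayName "spn-github-actions-tf"
New-AzRoleAssignment -ApplicationId $sp.AppId -RoleDefinitionName "Contributor" -Scope "/subscriptions/<subscription id>"

# Storage account and container for the Terraform backend state
$rg = New-AzResourceGroup -Name "rg-tfstate" -Location "westeurope"
$sa = New-AzStorageAccount -ResourceGroupName $rg.ResourceGroupName -Name "<globally unique storage account name>" -Location "westeurope" -SkuName "Standard_LRS"
New-AzStorageContainer -Name "tfstate" -Context $sa.Context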

Under Actions, you can see the status of all the workflow runs (it looks quite similar to a push pipeline run in Azure DevOps):


In the repo I have just placed all the files in the root folder. This works but can likely be organised better as the number of files grows, see below:


I ran into two minor issues, these were:

An error message saying the following:
"Error message: state blob is already locked" and "Terraform acquires a state lock to protect the state from being written by multiple users at the same time. Please resolve the issue above and try again. For most commands, you can disable locking with the "-lock=false" flag, but this is not recommended."

I tried updating the pipeline file to include -lock=false under the "terraform plan" command, but that didn't work. For some reason there was a lock on the state file in the storage account that was created earlier. From the Azure Portal, you can navigate to the file in the blob container and from there remove the lock (or break the lease, as the terminology goes, see screenshot below). After doing this, there have been no more issues with the lock.



The other issue I ran into was that when I tried to deploy the first resource, the workflow was hanging for a long time and it was unclear what was holding it up.

It turns out that it was waiting for the Microsoft.DocumentDB resource provider to be registered. This is a one-time thing that is done under the subscription under Resource Providers (it specifies which resource types are allowed, and not all resource types are allowed by default). I would recommend enabling/registering it manually before running the Actions workflow (it takes some time for it to register, maybe 30-40 mins).
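Registering the provider can be done in the portal or with a couple of PowerShell commands:

# Register the resource provider up front
Register-AzResourceProvider -ProviderNamespace "Microsoft.DocumentDB"
# Check the registration state (it takes a while to move to "Registered")
Get-AzResourceProvider -ProviderNamespace "Microsoft.DocumentDB" | Select-Object ProviderNamespace, RegistrationState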

Once this was in place, the workflow has been running smoothly, and any errors have been related to my code, but the error messages have been pretty clear, so they have been easy to address.


That's it. It may seem like a lot initially, but if you take it step by step, it's not that bad. And the final setup is quite cool and useful.

Tuesday, July 4, 2023

Use Terraform to deploy a simple VM and a VNet to Azure

I have had a look at using Terraform to deploy a simple VM that can be used for troubleshooting. It is based on this Microsoft Quickstart guide. I have removed most of the resources from the file so that only the minimum required resources are deployed. These are:

  • A resource group
  • A Windows 2022 Server
  • A vNIC
  • A public IP (so you can RDP to the box)
It requires that you already have a VNet and subnet running. Preferably, there is also an NSG associated with the subnet that allows port 3389 TCP inbound.

The files are here on Github in the Simple VM folder. There are some getting started instructions in the readme.md file as well. The four files in the folder are all part of the template, these are:
  • main.tf
  • outputs.tf
  • providers.tf
  • variables.tf
The template creates a random admin password and assigns it to the machine. The password itself can be read post deployment by running:

terraform output -json
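For reference, getting the template deployed from the Simple VM folder is the standard Terraform workflow (see the readme.md for the details):

terraform init
terraform plan -out tfplan
terraform apply tfplan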

In addition to this, I have also created a small template that creates a VNet and two subnets and places them in a separate resource group. Files are located here.

I personally prefer to start with smaller building blocks and then combine those later when required.

Saturday, July 1, 2023

Azure: ARM template for simple Win2k22 VM with trusted launch

 It was recently announced that trusted launch is now enabled by default when deploying new Gen 2 VMs via the portal.

I have modified an ARM template for a simple Windows Server 2022 to include the Trusted Launch security features. The addition to the template is a "securityProfile" section under the virtual machine resource:

"securityProfile": {
  "securityType": "[parameters('securityType')]",
  "uefiSettings": {
  "secureBootEnabled": "[parameters('secureBoot')]",
  "vTpmEnabled": "[parameters('vTPM')]"
   }
}

Where securityType is TrustedLaunch and the other two are bool types set to true.

You can verify that the settings are configured correctly on the Overview page of the VM, see below:


There is a bit more info on how the VM is configured here.
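If you want to check the configuration from the command line instead of the portal, the security profile can be read with the Az PowerShell module (resource group and VM names are examples):

$vm = Get-AzVM -ResourceGroupName "rg-test-vm" -Name "vm-win2k22"
# Should show SecurityType TrustedLaunch with secure boot and vTPM enabled
$vm.SecurityProfile | ConvertTo-Json -Depth 5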

The ARM template is available on Github, link to files:


Minimum setup for private DNS zone infrastructure in Azure

In this article we will look at the minimum requirements for implementing private DNS zone infrastructure and Private Link in a test setup, so that PaaS services can be used with private endpoints. In this example we will use blob storage, but it can be extended to most other PaaS services that support private endpoints.

The concept of Azure Private Link is a bit complicated and people often get confused about how it works. This article aims to break it down into its simplest setup.

The following components are required:

  • A VNet and a subnet
  • A default NSG (that should be associated with the subnet)
  • An inbound rule on the NSG that allows port 3389 from your external IP
  • A private DNS zone (privatelink.blob.core.windows.net)
  • A VNet link between the private DNS zone and the VNet (it's a configuration on the private DNS zone)
  • A test VM running in the subnet (we need a source with a local IP to test connectivity towards the storage account with the private endpoint)
  • A public IP associated with the VM
  • AzCopy installed on the test VM
  • A storage account with a private endpoint (including an A record in the private DNS zone)
Once it is all set up, we will disable all public access on the storage account and verify that the VM is connecting to the storage account on its private IP address. Following this we will try to copy a file from the VM to the storage account and the other way around as well (using AzCopy) to prove that it works.

Steps

First deploy a VNet and one subnet, see example below. There are no specific requirements to this part.


Then create a default NSG and associate it with the subnet (this is not strictly required, but it is good practice and will come in handy later):


Create a private DNS zone named: privatelink.blob.core.windows.net


Create a VNet link between the private DNS zone and the VNet. Go to the private DNS zone -> privatelink.blob.core.windows.net -> Virtual Network Links -> click Add

Give the link a name and choose the VNet you just deployed. Don't check the auto registration box; that is for VMs and not in scope for this test.
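If you prefer PowerShell over the portal, the zone and the VNet link can be created along these lines (resource group and VNet names are examples; registration is left disabled, matching the note above):

New-AzPrivateDnsZone -ResourceGroupName "rg-dns-test" -Name "privatelink.blob.core.windows.net"
$vnet = Get-AzVirtualNetwork -ResourceGroupName "rg-dns-test" -Name "vnet-test"
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "rg-dns-test" -ZoneName "privatelink.blob.core.windows.net" -Name "link-vnet-test" -VirtualNetworkId $vnet.Id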


Deploy a virtual machine with a public IP (so that you can RDP to it). You can do that via the portal or you can use a test Windows Server 2022 VM ARM template, see more info here (note that this VM does not have a public IP so it will have to be created separately and associated with the NIC post deployment).

Add a rule to the NSG that allows traffic on port 3389 tcp from your external IP address. This is so that you can RDP to the test VM:
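The rule can also be added with PowerShell. A sketch with example names (replace the source prefix with your own external IP):

$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "rg-dns-test" -Name "nsg-test"
$nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-RDP-From-MyIP" -Direction Inbound -Access Allow -Protocol Tcp -Priority 300 -SourceAddressPrefix "<your external IP>/32" -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389 | Set-AzNetworkSecurityGroup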


RDP to the test VM and download AzCopy v10 and verify that it can run with the command: .\azcopy (this will show the version and some help info).


Create a storage account and under Networking -> Firewalls and virtual networks -> Choose the "Enabled from selected virtual networks and IP addresses" and add your external IP address. We will change this later to 'Disabled' but for now we need it to be able to create a blob container and add a file from the Azure portal. In addition this will let us browse the content of the blob containers.


Still under Networking, choose the Private Endpoint connections tab and add a private endpoint.
Give the private endpoint a name, e.g. pe-<name of storage account>, and choose the VNet and subnet that were created earlier. When it asks about integration with a private DNS zone, choose "No". If you choose yes, it will create a new private DNS zone, but we have already created a zone in a previous step.

What adding a private endpoint does is create a vNIC, dynamically assign it a local IP address from the subnet, and associate that vNIC with the storage account so that it can be accessed internally.


Once the private endpoint is created, it can be seen in the Private endpoint connections tab, see above. We need to take a note of the assigned local IP address. To do this, click on the private endpoint link and then go to DNS configuration, see below:
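The assigned IP can also be read with PowerShell (the private endpoint name is an example):

$pe = Get-AzPrivateEndpoint -ResourceGroupName "rg-dns-test" -Name "pe-teststorage"
# Shows the FQDN and the local IP assigned to the private endpoint
$pe.CustomDnsConfigs | Select-Object Fqdn, IpAddresses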


Now we need to create an A record in the private DNS zone we created earlier that points to the IP address we just noted. Go to Private DNS Zones -> privatelink.blob.core.windows.net -> click "+ Record set". Under name, add the storage account name, and under IP address, add the IP address that was recorded in the previous step, see below.
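The same A record can be created with PowerShell (the storage account name and IP are examples matching this walkthrough):

$record = New-AzPrivateDnsRecordConfig -Ipv4Address "10.100.0.8"
New-AzPrivateDnsRecordSet -ResourceGroupName "rg-dns-test" -ZoneName "privatelink.blob.core.windows.net" -Name "<storageaccountname>" -RecordType A -Ttl 3600 -PrivateDnsRecords $record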



Now we will verify that the test VM can resolve the storage account to its local IP using the regular storage account FQDN (if this doesn't work, it will resolve to a public IP address and you will know that something went wrong).

Jump to the test VM and start a command prompt:

Run: nslookup <storageaccountname>.blob.core.windows.net

The result should be the local IP, see below (the red box in the screenshot shows the result without a private endpoint and the green box shows the result after the private endpoint has been added. In the green box, the local IP 10.100.0.8 correctly shows):


To verify that we can copy a file from the test VM to a blob container and vice versa, first we will create a dummy file on the VM and then a dummy file in the storage account (any txt file will do).

Go to the storage account and create a new container (in this example I call it webcontent, any name will do):


Then go into the container and click Upload to upload a file (in this example it's called getmefile.txt):


For the VM to be able to access files in the storage account, we need a SAS token which will be used with the AzCopy command. Go to the storage account -> Shared access signature -> check Blob and check all three resource types (basically just allow all if you're in doubt as this is only for testing) -> click Generate SAS and connection string.

Then copy or take a note of the 'Blob service SAS URL' value at the bottom of the page:
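As an alternative to the portal, a container-level SAS can also be generated with PowerShell. A sketch with example names (the -FullUri switch returns the container URL with the SAS appended, similar to the Blob service SAS URL):

$sa = Get-AzStorageAccount -ResourceGroupName "rg-dns-test" -Name "<storageaccountname>"
New-AzStorageContainerSASToken -Context $sa.Context -Name "webcontent" -Permission rwdl -ExpiryTime (Get-Date).AddHours(4) -FullUri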


Then jump back to the test VM.

I created a dummy file called index.html and placed it in the C:\users\localadmin folder. I also have AzCopy located in the same folder.

To test that we can copy a file from the VM to the storage account, start a PowerShell window (will also work from a regular command prompt) and run the following command:

.\azcopy copy ".\index.html" "https://<storage account name>.blob.core.windows.net/webcontent/?sv=2022-11-02&ss<link has been shortened>U%3D"

So you use the copy function and then choose a source and destination. For the source, we choose the index.html file in the current folder. For the destination, we use the Blob service SAS URL, but we modify it by adding the blob container name and a '/' after the FQDN and before the SAS token info (webcontent/ in the command above), also see below:


For the next test, we copy the content of the blob container we created earlier including the file we added and place it on the local test VM:

It is basically just changing the source and destination in the AzCopy command (again adding the blob container name after the FQDN):

.\azcopy copy "https://<storage account name>.blob.core.windows.net/webcontent/?sv=2022-11-<link has been shortened>2BU%3D" "C:\Users\localadmin\" --recursive


To ensure that public access is entirely disabled, you can now go to the storage account under Networking -> Firewalls and virtual networks -> Choose Disabled and Save.

With this change you can no longer browse content in the blob containers via the portal.

However, you can re-run the two AzCopy commands above and it will still work.

If you want to verify that the files are available in the blob container, you can access them via the browser (from the test VM) by using the 'Blob service SAS URL' and then modifying it by appending the container name and file name after the FQDN:

https://<storage account name>.blob.core.windows.net/webcontent/getmefile.txt?sv=2022-11-02&s<link has been shortened>2BU%3D

Another way to represent the same URL is:

https://<storage account name>.blob.core.windows.net/<blob container>/<file name><SAS Token>

The reason you cannot just use the FQDN plus the folder and file name is that even though you can technically reach the storage account (i.e. there is no firewall on the storage account blocking access), you still need to present the required credentials to view the content, which in this case is the SAS token added to the URL.


If you want to get just the SAS token and not the full Blob service SAS URL, it is available under the storage account -> 'Shared access signature' in the same location as the Blob service SAS URL, see below:



With this we have a proven setup where traffic between a VM and a storage account only runs over the Private Link and all traffic is handled via the internal network using only local IP addresses.

Friday, June 30, 2023

Azure Firewall with availability zones and forced tunneling - ARM template

This firewall has a fairly specific configuration that aligns to a set of client requirements. First of all, it's set up for forced tunneling. There is no requirement to configure a default route pointing towards on-prem, as the default route can be advertised via BGP (in a setup where you also have ExpressRoute or VPN to on-prem configured). For this to work, 'propagate gateway routes' must be enabled on the route table associated with the AzureFirewallSubnet, see here for more info.

This setup requires a secondary subnet, AzureFirewallManagementSubnet, to be deployed (with a /26 size), and this subnet must have a default route pointing to the Internet.
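As a sketch, the route table for the management subnet could be created like this with PowerShell (resource group, location, and names are examples):

$rt = New-AzRouteTable -ResourceGroupName "rg-fw" -Location "westeurope" -Name "rt-azfw-mgmt"
# The management subnet must have a 0.0.0.0/0 route pointing directly to the Internet
$rt | Add-AzRouteConfig -Name "default-to-internet" -AddressPrefix "0.0.0.0/0" -NextHopType Internet | Set-AzRouteTable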

In a default setup the firewall will have two public IP addresses, but for security purposes one of those two has been removed. The remaining public IP on the management interface is a technical requirement for internal communication with Microsoft and can't be removed.

The ARM template, referenced below, deploys two resources. The firewall itself and one public IP. Both resources are deployed into three availability zones (AZ) (note that only certain Azure Regions support three AZs). 

If you want an official Bicep file from Microsoft for a firewall with AZs, see this link.

ARM template on Github:


Friday, June 16, 2023

Installing VS Code, PowerShell, Azure PowerShell, and AZ CLI on macOS

 It's relatively simple to get these tools installed on a Mac so you can start working with Azure and ARM templates via code.

This article just collects the relevant information and puts it in order:

Visual Studio Code

This can be installed as a regular app in macOS, follow the link:

https://code.visualstudio.com/docs/setup/mac 


Homebrew


Homebrew is a package manager for macOS and Microsoft's recommended way of installing PowerShell and other tools.


https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-macos?view=powershell-7.3


The commands to run for Homebrew are:


/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"


When Homebrew has been installed, it will ask you to run two additional commands (to add Homebrew to your PATH), these are:


(echo; echo 'eval "$(/opt/homebrew/bin/brew shellenv)"') >> /Users/<REPLACE WITH USER>/.zprofile


eval "$(/opt/homebrew/bin/brew shellenv)"


PowerShell


To install PowerShell follow the same link as above:


Or run the following command:

brew install --cask powershell

And to run PowerShell from the terminal:

pwsh

Azure PowerShell

To install Azure PowerShell, follow this link:


Or run this command:

Install-Module -Name Az -Repository PSGallery -Force

This will give you the Azure related commands such as:

Connect-AzAccount, Get-AzContext, Set-AzContext, etc 


AZ CLI


To install, follow this link:


https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-macos


Or run:

brew update && brew install azure-cli

This gives you all the az commands. A separate login (az login) is required before you can use the az commands against Azure.

Terraform

To install, see link or run the two commands below:


brew tap hashicorp/tap

brew install hashicorp/tap/terraform



Thursday, June 15, 2023

Azure: ARM template for simple Win2k22 server for testing purposes

I have previously posted an article with a simple Win2k19 server using a key vault, see here. This post describes a simple Win2k22 server using a regular username and password set in the parameters file of the ARM template.

The VM is deployed with tcp port 3389 opened locally on the VM, but no public IP is added. If you need to RDP to the VM from the internet, a public IP should be added and associated with the vNIC post deployment. 

The ARM template files can be found here:



Remember to update the parameters file with relevant information.

See here (the README file) for deployment information via PowerShell.
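A typical deployment from PowerShell could look like the following (the resource group and file names are examples; see the README for the exact steps):

New-AzResourceGroupDeployment -ResourceGroupName "rg-test-vm" -TemplateFile ".\azuredeploy.json" -TemplateParameterFile ".\azuredeploy.parameters.json"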

This VM has automatic shutdown configured at 19.00 hrs. UTC, daily.


Monday, June 12, 2023

Azure: Deploy a key vault with a private endpoint

 This post describes an ARM template that deploys a key vault, with a private endpoint, into an existing VNet and subnet. The key vault does not have to be placed in the same resource group as the VNet.

It requires the following to already be deployed:

  • VNet
  • Subnet
  • Resource group (for the key vault and private endpoint)
  • Private DNS zone (privatelink.vaultcore.azure.net)

The key vault will be configured to use RBAC and will allow ARM templates to retrieve content (in this case it is to allow an ARM template that deploys a VM to retrieve a secret from the key vault), see below:


For the Network settings, all public access is disabled. With a private endpoint you'll be able to create and read secrets from the key vault in the Azure Portal via internal routing. 
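Once the private endpoint and the DNS record are in place, secrets can be created and read over the private endpoint, for example from a VM in the VNet. A small sketch with example names (the caller needs an RBAC role such as Key Vault Secrets Officer on the vault):

$secret = ConvertTo-SecureString "P@ssw0rd-example" -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName "kv-test-001" -Name "vm-admin-password" -SecretValue $secret
Get-AzKeyVaultSecret -VaultName "kv-test-001" -Name "vm-admin-password"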

The "Allow trusted Microsoft services to bypass the firewall" setting is checked. This allows Azure DevOps, via a pipeline, to deploy an ARM template that deploys a VM that retrieves the secret from the KV which typically is the local admin password for the VM, see below:



The private endpoint (PE) can be found under, Networking -> Private endpoint connections.

A local IP from the specified subnet will be dynamically assigned to the PE.

Note that this template will not create the A record in the private DNS zone for the private endpoint. This is usually automated via Azure Policy, see here for more info under point 3.


The ARM template and parameters file can be found on Github. There's also a README with a bit more info. For more info on the format of the subnet ID string, see here.