Monday, November 29, 2021

Create Azure ExpressRoute Circuit - ARM template

There are multiple steps and design decisions that go into the deployment of an ExpressRoute. This article will (mostly) cover one of these steps: the deployment of the ExpressRoute (ER) circuit.

An ExpressRoute circuit is a logical connection between your on-premises datacenters and Azure via a connectivity provider; see the diagram below:


If you are designing an ER setup with multiple ER providers for DR, one ER circuit is required for each of these providers.

Once you deploy the ER circuit, there will be a monthly running cost (for both metered and unmetered connections).

To deploy the ER circuit, you can use an ARM template available on GitHub: see this link for one that I've used at a client, or use this version from the Microsoft GitHub repository. You can also deploy it manually from the portal if needed.
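If you just want the core of it, the circuit resource in such a template boils down to something like the sketch below. This is my own minimal illustration rather than a copy of the linked templates, and the parameter names are ones I've made up for the example:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "circuitName": { "type": "string" },
    "serviceProviderName": { "type": "string" },
    "peeringLocation": { "type": "string" },
    "bandwidthInMbps": { "type": "int" },
    "skuTier": { "type": "string", "allowedValues": [ "Standard", "Premium" ] },
    "skuFamily": { "type": "string", "allowedValues": [ "MeteredData", "UnlimitedData" ] }
  },
  "resources": [
    {
      "type": "Microsoft.Network/expressRouteCircuits",
      "apiVersion": "2021-02-01",
      "name": "[parameters('circuitName')]",
      "location": "[resourceGroup().location]",
      "sku": {
        // The SKU name is the tier and family combined, e.g. Standard_MeteredData
        "name": "[concat(parameters('skuTier'), '_', parameters('skuFamily'))]",
        "tier": "[parameters('skuTier')]",
        "family": "[parameters('skuFamily')]"
      },
      "properties": {
        "serviceProviderProperties": {
          "serviceProviderName": "[parameters('serviceProviderName')]",
          "peeringLocation": "[parameters('peeringLocation')]",
          "bandwidthInMbps": "[parameters('bandwidthInMbps')]"
        }
      }
    }
  ]
}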

Note that to make sure you enter the correct values in the parameters JSON file, you can go through the steps of creating a new ER circuit via the Azure portal, but instead of clicking "Create" on the last tab, click "Download the template for automation". This will give you the ARM template including the parameters file, and from there you can copy the correct parameter information. The main parameter options are shown below:


For port type there is a choice between Provider and Direct. Provider is the option described in this article (and the one most companies would use), whereas ExpressRoute Direct circumvents the service provider and connects directly to Microsoft equipment at the local peering edge location; see more here. (A connectivity provider such as Telia or Interxion will still be used to do the actual work of connecting the physical equipment.) With ER Direct you can get speeds of up to 100 Gbps, whereas via a connectivity provider the maximum bandwidth is 10 Gbps.

Peering location is the physical datacenter location of the connectivity provider. This location will typically be as close as possible to the client's datacenters or main office location. You can see a list of providers and locations here. An example could be a Denmark-based company using Azure resources in the West Europe region. A choice of local peering location could be Copenhagen via the provider Interxion. When traffic reaches Copenhagen on provider-managed lines, it continues on the Microsoft backbone to Amsterdam (West Europe).

For the SKU, there is a choice between Standard and Premium. You can see the differences here. The Standard SKU will likely suffice for most clients.

For the billing model there is a choice between Metered and Unmetered; see prices here. For a Metered connection, there is a cost for egress traffic (traffic leaving Azure) but not for ingress.
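To tie the options above together, a parameters file for the hypothetical template sketched earlier could look roughly like this for the Danish example. The circuit name and bandwidth are arbitrary example values, and you should check the exact provider and peering location strings against the list linked above:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "circuitName": { "value": "er-circuit-weu-01" },
    "serviceProviderName": { "value": "Interxion" },
    "peeringLocation": { "value": "Copenhagen" },
    "bandwidthInMbps": { "value": 1000 },
    "skuTier": { "value": "Standard" },
    "skuFamily": { "value": "MeteredData" }
  }
}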

When the connectivity provider has established the last mile and connected the client datacenter to a local peering edge, they will ask for a Service Key so they can configure the circuit on their side. The Service Key is available from the portal once the ER circuit is provisioned; see the screenshot below:
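If you prefer to grab the key from the deployment output rather than the portal, the template could also surface it as an output. Something along these lines should work, again assuming the hypothetical template sketched earlier:

"outputs": {
  "serviceKey": {
    "type": "string",
    "value": "[reference(resourceId('Microsoft.Network/expressRouteCircuits', parameters('circuitName'))).serviceKey]"
  }
}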


After creating the ER circuit, Private Peering (as opposed to Microsoft peering, which is used for Office 365) must be configured. See details here. (This step can be set up either by the connectivity provider or by you, so ask the provider how they usually handle it.)
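If you end up doing the Azure side of this via ARM yourself, the private peering is a child resource of the circuit. A rough sketch is shown below; the ASN, VLAN ID and /30 prefixes are placeholder values that you would agree with the provider:

{
  "type": "Microsoft.Network/expressRouteCircuits/peerings",
  "apiVersion": "2021-02-01",
  "name": "[concat(parameters('circuitName'), '/AzurePrivatePeering')]",
  "properties": {
    "peeringType": "AzurePrivatePeering",
    "peerASN": 65010,
    "primaryPeerAddressPrefix": "192.168.100.0/30",
    "secondaryPeerAddressPrefix": "192.168.100.4/30",
    "vlanId": 100
  }
}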

Then verify Private Peering.

After this, create a connection to the ER gateway (similar to a VPN gateway, but for ExpressRoute) from the portal: Circuit -> Connections -> Add. This can also be done via ARM; see this link.
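As a sketch, the ARM version of that connection looks roughly like the following. The gateway and connection names are examples of my own, and the circuit is referenced from the earlier template:

{
  "type": "Microsoft.Network/connections",
  "apiVersion": "2021-02-01",
  "name": "con-ergw-to-circuit",
  "location": "[resourceGroup().location]",
  "properties": {
    "connectionType": "ExpressRoute",
    "virtualNetworkGateway1": {
      "id": "[resourceId('Microsoft.Network/virtualNetworkGateways', 'ergw-hub')]"
    },
    "peer": {
      "id": "[resourceId('Microsoft.Network/expressRouteCircuits', parameters('circuitName'))]"
    },
    "routingWeight": 0
  }
}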


Tuesday, October 5, 2021

Azure Firewall - Forced Tunneling not working for internet bound traffic

At my current client we have a use case that is not widely used. It's built around the Microsoft Enterprise-Scale AdventureWorks hub/spoke network infrastructure. Traffic from the spokes is filtered via Azure Firewall to a VPN gateway in the hub and from there to the on-premises datacenters. Azure Firewall forced tunneling is configured so that all traffic to on-premises, including internet-bound traffic, is routed this way and ultimately exits via an on-premises web proxy. The usual configuration would be to route internet-bound traffic directly to the internet from the Azure Firewall (AZ FW).
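For context, the spoke subnets are sent through the firewall with a route table roughly like the one below. The names and the firewall private IP are illustrative, not our actual values:

{
  "type": "Microsoft.Network/routeTables",
  "apiVersion": "2021-02-01",
  "name": "rt-spoke-default",
  "location": "[resourceGroup().location]",
  "properties": {
    "routes": [
      {
        "name": "default-to-azfw",
        "properties": {
          "addressPrefix": "0.0.0.0/0",
          "nextHopType": "VirtualAppliance",
          "nextHopIpAddress": "10.0.1.4"
        }
      }
    ]
  }
}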

Since we're still building the environment, we had only tested connectivity to on-premises addresses and not internet-bound traffic. When testing, we could see that traffic from the spokes reached the AZ FW and was correctly allowed by the network rules. But from there it sort of disappeared.

We then set up a packet capture on two of the VPN gateways (VPN GWs); one was a control where traffic was not filtered via the AZ FW. We could see that internet-bound traffic never reached the VPN GW, so the problem was introduced before that point, either in the FW itself or in the UDRs.

Microsoft Support suggested that we remove the UDR/route table associated with the AzureFirewallSubnet (which contains a default route, 0.0.0.0/0, to the virtual network gateway), since this route should already be pushed from the VPN GWs.
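For reference, the route table in question, i.e. the one associated with the AzureFirewallSubnet that we ended up removing, looked roughly like this (the name is illustrative):

{
  "type": "Microsoft.Network/routeTables",
  "apiVersion": "2021-02-01",
  "name": "rt-azfw-subnet",
  "location": "[resourceGroup().location]",
  "properties": {
    "routes": [
      {
        "name": "default-to-onprem",
        "properties": {
          "addressPrefix": "0.0.0.0/0",
          "nextHopType": "VirtualNetworkGateway"
        }
      }
    ]
  }
}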

Removing the UDR actually fixed the issue (so no route table is associated with the AzureFirewallSubnet). In other words, having a duplicate configuration, which in itself is not incorrect, results in internet-bound traffic being dropped by the AZ FW.

While testing this, we couldn't see the local source IP addresses of the test VMs in the on-premises firewall. Instead we saw an IP address from the AzureFirewallSubnet range (not the configured IP of the local AZ FW interface, but from the same subnet, so it belongs to the FW). The reason is that SNAT is performed by default for internet-bound destinations but not for private addresses. This can be changed under AZ FW Policies by setting SNAT to 'Never'. With this, the original source IPs are visible in the on-prem FW (they are not SNAT'ed or masked). See the configuration of SNAT in the screenshot below.
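In ARM terms, as far as I can tell, the 'Never' option corresponds to the SNAT private IP ranges on the firewall policy; setting the range to 0.0.0.0/0 makes the firewall treat every destination as private and therefore never SNAT. A sketch, where the policy name is an example and you would keep your existing policy properties alongside the snat block:

{
  "type": "Microsoft.Network/firewallPolicies",
  "apiVersion": "2021-02-01",
  "name": "azfw-policy-hub",
  "location": "[resourceGroup().location]",
  "properties": {
    "snat": {
      "privateRanges": [ "0.0.0.0/0" ]
    }
  }
}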