
Friday, February 3, 2023

Deploy an Azure Firewall with no public IP for data plane with ARM (and forced tunneling)

At my current client we have multiple Azure Firewalls running with forced tunneling to on-prem. By default this requires two public IP addresses: one for the data plane (customer traffic) and one for service management traffic (used exclusively by the Azure platform).

Since there is no ingress or egress to the internet on these firewalls as all traffic is routed to on-prem, there was a request from the security team to remove the public IP for the data plane (so there will just be one public IP left per firewall).

If deploying from the portal or via PowerShell, this cannot be configured during deployment, but it can be done post deployment: go to the Azure Firewall -> Public IP configuration -> choose the three dots to the right of the public IP -> choose Edit -> and then choose None so that no public IP is associated. See the screenshot below for an example.

Note that you cannot choose to delete the public IP directly, and the other option to Manage Public IP also cannot be used to disassociate the public IP (which is perhaps a bit confusing).
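
I haven't tested it, but a PowerShell equivalent of the portal steps would presumably look something like the sketch below. The firewall and resource group names are hypothetical, and it is an assumption on my part that Set-AzFirewall accepts a data plane IP configuration with no public IP attached:

# Untested sketch; hypothetical names, adjust to your environment
$azfw = Get-AzFirewall -Name "fw-hub-weu" -ResourceGroupName "rg-hub"
# Assumption: the platform accepts an IP configuration without a public IP,
# as the ARM template below demonstrates for new deployments
$azfw.IpConfigurations[0].PublicIpAddress = $null
$azfw | Set-AzFirewall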

Deploy with ARM template

If deploying the firewall using an ARM template, it is possible to configure just one public IP at the time of deployment.

I've put an example template on GitHub that can be used for reference:

fw-no-public-ip.json

fw-no-public-ip.parameters.json

The following resources should be deployed in advance (a PowerShell sketch for creating them follows the list):

  • A resource group
  • A VNet
  • 2 x subnets (AzureFirewallSubnet and AzureFirewallManagementSubnet, both should be /26 in size)
  • A firewall policy
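
A minimal sketch for creating these prerequisites with PowerShell could look like this (all names and address ranges are made up for the example):

# Hypothetical names and address ranges; adjust to your environment
New-AzResourceGroup -Name "rg-fw-test" -Location "westeurope"

$fwSubnet = New-AzVirtualNetworkSubnetConfig -Name "AzureFirewallSubnet" -AddressPrefix "10.10.0.0/26"
$mgmtSubnet = New-AzVirtualNetworkSubnetConfig -Name "AzureFirewallManagementSubnet" -AddressPrefix "10.10.0.64/26"

New-AzVirtualNetwork -Name "vnet-fw-test" -ResourceGroupName "rg-fw-test" -Location "westeurope" `
  -AddressPrefix "10.10.0.0/24" -Subnet $fwSubnet, $mgmtSubnet

New-AzFirewallPolicy -Name "afwp-test" -ResourceGroupName "rg-fw-test" -Location "westeurope"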

References to the subnets and the policy should be updated in the parameters file before running it.
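
The template can then be deployed with New-AzResourceGroupDeployment, for example (the resource group name is just an example):

New-AzResourceGroupDeployment -ResourceGroupName "rg-fw-test" `
  -TemplateFile ".\fw-no-public-ip.json" `
  -TemplateParameterFile ".\fw-no-public-ip.parameters.json"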

On the first screenshot below you can see that there is an entry for the public IP called "DoesNotExist" (just an example name), but it is not associated with an actual public IP. On the second screenshot you can see the result post deployment: the FW has no firewall public IP, but it does have one for management traffic.




Monday, July 11, 2022

Azure Firewall with Azure policy and IP groups using Bicep

In the previous post I deployed a firewall with some additional components, such as VMs, NSGs, and route tables, to have a working test setup. In this post, mainly the FW, the FW policies, and the IP groups will be deployed.

This post is based on this Quickstart template with only a few modifications. The main reason I do this is to verify that the files work in my environment and to inspect the deployed resources.

I have uploaded the modified files to Github (fw-with-policy-001.bicep and fw-with-policy-001.parameters.json). 
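
The Bicep file can be deployed directly with Az PowerShell (assuming the Bicep CLI is installed, which Az uses to build the file transparently); the resource group name is just an example:

New-AzResourceGroupDeployment -ResourceGroupName "rg-fw-test" `
  -TemplateFile ".\fw-with-policy-001.bicep" `
  -TemplateParameterFile ".\fw-with-policy-001.parameters.json"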

The resources deployed can be seen below:


The two IP groups are used in combination with the policy, and the policy is associated with the firewall. There are two rule collection groups defined with a total of three rules; see the two screenshots below:






Sunday, July 10, 2022

Azure: Deploy Azure Firewall in availability zones using Bicep

Microsoft has some great Quickstart templates for deploying resources as code. For this post I've looked at deploying an Azure Firewall into multiple availability zones using Bicep.

The Quickstart guide used can be found here.

The template needs only a few changes. I updated the VNet name and IP address info and inserted a reference to a key vault for the VM admin password.

My updated files can be found on Github (fw-conn-weu-001.bicep and fw-conn-weu-001.parameters.json).

The firewall is deployed into three availability zones in West Europe. This increases the availability to 99.99%. There is still only one logical firewall deployed, so the increased availability is managed in the background by Microsoft, and from an Azure Portal point of view it looks the same as deploying to a single availability zone.

The availability zones can be verified under Azure Firewall -> Properties, see below:
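
Alternatively, the zones can be checked with PowerShell; a small sketch (the firewall and resource group names are hypothetical):

# Should return 1, 2, 3 for a firewall deployed to all three zones
(Get-AzFirewall -Name "fw-conn-weu-001" -ResourceGroupName "rg-conn-weu-001").Zones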


There is no immediate additional cost to deploying to multiple availability zones; however, there is an increased bandwidth cost, see below (or this link):




The bicep file deploys the following resources:


If you are testing in a Visual Studio subscription, make sure to delete the resources again when done, as especially the FW can run up costs quickly.

One network rule and one application rule are added under Firewall -> Rules (classic) -> Network/Application rule collection tabs, see below. Usually a Firewall Policy would be created and associated with the FW, but for testing purposes the classic rules will do.


Using the Bicep visualizer in VS Code you can get a nice graphical overview of the resources to be deployed. To do this, right-click the Bicep file in VS Code and choose "Open Bicep Visualizer", see below:







Tuesday, October 5, 2021

Azure Firewall - Forced Tunneling not working for internet bound traffic

At my current client we have a use case that is not widely used. It's built around the Microsoft Enterprise-Scale AdventureWorks Hub/Spoke network infrastructure. Traffic from the spokes is filtered via Azure Firewall to a VPN gateway in the Hub and from there to the on-premises datacenters. Azure Firewall forced tunneling is configured so that all traffic to on-prem, including internet-bound traffic, is routed this way and ultimately exits via an on-premises web proxy. The usual configuration would be to route internet-bound traffic directly to the internet from the Azure Firewall (AZ FW).

Since we were still building the environment, we had only tested connectivity to on-premises local addresses and not internet-bound traffic. When testing, we could see that traffic from the spokes reached the AZ FW and was correctly allowed by the network rules, but from there it sort of disappeared.

We then set up a packet capture on two of the VPN gateways (VPN GW); one was a control where traffic was not filtered via the AZ FW. We could see that internet-bound traffic never reached the VPN GW, so the problem was introduced before that point, either in the FW itself or in the UDRs.

Microsoft Support suggested that we remove the UDR/route table associated with the AzureFirewallSubnet (which contains a default route, 0.0.0.0/0, to the virtual network gateway) since this should already be pushed from the VPN GWs.

Removing the UDR actually fixed the issue (so that no route table is associated with the AzureFirewallSubnet). So a duplicate configuration, which in itself is not incorrect, results in internet traffic being dropped by the AZ FW.
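
For reference, removing the association with PowerShell could look something like this sketch (the VNet and resource group names are hypothetical):

# Hypothetical names; adjust to your environment
$vnet = Get-AzVirtualNetwork -Name "vnet-hub" -ResourceGroupName "rg-hub"
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "AzureFirewallSubnet" }
# Clear the route table association and push the change
$subnet.RouteTable = $null
$vnet | Set-AzVirtualNetwork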

While testing this, we couldn't see the local source IP addresses of the test VMs in the on-prem firewall. Instead we saw an interface/IP address from the AzureFirewallSubnet range (not the configured IP of the local AZ FW interface, but from the same subnet, so it belongs to the FW). The reason for this is that SNAT is performed by default for internet-bound addresses but not for local addresses. This can be changed under AZ FW Policies and set to 'Never'. With this, the original source IPs are visible in the on-prem FW (they are not SNAT'ed or masked). See the configuration of SNAT in the screenshot below.
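
For reference, setting this on the policy with PowerShell could look roughly like the sketch below. The policy and resource group names are hypothetical, and note that per the MS docs a "private range" of 0.0.0.0/0 is how 'Never' is expressed (no traffic is SNAT'ed):

# Hypothetical names; adjust to your environment
$policy = Get-AzFirewallPolicy -Name "afwp-hub" -ResourceGroupName "rg-hub"
# 0.0.0.0/0 as the private range = never SNAT
$policy.Snat = New-AzFirewallPolicySnat -PrivateRange "0.0.0.0/0"
Set-AzFirewallPolicy -InputObject $policy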



 


Thursday, October 1, 2020

Azure: Allowing Windows Server to activate via KMS through Azure Firewall

At my current client we have an Azure Firewall installed, and only outbound traffic on ports 80 and 443 is allowed. Today we found that Windows Servers do not activate (we're buying the licenses per Windows Server).

The reason for this is that the VMs can't reach the KMS server for Azure Global cloud:

Azure Global KMS: kms.core.windows.net  - IP: 23.102.135.246 on port 1688. See here for more info. 

(I found a general activation troubleshooting guide here where they recommend testing with PsPing. I didn't get around to trying that, but it might be useful.)
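
Test-NetConnection, which is built into Windows, can do a similar TCP check from a VM:

# Quick TCP reachability check against the Azure Global KMS endpoint
Test-NetConnection -ComputerName kms.core.windows.net -Port 1688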

I tried adding a network rule in the Azure Firewall using the FQDN, kms.core.windows.net. That failed.

But adding a rule using the IP address instead worked fine (and that looks to be 'allowed' as well by MS).

The following rule was added in Azure Firewall via PowerShell:

 # Allow VMs to reach kms.core.windows.net for Windows Server activation
 $NetRule2 = New-AzFirewallNetworkRule -Name "allow-kms" -Protocol TCP,UDP -SourceAddress `
  "10.1.1.0/24", `
  "10.1.2.0/24" `
 -DestinationAddress "23.102.135.246" -DestinationPort "1688"

And remember to add the new rule variable to the New-AzFirewallNetworkRuleCollection cmdlet as well:

$NetRuleCollection = New-AzFirewallNetworkRuleCollection -Name 'fw-rule-collection' -Priority 200 `
   -Rule $NetRule1,$NetRule2 -ActionType "Allow"

When done, I RDP'ed to a VM and checked the activation status. It wasn't activated, but after clicking Troubleshoot the VM activated. All VMs will likely activate automatically after a while.
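
If you'd rather not wait, activation can also be triggered manually from an elevated prompt inside the VM; these slmgr commands are from the MS activation guidance linked above:

# Point the VM at the Azure Global KMS and attempt activation
slmgr /skms kms.core.windows.net:1688
slmgr /ato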




Thursday, September 10, 2020

Azure Firewall drops traffic to on-prem S2S VPN

We're currently setting up an Azure Firewall at a client site. Initial implementation was done by following the MS documentation: 

https://docs.microsoft.com/en-us/azure/firewall/deploy-ps

After deployment, and after attaching a given subnet to the default route table (to force all outbound traffic through the Azure Firewall), there was no communication from the Azure subnet to the local VPN gateway (S2S VPN to on-premises). Other subnets that weren't attached to the route table worked fine.

It seemed that the new default route (see example below) might be overriding the existing default routes in Azure, and that the way forward would be to create additional routes to direct traffic to the VPN.

$routeTableDG = New-AzRouteTable `
  -Name Firewall-rt-table `
  -ResourceGroupName Test-FW-RG `
  -Location "East US" `
  -DisableBgpRoutePropagation

# Create a route
Add-AzRouteConfig `
  -Name "DG-Route" `
  -RouteTable $routeTableDG `
  -AddressPrefix 0.0.0.0/0 `
  -NextHopType "VirtualAppliance" `
  -NextHopIpAddress $AzfwPrivateIP |
  Set-AzRouteTable


It turns out that the standard routes were fine. Only the default route to the internet is overridden (which is expected); the other two remain in use:

These are the standard routes that Azure creates:

"Each virtual network subnet has a built-in, system routing table. The system routing table has the following three groups of routes:
Local VNet routes: Directly to the destination VMs in the same virtual network.
On-premises routes: To the Azure VPN gateway.
Default route: Directly to the Internet. Packets destined to the private IP addresses not covered by the previous two routes are dropped."

The problem was that the property "Propagate gateway routes" was set to "No", see below. This means that the VPN gateway routes are not visible/propagated to the subnets. It can be turned on either from the portal or via PowerShell.




To do this in PowerShell, simply remove the line -DisableBgpRoutePropagation from the New-AzRouteTable command so it looks like below:

$routeTableDG = New-AzRouteTable `
  -Name Firewall-rt-table `
  -ResourceGroupName Test-FW-RG `
  -Location "East US"   # -DisableBgpRoutePropagation removed

This will set the property to "False", which in the portal corresponds to "Yes". You can see the "false" setting under Route table -> Export template.
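
The property can also be checked and flipped on an existing route table with PowerShell; a small sketch using the names from the example above:

$rt = Get-AzRouteTable -Name "Firewall-rt-table" -ResourceGroupName "Test-FW-RG"
$rt.DisableBgpRoutePropagation            # True = "No" in the portal, False = "Yes"
$rt.DisableBgpRoutePropagation = $false   # enable gateway route propagation
$rt | Set-AzRouteTable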

When done, the routes published by the virtual network gateway become visible under Route tables -> "your route table" -> Effective routes, like below, and traffic will also flow to the on-prem site via the VPN.