
Azure Virtual Network Routing Appliance — A Native Solution for Hub-and-Spoke Routing

Azure Virtual Network Appliance Overview

A few months ago, a client contacted me, expressing a common concern. They had an impressive hub-and-spoke architecture in Azure, comprising around 100 spoke VNets, each dedicated to an application team, with excellent isolation and governance in place. A model setup, indeed. However, they encountered a challenge: the traffic between spokes had to route through an Azure Firewall Premium deployed in the hub, and they were beginning to reach the 100 Gbps limit. Additionally, the monthly costs of the Azure Firewall were escalating to a point where the finance department was starting to ask questions — and the honest assessment was: *"We're primarily using it as a router, not as a firewall."*


That was a challenging discussion.

They didn’t require deep packet inspection or Layer 7 filtering between their internal spokes; they simply needed a reliable, fast connection between Spoke A and Spoke B at scale. We explored third-party Network Virtual Appliances (NVAs), but the high licensing fees, throughput limitations, and restrictions on active connections felt like merely shifting the problem.

Then, in February 2026, Microsoft launched the Azure Virtual Network Routing Appliance in public preview, and things became more interesting.

Understanding the Challenges of Spoke-to-Spoke Routing

For those familiar with Azure, it’s known that VNet peering is not transitive. Spoke A and Spoke B can both connect to the Hub, but direct communication between them isn’t automatic unless explicitly configured.

This can be quite confusing for engineers transitioning from traditional networking. In on-premises setups, routers tend to handle routing seamlessly. However, in Azure, a subnet’s “default gateway” isn’t a traditional routing entity. The virtual NIC manages its routing table and directs traffic straight to its destination, bypassing any intermediary gateway. Therefore, to enable communication from Spoke A to Spoke B via the hub, an actual device must exist in the hub for Azure to identify as the next hop.
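To make the next-hop behaviour concrete, here is a minimal sketch of how Azure's effective-route evaluation works on a NIC: the route with the longest matching prefix wins. The addresses and the AVNA IP are hypothetical, and the route set is simplified (the default 0.0.0.0/0 Internet route is omitted) — the point is only that without an explicit route for Spoke B's prefix, Spoke A has nowhere to send the traffic:

```python
import ipaddress

def effective_next_hop(routes, dest_ip):
    """Pick the route with the longest matching prefix, mimicking
    Azure's effective-route evaluation on a virtual NIC."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [r for r in routes if dest in ipaddress.ip_network(r["prefix"])]
    if not matches:
        return "None (dropped)"
    best = max(matches, key=lambda r: ipaddress.ip_network(r["prefix"]).prefixlen)
    return best["next_hop"]

# Simplified system routes on a Spoke A NIC: its own VNet and the hub peering.
spoke_a_routes = [
    {"prefix": "10.1.0.0/16", "next_hop": "VnetLocal"},    # Spoke A itself
    {"prefix": "10.0.0.0/16", "next_hop": "VNetPeering"},  # Hub
]

# Spoke B (10.2.0.0/16) is unreachable -- peering is not transitive.
print(effective_next_hop(spoke_a_routes, "10.2.1.4"))  # None (dropped)

# A UDR steering Spoke B's prefix at a device in the hub (e.g. an AVNA IP)
# is what makes spoke-to-spoke traffic possible.
spoke_a_routes.append(
    {"prefix": "10.2.0.0/16", "next_hop": "VirtualAppliance 10.0.1.4"})
print(effective_next_hop(spoke_a_routes, "10.2.1.4"))  # VirtualAppliance 10.0.1.4
```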

Until now, your options were limited:

| Option | The catch |
| --- | --- |
| Azure Firewall Premium | Excellent firewall, but cumbersome for routing. Capped at 100 Gbps at $1.75/hr before data charges, with limited BGP support for on-premises routing. |
| Azure Firewall Standard | Capped at 30 Gbps at $1.25/hr — less suitable for high-throughput routing. |
| Third-party NVA | VM-based constraints apply. The 250,000 active-connection cap has caught many by surprise, and licensing, patching, and support costs add up. |

Organisations aiming for a cloud-first approach often prefer to avoid third-party NVAs if a native Azure resource would suffice. Until recently, a viable solution was not available.


Introducing the Azure Virtual Network Routing Appliance

The Azure Virtual Network Routing Appliance (AVNA) is a managed resource you can deploy within your hub VNet. It runs on specialised networking hardware, which gives it a very different performance profile from VM-based appliances.

What sets it apart from an NVA or Azure Firewall:

  • It’s a first-class Azure resource — managed like a VNet, NSG, or Route Table. No OS patching or image management.
  • It resides in a dedicated subnet known as VirtualNetworkApplianceSubnet.
  • It’s solely a forwarding layer — it routes traffic without any firewall policies, deep packet inspection, or NAT integration (though it can work alongside NAT Gateway).
  • High availability is standard, and it’s inherently resilient across availability zones — no load balancer is necessary. If one is added, it won’t function as you’d expect.
  • It supports NSGs, Admin rules, User Defined Routes (UDRs), and NAT Gateway out of the box.

Understanding the Architecture

In a typical hub-and-spoke configuration, each spoke VNet connects to the hub. You set up UDRs in your spoke subnets to direct east-west traffic towards the AVNA’s IP address as the next hop. The AVNA then handles the forwarding to the appropriate destination spoke. On-premises traffic continues through your hub’s VPN or ExpressRoute gateway, and internet traffic still passes through your NAT Gateway or firewall.


Fig 1 — Hub-and-spoke topology showcasing AVNA as the east-west routing layer


Setting Up the Routing

After deploying the AVNA, it’s crucial to determine how your spoke UDRs will be organised. You have two approaches:

Option 1 — Maintain Granularity

Create distinct routes for each spoke — cloud prefixes directed to the AVNA, on-premises traffic to the hub gateway, and internet traffic to egress. This offers maximum control but can become cumbersome in larger setups.

Option 2 — Route Everything through AVNA

Set a default route (0.0.0.0/0) in the spokes pointed at the AVNA, which then handles all routing — spoke-to-spoke traffic, on-premises traffic to the gateway, and internet egress. It’s a simple UDR configuration that minimises the risk of asymmetric routing. The downside is that you lose granular control per spoke.
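The two options can be sketched as route-table definitions. This is illustrative only — the AVNA IP and spoke prefixes are hypothetical, and the dictionary keys mirror the `routes` properties of an ARM route table (`nextHopType: VirtualAppliance` with a `nextHopIpAddress` is the standard way to point a UDR at an appliance):

```python
AVNA_IP = "10.0.1.4"  # hypothetical AVNA address in the hub
SPOKE_PREFIXES = ["10.1.0.0/16", "10.2.0.0/16", "10.3.0.0/16"]

def granular_routes(own_prefix):
    """Option 1: one explicit route per remote spoke, pointing at the AVNA.
    Maximum control, but the route count grows with every new spoke."""
    return [
        {"name": f"to-spoke-{p.split('.')[1]}",
         "addressPrefix": p,
         "nextHopType": "VirtualAppliance",
         "nextHopIpAddress": AVNA_IP}
        for p in SPOKE_PREFIXES if p != own_prefix
    ]

def default_route():
    """Option 2: a single 0.0.0.0/0 route -- the AVNA decides everything."""
    return [{"name": "default-via-avna",
             "addressPrefix": "0.0.0.0/0",
             "nextHopType": "VirtualAppliance",
             "nextHopIpAddress": AVNA_IP}]

print(len(granular_routes("10.1.0.0/16")))  # 2 routes for Spoke A
print(default_route()[0]["addressPrefix"])  # 0.0.0.0/0
```

Option 1 needs a route-table update for every spoke added to the topology; Option 2 never changes, which is a large part of its appeal at 100 spokes.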


Performance Insights


Fig 2 — Bandwidth tiers and their respective capacity limits for AVNA

Consider the difference: 200 Gbps and 8 million concurrent flows, versus Azure Firewall Premium’s 100 Gbps, or a typical NVA VM whose throughput is constrained by the VM SKU and capped at 250,000 active connections. For environments carrying high-volume east-west traffic, that significantly changes the conversation.

⚠️ Note: The capacity tier is determined during deployment and cannot be modified later without completely redeploying. It’s essential to select the appropriate size initially. As there’s no cost during the preview period, opting for 200 Gbps is sensible — however, upon general availability, precise calculations will be crucial.


What’s the Role of NSGs?

You can attach an NSG to the VirtualNetworkApplianceSubnet to implement basic Layer 4 filtering between spokes. However, since traffic both enters and exits through the same subnet, your rules must cover both the inbound and outbound directions of each flow. NSGs are a relatively blunt instrument here, best suited to spokes within the same security zone where only coarse least-privilege control is required. If stateful deep filtering is necessary, Azure Firewall or a third-party NVA remains the better choice.
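The "both directions" point is easy to get wrong in practice, so here is a small sketch that generates the matching rule pair for one spoke-to-spoke flow. The prefixes and priority are hypothetical, and the keys mirror ARM `securityRules` properties:

```python
def spoke_pair_rules(src_prefix, dst_prefix, port, base_priority=200):
    """Allow one spoke-to-spoke flow through the appliance subnet.
    Because the traffic both enters and leaves the
    VirtualNetworkApplianceSubnet, the same flow needs an Allow rule
    in BOTH the Inbound and Outbound directions."""
    common = {"protocol": "Tcp",
              "destinationPortRange": str(port),
              "access": "Allow",
              "sourceAddressPrefix": src_prefix,
              "destinationAddressPrefix": dst_prefix}
    inbound = {**common, "direction": "Inbound", "priority": base_priority}
    outbound = {**common, "direction": "Outbound", "priority": base_priority}
    return [inbound, outbound]

# HTTPS from Spoke A to Spoke B requires two rules, not one.
rules = spoke_pair_rules("10.1.0.0/16", "10.2.0.0/16", 443)
print([r["direction"] for r in rules])  # ['Inbound', 'Outbound']
```

Forgetting the outbound half is the classic failure mode: the packet is admitted into the subnet and then silently dropped on the way out.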


Preview Limitations — Transparency with Stakeholders is Key

| Limitation | Details |
| --- | --- |
| Production readiness | Currently in preview — intended for testing and evaluation purposes |
| Instances per subscription | Maximum of 2 (additional instances can be requested via form) |
| Throughput cap | Limited to 200 Gbps per instance |
| IPv6 support | Not available |
| Metrics and logging | Currently not exposed — this can hinder visibility in the preview phase |
| Tooling | No Azure CLI, PowerShell, or Terraform support as of yet |
| Private Endpoint support | Global and cross-region Private Endpoint functionality is not supported |
| Regions | East US, East US 2, West Central US, West US, North Europe, UK South, West Europe, East Asia |
| Cost | Free during the preview period |

The absence of metrics can be particularly challenging in practice. Without insights into traffic volumes, connection counts, or error rates, this setup may be suitable for lab testing, but it’s less ideal for developing a migration plan to general availability.


How to Gain Access

  1. Register your subscription for the preview feature flag: Microsoft.Network/AllowVirtualNetworkAppliance
  2. Complete the sign-up form available on the Microsoft Learn page
  3. Await approval from the product team

While it’s technically classified as a public preview, the approval process and the limited availability of regions make the experience feel more akin to a private preview. Expect some delay in receiving access.


Final Thoughts

Reflecting on my conversation with the client — we’re monitoring this development closely. The performance metrics appear promising, the architecture aligns well, and the thought of replacing a $1.75/hr firewall that primarily functioned as a router with a native Azure solution is precisely the kind of simplification their platform team has been advocating.

Before recommending this solution for production use, I would want to see the following:

  • Metrics and Logging — visibility into the traffic flow is vital
  • Terraform and CLI Support — managing infrastructure exclusively through the portal isn’t desirable
  • Clear pricing upon General Availability — the tier structure strongly implies tiered billing; the cost-benefit analysis needs clarity against alternatives
  • IPv6 Support — as dual-stack environments become more common

If you operate within a hub-and-spoke structure and your current routing solution feels more like a workaround, this is worth consideration for testing. Familiarising yourself with it now will prepare you for its general availability.
