

Govern Azure Networking Using Azure Virtual Network Manager
Aidan Finn, IT Pro

This article on Azure Virtual Network Manager is part of the online community event, Azure Back To School 2024. Here, I’ll explain how Azure Virtual Network Manager (AVNM) allows you to efficiently oversee a multitude of Azure virtual networks, regardless of whether your environment is fast-moving and agile or relatively static.

Common Challenges

Many organisations worldwide face a familiar challenge: managing countless networks that are constantly being created or removed is complex. If networking is managed centrally, the same tasks are repeated again and again. On the other hand, if developers or operators handle their own networks, enforcing governance and validation carries significant overhead.

You must ensure that connectivity and routing align with your organisation’s standards. Essential security policies have to be implemented, permitting only legitimate traffic and effectively blocking malicious or unnecessary flows.

Previously, when data centres had a handful of large, highly accessible subnets, management was straightforward. Moving to the cloud changes the game: we break large networks into smaller Azure virtual networks with micro-segmentation in mind. This layout creates clear governance paths and bolsters security, making it easier to ward off sophisticated threats. However, with cloud environments, the number of networks and subnets often rises rapidly, and every new network brings with it a need for ongoing management.

This is exactly the sort of scenario that Azure Virtual Network Manager was built to address.

What is Azure Virtual Network Manager?

AVNM isn’t particularly new, but it hasn’t been widely adopted just yet—more on why that is shortly. Spoiler: this could be about to change!

AVNM’s core purpose is to streamline the management of Azure virtual networks and introduce more governance. Note: AVNM doesn’t replace Azure Policy, but it works closely with it. The idea is to provide networking specialists with robust, dedicated controls, as opposed to relying solely on more generic (and sometimes cryptic) tools like Azure Policy, which can be tricky to diagnose.

Some key AVNM features that can help you include:

  • Network groups: Collect and organise virtual networks or subnets to be centrally managed.
  • Connectivity configurations: Define how multiple networks are linked together.
  • Security admin rules: Apply organisation-wide security rules that are enforced at the network interface, ahead of any NSG rules.
  • Routing configurations: Use policies to implement custom user-defined routes.
  • Verifier: Confirm that your networks allow the necessary communication flows.

How to Deploy AVNM

The steps are fairly straightforward:

  1. Create a Network Group to select which networks or subnets to include.
  2. Define a configuration, such as connectivity, security admin rules, or routing settings.
  3. Deploy that configuration to the target Network Group and one or more Azure regions.

Once your configuration is built, it rolls out to all networks within the chosen group(s) and region(s).
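
To make those three steps concrete, here is a minimal sketch using Python and the azure-mgmt-network SDK (Bicep, the CLI, or the portal work just as well). The subscription ID, resource group, names, and region are all placeholders, and the model names are as found in recent SDK versions, so check them against the version you install.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    NetworkGroup,
    NetworkManager,
    NetworkManagerCommit,
    NetworkManagerPropertiesNetworkManagerScopes,
)

SUB = "<subscription-id>"      # placeholder
RG = "rg-network-governance"   # placeholder resource group

client = NetworkManagementClient(DefaultAzureCredential(), SUB)

# Step 0: the network manager itself, scoped to one subscription and
# enabled for connectivity and security admin configurations.
client.network_managers.create_or_update(
    RG, "avnm-demo",
    NetworkManager(
        location="westeurope",
        network_manager_scopes=NetworkManagerPropertiesNetworkManagerScopes(
            subscriptions=[f"/subscriptions/{SUB}"]),
        network_manager_scope_accesses=["Connectivity", "SecurityAdmin"],
    ),
)

# Step 1: a network group; members join statically or via Azure Policy.
client.network_groups.create_or_update(
    RG, "avnm-demo", "ng-prod-spokes",
    NetworkGroup(description="Production spoke virtual networks"),
)

# Step 2 would define a configuration (see the sections below).

# Step 3: deploy ("commit") a configuration to one or more regions.
client.network_manager_commits.begin_post(
    RG, "avnm-demo",
    NetworkManagerCommit(
        target_locations=["westeurope"],
        configuration_ids=["<configuration-resource-id>"],  # placeholder
        commit_type="Connectivity",
    ),
).result()
```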

Understanding Network Groups

A critical feature for scalability with AVNM is the concept of network groups. You’ll likely establish multiple groups, each combining networks or subnets with similar configuration needs. This enables a single deployment to reach a broad set of relevant resources in one action.

There are two main types of Network Groups:

  • Static: You manually select the networks to be included. Suitable when targets are few and relatively stable.
  • Dynamic: Here, you define rules or queries—based on particular properties—to automatically collect present and future networks. Azure Policy drives this dynamic discovery, creating a rule that’s applied to the chosen scope.

Dynamic groups are typically preferred. For instance, in a governed setup, Azure resources are often tagged. You can create queries for virtual networks with selected tags in specific regions, so new resources are instantly picked up by relevant groups. When developers or operators spin up new networks, those resources are tagged through governance controls, and the policy auto-discovers them for AVNM. The relevant configuration will then be quickly and automatically applied—a seamless process!
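
To sketch how that tag-driven discovery is wired up: dynamic membership is an Azure Policy definition that uses the Microsoft.Network.Data mode and the addToNetworkGroup effect. The tag name, resource IDs, and policy name below are illustrative assumptions, using the azure-mgmt-resource PolicyClient.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyDefinition

SUB = "<subscription-id>"  # placeholder
NETWORK_GROUP_ID = (       # placeholder ID of the AVNM network group
    f"/subscriptions/{SUB}/resourceGroups/rg-network-governance"
    "/providers/Microsoft.Network/networkManagers/avnm-demo"
    "/networkGroups/ng-prod-spokes"
)

policy = PolicyClient(DefaultAzureCredential(), SUB)

# Any virtual network tagged env=prod joins the group automatically,
# including networks created in the future.
policy.policy_definitions.create_or_update(
    "avnm-ng-prod-spokes-membership",
    PolicyDefinition(
        policy_type="Custom",
        mode="Microsoft.Network.Data",  # the mode AVNM membership policies use
        display_name="Add tagged VNets to ng-prod-spokes",
        policy_rule={
            "if": {
                "allOf": [
                    {"field": "type",
                     "equals": "Microsoft.Network/virtualNetworks"},
                    {"field": "tags['env']", "equals": "prod"},
                ]
            },
            "then": {
                "effect": "addToNetworkGroup",
                "details": {"networkGroupId": NETWORK_GROUP_ID},
            },
        },
    ),
)
# The definition still needs a policy assignment at the target scope
# before evaluation begins.
```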

Configuring Connectivity

It’s important to note: virtual network peering isn’t a physical wire or pipe. Instead, it simply tells Azure’s networking system, “allow this group of network interfaces to communicate with those ones.”

Organisations often want to standardise or automate network links. Automation through infrastructure code can scale, but in fast-changing environments it is only effective where there is clear trust and governance. Human error is the most common cause of problems, so automating network connectivity, especially where integration, security, or compliance is involved, is vital.

With Connectivity Configurations, you can support three network topologies:

  • Hub-and-spoke: The most popular enterprise design. A central hub exists for security and connectivity. Workloads and data reside in “spokes,” only connecting with the hub (forming the environment’s backbone). Traffic leaving a spoke typically passes through a firewall or network appliance in the hub.
  • Full mesh: Every network is directly connected to all the others.
  • Hub-and-spoke plus mesh: Spokes connect to the hub, and also directly to each other. Any external-facing traffic goes via the hub, while intra-spoke traffic flows directly between spokes.

The mesh approach is intriguing—when is it used? Generally, it’s avoided due to security best practices which favour firewalls in the hub for micro-segmentation and intrusion prevention. Nevertheless, there are occasions when business needs—like very low latency between specific systems—take precedence over absolute security. In these cases, if routing through a firewall doubles latency and isn’t needed for the traffic type, a full mesh might be justified.

This also explains why network peering was discussed above. Because peering is more a policy setting than an actual link, what matters for latency is the physical (not virtual) proximity of two resources, whether they sit on the same network or on separate ones.

Furthermore, mesh connections on AVNM don’t actually rely on traditional peering. Instead, they use something called Connected Groups, which enable many-to-many interconnections without having to individually peer every pair of networks.

A particularly helpful option in these configurations is the ability to remove existing peerings. You can tidy up legacy network configurations by automatically deleting old peering connections, helping you start fresh and avoid the clutter of previous designs.
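
Putting the pieces together, here is a hedged sketch of a hub-and-spoke connectivity configuration for a network group, including that peering clean-up option (delete_existing_peering). The hub VNet ID and other names are placeholders, and the enum values are passed as the strings recent azure-mgmt-network versions expect.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    ConnectivityConfiguration, ConnectivityGroupItem, Hub,
)

SUB, RG, NM = "<subscription-id>", "rg-network-governance", "avnm-demo"
HUB_VNET_ID = "<hub-vnet-resource-id>"            # placeholder
NETWORK_GROUP_ID = "<network-group-resource-id>"  # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUB)

# Spokes peer only with the hub. For hub-and-spoke plus mesh, set
# group_connectivity="DirectlyConnected" instead of "None".
client.connectivity_configurations.create_or_update(
    RG, NM, "cc-hub-and-spoke",
    ConnectivityConfiguration(
        connectivity_topology="HubAndSpoke",
        hubs=[Hub(resource_id=HUB_VNET_ID,
                  resource_type="Microsoft.Network/virtualNetworks")],
        applies_to_groups=[ConnectivityGroupItem(
            network_group_id=NETWORK_GROUP_ID,
            group_connectivity="None",   # spoke-to-spoke only via the hub
            use_hub_gateway="False",
        )],
        delete_existing_peering="True",  # clean out legacy peerings
    ),
)
# Remember: nothing takes effect until the configuration is committed
# to the target regions.
```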

Security Admin Rules

What exactly is a Network Security Group (NSG) rule? In essence, it’s an Access Control List (ACL) applied at the network interface of your virtual machine, whether it’s part of your environment or a hosted platform service. The association to a subnet or NIC is more about scaling and targeting, but the actual enforcement takes place at the VM’s NIC connected to the virtual switch.

Scalability is a challenge with NSGs. If you need to deploy a new rule across every subnet or NIC, you’ll face a lot of manual changes and spend unnecessary time sorting rule priorities to guarantee your desired rule is enforced first.

Security Admin Rules use the same port-based ACL mechanism, but with the advantage of always being evaluated before any other rules. You can define individual rules or sets of rules, apply them to a Network Group, and every associated NIC will be updated accordingly, with your rules taking precedence.
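
As an illustration, here is a minimal sketch that denies inbound RDP from the internet across an entire network group. It assumes the security admin operations found in recent azure-mgmt-network versions; all names and IDs are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    AddressPrefixItem, AdminRule, AdminRuleCollection,
    NetworkManagerSecurityGroupItem, SecurityAdminConfiguration,
)

SUB, RG, NM = "<subscription-id>", "rg-network-governance", "avnm-demo"
NETWORK_GROUP_ID = "<network-group-resource-id>"  # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUB)

# A configuration holds rule collections; a collection targets groups.
client.security_admin_configurations.create_or_update(
    RG, NM, "sac-baseline",
    SecurityAdminConfiguration(description="Organisation-wide baseline"),
)
client.admin_rule_collections.create_or_update(
    RG, NM, "sac-baseline", "rc-deny-mgmt",
    AdminRuleCollection(applies_to_groups=[
        NetworkManagerSecurityGroupItem(network_group_id=NETWORK_GROUP_ID)]),
)

# Evaluated on every NIC in the group, before any NSG rules.
client.admin_rules.create_or_update(
    RG, NM, "sac-baseline", "rc-deny-mgmt", "deny-rdp-inbound",
    AdminRule(
        priority=100,
        direction="Inbound",
        access="Deny",
        protocol="Tcp",
        sources=[AddressPrefixItem(address_prefix="Internet",
                                   address_prefix_type="ServiceTag")],
        destinations=[AddressPrefixItem(address_prefix="*",
                                        address_prefix_type="IPPrefix")],
        source_port_ranges=["0-65535"],
        destination_port_ranges=["3389"],
    ),
)
```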

Advice: If you need to troubleshoot Security Admin Rules, try enabling VNet Flow Logs for insights.

Routing Configurations

Routing Configurations is one of the more recent additions to AVNM, and its absence had previously been a roadblock for many. In secure environments, you often need to ensure traffic is sent from spokes to a central firewall in the hub. Traditionally, this meant creating a user-defined route (UDR) in every subnet, a method that doesn’t scale and depends too much on trust. Some turned to BGP routing, though that’s costly and difficult to get right, and can still be overridden.
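
For contrast, this is the traditional approach that doesn’t scale: building a route table that forces all egress through a hub firewall, then associating it with each subnet by hand. A minimal sketch with placeholder names, using the long-stable route table API.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Route, RouteTable

SUB, RG = "<subscription-id>", "rg-spoke-01"  # placeholders
FIREWALL_IP = "10.0.1.4"  # placeholder private IP of the hub firewall

client = NetworkManagementClient(DefaultAzureCredential(), SUB)

# One route table sending everything to the firewall...
route_table = client.route_tables.begin_create_or_update(
    RG, "rt-force-firewall",
    RouteTable(
        location="westeurope",
        routes=[Route(
            name="default-via-firewall",
            address_prefix="0.0.0.0/0",
            next_hop_type="VirtualAppliance",
            next_hop_ip_address=FIREWALL_IP,
        )],
    ),
).result()

# ...which you must then associate with every subnet in every spoke.
subnet = client.subnets.get(RG, "vnet-spoke-01", "snet-app")
subnet.route_table = route_table
client.subnets.begin_create_or_update(
    RG, "vnet-spoke-01", "snet-app", subnet).result()
```

Multiply that final association step across dozens of spokes and hundreds of subnets and the scaling problem is obvious.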

With AVNM, there’s now a preview feature that lets you centrally manage UDRs and assign them to Network Groups in just a few steps. You can decide on the level of detail for your Route Tables:

  • A single table shared by multiple virtual networks.
  • A table common to all subnets within one virtual network.
  • An individual table for each subnet.

Verification

The Verification feature is slightly mysterious—I’m not sure if it’s laying the groundwork for something bigger to come. Essentially, it lets you test your configurations to verify they’re set up as you intended. That said, its function overlaps a lot with Network Watcher and, similarly, it only supports virtual machines at present.

What’s The Bad News?

Once routing configurations become generally available, AVNM would be my preferred choice for every deployment. However, there’s a significant hurdle: the cost. Currently, AVNM is billed at $73 per subscription each month. If you manage a few subscriptions, that’s manageable. But if you use numerous subscriptions as natural divisions for governance, as the Microsoft Cloud Adoption Framework encourages, the fees really add up: an environment with 30 subscriptions would pay roughly $2,190 per month, over $26,000 per year, potentially making AVNM the priciest part of your Azure environment!

The positive news is that Microsoft seems to have acknowledged this feedback, and members of Azure’s networking team have indicated publicly that they’re considering changes to AVNM’s pricing model.

Another longstanding issue involves Azure Policy. Dynamic membership for network groups is governed by Azure Policy, so if a developer sets up a new virtual network, it might take hours for Policy to detect it and inform AVNM. In my own tests, once AVNM becomes aware, it acts straight away, but the delay from Azure Policy can introduce a window where less-than-ideal practices slip through temporarily.

Summary

I was initially sceptical about AVNM. However, the updates that have been released and the features in development have changed my view. Pricing is currently the major sticking point, but I believe serious efforts are being made to resolve this. Previously, I mentioned there hasn’t been widespread adoption of AVNM—I expect this will change as soon as costs are addressed and routing configurations are fully available.

At a recent conference, I showcased how AVNM can be used to rapidly set up hub-and-spoke networks with micro-segmentation. Using the Azure Portal, the whole process took under 10 minutes. Just think—not only your current security posture, but your future state too, sorted in only 10 minutes.
