Unlocking Azure SDN: Modern Networking for IT Pros
In this article, I’ll detail how Azure’s software-defined networking (SDN), specifically virtual networks, stands apart from the traditional cable-based networking commonly used in on-premises environments.
Background
What inspired this article? It’s a topic that’s been brewing for a while. Early in my journey with Azure, I noticed a pattern. Most professionals overseeing Azure infrastructure, like VMs, Azure SQL, App Services, or virtual networks, tended to come from a server admin background. Responsibilities like setting up virtual networks, configuring Network Security Groups, managing Azure Firewall, and controlling routes typically landed with them—not the traditional network admins.
The reason for this isn’t straightforward. Several years ago, during my Azure networking sessions, I started asking audiences who among them managed networking on-premises and who managed “something else” on-prem. I often commented, “Server admins usually understand these Azure networking concepts faster than network admins,” and there was always a wave of agreeing nods. Network admins often find Azure networking challenging because it operates fundamentally differently from the networking they’re accustomed to.
Cable-Defined Networking
On-premises networks are predominantly “cable-defined.” In other words, network traffic flows from source to destination relying on physical, often direct, connections:
- Devices like routers determine how data travels at key intersecting points
- Firewalls inspect and either allow or block packets
- Other devices may convert signals between electrical, optical, or radio formats
Connections are always present, usually as physical cables, providing a predictable path for data.
Look at any on-premises firewall illustration, and you’ll notice multiple Ethernet ports, each catering to a specific network segment:
- External
- Management
- Site-to-site connections
- DMZ
- Internal
- Secure zone
Each port links to a subnet for a particular network, with one or more switches connecting to all devices within that subnet. Switches then uplink to the appropriate firewall port, establishing clear security boundaries. For example, a server within the DMZ must route traffic through the firewall via a dedicated cable to communicate outside its subnet.
Basically, if there’s no cable, there’s no connection, making traffic patterns and security expectations easy to enforce—it’s all about linking or not linking cables for access and segmentation.
Software-Defined Networking
Cloud environments like Azure are designed for self-service. Imagine needing to submit a support ticket just to set up a new network or subnet in the cloud, then waiting days for operators to connect hardware or configure switches—this would feel more like early 2000s hosting, not modern cloud computing.
Azure’s SDN empowers users to instantly provision and configure networks using the Portal, automation scripts, Infrastructure-as-Code, or APIs. Whether it’s setting up a new subnet, firewall, WAF, or nearly any other network resource (apart from ExpressRoute circuits), you can do it without Microsoft staff involvement. Depending on the resource type, it’s ready to use in anywhere from a few seconds to around 45 minutes.
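As a quick illustration of that self-service model, here’s a minimal Azure PowerShell sketch that provisions a virtual network and subnet in one pass. The resource group, names, and address ranges are my own placeholders, not anything Azure prescribes:

```powershell
# Minimal sketch: provision a virtual network and subnet yourself, no ticket required.
# Assumes the Az module is installed and you've signed in with Connect-AzAccount.
# All names and address ranges below are illustrative placeholders.

$rg = New-AzResourceGroup -Name "rg-sdn-demo" -Location "westeurope"

# Define a subnet, then the virtual network that contains it
$subnet = New-AzVirtualNetworkSubnetConfig -Name "snet-app" -AddressPrefix "10.0.1.0/24"

New-AzVirtualNetwork -Name "vnet-demo" `
    -ResourceGroupName $rg.ResourceGroupName `
    -Location $rg.Location `
    -AddressPrefix "10.0.0.0/16" `
    -Subnet $subnet
```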
This agility is possible because Azure overlays its physical network with a VXLAN-based software-defined layer. You never interact with the underlying infrastructure directly. Instead, you define virtual networks, choosing your own addressing and topology, without concern for the physical data centre’s setup. Multiple tenants can use identical IP ranges like 10.x.x.x with no conflict, because the actual routing happens in Azure’s physical core network, hidden beneath the software abstraction and managed entirely by Microsoft.
A simple diagram illustrates this concept—I often reference it in my Azure networking talks.
Here, both source and destination systems operate in Azure. Key things to know:
- Almost everything in Azure, even “serverless” services, runs atop virtual machines. These VMs may be abstracted away, but they exist. There are some exceptions, like certain premium SAP offerings and Azure VMware Solution.
- All these VMs are hosted on Hyper-V, showcasing its remarkable scalability.
Let’s say the source needs to communicate with the destination. The source is on one Virtual Network with address 10.0.1.4; the destination is on another peered Virtual Network at 10.10.1.4. The guest OS on the VM sends the packet to its NIC, and at this point, Azure’s fabric takes over. It knows which host each VM is running on. To transfer the packet, Azure encapsulates it—essentially placing it in a new envelope with source and destination addresses set to the respective hosts, not the VMs themselves. This allows data to traverse Azure’s internal network securely and efficiently, regardless of overlapping customer address spaces. The original packet is extracted and delivered to the final VM NIC at the destination host.
This encapsulation is also why technologies like GRE tunnelling cannot be implemented directly within Azure networks.
Virtual Networks Explained
Azure’s SDN maintains detailed mappings between network interfaces (NICs) and their network memberships. When you create a virtual network, Azure builds a map identifying which NICs (either ones you set up or those managed by the platform) should be able to talk to each other. It also tracks which Hyper-V hosts these NICs belong to. The virtual network’s main function is to define and enforce which resources can communicate, which is crucial for maintaining isolation in a multi-tenant cloud environment.
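You can peek at a slice of that mapping yourself. The sketch below, reusing the placeholder names from the earlier example, lists each subnet in a virtual network along with the NIC IP configurations attached to it:

```powershell
# List each subnet in a virtual network and the NIC IP configurations it contains.
# "vnet-demo" and "rg-sdn-demo" are illustrative names.
$vnet = Get-AzVirtualNetwork -Name "vnet-demo" -ResourceGroupName "rg-sdn-demo"

foreach ($subnet in $vnet.Subnets) {
    Write-Output "Subnet: $($subnet.Name) ($($subnet.AddressPrefix))"
    foreach ($ipConfig in $subnet.IpConfigurations) {
        # Each entry is a reference to a NIC IP configuration that lives in this subnet
        Write-Output "  $($ipConfig.Id)"
    }
}
```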
Consider what happens when you peer two virtual networks. Does someone physically connect cables, or is there a virtual equivalent? Does peering introduce a new traffic bottleneck?
The answer lies in how Azure handles fabric mapping. When virtual networks are peered, Azure doesn’t create a fixed connection. Instead, it updates its internal mapping, similar to expanding the overlapping section of a Venn diagram. Now, the resources within both VNets can communicate directly, subject to their network interface capabilities; the slowest NIC (or the underlying VM’s performance features) determines the throughput, not a physical or virtual cable connection.
So, peering doesn’t insert a new link—rather, the communication permissions in the SDN map extend to include both virtual networks, allowing seamless but secure interaction between their resources.
In VNet1, resources can communicate directly with any endpoint in VNet2, and vice versa, using encapsulation and decapsulation without having to pass through a specific device or appliance in the virtual networks.
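To make that concrete, here’s a hedged sketch of what the peering step looks like in Azure PowerShell: no cabling, just two mapping updates, one in each direction. The VNet1 and VNet2 names and the resource group are assumptions:

```powershell
# Peering is just a mapping change, applied once in each direction.
# "VNet1", "VNet2", and "rg-sdn-demo" are assumed names.
$vnet1 = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "rg-sdn-demo"
$vnet2 = Get-AzVirtualNetwork -Name "VNet2" -ResourceGroupName "rg-sdn-demo"

Add-AzVirtualNetworkPeering -Name "VNet1-to-VNet2" `
    -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id

Add-AzVirtualNetworkPeering -Name "VNet2-to-VNet1" `
    -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id
```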
You may have already realized that in an Azure virtual network, you cannot ping the default gateway. This is because there’s no traditional hardware link to a central network device—there’s no physical appliance between subnets like in on-premises setups.
Similarly, network tools such as traceroute don’t provide much insight in Azure because there are no visible physical hops between resources. For diagnosing connections in Azure, tools like Test-NetConnection in PowerShell or Azure Network Watcher’s Connection Troubleshoot and Connection Monitor become essential.
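For example, here are a couple of checks I reach for instead of ping and traceroute. The VM name, resource group, and destination address are illustrative:

```powershell
# From inside a Windows VM: check TCP reachability instead of relying on ping
Test-NetConnection -ComputerName 10.10.1.4 -Port 443

# From outside, using Network Watcher to test connectivity from a source VM.
# Requires the Network Watcher agent extension on the source VM;
# the VM name, resource group, and destination address are illustrative.
$nw = Get-AzNetworkWatcher -Location "westeurope"
$vm = Get-AzVM -Name "vm-source" -ResourceGroupName "rg-sdn-demo"

Test-AzNetworkWatcherConnectivity -NetworkWatcher $nw `
    -SourceId $vm.Id `
    -DestinationAddress 10.10.1.4 `
    -DestinationPort 443
```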
Direct Connections
Now that you have a better understanding of what happens behind the scenes, let’s discuss what that means in practice. When data is sent from one point to another, there are actually no intermediate network hops. Take a look at the following illustration.

The diagram above is fairly common: On the left is an on-premises network connected to Azure virtual networks via a VPN tunnel. This tunnel ends at a VPN Gateway in Azure, which is deployed within a hub virtual network. The hub contains elements like a firewall, and application or data workloads are placed in spoke VNets that are linked or “peered” with the hub.
The firewall shown in the centre is clearly meant to secure the Azure networks from incoming threats on the on-premises side. On the surface, this setup appears to provide robust protection. But here’s where many run into trouble by assuming everything flows through the firewall automatically.
Let’s apply what we now know: The VPN Gateway actually consists of two Azure VMs. When a packet enters the tunnel, it lands on one of these VMs and is then forwarded to the relevant spoke VNet. But what path does it take? Even though the diagram shows a firewall, the actual packet is sent from the VPN Gateway’s NIC directly to the target NIC in the spoke VNet, bypassing the firewall entirely, almost as if it’s instantly transported.
To force traffic to flow through the security devices you intend, you must fully understand Azure routing and configure it using BGP or User-Defined Routes (UDRs) as needed.
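As a sketch of the UDR approach, the following sends all outbound traffic from a spoke subnet to a firewall’s private IP. The names, the 0.0.0.0/0 prefix choice, and the firewall IP are assumptions you’d adapt to your own design:

```powershell
# Build a route table with a default route pointing at the firewall's private IP,
# then attach it to the spoke subnet. Names and addresses are assumptions.
$rt = New-AzRouteTable -Name "rt-spoke" -ResourceGroupName "rg-sdn-demo" -Location "westeurope"

Add-AzRouteConfig -RouteTable $rt -Name "default-via-firewall" `
    -AddressPrefix "0.0.0.0/0" `
    -NextHopType VirtualAppliance `
    -NextHopIpAddress "10.0.0.4" | Set-AzRouteTable

# Associate the route table with the spoke subnet
$spoke = Get-AzVirtualNetwork -Name "vnet-spoke" -ResourceGroupName "rg-sdn-demo"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $spoke -Name "snet-app" `
    -AddressPrefix "10.10.1.0/24" -RouteTable $rt | Set-AzVirtualNetwork
```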
Now, check out this diagram showing a Palo Alto firewall appliance running in Azure.

Notice the multiple subnets present. Each subnet serves a distinct role, such as public, management, and VPN interfaces, and connects to various virtual NICs. But why are NICs split this way? There are no physical cables in Azure to govern how packets flow between different VNets or the DMZ; it’s Azure’s routing that determines if and when traffic is steered through the firewall. The multiple NICs on the firewall don’t provide isolation, and they don’t improve performance either, because VM size dictates throughput and speed. More NICs don’t mean better security or efficiency; they just add complexity.
The main purpose of all these NICs is to mimic eth0, eth1, etc., which the Palo Alto OS expects, ensuring the same software runs both on-premises and in its Azure Marketplace image. This is more about software compatibility for Palo Alto than about network security in Azure. By contrast, Azure Firewall presents a single IP via a Standard Load Balancer, with no compromise on security.
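If you want to see which path Azure will actually use from any given NIC, firewall appliance or not, the effective route table is the place to look. The NIC and resource group names here are placeholders:

```powershell
# Show the routes Azure will actually apply for a NIC (system routes plus any UDRs).
# The attached VM must be running; NIC and resource group names are placeholders.
Get-AzEffectiveRouteTable -NetworkInterfaceName "vm-source-nic" `
    -ResourceGroupName "rg-sdn-demo" |
    Format-Table AddressPrefix, NextHopType, NextHopIpAddress
```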
Wrapping Up
Having a clear grasp of Azure’s under-the-hood networking has proven invaluable on countless occasions. Understanding the real path packets take between source and destination empowers you to design, deploy, operate, and troubleshoot Azure networks far more effectively.