Rethinking the Network

Marten Terpstra




Red Sox, Pumpkins and Packet Encapsulation

Network virtualization allows the number of attached VLANs and networks to scale beyond what a single physical switch can handle

[This is not really about the Red Sox or pumpkins this Halloween, but how could I not use those in the title? Go Red Sox]

I left an awful teaser at the end of my article last week. In Brent Salisbury's original article that triggered some of these additional virtualization thoughts, he articulated two very clear differences between native network based L2 virtualization mechanisms and the mechanisms that are being provided by overlay solutions based mostly in server vSwitch infrastructure. These two fundamental functions are MAC learning and tunnel encapsulation. In today's post I will spend a little more time looking at encapsulation differences.

Beyond the logical separation of multiple virtual networks or tenants, network virtualization allows the number of attached VLANs, networks and devices to scale well beyond what a single physical switch can handle. In a traditional network, an edge switch only maintains tables for its local VLANs, ports and learned MAC addresses, but the core switches that connect these edge switches together have to maintain the union of all those edge tables, limiting the size of the network or forcing extremely large (read: expensive) core switches that can cope. In a virtualized network, a VLAN is no longer a unique identifier. The same VLAN can appear many times on different ports in the network, each mapped to a different virtual network or tenant.
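A toy model makes the scaling pressure concrete (the switch counts and table sizes here are made-up illustrative numbers, not from any particular product):

```python
# Toy model: each edge switch learns only its local MAC addresses, but a
# core switch that interconnects all the edges must hold the union of
# every edge table. Numbers below are purely illustrative.
edge_tables = {
    f"edge-{i}": {f"mac-{i}-{j}" for j in range(500)}  # 500 hosts per edge
    for i in range(40)                                  # 40 edge switches
}

# Any single edge switch only needs its own table...
print(len(edge_tables["edge-0"]))   # 500 entries

# ...but the core must know every MAC behind every edge.
core_table = set().union(*edge_tables.values())
print(len(core_table))              # 20000 entries
```

Forty modest edge switches already force twenty thousand entries into the core, which is exactly the union-of-all-tables effect described above.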

To achieve this scaling and re-use of VLAN space, original user traffic has a new header added to it, effectively hiding the original VLAN and the source and destination MAC addresses. Traffic is exchanged between the edge switches that serve the specific members of a virtual network, passing through the core of the network based on the new header alone. This re-encapsulation creates a tunnel: intermediate switches are oblivious to the original packet, and they scale because all they need to know is how to reach the edge switches.

Traditional L2 virtualization mechanisms use an extra L2 header for the tunnel, most commonly a MAC-in-MAC encapsulation. This encapsulation adds a new ethernet header onto the original packet, with the source edge switch as the source MAC, the destination edge switch as the destination MAC, and some other fields including an identifier for the virtual network. Switches in the core need only provide end-to-end ethernet service to the edge switches, since all they see are these newly ethernet-encapsulated packets. Their tables stay sparsely populated: the edge switch MAC addresses and the VLANs used for transport are the only bits needed for packet forwarding.
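A minimal sketch of what a MAC-in-MAC edge switch does, in the style of 802.1ah: prepend an outer ethernet header naming the edge switches plus a 24-bit service identifier (I-SID) for the virtual network. This is deliberately simplified; a real 802.1ah frame also carries a B-VLAN tag and I-tag flag bits, which are omitted here.

```python
import struct

def mac_bytes(mac: str) -> bytes:
    """Convert 'aa:bb:cc:dd:ee:ff' notation to 6 raw bytes."""
    return bytes(int(b, 16) for b in mac.split(":"))

def mac_in_mac(outer_dst: str, outer_src: str, isid: int,
               inner_frame: bytes) -> bytes:
    """Simplified MAC-in-MAC encapsulation: outer destination and source
    MACs are the edge switches, followed by the 802.1ah I-tag EtherType
    and a 24-bit I-SID identifying the virtual network."""
    header = mac_bytes(outer_dst) + mac_bytes(outer_src)
    header += struct.pack("!H", 0x88E7)       # 802.1ah I-tag EtherType
    header += struct.pack("!I", isid)[1:]     # 24-bit service identifier
    return header + inner_frame

# Original packet: inner dst MAC, inner src MAC, EtherType, payload.
inner = (mac_bytes("00:00:00:00:00:02") + mac_bytes("00:00:00:00:00:01")
         + b"\x08\x00" + b"payload")
frame = mac_in_mac("aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", 0x123456, inner)

print(len(frame) - len(inner))  # 17 bytes of new outer header in this sketch
```

The core forwards on the outer `aa:aa:...` addresses alone; the inner MACs and VLAN never enter its tables.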

And this is where overlay solutions have taken a step forward. Whether it's VXLAN, NVGRE or STT, all proposed or deployed overlay encapsulation mechanisms wrap the original packet into an IP packet. The tunnel between the edge switches is an IP/UDP connection (except for STT, which "uses" TCP, but really doesn't), and transport requires nothing more than regular IP connectivity between them. This certainly increases the size of the original packet by quite a bit: the edge switch adds a UDP header, an IP header and then (assuming it's still transported across an ethernet network) an ethernet header. But it creates an abstraction and a degree of freedom between the edge devices and the transport network that connects them.
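Taking VXLAN as the example, the per-packet overhead arithmetic and the tunnel header itself look like this (a sketch based on the VXLAN specification, assuming IPv4 and an ethernet transport network):

```python
import struct

# Per-packet overhead a VXLAN edge adds, assuming IPv4 transport over
# an ethernet network between the tunnel endpoints:
OUTER_ETHERNET = 14   # new ethernet header toward the transport network
OUTER_IPV4     = 20   # IP header between the tunnel endpoints
OUTER_UDP      = 8    # UDP header (well-known destination port 4789)
VXLAN_HEADER   = 8    # flags + 24-bit VNI + reserved fields

print(OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER)  # 50 bytes

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: I-flag set, then the 24-bit
    VXLAN Network Identifier followed by a reserved byte."""
    return struct.pack("!B3x", 0x08) + struct.pack("!I", vni << 8)

hdr = vxlan_header(5000)
print(len(hdr))  # 8
```

Fifty bytes per packet is a real cost, but the payoff is the point of the paragraph above: the tunnel endpoints only need IP reachability, not a shared broadcast domain.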

There is nothing wrong with ethernet-based encapsulations for a virtualized network. I have seen them work, and I have seen them work well. But when the virtualization edge becomes the vSwitch in a server, the sheer number of edge switches (vSwitches in this case) that need to be connected together using a single or small number of transport VLANs in an ethernet core may start to run into the scaling concerns for a single broadcast domain discussed last week. At hundreds of edge devices, some of the SPB or TRILL based solutions may become challenged; I bet almost all of them will when you start talking thousands of edge devices.

Besides the obvious ability to create tunnels between remote but IP-connected virtual network islands, having tunnels based on IP is an added convenience even when the network-based virtualization solution could be built on top of a single-VLAN ethernet network. An extra set of headers provides additional tables, additional lookup opportunities in hardware to send traffic where you want it sent: not the ECMP way, but in a controlled and carefully calculated way that satisfies the needs of the virtual network. And that makes VXLAN equally suited as a purely physical, network-based virtualization transport mechanism even when there is no overlay.

Rather than leaving a poor hint of what to expect, next week I will discuss the second major difference from Brent's original article: MAC learning in a virtualized network and the role of a controller in determining virtual network membership and location.

Happy Halloween!

[Today's fun fact: Lego produces more rubber tires than any other tire company in the world (381 million in 2011)]

The post Red Sox, Pumpkins and Packet Encapsulation appeared first on Plexxi.


More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.