Understanding VXLAN



In modern networks, Virtual eXtensible Local Area Network (VXLAN) has become a key technology enabler for advanced data center network designs such as those employed by cloud service providers and large enterprises. RFC 7348 is the IETF informational document that describes in detail the VXLAN encapsulation scheme.


Historically, the primary need for the introduction of a new encapsulation scheme originated in the data center where high server density on top of server virtualization placed increased demands on the available physical and logical resources of the network.


In particular, each virtual machine (VM) requires its own Media Access Control (MAC) address. Therefore, a highly scalable data center with N VMs on each of M servers would require MAC address tables with up to N x M entries, potentially larger than what a switched Ethernet network can accommodate.
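The scale of the problem is easy to see with some back-of-the-envelope arithmetic; the server and VM counts below are illustrative assumptions, not figures from this document:

```python
# Illustrative only: the server count and VMs-per-server figure are assumptions.
servers = 1000          # M physical servers in the data center
vms_per_server = 50     # N virtual machines hosted on each server

# Each VM needs its own MAC address, so switch forwarding tables must be
# able to hold up to N x M entries for VM traffic alone.
required_mac_entries = servers * vms_per_server
print(required_mac_entries)  # 50000
```

Even this modest example exceeds the MAC table capacity of many top-of-rack switches once non-VM entries are counted as well.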


In addition, bare metal servers as well as VMs in a data center need to be grouped and isolated according to various management and security policies; this is usually achieved by assigning them to Virtual LANs (VLANs). However, the 12-bit VLAN identifier limits the number of usable VLANs to 4094, which is inadequate for the largest network designs, especially when multi-tenancy and VLAN reuse are required.
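The numbers behind this limit, and the much larger identifier space that VXLAN's 24-bit VXLAN Network Identifier (VNI) defined in RFC 7348 provides, can be sketched as follows:

```python
# The IEEE 802.1Q VLAN ID field is 12 bits wide; values 0 and 4095 are
# reserved, leaving 4094 usable VLANs.
usable_vlans = 2**12 - 2
print(usable_vlans)       # 4094

# The VXLAN Network Identifier (VNI) from RFC 7348 is 24 bits wide,
# yielding roughly 16 million isolated segments.
vni_space = 2**24
print(vni_space)          # 16777216
```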


In fact, data centers are often required to host multiple tenants, which must have their own isolated management and data forwarding domains. This means that each tenant should be able to independently assign MAC addresses and VLAN IDs without causing resource conflicts in the physical network. In order to achieve this, an extra layer of network virtualization is required.


While various solutions to each of the aforementioned resource limitations have been proposed in the past, VXLAN has risen above the alternatives, thanks to its flexibility and broad software and hardware support, to become the solution of choice for addressing all of them. Moreover, its UDP-based encapsulation has been leveraged to enable novel capabilities as well as to replace legacy solutions in highly scalable network designs (as further discussed in the paragraphs below).


One important use case arises when virtualized environments require Layer 2 forwarding to scale across the entire data center, or even between data centers, for efficient allocation of compute, network and storage resources. A solution that enables a network to scale across data centers is often referred to as Data Center Interconnect (DCI) and is an important high-availability design.


In the DCI scenario, traditional approaches that rely on the Spanning Tree Protocol (STP) or an MPLS-based technology (such as VPLS) for a loop-free topology are not always optimal or flexible enough. In particular, network designers typically prefer an IP transport for the interconnection, redundancy and load balancing of resources and traffic.


Nonetheless, despite the preference for a Layer 3-based interconnect, in VM-based environments there is often a need to preserve Layer 2-based connectivity for inter-VM communication. How can both models coexist then?


VXLAN is the answer to this question: it employs a UDP-based header format which can be used to route traffic within a physical Layer 3 network (also called an underlay) while its encapsulation capabilities can seamlessly preserve and augment Layer 2’s inherent characteristics (such as MAC addresses and VLANs) and communication rules.
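Concretely, RFC 7348 defines an 8-byte VXLAN header that is prepended to the original Ethernet frame and carried inside UDP (destination port 4789 by convention). A minimal sketch of building that header, using only the layout from the RFC:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348.

    Layout: 1 byte of flags (0x08 = 'I' bit set, meaning the VNI field
    is valid), 3 reserved bytes, the 3-byte VNI, and 1 more reserved byte.
    """
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    # Pack flags + reserved as one 32-bit word, and (VNI << 8) as another.
    return struct.pack("!II", 0x08 << 24, vni << 8)

# A VXLAN packet is this header followed by the original Ethernet frame,
# all encapsulated in an outer UDP/IP/Ethernet envelope.
hdr = vxlan_header(5000)
print(hdr.hex())  # 0800000000138800
```

Because the outer envelope is plain UDP over IP, the underlay routes these packets like any other IP traffic, which is exactly what makes the Layer 2 overlay possible.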


From this point of view VXLAN marries the best of both worlds. Hence it can be used to implement a virtual Layer 2 network (a so-called Layer 2 overlay) on top of a traditional Layer 3 network design.


This capability is an ideal fit, for example, for DCI deployments where a Layer 2 overlay is required to carry Ethernet traffic to and from geographically dispersed VMs.


The approach of transporting Layer 2 traffic in a UDP-encapsulated format is akin to a logical ‘tunneling function’.


However, VXLAN’s informational RFC does not mandate any particular control-plane or data-plane scheme (such as a point-to-point tunnel model), nor does it limit the number of possible use cases. It simply illustrates how to overcome the limitations above, while suggesting one communication scheme for known unicast traffic and one for broadcast, unknown unicast and multicast traffic.
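The data-plane behavior the RFC suggests is often called 'flood and learn': a VXLAN tunnel endpoint (VTEP) learns which remote VTEP sits behind each source MAC address, and floods frames whose destination it has not yet learned (e.g., via the VNI's multicast group). A minimal sketch, with illustrative names that are not from any actual implementation:

```python
class Vtep:
    """Toy model of flood-and-learn forwarding on a VXLAN tunnel endpoint."""

    def __init__(self):
        self.mac_table = {}  # learned mapping: MAC address -> remote VTEP IP

    def learn(self, src_mac: str, remote_vtep_ip: str) -> None:
        # Data-plane learning: remember which VTEP the source MAC was seen behind.
        self.mac_table[src_mac] = remote_vtep_ip

    def next_hop(self, dst_mac: str) -> str:
        # Known unicast goes straight to the learned VTEP; broadcast and
        # unknown unicast are flooded to all VTEPs in the VNI.
        return self.mac_table.get(dst_mac, "flood")

vtep = Vtep()
vtep.learn("aa:bb:cc:00:00:01", "10.0.0.2")
print(vtep.next_hop("aa:bb:cc:00:00:01"))  # 10.0.0.2
print(vtep.next_hop("aa:bb:cc:00:00:99"))  # flood
```

This flooding behavior is precisely the aspect many vendors replace with a control-plane-driven alternative, as discussed below.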


In practical terms, then, VXLAN is primarily an encapsulation format. The way the control plane uses it varies from vendor to vendor, sometimes with fully proprietary implementations and sometimes with open software components based on common interoperability standards.


Despite this heterogeneity and flexibility of implementation, the VXLAN format has become widely popular: it is supported in hardware by most data center switches and it is also supported in software on virtualized servers running for example an open virtual switch.


Most hardware vendors, however, do not implement the entire RFC verbatim; in particular, they often depart from the ‘flood and learn’ data-plane mechanism it describes, because it is not deemed scalable enough.


The following sections will describe Pluribus Networks’ innovative VXLAN integration with the Unified Cloud Fabric (UCF, or simply the fabric), the related optimizations and use cases, and their configuration.

