Configuring VXLANs

About VXLANs

Netvisor provides traditional network segmentation using Virtual Local Area Networks (VLANs), standardized by the IEEE 802.1Q working group. VLANs provide logical segmentation of the network into Layer 2 broadcast domains. However, VLANs make less than optimal use of available network links, impose rigid requirements on the placement of devices in the network, and scale only to a maximum of 4096 VLANs. These constraints become limiting factors when building large multi-tenant data centers.

The Virtual Extensible LAN (VXLAN) design provides the same Ethernet Layer 2 network services as VLANs, but with greater extensibility and flexibility.

As a Layer 2 overlay scheme over a Layer 3 network, VXLAN uses MAC Address-in-User Datagram Protocol (MAC-in-UDP) encapsulation to extend Layer 2 segments across the data center network. VXLAN supports a flexible, large-scale multi-tenant environment over a shared physical infrastructure, using IP plus UDP as the transport protocol over the physical data center network.
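The MAC-in-UDP encapsulation described above wraps the original Ethernet frame behind an 8-byte VXLAN header. A minimal sketch of that header layout (per RFC 7348; this is an illustrative model, not Netvisor code):

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags(1) + reserved(3) + VNI(3) + reserved(1)."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit field")
    return struct.pack("!B3s3sB", VXLAN_FLAG_VNI_VALID, b"\x00\x00\x00",
                       vni.to_bytes(3, "big"), 0)

def parse_vni(header: bytes) -> int:
    """Recover the 24-bit VNI from a VXLAN header."""
    return int.from_bytes(header[4:7], "big")

# Full encapsulation order: outer Ethernet / outer IP / UDP / VXLAN / inner Ethernet frame.
hdr = vxlan_header(14593470)
assert parse_vni(hdr) == 14593470
```

The outer IP and UDP headers (added in front of this header) are what allow the Layer 2 segment to ride over the routed underlay.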

Netvisor supports VXLANs on non-redundant and redundant spine-leaf topologies. At a high level, VXLAN configuration involves five major steps, in addition to VLAN, trunk, vLAG, and vRouter configuration as needed.

Configuring VXLANs

1. Configure underlay vRouter interfaces:

a. Add vRouter and add vRouter interfaces for each VTEP.

CLI network-admin@switch > vrouter-create name <vr-name> vnet <vnet-name> router-type hardware hw-vrrp-id <id>

CLI network-admin@switch > vrouter-interface-add vrouter-name <vr-name> ip <network/netmask> vlan <y> if data mtu <mtu>

b. Configure VIP for redundant VTEPs.

CLI network-admin@switch > vrouter-interface-add vrouter-name <vr-name> ip <network/netmask> vlan <y> if data vrrp-id <id> vrrp-primary <ethz.y> mtu <mtu>

2. Optionally, add ports to vxlan-loopback-trunk:

CLI network-admin@switch > trunk-modify name vxlan-loopback-trunk ports <list of ports>

3. Configure tunnels:

On non-redundant switches, configure a tunnel with scope local; on redundant switches, create a tunnel with scope cluster.

CLI network-admin@switch > tunnel-create name <tunnel-name> local-ip <ip1> remote-ip <ip2> scope local vrouter-name <vr-name>

CLI network-admin@switch > tunnel-create name <tunnel-name> local-ip <ip1> remote-ip <ip2> scope cluster vrouter-name <vr-name> peer-vrouter-name <peer-vr-name>

4. Configure overlay:

Create a mapping between the VXLAN VNID and VLANs on the respective switches:

CLI network-admin@switch > vlan-create id <vlan-id> scope <scope> vxlan <vnid>

 

5. Add VNIDs to tunnels:

This mapping allows traffic on the mapped VLANs to be carried over the VXLAN tunnel.

CLI network-admin@switch > tunnel-vxlan-add name <tunnel-name> vxlan <vnid>

In order to carry Layer 2 broadcast, unknown unicast, and multicast (BUM) traffic over VXLAN tunnels on Netvisor OS switches, you must configure one physical port to recirculate packets and perform head-end replication. Configure a front panel port based on the hardware architecture of the switch. Depending on the amount of BUM traffic, use either a 10G port or a 40G port.
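Head-end replication as described above can be modeled simply: for a BUM frame, the ingress VTEP makes one unicast-encapsulated copy per remote VTEP participating in the VNID. A hypothetical sketch (names are illustrative):

```python
def head_end_replicate(bum_frame: bytes, remote_vteps: list[str]) -> list[tuple[str, bytes]]:
    """Replicate a broadcast/unknown-unicast/multicast (BUM) frame to every
    remote VTEP in the VNID; each copy is sent as a separate unicast tunnel packet."""
    return [(vtep_ip, bum_frame) for vtep_ip in remote_vteps]

# One broadcast frame becomes two tunnel packets when two remote VTEPs exist.
copies = head_end_replicate(b"\xff" * 64, ["192.168.5.1", "192.168.6.1"])
assert len(copies) == 2
```

On the switch hardware, this replication is what the recirculation port (the vxlan-loopback-trunk) makes possible.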

For monitoring VXLAN specific states and statistics, use the following commands:

vlan-show — displays the VXLAN ID associated with the VLAN ID.

tunnel-show — displays the configured tunnels and their state.

trunk-show — displays the port used for BUM traffic recirculation.

ports-stats-show — displays statistics for each port.

tunnel-stats-show — displays statistics for each tunnel.

vxlan-stats-show — displays statistics for each VXLAN ID.

Configuring VXLANs and Tunnels


 

Informational Note: VXLAN encapsulated packets are recirculated using hardware features, not software.


In today’s virtualized environments, networks place an increasing demand on MAC address tables of switches that connect to servers. Instead of learning one MAC address per server link, the switch now learns the MAC addresses of individual VMs, and if the MAC address table overflows, the switch may stop learning new MAC addresses until idle entries age out.

Virtual Extensible LAN (VXLAN) is essentially a Layer 2 overlay scheme over a Layer 3 network; each overlay is termed a VXLAN segment. Only VMs within the same VXLAN segment communicate with each other. Netvisor identifies each VXLAN segment by a 24-bit segment ID called the VXLAN Network Identifier (VNI).

VXLANs increase the scalability of a network to up to 16 million logical networks and contain broadcast, multicast, and unknown unicast traffic within each segment.
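The scale numbers quoted above follow directly from the field widths: the 12-bit VLAN ID yields 4096 segments, while the 24-bit VNI yields roughly 16 million.

```python
vlan_id_bits = 12   # IEEE 802.1Q VLAN ID field
vni_bits = 24       # VXLAN Network Identifier field

assert 2 ** vlan_id_bits == 4096          # the VLAN ceiling mentioned earlier
assert 2 ** vni_bits == 16_777_216        # "up to 16 million logical networks"
```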

Because of this encapsulation, VXLAN can also be described as a tunneling scheme that overlays Layer 2 networks on top of Layer 3 networks. However, when the tunnel does not terminate on the switch, the switch sits in the middle of the tunnel and identifies packets as L3 tunneled packets. Netvisor then forwards the packets using L2 or L3 forwarding.

Pluribus Networks supports two scenarios for VXLAN:

1. The tunnel does not terminate on the switch, and the switch does not act as a VTEP. Though the switch does not participate in the creation of the tunnel, Netvisor ONE still performs the following tasks:

a. Analytics Collection — Netvisor captures all TCP control packets as well as ARP packets traversing the tunnel, and uses these packets to build connection statistics and provide visibility into which VXLAN nodes are on specific ports.

b. ARP Optimization — Netvisor captures an ARP request and, if a matching Layer 2 entry exists in the switch's Layer 2 table, sends a response back to the sender of the ARP request over the tunnel. Otherwise, the ARP request is re-injected into the tunnel without modification and continues across the tunnel.

2. The tunnels terminate at the switch, and the switch performs the role of a VTEP. In this scenario, the responsible switch encapsulates packets arriving from non-VXLAN nodes on a Layer 2 network and transmits them over the tunnel. Similarly, the switch decapsulates packets arriving through the tunnel and forwards the inner packet over the Layer 2 network. The switch also collects statistics and optimizes ARP requests as in the first scenario.
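The ARP optimization in the first scenario amounts to a proxy lookup in the switch's Layer 2 table; a minimal sketch (the table layout and function names here are hypothetical):

```python
# Maps a (VLAN, IP) pair to the learned MAC address — a stand-in for the switch L2 table.
l2_table = {(10, "192.168.0.50"): "00:0e:94:b9:ae:b0"}

def handle_arp_request(vlan: int, target_ip: str):
    """If the target is already in the L2 table, answer the ARP request locally;
    otherwise re-inject the request into the tunnel unmodified."""
    mac = l2_table.get((vlan, target_ip))
    if mac is not None:
        return ("reply", mac)    # switch responds on behalf of the target
    return ("forward", None)     # request continues across the tunnel

assert handle_arp_request(10, "192.168.0.50") == ("reply", "00:0e:94:b9:ae:b0")
```

Suppressing the request locally avoids flooding the ARP broadcast across the tunnel when the answer is already known.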

 

Informational Note:

Netvisor supports a one-to-one mapping of VXLAN to VLAN. Netvisor does not support multicast traffic. VXLANs use scope local on all switches, and the tunnel endpoints must be in the same subnet.

Configuring a VXLAN with Netvisor ONE

The first scenario requires no additional configuration. The second scenario requires the following steps, in order:

1. Create a hardware vRouter.

2. Add interfaces to the vRouter, one per tunnel. The tunnel endpoint IP address must be routable.

3. Create one or more tunnels.

4. Create the VXLAN with the VNI, and add the tunnels created in the previous steps.

To create a VXLAN named vx-seg1 with the VNID 25 and scope fabric, and to turn off deep inspection, use the following syntax:

CLI network-admin@switch > vxlan-create name vx-seg1 vnid 25 scope fabric deep-inspection no

To delete a VXLAN, use the vxlan-delete command.

To display information about VXLANs, use the vxlan-show command.

If you added a port to the VXLAN configuration, use the vxlan-port-remove command to remove it.

Configuration Example

The following example assumes that one VTEP resides on a generic switch and the other VTEP resides on a Pluribus Networks switch. The nodes connect over an L3 IP network, and the tunnel forms between the generic switch and the Pluribus Networks switch.

Figure 5: VTEP Generic Switch-VTEP Pluribus Networks Switch

The example also includes VLAN 10 and port 47 on Host2 as well as the VNET fab-global.

1. Create the vRouter using the vrouter-create command:

vrouter-create name vx-vrouter vnet fab-global router-type hardware

2. Add the vRouter interface:

vrouter-interface-add vrouter-name vx-vrouter ip 192.168.0.1 netmask 255.255.255.0 vlan 10

3. Create the tunnel:

tunnel-create name vx-tunnel scope local local-ip 192.168.0.1 remote-ip 192.168.5.1 next-hop 192.168.0.2 next-hop-mac 00:01:02:03:04:05 router-if vx-vrouter.eth0

4. Create the VXLAN:

vxlan-create vnid 14593470 scope local name vxlan1 vlan 10

If VLAN 10 does not exist, then the vxlan-create command creates the VLAN on the switch, but you may need to add local ports to the VLAN.

5. Add port 47 to the VXLAN:

vxlan-port-add vxlan-name vxlan1 ports 47

The configuration associates all packets from port 47 on VLAN 10 with the VXLAN ID, 14593470.

6. Add the tunnel to the VXLAN:

vxlan-tunnel-add vxlan-name vxlan1 tunnel-name vx-tunnel

To display the configuration, use the vxlan-show command.

You cannot configure different VLANs for the tunnel and the local hosts, and you cannot associate different VLANs on different ports for the same VXLAN.

VXLAN Head End Replication Counters

This feature adds support for collecting statistics for head-end replication (HER) packets sent to tunnels on VXLAN VLANs, and enhances the tunnel statistics output to display a counter for head-end replicated packets. When broadcast, unknown unicast, or unknown multicast traffic floods a VXLAN VLAN, the traffic is head-end replicated to the tunnels that are part of the VXLAN. Previously, Netvisor OS used the vxlan-loopback-trunk port statistics counters for HER packet statistics; because all tunnels share the same vxlan-loopback-trunk ports, this provided only a cumulative, system-wide view. This feature adds HER packet counters per tunnel.

Netvisor attaches statistical objects and flexible counters to the multicast VPs created for head-end replication for each tunnel. Packet and byte counts for HER statistics display per tunnel. Two new fields, HER-pkts and HER-bytes, are added to the tunnel-stats-show output and provide flood packet counters per tunnel. Netvisor updates tunnels local to each node, so there is no impact to High Availability (HA).

CLI (network-admin@Spine1) > tunnel-stats-show show-diff-interval 1 format all


 

Creating Tunnels

You create tunnels to encapsulate protocols on the network. You can create tunnels for IP-in-IP, VXLAN, and NVGRE network traffic. However, Netvisor supports tunnels with local scope only and does not use any discovery mechanism.

The IP-in-IP protocol encapsulates an IP header with an outer IP header for tunneling. The outer IP header source and destination addresses identify the endpoints of the tunnel, while the inner IP header source and destination addresses identify the original sender and recipient of the datagram.
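IP-in-IP as described above simply nests one IPv4 header inside another; the outer header carries protocol number 4 (IPIP) to signal the nesting. A sketch using the standard 20-byte header layout (checksums left zero purely for illustration):

```python
import struct

def ipv4_header(src: str, dst: str, proto: int, payload_len: int) -> bytes:
    """Minimal IPv4 header (no options; checksum left 0 for illustration)."""
    ver_ihl = (4 << 4) | 5                      # version 4, header length 5 words
    total_len = 20 + payload_len
    addrs = [bytes(int(o) for o in ip.split(".")) for ip in (src, dst)]
    return struct.pack("!BBHHHBBH4s4s", ver_ihl, 0, total_len, 0, 0,
                       64, proto, 0, addrs[0], addrs[1])

# Inner header: original sender and recipient (proto 6 = TCP payload follows).
inner = ipv4_header("10.0.0.1", "10.0.0.2", 6, 0)
# Outer header: tunnel endpoints (proto 4 = IP-in-IP).
outer = ipv4_header("192.168.100.35", "192.168.200.35", 4, len(inner))
packet = outer + inner
assert len(packet) == 40
```

The tunnel endpoint addresses in the outer header are exactly what the tunnel-create command's local-ip and remote-ip parameters configure.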

In addition to the IP header and the VXLAN header, the VTEP also inserts a UDP header. During ECMP, the switch includes this UDP header when performing the hash function. The VTEP calculates the source port by hashing the inner Ethernet frame's header. The destination UDP port is the well-known VXLAN port (4789).

The outer IP header contains the Source IP address of the VTEP performing the encapsulation. The remote VTEP IP address or the IP Multicast group address provides the destination IP address.
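The source-port computation described above gives underlay ECMP something to vary on: the destination port is fixed, so hashing the inner frame's addresses into the UDP source port spreads flows across paths. A sketch (the hash function here is illustrative, not the hardware's):

```python
import zlib

VXLAN_UDP_DST_PORT = 4789  # IANA-assigned VXLAN port

def vxlan_src_port(inner_src_mac: bytes, inner_dst_mac: bytes) -> int:
    """Derive the outer UDP source port from a hash of the inner Ethernet
    header, constrained to the ephemeral range as RFC 7348 recommends."""
    h = zlib.crc32(inner_src_mac + inner_dst_mac)
    return 49152 + (h % 16384)  # 49152..65535

port = vxlan_src_port(b"\x00\x0e\x94\xb9\xae\xb0", b"\x00\x12\xc0\x88\x07\x75")
assert 49152 <= port <= 65535
```

Because the hash is deterministic, all packets of one inner flow keep the same outer source port and therefore the same underlay path.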

Network Virtualization using Generic Routing Encapsulation (NVGRE) uses GRE to tunnel Layer 2 packets over Layer 3 networks. NVGRE is similar to VXLAN, but it does not rely on IP multicast for address learning.

To create a tunnel named ipinip for IP-in-IP traffic, with local IP address 192.168.100.35, use the following syntax:

CLI network-admin@switch > tunnel-create scope local name ipinip type ip-in-ip local-ip 192.168.100.35 router-if vrouter-hw-if eth0.0

To remove a tunnel, use the tunnel-delete command.

To modify a tunnel, use the tunnel-modify command.

Logging Changes to Tunnel States

This feature enables you to log tunnel state changes so that you can view historical tunnel state data for debugging purposes. Tunnel state changes are recorded in the tunnel history.

Tunnel History Commands

To update tunnel history settings, use the tunnel-history-settings-modify command:

CLI network-admin@switch > tunnel-history-settings-modify enable|disable disk-space disk-space-number log-file-count 1..20

To display tunnel history settings, use the tunnel-history-settings-show command:

CLI network-admin@switch > tunnel-history-settings-show

To display tunnel state history, use the tunnel-history-show command:

CLI network-admin@switch > tunnel-history-show

Egress ECMP Load Distribution for VXLAN Traffic from the VTEP Switch

Equal-cost multipath routing (ECMP) is a routing strategy in which next-hop packet forwarding to a single destination can occur over multiple best paths. Tunnel next hops are updated based on underlay routing information: Netvisor leverages RIB/FIB information to program the next hops for a tunnel remote endpoint. If multiple next hops exist for a tunnel remote endpoint, an ECMP group is built from the list of next hops and the tunnel is programmed accordingly.

For example, for a tunnel leaf1toleaf2 with remote IP address 32.4.11.1, Netvisor displays two next hops, 192.178.0.6 and 192.178.0.2, and hashes the traffic going over tunnel leaf1toleaf2 across these two next-hop links.
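The hashing described above can be modeled as picking one member of the ECMP group per flow; a sketch using the two next hops from the example (the flow-key format and hash are illustrative):

```python
import zlib

next_hops = ["192.178.0.6", "192.178.0.2"]  # ECMP group for 32.4.11.0/24

def pick_next_hop(flow_key: bytes) -> str:
    """Hash the flow key (e.g. the outer 5-tuple) onto one ECMP member; the
    same flow always maps to the same link, preserving packet order."""
    return next_hops[zlib.crc32(flow_key) % len(next_hops)]

chosen = pick_next_hop(b"22.3.11.1->32.4.11.1:udp:4789:54321")
assert chosen in next_hops
```

Different flows land on different links, so aggregate tunnel traffic is spread across both paths while any single flow stays on one.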

CLI (network-admin@lleaf11) > tunnel-show

scope:                           cluster

name:                            leaf1toleaf2

type:                            vxlan

vrouter-name:                    leafpst1

peer-vrouter-name:               leafpst2

local-ip:                        22.3.11.1

remote-ip:                       32.4.11.1

router-if:                       eth12.11

next-hop:                        192.178.0.6

next-hop-mac:                    66:0e:94:8c:d4:0f

nexthop-vlan:                    4091

remote-switch:                   0

active:                          yes

state:                           ok

error:                           

route-info:                      32.4.11.0/24

scope:                           

name:                            

type:                            

vrouter-name:                    

peer-vrouter-name:               

local-ip:                        

remote-ip:                       

router-if:                       

next-hop:                        192.178.0.2

next-hop-mac:                    66:0e:94:5b:90:2b

nexthop-vlan:                    

remote-switch:                   4092

active:                          0

state:                           ok

error:                           

route-info:                      32.4.11.0/24

scope:                           cluster

name:                            leaf1toleaf2-2nd

type:                            vxlan

vrouter-name:                    leafpst1

peer-vrouter-name:               leafpst2

local-ip:                        22.3.12.1

remote-ip:                       32.4.12.1

router-if:                       eth9.12

next-hop:                        192.178.0.6

next-hop-mac:                    66:0e:94:8c:d4:0f

nexthop-vlan:                    4091

remote-switch:                   0

active:                          yes

state:                           ok

error:                           

route-info:                      32.4.11.0/24

scope:                           

name:                            

type:                            

vrouter-name:                    

peer-vrouter-name:               

local-ip:                        

remote-ip:                       

router-if:                       

next-hop:                        192.178.0.2

next-hop-mac:                    66:0e:94:5b:90:2b

nexthop-vlan:                    

remote-switch:                   4092

active:                          0

state:                           ok

error:                           

route-info:                      32.4.11.0/24

 

CLI (network-admin@leaf-pst-1) > vrouter-rib-routes-show ip 32.4.11.0

vrid ip        prelen number-of-nexthops nexthop     flags      vlan intf_ip     intf_id

---- --------- ------ ------------------ ----------- ---------- ---- ----------- -------

0    32.4.11.0 24     2                  192.178.0.6 ECMP,in-hw 4091 192.178.0.5 1

0    32.4.11.0 24     2                  192.178.0.2 ECMP,in-hw 4092 192.178.0.1 0

 

CLI (network-admin@leaf-pst-1) > vrouter-rib-routes-show ip 32.4.12.0

vrid ip        prelen number-of-nexthops nexthop     flags      vlan intf_ip     intf_id

---- --------- ------ ------------------ ----------- ---------- ---- ----------- -------

0    32.4.12.0 24     2                  192.178.0.6 ECMP,in-hw 4091 192.178.0.5 1

0    32.4.12.0 24     2                  192.178.0.2 ECMP,in-hw 4092 192.178.0.1 0

 

 

VXLAN Routing In and Out of Tunnels

 

Informational Note:

The VXLAN tunnel loopback infrastructure, identified by the trunk object named "vxlan-loopback-trunk", is used for bridging multicast or broadcast traffic in the extended VLAN and for routing traffic before VXLAN encapsulation or after VXLAN decapsulation. Non-routed unicast traffic is bridged and encapsulated, or decapsulated and bridged, without using the VXLAN tunnel loopback.

This feature provides support for centralized routing for VXLAN VLANs. For hosts on different VXLAN VLANs to communicate with each other, SVIs for the VXLAN VLANs are configured on one cluster pair in the fabric. Any VXLAN VLAN packets to be routed between two hosts are sent to a centralized overlay vRouter and then VXLAN encapsulated or decapsulated, depending on the source or destination host location.

Because the E68-M and E28Q cannot perform VXLAN routing in and out of tunnels in a single pass, loopback support exists: Netvisor leverages the vxlan-loopback-trunk to recirculate packets. Be sure to add ports to the vxlan-loopback-trunk so that VXLAN routing in and out of tunnels works correctly. After VXLAN decapsulation, if a packet is routed, the inner DMAC is either the vRouter MAC address or the VRRP MAC address, and the packet must recirculate after decapsulation as part of the routing operation. To accomplish this, the Layer 2 entries for the vRouter MAC address or VRRP MAC address on the VXLAN VLAN are programmed in hardware to point to the vxlan-loopback-trunk ports. The output of the l2-table-show command includes a vxlan-loopback flag to indicate this hardware state.

CLI network-admin@switch > l2-table-show vlan 200

mac:                       00:0e:94:b9:ae:b0

vlan:                      200

vxlan:                     10000

ip:                        2.2.2.2

ports:                     69

state:                     active,static,vxlan-loopback,router

hostname:                  Spine1

peer-intf:                 host-1

peer-state:                

peer-owner-state:          

status:                    

migrate:                   

mac:                       00:0e:94:b9:ae:b0

vlan:                      200

vxlan:                     10000

ip:                        2.2.2.2

ports:                     69

state:                     active,static,vxlan-loopback,router

hostname:                  Spine1

peer-intf:                 host-1

peer-state:                active,vrrp,vxlan-loopback active,vrrp

peer-owner-state:          

status:                    

migrate:                   

CLI network-admin@switch > l2-table-show vlan 100

mac:                       00:0e:94:b9:ae:b0

vlan:                      100

vxlan:                     20000

ip:                        1.1.1.1

ports:                     69

state:                     active,static,vxlan-loopback,router

hostname:                  Spine1

status:                    

migrate:                   

 

Also, for Layer 3 entries behind VXLAN tunnels, the routing and encapsulation operations require two passes. To obtain the Layer 3 entry, the hardware points to the vxlan-loopback-trunk. The output of the l3-table-show command displays the hardware state with a vxlan-loopback flag.

CLI (network-admin@Spine1) > l3-table-show ip 2.2.2.3 format all

mac:                    00:12:c0:88:07:75

ip:                    2.2.2.3

vlan:                  200

public-vlan:          200

vxlan:                10000

rt-if:                eth5.200

state:                active,vxlan-loopback

egress-id:            100030

create-time:          16:46:20

last-seen:            17:25:09

hit:                  22

tunnel:               Spine1_Spine4

 

VXLAN Port Termination

When overlay VLANs are configured on a port, Netvisor does not allow VXLAN termination on that port, even if the VXLAN termination criteria match. Netvisor enforces this configuration for ports facing bare metal servers or single root I/O virtualization (SR-IOV) hosts. With underlay VLANs configured on a port, Netvisor OS allows VXLAN termination on the port, which may have a hardware VTEP or software VTEP configured.

Prior to Version 2.5.3, when you configured overlay VLANs on a port, VXLAN encapsulated packets received on that port did not terminate on a VXLAN tunnel. In versions later than 2.5.3, Netvisor no longer enforces this configuration.

One sample use case has both overlay and underlay VLANs on a port. In this case, Netvisor disables VXLAN termination on the port because the port has an overlay VLAN; therefore, any VXLAN encapsulated traffic received on this port no longer terminates, even if the destination uses a local hardware VTEP.

To support this sample use case, Netvisor OS provides a port-config-modify parameter to enable or disable VXLAN termination on the port.

CLI network-admin@switch > port-config-modify port 35 vxlan-termination

Enables tunnel termination of VXLAN encapsulated packets received on the port when the VXLAN tunnel termination criteria are met.

CLI network-admin@switch > port-config-modify port 35 no-vxlan-termination

Disables VXLAN termination on a port on which VXLAN encapsulated packets are received. This enforces security by preventing a malicious host from generating VXLAN encapsulated packets that would otherwise be subject to VXLAN tunnel termination.

Managed ports added to a VNET with vlan-type private rely on VXLAN functionality and therefore always carry overlay VLANs only. Therefore, when you configure a port as a managed port, VXLAN termination is disabled by default.

 

Default Settings

1. VNETs with vlan-type private rely on VXLAN functionality; vlan-type private VLANs are VXLAN overlay VLANs. Hence, when a port is configured as a managed port with vlan-type private, vxlan-termination is disabled by default.

2. Shared/underlay ports have vxlan-termination on by default; you can use the port-config-modify command to enable or disable vxlan-termination as needed to enforce port-level security.

Virtual Link Extension with Cluster Configurations

Netvisor supports Virtual Link Extension (VLE) on switches that are part of a cluster configuration by creating dedicated VXLAN tunnel endpoints (VTEPs). Configure the VTEPs using one of the physical or primary IP addresses on the switch. The physical or primary IP address can come from a new Layer 3 interface dedicated to the VLE configuration, or from reusing the existing physical or primary IP addresses used to build the cluster VIP and provide VXLAN tunnel redundancy in a cluster environment. These dedicated tunnels and VTEPs are stateless, with no dependency on each other.

Figure 3: Example Topology for Virtual Link Extension and Cluster Configuration

In the example topology, Host1 connects to both cluster nodes, PN-SW1 and PN-SW2, with no vLAG configured on the PN-SW1 and PN-SW2 ports connected to Host1. Host2 has two links connected to PN-SW3, a standalone switch; PN-SW3 does not configure trunking on the ports connected to Host2. Configure both Host1 and Host2 with LACP on the links connecting to the switches for High Availability (HA) functionality.

Create a new VLAN with a Layer 3 interface on the local vRouter to use as the VTEP source IP. Create the VLAN with local scope and dedicate it to this usage.

In this example configuration, you must configure one virtual link extension for each point-to-point connection.

1. Configure VLE VLANs for each virtual link extension and add the ports:

On PN-SW1 

CLI network-admin@switch > vlan-create id 400 vxlan 400 vxlan-mode transparent scope local

CLI network-admin@switch > vlan-port-add vlan-id 400 ports 11

On PN-SW2 

CLI network-admin@switch > vlan-create id 401 vxlan 401 vxlan-mode transparent scope local

CLI network-admin@switch > vlan-port-add vlan-id 401 ports 11

On PN-SW3

CLI network-admin@switch > vlan-create id 400 vxlan 400 vxlan-mode transparent scope local

CLI network-admin@switch > vlan-create id 401 vxlan 401 vxlan-mode transparent scope local

CLI network-admin@switch > vlan-port-add vlan-id 400 ports 11

CLI network-admin@switch > vlan-port-add vlan-id 401 ports 12

Next, create the dedicated VXLAN tunnels between the VTEPs.

On PN-SW1

CLI network-admin@switch > tunnel-create scope local name VTEP1 vrouter-name vr-s1 local-ip 10.10.10.1 remote-ip 20.20.20.3

On PN-SW2

CLI network-admin@switch > tunnel-create scope local name VTEP2 vrouter-name vr-s2 local-ip 10.10.10.2 remote-ip 20.20.20.3

On PN-SW3

CLI network-admin@switch > tunnel-create scope local name VTEP3 vrouter-name vr-s3 local-ip 20.20.20.3 remote-ip 10.10.10.1

CLI network-admin@switch > tunnel-create scope local name VTEP4 vrouter-name vr-s3 local-ip 20.20.20.3 remote-ip 10.10.10.2

2. Add VLE VLANs and VXLANs to VXLAN tunnels.

On PN-SW1 

CLI network-admin@switch > tunnel-vxlan-add name VTEP1 vxlan 400

On PN-SW2

CLI network-admin@switch > tunnel-vxlan-add name VTEP2 vxlan 401

On PN-SW3

CLI network-admin@switch > tunnel-vxlan-add name VTEP3 vxlan 400

CLI network-admin@switch > tunnel-vxlan-add name VTEP4 vxlan 401

Virtual Link Static Bidirectional Association

A Virtual Link Extension (VLE) static bidirectional association is between a physical port or trunk and a VXLAN tunnel for a selected VNID. Every virtual link extension is defined by a VLAN or VXLAN VNID, with physical ports or trunks and tunnels added to that VLAN or VXLAN VNID. One endpoint of a virtual link extension must be a physical port or trunk, and the other endpoint must be a virtual tunnel endpoint (VTEP). A VTEP can carry multiple VLE VLANs or VXLANs.
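The bidirectional association described above is essentially a per-VNID pairing of one local port (or trunk) with one tunnel; a hypothetical model using the VNIDs from the earlier example:

```python
# One VLE entry = VNID -> (local port/trunk, VTEP tunnel); a VTEP may carry many VNIDs.
vle_map = {
    400: ("port-11", "VTEP3"),
    401: ("port-12", "VTEP4"),
}

def egress_for(vnid: int, came_from_port: bool):
    """Bidirectional: traffic from the port goes to the tunnel, and vice versa."""
    port, tunnel = vle_map[vnid]
    return tunnel if came_from_port else port

assert egress_for(400, came_from_port=True) == "VTEP3"
```

The vlan-create vxlan-mode and tunnel-vxlan-add commands shown earlier are what populate this association on the switch.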

Virtual Link Static Bidirectional Command

Use the vlan-create vxlan-mode standard|transparent command to configure this feature.

Port Replication for Virtual Link Extensions

This feature provides a mechanism for link state tracking between two ports of two switches in the same fabric for Virtual Link Extension (VLE).

When you configure a VLE between two physical ports of two switches, the VLE remains up as long as both physical ports are in the link up state. When you configure VLE tracking on a trunk port, the VLE stays up as long as at least one port in the trunk remains up and the remote port remains up. When the last trunk member goes link down, Netvisor brings down the VLE. Note that when you configure VLE tracking on a trunk port, you cannot configure tracking on individual trunk members.
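The tracking rules above reduce to a simple predicate: a tracked VLE is up while at least one local trunk member and the remote side are both up. A sketch:

```python
def vle_is_up(local_member_links: list, remote_up: bool) -> bool:
    """VLE stays up as long as ANY local trunk member is up AND the remote
    port is up; when the last member goes down, the VLE goes down too."""
    return any(local_member_links) and remote_up

assert vle_is_up([True, False], remote_up=True) is True    # one member still up
assert vle_is_up([False, False], remote_up=True) is False  # last member down
assert vle_is_up([True, True], remote_up=False) is False   # remote side down
```

For a plain (non-trunk) port, the member list has a single entry, which matches the "both physical ports up" rule.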

VLE tracking helps achieve VLE high availability on Netvisor OS nodes and avoids the need on the client side to run LACP for link up/down detection.

Use these commands to create, modify, delete, and show link state tracking:

vle-create

vle-modify

vle-delete

vle-show

To create virtual link extension tracking, use the vle-create command. You can execute this command from any fabric node to create a virtual link extension between any two switches in the fabric.

CLI network-admin@switch > vle-create name name-string node-1 fabric-node-name node-2 fabric-node-name node-1-port node-1-port-number node-2-port node-2-port-number [tracking|no-tracking]

vle-create

Create virtual link extension tracking

 name name-string

Specify the VLE name.

node-1 fabric-node name

Specify VLE node 1 name.

node-2 fabric-node name

Specify VLE node 2 name.

 node-1-port node-1-port-number

Specify VLE node-1 port.

 node-2-port node-2-port-number

Specify VLE node-2 port.

[tracking|no-tracking]

Enable or disable tracking between VLE ports

To enable or disable tracking between existing VLE ports, use the vle-modify command:

CLI network-admin@switch > vle-modify name name-string tracking|no-tracking

vle-modify

Modify virtual link extension tracking

name name-string

Specify the name of the VLE to modify.

tracking|no-tracking

Enable or disable tracking between VLE ports

To delete a virtual link extension, use the vle-delete command:

CLI network-admin@switch > vle-delete name name-string

To view a virtual link extension status, use the vle-show command:

CLI network-admin@switch > vle-show

name       node-1   node-2   node-1-port node-2-port status tracking

---------- -------- -------- ----------- ----------- ------ --------

Test1       mynode1 mynode2   11          11          up     yes

 

Support for Configuring Keep-Alive Time for Virtual Link Extension (VLE)

As part of VLE tracking, each node hosting VLEs sends fabric fast keepalives every second to every other node that hosts the endpoint of one of its local VLEs. A local node times out a remote node if no keepalive is received from the remote node within 3 seconds, the default VLE tracking timeout.

If a remote node times out, Netvisor brings down the local VLE ports terminating on that remote node. In some deployments, the default three-second timeout may be too aggressive, so Netvisor makes the timeout configurable.

Note that the periodic fast keepalive send frequency remains one second. Only the timeout value adjusts to the configured value.

CLI network-admin@switch > system-settings-modify vle-tracking-timeout seconds

 

Configure a value from 3 to 30 seconds.

fabric-node-show keepalive-timeout — displays the fabric keepalive timeout as a high-resolution time, in nanoseconds.
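The keepalive behavior above separates the send interval (fixed at one second) from the timeout (configurable from 3 to 30 seconds); a sketch of the timeout check:

```python
SEND_INTERVAL = 1.0  # seconds; the fast keepalive frequency is fixed

def remote_timed_out(now: float, last_keepalive: float,
                     vle_tracking_timeout: float = 3.0) -> bool:
    """A remote node is declared down when no keepalive has arrived within the
    configured timeout; its local VLE ports are then brought down."""
    if not 3.0 <= vle_tracking_timeout <= 30.0:
        raise ValueError("vle-tracking-timeout must be 3..30 seconds")
    return now - last_keepalive > vle_tracking_timeout

assert remote_timed_out(now=10.0, last_keepalive=5.0) is True   # 5s silence > 3s default
assert remote_timed_out(now=10.0, last_keepalive=8.5) is False  # heard 1.5s ago
```

Raising vle-tracking-timeout only changes how long silence is tolerated; keepalives are still sent every second.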

Support for Virtual Link Extension (VLE) Analytics

Netvisor OS does not copy VLE traffic control frames to the CPU on the switch, and does not remove the inner tag if present. Netvisor achieves this by installing a system vFlow, Virtual-Link-Extend, at the highest priority (15) with no action specified, so that Netvisor does not terminate LLDP or other control frames or send them to the CPU.

To support VLE analytics, Netvisor installs a few additional system vFlows at the same priority as the existing Virtual-Link-Extend vFlow to copy TCP SYN/FIN/RST packets to the CPU. This ensures that VLE SYN/FIN/RST packets match the System-VLE-x flows and not the Virtual-Link-Extend flow.

CLI network-admin@switch > vflow-show format name,scope,type,proto,tcp-flags,precedence,action,enable

 

name                   scope type   proto tcp-flags precedence action      enable

---------------------- ----- ------ ----- --------- ---------- ----------- ------

System-VLE-S           local system tcp   syn       15         copy-to-cpu enable

System-VLE-F           local system tcp   fin       15         copy-to-cpu enable

System-VLE-R           local system tcp   rst       15         copy-to-cpu enable

Virtual-Link-Extend    local system                 15         none        enable

 

CLI network-admin@switch > connection-show

vnet vlan vxlan src-ip     dst-ip     dst-port cur-state syn-resends syn-ack-resends

---- ---- ----- ---------- ---------- -------- --------- ----------- ---------------

     100  100   20.20.20.1 20.20.20.2 http     fin       0           0

 

latency obytes ibytes total-bytes age

------- ------ ------ ----------- --------

74.8us  149    311    460         2h11m21s