Packet Load Balancing over One-to-Many Links

When Virtual Wire is deployed as a legacy packet broker, moving packets from the production network to an analyzer tool, load balancing is required because 10Gb links are often monitored with 1Gb tools.

Netvisor load balances the traffic by distributing it across different tool ports or appliances in order to scale the monitoring.

This approach also provides redundancy in the monitoring infrastructure.

When a member port goes down, its traffic is switched to the remaining member ports and evenly distributed across them.

To configure load balancing, use the following steps:

1. First configure a trunk on the desired ports. In this case, ports 15 and 16 are configured as a trunk:

CLI network-admin@Leaf1 > trunk-create name lb_trunk ports 15,16

Created trunk lb_trunk, id 128

 

2. Create the port association on the switch:

CLI network-admin@Leaf1 > port-association-create name pa1 master-ports 1 slave-ports 128 virtual-wire bidir

3. Display the configuration:

CLI network-admin@Leaf1 > port-association-show

switch      name master-ports slave-ports policy      virtual-wire bidir
----------- ---- ------------ ----------- ----------- ------------ -----
vw-switch-1 pa1  1            128         all-masters true         true

CLI network-admin@Leaf1 > port-show port 1,16

switch      port vnet hostname status config    trunk
----------- ---- ---- -------- ------ --------- --------
vw-switch-1 1                         40g,jumbo
vw-switch-1 16                 trunk  10g,jumbo lb_trunk

CLI network-admin@Leaf1 > vflow-show

switch      name                    scope type   in-port ether-type dst-ip    proto
----------- ----------------------- ----- ------ ------- ---------- --------- -----
vw-switch-1 Internal-Keepalive      local system         ipv4       239.4.9.7 udp
vw-switch-1 VIRT_WIRE_MAS_SLV       local system 1
vw-switch-1 VIRT_WIRE_SLV_MAS       local system 128

 

tcp-flags flow-class precedence action      action-to-ports-value enable
--------- ---------- ---------- ----------- --------------------- ------
          control    14         to-cpu                            enable
                     14         to-port     128                   enable
                     14         to-port     1                     enable

 

table-name
-------------------

L1-Virtual-Wire-1-0
L1-Virtual-Wire-1-0
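
If a member port of the trunk goes down, traffic is automatically redistributed across the remaining member ports, as described above. To verify the trunk and the state of its member ports, you can use the standard trunk-show and port-show commands. The following is only a sketch; the format field list is illustrative and the available columns may vary by Netvisor version:

CLI network-admin@Leaf1 > trunk-show name lb_trunk

CLI network-admin@Leaf1 > port-show port 15,16 format port,status,trunk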

 

Building a Virtual Wire Fabric

Multiple Virtual Wire switches can be interconnected to form a single Virtual Wire fabric. A Virtual Wire fabric acts like a highly scalable, distributed patch panel that can be dynamically and remotely provisioned to implement dedicated wire-speed links between any two device ports in the network.

When all of the switches in the Virtual Wire fabric are part of the same Management Fabric, they can be provisioned and controlled as a single logical Virtual Wire switch.
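
For example, the commands below sketch how the switches might be joined into one Management Fabric before Virtual Wire links are provisioned across them. This is only an illustration with hypothetical values (the fabric name vw-fabric and the seed switch address 192.168.1.10); refer to the fabric configuration documentation for the complete set of options:

CLI network-admin@Leaf1 > fabric-create name vw-fabric

CLI network-admin@Leaf2 > fabric-join switch-ip 192.168.1.10

CLI network-admin@Leaf2 > fabric-node-show format name,fab-name,state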


 

Informational Note:  A Management Fabric for more than four Virtual Wire switches requires a software license that is already included on E68-M and E28-Q switches.

The most efficient design for a Virtual Wire fabric is based on the classic leaf-spine architecture, also known as Clos, a nonblocking, multistage switching topology, as shown in Figure 4.

Figure 4: Leaf and Spine Topology for Virtual Wire Fabric

Slide1_2.jpg



 

Informational Note:  In a Clos architecture, there is no limit to the number of Virtual Wire links between device ports that are physically connected to the same leaf. The number of Virtual Wire links between device ports that are connected to different leafs, however, depends on the oversubscription ratio between the leaf and spine layers.

With this approach, you can select the desired oversubscription ratio and build a modular architecture that scales up to thousands of device ports. For example, using E68-M switches as building blocks, a possible leaf switch configuration uses 44 x 10 Gigabit Ethernet ports to connect to device ports and 24 x 10 Gigabit Ethernet ports to connect to the spine layer, resulting in a 1.8:1 (44:24) oversubscription ratio. Based on the desired maximum number of device ports, you can select from different scale options:

Figure 5: 17 Leafs and 6 Spines

Slide1_3.jpg

Figure 6: 34 Leafs and 12 Spines

Slide1_4.jpg

Figure 7: 68 Leafs and 24 Spines

Slide1_5.jpg