Configuring Port Attributes

Displaying Port Numbering

To display port numbering on white box Netvisor OS platforms, use the bezel-portmap-show command:

CLI network-admin@switch > bezel-portmap-show

switch         port bezel-intf

-------------- ---- ----------

Leaf1          1    1          

Leaf1          2    2          

Leaf1          3    3          

Leaf1          4    4          

Leaf1          5    5

...

Leaf1          49   49         

Leaf1          50   49.2       

Leaf1          51   49.3       

Leaf1          52   49.4       

Leaf1          53   50         

Leaf1          54   50.2       

Leaf1          55   50.3       

Leaf1          56   50.4

Configuring Ports for Different Throughput

By default, ports on the switches are configured as 40GbE ports. You can also use them as 4 x 10GbE ports with the right transceiver. To refer to the 40Gb port, use the first port number of the port group. For example, the first 40Gb port in the example above is referred to as port 49 for 40GbE use and as ports 49, 50, 51, and 52 for 4 x 10GbE use.

If you want to change the 40Gb port to 4x10Gb functionality, use the following command sequence:

CLI network-admin@switch > port-config-modify port 49-52 speed 10g

To change the port back to 40Gb operation, use the following command sequence:

CLI network-admin@switch > port-config-modify port 49 speed 40g

 

The default port speed is 10G, and you can modify the parameters of a port with the port-config-modify command.
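For example, the following sketch sets a port back to the default 10G speed (the port number is illustrative):

CLI network-admin@switch > port-config-modify port 17 speed 10g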

Displaying Port Status

You can use the port-show command to display status information on all ports with active links. Details for each port include the IP addresses and MAC addresses of hosts connected to that port. There can be more than one host if a network device such as a switch is connected. The command also displays the VLAN of the port, port status, and configuration details.

To display all port information for ports 1-6 on the switch, use the command, port-show port 1-6:

CLI network-admin@switch > port-show port 1-6

switch   port bezel-port vnet l2-net hostname status config

-------- ---- ---------- ---- ------ -------- ------ ------

Leaf1    1    1                                      10g    

Leaf1    2    2                                      10g    

Leaf1    3    3                                      10g    

Leaf1    4    4                                      10g    

Leaf1    5    5                                      10g    

Leaf1    6    6                                      10g

 

Displaying Port Statistics

You can also display statistics for all active ports on the switch. This information is useful for understanding the traffic on the ports.

Use the port-stats-show command to display the information:

CLI network-admin@switch > port-stats-show port 0 format all layout vertical

switch:      dorado05

time:        11:32:41

port:        0

description:

counter:     0

ibytes:      2.82G

ibits:       24.2G

iUpkts:      176M

iBpkts:      0

iMpkts:      0

iPauseFs:    0

iCongDrops:  0

idiscards:   7

ierrs:       0

obytes:      884M

obits:       7.42G

oUpkts:      13.0M

oBpkts:      0

oMpkts:      0

oPauseFs:    0

oCongDrops:  1.89G

odiscards:   1.89G

oerrs:       0

mtu-errs:    0

HER-pkts:    0

HER-bytes:   0

port-speed:  disable

 

In the output, counters prefixed with i report ingress traffic and counters prefixed with o report egress traffic. For example, iUpkts, iBpkts, and iMpkts count ingress unicast, broadcast, and multicast packets, iPauseFs counts ingress pause frames, and iCongDrops counts ingress congestion drops.

Using Port Buffering

You can modify and display the port buffering settings for the switch ports. To display the port buffering settings, use the port-buffer-settings-show command:

CLI network-admin@switch > port-buffer-settings-show

switch: Spine1

enable: yes

interval: 1m

disk-space: 50M

 

To modify port buffering settings, use the port-buffer-settings-modify command:

CLI network-admin@switch > port-buffer-settings-modify interval 2m

You can modify the buffer interval, duration, disk space, and enable or disable port buffering on the switch.
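For example, the following sketch shortens the collection interval and increases the disk space allotment in one command (the values are illustrative, and the disk-space keyword is assumed to match the field shown by port-buffer-settings-show):

CLI network-admin@switch > port-buffer-settings-modify interval 30s disk-space 100M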

To display the port buffer, use the port-buffer-show command:

CLI network-admin@switch > port-buffer-show

switch: Spine1

port: 0

ingress-used-buf: 0%

ingress-used-buf-val: 0

egress-used-buf: 0%

egress-used-buf-val: 0

switch: Spine1

port: 3

ingress-used-buf: 0%

ingress-used-buf-val: 0

egress-used-buf: 0%

egress-used-buf-val: 0

switch: Pleiades24

port: 57

ingress-used-buf: 0%

ingress-used-buf-val: 0

egress-used-buf: 0%

egress-used-buf-val: 0

switch: Spine1

port: 65

ingress-used-buf: 0%

ingress-used-buf-val: 0

egress-used-buf: 0%

egress-used-buf-val: 0

switch: Spine2

port: 0

ingress-used-buf: 0%

ingress-used-buf-val: 0

egress-used-buf: 0%

egress-used-buf-val: 0

switch: Spine2

port: 1

ingress-used-buf: 0%

ingress-used-buf-val: 0

egress-used-buf: 0%

egress-used-buf-val: 0

 

Auto-Recovery of a Disabled Port

Physical ports are automatically disabled by Netvisor OS due to certain violations. For example, if an edge port receives BPDU messages, Netvisor OS disables the port because receiving BPDUs on an edge port is a security violation. However, there is no way to indicate that the port is shut down because of a violation and not because of physical link issues.

The port may be disabled due to errors such as BPDU guard or MAC security violations.

This feature allows you to configure an automatic retry to enable the port after a configured timeout.

err-disable-counters-clear

bpduguard|no-bpduguard

Specify if you want BPDU guard enabled.

macsecurity|no-macsecurity

Specify if you want MAC recovery enabled.

recovery-timer duration: #d#h#m#s

Specify the recovery time value. The default timer value is 5 minutes.

err-disable-modify

bpduguard|no-bpduguard

Specify if you want BPDU guard enabled.

macsecurity|no-macsecurity

Specify if you want MAC recovery enabled.

recovery-timer duration: #d#h#m#s

Specify the recovery time value. The default timer value is 5 minutes.

err-disable-show

bpduguard|no-bpduguard

Display BPDU guard settings.

macsecurity|no-macsecurity

Display MAC recovery settings.

recovery-timer duration: #d#h#m#s

Display the recovery time value.

CLI network-admin@switch > err-disable-show

switch:         Leaf1

bpduguard:      off

macsecurity:    off

recovery-timer: 5m
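For example, the following sketch enables BPDU guard auto-recovery with a 10-minute recovery timer, using the err-disable-modify parameters listed above (the timer value is illustrative):

CLI network-admin@switch > err-disable-modify bpduguard recovery-timer 10m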

Loop-Free Layer 2 Topology

Netvisor OS Loop Detection operates in conjunction with Rapid Spanning Tree Protocol (RSTP) and Multiple Spanning Tree Protocol (MSTP). RSTP and MSTP are used to ensure a loop-free topology for the VLANs in the Layer 2 network, as far as the networking equipment is concerned.

RSTP prevents loops in the network caused by miscabled networking equipment, but does not address misconfigured hosts. Netvisor OS Loop Detection goes beyond STP to protect the network from misconfigured or miscabled hosts attached to the network.

Netvisor OS Control Plane — The Netvisor OS control plane includes information about every MAC address attached to the Layer 2 network in a vPort database. The vPort database is distributed throughout the fabric so that each Netvisor OS switch has a copy of the vPort database for the entire fabric.

A MAC address is stored in a vPort, which includes information such as the owning switch, MAC address, VLAN, ports, state, hostname, and status, as shown in the vport-show output later in this section.

Access to the Netvisor OS fabric goes through the Netvisor OS software. Netvisor OS determines whether endpoints can access the network based on control plane data structures, including the vPort database.

Detecting Loops

Netvisor OS Loop Detection is implemented as part of Netvisor OS source MAC address miss handling. Netvisor OS disables hardware learning of MAC addresses, so when a packet arrives with an unknown MAC address, the switch sends the packet to Netvisor OS rather than switching the packet normally. Netvisor OS examines the vPort table to determine if a packet with an unknown MAC is indicative of a loop.

Netvisor OS uses two criteria to detect a loop on the network:

For the purposes of Netvisor OS Loop Detection, a host port is defined as a port that is not connected to another Pluribus switch, is not an internal port, and does not participate in STP, which means that either Netvisor OS is not configured for STP or the device connected to the port is not configured for STP.

VRRP MAC addresses are not subject to Loop Detection and Mitigation, and can migrate freely.

Loops are detected on a port-by-port basis. A single loop typically involves two ports, either on the same switch or on two different switches. When multiple loops are present, more than two ports are involved and each port is handled separately.

Loop Mitigation

When a loop is detected, a message is logged to the system log indicating the host port and VLAN involved in the loop. In addition, the host port involved in the loop has the "loop" status added, and Netvisor OS adds the VLAN to the host port loop-vlans VLAN map, so that looping ports and VLANs are displayed in the port-show output.

At the start of loop mitigation, Netvisor OS creates vPorts to send loop probe packets. The vPorts use the port MAC address of the in-band NIC port, a status of PN-internal, and a state of loop-probe. Loop-probe vPorts are propagated throughout the fabric, and Netvisor OS creates one for each looping VLAN.

At the start of loop mitigation, Netvisor OS deletes all vPorts from the looping host port and VLAN. This prevents the hardware from sending unicast packets to the looping port and causes every packet arriving on the looping port to appear in software as a source MAC miss. During loop mitigation, all packets arriving on the looping port are dropped.

During loop mitigation, Netvisor OS sends loop probe packets on the looping VLANs every 3 seconds. As long as the loop persists, Netvisor OS receives the probe packets as source MAC miss notifications on the looping ports, so Netvisor OS can determine that the loop is still present. If 9 seconds elapse with no received probe packets, Netvisor OS concludes that the loop is resolved and ends loop mitigation.

At the end of loop mitigation, log messages are added to the system log, loop-probe vPorts are removed, and the loop status and loop VLANs are removed from the looping port.

To view affected ports, use the port-show command and add the parameter, status loop:

CLI network-admin@switch-31 > port-show status loop

switch     port hostname status                config

---------- ---- -------- --------------------- ------

switch-31  9             up,stp-edge-port,loop fd,10g

switch-32  9             up,stp-edge-port,loop fd,10g

Note the new status, loop, in the status column.

During loop mitigation, the MAC addresses for loop probes are displayed in the vPort table:

CLI (network-admin@switch-31) > vport-show state loop-probe

owner      mac               vlan ports state      hostname   status      

---------- ----------------- ---- ----- ---------- ---------- -----------

switch-32 06:c0:00:16:f0:45 42   69    loop-probe leo-ext-32 PN-internal

switch-31 06:c0:00:19:c0:45 42   69    loop-probe leo-ext-31 PN-internal

 

Note the loop-probe state as well as the PN-internal status. The loop probes use the port MAC address format and use the internal port for the in-band NIC.

If you notice a disruption in the network, use the port-show command to find the looping ports, and fix the loop. Fixing the loop typically involves correcting cabling issues, configuring virtual switches, or as a stop-gap measure, using the port-config-modify command to change port properties for the looping host ports. Once the loop is resolved, Netvisor OS no longer detects probes and leaves the loop mitigation state, while logging a message:

2016-01-12,12:18:41.911799-07:00 leo-ext-31 nvOSd(25695) system

host_port_loop_resolved(11381) : level=note : port=9 :

Traffic has stopped looping on host-port=9

At this point the loop status is removed from the port-show output for port 9 and the loop-probe vPorts are removed.
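As a stop-gap while you correct the cabling, you can administratively disable a looping host port (the port number is illustrative, and the disable keyword is assumed to be the counterpart of the enable keyword used elsewhere in this guide):

CLI network-admin@switch > port-config-modify port 9 disable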

Netvisor OS Loop Detection exposes loops using system log messages, port-show output, and vport-show output. Netvisor OS Loop Detection is enabled or disabled by using the system-settings-modify command:

CLI network-admin@e68-leaf-01 > system-settings-modify block-loops

CLI network-admin@e68-leaf-01 > system-settings-modify no-block-loops

 

The block-loops argument for system-settings-modify is not available on the F64.

When Netvisor OS detects an internal port MAC address on a host port, Netvisor OS prints a log message:

system   2016-01-19,15:36:40.570184-07:00 mac_move_denied

       11379 note  MOVE DENIED mac=64:0e:94:c0:03:b3 vlan=1 vxlan=0

       from switch=leo-ext-31 port=69 to deny-switch=leo-ext-31 deny-port=9

       reason=internal MAC of local switch not allowed to change ports

 

Netvisor OS starts Loop Mitigation by logging a message:

system   2016-01-19,15:36:40.570334-07:00 host_port_loop_detected

       11380 warn  Looping traffic detected on host-port=9

       vlan=1. Traffic on this port/VLAN will be ignored until loop resolved

 

During Loop Mitigation, Netvisor OS sends loop probes. When these probes, as well as any other packets, are received on a looping host port, Netvisor OS logs a message:

 

       system   2016-01-19,15:59:54.734277-07:00 mac_move_denied

       11379 note  MOVE DENIED mac=06:c0:00:19:c0:45 vlan=1 vxlan=0

       from switch=leo-ext-31 port=69 to deny-switch=leo-ext-31

       deny-port=9 reason=port is looping

 

mac_move_denied messages are limited to one every 5 seconds for each vPort. This prevents the system log from filling up with mac_move_denied messages during loop mitigation.

During loop mitigation, you can use the port-show command to see which ports are involved in the loop:

 

CLI network-admin@switch > port-show status loop

switch     port hostname status                loop-vlans config

---------- ---- -------- --------------------- ---------- ------

e68-leaf-01 9             up,stp-edge-port,loop 1          fd,10g

e68-leaf-01 9             up,stp-edge-port,loop 1          fd,10g

 

Note the loop status in the status column and the loop-vlans column.

During loop mitigation, the MAC addresses for loop probes are displayed in the vPort table:

 

CLI network-admin@switch > vport-show state loop-probe

owner       mac               vlan ports state      hostname   status

----------- ----------------- ---- ----- ---------- ---------- -----------

e68-leaf-01 06:c0:00:16:f0:45 42   69    loop-probe leo-ext-32 PN-internal

e68-leaf-01 06:c0:00:19:c0:45 42   69    loop-probe leo-ext-31 PN-internal

 

Managing Control Plane Traffic Protection (CPTP)

This feature is supported on the following platforms:

• F9272-X

• AS5712-54X

• F9232-C

• AS6712-32X

• F9372-T

• AS5812-54T

• S4048-ON

• Z9100-ON

Control Plane Traffic Protection (CPTP) applies to the internal control, data, and span ports, which all connect to the CPU. CPTP protects CPU resources from large quantities of traffic arriving from different sources, such as control packets, cluster communication, fabric updates, regular flood traffic, learning packets, and copy-to-CPU packets.

The purpose of CPTP is to classify traffic in hardware into different Class of Service (CoS) queues, perform priority scheduling between them, and apply a rate limit to each CoS. This protects CPU resources while providing a Service Level Agreement (SLA) for critical traffic.

CLI network-admin@switch > port-cos-rate-setting-show

switch    port  port-number cos0-rate cos1-rate cos2-rate cos3-rate cos4-rate cos5-rate cos6-rate

--------- ----- ----------- --------- --------- --------- --------- --------- --------- --------

Spine1    pci-e 0           100       100       1000000   1000000   1000000   1000000   1000000

Spine1    data 65           100       100       1000000   1000000   1000000   1000000   1000000

Spine1    span 66           100       100       1000000   1000000   1000000   1000000   1000000

 

On ONVL switches, the following output is displayed:

switch:         Leaf1

port:           control-port

ports:          0

cos0-rate(pps): 5000

cos1-rate(pps): 5000

cos2-rate(pps): 5000

cos3-rate(pps): 5000

cos4-rate(pps): 5000

cos5-rate(pps): 5000

cos6-rate(pps): 5000

cos7-rate(pps): 5000

You can modify the CoS rate settings using the port-cos-rate-setting-modify command. The rate limits are set in packets per second.
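For example, the following sketch raises the cos2 rate on the internal data port (the port name and the cos2-rate parameter are assumptions based on the port-cos-rate-setting-show output above):

CLI network-admin@switch > port-cos-rate-setting-modify port data cos2-rate 500000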

CLI network-admin@switch > port-cos-stats-show

switch:     Spine1

time:        11:59:15

port:        0

cos0-out:    58.8M

cos0-drops:  180M

cos1-out:    58.8M

cos1-drops:  185M

cos2-out:    0

cos2-drops:  0

cos3-out:    0

cos3-drops:  0

cos4-out:    0

cos4-drops:  0

cos5-out:    0

cos5-drops:  0

cos6-out:    65.5M

cos6-drops:  1.06G

cos7-out:    483K

cos7-drops:  493M

To clear the statistics for CoS on the ports, use the port-cos-stats-clear command.

Enhancements for Control Plane Traffic Protection

This enhancement to Control Plane Traffic Protection (CPTP) provides 44 queues to further strengthen CPU protection and limits the traffic going to the CPU. Currently, only 8 Class of Service (CoS) queues are supported for flow control on a physical port. Each traffic class with a CPU destination has a separate vFlow. All system vFlows with the parameters, to-cpu or copy-to-cpu, now have an additional cpu-cos value.

 

cpu-class-show

 

switch   name          scope rate-limit queue

-------- ------------- ----- ---------- -----

Spine1   stp           local 1000        8    

Spine1   lacp          local 1000       9

Spine1   system-d      local 1000       10

Spine1   igmp          local 1000       11

Spine1   bcast         local 1000       12

Spine1   icmpv6        local 1000       13

Spine1   tcp-analytics local 1000       14

Spine1   fabric        local 1000       15

Spine1   kpalv         local 1000       16

Spine1   ecp           local 1000       17

Spine1   arp           local 1000       18

Spine1   lldp          local 1000       19

Spine1   vport-stats   local 1000       20

Spine1   dhcp          local 1000       21

Spine1   pim           local 1000       22

Spine1   local-subnet  local 1000       23

Spine1   bgp           local 1000       24

Spine1   ospf          local 1000       25

 

Each traffic class has its own CoS queue; for example, all DHCP traffic uses queue 21. CoS queues 0-7 are reserved CPU queues, and any traffic not in one of the listed classes uses queue 0.

Netvisor OS assigns a default rate-limit of 1000 to each queue, but you can modify the rate using the following syntax:

cpu-class-modify cpu-class-name DHCP rate-limit 2000

 

You must restart Netvisor OS for the change to take effect on the switch. To avoid multiple restarts, modify any or all traffic classes at one time and then reboot the switch once.

Configuring User-defined Classes

1. Create a CPU class and specify the rate-limit:

cpu-class-create name ftp rate 1000

Netvisor OS assigns a CoS class to the new CPU class.

2. Display the CPU class configuration:

cpu-class-show name ftp

name    queue   rate

-----   -----   -----

ftp     17      1000

 

3. You can now create a vFlow using the ftp class:

 

vflow-create name ftp scope local proto ftp cpu-class ftp action copy-to-cpu

The cpu-class parameter is only valid if the action copy-to-cpu or to-cpu is specified.

 

You can also display statistics for each vFlow using the command, cpu-cos-stats-show:

cpu-cos-stats-show

switch   name          cos out-pkts drop-pkts

-------- ------------- --- -------- ---------

Spine1   class0        0   0        0

Spine1   class1        1   0        0

Spine1   class2        2   0        0

Spine1   class3        3   0        0

Spine1   class4        4   0        0

Spine1   class5        5   0        0

Spine1   class6        6   0        0

Spine1   class7        7   0        0

Spine1   stp           8   298K     0

Spine1   lacp          9   0        0

Spine1   system-d      10  0        0

Spine1   igmp          11  35.1K    0

Spine1   bcast         12  0        0

Spine1   icmpv6        13  0        0

Spine1   tcp-analytics 14  0        0

Spine1   fabric        15  5.02K    0

Spine1   kpalv         16  75.4K    0

Spine1   ecp           17  0        0

Spine1   arp           18  3.02K    0

Spine1   lldp          19  15.1K    0

Spine1   vport-stats   20  0        0

Spine1   dhcp          21  0        0

Spine1   pim           22  0        0

Spine1   local-subnet  23  31.0K    0

Spine1   bgp           24  0        0

Spine1   ospf          25  0        0

Spine1   ftp           26  0        0

 

Additional Control Plane Traffic Protection Enhancements

Additional Control Plane Traffic Protection (CPTP) enhancements add a new feature that allows you to impose rate limits on traffic that arrives on the CPU management port. When control plane traffic arrives out-of-band on the management NIC of the switch, there is otherwise no such protection, and excessive control plane traffic may saturate the 1G management port or starve the CPU of other critical traffic.

You can restrict the ingress traffic types on a port used as a management interface, and drop packets that exceed a configured bandwidth limit.

Netvisor OS now allows you to change the settings for traffic sent to the management NIC. Currently, you can manage the following types of traffic: ARP, ICMP, SSH, SNMP, fabric, broadcast, NFS, web, web-SSL, and net-API.

This feature is disabled by default.

You can manage the settings using the following new Netvisor OS commands:

cpu-mgmt-class-modify

name arp|icmp|ssh|snmp|fabric|bcast|nfs|web|web-ssl|net-api

Select the class of traffic to modify.

one or more of the following options:

rate-limit unlimited

Specify the ingress rate limit on the management port in bps or unlimited.

burst-size default

Specify the ingress traffic burst size in bytes or default.
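For example, the following sketch caps ingress ICMP traffic on the management port at 1 Mbps while keeping the default burst size (the values are illustrative):

CLI network-admin@switch > cpu-mgmt-class-modify name icmp rate-limit 1000000 burst-size default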

cpu-mgmt-class-show

name arp|icmp|ssh|snmp|fabric|bcast|nfs|web|web-ssl|net-api

Displays the class of traffic.

one or more of the following options:

rate-limit unlimited

Displays the ingress rate limit on the management port in bps or unlimited.

burst-size default

Displays the ingress traffic burst size in bytes or default.

cpu-mgmt-class-stats-settings-modify

enable|disable

Specify if you want to enable statistics collection.

interval duration: #d#h#m#s

Specify the interval duration.

disk-space disk-space-number

Specify the amount of disk space for the statistics.
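For example, the following sketch enables statistics collection with a one-minute interval and 50M of disk space (the values are illustrative):

CLI network-admin@switch > cpu-mgmt-class-stats-settings-modify enable interval 1m disk-space 50M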

cpu-mgmt-class-stats-settings-show

enable|disable

Displays if statistics collection is enabled or disabled.

interval duration: #d#h#m#s

Displays the interval duration.

disk-space disk-space-number

Displays the amount of disk space for the statistics.

cpu-mgmt-class-stats-show

time date/time: yyyy-mm-ddTHH:mm:ss

Displays the time to start collection.

start-time date/time: yyyy-mm-ddTHH:mm:ss

Displays the start time of collection.

end-time date/time: yyyy-mm-ddTHH:mm:ss

Displays the end time of collection.

duration duration: #d#h#m#s

Displays the duration of collection.

interval duration: #d#h#m#s

Displays the interval between collection.

since-start

Displays the statistics collected since the start time.

older-than duration: #d#h#m#s

Displays the statistics older than the specified time.

within-last duration: #d#h#m#s

Displays the statistics collected within last time.

name arp|icmp|ssh|snmp|fabric|bcast|nfs|web|web-ssl|net-api

Displays the CPU management class.

in-bytes in-bytes-number

Displays the ingress bytes processed.

in-pkts in-pkts-number

Displays the ingress packets processed.

drop-pkts drop-pkts-number

Displays the number of ingress packets dropped.

cpu-mgmt-class-show

name    rate-limit

------- ----------

arp     unlimited

icmp    unlimited

ssh     unlimited

snmp    unlimited

fabric  unlimited

bcast   unlimited

nfs     unlimited

web     unlimited

web-ssl unlimited

net-api unlimited

 

cpu-mgmt-class-stats-show

 

switch   name    in-bytes in-pkts drop-pkts

-------- ------- -------- ------- ---------

dorado05 arp     0        0       0         

dorado05 icmp    0        0       0         

dorado05 ssh     0        0       0         

dorado05 snmp    0        0       0         

dorado05 fabric  0        0       0         

dorado05 bcast   0        0       0         

dorado05 nfs     0        0       0         

dorado05 web     0        0       0         

dorado05 web-ssl 0        0       0         

dorado05 net-api 0        0       0

 

 

Displaying Physical Port Layer 2 Information

You can display physical port information at Layer 2 using the port-phy-show command. This command displays information about the default VLAN, link quality, maximum frame size, Ethernet mode, speed, and status. You can also display the default VLAN for a port.

CLI network-admin@switch > port-phy-show

port state speed eth-mode    max-frame learning def-vlan

---- ----- ----- --------    --------  -------- --------

17   up    1000  1000base-x  1540      on       1

19   up    10000 10Gbase-cr  10232     on       1

Displaying Transceiver Information

You can display information about the transceivers connected to the switch using the port-xcvr-show command:

CLI network-admin@switch > port-xcvr-show

switch         port vendor-name      part-number      serial-number    

-------------- ---- ---------------- ---------------- ----------------

Spine1         3    3M               1410-P17-00-0.50                  

Spine1         4    3M               1410-P17-00-0.50                  

Spine1         57   3M               9QA0-111-12-1.00 V10B9252         

Spine1         65   3M               9QA0-111-12-1.00 V10B9614

 

Configuring Minimum and Maximum Bandwidth on Ports

This feature introduces bandwidth guarantees on switch ports. Currently, Pluribus Networks switches allow rate limiting only for CPU-facing (PCIe, data, and span) ports. Using this feature, you can configure bandwidth guarantees at the egress CoS (Class of Service) queue level and manage prioritized traffic. The feature is also used for setting SLAs (Service Level Agreements). Currently, Netvisor OS provides maximum bandwidth policing at the vFlow level, but it is not possible to set a guaranteed minimum bandwidth; this feature addresses that limitation.

Switch hardware supports minimum and maximum bandwidth guarantees configured at the port and CoS queue level. Configuration is per port, with settings expressed as a percentage of port speed, so the data rate is determined internally when the command executes. Additionally, when the port speed is updated, the configuration internally re-adjusts the minimum and maximum bandwidth rates for the applicable ports. The port-config-show command displays 100% allocations by default, and new configurations are displayed as additional elements, sorted by CoS queue.

New commands are introduced to modify and show only modified port configurations. Ports not displayed in the show command output have the default settings: 100% link capacity and no minimum guarantee for each CoS queue.

CLI (network-admin@Spine1) > port-cos-bw-modify

cos integer

Specify the CoS priority between 0 and 7.

port port-list

Specify the physical port(s).

min-bw-guarantee min-bw-guarantee-string

Specify the minimum bandwidth as a percentage.

max-bw-limit max-bw-limit-string

Specify the maximum bandwidth as a percentage.

 

CLI (network-admin@Spine1) > port-cos-bw-show

cos integer

Specify the CoS priority.

port port-list

Specify the physical port(s).

 

CLI (network-admin@Spine1) > port-cos-bw-modify port 2-5 cos 5 min-bw-guarantee 10

 

CLI (network-admin@Spine1) > port-cos-bw-show

 

switch           cos port   min-bw-guarantee max-bw-limit weight

---------------- --- ------ ---------------- ------------ ------

Spine1          0   1-72   0%               100%         16     

Spine1          1   1-72   0%               100%         32     

Spine1          2   1-72   0%               100%         32     

Spine1          3   1-72   0%               100%         32     

Spine1          4   1-72   0%               100%         32     

Spine1          5   1,6-72 0%               100%         32     

Spine1          5   2-5    10%              100%         32     

Spine1          6   1-72   0%               100%         64     

Spine1          7   1-72   0%               100%         127

CLI (network-admin@Spine1) > port-cos-bw-modify port 11-13 cos 4 min-bw-guarantee 20 max-bw-limit 80

CLI (network-admin@Spine1) > port-cos-bw-show

 

switch         cos     port        min-bw  max-bw

-------        ----    ------      ------  -----

Spine1         0       0-72        100%    100%

Spine1         1       0-72        100%    100%

Spine1         2       0-72        100%    100%

Spine1         3       0-72        100%    100%

Spine1         4       0-10,14-72  100%    100%

Spine1         4       11-13       20%     80%

Spine1         5       2-10        10%     100%

Spine1         6       0-72        100%    100%

Spine1         7       0-72        100%    100%

 

Changing the port settings to new values overrides the previous settings.

Changes to Class of Service (CoS) Behavior

Netvisor OS now automatically weights Class of Service (CoS) queues with minimum bandwidth guarantees. When you configure minimum bandwidth on a port queue using CoS, the remaining bandwidth is assigned to the rest of the queues in the same ratio as the minimum bandwidth.

Once you configure minimum bandwidth on a CoS port using the command, port-cos-bw-modify, the command, port-cos-weight-modify, no longer works to configure the CoS queue weight.

When you configure a minimum bandwidth without specifying a weight value, the weight for the port and CoS is automatically set on a scale of 1 to 100. For example, if you configure the minimum bandwidth as 10%, Netvisor OS automatically assigns the queue a weight value of 1. You can also assign a specific weight value to the queue.

Additionally, you can configure strict priority scheduling for any of the queues. By default, CoS 7 uses strict priority scheduling.

To allow automatic weight assignment for CoS queues, use the following syntax:

system-settings-modify cosq-weight-auto|no-cosq-weight-auto

 

port-cos-bw-modify

cos integer

Specify the CoS priority between 0 and 7.

port port-list

Specify the physical port(s).

min-bw-guarantee min-bw-guarantee-string

Specify the minimum bandwidth as a percentage.

max-bw-limit max-bw-limit-string

Specify the maximum bandwidth as a percentage.

weight|no-weight

Specify the scheduling weight applied after the bandwidth guarantee is met.

port-cos-bw-show

switch  cos port min-bw-guarantee max-bw-limit weight

------- --- ---- ---------------- ------------ ------

Spine-1 0   1-72 0%               100%         0

Spine-1 1   1-72 0%               100%         0

Spine-1 2   1-72 0%               100%         0

Spine-1 3   1-72 0%               100%         0

Spine-1 4   1-72 0%               100%         0

Spine-1 5   1-72 0%               100%         0

Spine-1 6   1-72 0%               100%         0

Spine-1 7   1-72 0%               100%         0

 

To auto-configure bandwidth, use the following syntax:

port-cos-bw-modify cos 1 port 1 min-bw-guarantee 20

port-cos-bw-show

 

switch  cos port min-bw-guarantee max-bw-limit weight

------- --- ---- ---------------- ------------ ------

Spine-1 0   1-72 0%               100%         0

Spine-1 1   2-72 0%               100%         0

Spine-1 1   1    20%              100%         2

Spine-1 2   1-72 0%               100%         0

Spine-1 3   1-72 0%               100%         0

Spine-1 4   1-72 0%               100%         0

Spine-1 5   1-72 0%               100%         0

Spine-1 6   1-72 0%               100%         0

Spine-1 7   1-72 0%               100%         0

 

To configure a specific weight, use the following syntax:

port-cos-bw-modify cos 1 port 1 weight 6

port-cos-bw-show

 

switch  cos port min-bw-guarantee max-bw-limit weight

------- --- ---- ---------------- ------------ ------

Spine-1 0   1-72 0%               100%         0

Spine-1 1   2-72 0%               100%         0

Spine-1 1   1    20%              100%         6

Spine-1 2   1-72 0%               100%         0

Spine-1 3   1-72 0%               100%         0

Spine-1 4   1-72 0%               100%         0

Spine-1 5   1-72 0%               100%         0

Spine-1 6   1-72 0%               100%         0

Spine-1 7   1-72 0%               100%         0

 

Configuring Port Storm Control

Port Storm Control prevents traffic on a LAN from being disrupted by a broadcast, multicast, or unicast storm on a port. A LAN storm occurs when packets flood the LAN, creating excessive traffic and degrading network performance.

Use the port-storm-control-modify command to modify the percentage of total available bandwidth that can be used by broadcast, multicast, or unicast traffic.

CLI network-admin@switch > port-storm-control-modify port 11 unknown-ucast-level 1.1

Use the port-storm-control-show command to display the configuration:

CLI network-admin@switch > port-storm-control-show

switch   port speed unknown-ucast-level unknown-mcast-level broadcast-level vlag trunk

-------- ---- ----- ------------------- ------------------- --------------- ---- -----

draco01  1    40g   30%                 30%                 30%

draco01  2    10g   30%                 30%                 30%

draco01  3    10g   30%                 30%                 30%

draco01  4    10g   30%                 30%                 30%

draco01  5    40g   30%                 30%                 30%

draco01  6    40g   30%                 30%                 30%

draco01  7    10g   30%                 30%                 30%

Enabling Jumbo Frame Support

Jumbo frames are frames that are bigger than the standard Ethernet frame size of 1518 bytes (including the Layer 2 (L2) header and FCS). The exact definition of jumbo frame size is vendor-dependent, as jumbo frames are not part of the IEEE standard.

When the jumbo frame feature is enabled on a port, the port can switch large or jumbo frames. This feature optimizes server-to-server performance. The default Maximum Transmission Unit (MTU) frame size is 1548 bytes for all Ethernet ports. The MTU size is increased to 9216 bytes when the jumbo frame feature is enabled on a port.

Jumbo frame support is disabled by default.

To enable jumbo frame support, add the jumbo parameter to the port-config-modify command:

CLI network-admin@switch > port-config-modify port port-list jumbo
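For example, the following sketch enables jumbo frames on two ports (the port numbers are illustrative; the no-jumbo keyword to revert is assumed to follow the CLI's usual negation pattern):

CLI network-admin@switch > port-config-modify port 17,19 jumbo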

About Port Isolation

Port Isolation prevents local switching among ports on a Netvisor OS switch or on a pair of Netvisor OS switches configured as a cluster. With Port Isolation, hosts that are part of the same Layer 2 domain and connect to isolated ports are not allowed to communicate directly or to learn each other's MAC addresses. Communication between these hosts occurs through a Layer 3 device. This is useful for securing bridged east-west traffic through a firewall.

When using this feature on ports within a cluster, you must configure the port-link state association rules between the uplink ports and the downlink isolated ports.

Example Use-Case Configuration

In a typical scenario, as shown in Figure 4, ports 1, 2, and 3 are configured as isolated ports so that the hosts attached to these ports cannot communicate with each other directly, but only through the upstream firewall or router that is connected to port 64.

Figure 4: Port Isolation scenario


 

As shown in Figure 4, create the configuration as follows:

PN-HA1

CLI network-admin@switch > port-config-modify port 1 no-local-switching

CLI network-admin@switch > port-config-modify port 2 no-local-switching

PN-HA2

CLI network-admin@switch > port-config-modify port 2 no-local-switching

CLI network-admin@switch > port-config-modify port 3 no-local-switching

Typically, the upstream router or firewall is configured to perform local proxy ARPs and/or NDP proxy and respond to all ARP requests and/or Neighbor Solicitations coming from isolated hosts. To avoid interfering with local proxy ARPs and NDP proxy, disable ARP and ND Optimization as follows:

CLI network-admin@switch > system-settings-modify no-optimize-arps

CLI network-admin@switch > system-settings-modify no-optimize-nd

Configuring Port Isolation

To configure Port Isolation, use the following steps:

1. Configure the isolated ports. In this example, ports 1 and 2:

CLI network-admin@switch > port-config-modify port 1,2 no-local-switching

2. Optionally, configure the port link state association. A port association is required to match the link state of the downlink isolated ports with that of the uplink ports. When all uplink ports are down, the downlink isolated ports are administratively disabled until one of the uplinks becomes operational again. In this example, the port association name is PA, the uplink (master) ports value is 64, and the isolated downlink (slave) ports values are 1 and 2.

CLI network-admin@switch > port-association-create name PA master-ports 64 slave-ports 1,2 policy any-master

3. Optionally, disable ARP and ND optimization.

CLI network-admin@switch > system-settings-modify no-optimize-arps

CLI network-admin@switch > system-settings-modify no-optimize-nd

This feature adds the no-local-switching option to the port-config-modify command. To configure one or more isolated ports:

CLI network-admin@switch > port-config-modify port port-list no-local-switching

To view ports that are impacted by the no-local-switching option, use the port-egress-show command:

CLI network-admin@switch > port-egress-show

switch port egress    rx-only active-active-vlags loopback mir_prevent_out no-local-switching-out

------ ---- --------- ------- ------------------- -------- --------------- ----------------------

       1    0-72      none    none                none     none            none

       2    0-72      none    none                none     none            none

       3    0-72      none    none                none     none            none

       4    0-72      none    none                none     none            none

       5    0-4,11-72 none    none                none     none            5-10

       6    0-4,11-72 none    none                none     none            5-10

       7    0-4,11-72 none    none                none     none            5-10

       8    0-4,11-72 none    none                none     none            5-10

 

The Port Isolation options for the trunk-create, trunk-modify, and trunk-show commands are as follows:

CLI network-admin@switch > trunk-create

trunk-create

Create a trunk configuration for link aggregation

one or more of the following options:

local-switching|no-local-switching

Specify no-local-switching if you do not want the port to bridge traffic to another no-local-switching port.

CLI network-admin@switch > trunk-modify

trunk-modify

Modify a trunk configuration for link aggregation

one or more of the following options:

reflect|noreflect

Specify if physical port reflection is enabled or not.

CLI network-admin@switch > trunk-show

trunk-show

Display trunk configuration

one or more of the following options:

reflect|noreflect

Displays if physical port reflection is enabled or not.

 

Support for Priority-based Flow Control


 

Informational Note:  This feature is supported on the following platforms: S4048-ON, S6000-ON, AS5712-54X, AS6712-32X, F9272-X, and F9232-Q.

Priority Flow Control (PFC) is an IEEE standard (802.1Qbb) for link-level flow control on Ethernet networks. Functionally, this feature is similar to the IEEE 802.3 PAUSE mechanism, except that it operates at the granularity of individual packet priorities or traffic classes instead of at the port level. When a queue corresponding to traffic with a particular traffic class reaches a predetermined threshold, either automatically or statically set, the switch chip generates a PFC frame and sends it back to the sender. For PFC to work effectively end to end on the network, all switches and hosts in the traffic path must be configured to enable PFC and configured with traffic class to priority mappings.

Netvisor OS introduces a new command to configure priorities, or traffic classes, for PFC. The configuration allows you to add ports where PFC is enabled. When PFC is enabled, Netvisor OS maps traffic classes to CoS queues, as well as to packet priorities, in the background.

PFC is enabled for both transmit and receive on the selected port. For transmit, Netvisor OS pauses traffic corresponding to the traffic class indicated in the received PFC frame. For receive, Netvisor OS generates a PFC frame when a queue corresponding to a traffic class reaches the pause threshold. Netvisor OS auto-configures parameters such as buffer thresholds and pause timer values. Disabling PFC turns off PFC for receive and transmit, although the traffic class priority and queue mappings remain.

On supported switches, even with ingress admission control enabled (in lossless mode), by default only traffic class or priority group 7 is set up with memory management unit (MMU) buffer resources. Packets of all priorities utilize the resources of the default priority group unless specifically configured. This means that when you enable a new priority group for PFC, the buffer configuration is generated and saved in the chip configuration file, which is read during system initialization for MMU setup. As a result, when you enable a new priority for PFC, restarting Netvisor OS is required. Adding new ports to an existing priority group setting does not require restarting Netvisor OS.

Up to three priority group buffer settings can be configured on switches in Netvisor OS. If you attempt to configure more than three, an error message is returned.

To create a new PFC configuration on ports 1-10 with a priority of 2, use the following command:

CLI (network-admin@Spine1) > port-pfc-create priority 2 port 1-10

Priority configuration will be effective after restart.

 

To modify the ports and change them to 11-15, use the following command:

CLI (network-admin@Spine1) > port-pfc-modify priority 2 port 11-15

Priority configuration will be effective after restart.

To delete the configuration, use the following command:

CLI (network-admin@Spine1) > port-pfc-delete priority 2 port 11-15

 

To display the configuration, use the port-pfc-show command:

CLI (network-admin@Spine1) > port-pfc-show

switch  priority port  error

------- -------- ----- -----

Spine1  2        11-20      

Spine1  3        11-20      

 

Support for Priority-based Flow Control Port Statistics

Priority-based Flow Control (PFC) was introduced in Netvisor OS in Version 2.5.4, but the feature implementation did not include displaying statistics related to PFC. It is helpful to view the stats to confirm or debug traffic characteristics when PFC is in use. New commands allow you to display PFC stats per port and adjust the statistics collection.

port-pfc-clear

port port-list

Specify the ports to delete PFC statistics.

port-pfc-stats-settings-modify

enable|disable

Specify if you want to enable or disable PFC statistics collection.

interval duration: #d#h#m#s

Specify the interval between statistics collection.

disk-space disk-space-number

Specify the amount of disk space for statistics collection.
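For example, the following sketch enables PFC statistics collection at a one-minute interval (the values are illustrative):

CLI network-admin@switch > port-pfc-stats-settings-modify enable interval 1m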

port-pfc-stats-settings-show

enable|disable

Specify if you want to enable or disable PFC statistics collection.

interval duration: #d#h#m#s

Specify the interval between statistics collection.

disk-space disk-space-number

Specify the amount of disk space for statistics collection.

port-pfc-stats-show

time date/time: yyyy-mm-ddTHH:mm:ss

Displays the date and time for statistics collection.

start-time date/time: yyyy-mm-ddTHH:mm:ss

Displays the start date and time for statistics collection.

end-time date/time: yyyy-mm-ddTHH:mm:ss

Displays the end date and time for statistics collection.

duration duration: #d#h#m#s

Displays the duration for statistics collection.

interval duration: #d#h#m#s

Displays the interval between statistics collection.

since-start

Displays the statistics since the start time.

older-than duration: #d#h#m#s

Displays the statistics older than the specified time.

within-last duration: #d#h#m#s

Displays the statistics within a specified time.

port port-list

Displays the port list.

      

Safely Restoring Ports for Cluster Configurations


 

Informational Note:  This feature is only applied during the initial start up of the network.


Sub-second traffic loss during failover events is required for a cluster configuration. There are two types of ports providing redundant data paths: 1) Layer 3 ports over ECMP redundant routed paths, and 2) virtual LAGs (VLAGs) providing redundant Layer 2 paths. During failover and recovery port events, it can take measurable time to change the hardware routing and MAC tables on larger networks, and this delay incurs traffic loss on the network. To reduce the delay, this feature allows you to incrementally restore these ports at start up. Incrementally restoring the ports prevents the hardware changes from contending with each other and reduces the delay between a port coming up and the hardware being updated with the appropriate Layer 3 and Layer 2 information for the port. This process ensures sub-second failover.

All non-Layer 3 and non-VLAG ports are restored first. This allows the cluster links to activate and the cluster configuration to synchronize information. Layer 3 and VLAG port restoration starts after the cluster synchronizes, which is predicated on the cluster becoming active, all Layer 2 and Layer 3 entries (such as status updates) being exchanged, the cluster STP status being synchronized, and all router interfaces being initialized.

The parameter, maximum-sync-delay, controls the maximum time to wait for synchronization in the case where the cluster cannot synchronize information. After synchronization is complete, Layer 3 ports are restored first, since Layer 3 traffic can traverse the cluster link to the peer VLAG port if needed. Currently, the reverse is typically not true.

If VLAG ports are restored first, a Layer 3 adjacency between the two cluster nodes may be needed but may not exist in some network configurations. After the Layer 3 ports are restored, Netvisor OS waits for a configurable Layer 3 to VLAG delay to allow time for the routing protocols to converge and insert the routes. The delay time defaults to 15 seconds.

After the delay, the VLAG ports are restored incrementally. Incrementally restoring ports allows enough time to move Layer 2 entries from the cluster link to the port. It also confines traffic loss to small intervals of 200-300 ms per port rather than one large time span. This is particularly important for server clusters, where small temporary losses are not an issue but a large continuous traffic loss causes failures or timeouts. If the node coming up is the cluster master, no staggering and no Layer 3 to VLAG wait is applied: a node that comes up as the cluster master means the peer is down or coming up and is not handling traffic, so Netvisor OS safely restores the ports as soon as possible to start traffic flowing between the nodes.

To configure a cluster for restoring Layer 3 ports, use the following commands:

cluster-bringup-modify

Modifies the cluster bring up configuration.

Specify one or more of the following options

l3-port-bringup-mode staggered|simultaneous

Specify the Layer 3 port bring up mode during start up.

l3-port-staggered-interval duration: #d#h#m#s

Specify the interval between Layer 3 ports in Layer 3 staggered mode. This can be in days, hours, minutes, or seconds.

vlag-port-bringup-mode staggered|simultaneous

Specify the VLAG port bring up mode during start up.

vlag-port-staggered-interval duration: #d#h#m#s   

 Specify the interval between VLAG ports in VLAG staggered mode.

This can be in days, hours, minutes, or seconds.

maximum-sync-delay duration: #d#h#m#s

Specify the maximum delay to wait for cluster to synchronize before starting Layer 3 or VLAG port bring up.

This can be in days, hours, minutes, or seconds.

l3-to-vlag-delay duration: #d#h#m#s

Specify the delay between the last Layer 3 port and the first VLAG port bring up.

This can be in days, hours, minutes, or seconds. The default value is 15 seconds.
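For example, the following sketch staggers VLAG port bring up at 3-second intervals and extends the Layer 3 to VLAG delay to 30 seconds (the values are illustrative):

CLI network-admin@switch > cluster-bringup-modify vlag-port-bringup-mode staggered vlag-port-staggered-interval 3s l3-to-vlag-delay 30s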

 

To display the cluster port restoration configuration, use the cluster-bringup-show command:

cluster-bringup-show

Displays the cluster bring up configuration information.

CLI network-admin@switch > cluster-bringup-show

switch:                       Leaf1

state:                        

l3-port-bringup-mode:         staggered

l3-port-staggered-interval:   3s

vlag-port-bringup-mode:       staggered

vlag-port-staggered-interval: 3s

maximum-sync-delay:           1m

l3-to-vlag-delay:             15s

l3-to-vlan-interface-delay:   0s

Limiting the Number of MAC Addresses per Port

You can now limit the number of MAC addresses per port. You can configure port security on ports or trunks.

New Commands 

mac-limit-modify

port port-list

Specify the port list.

mac-limit mac-limit-number

Specify the number of MAC addresses to limit on the port.

mac-limit-action log|drop|disable

Specify the action to take when the MAC address limit is exceeded. If you select log, an event is logged to the event log. If you specify drop, the event is logged and the packet dropped. If you specify disable, the event is logged and the port is disabled.
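For example, the following sketch limits port 5 to eight MAC addresses and drops traffic from any additional addresses, matching the first row of the mac-limit-show output below:

CLI network-admin@switch > mac-limit-modify port 5 mac-limit 8 mac-limit-action drop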

 

mac-limit-show

port port-list

Displays the port list.

mac-limit mac-limit-number

Displays the number of MAC addresses to limit on the port.

mac-limit-action log|drop|disable

Displays the action to take when the MAC address limit is exceeded.

mac-number number-mac-number

Displays the number of MAC addresses learned on the port.

 

CLI network-admin@switch > mac-limit-show

switch   port mac-limit mac-limit-action num-macs

-------- ---- --------- ---------------- --------

Leaf01   5    8         drop             0        

Leaf02   5    0         log              0        

Leaf03   5    0         log              0

 


 

Support for Fabric Guard

Currently, Netvisor OS detects a Layer 2 loop using STP, LLDP, or loop detect code. However, if a third-party device that consumes LLDP, such as a hypervisor vSwitch, is connected to a Pluribus Networks switch and the port is configured as an edge port, Netvisor OS cannot detect loops on the network.

If a port is configured as a fabric-guard port, Netvisor OS sends global discovery multicast packets on the port after the port is physically up and in an adjacency wait state. If a port with a fabric-guard configuration receives a global discovery packet, Netvisor OS disables the port in the same way LLDP disables a port when receiving messages from the same switch. To re-enable the port once the loop is fixed, you must manually enable the port using the command, port-config-modify port port-number enable.
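For example, to re-enable a port that fabric guard disabled (the port number is illustrative):

CLI network-admin@switch > port-config-modify port 25 enable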

To enable fabric guard, use the following syntax:

CLI network-admin@switch > port-config-modify port port-number fabric-guard

 

To disable fabric guard, use the following syntax:

CLI network-admin@switch > port-config-modify port port-number no-fabric-guard