Adaptive Cloud Fabric
Pluribus Fabric
The fabric is an essential object for switch operations. When you add switches to the fabric, all switches fall under a single management domain, which is made highly available through link aggregation and load balancing across network resources. The fabric performs a classic database three-phase commit for configuration changes: all members of the fabric must accept a configuration change before the change executes in the fabric.
We highly recommend an in-band fabric because it offers better redundancy than the management network. Important things to consider for a healthy fabric:
- All fabric nodes must be able to reach each other.
- Pluribus fabric traffic consumes some bandwidth; in an in-band setup, fabric traffic has priority over other switch traffic.
- Traffic prioritization cannot be controlled on the management network because that network is customer-provided.
CLI Usage Note
To execute a command on the local switch only, use the command: switch-local. An asterisk (*) displayed in the prompt indicates that commands run on the local switch. To revert to displaying information from all switches, use the command: switch.
In some cases, a command's horizontal output is wide and wraps inside the terminal CLI session. To display the information in a vertical format, append the layout vertical option to the command. To revert, omit the layout option.
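A minimal illustration combining both notes; the prompt and switch name are taken from the examples in this guide:
CLI (network-admin@udev-leo1) > switch-local
CLI (network-admin@udev-leo1*) > fabric-node-show layout vertical
CLI (network-admin@udev-leo1*) > switch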
UNUM: Use the Topology Dashboard to review and manage Fabric settings and functionality.
Modify a Fabric Network using the command: fabric-local-modify.
CLI (network-admin@udev-leo1) > fabric-local-modify
fabric-local-modify    modify fabric local information
one or more of the following options:
   vlan 0..4095                       VLAN assigned to fabric
   change-password                    current password via API / change password via CLI
   reset-password                     plain-text password via API / reset password via CLI
   fabric-network in-band|mgmt|vmgmt  fabric administration network
   control-network in-band|mgmt|vmgmt control plane network
   fabric-advertisement-network inband-mgmt|inband-vmgmt|inband-only|mgmt-only
                                      network to send fabric advertisements on
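As a usage sketch, the command below changes the network used for fabric advertisements; the value chosen here is illustrative, so use whichever option matches your deployment:
CLI (network-admin@udev-leo1) > fabric-local-modify fabric-advertisement-network inband-mgmt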
Fabric Status
Confirm the fabric is in good health before continuing to other configured objects.
Verify basic fabric information with the commands: fabric-show and fabric-info
Fabric Show
CLI (network-admin@udev-leo1) > fabric-show
name                  id               vlan fabric-network control-network tid fabric-advertisement-network
--------------------- ---------------- ---- -------------- --------------- --- ----------------------------
dev-leo               c0001bd:5e5d4ed2 1    mgmt           in-band         196 inband-mgmt
vcf_testing           8000034:58548346 1    mgmt           mgmt            2   inband-mgmt
leo-3                 c000184:58ed9e2f 1    mgmt           mgmt            523 inband-mgmt
uguidell2             b000327:5efbfce5 1    mgmt           in-band         5   inband-mgmt
ansLeo_1              c00024e:5f3664e2 1    mgmt           mgmt            134 inband-mgmt
uans-test             c00024e:5f3522e6 1    mgmt           mgmt            8   inband-mgmt
Ansible_QS_1          b001d01:5f51dfbc 1    mgmt           mgmt            57  inband-mgmt
gui1                  9000a73:5f573a57 1    mgmt           mgmt            26  inband-mgmt
fab-udev-leo5         6000011:5d525a75 1    in-band        in-band         9   inband-mgmt
GUI_01                9000a73:5f5f517a 1    mgmt           mgmt            36  inband-mgmt
gui2                  b001a35:5f605895 1    mgmt           mgmt            23  inband-mgmt
Ansibel_test2         b00208e:5f610318 1    mgmt           mgmt            12  inband-mgmt
Ansibel_test1         b00208d:5f61034e 1    mgmt           mgmt            7   inband-mgmt
uans-leo-fab          c00024e:5f61117c 1    mgmt           mgmt            3   inband-mgmt
vcfc-smoke-0915021359 b00082b:5f612f11 1    mgmt           mgmt            3   inband-mgmt
vcfc-smoke-0915021826 b0008af:5f612ff5 1    mgmt           mgmt            2   inband-mgmt
Fabric Info
CLI (network-admin@udev-leo1) > fabric-info
name:                         dev-leo
id:                           c0001bd:5e5d4ed2
vlan:                         1
fabric-network:               mgmt
control-network:              in-band
tid:                          196
fabric-advertisement-network: inband-mgmt
Fabric Node State
Verify the fabric node state with the command: fabric-node-show
CLI (network-admin@udev-leo1) > fabric-node-show
name       fab-name mgmt-ip        in-band-ip   in-band-vlan-type fab-tid out-port version          state  firmware-upgrade device-state
---------- -------- -------------- ------------ ----------------- ------- -------- ---------------- ------ ---------------- ------------
udev-leo1  dev-leo  10.110.0.48/22 6.6.6.210/24 public            196              6.0.0-6000016331 online not-required     ok
udev-leo-4 dev-leo  10.110.0.58/22 6.6.6.214/24 public            196     3        6.0.0-6000016331 online not-required     ok
udev-leo-3 dev-leo  10.110.0.56/22 6.6.6.213/24 public            196     272      6.0.0-6000016331 online not-required     ok
Fabric Node Show
CLI (network-admin@corvus-ring-1) > fabric-node-show
name          fab-name mgmt-ip       in-band-ip       in-band-vlan-type fab-tid cluster-tid out-port version          state  firmware-upgrade device-state
------------- -------- ------------- ---------------- ----------------- ------- ----------- -------- ---------------- ------ ---------------- ------------
corvus-ring-1 src      10.13.2.29/23 192.16.10.161/29 public            73730   2647                 6.0.1-6000116966 online not-required     ok
corvus-ring-2 src      10.13.2.30/23 192.16.10.169/29 public            73730   2647                 6.0.1-6000116966 online not-required     ok
pavo-ring-7   src      10.13.2.38/23 192.16.10.145/29 public            73730   2629                 6.0.1-6000116966 online not-required     ok
pavo-ring-8   src      10.13.2.39/23 192.16.10.153/29 public            73730   2629        272      6.0.1-6000116966 online not-required     ok
pavo-ring-12  src      10.13.2.43/23 192.16.10.9/29   public            73730   2639                 6.0.1-6000116966 online not-required     ok
pavo-ring-9   src      10.13.2.40/23 192.16.10.137/29 public            73730               4        6.0.1-6000116966 online not-required     ok
pavo-ring-11  src      10.13.2.42/23 192.16.10.1/29   public            73730   2639                 6.0.1-6000116966 online not-required     ok
pavo-ring-2   src      10.13.2.20/23 192.16.10.25/29  public            73730   2637                 6.0.1-6000116966 online not-required     ok
pavo-ring-1   src      10.13.2.19/23 192.16.10.17/29  public            73730   2637                 6.0.1-6000116966 online not-required     ok
pavo-ring-6   src      10.13.2.37/23 192.16.10.129/29 public            73730   2629        273      6.0.1-6000116966 online not-required     ok
pavo-ring-5   src      10.13.2.36/23 192.16.10.121/29 public            73730   2629        273      6.0.1-6000116966 online not-required     ok
delph-ring-1  src      10.13.2.21/23 192.16.10.41/29  public            73730   2642        272      6.0.1-6000116966 online not-required     ok
delph-ring-4  src      10.13.2.32/23 192.16.10.105/29 public            73730   2629        272      6.0.1-6000116966 online not-required     ok
delph-ring-3  src      10.13.2.31/23 192.16.10.97/29  public            73730   2629        272      6.0.1-6000116966 online not-required     ok
scorp-ring-2  src      10.13.2.28/23 192.16.10.89/29  public            73730   2647        272      6.0.1-6000116966 online not-required     ok
delph-ring-2  src      10.13.2.22/23 192.16.10.49/29  public            73730   2642        272      6.0.1-6000116966 online not-required     ok
pavo-ring-3   src      10.13.2.34/23 192.16.10.57/29  public            73730               273      6.0.1-6000116966 online not-required     ok
hydra-ring-2  src      10.13.2.24/23 192.16.10.73/29  public            73730   3623        272      6.0.1-6000116966 online not-required     ok
scorp-ring-1  src      10.13.2.27/23 192.16.10.81/29  public            73730   2647        272      6.0.1-6000116966 online not-required     ok
vnv-ring-1    src      10.13.2.51/23 192.16.10.185/29 public            73730                        6.0.1-6000116966 online not-required     ok
pavo-ring-4   src      10.13.2.35/23 192.16.10.113/29 public            73730               273      6.0.1-6000116966 online not-required     ok
hydra-ring-1  src      10.13.2.23/23 192.16.10.65/29  public            73730   3623        272      6.0.1-6000116966 online not-required     ok
pyxis-ring-2  src      10.13.2.66/23 192.16.10.201/29 public            73730   2595                 6.0.1-6000116966 online not-required     ok
pyxis-ring-3  src      10.13.2.68/23 192.16.10.209/29 public            73730                        6.0.1-6000116966 online not-required     ok
pyxis-ring-1  src      10.13.2.65/23 192.16.10.193/29 public            73730   2595                 6.0.1-6000116966 online not-required     ok
The state column represents the communication status between members of the fabric, while the device-state column indicates the overall health of the switch.
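For a quick health check, you can limit the output to just those columns with the format option; a sketch using the column names and values from the example above:
CLI (network-admin@udev-leo1) > fabric-node-show format name,state,device-state
name       state  device-state
---------- ------ ------------
udev-leo1  online ok
udev-leo-4 online ok
udev-leo-3 online ok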
Trunk Status
A trunk (link aggregation group, or LAG) can be configured automatically for inter-switch communication (auto-LAG) or defined manually for general network connectivity (manual LAG). Where configured, trunks provide a critical communication path.
Verify trunk (LAG) status with the command: trunk-show
CLI (network-admin@udev-leo1) > trunk-show format switch,name,ports,speed,lacp-mode,status
switch     name                 ports speed lacp-mode status
---------- -------------------- ----- ----- --------- ---------------------------------
udev-leo1  s1-to-leafs          11-13 10g   off       down
udev-leo1  auto-273             17-18 10g   off       up,PN-switch,PN-cluster,STP-BPDUs
udev-leo1  test-trunk           20-30 10g   off       down
udev-leo1  vxlan-loopback-trunk             off       down
udev-leo-4 vxlan-loopback-trunk             off       vxlan-loopback,down
udev-leo-3 auto-272             17-18 10g   off       up,PN-switch,PN-cluster,STP-BPDUs
udev-leo-3 vxlan-loopback-trunk             off       vxlan-loopback,down
udev-leo-3: !!!! vxlan-loopback-trunk has no ports. vxlan forwarding may not work correctly. !!!!
Configure trunks with or without LACP. The following example shows the trunk LACP options.
Options:
- off: LACP is off
- passive: LACP passive mode
- active: LACP active mode
CLI (network-admin@udev-leo1) > trunk-create name port1-4 ports 1,4 lacp-mode off
trunk 275 defer-bringup set to 1 based on first port 1
Created trunk port1-4, id 275
CLI (network-admin@udev-leo1) > trunk-show format switch,name,ports,speed,lacp-mode,status
switch     name                 ports speed lacp-mode status
---------- -------------------- ----- ----- --------- ---------------------------------
udev-leo1  s1-to-leafs          11-13 10g   off       down
udev-leo1  auto-273             17-18 10g   off       up,PN-switch,PN-cluster,STP-BPDUs
udev-leo1  test-trunk           20-30 10g   off       down
udev-leo1  port1-4              1,4   10g   off       up,PN-switch
udev-leo1  vxlan-loopback-trunk             off       down
udev-leo-4 vxlan-loopback-trunk             off       vxlan-loopback,down
udev-leo-3 auto-272             17-18 10g   off       up,PN-switch,PN-cluster,STP-BPDUs
udev-leo-3 vxlan-loopback-trunk             off       vxlan-loopback,down
udev-leo-3: !!!! vxlan-loopback-trunk has no ports. vxlan forwarding may not work correctly. !!!!
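To create a trunk that actively negotiates LACP instead, a sketch using the same syntax (the trunk name and port numbers here are hypothetical):
CLI (network-admin@udev-leo1) > trunk-create name lacp-trunk ports 5,6 lacp-mode active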
UNUM: Use the Manager Layer 2 Dashboards to review and manage LACP - Trunks settings and functionality.
Cluster Status
The cluster and VLAG objects provide the underlying redundancy structure for network communications. If the network design calls for redundancy, check that the cluster and VLAG objects are functioning correctly.
Verify cluster status with the command: cluster-show
CLI (network-admin@udev-leo1) > cluster-show
switch     name            state  cluster-node-1 cluster-node-2 tid mode   ports     remote-ports cluster-sync-timeout(ms) cluster-sync-offline-count
---------- --------------- ------ -------------- -------------- --- ------ --------- ------------ ------------------------ --------------------------
udev-leo-3 clusterleo3leo1 online udev-leo-3     udev-leo1      0   master 17-18,272 17-18,273    4000                     3
udev-leo1  clusterleo3leo1 online udev-leo-3     udev-leo1      0   slave  17-18,273 17-18,272    4000                     3
Cluster communications depend on one or more direct physical links between the two switches. For the cluster to function correctly, those physical links must be up.
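To check those links directly, you can inspect the cluster member ports; a sketch using the cluster ports from the example above (assuming port-show is available on your Netvisor version):
CLI (network-admin@udev-leo1) > port-show port 17,18 format port,status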
UNUM: Use the Manager Layer 2 Dashboards to review and manage Cluster settings and functionality.
VLAG Status
Verify VLAG status with the command: vlag-show
CLI (network-admin@aqua-unum-leaf1*) > vlag-show layout vertical
name:                  dell-host8.pluribusnetworks.com_DSwitch-UNUM8node
cluster:               aqua-unum-leaf1-to-dorado-unum-leaf2-cluster
mode:                  active-active
switch:                aqua-unum-leaf1
port:                  dell-host8.pluribusnetworks.com_DSwitch-UNUM8node
peer-switch:           dorado-unum-leaf2
peer-port:             dell-host8.pluribusnetworks.com_DSwitch-UNUM8node
status:                normal
local-state:           enabled,up
peer-state:            enabled,up
lacp-mode:             off
lacp-fallback:         bundle
lacp-fallback-timeout: 50
lacp-individual:       none
lacp-port-priority:    32768
The VLAG relies on the underlying cluster. Confirm the VLAG status is normal and that local-state and peer-state are “enabled,up”. If there are problems with a VLAG, work back through the objects it depends on, such as the cluster, and ultimately the physical ports and cables.
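A minimal work-back sequence along those dependencies, narrowing each command to the relevant columns (the port numbers are illustrative):
CLI (network-admin@aqua-unum-leaf1) > vlag-show format name,status,local-state,peer-state
CLI (network-admin@aqua-unum-leaf1) > cluster-show format name,state,ports,remote-ports
CLI (network-admin@aqua-unum-leaf1) > port-show port 17,18 format port,status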
UNUM: Use the Manager Layer 2 Dashboards to review and manage VLAG functionality.
Additional Cluster Troubleshooting Examples
The cluster-show command output is captured as part of the tech-support and save-diags commands. In normal operation, the following conditions hold (a quick check follows this list):
- The tid, ports, and remote-ports values should be identical for a given cluster.
- One node should be the slave and the other the master.
- Both nodes should report online.
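A compact way to verify all three conditions at once is to narrow cluster-show to the relevant columns (column names are taken from the outputs below):
CLI (network-admin@LEAF1) > cluster-show format name,state,mode,tid,ports,remote-ports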
Examples
- All nodes in the cluster are online:
CLI (network-admin@LEAF1) > cluster-show
switch name         state  cluster-node-1 cluster-node-2 tid mode   ports     remote-ports cluster-sync-timeout(ms) cluster-sync-offline-count
------ ------------ ------ -------------- -------------- --- ------ --------- ------------ ------------------------ --------------------------
LEAF1  Leaf-cluster online LEAF1          LEAF2          1   slave  87-88,272 87-88,272    2000                     3
LEAF2  Leaf-cluster online LEAF1          LEAF2          1   master 87-88,272 87-88,272    2000                     3
- One node in the cluster is offline:
CLI (network-admin@LEAF1) > cluster-show
switch name         state       cluster-node-1 cluster-node-2 tid mode ports remote-ports cluster-sync-timeout(ms) cluster-sync-offline-count
------ ------------ ----------- -------------- -------------- --- ---- ----- ------------ ------------------------ --------------------------
LEAF1  Leaf-cluster offline     LEAF1          LEAF2          1   none none  none         2000                     3
LEAF2  Leaf-cluster unavailable LEAF1          LEAF2                                      2000                     3
Use the fabric-node-show command to see the impacted switches:
CLI (network-admin@SPINE2) > fabric-node-show
name   fab-name mgmt-ip        in-band-ip  in-band-vlan-type fab-tid cluster-tid out-port version          state   firmware-upgrade device-state
------ -------- -------------- ----------- ----------------- ------- ----------- -------- ---------------- ------- ---------------- ------------
SPINE2 STE      10.100.64.6/17 20.0.0.2/29 public            7                            5.1.4-5010415575 online  not-required     ok
LEAF1  STE      10.100.64.7/17 30.0.0.2/29 public            7       1           2        5.1.4-5010415575 online  not-required     ok
LEAF2  STE      10.100.64.8/17 40.0.0.2/29 public            7       1           272      5.1.4-5010415575 offline not-required     ok
- All nodes in the cluster are offline:
CLI (network-admin@SPINE2) > fabric-node-show
name   fab-name mgmt-ip        in-band-ip  in-band-vlan-type fab-tid cluster-tid out-port version          state   firmware-upgrade device-state
------ -------- -------------- ----------- ----------------- ------- ----------- -------- ---------------- ------- ---------------- ------------
SPINE2 STE      10.100.64.6/17 20.0.0.2/29 public            7                            5.1.4-5010415575 online  not-required     ok
LEAF1  STE      10.100.64.7/17 30.0.0.2/29 public            7       1           2        5.1.4-5010415575 offline not-required     ok
LEAF2  STE      10.100.64.8/17 40.0.0.2/29 public            7       1           272      5.1.4-5010415575 offline not-required     ok
CLI (network-admin@SPINE2) > cluster-show
switch name         state       cluster-node-1 cluster-node-2 cluster-sync-timeout(ms) cluster-sync-offline-count
------ ------------ ----------- -------------- -------------- ------------------------ --------------------------
LEAF1  Leaf-cluster unavailable LEAF1          LEAF2          2000                     3
LEAF2  Leaf-cluster unavailable LEAF1          LEAF2          2000                     3
- Split-brain scenario:
Split-brain means the cluster link between a pair of switches has gone offline.
Run the fabric-node-show command on each switch individually: SPINE2, LEAF1, and LEAF2.
CLI (network-admin@SPINE2) > fabric-node-show
name   fab-name mgmt-ip        in-band-ip  in-band-vlan-type fab-tid cluster-tid out-port version          state  firmware-upgrade device-state
------ -------- -------------- ----------- ----------------- ------- ----------- -------- ---------------- ------ ---------------- ------------
SPINE2 STE      10.100.64.6/17 20.0.0.2/29 public            7                            5.1.4-5010415575 online not-required     ok
LEAF1  STE      10.100.64.7/17 30.0.0.2/29 public            7       1           2        5.1.4-5010415575 online not-required     ok
LEAF2  STE      10.100.64.8/17 40.0.0.2/29 public            7       1           272      5.1.4-5010415575 online not-required     ok
Result: Both leaf switches are online from the spine's perspective.
CLI (network-admin@LEAF2) > fabric-node-show
name   fab-name mgmt-ip        in-band-ip  in-band-vlan-type fab-tid cluster-tid out-port version          state   firmware-upgrade device-state
------ -------- -------------- ----------- ----------------- ------- ----------- -------- ---------------- ------- ---------------- ------------
LEAF2  STE      10.100.64.8/17 40.0.0.2/29 public            7       1           272      5.1.4-5010415575 online  not-required     ok
LEAF1  STE      10.100.64.7/17 30.0.0.2/29 public            7       1           2        5.1.4-5010415575 offline not-required     ok
SPINE2 STE      10.100.64.6/17 20.0.0.2/29 public            7                            5.1.4-5010415575 online  not-required     ok
Result: LEAF1 is offline from LEAF2's perspective.
CLI (network-admin@LEAF1) > fabric-node-show
name   fab-name mgmt-ip        in-band-ip  in-band-vlan-type fab-tid cluster-tid out-port version          state   firmware-upgrade device-state
------ -------- -------------- ----------- ----------------- ------- ----------- -------- ---------------- ------- ---------------- ------------
LEAF1  STE      10.100.64.7/17 30.0.0.2/29 public            7       1           2        5.1.4-5010415575 online  not-required     ok
LEAF2  STE      10.100.64.8/17 40.0.0.2/29 public            7       1           272      5.1.4-5010415575 offline not-required     ok
SPINE2 STE      10.100.64.6/17 20.0.0.2/29 public            7                            5.1.4-5010415575 online  not-required     ok
Result: LEAF2 is offline from LEAF1's perspective.
CLI (network-admin@SPINE2) > cluster-show
switch name         state  cluster-node-1 cluster-node-2 tid mode   ports     remote-ports cluster-sync-timeout(ms) cluster-sync-offline-count
------ ------------ ------ -------------- -------------- --- ------ --------- ------------ ------------------------ --------------------------
LEAF1  Leaf-cluster online LEAF1          LEAF2          1   master 87-88,272 87-88,272    2000                     3
LEAF2  Leaf-cluster online LEAF1          LEAF2          1   master 87-88,272 87-88,272    2000                     3
Result: From the spine, both leaf switches claim to be master.
CLI (network-admin@LEAF1) > cluster-show
switch name         state       cluster-node-1 cluster-node-2 tid mode ports remote-ports cluster-sync-timeout(ms) cluster-sync-offline-count
------ ------------ ----------- -------------- -------------- --- ---- ----- ------------ ------------------------ --------------------------
LEAF1  Leaf-cluster offline     LEAF1          LEAF2          1   none none  none         2000                     3
LEAF2  Leaf-cluster unavailable LEAF1          LEAF2                                      2000                     3
CLI (network-admin@LEAF2) > cluster-show
switch name         state       cluster-node-1 cluster-node-2 tid mode ports remote-ports cluster-sync-timeout(ms) cluster-sync-offline-count
------ ------------ ----------- -------------- -------------- --- ---- ----- ------------ ------------------------ --------------------------
LEAF2  Leaf-cluster offline     LEAF1          LEAF2          1   none none  none         2000                     3
LEAF1  Leaf-cluster unavailable LEAF1          LEAF2                                      2000                     3
Example output from the tech-support command for cluster-show:
==================== cluster-show ============================
switch        name             state  cluster-node-1 cluster-node-2 tid  mode   ports     remote-ports cluster-sync-timeout(ms) cluster-sync-offline-count
------------- ---------------- ------ -------------- -------------- ---- ------ --------- ------------ ------------------------ --------------------------
pavo-ring-11  pavo11to12       online pavo-ring-11   pavo-ring-12   2639 slave  8         8            2001                     3
pavo-ring-12  pavo11to12       online pavo-ring-11   pavo-ring-12   2639 master 8         8            2001                     3
pavo-ring-1   pavo01topavo02   online pavo-ring-1    pavo-ring-2    2637 master 1         1            2000                     3
pavo-ring-2   pavo01topavo02   online pavo-ring-1    pavo-ring-2    2637 slave  1         1            2000                     3
pavo-ring-7   pavo7topavo8     online pavo-ring-7    pavo-ring-8    2629 slave  3-4,273   3-4,272      2000                     3
pavo-ring-8   pavo7topavo8     online pavo-ring-7    pavo-ring-8    2629 master 3-4,272   3-4,273      2000                     3
pavo-ring-5   pavo5topavo6     online pavo-ring-5    pavo-ring-6    2629 master 7-8,272   7-8,272      2000                     3
pavo-ring-6   pavo5topavo6     online pavo-ring-5    pavo-ring-6    2629 slave  7-8,272   7-8,272      2000                     3
corvus-ring-1 corvus1tocorvus2 online corvus-ring-1  corvus-ring-2  2647 slave  1-2,272   1-2,272      2000                     3
corvus-ring-2 corvus1tocorvus2 online corvus-ring-1  corvus-ring-2  2647 master 1-2,272   1-2,272      2000                     3
delph-ring-3  delph3todelph4   online delph-ring-3   delph-ring-4   2629 slave  1-2,272   1-2,273      2000                     3
delph-ring-4  delph3todelph4   online delph-ring-3   delph-ring-4   2629 master 1-2,273   1-2,272      2000                     3
scorp-ring-1  scorp1toscorp2   online scorp-ring-1   scorp-ring-2   2647 master 19-20,272 19-20,272    2000                     3
scorp-ring-2  scorp1toscorp2   online scorp-ring-1   scorp-ring-2   2647 slave  19-20,272 19-20,272    2000                     3
hydra-ring-1  hydra1tohydra2   online hydra-ring-1   hydra-ring-2   3623 slave  12        12           2000                     3
hydra-ring-2  hydra1tohydra2   online hydra-ring-1   hydra-ring-2   3623 master 12        12           2000                     3
delph-ring-1  delph1todelph2   online delph-ring-1   delph-ring-2   2642 slave  15-16,272 15-16,273    2000                     3
delph-ring-2  delph1todelph2   online delph-ring-1   delph-ring-2   2642 master 15-16,273 15-16,272    2000                     3
pyxis-ring-1  pyxis1topyxis2   online pyxis-ring-1   pyxis-ring-2   2595 master 4         2            2000                     3
pyxis-ring-2  pyxis1topyxis2   online pyxis-ring-1   pyxis-ring-2   2595 slave  2         4            2000                     3