Configuring vCenter Connection Service

vCenter Connection Service

Configuring a vCenter Service

Auto Provisioning for vCenter

Automatic Link Aggregation on ESXi-facing Ports for vCenter

Support for VLAN Alarms in vCenter

vCenter Connection Service

Service Components

VMware vCenter

VMware vCenter Server™ provides a centralized and extensible platform for managing virtual infrastructure. vCenter Server manages VMware vSphere® environments, giving you simple and automated control over the virtual environment to deliver infrastructure with confidence.

Pluribus Adaptive Cloud Fabric

Built on the carrier-grade network stack of Netvisor, the Pluribus Adaptive Cloud Fabric takes a simple, dynamic, and secure approach to building a holistic distributed network architecture, bringing the benefits of cloud scale, elasticity, and adaptability to the modern data center. The Pluribus Adaptive Cloud Fabric delivers a highly optimized and resilient operating environment to meet the mission-critical requirements of enterprise and service provider organizations. The Adaptive Cloud Fabric distributes intelligence, integrates a broad range of advanced network services, and provides pervasive visibility for all traffic traversing the network.

Pluribus Insight Analytics

Pluribus Insight Analytics eliminates the economic and operational barriers associated with traditional monitoring infrastructure, such as packet brokers and expensive tools, and brings flow and packet analytics to a broader range of enterprise networking deployments, from campus to data center.

Pluribus Insight Analytics provides full interoperability with third-party networks. This compatibility allows you to extract more visibility from the network, spot security concerns, and drastically reduce the time spent resolving IT trouble tickets.

VCCS-1.png

 

Figure 1: Pluribus Adaptive Cloud Fabric with vCenter Connection Service

vCenter-driven Automated Fabric Network Provisioning

The Pluribus Adaptive Cloud Fabric connected to a vCenter Server considerably reduces the burden of configuring a network infrastructure to match application requirements. To ease this operation, the Adaptive Cloud Fabric interacts with vCenter by using the vCenter Connection Service (vCCS). With vCCS, the Adaptive Cloud Fabric browses the vCenter inventory to discover vSphere hosts and automatically provisions the host links and the physical network infrastructure to support VMs and other infrastructure services such as vMotion or vSAN.

ESXi Host Discovery and Teaming Automation

Once ESXi servers connect to the fabric, the switches leverage dynamic discovery protocols on both the servers and the network to automatically create LAGs (link aggregation on a single switch) or vLAGs (link aggregation across two member switches of the same high-availability cluster) to match the teaming configuration on the server side. By doing so, vCCS completely automates the addition of new ESXi hosts to the network without any manual operation on the Adaptive Cloud Fabric.
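
Once hosts are discovered, you can verify the automatically created LAGs from a fabric node; a minimal sketch using the trunk-show command (the field selection is illustrative):

CLI network-admin@switch > trunk-show format trunk-id,name,ports,lacp-mode,status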

VMkernel Network Services

With the vCenter Connection Service, the Pluribus Adaptive Cloud Fabric automatically detects the ESXi host IP configuration and seamlessly provisions the VLANs associated with any VMkernel interface, regardless of the service attached to the logical interface. Depending on the service requirement, the fabric creates either an isolated VLAN or a VLAN with additional network constructs (Layer 3 or multicast) to support these services, which include VMware management, HA, vMotion, and vSAN.
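
To confirm the VLANs provisioned for VMkernel services, you can list them on the fabric; a minimal sketch using the vlan-show command (the field selection is illustrative):

CLI network-admin@switch > vlan-show format id,scope,description,ports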

VM Network Provisioning

When you create or associate a new VM with a port group (standard or distributed), vCCS automatically creates and configures the VLANs at the host level within the Adaptive Cloud Fabric, regardless of the underlay network design (pure Layer 2, or Layer 3 with VXLAN for Layer 2 extension). For security and resource optimization, the Adaptive Cloud Fabric adds VLANs to server-facing downlinks only where required, and automatically prunes VLANs no longer used by VMs on the servers. As a further security check, the Adaptive Cloud Fabric allows you to restrict the range of VLAN IDs that vCenter can dynamically provision in the fabric.
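
For example, assuming a connection service named vcs1 already exists (the name and range here are illustrative), the dynamic VLAN range could be restricted with the vcenter-connection-modify command described later in this chapter:

CLI network-admin@switch > vcenter-connection-modify name vcs1 vlans 100-149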

VMware Endpoints Visibility

One of the key components of the Pluribus Adaptive Cloud Fabric is its ability to build a database of all endpoints (vPorts) connected to the network and to provide an easy way to display this information to the network administrator. By using the vPort database as a central source of information for all devices connected to the fabric, physical or virtual, you can considerably reduce the time needed to identify an endpoint and use the vPort details as a tool to simplify troubleshooting.
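
For example, you can query the endpoint database from a fabric node; a minimal sketch using the vport-show command (the field selection is illustrative and may vary by Netvisor version):

CLI network-admin@switch > vport-show format mac,ip,vlan,ports,hostname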

By default, the Adaptive Cloud Fabric provides the following information for every vPort:

With vCCS, the fabric provides more details in addition to what already exists for each vPort. vCCS transforms the fabric into a source of information and a powerful tool for visibility and analytics of VMware workloads.

For each vPort associated with a vCenter, vCCS adds the following attributes to the vPort metadata:

Pluribus Insight Analytics (IA) uses these attributes to display comprehensive and customized dashboards reporting flow analytics for any application running on VMware vSphere.

VMware VMs and Hypervisors Connection Analytics

With applications becoming more distributed and security breaches growing daily, one of the fundamental criteria of success in scaling applications and business resides in the ability to perform these tasks:

With Pluribus Insight Analytics, network infrastructure teams become key contributors, providing accurate information to analyze or triage application issues.

Extending the vPort database with VMware vSphere attributes allows the Insight Analytics engine to associate network flows with VMs and VMware service endpoints in the network, and provides instant visibility into the behavior of VMware infrastructure endpoints. The Insight Analytics tool acts as a network recorder and time machine for all TCP connections between VM-to-VM, VM-to-physical, or physical-to-physical devices.
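
For example, the recorded TCP connections can be inspected directly from the CLI; a minimal sketch using the connection-show command (output fields vary by Netvisor version):

CLI network-admin@switch > connection-show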

Multi-tenancy and Traffic Segmentation

Tenant isolation

You can isolate the complete data, control, and management planes: Pluribus can slice the entire fabric into multiple vNETs (Virtual Networks) and provide VLANs and VRFs along with port isolation. With this solution, Pluribus can create multiple tenants with true physical and logical isolation, from the port to a virtual router fully dedicated to each tenant.

Netvisor provides Virtual Networks (vNETs) as an advanced solution that addresses multi-tenancy requirements in a secure and versatile manner, going beyond basic VLAN and VRF constructs. In basic terms, Netvisor uses vNETs to separate resource management spaces. You can isolate vNETs and integrate mechanisms operating in the data plane as well as in the control plane. A vNET object represents a manageable collection of instances of the following sub-components:

Each vNET implements a single point of management. As the fabric administrator, you create vNETs and assign ownership of each vNET to individuals responsible for managing those resources, with separate usernames and passwords for each vNET manager. In this way, multiple tenants can share a common fabric while each manages a separate vNET resource pool with security, traffic, and resource protection from other vNETs.
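
For example, a fabric administrator might carve out a tenant space; a hypothetical sketch using the vnet-create command (the tenant name and VLAN range are illustrative, and available options vary by Netvisor version):

CLI network-admin@switch > vnet-create name tenant-A scope fabric vlans 200-299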

Figure 2 shows an example of a virtualized POD where vNETs are used to create fully independent pods with management, control, and data plane isolation, for example, independent vRouters, an independent management plane, or independent provisioning.

VCCS-2.png

 

Figure 2: Adaptive Cloud Fabric vNETs

Multiple vCenter Servers

Enterprises and service providers face many different challenges nowadays, but most importantly, they need to find a way to secure their infrastructure and workloads to ensure business continuity. VMware offers several solutions for multi-site continuity, and most of them imply using different vCenter Servers in the same or different locations.

The Pluribus Adaptive Cloud Fabric offers the option to connect up to four vCenter Servers per fabric management domain and is therefore a good fit for organizations looking for a simple solution for application continuity.
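
For example, a second site's vCenter Server can be attached by creating an additional connection service instance with the vcenter-connection-create command (all names and addresses here are illustrative):

CLI network-admin@switch > vcenter-connection-create name vcs-site2 host 10.2.2.10 user administrator@site2.local enable vlans 100-199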

In addition, for customers who are more concerned with providing virtualized, secured tenants at the fabric level, vNETs can be leveraged to create an isolated fabric tenant space in which vCenter can dynamically provision only that isolated tenant in the network infrastructure.

In conclusion, with the Adaptive Cloud Fabric the physical network becomes a key component of the VMware virtualized infrastructure by simplifying and automating the communication between the building blocks of a disaster recovery plan or of a secured multi-zone SDDC.

Application Continuity with the Adaptive Cloud Fabric Datacenter Interconnect Solution

Organizations can stretch VMware SDDC and vSAN deployments across multiple locations to ensure continuity of business applications without deploying completely separate compute and storage infrastructures. The physical network infrastructure has to be aware of where the workloads are located and adapt its policies (switching, routing, and security) in accordance with the application design.

With the Adaptive Cloud Fabric vCenter Connection Service and automated Layer 2 extension services, Pluribus and VMware offer the best solution for companies looking to extend an SDDC deployment to multiple locations.

By stretching the fabric across the network, Pluribus delivers the following key features:

Adding the vCenter Connection Service (vCCS) to the fabric enhances this stretched VMware SDDC with:

All these features make VMware vSphere and the Pluribus Adaptive Cloud Fabric the most advanced solution on the market for an SDDC deployment spanning multiple data centers.


Configuring a vCenter Service

To create a vCenter service, use the vcenter-connection-create command:

CLI network-admin@switch > vcenter-connection-create name name-string host host-string user user-string password password-string enable|disable vlans vlan-list
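
For example, a hypothetical service named vcs1 connecting to a vCenter Server at 10.0.0.100 (all values are illustrative):

CLI network-admin@switch > vcenter-connection-create name vcs1 host 10.0.0.100 user administrator@vsphere.local enable vlans 100-199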

To modify a vCenter service, use the following syntax:

CLI network-admin@switch > vcenter-connection-modify name name-string host host-string user user-string password password-string enable|disable vlans vlan-list

To delete a vCenter service, use the following syntax:

CLI network-admin@switch > vcenter-connection-delete name name-string

To display information about a vCenter service, use the following syntax:

CLI network-admin@switch > vcenter-connection-show name name-string host host-string state init|ok|error connected-time connected-time-string connection-error connection-error-string vlans vlan-list

Auto Provisioning for vCenter

Auto-provisioning allows a network administrator to provide a range of VLANs that other administrators, for example server administrators, can associate with port groups used in vCenter and applied to virtual machines (VMs).

For auto-provisioning VLANs, Netvisor extends the vcenter-connection-create command with a vlans keyword that ties one VLAN or a range of VLANs to the service. If you do not provide VLANs as part of starting the service, Netvisor does not auto-provision VLANs. In this initial release, Netvisor provisions a maximum of 500 VLANs per connection service instance. You can overlap VLANs across connection service instances. VMs connect to port groups on an ESXi server, and each port group includes the VLAN or VLAN range used. For a port group VLAN or VLAN range to be provisioned in the fabric, it must fall within the range specified in the connection service command. Netvisor creates the VLANs with local scope and does not add any ports.

Netvisor retains the created VLANs across switch reboots. When you delete the connection service, Netvisor deletes the VLANs or cleans up the VLAN port membership added by the service. For port membership on a VLAN, each switch manages the local host-facing ports. vCenter connectivity relies on LLDP objects stored on a per physical NIC (pNIC), per ESXi host basis, and on the distributed vSwitch (DVS) object attached to the pNIC. Netvisor adds VMs attached to the associated DVS as port members. Server administrators must enable LLDP for the distributed port group.

When you remove the last VM attached to a local host port, Netvisor removes the port from the associated VLAN. Netvisor validates port membership and, if necessary, updates the ports with each poll cycle of the vCenter database.

To display the range of provisioned VLANs, use the vcenter-connection-show command:

CLI network-admin@switch > vcenter-connection-show

switch name host        user                       enable state connected-time                   vlans

------ ---- ----------- -------------------------- ------ ----- -------------------------------- -----------

Spine1 svc2 10.9.34.204 administrator@lab.pluribus yes    ok    connected at 2017-01-14 03:07:23 20-30,33-40

Spine1 svc2 10.9.34.204 administrator@lab.pluribus yes    ok    connected at 2017-01-14 03:07:21 20-30,33-40

Automatic Link Aggregation on ESXi-facing Ports for vCenter

Previously, Netvisor supported automatic link aggregation (LAG) only between Pluribus Networks switches. With this new feature, automatic link aggregation includes ports between Pluribus Networks switches and ESXi hosts.

Netvisor implements LLDP and LACP to bundle the ports. Because Netvisor does not use custom type-length-value (TLV) probes, it relies on standard LLDP TLVs to identify ESXi hosts: the system description TLV identifies a host as ESXi, and the system name uniquely identifies a specific ESXi host.

Netvisor enables LACP in active mode on the auto-LAG with a fallback option of individual. This ensures robust bundling: ports are bundled only if the ESXi host uses LACP, avoiding any data path issues.

Netvisor supports a new parameter, auto-host-trunk, to enable or disable trunking toward hosts; the default value is off.

The global auto-trunk parameter must be on (the default) for this new setting to take effect. Setting auto-trunk to off turns off all trunking.

CLI (network-admin@Leaf1) > system-settings-show

auto-host-trunk:               on
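
To change this setting, the parameter can presumably be toggled through the system-settings-modify command; a hypothetical sketch (the exact option syntax may vary by Netvisor version):

CLI (network-admin@Leaf1) > system-settings-modify auto-host-trunk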

The following shows example output from the trunk-show, port-show, and lldp-show commands.

CLI (network-admin@Leaf1) > trunk-show format trunk-id,switch,name,ports,lacp-mode,lacp-fallback,lacp-individual,status

 

trunk-id switch name     ports lacp-mode lacp-fallback lacp-individual status

-------- ------ -------- ----- --------- ------------- --------------- -----------

129      Leaf1  auto-129 42,44 active    individual    none            up,PN-other

 

CLI (network-admin@Leaf1) > port-show port 42,44 format all

port bezel-port vnet hostname status                                   rswitch rem-ip rem-mac           lport config trunk

---- ---------- ---- -------- ---------------------------------------- ------- ------ ----------------- ----- ------ --------

42   42                       up,PN-other,LLDP,trunk,LACP-PDUs,vlan-up 2987    ::     00:00:00:00:00:00 42    fd,10g auto-129

44   44                       up,PN-other,LLDP,trunk,LACP-PDUs,vlan-up 2987    ::     00:00:00:00:00:00 44    fd,10g auto-129

CLI (network-admin@Leaf1) > lldp-show

local-port chassis-id port-id           port-desc                          sys-name

---------- ---------- ----------------- ---------------------------------- ------------------------------

42         vmnic2     00:50:56:98:07:56 port 25 on dvSwitch DEV-CN-Tests-1 esx-dev01.pluribusnetworks.com

44         vmnic3     00:50:56:98:07:57 port 24 on dvSwitch DEV-CN-Tests-1 esx-dev01.pluribusnetworks.com

 

Support for VLAN Alarms in vCenter

You can configure Netvisor with a vCenter Connection Service to read metadata from a vCenter server and update the vPort table with information learned from an inventory scan. Netvisor supports a remote plugin for vCenter capable of auto-provisioning a VLAN range for a dvPortgroup associated with uplinks going to the Netvisor OS fabric.

vCenter alarm integration allows you to see the vCenter Connection Service running with the supported VLAN ranges configured on a dvPortgroup.

If you configure a dvPortgroup associated with a VM to use VLAN trunking with VLANs unsupported by auto-provisioning, you receive alarms on the vSphere Web Client. When you receive an alarm, you can modify the VLAN range or request additional VLANs from the Netvisor OS switch administrator.

To configure the vCenter Connection Service with VLANs, use the following commands:

CLI network-admin@switch > vcenter-connection-create name svc15 host 10.13.37.211 user administrator@lab.pluribus vlans 10-15

CLI network-admin@switch > vcenter-connection-show

switch   name  host         user                       enable state connected-time                   vlans

-------- ----- ------------ -------------------------- ------ ----- -------------------------------- -----

switch-1 svc15 10.13.37.211 administrator@lab.pluribus yes    ok    connected at 2017-03-02 20:14:55 10-15

switch-2 svc15 10.9.34.204  administrator@lab.pluribus yes    ok    connected at 2017-03-02 20:14:57 10-15

 

If you configure a VM with a VLAN not in the 10-15 VLAN range, Netvisor triggers an alarm.

vcenter-alarms.png
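
In that case, the switch administrator can expand the range with the vcenter-connection-modify command; a hypothetical example extending the service above to include VLANs 16-20:

CLI network-admin@switch > vcenter-connection-modify name svc15 vlans 10-20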