Configuring EVPN



The configuration of EVPN revolves around the selection of the border gateway nodes and the configuration of associated VNIs for the VRFs.


In addition, the fabric automation takes care of transparently performing various configurations, which can be verified with new show commands or with new parameters of existing commands, as shown in the examples below.


Configuring the Border Gateways


The first step involves configuring the vRouters on at least two nodes across two pods to make them border gateways. That is achieved with the vrouter-create command using the evpn-border/no-evpn-border parameter. For an existing vRouter, the vrouter-modify command can be used instead.


For example:


CLI (network-admin@switch) > vrouter-create name vRouter1 fabric-comm enable location border-switch router-type hardware evpn-border bgp-as 5200 router-id 10.30.30.30


Then the vrouter-show command can be used to verify the current EVPN parameter configuration:


CLI (network-admin@switch) > vrouter-show name vRouter1 format all layout vertical

id:                           c00029d:0

name:                         vRouter1

type:                         vrouter

scope:                        fabric

vnet:                         

location:                     switch

zone-id:                      c00029d:1

router-type:                  hardware

evpn-border:                  enable


The second step is to configure an MP-BGP neighbor with the l2vpn-evpn parameter to point to an external EVPN border gateway:


CLI (network-admin@switch) > vrouter-bgp-add vrouter-name vRouter1 neighbor 190.12.1.3 remote-as 65012 ebgp-multihop 3 update-source 190.11.1.3 multi-protocol l2vpn-evpn


This configuration enables the use of the standard EVPN NLRI, which is carried using BGP Multiprotocol Extensions with an Address Family Identifier (AFI) of 25 (L2VPN) and a Subsequent Address Family Identifier (SAFI) of 70 (EVPN).
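As a quick illustration of how this AFI/SAFI pair appears on the wire, the sketch below encodes the 4-byte value of the BGP multiprotocol capability (capability code 1, per RFC 4760) for L2VPN/EVPN. This is generic BGP encoding, not a NetVisor API:

```python
import struct

# BGP Multiprotocol Extensions capability (RFC 4760): the capability
# value is AFI (2 bytes), a reserved byte, then SAFI (1 byte).
AFI_L2VPN = 25   # L2VPN
SAFI_EVPN = 70   # EVPN

def mp_capability_value(afi: int, safi: int) -> bytes:
    """Encode the 4-byte value of BGP capability code 1 (multiprotocol)."""
    return struct.pack(">HBB", afi, 0, safi)

encoded = mp_capability_value(AFI_L2VPN, SAFI_EVPN)
print(encoded.hex())  # 00190046
```

Both speakers advertise this capability during session establishment; only then can EVPN NLRI be exchanged over the session.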


Configuring L3 VNIs


To uniquely identify VRF instances with EVPN, it is necessary to specify a new l3-vni parameter during creation (or modification), like so:


CLI (network-admin@switch) > vlan-create id 1000 auto-vxlan 101000 scope fabric ports none


CLI (network-admin@switch) > vrf-create name VRF1 l3-vni 101000


As explained in more detail in the Configuring VXLAN chapter, subnets can be mapped to VRFs with the subnet-create command, for example:


CLI (network-admin@switch) > vlan-create id 12 auto-vxlan 500012 scope fabric


CLI (network-admin@switch) > subnet-create name subnet-vxlan-500012 scope fabric vxlan 500012 network 172.10.2.0/24 anycast-gw-ip 172.10.2.1 vrf VRF1


L3 VNIs also enable a new forwarding mode (see the two-hop model described above), which is used for inter-pod VXLAN routing.


Once configured, you can check the VRFs with the vrf-show command:


CLI (network-admin@switch) > vrf-show format name,vnet,anycast-mac,l3-vni,active,hw-router-mac,hw-vrid,flags,enable


name vnet anycast-mac       l3-vni active hw-router-mac     hw-vrid flags  enable

---- ---- ----------------- ------ ------ ----------------- ------- ------ ------

VRF1 0:0  64:0e:94:40:00:02 101000 yes    66:0e:94:48:68:a7 1       subnet yes


Note that the user doesn't need to configure vrf-gw and vrf-gw2 with EVPN except on border gateways connected to DC gateways for North-South traffic. Traffic reaches the border gateways with the default route described in Figure 9-3 above.


Configuring Route Maps for EVPN Filtering


Since EVPN is based on a BGP configuration between the border gateways, it’s a natural extension (and general practice) to use route maps for filtering.


A user can create a route map using the match/set/permit/deny statements, and then apply it to the BGP EVPN configuration, inbound or outbound.


When a route map is applied inbound, it ensures that the match statement is applied to the routes that are learned from a BGP neighbor. When it is applied outbound, it ensures that the match statement is applied to the routes that are sent to a BGP neighbor.


Starting from release 7.0.0, NetVisor OS supports route maps with the following EVPN-related match statements, applicable in both the inbound and the outbound direction:


  • match-evpn-default-route
  • match-evpn-route-type
  • match-evpn-vni
  • match-evpn-rd


These statements can match a default route (such as 0.0.0.0/0 or 0::0/0), a route type (Type 2, 3, 5 routes), a VNI (L2 or L3), and a route distinguisher (Type 2, 3, and 5 routes carry an RD field in them). In addition, it is possible to apply IP prefix lists to the filters.
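The four statements can be thought of as simple predicates over an EVPN route's fields. The sketch below models them in Python; the field names and route representation are purely illustrative, not NetVisor internals:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvpnRoute:
    # Illustrative fields; not NetVisor's internal representation.
    route_type: int        # 2 (MAC/IP), 3 (IMET), or 5 (IP prefix)
    prefix: Optional[str]  # e.g. "100.1.1.0/24", carried by Type-5 routes
    vni: Optional[int]     # L2 or L3 VNI
    rd: Optional[str]      # route distinguisher, e.g. "65011:1003434"

def match_evpn_default_route(r: EvpnRoute) -> bool:
    # Matches an IPv4 or IPv6 default route.
    return r.prefix in ("0.0.0.0/0", "0::0/0", "::/0")

def match_evpn_route_type(r: EvpnRoute, rtype: int) -> bool:
    return r.route_type == rtype

def match_evpn_vni(r: EvpnRoute, vni: int) -> bool:
    return r.vni == vni

def match_evpn_rd(r: EvpnRoute, rd: str) -> bool:
    return r.rd == rd

# A Type-5 route like the ones in the show output further below:
r = EvpnRoute(route_type=5, prefix="101.1.1.0/24",
              vni=13434, rd="65011:1003434")
print(match_evpn_route_type(r, 5), match_evpn_default_route(r))  # True False
```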


This enhancement extends the existing NetVisor OS vrouter-route-map-add/-modify commands, as described below:


CLI (network-admin@switch) > vrouter-route-map-add vrouter-name leaf1 name map

action permit | deny


with the additional parameters:


match-evpn-default-route

match-evpn-route-type

match-evpn-vni

match-evpn-rd


Note: This filtering capability must be configured on the EVPN border gateways. In redundant gateways, it must be enabled on both cluster nodes to ensure that the pair of EVPN border nodes filter routes symmetrically for the pod.


The following example filters Type 2 and Type 5 routes:


CLI (network-admin@switch) > vrouter-route-map-add vrouter-name vr1 name type52

action deny seq 1 match-evpn-route-type type-5


CLI (network-admin@switch) > vrouter-route-map-add vrouter-name vr1 name type52

action deny seq 2 match-evpn-route-type type-2


CLI (network-admin@switch) > vrouter-route-map-add vrouter-name vr1 name type52

action permit seq 3


Note that the third, closing command (a permit-all entry) is required for the route map to work: any route that matches no entry is implicitly denied.
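The need for a closing permit-all entry follows from first-match-wins evaluation with an implicit deny at the end. The following minimal Python model illustrates this behavior; it is a sketch, not NetVisor's implementation:

```python
def evaluate_route_map(entries, route_type):
    """First matching entry wins; an unmatched route is implicitly denied.

    entries: list of (seq, action, matched_type) tuples, where matched_type
    is an EVPN route type to match, or None for a match-anything entry.
    """
    for seq, action, matched_type in sorted(entries, key=lambda e: e[0]):
        if matched_type is None or matched_type == route_type:
            return action
    return "deny"  # implicit deny at the end of every route map

# The "type52" map from the example: deny Type-5, deny Type-2, permit rest.
type52 = [(1, "deny", 5), (2, "deny", 2), (3, "permit", None)]
print(evaluate_route_map(type52, 3))  # permit: Type-3 falls through to seq 3
# Without the closing permit-all, every other route would hit the
# implicit deny:
print(evaluate_route_map(type52[:2], 3))  # deny
```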


Next, for example, the route map can be applied in the outbound direction like so:


CLI (network-admin@switch) > vrouter-bgp-modify vrouter-name vr1 neighbor

190.12.1.2 route-map-out type52


to stop advertising EVPN routes to the neighbor specified in the route map.


Additionally, it is also possible to filter the incoming routes from the matching neighbor like so:


CLI (network-admin@switch) > vrouter-bgp-modify vrouter-name vr1 neighbor

190.12.1.2 route-map-in type52


The next example shows how to configure an IP filter list (100) and then apply it to a route map (rmap1) to filter EVPN routes in the outbound direction:


CLI (network-admin@switch) > vrouter-prefix-list-add vrouter-name vr1 name 100

seq 1 prefix 100.1.1.0/24 action permit


CLI (network-admin@switch) > vrouter-prefix-list-add vrouter-name vr1 name 100

seq 2 prefix 101.1.1.0/24 action deny


CLI (network-admin@switch) > vrouter-route-map-add vrouter-name vr1 name rmap1

action deny match-prefix 100 match-evpn-route-type type-5 seq 1


CLI (network-admin@switch) > vrouter-bgp-modify vrouter-name vr1 neighbor

190.1.2.3 route-map-out rmap1


CLI (network-admin@switch) > vrouter-route-map-show format vroutername,name,seq,action,match-prefix,match-evpn-route-type


vrouter-name name  seq action match-prefix match-evpn-route-type

------------ ----- --- ------ ------------ ---------------------

vr1          rmap1 10  deny   100          type-5

vr1          rmap1 20  permit none


On the neighboring node, route 101.1.1.0/24 is present, because it's not filtered by the prefix list 100, while 100.1.1.0/24 gets filtered out, as shown below:


CLI (network-admin@switch) > vrouter-evpn-bgp-routes-show route-type show-interval 2 format

vrouter-name,vni,ip,route-type,next-hop,extended-community


vrouter-name vni   ip           route-type next-hop   extended-community

------------ ----- ------------ ---------- ---------- --------------------------------------------

vr4          13434 101.1.1.0/24 5          190.11.1.1 RT:65011:1003434 ET:8 Rmac:00:00:5e:00:01:64

vr4          13434 33.3.1.0/29  5          190.11.1.1 RT:65011:1003434 ET:8 Rmac:00:00:5e:00:01:64


Configuring Virtual Service Groups with EVPN 


Starting from release 7.0.0, NetVisor OS supports the configuration of Virtual Service Groups (vSGs) not just in the local pod but also across EVPN pods. Refer to the Configuring Virtual Service Groups section in the Configuring VXLAN chapter for more details on the functionality.


vSGs can be used to import/export (i.e., ‘leak’) subnet prefixes between Unicast Fabric VRFs. 

Starting from release 7.0.0, it is possible to decide which subnets/prefixes to import/export locally and which ones to import/export over an EVPN transport using Type-5 routes. The selection can be configured based on the new parameters discussed below and is needed on one EVPN pod only, from which the shared prefixes can be extended to the other pods. 


By default, exported (i.e., ‘leaked’) networks remain local and are not extended over EVPN, so this behavior is consistent with prior releases. However, it can be changed when adding a VRF to a vSG with the export-leaked-networks parameter: 


CLI (network-admin@switch) > vsg-vrf-add vsg-name <vsg_name> vrf <vrf> [export-leaked-networks evpn | none] 


in which none is the default. For example, you can enable the extension over EVPN of the leaked networks/subnets associated with vrf3 like so: 


CLI (network-admin@switch) > vsg-create name vsg-1 


CLI (network-admin@switch) > vsg-vrf-add vsg-name vsg-1 vrf vrf3 [type promiscuous] export-leaked-networks evpn 


CLI (network-admin@switch*) > vsg-vrf-show




vsg-name vnet  vrf   type        export-leaked-networks

-------- ----- ----- ----------- -------------------------

vsg-1    0:0   vrf3  promiscuous evpn 


The vsg-vrf-remove command would be used to remove a VRF from a vSG and consequently also stop any extension across pods, if enabled. 


In a VRF that is extended over EVPN, you can also decide which subnets to share with the subnet-create or subnet-modify commands. For that you need to use the evpn-export-leaked-networks parameter, which defaults to permit:


CLI (network-admin@switch) > subnet-create name <name> … [evpn-export-leaked-networks permit | deny] 


In other words, when the evpn-export-leaked-networks parameter is not specified, leaked subnets are extended over EVPN by default (when export-leaked-networks is set to evpn). During creation or modification, you can elect not to extend a leaked subnet like so:
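Taken together, the two parameters form a simple two-level decision. The sketch below models the behavior described above, with the parameter values as plain strings (purely illustrative):

```python
def extended_over_evpn(vrf_export: str, subnet_export: str = "permit") -> bool:
    """Is a leaked subnet extended over EVPN?

    vrf_export: the vSG-level export-leaked-networks setting
                ('evpn' or 'none', default 'none').
    subnet_export: the subnet-level evpn-export-leaked-networks setting
                   ('permit' by default, or 'deny').
    """
    return vrf_export == "evpn" and subnet_export == "permit"

# Default: leaked networks remain local (consistent with prior releases).
print(extended_over_evpn("none"))          # False
# VRF added with export-leaked-networks evpn: subnets extend by default...
print(extended_over_evpn("evpn"))          # True
# ...unless an individual subnet opts out with deny.
print(extended_over_evpn("evpn", "deny"))  # False
```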


CLI (network-admin@switch) > subnet-create name vlan-235 vlan 235 vxlan 10235 vrf vrf3 network 13.1.5.0/24 anycast-gw-ip 13.1.5.254 network6 2002:13:1:5::/64 anycast-gw-ip6 2002:13:1:5::254 evpn-export-leaked-networks deny 


CLI (network-admin@switch*) > subnet-show name vlan-235 format name,scope,vlan,vxlan,vrf,network,state,enable,evpn-export-leaked-subnets 


name     scope  vlan vxlan  vrf  network     state enable evpn-export-leaked-subnets 

-------- ------ ---- ------ ---- ----------- ----- ------ -------------------------- 

vlan-235 fabric 235  10235  vrf3 13.1.5.0/24 ok    yes    deny 


CLI (network-admin@switch) > vsg-network-add vsg-name vsg-1 vrf vrf3 subnet vlan-235 


Similarly, you can also decide to extend a specific network in the vsg-network-add/modify commands like so: 


CLI (network-admin@switch) > vsg-network-add vsg-name vsg-1 vrf vrf3 network 101.1.1.0/30 


Configuring Bridge Domains with EVPN 


Starting from release 7.0.0, NetVisor OS supports the configuration of bridge domains (along with VLANs) across multiple EVPN pods. (For more information on bridge domains, refer to the Configuring Advanced Layer 2 Transport Services chapter.)


The configuration of a VXLAN bridge domain (BD in short) does not change with the addition of the EVPN support: the main (and only) requirement for a bridge domain that is extended across EVPN pods is to use a common VXLAN ID. In other words, this ID has global significance and represents the bridge domain end-to-end. Note that this ID cannot be shared with VLAN-based VXLAN configurations.


A bridge domain supports a variety of modes (dot1q, q-in-q, and untagged) as well as tagging schemes (auto, remove-tags, and transparent). All of them are transparently supported with EVPN as well.


Note: Starting from release 7.0.0, NetVisor OS supports bridge domains over EVPN on the following platforms: 

  • F9480-V (AS7326-56X), F9460-X (AS5835-54X), F9460-T (AS5835-54T) 
  • Dell S4100 and S5200 Series 
  • NRU03, and NRU-S0301


To create a BD in an EVPN pod, you can use the regular bridge-domain-create command (as also described in the respective chapter), for example like so: 


CLI (network-admin@switch) > bridge-domain-create name bd-evpn1 scope fabric vxlan 500500 rsvd-vlan 4030 


To add a port in the pod to a BD, you can use for example the command: 


CLI (network-admin@switch) > bridge-domain-port-add name bd-evpn1 port 81 vlans 55 


With EVPN you can also create another BD (name) in another pod, for example like so: 


CLI (network-admin@switch2) > bridge-domain-create name bd-evpn2 scope fabric vxlan 500500 rsvd-vlan 4050 


And add a port to it: 


CLI (network-admin@switch2) > bridge-domain-port-add name bd-evpn2 port 31 vlans 500


Note that the BD names and VLAN numbers in the two pods can even be different, but the VXLAN ID must be the same: the common VXLAN ID represents a unique forwarding domain end-to-end. In many cases, for consistency's sake, it can be convenient to use the same BD name end-to-end; when possible, using consistent VLAN numbers end-to-end can also be desirable to simplify the network design.
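The rule above boils down to a single comparison: two pod-local BDs belong to the same end-to-end forwarding domain exactly when their VXLAN IDs match. A small sketch (the dict fields are illustrative, not NetVisor's data model):

```python
# Pod-local bridge-domain records; names and reserved VLANs differ per pod.
pod1_bd = {"name": "bd-evpn1", "vxlan": 500500, "rsvd_vlan": 4030}
pod2_bd = {"name": "bd-evpn2", "vxlan": 500500, "rsvd_vlan": 4050}

def same_forwarding_domain(bd_a: dict, bd_b: dict) -> bool:
    """Two BDs form one end-to-end L2 domain iff their VXLAN IDs match;
    BD names and reserved VLANs only have pod-local significance."""
    return bd_a["vxlan"] == bd_b["vxlan"]

print(same_forwarding_domain(pod1_bd, pod2_bd))  # True
```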


You can check the BD configuration on a non-border gateway node: 


CLI (network-admin@switch*) > l2-table-show bd bd-evpn1 format mac,bd,vlan,vxlan,ip,ports,state,status,vtep-ip        


mac               bd       vlan vxlan  ip           ports state       status vtep-ip 

----------------- -------- ---- ------ ------------ ----- ----------- ------ ----------- 

00:12:c0:88:07:32 bd-evpn1      500500 205.205.1.64       tunnel,evpn        93.93.93.93 

00:12:c0:80:33:1e bd-evpn1 55   500500 205.205.1.51 81    active      host 


vtep-ip 93.93.93.93 corresponds to the border gateway’s VTEP: 


CLI (network-admin@bgw*) > fabric-evpn-node-show 


fabric-pod evpn-node version        vtep-ip     evpn-role 

---------- --------- -------------- ----------- --------- 

evpn-fab1  bgw       6.2.6020018465 93.93.93.93 primary 


CLI (network-admin@bgw*) > vtep-show name vtep9 


scope  name  location vrouter-name ip         virtual-ip  mac-learning 

------ ----- -------- ------------ ---------- ----------- ------------ 

fabric vtep9 bgw      vr9          93.93.93.9 93.93.93.93 on 


The BD extends to the border gateway (and beyond): 


CLI (network-admin@bgw*) > l2-table-show bd bd-evpn1 format mac,bd,vlan,vxlan,ip,ports,state,status,vtep-ip 


mac               bd       vlan vxlan  ip           state       status vtep-ip 

----------------- -------- ---- ------ ------------ ----------- ------ ---------- 

00:12:c0:88:07:32 bd-evpn1      500500 205.205.1.64 tunnel,evpn        22.22.22.2 

00:12:c0:80:33:1e bd-evpn1 55   500500 205.205.1.51 tunnel      host         


The BD is extended to Pod 2 too, as shown below on a non-border gateway node: 


CLI (network-admin@switch2*) > l2-table-show bd bd-evpn2 format mac,bd,vlan,vxlan,ports,state,status,vtep-ip 


mac               bd       vlan vxlan  ports state                    status vtep-ip 

----------------- -------- ---- ------ ----- ------------------------ ------ ---------- 

00:12:c0:88:07:32 bd-evpn2 500  500500 31    active                   host 

00:12:c0:80:33:1e bd-evpn2      500500       tunnel,local-tunnel,evpn        22.22.22.2 


This extension happens transparently as there is no BD-specific information in BGP: 


CLI (network-admin@bgw*) > vrouter-evpn-bgp-routes-show format vrouter-name,vni,mac,ip,route-type,extended-community 


vrouter-name vni    mac               ip              route-type extended-community 

------------ ------ ----------------- --------------- ---------- -------------------- 

vr9          500500 00:12:c0:88:07:32                 2          RT:5002:500500 ET:8 

vr9          500500 00:12:c0:88:07:32 205.205.1.64/32 2          RT:5002:500500 ET:8 


Note that, as with VLANs configured with EVPN, a BD is enabled once its VXLAN ID is added to the border gateway's VTEP in each pod and that same VXLAN ID is configured consistently across all the pods, as shown below on two nodes in different pods.
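This enablement condition can be read as a membership check across pods: the BD's VXLAN ID must appear in every pod's border gateway VTEP. An illustrative sketch (not NetVisor's data model):

```python
def bd_enabled_everywhere(bd_vxlan: int, vtep_vxlans_by_pod: dict) -> bool:
    """A BD is enabled end-to-end once its VXLAN ID is a member of the
    border gateway VTEP in every pod (illustrative model)."""
    return all(bd_vxlan in vxlans for vxlans in vtep_vxlans_by_pod.values())

# vtep9 in Pod 1 and vtep2 in Pod 2 both carry VXLAN 500500:
pods = {"pod1": {500500}, "pod2": {500500}}
print(bd_enabled_everywhere(500500, pods))  # True
```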


First make sure EVPN is enabled: 


CLI (network-admin@bgw*) > vrouter-show name vr9 format evpn-border


evpn-border

----------- 

enable


Then you can check that the BD’s VXLAN ID is correctly configured in Pod 1: 


CLI (network-admin@bgw*) > vtep-vxlan-show name vtep9 vxlan 500500 


name  vxlan  isolated 

----- ------ -------- 

vtep9 500500 no         


And also in Pod 2: 


CLI (network-admin@bgw2*) > vtep-vxlan-show name vtep2 vxlan 500500 


name  vxlan  isolated 

----- ------ -------- 

vtep2 500500 no 


Note: Due to a hardware limitation, you need to configure a BD port on the border gateway regardless of whether a host is attached to it. In other words, for BD configurations a port always needs to be set aside on the border gateway.

