Cisco APIC Layer 2 Configuration Guide
First Published: 2018-10-24
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,
INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH
THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY,
CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of
the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS.
CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT
LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS
HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network
topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional
and coincidental.
All printed copies and duplicate soft copies of this document are considered uncontrolled. See the current online version for the latest version.
Cisco has more than 200 offices worldwide. Addresses and phone numbers are listed on the Cisco website at www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1721R)
© 2019 Cisco Systems, Inc. All rights reserved.
CONTENTS
PREFACE Preface xi
Audience xi
Document Conventions xi
Related Documentation xiii
Documentation Feedback xiv
Obtaining Documentation and Submitting a Service Request xiv
CHAPTER 5 Bridging 17
Bridged Interface to an External Router 17
Bridge Domains and Subnets 18
Bridge Domain Options 20
Creating a Tenant, VRF, and Bridge Domain Using the GUI 21
Creating a Tenant, VRF, and Bridge Domain Using the NX-OS Style CLI 22
Creating a Tenant, VRF, and Bridge Domain Using the REST API 24
Configuring an Enforced Bridge Domain 25
Configuring an Enforced Bridge Domain Using the NX-OS Style CLI 25
Configuring an Enforced Bridge Domain Using the REST API 26
Configuring Flood in Encapsulation for All Protocols and Proxy ARP Across Encapsulations 27
CHAPTER 6 EPGs 33
About Endpoint Groups 33
Endpoint Groups 33
Access Policies Automate Assigning VLANs to EPGs 35
Per Port VLAN 36
VLAN Guidelines for EPGs Deployed on vPCs 38
Deploying an EPG on a Specific Port 39
Deploying an EPG on a Specific Node or Port Using the GUI 39
Deploying an EPG on a Specific Port with APIC Using the NX-OS Style CLI 40
Deploying an EPG on a Specific Port with APIC Using the REST API 41
Creating Domains, Attach Entity Profiles, and VLANs to Deploy an EPG on a Specific Port 42
Creating Domains, Attach Entity Profiles, and VLANs to Deploy an EPG on a Specific Port 42
Creating Domains, and VLANs to Deploy an EPG on a Specific Port Using the GUI 42
Creating AEP, Domains, and VLANs to Deploy an EPG on a Specific Port Using the NX-OS Style CLI 43
Creating AEP, Domains, and VLANs to Deploy an EPG on a Specific Port Using the REST API 44
Deploying an Application EPG through an AEP or Interface Policy Group to Multiple Ports 46
Deploying an EPG through an AEP to Multiple Interfaces Using the APIC GUI 46
Deploying an EPG through an Interface Policy Group to Multiple Interfaces Using the NX-OS Style CLI 47
Deploying an EPG through an AEP to Multiple Interfaces Using the REST API 48
Intra-EPG Isolation 49
Intra-EPG Endpoint Isolation 49
Intra-EPG Isolation for Bare Metal Servers 49
Intra-EPG Isolation for Bare Metal Servers 49
Configuring Intra-EPG Isolation for Bare Metal Servers Using the GUI 50
Configuring Intra-EPG Isolation for Bare Metal Servers Using the NX-OS Style CLI 51
Configuring Intra-EPG Isolation for Bare Metal Servers Using the REST API 53
Intra-EPG Isolation for VMWare vDS 53
Intra-EPG Isolation for VMware VDS or Microsoft Hyper-V Virtual Switch 53
Configuring Intra-EPG Isolation for VMware VDS or Microsoft Hyper-V Virtual Switch using the GUI 55
Configuring Intra-EPG Isolation for VMware VDS or Microsoft Hyper-V Virtual Switch using the NX-OS Style CLI 56
Configuring Intra-EPG Isolation for VMware VDS or Microsoft Hyper-V Virtual Switch using the REST API 58
Intra-EPG Isolation for AVS 59
Intra-EPG Isolation Enforcement for Cisco AVS 59
Configuring Intra-EPG Isolation for Cisco AVS Using the GUI 59
Configuring Intra-EPG Isolation for Cisco AVS Using the NX-OS Style CLI 60
Configuring Intra-EPG Isolation for Cisco AVS Using the REST API 60
Choosing Statistics to View for Isolated Endpoints on Cisco AVS 61
Viewing Statistics for Isolated Endpoints on Cisco AVS 61
Verifying Port Profile Configuration and Conversion Using the NX-OS Style CLI 112
Enabling Q-in-Q Encapsulation on Specific Leaf Switch Interfaces Using the GUI 198
Enabling Q-in-Q Encapsulation for Leaf Interfaces With Fabric Interface Policies Using the GUI 199
Audience
This guide is intended primarily for data center administrators with responsibilities and expertise in one or
more of the following:
• Virtual machine installation and administration
• Server administration
• Switch and network administration
Document Conventions
Command descriptions use the following conventions:
Convention              Description

bold                    Bold text indicates the commands and keywords that you enter literally as shown.

Italic                  Italic text indicates arguments for which the user supplies the values.

[x {y | z}]             Nested set of square brackets or braces indicate optional or required choices within
                        optional or required elements. Braces and a vertical bar within square brackets
                        indicate a required choice within an optional element.

variable                Indicates a variable for which you supply values, in context where italics cannot be used.

string                  A nonquoted set of characters. Do not use quotation marks around the string or the
                        string will include the quotation marks.

screen font             Terminal sessions and information the switch displays are in screen font.

boldface screen font    Information you must enter is in boldface screen font.

italic screen font      Arguments for which you supply values are in italic screen font.
Note Means reader take note. Notes contain helpful suggestions or references to material not covered in the manual.
Caution Means reader be careful. In this situation, you might do something that could result in equipment damage or
loss of data.
Related Documentation
Application Policy Infrastructure Controller (APIC) Documentation
The following companion guides provide documentation for APIC:
• Cisco APIC Getting Started Guide
• Cisco APIC Basic Configuration Guide
• Cisco ACI Fundamentals
• Cisco APIC Layer 2 Networking Configuration Guide
• Cisco APIC Layer 3 Networking Configuration Guide
• Cisco APIC NX-OS Style Command-Line Interface Configuration Guide
• Cisco APIC REST API Configuration Guide
• Cisco APIC Layer 4 to Layer 7 Services Deployment Guide
• Cisco ACI Virtualization Guide
• Cisco Application Centric Infrastructure Best Practices Guide
Documentation Feedback
To provide technical feedback on this document, or to report an error or omission, please send your comments
to [email protected]. We appreciate your feedback.
Table 1: New Features and Changed Information for Cisco APIC 4.0(2)

Feature: SAN boot through FEX HIF port vPC
Description: SAN boot is now supported through a FEX host interface (HIF) port vPC.
Where Documented: FCoE Connections
Table 2: New Features and Changed Information for Cisco APIC 4.0(1)

Feature: Fibre Channel NPV support enhancements
Description: The following capabilities are added:
• NPIV mode support
• Fibre Channel (FC) host (F) port connectivity in 4, 16, 32G and auto speed configurations
• Fibre Channel (FC) uplink (NP) port connectivity in 4, 8, 16, 32G and auto speed configurations
• Port-channel support on FC uplink ports
• Trunking support on FC uplink ports
Where Documented: Fibre Channel NPV
As traffic enters the fabric, ACI encapsulates and applies policy to it, forwards it as needed across the fabric
through a spine switch (maximum two-hops), and de-encapsulates it upon exiting the fabric. Within the fabric,
ACI uses Intermediate System-to-Intermediate System Protocol (IS-IS) and Council of Oracle Protocol (COOP)
for all forwarding of endpoint to endpoint communications. This enables all ACI links to be active, equal cost
multipath (ECMP) forwarding in the fabric, and fast-reconverging. For propagating routing information
between software defined networks within the fabric and routers external to the fabric, ACI uses the
Multiprotocol Border Gateway Protocol (MP-BGP).
VXLAN in ACI
VXLAN is an industry-standard protocol that extends Layer 2 segments over Layer 3 infrastructure to build
Layer 2 overlay logical networks. The ACI infrastructure Layer 2 domains reside in the overlay, with isolated
broadcast and failure bridge domains. This approach allows the data center network to grow without the risk
of creating too large a failure domain.
All traffic in the ACI fabric is normalized as VXLAN packets. At ingress, ACI encapsulates external VLAN,
VXLAN, and NVGRE packets in a VXLAN packet. The following figure shows ACI encapsulation
normalization.
Forwarding in the ACI fabric is not limited to or constrained by the encapsulation type or encapsulation
overlay network. An ACI bridge domain forwarding policy can be defined to provide standard VLAN behavior
where required.
Because every packet in the fabric carries ACI policy attributes, ACI can consistently enforce policy in a fully
distributed manner. ACI decouples application policy EPG identity from forwarding. The following illustration
shows how the ACI VXLAN header identifies application policy within the fabric.
Figure 3: ACI VXLAN Packet Format
The ACI VXLAN packet contains both Layer 2 MAC address and Layer 3 IP address source and destination
fields, which enables efficient and scalable forwarding within the fabric. The ACI VXLAN packet header
source group field identifies the application policy endpoint group (EPG) to which the packet belongs. The
VXLAN Instance ID (VNID) enables forwarding of the packet through tenant virtual routing and forwarding
(VRF) domains within the fabric. The 24-bit VNID field in the VXLAN header provides an expanded address
space for up to 16 million unique Layer 2 segments in the same network. This expanded address space gives
IT departments and cloud providers greater flexibility as they build large multitenant data centers.
VXLAN enables ACI to deploy Layer 2 virtual networks at scale across the fabric underlay Layer 3
infrastructure. Application endpoint hosts can be flexibly placed in the data center network without concern
for the Layer 3 boundary of the underlay infrastructure, while maintaining Layer 2 adjacency in a VXLAN
overlay network.
Layer 3 VNIDs Facilitate Transporting Inter-subnet Tenant Traffic
The ACI fabric provides tenant default gateway functionality that routes between the ACI fabric VXLAN
networks. For each tenant, the fabric provides a virtual default gateway that spans all of the leaf switches
assigned to the tenant. It does this at the ingress interface of the first leaf switch connected to the endpoint.
Each ingress interface supports the default gateway interface. All of the ingress interfaces across the fabric
share the same router IP address and MAC address for a given tenant subnet.
The ACI fabric decouples the tenant endpoint address, its identifier, from the location of the endpoint that is
defined by its locator or VXLAN tunnel endpoint (VTEP) address. Forwarding within the fabric is between
VTEPs. The following figure shows decoupled identity and location in ACI.
Figure 4: ACI Decouples Identity and Location
VXLAN uses VTEP devices to map tenant end devices to VXLAN segments and to perform VXLAN
encapsulation and de-encapsulation. Each VTEP function has two interfaces:
• A switch interface on the local LAN segment to support local endpoint communication through bridging
• An IP interface to the transport IP network
The IP interface has a unique IP address that identifies the VTEP device on the transport IP network known
as the infrastructure VLAN. The VTEP device uses this IP address to encapsulate Ethernet frames and transmit
the encapsulated packets to the transport network through the IP interface. A VTEP device also discovers the
remote VTEPs for its VXLAN segments and learns remote MAC Address-to-VTEP mappings through its IP
interface.
The VTEP in ACI maps the internal tenant MAC or IP address to a location using a distributed mapping
database. After the VTEP completes a lookup, the VTEP sends the original data packet encapsulated in
VXLAN with the destination address of the VTEP on the destination leaf switch. The destination leaf switch
de-encapsulates the packet and sends it to the receiving host. With this model, ACI uses a full mesh, single
hop, loop-free topology without the need to use the spanning-tree protocol to prevent loops.
The VXLAN segments are independent of the underlying network topology; conversely, the underlying IP
network between VTEPs is independent of the VXLAN overlay. It routes the encapsulated packets based on
the outer IP address header, which has the initiating VTEP as the source IP address and the terminating VTEP
as the destination IP address.
The following figure shows how routing within the tenant is done.
For each tenant VRF in the fabric, ACI assigns a single L3 VNID. ACI transports traffic across the fabric
according to the L3 VNID. At the egress leaf switch, ACI routes the packet from the L3 VNID to the VNID
of the egress subnet.
Traffic arriving at the fabric ingress that is sent to the ACI fabric default gateway is routed into the Layer 3
VNID. This provides very efficient forwarding in the fabric for traffic routed within the tenant. For example,
with this model, traffic between 2 VMs belonging to the same tenant, on the same physical host, but on
different subnets, only needs to travel to the ingress switch interface before being routed (using the minimal
path cost) to the correct destination.
To distribute external routes within the fabric, ACI route reflectors use multiprotocol BGP (MP-BGP). The
fabric administrator provides the autonomous system (AS) number and specifies the spine switches that
become route reflectors.
Layer 2 Prerequisites
Before you begin to perform the tasks in this guide, complete the following:
• Install the ACI fabric and ensure that the APIC controllers are online, and the APIC cluster is formed
and healthy—For more information, see Cisco APIC Getting Started Guide, Release 2.x.
• Create fabric administrator accounts for the administrators that will configure Layer 2 networks—For
instructions, see the User Access, Authentication, and Accounting and Management chapters in Cisco
APIC Basic Configuration Guide.
• Install and register the target leaf switches in the ACI fabric—For more information, see Cisco APIC
Getting Started Guide, Release 2.x.
For information about installing and registering virtual switches, see Cisco ACI Virtualization Guide.
• Configure the tenants, VRFs, and EPGs (with application profiles and contracts) that will consume the
Layer 2 networks—For instructions, see the Basic User Tenant Configuration chapter in Cisco APIC
Basic Configuration Guide.
Caution If you install 1 Gigabit Ethernet (GE) or 10GE links between the leaf and spine switches in the fabric, there
is risk of packets being dropped instead of forwarded, because of inadequate bandwidth. To avoid the risk,
use 40GE or 100GE links between the leaf and spine switches.
Networking Domains
A fabric administrator creates domain policies that configure ports, protocols, VLAN pools, and encapsulation.
These policies can be used exclusively by a single tenant, or shared. Once a fabric administrator configures
domains in the ACI fabric, tenant administrators can associate tenant endpoint groups (EPGs) to domains.
The following networking domain profiles can be configured:
• VMM domain profiles (vmmDomP) are required for virtual machine hypervisor integration.
• Physical domain profiles (physDomP) are typically used for bare metal server attachment and management
access.
• Bridged outside network domain profiles (l2extDomP) are typically used to connect a bridged external
network trunk switch to a leaf switch in the ACI fabric.
• Routed outside network domain profiles (l3extDomP) are used to connect a router to a leaf switch in the
ACI fabric.
• Fibre Channel domain profiles (fcDomP) are used to connect Fibre Channel VLANs and VSANs.
A domain is configured to be associated with a VLAN pool. EPGs are then configured to use the VLANs
associated with a domain.
Note EPG port and VLAN configurations must match those specified in the domain infrastructure configuration
with which the EPG associates. If not, the APIC will raise a fault. When such a fault occurs, verify that the
domain infrastructure configuration matches the EPG port and VLAN configurations.
Related Documents
For more information about Layer 3 Networking, see Cisco APIC Layer 3 Networking Configuration Guide.
For information about configuring VMM Domains, see Cisco ACI Virtual Machine Networking in Cisco ACI
Virtualization Guide.
Bridge Domains
About Bridge Domains
A bridge domain (BD) represents a Layer 2 forwarding construct within the fabric. One or more endpoint
groups (EPGs) can be associated with one bridge domain or subnet. A bridge domain can have one or more
subnets that are associated with it. One or more bridge domains together form a tenant network. When you
insert a service function between two EPGs, those EPGs must be in separate BDs. To use a service function
between two EPGs, those EPGs must be isolated; this follows legacy service insertion based on Layer 2 and
Layer 3 lookups.
VMM Domains
Virtual Machine Manager Domain Main Components
ACI fabric virtual machine manager (VMM) domains enable an administrator to configure connectivity
policies for virtual machine controllers. The essential components of an ACI VMM domain policy include
the following:
• Virtual Machine Manager Domain Profile—Groups VM controllers with similar networking policy
requirements. For example, VM controllers can share VLAN pools and application endpoint groups
(EPGs). The APIC communicates with the controller to publish network configurations such as port
groups that are then applied to the virtual workloads. The VMM domain profile includes the following
essential components:
• Credential—Associates a valid VM controller user credential with an APIC VMM domain.
• Controller—Specifies how to connect to a VM controller that is part of a policy enforcement domain.
For example, the controller specifies the connection to a VMware vCenter that is part of a VMM
domain.
Note A single VMM domain can contain multiple instances of VM controllers, but
they must be from the same vendor (for example, from VMware or from Microsoft).
• EPG Association—Endpoint groups regulate connectivity and visibility among the endpoints within the
scope of the VMM domain policy. VMM domain EPGs behave as follows:
• The APIC pushes these EPGs as port groups into the VM controller.
• An EPG can span multiple VMM domains, and a VMM domain can contain multiple EPGs.
• Attachable Entity Profile Association—Associates a VMM domain with the physical network
infrastructure. An attachable entity profile (AEP) is a network interface template that enables deploying
VM controller policies on a large set of leaf switch ports. An AEP specifies which switches and ports
are available, and how they are configured.
• VLAN Pool Association—A VLAN pool specifies the VLAN IDs or ranges used for VLAN encapsulation
that the VMM domain consumes.
VMM domains contain VM controllers such as VMware vCenter or Microsoft SCVMM Manager and the
credential(s) required for the ACI API to interact with the VM controller. A VMM domain enables VM
mobility within the domain but not across domains. A single VMM domain can contain multiple instances of
VM controllers but they must be the same kind. For example, a VMM domain can contain many VMware
vCenters managing multiple controllers each running multiple VMs but it may not also contain SCVMM
Managers. A VMM domain inventories controller elements (such as pNICs, vNICs, VM names, and so forth)
and pushes policies into the controller(s), creating port groups, and other necessary elements. The ACI VMM
domain listens for controller events such as VM mobility and responds accordingly.
Step 6 (Optional) Add an AAA security domain and click the Select check box.
Step 7 Click Submit.
Configure a physical domain by sending a POST with XML such as the following example:
Example:
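A minimal sketch of such a POST is shown below; the domain name phys1 and the VLAN pool name vlanPool1 are placeholders, and the example assumes the domain is tied to a static VLAN pool through the infraRsVlanNs relation:
URL for POST: https://apic-ip-address/api/mo/uni.xml
<physDomP name="phys1">
  <infraRsVlanNs tDn="uni/infra/vlanns-[vlanPool1]-static"/>
</physDomP>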
The ACI fabric is unaware of the presence of the external router and the APIC statically assigns the leaf switch
interface to its EPG.
A BD must be linked to a VRF (also known as a context or private network). With the exception of a Layer
2 VLAN, it must have at least one subnet (fvSubnet) associated with it. The BD defines the unique Layer 2
MAC address space and a Layer 2 flood domain if such flooding is enabled. While a VRF defines a unique
IP address space, that address space can consist of multiple subnets. Those subnets are defined in one or more
BDs that reference the corresponding VRF.
The options for a subnet under a BD or under an EPG are as follows:
• Public—the subnet can be exported to a routed connection.
• Private—the subnet applies only within its tenant.
• Shared—the subnet can be shared with and exported to multiple VRFs in the same tenant or across
tenants as part of a shared service. An example of a shared service is a routed connection to an EPG
present in another VRF in a different tenant. This enables traffic to pass in both directions across VRFs.
An EPG that provides a shared service must have its subnet configured under that EPG (not under a BD),
and its scope must be set to advertised externally, and shared between VRFs.
Note Shared subnets must be unique across the VRF involved in the communication.
When a subnet under an EPG provides a Layer 3 external network shared service,
such a subnet must be globally unique within the entire ACI fabric.
Note Beginning with Cisco APIC Release 3.1(1), on the Cisco Nexus 9000 series switches (with names ending
with EX and FX and onwards), the following protocols can be flooded in encapsulation or flooded in a bridge
domain: OSPF/OSPFv3, BGP, EIGRP, CDP, LACP, LLDP, ISIS, IGMP, PIM, ST-BPDU, ARP/GARP,
RARP, ND.
Bridge domains can span multiple switches. A bridge domain can contain multiple subnets, but a subnet is
contained within a single bridge domain. If the bridge domain (fvBD) limitIPLearnToSubnets property is
set to yes, endpoint learning will occur in the bridge domain only if the IP address is within any of the
configured subnets for the bridge domain or within an EPG subnet when the EPG is a shared service provider.
Subnets can span multiple EPGs; one or more EPGs can be associated with one bridge domain or subnet. In
hardware proxy mode, ARP traffic is forwarded to an endpoint in a different bridge domain when that endpoint
has been learned as part of the Layer 3 lookup operation.
Note Bridge domain legacy mode allows only one VLAN per bridge domain. When bridge domain legacy mode
is specified, bridge domain encapsulation is used for all EPGs that reference the bridge domain; EPG
encapsulation, if defined, is ignored. Unicast routing does not apply for bridge domain legacy mode. A leaf
switch can be configured with multiple bridge domains that operate in a mixture of legacy or normal modes.
However, once a bridge domain is configured, its mode cannot be switched.
Caution Changing from unknown unicast flooding mode to hw-proxy mode is disruptive to the traffic in the bridge
domain.
If IP routing is enabled in the bridge domain, the mapping database learns the IP address of the endpoints in
addition to the MAC address.
The Layer 3 Configurations tab of the bridge domain panel allows the administrator to configure the following
parameters:
• Unicast Routing: If this setting is enabled and a subnet address is configured, the fabric provides the
default gateway function and routes the traffic. Enabling unicast routing also instructs the mapping
database to learn the endpoint IP-to-VTEP mapping for this bridge domain. The IP learning is not
dependent upon having a subnet configured under the bridge domain.
• Subnet Address: This option configures the SVI IP addresses (default gateway) for the bridge domain.
• Limit IP Learning to Subnet: This option is similar to a unicast reverse-path forwarding (uRPF) check. If this
option is selected, the fabric will not learn IP addresses from a subnet other than the one configured on
the bridge domain.
Caution Enabling Limit IP Learning to Subnet is disruptive to the traffic in the bridge domain.
When IP learning is disabled, you have to enable the Global Subnet Prefix check option in System > System
Settings > Fabric Wide Setting > Enforce Subnet Check. See the Online Help for details.
SUMMARY STEPS
1. On the menu bar, choose Tenants > Add Tenant.
2. In the Create Tenant dialog box, perform the following tasks:
3. In the Navigation pane, expand Tenant-name > Networking, and in the Work pane, drag the VRF icon
to the canvas to open the Create VRF dialog box, and perform the following tasks:
4. In the Networking pane, drag the BD icon to the canvas while connecting it to the VRF icon. In the
Create Bridge Domain dialog box that displays, perform the following tasks:
5. In the Networks pane, drag the L3 icon down to the canvas while connecting it to the VRF icon. In the
Create Routed Outside dialog box that displays, perform the following tasks:
DETAILED STEPS
Note Before creating the tenant configuration, you must create a VLAN domain using the vlan-domain command
and assign the ports to it.
Step 1 Create a VLAN domain (which contains a set of VLANs that are allowable in a set of ports) and allocate VLAN inputs,
as follows:
Example:
In the following example, the VLAN domain dom_exampleCorp is created and VLANs 50 - 500 are allocated.
apic1# configure
apic1(config)# vlan-domain dom_exampleCorp
apic1(config-vlan)# vlan 50-500
apic1(config-vlan)# exit
Step 2 Once the VLANs have been allocated, specify the leaf (switch) and interface for which these VLANs can be used. Then,
enter "vlan-domain member" and then the name of the domain you just created.
Example:
In the following example, these VLANs (50 - 500) have been enabled on leaf 101 on interface ethernet 1/2-4 (three ports:
1/2, 1/3, and 1/4). This means that if you are using these interfaces, you can use VLANs 50 - 500 on these ports for
any application that the VLAN can be used for.
apic1(config-vlan)# leaf 101
apic1(config-vlan)# interface ethernet 1/2-4
apic1(config-leaf-if)# vlan-domain member dom_exampleCorp
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
Step 3 Create a tenant in global configuration mode, as shown in the following example:
Example:
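A minimal sketch, using the tenant name exampleCorp to match the VLAN domain created in Step 1 (adjust the name to your environment):
apic1# configure
apic1(config)# tenant exampleCorp
apic1(config-tenant)# exit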
Step 4 Create a private network (also called VRF) in tenant configuration mode as shown in the following example:
Example:
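For example, a VRF named exampleCorp_v1 (an illustrative name) can be created under the tenant as follows:
apic1(config)# tenant exampleCorp
apic1(config-tenant)# vrf context exampleCorp_v1
apic1(config-tenant-vrf)# exit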
Step 5 Create a bridge domain (BD) under the tenant, as shown in the following example:
Example:
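A sketch of this step, creating the bridge domain exampleCorp_b1 (the name used in Step 6) and associating it with the VRF from Step 4:
apic1(config-tenant)# bridge-domain exampleCorp_b1
apic1(config-tenant-bd)# vrf member exampleCorp_v1
apic1(config-tenant-bd)# exit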
Step 6 Allocate IP addresses for the BD (ip and ipv6), as shown in the following example.
Example:
apic1(config-tenant)# interface bridge-domain exampleCorp_b1
apic1(config-tenant-interface)# ip address 172.1.1.1/24
apic1(config-tenant-interface)# ipv6 address 2001:1:1::1/64
apic1(config-tenant-interface)# exit
What to do next
The next section describes how to add an application profile, create an application endpoint group (EPG), and
associate the EPG to the bridge domain.
Related Topics
Configuring a VLAN Domain Using the NX-OS Style CLI
Creating a Tenant, VRF, and Bridge Domain Using the REST API
SUMMARY STEPS
1. Create a tenant.
2. Create a VRF and bridge domain.
DETAILED STEPS
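Step 1 Create a tenant.
Example:
A minimal sketch of this POST is shown below; the URL and tenant name follow the pattern used in Step 2 and can be adjusted to your environment.
URL for POST: https://apic-ip-address/api/mo/uni.xml
<fvTenant name="ExampleCorp"/>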
When the POST succeeds, you see the object that you created in the output.
Step 2 Create a VRF and bridge domain.
Note The Gateway Address can be an IPv4 or an IPv6 address. For more details about the IPv6 gateway address, see
the related KB article, KB: Creating a Tenant, VRF, and Bridge Domain with IPv6 Neighbor Discovery.
Example:
URL for POST: https://apic-ip-address/api/mo/uni/tn-ExampleCorp.xml
<fvTenant name="ExampleCorp">
<fvCtx name="pvn1"/>
<fvBD name="bd1">
<fvRsCtx tnFvCtxName="pvn1"/>
<fvSubnet ip="10.10.100.1/24"/>
</fvBD>
</fvTenant>
Note If you have a public subnet when you configure the routed outside, you must associate the bridge domain with
the outside configuration.
Note • The exception IP addresses can ping all of the BD gateways across all of your VRFs.
• A loopback interface configured for an L3 out does not enforce reachability to the IP address that is
configured for the subject loopback interface.
• When an eBGP peer IP address exists in a different subnet than the subnet of the L3out interface, the
peer subnet must be added to the allowed exception subnets.
Otherwise, eBGP traffic is blocked because the source IP address exists in a different subnet than the
L3out interface subnet.
SUMMARY STEPS
1. Create and enable the tenant:
2. Add the subnet to the exception list.
DETAILED STEPS
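Step 1 Create and enable the tenant.
Example:
The following is a minimal sketch; the tenant and VRF names (coke and cokeVrf) match the commands shown later in this section:
apic1# configure
apic1(config)# tenant coke
apic1(config-tenant)# vrf context cokeVrf
apic1(config-tenant-vrf)# bd-enforce enable
apic1(config-tenant-vrf)# exit
apic1(config-tenant)# exit
Step 2 Add the subnet to the exception list.
Example:
apic1(config)# bd-enf-exp-ip add 1.2.3.4/24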
You can confirm if the enforced bridge domain is operational using the following type of command:
apic1# show running-config all | grep bd-enf
bd-enforce enable
bd-enf-exp-ip add 1.2.3.4/24
Example

The following command removes the subnet from the exception list:
apic1(config)# no bd-enf-exp-ip 1.2.3.4/24

What to do next

To disable the enforced bridge domain, run the following commands:
apic1(config)# tenant coke
apic1(config-tenant)# vrf context cokeVrf
apic1(config-tenant-vrf)# no bd-enforce enable
DETAILED STEPS
Step 2 Create a VRF and bridge domain.
Note The Gateway Address can be an IPv4 or an IPv6 address. For more details about the IPv6 gateway address, see the related KB article, KB: Creating a Tenant, VRF, and Bridge Domain with IPv6 Neighbor Discovery.
Example:
URL for POST: https://apic-ip-address/api/mo/uni/tn-ExampleCorp.xml
<fvTenant name="ExampleCorp">
<fvCtx name="pvn1"/>
<fvBD name="bd1">
<fvRsCtx tnFvCtxName="pvn1" bdEnforceEnable="yes"/>
<fvSubnet ip="10.10.100.1/24"/>
</fvBD>
</fvTenant>
In this topology, the fabric has a single tunnel network defined that uses one uplink to connect with the ACI
leaf node. Two user VLANs, VLAN 10 and VLAN 11, are carried over this link. The BD is set in
flooding mode as the servers' gateways are outside the ACI cloud. ARP negotiations occur in the following
process:
• The server sends one ARP broadcast request over the VLAN 10 network.
• The ARP packet travels through the tunnel network to the external server, which records the source MAC
address, learned from its downlink.
• The server then forwards the packet out its uplink to the ACI leaf switch.
• The ACI fabric sees the ARP broadcast packet entering on access port VLAN 10 and maps it to EPG1.
• Because the BD is set to flood ARP packets, the packet is flooded within the BD and thus to the ports
under both EPGs as they are in the same BD.
• The same ARP broadcast packet comes back over the same uplink.
• The external server sees the original source MAC address from this uplink.
Result: the external device has the same MAC address learned from both the downlink port and uplink port
within its single MAC forwarding table, causing traffic disruptions.
Recommended Solution
The Flood in Encapsulation option is used to limit flooding traffic inside the BD to a single encapsulation.
When two EPGs share the same BD and Flood in Encapsulation is enabled, the EPG flooding traffic does
not reach the other EPG.
Beginning with Cisco APIC Release 3.1(1), on the Cisco Nexus 9000 series switches (with names ending
with EX and FX and onwards), all protocols are flooded in encapsulation. Also when enabling Flood in
Encapsulation for any inter-VLAN traffic, Proxy ARP ensures that the MAC flap issue does not occur, and
it limits all flooding (ARP, GARP, and BUM) to the encapsulation. This applies for all EPGs under the bridge
domain where it is enabled.
Note Before Cisco APIC release 3.1(1), these features are not supported (Proxy ARP and the flooding of all protocols
within the encapsulation). In an earlier Cisco APIC release or on earlier-generation switches (without
EX or FX in their names), if you enable Flood in Encapsulation, it does not function; no informational fault
is generated, but the APIC decreases the health score by 1.
The recommended solution is to support multiple EPGs under one BD by adding an external switch. This
design with multiple EPGs under one bridge domain with an external switch is illustrated in the following
figure.
Figure 11: Design with Multiple EPGs Under one Bridge Domain with an External Switch
Within the same BD, some EPGs can be service nodes and other EPGs can have flood in encapsulation
configured. A load balancer resides on a different EPG. The load balancer receives packets from the EPGs
and sends them to the other EPGs (there is no Proxy ARP, and flood in encapsulation does not take place).
If you want to add flood in encapsulation only for selective EPGs, using the NX-OS style CLI, enter the
flood-on-encapsulation enable command under EPGs.
If you want to add flood in encapsulation for all EPGs, you can use the multi-destination encap-flood CLI
command under the bridge domain.
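For example, the two commands might be applied as in the following sketch; the tenant, application, EPG, and bridge domain names are illustrative:
apic1(config)# tenant t1
apic1(config-tenant)# application ap1
apic1(config-tenant-app)# epg epg1
apic1(config-tenant-app-epg)# flood-on-encapsulation enable
apic1(config-tenant-app-epg)# exit
apic1(config-tenant-app)# exit
apic1(config-tenant)# bridge-domain bd1
apic1(config-tenant-bd)# multi-destination encap-flood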
Using the CLI, flood in encapsulation configured for an EPG takes precedence over flood in encapsulation
that is configured for a bridge domain.
When both BDs and EPGs are configured, the behavior is described as follows:

• Flood in encapsulation at the EPG and flood in encapsulation at the bridge domain: flood in encapsulation takes place for the traffic on all VLANs within the bridge domain.
• No flood in encapsulation at the EPG and flood in encapsulation at the bridge domain: flood in encapsulation takes place for the traffic on all VLANs within the bridge domain.
• Flood in encapsulation at the EPG and no flood in encapsulation at the bridge domain: flood in encapsulation takes place for the traffic on that VLAN within the EPG of the bridge domain.
• No flood in encapsulation at the EPG and no flood in encapsulation at the bridge domain: flooding takes place within the entire bridge domain.
Limitations
Here are the limitations for using flood in encapsulation for all protocols:
• Flood in encapsulation does not work in ARP unicast mode.
• Neighbor Solicitation (NS/ND) is not supported for this release.
• You must enable per-port CoPP with flood in encapsulation.
• Flood in encapsulation is supported only in BD in flood mode and ARP in flood mode. BD spine proxy
mode is not supported.
• IPv4 L3 multicast is not supported.
• IPv6 is not supported.
• VM migration to a different VLAN has momentary issues (60 seconds).
• A load balancer acting as a gateway is supported, for example, in one to one communication between
VMs and the load balancer in non-proxy mode. No Layer 3 communication is supported. The traffic
between VMs and the load balancer is on Layer 2. However, if intra-EPG communication passes through
the load balancer, then the load balancer changes the SIP and SMAC; otherwise it can lead to a MAC
flap. Therefore, Dynamic Source Routing (DSR) mode is not supported in the load balancer.
• Setting up communication between VMs through a firewall, as a gateway, is not recommended because
if the VM IP address changes to the gateway IP address instead of the firewall IP address, then the firewall
can be bypassed.
• Prior releases are not supported (even interoperating between prior and current releases).
• The Proxy ARP and Flood in Encapsulation features are not supported for VXLAN encapsulation.
• A mixed-mode topology with Application Leaf Engine (ALE) and Application Spine Engine (ASE) is
not recommended and is not supported with flood in encapsulation. Enabling them together can prevent
QoS priorities from being enforced.
• Flood in encapsulation is not supported with Remote Leaf switches and Cisco ACI Multi-Site.
• Flood in encapsulation is not supported for Common Pervasive Gateway (CPGW).
An EPG is a managed object that is a named logical entity that contains a collection of endpoints. Endpoints
are devices that are connected to the network directly or indirectly. They have an address (identity), a location,
attributes (such as version or patch level), and can be physical or virtual. Knowing the address of an endpoint
also enables access to all its other identity details. EPGs are fully decoupled from the physical and logical
topology. Endpoint examples include servers, virtual machines, network-attached storage, or clients on the
Internet. Endpoint membership in an EPG can be dynamic or static.
The ACI fabric can contain the following types of EPGs:
• Application endpoint group (fvAEPg)
• Layer 2 external outside network instance endpoint group (l2extInstP)
• Layer 3 external outside network instance endpoint group (l3extInstP)
• Management endpoint groups for out-of-band (mgmtOoB) or in-band (mgmtInB) access.
EPGs contain endpoints that have common policy requirements such as security, virtual machine mobility
(VMM), QoS, or Layer 4 to Layer 7 services. Rather than configure and manage endpoints individually, they
are placed in an EPG and are managed as a group.
Policies apply to EPGs, never to individual endpoints. An EPG can be statically configured by an administrator
in the APIC, or dynamically configured by an automated system such as vCenter or OpenStack.
Note When an EPG uses a static binding path, the encapsulation VLAN associated with this EPG must be part of
a static VLAN pool. For IPv4/IPv6 dual-stack configurations, the IP address property is contained in the
fvStIp child property of the fvStCEp MO. Multiple fvStIp objects supporting IPv4 and IPv6 addresses can
be added under one fvStCEp object. When upgrading ACI from IPv4-only firmware to versions of firmware
that support IPv6, the existing IP property is copied to an fvStIp MO.
Regardless of how an EPG is configured, EPG policies are applied to the endpoints they contain.
WAN router connectivity to the fabric is an example of a configuration that uses a static EPG. To configure
WAN router connectivity to the fabric, an administrator configures an l3extInstP EPG that includes any
endpoints within an associated WAN subnet. The fabric learns of the EPG endpoints through a discovery
process as the endpoints progress through their connectivity life cycle. Upon learning of the endpoint, the
fabric applies the l3extInstP EPG policies accordingly. For example, when a WAN connected client initiates
a TCP session with a server within an application (fvAEPg) EPG, the l3extInstP EPG applies its policies to
that client endpoint before the communication with the fvAEPg EPG web server begins. When the client-server
TCP session ends and communication between the client and server terminates, that endpoint no longer exists
in the fabric.
Note If a leaf switch is configured for static binding (leaf switches) under an EPG, the following restrictions apply:
• The static binding cannot be overridden with a static path.
• Interfaces in that switch cannot be used for routed external network (L3out) configurations.
• Interfaces in that switch cannot be assigned IP addresses.
Virtual machine management connectivity to VMware vCenter is an example of a configuration that uses a
dynamic EPG. Once the virtual machine management domain is configured in the fabric, vCenter triggers the
dynamic configuration of EPGs that enable virtual machine endpoints to start up, move, and shut down as
needed.
In the policy model, EPGs are tightly coupled with VLANs. For traffic to flow, an EPG must be deployed on
a leaf port with a VLAN in a physical, VMM, L2out, L3out, or Fibre Channel domain. For more information,
see Networking Domains, on page 11.
In the policy model, the domain profile associated to the EPG contains the VLAN instance profile. The domain
profile contains both the VLAN instance profile (VLAN pool) and the Attachable Access Entity Profile
(AEP), which are associated directly with application EPGs. The AEP deploys the associated application
EPGs to all the ports to which it is attached, and automates the task of assigning VLANs. While a large data
center could easily have thousands of active virtual machines provisioned on hundreds of VLANs, the ACI
fabric can automatically assign VLAN IDs from VLAN pools. This saves a tremendous amount of time,
compared with trunking down VLANs in a traditional data center.
VLAN Guidelines
Use the following guidelines to configure the VLANs where EPG traffic will flow.
• Multiple domains can share a VLAN pool, but a single domain can only use one VLAN pool.
• To deploy multiple EPGs with same VLAN encapsulation on a single leaf switch, see Per Port VLAN,
on page 36.
To enable deploying multiple EPGs using the same encapsulation number, on a single leaf switch, use the
following guidelines:
• EPGs must be associated with different bridge domains.
• EPGs must be deployed on different ports.
• Both the port and EPG must be associated with the same domain that is associated with a VLAN pool
that contains the VLAN number.
• Ports must be configured with portLocal VLAN scope.
For example, with Per Port VLAN for the EPGs deployed on ports 3 and 9 in the diagram above, both using
VLAN-5, port 3 and EPG1 are associated with Dom1 (pool 1) and port 9 and EPG2 are associated with Dom2
(pool 2).
Traffic coming from port 3 is associated with EPG1, and traffic coming from port 9 is associated with EPG2.
This does not apply to ports configured for Layer 3 external outside connectivity.
Note Avoid adding more than one domain to the AEP that is used to deploy the EPG on the ports, to avoid the risk
of traffic forwarding issues.
Only ports that have the vlanScope set to portlocal allow allocation of separate (Port, VLAN) translation
entries in both ingress and egress directions. For a given port with the vlanScope set to portGlobal (the
default), each VLAN used by an EPG must be unique on a given leaf switch.
Note Per Port VLAN is not supported on interfaces configured with Multiple Spanning Tree (MST), which requires
VLAN IDs to be unique on a single leaf switch, and the VLAN scope to be global.
Reusing VLAN Numbers Previously Used for EPGs on the Same Leaf Switch
If you have previously configured VLANs for EPGs that are deployed on a leaf switch port, and you want to
reuse the same VLAN numbers for different EPGs on different ports on the same leaf switch, use a process,
such as the following example, to set them up without disruption:
In this example, EPGs were previously deployed on a port associated with a domain including a VLAN pool
with a range of 9-100. You want to configure EPGs using VLAN encapsulations from 9-20.
1. Configure a new VLAN pool on a different port (with a range of, for example, 9-20).
2. Configure a new physical domain that includes leaf ports that are connected to firewalls.
3. Associate the physical domain to the VLAN pool you configured in step 1.
4. Configure the VLAN Scope as portLocal for the leaf port.
5. Associate the new EPGs (used by the firewall in this example) to the physical domain you created in step
2.
6. Deploy the EPGs on the leaf ports.
When an EPG is deployed on a vPC, it must be associated with the same domain (with the same VLAN pool)
that is assigned to the leaf switch ports on the two legs of the vPC.
In this diagram, EPG A is deployed on a vPC that is deployed on ports on Leaf switch 1 and Leaf switch 2.
The two leaf switch ports and the EPG are all associated with the same domain, containing the same VLAN
pool.
4. (Optional) From the Mode drop-down list, accept the default Trunk or choose another mode.
5. In the Port Encap field, enter the secondary VLAN to be deployed.
6. (Optional) In the Primary Encap field, enter the primary VLAN to be deployed.
Deploying an EPG on a Specific Port with APIC Using the NX-OS Style CLI
apic1# configure
apic1(config)# tenant t1
Note The vlan-domain and vlan-domain member commands mentioned in the above example are prerequisites for
deploying an EPG on a port.
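A fuller sketch of this sequence, assuming an application app1, EPG epg1, bridge domain bd1, VLAN domain dom1, and VLAN 20 (all names and values are illustrative), is as follows:
apic1(config-tenant)# application app1
apic1(config-tenant-app)# epg epg1
apic1(config-tenant-app-epg)# bridge-domain member bd1
apic1(config-tenant-app-epg)# exit
apic1(config-tenant-app)# exit
apic1(config-tenant)# exit
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/2
apic1(config-leaf-if)# vlan-domain member dom1
apic1(config-leaf-if)# switchport trunk allowed vlan 20 tenant t1 application app1 epg epg1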
Deploying an EPG on a Specific Port with APIC Using the REST API
Before you begin
The tenant where you deploy the EPG is created.
Note All endpoint groups (EPGs) require a domain. Interface policy groups must also be associated with an Attach
Entity Profile (AEP), and the AEP must be associated with a domain, if the AEP and EPG have to be in the same
domain. Based on the association of EPGs to domains and of interface policy groups to domains, the ports
and VLANs that the EPG uses are validated. The following domain types associate with EPGs:
• Application EPGs
• Layer 3 external outside network instance EPGs
• Layer 2 external outside network instance EPGs
• Management EPGs for out-of-band and in-band access
The APIC checks if an EPG is associated with one or more of these types of domains. If the EPG is not
associated, the system accepts the configuration but raises a fault. The deployed configuration may not function
properly if the domain association is not valid. For example, if the VLAN encapsulation is not valid for use
with the EPG, the deployed configuration may not function properly.
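A minimal sketch of such a POST for a static path binding is shown below; the tenant, application, EPG, bridge domain, node, interface, and VLAN values are illustrative.
URL for POST: https://apic-ip-address/api/mo/uni.xml
<fvTenant name="t1">
  <fvAp name="app1">
    <fvAEPg name="epg1">
      <fvRsBd tnFvBDName="bd1"/>
      <fvRsDomAtt tDn="uni/phys-Dom1"/>
      <fvRsPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/2]" encap="vlan-20" instrImedcy="immediate"/>
    </fvAEPg>
  </fvAp>
</fvTenant>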
Creating Domains, and VLANs to Deploy an EPG on a Specific Port Using the GUI
Before you begin
• The tenant where you deploy the EPG is already created.
• An EPG is statically deployed on a specific port.
Step 1 On the menu bar, click Fabric > External Access Policies.
Step 2 In the Navigation pane, click Quick Start.
Step 3 In the Work pane, click Configure an Interface, PC, and VPC.
Step 4 In the Configure an Interface, PC, and VPC dialog box, click the + icon to select switches and perform the following
actions:
a) From the Switches drop-down list, check the check box for the desired switch.
g) In the Interface Policy Group field, choose the Create One radio button.
h) From the Link Level Policy drop-down list, choose the appropriate link level policy.
Note Create additional policies as desired, otherwise the default policy settings are available.
i) From the Attached Device Type field, choose the appropriate device type.
j) In the Domain field, click the Create One radio button.
k) In the Domain Name field, enter a domain name.
l) In the VLAN field, click the Create One radio button.
m) In the VLAN Range field, enter the desired VLAN range. Click Save, and click Save again.
n) Click Submit.
Step 5 On the menu bar, click Tenants. In the Navigation pane, expand the appropriate Tenant_name > Application Profiles >
Application EPGs > EPG_name and perform the following actions:
a) Right-click Domains (VMs and Bare-Metals), and click Add Physical Domain Association.
b) In the Add Physical Domain Association dialog box, from the Physical Domain Profile drop-down list, choose
the appropriate domain.
c) Click Submit.
The AEP is associated with a specific port on a node and with a domain. The physical domain is associated with the
VLAN pool and the Tenant is associated with this physical domain.
The switch profile and the interface profile are created. The policy group is created in the port block under the interface
profile. The AEP is automatically created, and it is associated with the port block and with the domain. The domain is
associated with the VLAN pool and the Tenant is associated with the domain.
Creating AEP, Domains, and VLANs to Deploy an EPG on a Specific Port Using the NX-OS Style CLI
Before you begin
• The tenant where you deploy the EPG is already created.
• An EPG is statically deployed on a specific port.
Step 2 Create an interface policy group and assign a VLAN domain to the policy group:
Example:
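A sketch of this step, using a policy group named pg1 and the VLAN domain dom1 (both names are illustrative; the same commands appear in a later example in this chapter):
apic1(config)# template policy-group pg1
apic1(config-pol-grp-if)# vlan-domain member dom1
apic1(config-pol-grp-if)# exit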
Step 3 Create a leaf interface profile, assign an interface policy group to the profile, and assign the interface IDs on which the
profile will be applied:
Example:
Step 4 Create a leaf profile, assign the leaf interface profile to the leaf profile, and assign the leaf IDs on which the profile will
be applied:
Example:
Creating AEP, Domains, and VLANs to Deploy an EPG on a Specific Port Using the REST API
Before you begin
• The tenant where you deploy the EPG is already created.
• An EPG is statically deployed on a specific port.
Step 1 Create the interface profile, switch profile and the Attach Entity Profile (AEP).
Example:
<infraInfra>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-<interface_profile_name>"/>
</infraNodeP>
<infraAttEntityP name="<attach_entity_profile_name>"
dn="uni/infra/attentp-<attach_entity_profile_name>" >
<infraRsDomP tDn="uni/phys-<physical_domain_name>"/>
</infraAttEntityP>
</infraInfra>
Deploying an EPG through an AEP to Multiple Interfaces Using the APIC GUI
You can quickly associate an application with an attached entity profile to deploy that EPG over all
the ports associated with that attached entity profile.
Step 3 Use the Application EPGs table to associate the target application EPG with this attached entity profile. Click + to add
an application EPG entry. Each entry contains the following fields:
Field             Action
Application EPGs  Use the drop-down list to choose the associated Tenant, Application Profile, and target
                  application EPG.
Encap             Enter the name of the VLAN over which the target application EPG will communicate.
Primary Encap     If the application EPG requires a primary VLAN, enter the name of the primary VLAN.
Mode              Use the drop-down list to specify the mode in which data is transmitted:
                  • Trunk -- Choose if traffic from the host is tagged with a VLAN ID.
                  • Access -- Choose if traffic from the host is tagged with an 802.1p tag.
                  • Access Untagged -- Choose if the traffic from the host is untagged.
Step 1 Associate the target EPG with the interface policy group.
The sample command sequence specifies an interface policy group pg3 associated with VLAN domain, domain1, and
with VLAN 1261. The application EPG, epg47 is deployed to all interfaces associated with this policy group.
Example:
apic1# configure terminal
apic1(config)# template policy-group pg3
apic1(config-pol-grp-if)# vlan-domain member domain1
apic1(config-pol-grp-if)# switchport trunk allowed vlan 1261 tenant tn10 application pod1-AP epg epg47
Step 2 Check the target ports to ensure deployment of the policies of the interface policy group associated with application EPG.
The output of the sample show command sequence indicates that policy group pg3 is deployed on Ethernet port 1/20 on
leaf switch 1017.
Example:
apic1# show run leaf 1017 int eth 1/20
# Command: show running-config leaf 1017 int eth 1/20
# Time: Mon Jun 27 22:12:10 2016
leaf 1017
interface ethernet 1/20
policy-group pg3
exit
exit
ifav28-ifc1#
Deploying an EPG through an AEP to Multiple Interfaces Using the REST API
The interface selectors in the AEP enable you to configure multiple paths for an AEPg. The following can be
selected:
1. A node or a group of nodes
2. An interface or a group of interfaces
The interfaces consume an interface policy group (and so an infra:AttEntityP).
3. The infra:AttEntityP is associated to the AEPg, thus specifying the VLANs to use.
An infra:AttEntityP can be associated with multiple AEPgs with different VLANs.
When you associate the infra:AttEntityP with the AEPg, as in 3, this deploys the AEPg on the nodes selected
in 1, on the interfaces in 2, with the VLAN provided by 3.
In this example, the AEPg uni/tn-Coke/ap-AP/epg-EPG1 is deployed on interfaces 1/10, 1/11, and 1/12 of
nodes 101 and 102, with vlan-102.
To deploy an AEPg on selected nodes and interfaces, send a post with XML such as the following:
Example:
<infraInfra dn="uni/infra">
<infraNodeP name="NodeProfile">
<infraLeafS name="NodeSelector" type="range">
<infraNodeBlk name="NodeBlok" from_="101" to_="102"/>
<infraRsAccPortP tDn="uni/infra/accportprof-InterfaceProfile"/>
</infraLeafS>
</infraNodeP>
<infraAccPortP name="InterfaceProfile">
<infraHPortS name="InterfaceSelector" type="range">
<infraFuncP>
<infraAccPortGrp name="PortGrp">
<infraRsAttEntP tDn="uni/infra/attentp-AttEntityProfile"/>
</infraAccPortGrp>
</infraFuncP>
Intra-EPG Isolation
Intra-EPG Endpoint Isolation
Intra-EPG endpoint isolation policies provide full isolation for virtual or physical endpoints; no communication
is allowed between endpoints in an EPG that is operating with isolation enforced. Isolation enforced EPGs
reduce the number of EPG encapsulations required when many clients access a common service but are not
allowed to communicate with each other.
An EPG is isolation enforced for all ACI network domains or none. While the ACI fabric implements isolation
directly to connected endpoints, switches connected to the fabric are made aware of isolation rules according
to a primary VLAN (PVLAN) tag.
Note If an EPG is configured with intra-EPG endpoint isolation enforced, these restrictions apply:
• All Layer 2 endpoint communication across an isolation enforced EPG is dropped within a bridge domain.
• All Layer 3 endpoint communication across an isolation enforced EPG is dropped within the same subnet.
• Preserving QoS CoS priority settings is not supported when traffic is flowing from an EPG with isolation
enforced to an EPG without isolation enforced.
• Backup clients have the same communication requirements for accessing the backup service, but they
don't need to communicate with each other.
• Servers behind a load balancer have the same communication requirements, but isolating them from each
other protects against a server that is compromised or infected.
Bare metal EPG isolation is enforced at the leaf switch. Bare metal servers use VLAN encapsulation. All
unicast, multicast, and broadcast traffic is dropped (denied) within isolation enforced EPGs. ACI bridge domains
can have a mix of isolated and regular EPGs. Each isolated EPG can have multiple VLANs where intra-VLAN
traffic is denied.
Configuring Intra-EPG Isolation for Bare Metal Servers Using the GUI
The port the EPG uses must be associated with a bare metal server interface in the physical domain that is
used to connect the bare metal servers directly to leaf switches.
SUMMARY STEPS
1. In a tenant, right click on an Application Profile, and open the Create Application EPG dialog box to
perform the following actions:
2. In the Leaves/Paths dialog box, perform the following actions:
DETAILED STEPS
Step 1 In a tenant, right click on an Application Profile, and open the Create Application EPG dialog box to perform the
following actions:
a) In the Name field, add the EPG name (intra_EPG-deny).
b) For Intra EPG Isolation, click Enforced.
c) In the Bridge Domain field, choose the bridge domain from the drop-down list (bd1).
d) Check the Statically Link with Leaves/Paths check box.
e) Click Next.
Step 2 In the Leaves/Paths dialog box, perform the following actions:
a) In the Path section, choose a path from the drop-down list (Node-107/eth1/16) in Trunk Mode.
Specify the Port Encap (vlan-102) for the secondary VLAN.
Note If the bare metal server is directly connected to a leaf switch, only the Port Encap secondary VLAN is
specified.
Configuring Intra-EPG Isolation for Bare Metal Servers Using the NX-OS Style CLI
SUMMARY STEPS
1. In the CLI, create an intra-EPG isolation EPG:
2. Verify the configuration:
DETAILED STEPS
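For step 1, the following is a minimal sketch that creates an isolation-enforced EPG, reusing the tenant, application, EPG, and bridge domain names from the REST API example later in this section (Tenant_BareMetal, Web, IntraEPGDeny, bd); adapt the names to your configuration.
apic1(config)# tenant Tenant_BareMetal
apic1(config-tenant)# application Web
apic1(config-tenant-app)# epg IntraEPGDeny
apic1(config-tenant-app-epg)# bridge-domain member bd
apic1(config-tenant-app-epg)# isolation enforce
apic1(config-tenant-app-epg)# exit
For step 2, the verification output resembles the following: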
Static Leaves:
Node Encap Deployment Immediacy Mode Modification Time
Static Paths:
Node Interface Encap Modification Time
---------- ------------------------------ ---------------- ------------------------------
1018 eth101/1/1 vlan-100 2016-02-11T18:39:02.337-08:00
1019 eth1/16 vlan-101 2016-02-11T18:39:02.337-08:00
Static Endpoints:
Node Interface Encap End Point MAC End Point IP Address
Modification Time
---------- ------------------------------ ---------------- -----------------
------------------------------ ------------------------------
Configuring Intra-EPG Isolation for Bare Metal Servers Using the REST API
SUMMARY STEPS
1. Send this HTTP POST message to deploy the application using the XML API.
2. Include this XML structure in the body of the POST message.
DETAILED STEPS
Step 1 Send this HTTP POST message to deploy the application using the XML API.
Example:
POST https://apic-ip-address/api/mo/uni/tn-ExampleCorp.xml
Step 2 Include this XML structure in the body of the POST message.
Example:
<fvTenant name="Tenant_BareMetal" >
<fvAp name="Web">
<fvAEPg name="IntraEPGDeny" pcEnfPref="enforced">
<!-- pcEnfPref="enforced" ENABLES ISOLATION-->
<fvRsBd tnFvBDName="bd" />
<fvRsDomAtt tDn="uni/phys-Dom1" />
<!-- PATH ASSOCIATION -->
<fvRsPathAtt tDn="topology/pod-1/paths-1017/pathep-[eth1/2]" encap="vlan-51"
primaryEncap="vlan-100" instrImedcy='immediate'/>
</fvAEPg>
</fvAp>
</fvTenant>
A Cisco ACI virtual machine manager (VMM) domain creates an isolated PVLAN port group at the VMware
VDS or Microsoft Hyper-V Virtual Switch for each EPG that has intra-EPG isolation enabled. A fabric
administrator specifies primary encapsulation or the fabric dynamically specifies primary encapsulation at
the time of EPG-to-VMM domain association. When the fabric administrator selects the VLAN-pri and
VLAN-sec values statically, the VMM domain validates that the VLAN-pri and VLAN-sec are part of a static
block in the domain pool.
Note When intra-EPG isolation is not enforced, the VLAN-pri value is ignored even if it is specified in the
configuration.
VLAN-pri/VLAN-sec pairs for the VMware VDS or Microsoft Hyper-V Virtual Switch are selected per VMM
domain during the EPG-to-domain association. The port group created for the intra-EPG isolation EPGs uses
the VLAN-sec tagged with type set to PVLAN. The VMware VDS or the Microsoft Hyper-V Virtual Switch
and fabric swap the VLAN-pri/VLAN-sec encapsulation:
• Communication from the Cisco ACI fabric to the VMware VDS or Microsoft Hyper-V Virtual Switch
uses VLAN-pri.
• Communication from the VMware VDS or Microsoft Hyper-V Virtual Switch to the Cisco ACI fabric
uses VLAN-sec.
Figure 16: Intra-EPG Isolation for VMware VDS or Microsoft Hyper-V Virtual Switch
1. EPG-DB sends VLAN traffic to the Cisco ACI leaf switch. The Cisco ACI egress leaf switch encapsulates
traffic with a primary VLAN (PVLAN) tag and forwards it to the Web-EPG endpoint.
2. The VMware VDS or Microsoft Hyper-V Virtual Switch sends traffic to the Cisco ACI leaf switch using
VLAN-sec. The Cisco ACI leaf switch drops all intra-EPG traffic because isolation is enforced for all
intra VLAN-sec traffic within the Web-EPG.
3. The VMware VDS or Microsoft Hyper-V Virtual Switch VLAN-sec uplink to the Cisco ACI Leaf is in
isolated trunk mode. The Cisco ACI leaf switch uses VLAN-pri for downlink traffic to the VMware VDS
or Microsoft Hyper-V Virtual Switch.
4. The PVLAN map is configured in the VMware VDS or Microsoft Hyper-V Virtual Switch and Cisco
ACI leaf switches. VM traffic from WEB-EPG is encapsulated in VLAN-sec. The VMware VDS or
Microsoft Hyper-V Virtual Switch denies local intra-WEB EPG VM traffic according to the PVLAN tag.
All intra-ESXi host or Microsoft Hyper-V host VM traffic is sent to the Cisco ACI leaf using VLAN-Sec.
Related Topics
For information on configuring intra-EPG isolation in a Cisco AVS environment, see Intra-EPG Isolation
Enforcement for Cisco AVS, on page 59.
Configuring Intra-EPG Isolation for VMware VDS or Microsoft Hyper-V Virtual Switch using the
GUI
SUMMARY STEPS
1. Log into Cisco APIC.
2. Choose Tenants > tenant.
3. In the left navigation pane expand the Application Profiles folder and appropriate application profile.
4. Right-click the Application EPGs folder and then choose Create Application EPG.
5. In the Create Application EPG dialog box, complete the following steps:
6. Click Update and click Finish.
DETAILED STEPS
e) Click Next.
f) In the Associated VM Domain Profiles area, click the + icon.
g) From the Domain Profile drop-down list, choose the desired VMM domain.
For the static case, in the Port Encap (or Secondary VLAN for Micro-Seg) field, specify the secondary VLAN,
and in the Primary VLAN for Micro-Seg field, specify the primary VLAN. If the Encap fields are left blank, values
will be allocated dynamically.
Note For the static case, a static VLAN must be available in the VLAN pool.
Configuring Intra-EPG Isolation for VMware VDS or Microsoft Hyper-V Virtual Switch using the
NX-OS Style CLI
SUMMARY STEPS
1. In the CLI, create an intra-EPG isolation EPG:
2. Verify the configuration:
DETAILED STEPS
Example:
The following example is for Microsoft Hyper-V Virtual Switch:
apic1(config)# tenant Test_Isolation
apic1(config-tenant)# application PVLAN
apic1(config-tenant-app)# epg EPG1
apic1(config-tenant-app-epg)# show running-config
# Command: show running-config tenant Tenant_VMM application Web epg intraEPGDeny
tenant Tenant_VMM
application Web
epg intraEPGDeny
bridge-domain member VMM_BD
microsoft-domain member domain1 encap vlan-2003 primary-encap vlan-2004
microsoft-domain member domain2
exit
isolation enforce
exit
exit
exit
apic1(config-tenant-app-epg)#
Static Leaves:
Node Encap Deployment Immediacy Mode Modification Time
Static Paths:
Node Interface Encap Modification Time
---------- ------------------------------ ---------------- ------------------------------
1018 eth101/1/1 vlan-100 2016-02-11T18:39:02.337-08:00
1019 eth1/16 vlan-101 2016-02-11T18:39:02.337-08:00
Static Endpoints:
Node Interface Encap End Point MAC End Point IP Address
Modification Time
---------- ------------------------------ ---------------- -----------------
------------------------------ ------------------------------
Dynamic Endpoints:
Encap: (P):Primary VLAN, (S):Secondary VLAN
Node Interface Encap End Point MAC End Point IP Address
Modification Time
---------- ------------------------------ ---------------- -----------------
------------------------------ ------------------------------
1017 eth1/3 vlan-943(P) 00:50:56:B3:64:C4 ---
2016-02-17T18:35:32.224-08:00
vlan-944(S)
Configuring Intra-EPG Isolation for VMware VDS or Microsoft Hyper-V Virtual Switch using the
REST API
SUMMARY STEPS
1. Send this HTTP POST message to deploy the application using the XML API.
2. For a VMware VDS or Microsoft Hyper-V Virtual Switch deployment, include one of the following XML
structures in the body of the POST message.
DETAILED STEPS
Step 1 Send this HTTP POST message to deploy the application using the XML API.
Example:
POST https://apic-ip-address/api/mo/uni/tn-ExampleCorp.xml
Step 2 For a VMware VDS or Microsoft Hyper-V Virtual Switch deployment, include one of the following XML structures in
the body of the POST message.
Example:
The following example is for VMware VDS:
<fvTenant name="Tenant_VMM" >
<fvAp name="Web">
<fvAEPg name="IntraEPGDeny" pcEnfPref="enforced">
<!-- pcEnfPref="enforced" ENABLES ISOLATION-->
<fvRsBd tnFvBDName="bd" />
<!-- STATIC ENCAP ASSOCIATION TO VMM DOMAIN-->
<fvRsDomAtt encap="vlan-2001" instrImedcy="lazy" primaryEncap="vlan-2002"
resImedcy="immediate" tDn="uni/vmmp-VMware/dom-DVS1"/>
</fvAEPg>
</fvAp>
</fvTenant>
Example:
The following example is for Microsoft Hyper-V Virtual Switch:
<fvTenant name="Tenant_VMM" >
<fvAp name="Web">
<fvAEPg name="IntraEPGDeny" pcEnfPref="enforced">
<!-- pcEnfPref="enforced" ENABLES ISOLATION-->
<fvRsBd tnFvBDName="bd" />
<!-- STATIC ENCAP ASSOCIATION TO VMM DOMAIN-->
<fvRsDomAtt tDn="uni/vmmp-Microsoft/dom-domain1"/>
<fvRsDomAtt encap="vlan-2004" instrImedcy="lazy" primaryEncap="vlan-2003"
resImedcy="immediate" tDn="uni/vmmp-Microsoft/dom-domain2"/>
</fvAEPg>
</fvAp>
</fvTenant>
Note Using intra-EPG isolation on a Cisco AVS microsegment (uSeg) EPG is not currently supported.
Communication is possible between two endpoints that reside in separate uSeg EPGs if either has intra-EPG
isolation enforced, regardless of any contract that exists between the two EPGs.
Note This procedure assumes that you want to isolate endpoints within an EPG when you create the EPG. If you
want to isolate endpoints within an existing EPG, select the EPG in Cisco APIC, and in the Properties pane,
in the Intra EPG Isolation area, choose Enforced, and then click SUBMIT.
What to do next
You can select statistics and view them to help diagnose problems involving the endpoint. See the sections
Choosing Statistics to View for Isolated Endpoints on Cisco AVS and Viewing Statistics for Isolated Endpoints
on Cisco AVS in this guide.
Configuring Intra-EPG Isolation for Cisco AVS Using the NX-OS Style CLI
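A minimal sketch of the CLI steps, reusing the tenant, application, and EPG names from the REST API example that follows (Tenant_VMM, Web, IntraEPGDeny); the VMM domain name AVS_Domain1 and the vmware-domain member association are illustrative assumptions rather than values from this guide.
apic1(config)# tenant Tenant_VMM
apic1(config-tenant)# application Web
apic1(config-tenant-app)# epg IntraEPGDeny
// AVS_Domain1 is an assumed Cisco AVS VMM domain name
apic1(config-tenant-app-epg)# vmware-domain member AVS_Domain1
apic1(config-tenant-app-epg-domain)# exit
apic1(config-tenant-app-epg)# isolation enforce
apic1(config-tenant-app-epg)# exit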
What to do next
You can select statistics and view them to help diagnose problems involving the endpoint. See the sections
Choosing Statistics to View for Isolated Endpoints on Cisco AVS and Viewing Statistics for Isolated Endpoints
on Cisco AVS in this guide.
Configuring Intra-EPG Isolation for Cisco AVS Using the REST API
Step 1 Send this HTTP POST message to deploy the application using the XML API.
Example:
POST
https://192.0.20.123/api/mo/uni/tn-ExampleCorp.xml
Step 2 For a VMM deployment, include the XML structure in the following example in the body of the POST message.
Example:
Example:
<fvTenant name="Tenant_VMM" >
<fvAp name="Web">
<fvAEPg name="IntraEPGDeny" pcEnfPref="enforced">
<!-- pcEnfPref="enforced" ENABLES ISOLATION-->
<fvRsBd tnFvBDName="bd" />
<fvRsDomAtt encap="vlan-2001" tDn="uni/vmmp-VMware/dom-DVS1"/>
</fvAEPg>
</fvAp>
</fvTenant>
What to do next
You can select statistics and view them to help diagnose problems involving the endpoint. See the sections
Choosing Statistics to View for Isolated Endpoints on Cisco AVS and Viewing Statistics for Isolated Endpoints
on Cisco AVS in this guide.
Physical Ports
Configuring Leaf Switch Physical Ports Using Policy Association
The procedure below uses a Quick Start wizard.
Note This procedure provides the steps for attaching a server to an ACI leaf switch interface. The steps would be
the same for attaching other kinds of devices to an ACI leaf switch interface.
Figure 17: Switch Interface Configuration for Bare Metal Server
Step 1 On the APIC menu bar, navigate to Fabric > External Access Policies > Quick Start, and click Configure an interface,
PC, and VPC.
Step 2 In the Select Switches To Configure Interfaces work area, click the large + to select switches to configure. In the
Switches section, click the + to add switch ID(s) from the drop-down list of available switch IDs and click Update.
Step 3 Click the large + to configure switch interfaces.
The interface policy group is a named policy that specifies the group of interface policies you will apply to the selected
interfaces of the switch. Examples of interface policies include Link Level Policy (for example, 1gbit port speed), Storm
Control Interface Policy, and so forth.
Note The Attached Device Type domain is required for enabling an EPG to use the interfaces specified in the switch
profile.
What to do next
This completes the basic leaf interface configuration steps.
Note While this configuration enables hardware connectivity, no data traffic can flow without a valid application
profile, EPG, and contract that is associated with this hardware configuration.
SUMMARY STEPS
1. On the APIC menu bar, navigate to Fabric > Inventory > Inventory, choose a pod and navigate to the
Configure tab .
2. Once you have assigned the appropriate fields to the configuration, click Submit.
DETAILED STEPS
Step 1 On the APIC menu bar, navigate to Fabric > Inventory > Inventory, choose a pod and navigate to the Configure tab .
A graphical representation of the switch appears. Choose the port to configure. Once a port is selected, the available
port configuration types appear highlighted at the top. Choose a configuration type, and its configuration parameters
appear.
Step 2 Once you have assigned the appropriate fields to the configuration, click Submit.
In this configuration, all changes to the leaf switch are made by selecting a port and applying a policy to it; all leaf
switch configuration is done on this page.
What to do next
This completes the basic leaf interface configuration steps.
Configuring Physical Ports in Leaf Nodes and FEX Devices Using the NX-OS
CLI
The commands in the following examples create many managed objects (MOs) in the ACI policy model that
are fully compatible with the REST API/SDK and GUI. However, the CLI user can focus on the intended
network configuration instead of ACI model internals.
The following figure shows examples of Ethernet ports directly on leaf nodes or FEX modules attached to
leaf nodes and how each is represented in the CLI. For FEX ports, the fex-id is included in the naming of the
port itself as in ethernet 101/1/1. While describing an interface range, the ethernet keyword need not be
repeated as in NX-OS. Example: interface ethernet 101/1/1-2, 102/1/1-2.
SUMMARY STEPS
1. configure
2. leaf node-id
3. interface type
4. (Optional) fex associate node-id
5. speed speed
DETAILED STEPS
Step 2  leaf node-id
        Specifies the leaf or leafs to be configured. The node-id can be a single node ID or a range of IDs, in the form node-id1-node-id2, to which the configuration will be applied.
        Example:
        apic1(config)# leaf 102
Step 3  interface type
        Specifies the interface that you are configuring. You can specify the interface type and identity. For an Ethernet port, use "ethernet slot / port."
        Example:
        apic1(config-leaf)# interface ethernet 1/2
Step 5  speed speed
        The speed setting is shown as an example. At this point you can configure any of the interface settings shown in the table below.
        Example:
        apic1(config-leaf-if)# speed 10G
The following table shows the interface settings that can be configured at this point.
Command Purpose
[no] lldp receive Set the LLDP receive for physical interface
Examples
Configure one port in a leaf node. The following example shows how to configure the interface
eth1/2 in leaf 101 for the following properties: speed, cdp, and admin state.
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/2
apic1(config-leaf-if)# speed 10G
apic1(config-leaf-if)# cdp enable
apic1(config-leaf-if)# no shut
Configure multiple ports in multiple leaf nodes. The following example shows the configuration of
speed for interfaces eth1/1-10 for each of the leaf nodes 101-103.
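A minimal sketch, using the same commands as the single-port example above (the 10G speed value is illustrative):
apic1# configure
apic1(config)# leaf 101-103
apic1(config-leaf)# interface ethernet 1/1-10
apic1(config-leaf-if)# speed 10G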
Attach a FEX to a leaf node. The following example shows how to attach a FEX module to a leaf
node. Unlike in NX-OS, the leaf port Eth1/5 is implicitly configured as fabric port and a FEX fabric
port-channel is created internally with the FEX uplink port(s). In ACI, the FEX fabric port-channels
use default configuration and no user configuration is allowed.
Note This step is required before creating a port-channel using FEX ports, as described in the next example.
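A minimal sketch, using the fex associate command from the summary steps with the values mentioned above (leaf 102, leaf port eth1/5, FEX ID 101):
apic1(config)# leaf 102
apic1(config-leaf)# interface ethernet 1/5
apic1(config-leaf-if)# fex associate 101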
Configure FEX ports attached to leaf nodes. This example shows configuration of speed for interfaces
eth1/1-10 in FEX module 101 attached to each of the leaf nodes 102-103. The FEX ID 101 is included
in the port identifier. FEX IDs start with 101 and are local to a leaf.
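A minimal sketch, using the FEX port naming convention described at the start of this section (the 10G speed value is illustrative):
apic1(config)# leaf 102-103
apic1(config-leaf)# interface ethernet 101/1/1-10
apic1(config-leaf-if)# speed 10G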
Port Cloning
Cloning Port Configurations
In the Cisco APIC Release 3.2 and later, the Port Cloning feature is supported. After you configure a leaf
switch port, you can copy the configuration and apply it to other ports. This is only supported in the APIC
GUI (not in the NX-OS style CLI).
Port cloning is used for small numbers of leaf switch ports (interfaces) that are individually configured, not
for interfaces configured using Fabric Access Policies, which you deploy on multiple nodes in the fabric.
Port cloning is only supported for Layer 2 configurations.
The following policies are not supported on a cloned port:
• Attachable Access Entity
• Storm Control
• DWDM
• MACsec
Port Channels
ACI Leaf Switch Port Channel Configuration Using the GUI
The procedure below uses a Quick Start wizard.
Note This procedure provides the steps for attaching a server to an ACI leaf switch interface. The steps would be
the same for attaching other kinds of devices to an ACI leaf switch interface.
Figure 18: Switch Port Channel Configuration
Step 1 On the APIC menu bar, navigate to Fabric > External Access Policies > Quick Start, and click Configure an interface,
PC, and VPC.
Step 2 In the Select Switches To Configure Interfaces work area, click the large + to select switches to configure. In the
Switches section, click the + to add switch ID(s) from the drop-down list of available switch IDs and click Update.
Step 3 Click the large + to configure switch interfaces.
The interface policy group is a named policy that specifies the group of interface policies you will apply to the selected
interfaces of the switch. Examples of interface policies include Link Level Policy (for example, 1gbit port speed), Storm
Control Interface Policy, and so forth.
Note The Attached Device Type is required for enabling an EPG to use the interfaces specified in the switch profile.
Note • Choosing to create a port channel policy displays the Create Port Channel Policy dialog box where
you can specify the policy details and enable features such as symmetric hashing. Also note that
choosing the Symmetric hashing option displays the Load Balance Hashing field, which enables
you to configure hash tuple. However, only one customized hashing option can be applied on the same
leaf switch.
• Symmetric hashing is not supported on the following switches:
• Cisco Nexus 93128TX
• Cisco Nexus 9372PX
• Cisco Nexus 9372PX-E
• Cisco Nexus 9372TX
• Cisco Nexus 9372TX-E
• Cisco Nexus 9396PX
• Cisco Nexus 9396TX
d) Specify the attached device type to use. Choose Bare Metal for connecting bare metal servers. Bare metal uses
the phys domain type.
e) Click Save to update the policy details, then click Submit to submit the switch profile to the APIC.
The APIC creates the switch profile, along with the interface, selector, and attached device type policies.
Verification: Use the CLI show int command on the switch where the server is attached to verify that the switch interface
is configured accordingly.
What to do next
This completes the port channel configuration steps.
Note While this configuration enables hardware connectivity, no data traffic can flow without a valid application
profile, EPG, and contract that is associated with this hardware configuration.
Configuring Port Channels in Leaf Nodes and FEX Devices Using the NX-OS
CLI
Port-channels are logical interfaces in NX-OS used to aggregate bandwidth for multiple physical ports and
also for providing redundancy in case of link failures. In NX-OS, port-channel interfaces are identified by
user-specified numbers in the range 1 to 4096 unique within a node. Port-channel interfaces are either configured
explicitly (using the interface port-channel command) or created implicitly (using the channel-group
command). The configuration of the port-channel interface is applied to all the member ports of the port-channel.
There are certain compatibility parameters (speed, for example) that cannot be configured on the member
ports.
In the ACI model, port-channels are configured as logical entities identified by a name to represent a collection
of policies that can be assigned to set of ports in one or more leaf nodes. Such assignment creates one
port-channel interface in each of the leaf nodes identified by an auto-generated number in the range 1 to 4096
within the leaf node, which may be same or different among the nodes for the same port-channel name. The
membership of these port-channels may be same or different as well. When a port-channel is created on the
FEX ports, the same port-channel name can be used to create one port-channel interface in each of the FEX
devices attached to the leaf node. Thus, it is possible to create up to N+1 unique port-channel interfaces
(identified by the auto-generated port-channel numbers) for each leaf node attached to N FEX modules. This
is illustrated with the examples below. Port-channels on the FEX ports are identified by specifying the fex-id
along with the port-channel name (interface port-channel foo fex 101, for example).
• N+1 instances per leaf of port-channel foo are possible when each leaf is connected to N FEX nodes.
• Leaf ports and FEX ports cannot be part of the same port-channel instance.
• Each FEX node can have only one instance of port-channel foo.
SUMMARY STEPS
1. configure
2. template port-channel channel-name
3. [no] switchport access vlan vlan-id tenant tenant-name application application-name epg epg-name
4. channel-mode active
5. exit
6. leaf node-id
7. interface type
8. [no] channel-group channel-name
9. (Optional) lacp port-priority priority
DETAILED STEPS
Step 3  [no] switchport access vlan vlan-id tenant tenant-name application application-name epg epg-name
        Deploys the EPG with the VLAN on all ports with which the port-channel is associated.
Step 6  leaf node-id
        Specifies the leaf switches to be configured. The node-id can be a single node ID or a range of IDs, in the form node-id1-node-id2, to which the configuration will be applied.
        Example:
        apic1(config)# leaf 101
Step 7  interface type
        Specifies the interface or range of interfaces that you are configuring to the port-channel.
Step 8  [no] channel-group channel-name
        Assigns the interface or range of interfaces to the port-channel. Use the keyword no to remove the interface from the port-channel. To change the port-channel assignment on an interface, you can enter the channel-group command without first removing the interface from the previous port-channel.
        Example:
        apic1(config-leaf-if)# channel-group foo
Step 9  (Optional) lacp port-priority priority
        This setting and other per-port LACP properties can be applied to member ports of a port-channel at this point.
        Note: In the ACI model, these commands are allowed only after the ports are members of a port channel. If a port is removed from a port channel, the configuration of these per-port properties is removed as well.
        Example:
        apic1(config-leaf-if)# lacp port-priority 1000
        apic1(config-leaf-if)# lacp rate fast
The following table shows various commands for global configurations of port channel properties in the ACI
model. These commands can also be used for configuring overrides for port channels in a specific leaf in the
(config-leaf-if) CLI mode. The configuration made on the port-channel is applied to all member ports.
[no] link debounce time <time>                             Set Link Debounce for the port-channel
[no] channel-mode { active | passive | on | mac-pinning }  LACP mode for the link in the port-channel
[no] lacp fast-select-hot-standby                          LACP fast select for hot standby ports
Examples
Configure a port channel (global configuration). A logical entity foo is created that represents a
collection of policies with two configurations: speed and channel mode. More properties can be
configured as required.
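A minimal sketch, following the global port-channel configuration form shown later in this section (the 10G speed value is illustrative):
apic1(config)# interface port-channel foo
apic1(config-if)# speed 10G
apic1(config-if)# channel-mode active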
Note The channel mode command is equivalent to the mode option in the channel group command in
NX-OS. In ACI, however, this is supported only for the port-channel (not on the member ports).
Configure ports to a port-channel in a FEX. In this example, port channel foo is assigned to ports
Ethernet 1/1-2 in FEX 101 attached to leaf node 102 to create an instance of port channel foo. The
leaf node will auto-generate a number, say 1002 to identify the port channel in the switch. This port
channel number would be unique to the leaf node 102 regardless of how many instance of port
channel foo are created.
Note The configuration to attach the FEX module to the leaf node must be done before creating port
channels using FEX ports.
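A minimal sketch of this assignment, using the FEX port naming described earlier in this section (interface ethernet <fex-id>/1/<port>):
apic1(config)# leaf 102
apic1(config-leaf)# interface ethernet 101/1/1-2
apic1(config-leaf-if)# channel-group foo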
In Leaf 102, this port channel interface can be referred to as interface port-channel foo FEX 101.
apic1(config)# leaf 102
apic1(config-leaf)# interface port-channel foo fex 101
apic1(config-leaf)# shut
Configure ports to a port channel in multiple leaf nodes. In this example, port channel foo is assigned
to ports Ethernet 1/1-2 in each of the leaf nodes 101-103. The leaf nodes will auto generate a number
unique in each node (which may be same or different among nodes) to represent the port-channel
interfaces.
apic1(config)# leaf 101-103
apic1(config-leaf)# interface ethernet 1/1-2
apic1(config-leaf-if)# channel-group foo
Add members to port channels. This example would add two members eth1/3-4 to the port-channel
in each leaf node, so that port-channel foo in each node would have members eth 1/1-4.
apic1(config)# leaf 101-103
apic1(config-leaf)# interface ethernet 1/3-4
apic1(config-leaf-if)# channel-group foo
Remove members from port channels. This example would remove two members eth1/2, eth1/4 from
the port channel foo in each leaf node, so that port channel foo in each node would have members
eth 1/1, eth1/3.
apic1(config)# leaf 101-103
apic1(config-leaf)# interface eth 1/2,1/4
apic1(config-leaf-if)# no channel-group foo
Configure port-channel with different members in multiple leaf nodes. This example shows how to
use the same port-channel foo policies to create a port-channel interface in multiple leaf nodes with
different member ports in each leaf. The port-channel numbers in the leaf nodes may be same or
different for the same port-channel foo. In the CLI, however, the configuration will be referred as
interface port-channel foo. If the port-channel is configured for the FEX ports, it would be referred
to as interface port-channel foo fex <fex-id>.
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/1-2
apic1(config-leaf-if)# channel-group foo
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)# leaf 102
apic1(config-leaf)# interface ethernet 1/3-4
apic1(config-leaf-if)# channel-group foo
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)# leaf 103
apic1(config-leaf)# interface ethernet 1/5-8
apic1(config-leaf-if)# channel-group foo
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface ethernet 101/1/1-2
apic1(config-leaf-if)# channel-group foo
Configure per port properties for LACP. This example shows how to configure member ports of a
port-channel for per-port properties for LACP.
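A minimal sketch, reusing the per-port LACP commands shown in the detailed steps above:
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/1-2
apic1(config-leaf-if)# lacp port-priority 1000
apic1(config-leaf-if)# lacp rate fast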
Note In ACI model, these commands are allowed only after the ports are member of a port channel. If a
port is removed from a port channel, configuration of these per-port properties would be removed
as well.
Configure admin state for port channels. In this example, a port-channel foo is configured in each
of the leaf nodes 101-103 using the channel-group command. The admin state of port-channel(s) can
be configured in each leaf using the port-channel interface. In ACI model, the admin state of the
port-channel cannot be configured in the global scope.
// create port-channel foo in each leaf
apic1(config)# leaf 101-103
apic1(config-leaf)# interface ethernet 1/3-4
apic1(config-leaf-if)# channel-group foo
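A minimal sketch of the per-leaf admin state change, assuming the port-channel interface is entered under a specific leaf:
// configure the admin state of port-channel foo in leaf 101
apic1(config)# leaf 101
apic1(config-leaf)# interface port-channel foo
apic1(config-leaf-if)# shut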
Override config is very helpful to assign specific vlan-domain, for example, to the port-channel
interfaces in each leaf while sharing other properties.
// configure a port channel global config
apic1(config)# interface port-channel foo
apic1(config-if)# speed 1G
apic1(config-if)# channel-mode active
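A minimal sketch of such an override, assuming a VLAN domain named dom1 (as in the VPC example later in this chapter):
// override port-channel foo in leaf 101 with a specific vlan-domain
apic1(config)# leaf 101
apic1(config-leaf)# interface port-channel foo
apic1(config-leaf-if)# vlan-domain member dom1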
This example shows how to change port channel assignment for ports using the channel-group
command. There is no need to remove port channel membership before assigning to other port
channel.
apic1(config)# leaf 101-103
apic1(config-leaf)# interface ethernet 1/3-4
apic1(config-leaf-if)# channel-group foo
apic1(config-leaf-if)# channel-group bar
Configuring Two Port Channels Applied to Multiple Switches Using the REST
API
This example creates two port channels (PCs) on leaf switch 17, another port channel on leaf switch 18, and
a third one on leaf switch 20. On each leaf switch, the same interfaces will be part of the PC (interface 1/10
to 1/15 for port channel 1 and 1/20 to 1/25 for port channel 2). The policy uses two switch blocks because
each switch block can contain only one group of consecutive switch IDs. All these PCs will have the same
configuration.
Note Even though the PC configurations are the same, this example uses two different interface policy groups.
Each Interface Policy Group represents a PC on a switch. All interfaces associated with a given interface
policy group are part of the same PCs.
To create the two PCs, send a post with XML such as the following:
Example:
<infraInfra dn="uni/infra">
<infraNodeP name="test">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="nblk"
from_="17" to_="18"/>
<infraNodeBlk name="nblk"
from_="20" to_="20"/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-test1"/>
<infraRsAccPortP tDn="uni/infra/accportprof-test2"/>
</infraNodeP>
<infraAccPortP name="test1">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1"
fromCard="1" toCard="1"
fromPort="10" toPort="15"/>
<infraRsAccBaseGrp
tDn="uni/infra/funcprof/accbundle-bndlgrp1"/>
</infraHPortS>
</infraAccPortP>
<infraAccPortP name="test2">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1"
fromCard="1" toCard="1"
fromPort="20" toPort="25"/>
<infraRsAccBaseGrp
tDn="uni/infra/funcprof/accbundle-bndlgrp2" />
</infraHPortS>
</infraAccPortP>
<infraFuncP>
<infraAccBndlGrp name="bndlgrp1" lagT="link">
<infraRsHIfPol tnFabricHIfPolName="default"/>
<infraRsCdpIfPol tnCdpIfPolName="default"/>
<infraRsLacpPol tnLacpLagPolName="default"/>
</infraAccBndlGrp>
<infraAccBndlGrp name="bndlgrp2" lagT="link">
<infraRsHIfPol tnFabricHIfPolName="default"/>
<infraRsCdpIfPol tnCdpIfPolName="default"/>
<infraRsLacpPol tnLacpLagPolName="default"/>
</infraAccBndlGrp>
</infraFuncP>
</infraInfra>
1. Prerequisites
• Ensure that you have read/write access privileges to the infra security domain.
• Ensure that the target leaf switches with the necessary interfaces are available.
Note When creating a VPC domain between two leaf nodes, please consider the hardware model limitations:
• Generation 1 switches are compatible only with other Generation 1 switches. These switch models can
be identified by the lack of “EX” or “FX” at the end of the switch name, for example, N9K-9312TX.
Generation 2 and later switches can be mixed together in a VPC domain. These switch models can be
identified by “EX”, “FX”, or “FX2” at the end of the switch name, for example, N9K-93108TC-EX
or N9K-9348GC-FXP.
Example:
Compatible VPC Switch Pairs:
• N9K-9312TX & N9K-9312TX
• N9K-93108TC-EX & N9K-9348GC-FXP
• Nexus 93180TC-FX & Nexus 93180YC-FX
• Nexus 93180YC-FX & Nexus 93180YC-FX
1. On the APIC menu bar, navigate to Fabric > External Access Policies > Quick Start, and click Configure
an interface, PC, and VPC to open the quick start wizard.
2. Provide the specifications for the policy name, switch IDs and the interfaces the virtual port channel will
use. Add the Interface Policy parameters, such as group port speed, storm control, CDP, LLDP. Add the
Attached Device Type as an External Bridged Device and specify the VLAN and domain that will be
used.
3. Use the CLI show int command on the ACI leaf switches where the external switch is attached to verify
that the switches and virtual port channel are configured accordingly.
Note: While this configuration enables hardware connectivity, no data traffic can flow without a valid
application profile, EPG, and contract that is associated with this hardware configuration.
1. On the APIC menu bar, navigate to Tenant > tenant-name > Quick Start, and click Create an application
profile under the tenant quick start wizard.
2. Configure the endpoint groups (EPGs), contracts, bridge domain, subnet, and context.
3. Associate the application profile EPGs with the virtual port channel switch profile created above.
ACI Leaf Switch Virtual Port Channel Configuration Using the GUI
The procedure below uses a Quick Start wizard.
Note This procedure provides the steps for attaching a trunked switch to an ACI leaf switch virtual port channel.
The steps would be the same for attaching other kinds of devices to an ACI leaf switch interface.
Figure 20: Switch Virtual Port Channel Configuration
Note LACP sets a port to the suspended state if it does not receive an LACP PDU from the peer. This can cause
some servers to fail to boot, because they require LACP to logically bring up the port. You can tune this behavior
by disabling LACP suspend individual: create a port channel policy in your vPC policy group and, after setting
the mode to LACP active, remove Suspend Individual Port. The ports in the vPC then stay active and continue
to send LACP packets.
Note Adaptive Load Balancing (ALB) (based on ARP Negotiation) across virtual port channels is not supported
in the ACI.
Note When creating a VPC domain between two leaf switches, both switches must be in the same switch generation,
one of the following:
• Generation 1 - Cisco Nexus N9K switches without “EX” on the end of the switch name; for example,
N9K-9312TX
• Generation 2 – Cisco Nexus N9K switches with “EX” on the end of the switch model name; for example,
N9K-93108TC-EX
Switches such as these two are not compatible VPC peers. Instead, use switches of the same generation.
Step 1 On the APIC menu bar, navigate to Fabric > Access Policies > Quick Start, and click Configure an interface, PC, and
VPC.
Step 2 In the Configure an interface, PC, and VPC work area, click the large green + to select switches.
The Select Switches To Configure Interfaces work area opens with the Quick option selected by default.
Step 3 Select switch IDs from the Switches drop-down list, name the profile, then click Save.
The saved policy displays in the Configured Switch Interfaces list.
Step 4 Configure the Interface Policy Group and Attached Device Type that the virtual port channel will use for the selected
switches.
The interface policy group is a named policy that specifies the group of interface policies you will apply to the selected
interfaces of the switch. Examples of interface policies include Link Level Policy (for example, 1gbit port speed), Storm
Control Interface Policy, and so forth.
Note The Attached Device Type domain is required for enabling an EPG to use the interfaces specified in the switch
profile.
What to do next
This completes the switch virtual port channel configuration steps.
Note While this configuration enables hardware connectivity, no data traffic can flow without a valid application
profile, EPG, and contract that is associated with this hardware configuration.
Configuring Virtual Port Channels in Leaf Nodes and FEX Devices Using the
NX-OS CLI
A Virtual Port Channel (VPC) is an enhancement to port-channels that allows connection of a host or switch
to two upstream leaf nodes to improve bandwidth utilization and availability. In NX-OS, VPC configuration
is done in each of the two upstream switches and configuration is synchronized using peer link between the
switches.
Note When creating a VPC domain between two leaf switches, both switches must be in the same switch generation,
one of the following:
• Generation 1 - Cisco Nexus N9K switches without “EX” on the end of the switch name; for example,
N9K-9312TX
• Generation 2 – Cisco Nexus N9K switches with “EX” on the end of the switch model name; for example,
N9K-93108TC-EX
Switches such as these two are not compatible VPC peers. Instead, use switches of the same generation.
The ACI model does not require a peer link and VPC configuration can be done globally for both the upstream
leaf nodes. A global configuration mode called vpc context is introduced in ACI and VPC interfaces are
represented using a type interface vpc that allows global configuration applicable to both leaf nodes.
Two different topologies are supported for VPC in the ACI model: VPC using leaf ports and VPC over FEX
ports. It is possible to create many VPC interfaces between a pair of leaf nodes and similarly, many VPC
interfaces can be created between a pair of FEX modules attached to the leaf node pairs in a straight-through
topology.
VPC considerations include:
• The VPC name used is unique between leaf node pairs. For example, only one VPC 'corp' can be created
per leaf pair (with or without FEX).
• Leaf ports and FEX ports cannot be part of the same VPC.
• Each FEX module can be part of only one instance of VPC corp.
• The VPC context mode allows configuration of all VPCs for a given leaf pair. For VPC over FEX, the
fex-id pairs must be specified either for the VPC context or along with the VPC interface, as shown in
the two alternative sketches below.
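A minimal sketch of the two alternatives, assuming a VPC named corp over FEX 101 on leaf nodes 101 and 102 (the names and IDs are illustrative):
// fex-id pair specified with the vpc context
apic1(config)# vpc context leaf 101 102 fex 101 101
apic1(config-vpc)# interface vpc corp
or
// fex-id pair specified with the vpc interface
apic1(config)# vpc context leaf 101 102
apic1(config-vpc)# interface vpc corp fex 101 101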
In the ACI model, VPC configuration is done in the following steps (as shown in the examples below).
Note A VLAN domain is required with a VLAN range. It must be associated with the port-channel template.
SUMMARY STEPS
1. configure
2. vlan-domainname[dynamic] [type domain-type]
3. vlanrange
4. vpc domain explicit domain-id leaf node-id1 node-id2
5. peer-dead-interval interval
6. exit
7. template port-channel channel-name
8. vlan-domain membervlan-domain-name
9. switchport access vlan vlan-id tenant tenant-name application application-name epg epg-name
10. channel-mode active
11. exit
12. leaf node-id1 node-id2
13. interface typeleaf/interface-range
14. [no] channel-group channel-name vpc
15. exit
16. exit
17. vpc context leaf node-id1 node-id2
18. interface vpc channel-name
19. (Optional) [no] shutdown
DETAILED STEPS
Step 3   vlan range
         Configures a VLAN range for the VLAN domain and exits the configuration mode. The range can be a single VLAN or a range of VLANs.
         Example:
         apic1(config-vlan)# vlan 1000-1999
         apic1(config-vlan)# exit
Step 4   vpc domain explicit domain-id leaf node-id1 node-id2
         Configures a VPC domain between a pair of leaf nodes. You can specify the VPC domain ID in the explicit mode along with the leaf node pairs.
         Alternative commands to configure a VPC domain are as follows:
         • vpc domain [consecutive | reciprocal]
           The consecutive and reciprocal options allow auto configuration of a VPC domain across all leaf nodes in the ACI fabric.
         • vpc domain consecutive domain-start leaf start-node end-node
           This command configures a VPC domain consecutively for a selected set of leaf node pairs.
         Example:
         apic1(config)# vpc domain explicit 1 leaf 101 102
Step 5   peer-dead-interval interval
         Configures the time delay the leaf switch waits to restore the vPC before receiving a response from the peer. If it does not receive a response from the peer within this time, the leaf switch considers the peer dead and brings up the vPC with the role as a master. If it does receive a response from the peer, it restores the vPC at that point. The range is from 5 seconds to 600 seconds. The default is 200 seconds.
         Example:
         apic1(config-vpc)# peer-dead-interval 10
Step 9   switchport access vlan vlan-id tenant tenant-name application application-name epg epg-name
         Deploys the EPG with the VLAN on all ports with which the port-channel is associated.
Step 12  leaf node-id1 node-id2
         Specifies the pair of leaf switches to be configured.
         Example:
         apic1(config)# leaf 101-102
Step 13  interface type leaf/interface-range
         Specifies the interface or range of interfaces that you are configuring to the port-channel.
         Example:
         apic1(config-leaf)# interface ethernet 1/3-4
Step 14  [no] channel-group channel-name vpc
         Assigns the interface or range of interfaces to the port-channel. Use the keyword no to remove the interface from the port-channel. To change the port-channel assignment on an interface, you can enter the channel-group command without first removing the interface from the previous port-channel.
         Note: The vpc keyword in this command makes the port-channel a VPC. If the VPC does not already exist, a VPC ID is automatically generated and is applied to all member leaf nodes.
         Example:
         apic1(config-leaf-if)# channel-group corp vpc
Step 15  exit
         Example:
         apic1(config-leaf-if)# exit
Step 16  exit
Step 17  vpc context leaf node-id1 node-id2
         The vpc context mode allows configuration of VPC to be applied to both leaf node pairs.
         Example:
         apic1(config)# vpc context leaf 101 102
Step 19  (Optional) [no] shutdown
         Administrative state configuration in the vpc context allows changing the admin state of a VPC with one command for both leaf nodes.
         Example:
         apic1(config-vpc-if)# no shut
Example
This example shows how to configure a basic VPC.
apic1# configure
apic1(config)# vlan-domain dom1 dynamic
apic1(config-vlan)# vlan 1000-1999
apic1(config-vlan)# exit
apic1(config)# vpc domain explicit 1 leaf 101 102
apic1(config-vpc)# peer-dead-interval 10
apic1(config-vpc)# exit
apic1(config)# template port-channel corp
apic1(config-po-ch-if)# vlan-domain member dom1
apic1(config-po-ch-if)# channel-mode active
apic1(config-po-ch-if)# exit
apic1(config)# leaf 101-102
apic1(config-leaf)# interface ethernet 1/3-4
apic1(config-leaf-if)# channel-group corp vpc
apic1(config-leaf-if)# exit
apic1(config)# vpc context leaf 101 102
Configuring a Virtual Port Channel on Selected Port Blocks of Two Switches Using the REST API
This policy creates a single virtual port channel (VPC) on leaf switches 18 and 25, using interfaces 1/10 to
1/15 on leaf 18, and interfaces 1/20 to 1/25 on leaf 25.
Note When creating a VPC domain between two leaf switches, both switches must be in the same switch generation,
one of the following:
• Generation 1 - Cisco Nexus N9K switches without “EX” on the end of the switch name; for example,
N9K-9312TX
• Generation 2 – Cisco Nexus N9K switches with “EX” on the end of the switch model name; for example,
N9K-93108TC-EX
Switches such as these two are not compatible VPC peers. Instead, use switches of the same generation.
To create the VPC send a post with XML such as the following example:
Example:
<infraInfra dn="uni/infra">
<infraNodeP name="test1">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="nblk"
from_="18" to_="18"/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-test1"/>
</infraNodeP>
<infraNodeP name="test2">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="nblk"
from_="25" to_="25"/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-test2"/>
</infraNodeP>
<infraAccPortP name="test1">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1"
fromCard="1" toCard="1"
fromPort="10" toPort="15"/>
<infraRsAccBaseGrp
tDn="uni/infra/funcprof/accbundle-bndlgrp" />
</infraHPortS>
</infraAccPortP>
<infraAccPortP name="test2">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1"
fromCard="1" toCard="1"
fromPort="20" toPort="25"/>
<infraRsAccBaseGrp
tDn="uni/infra/funcprof/accbundle-bndlgrp" />
</infraHPortS>
</infraAccPortP>
<infraFuncP>
<infraAccBndlGrp name="bndlgrp" lagT="node">
<infraRsHIfPol tnFabricHIfPolName="default"/>
<infraRsCdpIfPol tnCdpIfPolName="default"/>
<infraRsLacpPol tnLacpLagPolName="default"/>
</infraAccBndlGrp>
</infraFuncP>
</infraInfra>
Configuring a Single Virtual Port Channel Across Two Switches Using the REST API
The two steps for creating a virtual port channel across two switches are as follows:
• Create a fabricExplicitGEp: this policy specifies the leaf switch that pairs to form the virtual port
channel.
• Use the infra selector to specify the interface configuration.
The APIC performs several validations of the fabricExplicitGEp and faults are raised when any of these
validations fail. A leaf can be paired with only one other leaf. The APIC rejects any configuration that breaks
this rule. When creating a fabricExplicitGEp, an administrator must provide the IDs of both of the leaf
switches to be paired. The APIC rejects any configuration which breaks this rule. Both switches must be up
when fabricExplicitGEp is created. If one switch is not up, the APIC accepts the configuration but raises a
fault. Both switches must be leaf switches. If one or both switch IDs corresponds to a spine, the APIC accepts
the configuration but raises a fault.
To create the fabricExplicitGEp policy and use the intra selector to specify the interface, send a post with XML such
as the following example:
Example:
<fabricProtPol pairT="explicit">
<fabricExplicitGEp name="tG" id="2">
<fabricNodePEp id="18"/>
<fabricNodePEp id="25"/>
</fabricExplicitGEp>
</fabricProtPol>
Reflective Relay
Reflective Relay (802.1Qbg)
Reflective relay is a switching option beginning with Cisco APIC Release 2.3(1). Reflective relay—the tagless
approach of IEEE standard 802.1Qbg—forwards all traffic to an external switch, which then applies policy
and sends the traffic back to the destination or target VM on the server as needed. There is no local switching.
For broadcast or multicast traffic, reflective relay provides packet replication to each VM locally on the server.
One benefit of reflective relay is that it leverages the external switch for switching features and management
capabilities, freeing server resources to support the VMs. Reflective relay also allows policies that you configure
on the Cisco APIC to apply to traffic between the VMs on the same server.
In Cisco ACI, you can enable reflective relay, which allows traffic to turn back out of the same port it
came in on. You can enable reflective relay on individual ports, port channels, or virtual port channels as a
Layer 2 interface policy using the APIC GUI, NX-OS CLI, or REST API. It is disabled by default.
The term Virtual Ethernet Port Aggregator (VEPA) is also used to describe 802.1Qbg functionality.
Note This procedure can be performed in the GUI in Advanced mode only.
Example:
This example enables reflective relay on multiple ports using a template:
apic1(config)# template policy-group grp1
apic1(config-pol-grp-if)# switchport vepa enabled
apic1(config-pol-grp-if)# exit
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/2-4
apic1(config-leaf-if)# policy-group grp1
Example:
This example enables reflective relay on a port channel:
apic1(config)# leaf 101
apic1(config-leaf)# interface port-channel po2
apic1(config-leaf-if)# switchport vepa enabled
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)#
Example:
This example enables reflective relay on multiple port channels:
apic1(config)# template port-channel po1
apic1(config-if)# switchport vepa enabled
apic1(config-if)# exit
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/3-4
apic1(config-leaf-if)# channel-group po1
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
Example:
This example enables reflective relay on a virtual port channel:
apic1(config)# vpc domain explicit 1 leaf 101 102
apic1(config-vpc)# exit
apic1(config)# template port-channel po4
apic1(config-if)# exit
apic1(config)# leaf 101-102
apic1(config-leaf)# interface eth 1/11-12
apic1(config-leaf-if)# channel-group po4 vpc
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)# vpc context leaf 101 102
apic1(config-vpc)# interface vpc po4
apic1(config-vpc-if)# switchport vepa enabled
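Step 1 of the REST procedure creates a Layer 2 interface policy with VEPA (reflective relay) enabled. The following is a minimal sketch, assuming the l2IfPol class exposes a vepa attribute and using the VepaL2IfPol name referenced in Step 2:
<polUni>
<infraInfra>
<!-- vepa="enabled" is assumed to enable reflective relay (802.1Qbg) on this Layer 2 interface policy -->
<l2IfPol name="VepaL2IfPol" vepa="enabled"/>
</infraInfra>
</polUni>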
Step 2 Apply the Layer 2 interface policy to a leaf access port policy group.
Example:
<infraAccPortGrp name="VepaPortG">
<infraRsL2IfPol tnL2IfPolName="VepaL2IfPol"/>
</infraAccPortGrp>
<infraPortBlk name="blk"
fromCard="1" toCard="1" fromPort="20" toPort="22">
</infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accportgrp-VepaPortG" />
</infraHPortS>
</infraAccPortP>
FEX Interfaces
Configuring Port, PC, and VPC Connections to FEX Devices
FEX connections and the profiles used to configure them can be created using the GUI, NX-OS Style CLI,
or the REST API.
Interface profiles for configuring FEX connections are supported since Cisco APIC Release 3.0(1k).
For information on how to configure them using the NX-OS style CLI, see the topics about configuring ports,
PCs and VPCs using the NX-OS style CLI.
Note When creating a VPC domain between two leaf switches, both switches must be in the same switch generation,
one of the following:
• Generation 1 - Cisco Nexus N9K switches without “EX” or "FX" on the end of the switch name; for
example, N9K-9312TX
• Generation 2 – Cisco Nexus N9K switches with “EX” or "FX" on the end of the switch model name; for
example, N9K-93108TC-EX
Switches such as these two are not compatible VPC peers. Instead, use switches of the same generation.
Note When using GARP as the protocol to notify of IP-to-MAC binding changes to different interfaces on the same
FEX, you must set the bridge domain mode to ARP Flooding and enable EP Move Detection Mode:
GARP-based Detection on the L3 Configuration page of the bridge domain wizard. This workaround is
only required with Generation 1 switches. With Generation 2 or later switches, this is not an issue.
Note This procedure provides the steps for attaching a server to the FEX. The steps would be the same for attaching
any device to an ACI attached FEX.
Note Configuring FEX connections with FEX IDs 165 to 199 is not supported in the APIC GUI. To use one of
these FEX IDs, configure the profile using the NX-OS style CLI. For more information, see Configuring FEX
Connections Using Interface Profiles with the NX-OS Style CLI.
Step 1 On the APIC, create a switch profile using the Fabric > External Access Policies > Quick Start Configure Interface,
PC, And VPC wizard.
a) On the APIC menu bar, navigate to Fabric > Access Policies > Quick Start.
b) In the Quick Start page, click the Configure an interface, PC, and VPC option to open the Configure Interface,
PC And VPC wizard.
c) In the Configure an interface, PC, and VPC work area, click the + to add a new switch profile.
d) In the Select Switches To Configure Interfaces work area, click the Advanced radio button.
e) Select the switch from the drop-down list of available switch IDs.
Troubleshooting Tips
In this procedure, one switch is included in the profile. Selecting multiple switches allows the same profile to be used
on multiple switches.
f) Provide a name in the Switch Profile Name field.
g) Click the + above the Fexes list to add a FEX ID and the switch ports to which it will connect to the switch profile.
You must configure FEX IDs 165 to 199 using the NX-OS style CLI. See Configuring FEX Connections Using
Interface Profiles with the NX-OS Style CLI.
h) Click Save to save the changes. Click Submit to submit the switch profile to the APIC.
The APIC auto-generates the necessary FEX profile (<switch policy name>_FexP<FEX ID>) and selector (<switch
policy name>_ifselector).
Verification: Use the CLI show fex command on the switch where the FEX is attached to verify that the FEX is online.
Step 2 Customize the auto-generated FEX Profile to enable attaching a server to a single FEX port.
a) In the Navigation pane, locate the switch policy you just created in the policies list. You will also find the
auto-generated <switch policy name>_FexP<FEX ID> FEX profile.
b) In the work pane of the <switch policy name>_FexP<FEX ID> profile, click the + to add a new entry to the Interface
Selectors For FEX list.
The Create Access Port Selector dialog opens.
c) Provide a name for the selector.
d) Specify the FEX interface IDs to use.
e) Select an existing Interface Policy Group from the list or Create Access Port Policy Group.
The access port policy group is a named policy that specifies the group of interface policies you will apply to the
selected interfaces of the FEX. Examples of interface policies include Link Level Policy (for example, 1gbit port
speed), Attach Entity Profile, Storm Control Interface Policy, and so forth.
Note Within the interface policy group, the Attached Entity Profile is required for enabling an EPG to use the
interfaces specified in the FEX port selector.
What to do next
Note While this configuration enables hardware connectivity, no data traffic can flow without a valid application
profile, EPG, and contract that is associated with this hardware configuration.
Note This procedure provides the steps for attaching a server to the FEX port channel. The steps would be the same
for attaching any device to an ACI attached FEX.
a) On the APIC menu bar, navigate to Fabric > External Access Policies > Interfaces > Leaf Interfaces > Profiles.
b) In the Navigation Pane, select the FEX profile.
APIC auto-generated FEX profile names are formed as follows: <switch policy name>_FexP<FEX ID>.
c) In the FEX Profile work area, click the + to add a new entry to the Interface Selectors For FEX list.
The Create Access Port Selector dialog opens.
Step 2 Customize the Create Access Port Selector to enable attaching a server to the FEX port channel.
a) Provide a name for the selector.
b) Specify the FEX interface IDs to use.
c) Select an existing Interface Policy Group from the list or Create PC Interface Policy Group.
The port channel interface policy group specifies the group of policies you will apply to the selected interfaces of the
FEX. Examples of interface policies include Link Level Policy (for example, 1gbit port speed), Attach Entity Profile,
Storm Control Interface Policy, and so forth.
Note Within the interface policy group, the Attached Entity Profile is required for enabling an EPG to use the
interfaces specified in the FEX port selector.
d) In the Port Channel Policy option, select static or dynamic LACP according to the requirements of your configuration.
e) Click Submit to submit the updated FEX profile to the APIC.
The APIC updates the FEX profile.
Verification: Use the CLI show port-channel summary command on the switch where the FEX is attached to verify
that the port channel is configured accordingly.
What to do next
This completes the FEX port channel configuration steps.
Note While this configuration enables hardware connectivity, no data traffic can flow without a valid application
profile, EPG, and contract that is associated with this hardware configuration.
Note This procedure provides the steps for attaching a server to the FEX virtual port channel. The steps would be
the same for attaching any device to an ACI attached FEX.
Note When creating a VPC domain between two leaf switches, both switches must be in the same switch generation,
one of the following:
• Generation 1 - Cisco Nexus N9K switches without “EX” on the end of the switch name; for example,
N9K-9312TX
• Generation 2 – Cisco Nexus N9K switches with “EX” on the end of the switch model name; for example,
N9K-93108TC-EX
Switches such as these two are not compatible VPC peers. Instead, use switches of the same generation.
Step 1 On the APIC, add a virtual port channel to two FEX profiles.
a) On the APIC menu bar, navigate to Fabric > External Access Policies > Interfaces > Leaf Interfaces > Profiles.
b) In the Navigation Pane, select the first FEX profile.
APIC auto-generated FEX profile names are formed as follows: <switch policy name>_FexP<FEX ID>.
c) In the FEX Profile work area, click the + to add a new entry to the Interface Selectors For FEX list.
The Create Access Port Selector dialog opens.
Step 2 Customize the Create Access Port Selector to enable attaching a server to the FEX virtual port channel.
a) Provide a name for the selector.
b) Specify the FEX interface ID to use.
Typically, you will use the same interface ID on each FEX to form the virtual port channel.
c) Select an existing Interface Policy Group from the list or Create VPC Interface Policy Group.
The virtual port channel interface policy group specifies the group of policies you will apply to the selected interfaces
of the FEX. Examples of interface policies include Link Level Policy (for example, 1gbit port speed), Attach Entity
Profile, Storm Control Interface Policy, and so forth.
Note Within the interface policy group, the Attached Entity Profile is required for enabling an EPG to use the
interfaces specified in the FEX port selector.
d) In the Port Channel Policy option, select static or dynamic LACP according to the requirements of your configuration.
e) Click Submit to submit the updated FEX profile to the APIC.
The APIC updates the FEX profile.
Verification: Use the CLI show port-channel summary command on the switch where the FEX is attached to verify
that the port channel is configured accordingly.
Step 3 Configure the second FEX to use the same Interface Policy Group just specified for the first FEX.
a) In the FEX Profile work area of the second FEX profile, click the + to add a new entry to the Interface Selectors For
FEX list.
The Create Access Port Selector dialog opens.
b) Provide a name for the selector.
c) Specify the FEX interface ID to use.
Typically, you will use the same interface ID on each FEX to form the virtual port channel.
d) From the drop-down list, select the same virtual port channel Interface Policy Group just used in the first FEX profile.
The virtual port channel interface policy group specifies the group of policies you will apply to the selected interfaces
of the FEX. Examples of interface policies include Link Level Policy (for example, 1gbit port speed), Attach Entity
Profile, Storm Control Interface Policy, and so forth.
Note Within the interface policy group, the Attached Entity Profile is required for enabling an EPG to use the
interfaces specified in the FEX port selector.
What to do next
This completes the FEX virtual port channel configuration steps.
Note While this configuration enables hardware connectivity, no data traffic can flow without a valid application
profile, EPG, and contract that is associated with this hardware configuration.
Note When creating a VPC domain between two leaf switches, both switches must be in the same switch generation,
one of the following:
• Generation 1 - Cisco Nexus N9K switches without “EX” on the end of the switch name; for example,
N9K-9312TX
• Generation 2 – Cisco Nexus N9K switches with “EX” on the end of the switch model name; for example,
N9K-93108TC-EX
Switches such as these two are not compatible VPC peers. Instead, use switches of the same generation.
To create the policy linking the FEX through a VPC to two switches, send a post with XML such as the following example:
Example:
<polUni>
<infraInfra dn="uni/infra">
<infraNodeP name="fexNodeP105">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="test" from_="105" to_="105"/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-fex116nif105" />
</infraNodeP>
<infraNodeP name="fexNodeP101">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="test" from_="101" to_="101"/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-fex113nif101" />
</infraNodeP>
<infraAccPortP name="fex116nif105">
<infraAccPortP name="fex113nif101">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk1"
fromCard="1" toCard="1" fromPort="45" toPort="48" >
</infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/fexprof-fexHIF113/fexbundle-fex113" fexId="113" />
</infraHPortS>
</infraAccPortP>
<infraFexP name="fexHIF113">
<infraFexBndlGrp name="fex113"/>
<infraHPortS name="pselc-fexPC" type="range">
<infraPortBlk name="blk"
fromCard="1" toCard="1" fromPort="15" toPort="16" >
</infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-fexPCbundle" />
</infraHPortS>
<infraHPortS name="pselc-fexVPC" type="range">
<infraPortBlk name="blk"
fromCard="1" toCard="1" fromPort="1" toPort="8" >
</infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-fexvpcbundle" />
</infraHPortS>
<infraHPortS name="pselc-fexaccess" type="range">
<infraPortBlk name="blk"
fromCard="1" toCard="1" fromPort="47" toPort="47">
</infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accportgrp-fexaccport" />
</infraHPortS>
</infraFexP>
<infraFexP name="fexHIF116">
<infraFexBndlGrp name="fex116"/>
<infraHPortS name="pselc-fexPC" type="range">
<infraPortBlk name="blk"
fromCard="1" toCard="1" fromPort="17" toPort="18" >
</infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-fexPCbundle" />
</infraHPortS>
<infraHPortS name="pselc-fexVPC" type="range">
<infraPortBlk name="blk"
fromCard="1" toCard="1" fromPort="1" toPort="8" >
</infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-fexvpcbundle" />
</infraHPortS>
<infraHPortS name="pselc-fexaccess" type="range">
<infraPortBlk name="blk"
fromCard="1" toCard="1" fromPort="47" toPort="47">
</infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accportgrp-fexaccport" />
</infraHPortS>
</infraFexP>
<infraFuncP>
<infraAccBndlGrp name="fexPCbundle" lagT="link">
<infraRsLacpPol tnLacpLagPolName='staticLag'/>
<infraRsHIfPol tnFabricHIfPolName="1GHIfPol" />
<infraRsAttEntP tDn="uni/infra/attentp-fexvpcAttEP"/>
</infraAccBndlGrp>
<lacpLagPol dn="uni/infra/lacplagp-staticLag"
ctrl="susp-individual,graceful-conv"
minLinks="2"
maxLinks="16">
</lacpLagPol>
Configuring FEX Connections Using Profiles with the NX-OS Style CLI
Use this procedure to configure FEX connections to leaf nodes using the NX-OS style CLI.
Note Configuring FEX connections with FEX IDs 165 to 199 is not supported in the APIC GUI. To use one of
these FEX IDs, configure the profile using the following commands.
SUMMARY STEPS
1. configure
2. leaf-interface-profile name
3. leaf-interface-group name
4. fex associate fex-id [template template-type fex-template-name]
DETAILED STEPS
Step 4 fex associate fex-id [template template-type fex-template-name]
Attaches a FEX module to a leaf node. Use the optional template keyword to specify a template to be used. If it does not exist, the system creates a template with the name and type you specified.
Example:
apic1(config-leaf-if-group)# fex associate 101
Example
This merged example configures a leaf interface profile for FEX connections with ID 101.
apic1# configure
apic1(config)# leaf-interface-profile fexIntProf1
apic1(config-leaf-if-profile)# leaf-interface-group leafIntGrp1
apic1(config-leaf-if-group)# fex associate 101
Restrictions
Fast Link Failover policies and port profiles are not supported on the same port. If a port profile is enabled,
Fast Link Failover cannot be enabled, and vice versa.
The last two uplink ports of supported leaf switches cannot be converted to downlink ports (they are reserved
for uplink connections).
Up to Cisco APIC Release 3.2, port profiles and breakout ports are not supported on the same ports.
With Cisco APIC Release 3.2 and later, dynamic breakouts (both 100Gb and 40Gb) are supported on profiled
QSFP ports on the N9K-C93180YC-FX switch. Breakout and port profile are supported together for conversion
of uplink to downlink on ports 49-52. Breakout (both 10g-4x or 25g-4x options) is supported on downlink
profiled ports.
The N9K-C9348GC-FXP does not support FEX.
Guidelines
In converting uplinks to downlinks and downlinks to uplinks, consider the following guidelines.
Decommissioning nodes with port profiles: If a decommissioned node has the Port Profile feature deployed on it, the port conversions are not removed even after decommissioning the node. It is necessary to manually delete the configurations after decommissioning for the ports to return to the default state. To do this, log on to the switch, run the setup-clean-config.sh -k script, and wait for it to run. Then, enter the reload command. The -k script option allows the port-profile setting to persist across the reload, making an additional reboot unnecessary.
FIPS: When you enable or disable Federal Information Processing Standards (FIPS) on a Cisco ACI fabric, you must reload each of the switches in the fabric for the change to take effect. The configured scale profile setting is lost when you issue the first reload after changing the FIPS configuration. The switch remains operational, but it uses the default scale profile. This issue does not happen on subsequent reloads if the FIPS configuration has not changed.
FIPS is supported on Cisco NX-OS release 13.1(1) or later.
If you must downgrade the firmware from a release that supports FIPS to a release that does not support FIPS, you must first disable FIPS on the Cisco ACI fabric and reload all the switches in the fabric for the FIPS configuration change to take effect.
Maximum uplink port limit: When the maximum uplink port limit is reached and ports 25 and 27 are converted from uplink to downlink and back to uplink on Cisco 93180LC-EX switches:
On Cisco 93180LC-EX switches, ports 25 and 27 are the native uplink ports. Using the port profile, if you convert ports 25 and 27 to downlink ports, ports 29, 30, 31, and 32 are still available as four native uplink ports. Because of the threshold on the number of ports that can be converted (a maximum of 12 ports), you can convert 8 more downlink ports to uplink ports. For example, ports 1, 3, 5, 7, 9, 13, 15, and 17 are converted to uplink ports, and ports 29, 30, 31, and 32 are the 4 native uplink ports (the maximum uplink port limit on Cisco 93180LC-EX switches).
When the switch is in this state and the port profile configuration is deleted on ports 25 and 27, ports 25 and 27 are converted back to uplink ports, but there are already 12 uplink ports on the switch (as mentioned earlier). To accommodate ports 25 and 27 as uplink ports, 2 random ports from the port range 1, 3, 5, 7, 9, 13, 15, 17 are denied the uplink conversion, and this situation cannot be controlled by the user.
Therefore, it is mandatory to clear all the faults before reloading the leaf node to avoid any unexpected behavior regarding the port type. It should be noted that if a node is reloaded without clearing the port profile faults, especially when there is a fault related to limit-exceed, the port might not be in an expected operational state.
Breakout Limitations
N9K-C9332PQ (Cisco APIC 2.2(1n) and higher):
• 40Gb dynamic breakouts into 4X10Gb ports are supported.
• Ports 13 and 14 do not support breakouts.
• Port profiles and breakouts are not supported on the same port.
N9K-C93180LC-EX (Cisco APIC 3.1(1i) and higher):
• 40Gb and 100Gb dynamic breakouts are supported on ports 1 through 24 on odd-numbered ports.
• When the top ports (odd ports) are broken out, the bottom ports (even ports) are error disabled.
• Port profiles and breakouts are not supported on the same port.
N9K-C9336C-FX2 (Cisco APIC 3.2(1l) and higher):
• 40Gb and 100Gb dynamic breakouts are supported on ports 1 through 30.
• Port profiles and breakouts are not supported on the same port.
N9K-C93180YC-FX (Cisco APIC 3.2(1l) and higher):
• 40Gb and 100Gb dynamic breakouts are supported on ports 49 through 52, when they are on profiled QSFP ports. To use them for dynamic breakout, perform the following steps:
• Convert ports 49-52 to front panel ports (downlinks).
• Perform a port-profile reload, using one of the following methods:
• In the APIC GUI, navigate to Fabric > Inventory > Pod > Leaf, right-click Chassis and choose Reload.
• In the NX-OS style CLI, enter the setup-clean-config.sh -k script, wait for it to run, and then enter the reload command.
N9K-C93240YC-FX2 (Cisco APIC 4.0(1) and higher):
• Breakout is not supported on converted downlinks.
Switch Model | Default Links | Max Uplinks (Fabric Ports) | Max Downlinks (Server Ports) | Release Supported
Step 7 Select the ports and choose the new port type as Uplink or Downlink.
The last two ports are reserved for uplink. These cannot be converted to downlink ports.
Step 8 After clicking uplink or downlink, click Submit (reload the switch on your own later) or Submit and Reload Switch.
Note Reload the switch for the change in the uplink or downlink configuration to take effect.
Step 1 configure
Enters global configuration mode.
Example:
apic1# configure
Example:
apic1(config-leaf-if)# port-direction downlink
Step 5 Log on to the leaf switch where the port is located and enter the setup-clean-config.sh -k command, then the reload
command.
Step 1 To create a port profile that converts a downlink to an uplink, send a post with XML such as the following:
Example:
<!-- /api/node/mo/uni/infra/prtdirec.xml -->
<infraRsPortDirection tDn="topology/pod-1/paths-106/pathep-[eth1/7]" direc="UpLink" />
Step 2 To create a port profile that converts an uplink to a downlink, send a post with XML such as the following:
Example:
<!-- /api/node/mo/uni/infra/prtdirec.xml -->
<infraRsPortDirection tDn="topology/pod-1/paths-106/pathep-[eth1/52]" direc="DownLink" />
Verifying Port Profile Configuration and Conversion Using the NX-OS Style CLI
You can verify the configuration and the conversion of the ports using the show interface brief CLI command.
Note Port profile can be deployed only on the top ports of a Cisco N9K-C93180LC-EX switch, for example, 1, 3,
5, 7, 9, 11, 13, 15, 17, 19, 21, and 23. When the top port is converted using the port profile, the bottom ports
are hardware disabled. For example, if Eth 1/1 is converted using the port profile, Eth 1/2 is hardware disabled.
Step 1 This example displays the output before converting an uplink port to a downlink port. The keyword routed denotes the port as an uplink port.
Example:
Step 2 This example displays the output after configuring the port profile and reloading the switch. The keyword trunk
denotes the port as a downlink port.
Example:
Note In the FCoE topology, the role of the ACI leaf switch is to provide a path for FCoE traffic between the locally
connected SAN hosts and a locally connected FCF device. The leaf switch does not perform local switching
between SAN hosts, and the FCoE traffic is not forwarded to a spine switch.
• One or more ACI leaf switches configured through FC SAN policies to function as an NPV backbone.
• Selected interfaces on the NPV-configured leaf switches configured to function as virtual F ports, which
accommodate FCoE traffic to and from hosts running SAN management or SAN-consuming applications.
• Selected interfaces on the NPV-configured leaf switches configured to function as virtual NP ports, which
accommodate FCoE traffic to and from a Fibre Channel Forwarding (FCF) bridge.
The FCF bridge receives FC traffic from fibre channel links typically connecting SAN storage devices and
encapsulates the FC packets into FCoE frames for transmission over the ACI fabric to the SAN management
or SAN Data-consuming hosts. It receives FCoE traffic and repackages it back to FC for transmission over
the fibre channel network.
Note In the above ACI topology, FCoE traffic support requires direct connections between the hosts and virtual F
ports and direct connections between the FCF device and the virtual NP port.
APIC servers enable an operator to configure and monitor the FCoE traffic through the APIC GUI, the APIC
NX-OS style CLI, or through application calls to the APIC REST API.
The VLAN used for FCoE should have vlanScope set to Global. vlanScope set to portLocal is not supported for
FCoE. The value is set via the L2 Interface Policy l2IfPol.
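The following is a minimal REST sketch of such an L2 interface policy with vlanScope set to global; the policy name fcoeGlobalScope is illustrative, and the policy must still be referenced from the interface policy group applied to the FCoE interfaces.
Example:
<!-- POST to https://apic-ip-address/api/mo/uni/infra.xml -->
<!-- Illustrative L2 interface policy with global VLAN scope -->
<l2IfPol name="fcoeGlobalScope" vlanScope="global"/>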
This policy specifies under what circumstances QoS-level priority flow control will be applied to
FCoE traffic.
• Fibre Channel Interface Policy
Specifies whether the interfaces to which this policy group is applied are to be configured as F ports
or NP ports.
• Slow Drain Policy
Specifies the policy for handling FCoE packets that are causing traffic congestion on the ACI Fabric.
Global Policies
The APIC global policies whose settings can affect the performance characteristics of FCoE traffic on
the ACI fabric.
The Global QoS Class Policies for Level1, Level2, or Level3 connections contain the following settings
that affect FCoE traffic on the ACI fabric:
• PFC Admin State must be set to Auto
Specifies whether to enable priority flow control to this level of FCoE traffic (default value is false).
• No Drop COS
Specifies whether to enable a no-drop policy for this level of FCoE traffic designated with a certain
Class of Service (CoS) level.
Note: QoS level enabled for PFC and FCoE no-drop must match with the Priority Group ID enabled
for PFC on CNA.
Note: Only one QoS level can be enabled for no-drop and PFC, and the same QoS level must be
associated with FCoE EPGs.
• QoS Class—Priority flow control requires that CoS levels be globally enabled for the fabric and
assigned to the profiles of applications that generate FCoE traffic.
CoS Preservation must also be enabled—Navigate to Fabric > Access Policies > Policies > Global >
QoS Class and enable Preserve COS Dot1p Preserve.
Note Some legacy CNAs may require the Level2 Global QoS Policy to be used as the No Drop PFC, FCoE
(Fibre Channel over Ethernet) QoS Policy. If your Converged Network Adapters (CNAs) are not logging
into the fabric, and you have noticed that no FCoE Initiation Protocol (FIP) frames are being sent by the
CNAs, try enabling Level2 as the FCoE QoS policy. The Level2 policy must be attached to the FCoE
EPGs in use and only 1 QoS level can be enabled for PFC no-drop.
Profiles
APIC profiles that you can create or configure for FCoE support include the following:
Leaf Profile
Specifies the ACI Fabric leaf switches on which to configure support of FCoE traffic.
The combination of policies contained in the access switch policy group can be applied to the leaf switches
included in this profile.
Interface Profiles
Specifies a set of interfaces on which to deploy F Ports or NP Ports.
You configure at least two leaf interface profiles: One interface profile for F ports, and one interface
profile for NP ports.
The combination of policies contained in the interface policy group for F ports can be applied to the set
of interfaces included in the interface profile for F ports.
The combination of policies contained in the interface policy group for NP ports can be applied to the
set of interfaces included in the interface profile for NP ports.
Attached Entity Profile
Binds the interface policy group settings with the Fibre Channel domain mapping.
Domains
Domains that you create or configure for FCoE support include the following:
Physical Domain
A virtual domain created to support LANs for FCoE VLAN Discovery. The Physical domain will specify
the VLAN pool to support FCoE VLAN discovery.
Fibre Channel Domain
A virtual domain created to support virtual SANs for FCoE connections.
A Fibre Channel domain specifies a VSAN pool, VLAN pool and the VSAN Attribute over which the
FCoE traffic is carried.
• VSAN pool - a set of virtual SANs which you associate with existing VLANs. Individual VSANs
can be assigned to associated FCoE-enabled interfaces in the same way that VLANs can be assigned
to those interfaces for Ethernet connectivity.
• VLAN pool - the set of VLANs available to be associated with individual VSANs.
• VSAN Attribute - The mapping of a VSAN to a VLAN.
Tenant Entities
Under the Tenant tab, you configure bridge domain and EPG entities to access the FCoE ports and exchange
the FCoE traffic.
The entities include the following:
Bridge Domain (configured for FCoE support)
A bridge domain created and configured under a tenant to carry FCoE traffic for applications that use
FCoE connections.
Application EPG
The EPG under the same tenant to be associated with the FCoE bridge domain.
Fibre Channel Path
Specifies the interfaces enabled as FCoE F ports or NP ports to be associated with the selected EPG.
After you associate the Fibre Channel path with an EPG the FCoE interface is deployed in the specified
VSAN.
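For reference, this Fibre Channel path association corresponds to an fvRsFcPathAtt object under the application EPG in the REST API; the following condensed sketch is patterned on the fuller REST example later in this chapter, and the tenant, EPG, leaf, and interface values are illustrative.
Example:
<fvTenant name="tenant1">
<fvAp name="app1">
<fvAEPg name="epg1">
<!-- Deploys vFC interface eth1/39 on leaf 101 for this EPG in native VSAN 11 -->
<fvRsFcPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/39]" vsan="vsan-11" vsanMode="native"/>
</fvAEPg>
</fvAp>
</fvTenant>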
Step 1 Create an FCoE supportive switch policy group to specify and combine all the leaf switch policies that support FCoE
configuration.
This policy group will be applied to the leaf switches that you want to serve as NPV hosts.
a) In the APIC GUI, starting on the APIC menu bar, click Fabric > Access Policies > Switches > Leaf Switches >
Policy Groups.
b) Right-click Policy Groups and click Create Access Switch Policy Group.
c) In the Create Access Switch Policy Group dialog, specify the settings described below and then click Submit.
Name: Identifies the switch policy group. Enter a name that indicates the FCoE supportive function of this switch policy group, for example, fcoe_switch_policy_grp.
Step 2 Create a leaf profile for leaf switches to support FCoE traffic.
This profile specifies a switch or set of leaf switches to assign the switch policy group that was configured in the previous
step. This association enables that set of switches to support FCoE traffic with pre-defined policy settings.
a) Starting at the APIC menu bar, click Fabric > Access Policies > Switches > Leaf Switches > Profiles
b) Right-click Leaf Profiles, then click Create Leaf Profile.
c) In the Create Leaf Profile dialog create and name the leaf profile (for example: NPV-1)
d) Also in the Create Leaf Profile dialog, locate the Leaf Selectors table, click + to create a new table row and specify
the leaf switches to serve as NPV devices.
e) In the new table row choose a leaf name, blocks, and assign the switch policy group that you created in the previous
step.
f) Click Next and then click Finish.
Step 3 Create at least two FCoE-supportive interface policy groups: one to combine all policies that support FCoE F port
interfaces, and one to combine all policies that support FCoE NP port interfaces.
These interface policy groups are to be applied to the interface profiles that are applied to interfaces that are to serve as
F ports and NP ports.
a) On the APIC menu bar, click Fabric > Access Policies > Interfaces > Leaf Interfaces > Policy Groups.
b) Right-click Policy Groups, then, depending on how port access is configured, click one of the following options:
Create Leaf Access Port Policy Group, Create PC Interface Port Policy, or Create VPC Interface Port Policy
Group.
Note • If you deploy over a PC interface, view ACI Leaf Switch Port Channel Configuration Using the GUI,
on page 69 for additional information.
• If you deploy over a VPC interface, view ACI Leaf Switch Virtual Port Channel Configuration Using
the GUI, on page 81 for additional information.
c) In the policy group dialog, specify for inclusion the Fibre Channel Interface policy, the slow drain policy, and the
priority flow control policy you configure.
Name: Name of this policy group. Enter a name that indicates the FCoE supportive function of this Leaf Access Port Policy Group and the port type (F or NP) that it is intended to support, for example: fcoe_f_port_policy or fcoe_np_port_policy.
Priority Flow Control Policy: Specifies the state of Priority Flow Control (PFC) on the interfaces to which this policy group is applied. Options include the following:
• Auto (the default value) enables priority flow control (PFC) on the local port on the no-drop CoS as configured, on the condition that the values advertised by DCBX and negotiated with the peer succeed. Failure causes priority flow control to be disabled on the no-drop CoS.
• Off disables FCoE priority flow control on the local port under all circumstances.
• On enables FCoE PFC on the local port under all circumstances.
• To use the value specified in an existing policy, click that policy.
• To create a new policy specifying different values, click Create Priority Flow Control Policy and follow the prompts.
Note PFC requires that Class of Service (CoS) levels be globally enabled for the fabric and assigned to the profiles of applications that generate FCoE traffic. Also, CoS Preservation must be enabled. To enable it, navigate to Fabric > Access Policies > Policies > Global > QoS Class and enable Preserve COS Dot1p Preserve.
Slow Drain Policy: Specifies how to handle FCoE packets that are causing traffic congestion on the ACI fabric. Options include the following:
• Congestion Clear Action (default: disabled) - the action to be taken during FCoE traffic congestion:
• Err-disable - Disable the port.
• Log - Record congestion in the Event Log.
• Disabled - Take no action.
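In the REST API, the priority flow control and slow drain settings referenced in this table correspond to the qosPfcIfPol and qosSdIfPol objects; the following is a minimal sketch patterned on the REST example later in this chapter, with illustrative names and values.
Example:
<!-- Priority flow control policy; adminSt corresponds to the Auto/On/Off options described above -->
<qosPfcIfPol dn="uni/infra/pfc-pfcIfPol1" adminSt="auto"/>
<!-- Slow drain policy; congClearAction="log" records congestion in the event log instead of err-disabling the port -->
<qosSdIfPol dn="uni/infra/qossdpol-sdIfPol1" congClearAction="log" congDetectMult="5"/>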
Step 4 Create at least two interface profiles: one profile to support F port connections, one profile to support NP port connections,
and optional additional profiles to be associated with additional port policy variations.
a) Starting at the APIC menu bar, click Fabric > Access Policies > Interfaces > Leaf Interfaces > Profiles.
b) Right-click Profiles and choose Create Leaf Interface Profile.
c) In the Create Leaf Interface Profile dialog, enter a descriptive name for the profile, for
example,FCoE_F_port_Interface_profile-1.
d) Locate the Interface Selectors table and click + to display the Create Access Port Selector dialog. This dialog
enables you to display a range of interfaces and apply settings to the fields described in the following table.
Name: A descriptive name for this port selector.
Interface IDs: Specifies the set of interfaces to which this range applies.
• To include all interfaces in the switch, choose All.
• To include an individual interface in this range, specify a single Interface ID, for example: 1/20.
• To include a range of interfaces, enter the lower and upper values separated by a dash, for example: 1/10 - 1/15.
Interface Policy Group: The name of either the F port interface policy group or the NP port policy group that you configured in the previous step.
• To designate the interfaces included in this profile as F ports, choose the interface policy group that you configured for F ports.
• To designate the interfaces included in the profile as NP ports, choose the interface policy group that you configured for NP ports.
Step 5 Click Submit. Repeat the previous step so that you at least have interface profiles for both F ports and an NP port.
Step 6 Configure whether to apply global QoS policies to FCoE traffic.
You can specify different QoS policies to different levels (1, 2, or 3) of FCoE traffic.
a) Starting at the APIC menu bar, click Fabric > Access Policies > Policies > Global > QoS Class and enable the
Preserve CoS flag in the QoS Class pane.
b) In the QoS Class - Level 1 , QoS Class - Level 2 , or QoS Class - Level 3 dialog, edit the following fields to specify
the PFC and no-drop CoS. Then click Submit.
Note Only 1 Level can be configured for PFC and no-drop CoS.
PFC Admin State: Whether to enable priority flow control for this level of FCoE traffic (default value is false). Enabling priority flow control sets the Congestion Algorithm for this level of FCoE traffic to no-drop.
CoS: The CoS level on which to impose no-drop FCoE packet handling, even in case of FCoE traffic congestion.
Step 7 Define a Fibre Channel domain. Create a set of virtual SANs (VSANs) and map them to set of existing VLANs.
a) Starting at the APIC menu bar, click Fabric > Access Policies > Physical and External Domains > Fibre Channel
Domains.
b) Right-click Fibre Channel Domains and click Create Fibre Channel Domain.
c) In the Fibre Channel Domain dialog, specify the following settings:
Name: Specifies the name or label you want to assign to the VSAN domain you are creating (for example: vsan-dom2).
VSAN Pool: The pool of VSANs available to this domain. If you open the dialog to create a VSAN pool, follow the prompts to configure the following:
• A Static resource allocation method to support FCoE.
• A range of VSANs that will be available to assign to FCoE F port interfaces and NP port interfaces.
Note Minimum range value is 1. Maximum range value is 4078. Configure multiple ranges of VSANs if necessary.
VLAN Pool: The pool of VLANs available to be mapped to by the members of the VSAN pool. A VLAN pool specifies numerical ranges of VLANs you want available to support FCoE connections for this domain. The VLANs in the ranges you specify are available for VSANs to map to them.
• To select an existing VLAN pool, click the drop-down list and choose a listed pool. If you want to revise it, click the Edit icon.
• To create a VLAN pool, click Create a VLAN Pool.
If you open the dialog to create a VLAN pool, follow the prompts to configure the following:
• A Static resource allocation method to support FCoE.
• A range of VLANs that will be available for VSANs to map to.
Note Minimum range value is 1. Maximum range value is 4094. Configure multiple ranges of VLANs if necessary.
VSAN Attributes: The mapping of individual VSANs to individual VLANs for this domain. If you open the dialog to configure the VSAN attributes, follow the prompts to configure the following:
• The appropriate load balancing option (src-dst-ox-id or src-dst-id).
• Mapping of individual VSANs to individual VLANs, for example: vsan-8 to vlan-10.
Note Only VSANs and VLANs in the ranges you specified for this domain can be mapped to each other.
Step 8 Create an attached entity profile to bind the Fibre Channel domain with the interface policy group.
a) On the APIC menu bar, click Fabric > Access Policies > Interfaces > Leaf Interfaces > Policy Groups >
interface_policy_group_name.
In this step interface_policy_group_name is the interface policy group that you defined in Step 3.
b) In the interface policy group dialog, Click the Attached Entity Profile drop-down and choose an existing Attached
Entity Profile or click Create Attached Entity Profile to create a new one.
c) In the Attached Entity Profile dialog specify the following settings:
Name: A name for this Attached Entity Profile.
Domains To Be Associated To Interfaces: Lists the domain to be associated with the interface policy group. In this case, choose the Fibre Channel domain you configured in Step 7.
Click Submit.
Step 9 Associate the leaf profile and the F port and NP port interface profiles.
a) Starting at the APIC menu bar, click Fabric > Access Policies > Switches > Leaf Switches > Profiles then click the
name of the leaf profile you configured in Step 2.
b) In the Create Leaf Profile dialog, locate the Associated Interface Selector Profiles table, click + to create a new
table row and choose the F port interface profile you created in Step 4.
c) Again on the Associated Interface Selector Profiles table, click + to create a new table row and choose the NP port
interface profile you created in Step 4.
d) Click Submit.
What to do next
After successful deployment of virtual F ports and NP ports to interfaces on the ACI fabric, the next step is
for system administrators to enable EPG access and connection over those interfaces.
For more information, see Deploying EPG Access to vFC Ports Using the APIC GUI, on page 124.
• Leaf policy groups, leaf profiles, interface policy groups, interface profiles, and Fibre Channel domains
have all been configured to support FCoE traffic.
Step 1 Under an appropriate tenant configure an existing bridge domain to support FCoE or create a bridge domain to support
FCoE.
To configure an existing bridge domain for FCoE:
1. Click Tenant > tenant_name > Networking > Bridge Domains > bridge_domain_name.
2. In the Type field of the bridge domain's Properties panel, click fc.
3. Click Submit.
To create a new bridge domain for FCoE:
1. Click Tenant > tenant_name > Networking > Bridge Domains > Actions > Create a Bridge Domain.
2. In the Name field of the Specify Bridge Domain for the VRF dialog, enter a bridge domain name.
3. In the Type field of the Specify Bridge Domain for the VRF dialog, click fc.
4. In the VRF field, select a VRF from the drop-down list or click Create VRF to create and configure a new VRF.
5. Finish the bridge domain configuration.
6. Click Submit.
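In the object model, the bridge domain type chosen here maps to the type attribute of the fvBD object; the following is a minimal sketch with illustrative tenant and bridge domain names.
Example:
<fvTenant name="tenant1">
<!-- type="fc" marks the bridge domain as FCoE capable; the default type is regular -->
<fvBD name="bd1" type="fc"/>
</fvTenant>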
Step 2 Under the same tenant, configure an existing EPG or create a new EPG to associate with the FCoE-configured bridge
domain.
To associate an existing EPG:
1. Click Tenant > <tenant_name> > Application Profiles > <application_profile_name> > Application EPGs > <epg_name>.
2. In the QoS class field, choose the quality of service (Level1, Level2, or Level3) to assign to traffic generated by this EPG. If you configured one of the QoS levels for priority-flow control no-drop congestion handling and you want FCoE traffic handled with no-dropped packet priority, assign that QoS level to this EPG.
3. In the Bridge Domain field of the EPG's Properties panel, click the drop-down list and choose the name of a bridge domain configured for Type: fcoe.
4. Click Submit.
Note If you change the Bridge Domain field, you must wait 30-35 seconds between changes. Changing the Bridge Domain field too rapidly causes vFC interfaces on the NPV switch to fail, and a switch reload must be executed.
To create and associate a new EPG:
1. Click Tenant > <tenant_name> > Application Profiles > <application_profile_name> > Application EPGs.
2. Right-click Application EPGs and click Create Application EPG.
3. In the QoS class field, choose the quality of service (Level1, Level2, or Level3) to assign to traffic generated by this EPG. If you configured one of the QoS levels for priority-flow control no-drop congestion handling and you want FCoE traffic handled with no-dropped packet priority, assign that QoS level to this EPG.
4. In the Bridge Domain field of the Specify the EPG Identity dialog, click the drop-down list and choose the name of a bridge domain configured for Type: fcoe.
Note If you change the Bridge Domain field, you must wait 30-35 seconds between changes. Changing the Bridge Domain field too rapidly causes vFC interfaces on the NPV switch to fail, and a switch reload must be executed.
VSAN: The VSAN which will use the interface selected in the Path field.
Note The specified VSAN must be in the range of VSANs that was designated for the VSAN pool.
In most cases, all interfaces that this EPG is configured to access must be assigned the same VSAN, unless you specify a Fibre Channel path over a Virtual Port Channel (VPC) connection. In that case, you can specify two VSANs, one for each leg of the connection.
VSAN Mode: The mode (Native or Regular) in which the selected VSAN accesses the selected interface. Every interface configured for FCoE support requires one and only one VSAN configured for Native mode. Any additional VSANs assigned to the same interface must access it in Regular mode.
Pinning Label: (Optional) This option applies only if you are mapping access to an F port and it is necessary to bind this F port with a specific uplink NP port. It associates a pinning label (pinning label 1 or pinning label 2) with a specific NP port. You can then assign that pinning label to the target F port. This association causes the associated NP port to serve in all cases as the uplink port to the target F port. Choose a pinning label and associate it with an interface configured as an NP port. This option implements what is also referred to as "traffic-mapping."
Note The F port and the associated pinning label NP port must be on the same leaf switch.
What to do next
After you have set up EPG access to the vFC interfaces, the final step is to set up the network supporting the
FCoE initialization protocol (FIP), which enables discovery of those interfaces.
For more information, see Deploying the EPG to Support the FCoE Initiation Protocol, on page 127.
• Leaf policy groups, leaf profiles, interface policy groups, interface profiles, and Fibre Channel domains
have all been configured to support FCoE traffic as described in the topic Deploying EPG Access to vFC
Ports Using the APIC GUI, on page 124.
• EPG access to the vFC ports is enabled as described in the topic Deploying EPG Access to vFC Ports
Using the APIC GUI, on page 124.
Step 1 Under the same tenant configure an existing bridge domain to support FIP or create a regular bridge domain to support
FIP.
To configure an existing bridge domain for FCoE:
1. Click Tenant > tenant_name > Networking > Bridge Domains > bridge_domain_name.
2. In the Type field of the bridge domain's Properties panel, click Regular.
3. Click Submit.
To create a new bridge domain for FCoE:
1. Click Tenant > tenant_name > Networking > Bridge Domains > Actions > Create a Bridge Domain.
2. In the Name field of the Specify Bridge Domain for the VRF dialog, enter a bridge domain name.
3. In the Type field of the Specify Bridge Domain for the VRF dialog, click Regular.
4. In the VRF field, select a VRF from the drop-down list or click Create VRF to create and configure a new VRF.
5. Finish the bridge domain configuration.
6. Click Submit.
Step 2 Under the same tenant, configure an existing EPG or create a new EPG to associate with the regular-type bridge domain.
To associate an existing EPG:
1. Click Tenant > tenant_name > Application Profiles > ap1 > Application EPGs > epg_name.
2. In the Bridge Domain field of the EPG's Properties panel, click the drop-down list and choose the name of the regular bridge domain that you just configured to support FIP.
3. Click Submit.
To create and associate a new EPG:
1. Click Tenant > tenant_name > Application Profiles > ap1 > Application EPGs.
2. Right-click Application EPGs and click Create Application EPG.
3. In the Bridge Domain field of the Specify the EPG Identity dialog, click the drop-down list and choose the name of the regular bridge domain that you just configured to support FIP.
4. Finish the bridge domain configuration.
5. Click Finish.
The FCoE components will begin the discovery process to initiate the operation of the FCoE network.
Note If during clean up you delete the Ethernet configuration object (infraHPortS) for a vFC port (for example, in
the Interface Selector table on the Leaf Interface Profiles page of the GUI), the default vFC properties
remain associated with that interface. For example, if the interface configuration for vFC NP port 1/20 is
deleted, that port remains a vFC port, but with the default F port setting rather than the non-default NP port setting
applied.
Step 1 Delete the associated Fibre Channel path to undeploy vFC from the port/vsan whose path was specified on this deployment.
This action removes vFC deployment from the port/vsan whose path was specified on this deployment.
a) Click Tenants > tenant_name > Application Profiles > app_profile_name > Application EPGs > app_epg_name >
Fibre Channel (Paths). Then right-click the name of the target Fibre Channel path and choose Delete.
b) Click Yes to confirm the deletion.
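The equivalent REST operation marks the fvRsFcPathAtt object with status="deleted", following the usage shown in the REST API section later in this chapter; the tenant, EPG, and path values below are illustrative.
Example:
<fvTenant name="tenant1">
<fvAp name="app1">
<fvAEPg name="epg1">
<!-- status="deleted" removes the vFC deployment from this port and VSAN -->
<fvRsFcPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/40]" vsan="vsan-10" vsanMode="regular" status="deleted"/>
</fvAEPg>
</fvAp>
</fvTenant>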
Step 2 Delete the VLAN to VSAN map that you configured when you defined the Fibre Channel domain.
This action removes vFC deployment from all the elements defined in the map.
a) Click Fabric > External Access Policies > Pools > VSAN Attributes. Then right-click the name of the target map
and choose Delete.
b) Click Yes to confirm the deletion.
Step 3 Delete the VLAN and VSAN pools that you defined when you defined the Fibre Channel domain.
This action eliminates all vFC deployment from the ACI fabric.
a) Click Fabric > External Access Policies > Pools > VSAN and then, right-click the name of the target VSAN pool
name and choose Delete.
b) Click Yes to confirm the deletion.
c) Click Fabric > External Access Policies > Pools > VLAN then, right-click the target VLAN pool name and choose
Delete.
d) Click Yes to confirm the deletion.
Step 4 Delete the Fibre Channel Domain that contained the VSAN pool, VLAN pool, and Map elements you just deleted.
a) Click Fabric > External Access Policies > Physical and External Domains > Fibre Channel Domains. Then right-click the name of the
target Fibre Channel Domain and choose Delete.
b) Click Yes to confirm the deletion.
Step 5 You can delete the tenant/EPG/App and the selectors if you don’t need them.
If you want to delete the associated application EPG but save the associated tenant and application profile: Click Tenants > tenant_name > Application Profiles > app_profile_name > Application EPGs, right-click the name of the target application EPG, choose Delete, then click Yes to confirm the deletion.
If you want to delete the associated application profile but save the associated tenant: Click Tenants > tenant_name > Application Profiles, right-click the name of the target application profile, choose Delete, then click Yes to confirm the deletion.
If you want to delete the associated tenant: Click Tenants, right-click the name of the target tenant, choose Delete, then click Yes to confirm the deletion.
Procedure
Step 2 Under the same tenant, associate the target EPG with the FCoE-configured bridge domain.
The sample command sequence creates EPG e1 and associates that EPG with the FCoE-configured bridge domain b1.
Example:
apic1(config)# tenant t1
apic1(config-tenant)# application a1
apic1(config-tenant-app)# epg e1
apic1(config-tenant-app-epg)# bridge-domain member b1
apic1(config-tenant-app-epg)# exit
apic1(config-tenant-app)# exit
apic1(config-tenant)# exit
Step 3 Create a VSAN domain, VSAN pools, VLAN pools, and VSAN-to-VLAN mapping.
In Example A, the sample command sequence creates VSAN domain dom1 with VSAN pools and VLAN pools, maps VSAN 1 to VLAN 1, and maps VSAN 2 to VLAN 2.
In Example B, an alternate sample command sequence creates a reusable VSAN attribute template pol1 and then creates VSAN domain dom1, which inherits the attributes and mappings from that template.
Example: A
apic1(config)# vsan-domain dom1
apic1(config-vsan)# vsan 1-10
apic1(config-vsan)# vlan 1-10
apic1(config-vsan)# fcoe vsan 1 vlan 1 loadbalancing src-dst-ox-id
apic1(config-vsan)# fcoe vsan 2 vlan 2
Example: B
Step 4 Create the physical domain to support the FCoE Initialization Protocol (FIP) process.
In the example, the command sequence creates a regular VLAN domain, fipVlanDom, which includes VLAN 120 to support the FIP process.
Example:
Step 5 Under the target tenant, configure a regular bridge domain.
In the example, the command sequence creates bridge domain fip-bd.
Example:
apic1(config)# tenant t1
apic1(config-tenant)# vrf context v2
apic1(config-tenant-vrf)# exit
apic1(config-tenant)# bridge-domain fip-bd
apic1(config-tenant-bd)# vrf member v2
apic1(config-tenant-bd)# exit
apic1(config-tenant)# exit
Step 6 Under the same tenant, associate this EPG with the configured regular bridge domain.
In the example, the command sequence associates EPG epg-fip with bridge domain fip-bd.
Example:
apic1(config)# tenant t1
apic1(config-tenant)# application a1
apic1(config-tenant-app)# epg epg-fip
apic1(config-tenant-app-epg)# bridge-domain member fip-bd
apic1(config-tenant-app-epg)# exit
apic1(config-tenant-app)# exit
apic1(config-tenant)# exit
Step 7 Configure a VFC interface with F mode.
In Example A, the command sequence enables interface 1/2 on leaf switch 101 to function as an F port and associates that interface with VSAN domain dom1. Each of the targeted interfaces must be assigned one (and only one) VSAN in native mode. Each interface may be assigned one or more additional VSANs in regular mode. The sample command sequence associates the target interface 1/2 with VLAN 120 for FIP discovery and associates it with EPG epg-fip and application a1 under tenant t1.
Example: A
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/2
apic1(config-leaf-if)# vlan-domain member fipVlanDom
apic1(config-leaf-if)# switchport trunk native vlan 120 tenant t1 application a1 epg epg-fip
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config-leaf)# interface vfc 1/2
Example: C
apic1(config)# leaf 101
apic1(config-leaf)# interface vfc-po pc1
apic1(config-leaf-if)# vsan-domain member dom1
apic1(config-leaf-if)# switchport vsan 2 tenant t1 application a1 epg e1
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface ethernet 1/2
apic1(config-leaf-if)# channel-group pc1
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
Step 8 Configure a VFC interface with NP mode.
The sample command sequence enables interface 1/4 on leaf switch 101 to function as an NP port and associates that interface with VSAN domain dom1.
Example:
apic1(config)# leaf 101
apic1(config-leaf)# interface vfc 1/4
apic1(config-leaf-if)# switchport mode np
apic1(config-leaf-if)# vsan-domain member dom1
Step 9 Assign the targeted FCoE-enabled interfaces a VSAN.
Each of the targeted interfaces must be assigned one (and only one) VSAN in native mode. Each interface may be assigned one or more additional VSANs in regular mode. The sample command sequence assigns the target interface to VSAN 1 and associates it with EPG e1 and application a1 under tenant t1. "trunk allowed" assigns VSAN 1 regular mode status. The command sequence also assigns the interface a required native mode VSAN 2.
Example:
apic1(config-leaf-if)# switchport trunk allowed vsan 1 tenant t1 application a1 epg e1
apic1(config-leaf-if)# switchport vsan 2 tenant t4 application a4 epg e4
Configuring FCoE Connectivity With Policies and Profiles Using the NX-OS
Style CLI
The following sample NX-OS style CLI sequences create and use policies to configure FCoE connectivity
for EPG e1 under tenant t1.
Procedure
Step 2 Under the same tenant, associate your target EPG with the FCoE-configured bridge domain.
The sample command sequence creates EPG e1 and associates that EPG with FCoE-configured bridge domain b1.
Example:
apic1(config)# tenant t1
apic1(config-tenant)# application a1
apic1(config-tenant-app)# epg e1
apic1(config-tenant-app-epg)# bridge-domain member b1
apic1(config-tenant-app-epg)# exit
apic1(config-tenant-app)# exit
apic1(config-tenant)# exit
apic1(config)#
Step 3 Create a VSAN domain, VSAN pools, VLAN pools, and VSAN-to-VLAN mapping.
In Example A, the sample command sequence creates VSAN domain dom1 with VSAN pools and VLAN pools, maps VSAN 1 to VLAN 1, and maps VSAN 2 to VLAN 2.
In Example B, an alternate sample command sequence creates a reusable VSAN attribute template pol1 and then creates VSAN domain dom1, which inherits the attributes and mappings from that template.
Example: A
apic1(config)# vsan-domain dom1
apic1(config-vsan)# vsan 1-10
apic1(config-vsan)# vlan 1-10
apic1(config-vsan)# fcoe vsan 1 vlan 1 loadbalancing
Example: B
apic1(config)# template vsan-attribute pol1
apic1(config-vsan-attr)# fcoe vsan 2 vlan 12 loadbalancing src-dst-ox-id
apic1(config-vsan-attr)# fcoe vsan 3 vlan 13 loadbalancing src-dst-ox-id
apic1(config-vsan-attr)# exit
apic1(config)# vsan-domain dom1
apic1(config-vsan)# inherit vsan-attribute pol1
apic1(config-vsan)# exit
Step 5 Configure a Fibre Channel SAN policy.
The sample command sequence creates Fibre Channel SAN policy ffp1 to specify a combination of error-detect timeout values (EDTOV), resource allocation timeout values (RATOV), and the default FC map value for FCoE-enabled interfaces on a target leaf switch.
Example:
apic1# configure
apic1(config)# template fc-fabric-policy ffp1
apic1(config-fc-fabric-policy)# fctimer e-d-tov 1111
apic1(config-fc-fabric-policy)# fctimer r-a-tov 2222
apic1(config-fc-fabric-policy)# fcoe fcmap 0E:FC:01
apic1(config-fc-fabric-policy)# exit
Step 6 Create a Fibre Channel node policy.
The sample command sequence creates Fibre Channel node policy flp1 to specify a combination of disruptive load-balancing enablement and FIP keep-alive values. These values also apply to all the FCoE-enabled interfaces on a target leaf switch.
Example:
apic1(config)# template fc-leaf-policy flp1
apic1(config-fc-leaf-policy)# fcoe fka-adv-period 44
apic1(config-fc-leaf-policy)# exit
Step 7 Create a node policy group.
The sample command sequence creates a node policy group, lpg1, which combines the values of the Fibre Channel SAN policy ffp1 and the Fibre Channel node policy flp1. The combined values of this node policy group can be applied to node profiles configured later.
Example:
apic1(config)# template leaf-policy-group lpg1
apic1(config-leaf-policy-group)# inherit fc-fabric-policy ffp1
apic1(config-leaf-policy-group)# inherit fc-leaf-policy flp1
apic1(config-leaf-policy-group)# exit
apic1(config)# exit
apic1#
Step 9 Create an interface policy group for F port interfaces.
The sample command sequence creates interface policy group ipg1 and assigns a combination of values that determine priority flow control enablement, F port enablement, and slow-drain policy values for any interface that this policy group is applied to.
Example:
apic1(config)# template policy-group ipg1
apic1(config-pol-grp-if)# priority-flow-control mode auto
apic1(config-pol-grp-if)# switchport mode f
apic1(config-pol-grp-if)# slow-drain pause timeout 111
apic1(config-pol-grp-if)# slow-drain congestion-timeout count 55
apic1(config-pol-grp-if)# slow-drain congestion-timeout action log
Step 10 Create an interface policy group for NP port interfaces.
The sample command sequence creates interface policy group ipg2 and assigns a combination of values that determine priority flow control enablement, NP port enablement, and slow-drain policy values for any interface that this policy group is applied to.
Example:
apic1(config)# template policy-group ipg2
apic1(config-pol-grp-if)# priority-flow-control mode auto
apic1(config-pol-grp-if)# switchport mode np
apic1(config-pol-grp-if)# slow-drain pause timeout 111
apic1(config-pol-grp-if)# slow-drain congestion-timeout count 55
apic1(config-pol-grp-if)# slow-drain congestion-timeout action log
Step 11 Create an interface profile for F port interfaces.
The sample command sequence creates an interface profile lip1 for F port interfaces, associates the profile with the F port specific interface policy group ipg1, and specifies the interfaces to which this profile and its associated policies apply.
Example:
apic1# configure
apic1(config)# leaf-interface-profile lip1
apic1(config-leaf-if-profile)# description 'test description lip1'
apic1(config-leaf-if-profile)# leaf-interface-group lig1
apic1(config-leaf-if-group)# description 'test description lig1'
apic1(config-leaf-if-group)# policy-group ipg1
apic1(config-leaf-if-group)# interface ethernet 1/2-6, 1/9-13
Step 12 Create an interface profile for NP port interfaces.
The sample command sequence creates an interface profile lip2 for NP port interfaces, associates the profile with the NP port specific interface policy group ipg2, and specifies the interface to which this profile and its associated policies apply.
Example:
apic1# configure
apic1(config)# leaf-interface-profile lip2
apic1(config-leaf-if-profile)# description 'test description lip2'
Step 13 Configure the QoS class policy for Level 1.
The sample command sequence specifies the QoS level of FCoE traffic to which the priority flow control policy might be applied and enables no-drop (pause) packet handling for Class of Service level 3.
Example:
apic1(config)# qos parameters level1
apic1(config-qos)# pause no-drop cos 3
Step 3 Configure FCoE over FEX per port, port-channel, and VPC:
Example:
vsan : 1-100
vlan : 1-100
Use the show vsan-domain command to verify FCoE is enabled on the target switch.
The command example confirms FCoE enabled on the listed leaf switches and its FCF connection details.
Example:
Epg: e1
Step 1 List the attributes of the leaf port interface, set its mode setting to default, and then remove its EPG deployment and
domain association.
The example sets the port mode setting of interface vfc 1/2 to default and then removes the deployment of EPG e1 and
the association with VSAN Domain dom1 from that interface.
Example:
Step 2 List and remove the VSAN/VLAN mapping and the VLAN and VSAN pools.
The example removes the VSAN/VLAN mapping for vsan 2, VLAN pool 1-10, and VSAN pool 1-10 from VSAN domain
dom1.
Example:
apic1(config)# vsan-domain dom1
apic1(config-vsan)# show run
# Command: show running-config vsan-domain dom1
# Time: Tue Jul 26 09:43:47 2016
vsan-domain dom1
vsan 1-10
vlan 1-10
fcoe vsan 2 vlan 2
exit
apic1(config-vsan)# no fcoe vsan 2
apic1(config-vsan)# no vlan 1-10
apic1(config-vsan)# no vsan 1-10
apic1(config-vsan)# exit
#################################################################################
NOTE: To remove a template-based VSAN to VLAN mapping use an alternate sequence:
#################################################################################
Step 4 You can delete the associated tenant, EPG, and selectors if you do not need them.
Step 1 To create a VSAN pool, send a post with XML such as the following example.
The example creates VSAN pool vsanPool1 and specifies the range of VSANs to be included.
Example:
https://apic-ip-address/api/mo/uni/infra/vsanns-[vsanPool1]-static.xml
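The XML body for this post is not reproduced above; the following is a minimal sketch, assuming the VSAN pool classes fvnsVsanInstP and fvnsVsanEncapBlk and an illustrative VSAN range.
Example:
<!-- Illustrative static VSAN pool -->
<fvnsVsanInstP name="vsanPool1" allocMode="static">
<fvnsVsanEncapBlk name="encap" from="vsan-5" to="vsan-100"/>
</fvnsVsanInstP>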
Step 2 To create a VLAN pool, send a post with XML such as the following example.
The example creates VLAN pool vlanPool1 and specifies the range of VLANs to be included.
Example:
https://apic-ip-address/api/mo/uni/infra/vlanns-[vlanPool1]-static.xml
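The VLAN pool body is likewise not reproduced above; a minimal sketch follows, assuming the standard fvnsVlanInstP and fvnsEncapBlk classes and an illustrative VLAN range.
Example:
<!-- Illustrative static VLAN pool -->
<fvnsVlanInstP name="vlanPool1" allocMode="static">
<fvnsEncapBlk name="encap" from="vlan-5" to="vlan-100"/>
</fvnsVlanInstP>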
Step 3 To create a VSAN-Attribute policy, send a post with XML such as the following example.
The example creates VSAN attribute policy vsanattri1, maps vsan-10 to vlan-43, and maps vsan-11 to vlan-44.
Example:
https://apic-ip-address/api/mo/uni/infra/vsanattrp-[vsanattr1].xml
<fcVsanAttrP name="vsanattr1">
Step 4 To create a Fibre Channel domain, send a post with XML such as the following example.
The example creates VSAN domain vsanDom1.
Example:
https://apic-ip-address/api/mo/uni/fc-vsanDom1.xml
<!-- Vsan-domain -->
<fcDomP name="vsanDom1">
<fcRsVsanAttr tDn="uni/infra/vsanattrp-[vsanattr1]"/>
<infraRsVlanNs tDn="uni/infra/vlanns-[vlanPool1]-static"/>
<fcRsVsanNs tDn="uni/infra/vsanns-[vsanPool1]-static"/>
</fcDomP>
Step 5 To create the tenant, application profile, EPG and associate the FCoE bridge domain with the EPG, send a post with XML
such as the following example.
The example creates a bridge domain bd1 under a target tenant configured to support FCoE and an application EPG
epg1. It associates the EPG with VSAN domain vsanDom1 and a Fibre Channel path to interface 1/39 on leaf switch
101. It deletes a Fibre Channel path to interface 1/40 by assigning the <fvRsFcPathAtt> object with "deleted" status. Each
interface is associated with a VSAN.
Note Two other possible alternative vFC deployments are also displayed. One sample deploys vFC on a port channel.
The other sample deploys vFC on a virtual port channel.
Example:
https://apic-ip-address/api/mo/uni/tn-tenant1.xml
<fvTenant
name="tenant1">
<fvCtx name="vrf1"/>
<fvAp name="app1">
<fvAEPg name="epg1">
<fvRsBd tnFvBDName="bd1" />
<fvRsDomAtt tDn="uni/fc-vsanDom1" />
<fvRsFcPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/39]"
vsan="vsan-11" vsanMode="native"/>
<fvRsFcPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/40]"
vsan="vsan-10" vsanMode="regular" status="deleted"/>
</fvAEPg>
</fvAp>
</fvTenant>
Step 6 To create a port policy group and an AEP, send a post with XML such as the following example.
The example executes the following requests:
• Creates a policy group portgrp1 that includes an FC interface policy fcIfPol1, a priority flow control policy pfcIfPol1
and a slow-drain policy sdIfPol1.
• Creates an attached entity profile (AEP) AttEntP1 that associates the ports in VSAN domain vsanDom1 with the
settings to be specified for fcIfPol1, pfcIfPol1, and sdIfPol1.
Example:
https://apic-ip-address/api/mo/uni.xml
<polUni>
<infraInfra>
<infraFuncP>
<infraAccPortGrp name="portgrp1">
<infraRsFcIfPol tnFcIfPolName="fcIfPol1"/>
<infraRsAttEntP tDn="uni/infra/attentp-AttEntP1" />
<infraRsQosPfcIfPol tnQosPfcIfPolName="pfcIfPol1"/>
<infraRsQosSdIfPol tnQosSdIfPolName="sdIfPol1"/>
</infraAccPortGrp>
</infraFuncP>
<infraAttEntityP name="AttEntP1">
<infraRsDomP tDn="uni/fc-vsanDom1"/>
</infraAttEntityP>
<qosPfcIfPol dn="uni/infra/pfc-pfcIfPol1" adminSt="on">
</qosPfcIfPol>
<qosSdIfPol dn="uni/infra/qossdpol-sdIfPol1" congClearAction="log"
congDetectMult="5" flushIntvl="100" flushAdminSt="enabled">
</qosSdIfPol>
<fcIfPol dn="uni/infra/fcIfPol-fcIfPol1" portMode="np">
</fcIfPol>
</infraInfra>
</polUni>
Step 7 To create a node selector and a port selector, send a post with XML such as the following example.
The example executes the following requests:
• Creates node selector leafsel1 that specifies leaf node 101.
• Creates port selector portsel1 that specifies port 1/39.
Example:
https://apic-ip-address/api/mo/uni.xml
<polUni>
<infraInfra>
<infraNodeP name="nprof1">
<infraLeafS name="leafsel1" type="range">
<infraNodeBlk name="nblk1" from_="101" to_="101"/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-pprof1"/>
</infraNodeP>
<infraAccPortP name="pprof1">
<infraHPortS name="portsel1" type="range">
<infraPortBlk name="blk"
fromCard="1" toCard="1" fromPort="39" toPort="39">
</infraPortBlk>
</infraAccPortP>
</infraInfra>
</polUni>
Step 8 To create a vPC, send a post with XML such as the following example.
Example:
https://apic-ip-address/api/mo/uni.xml
<polUni>
<fabricInst>
</fabricInst>
</polUni>
<infraAccPortP name="pprof1">
<infraHPortS name="portsel1" type="range">
<infraPortBlk name="blk"
fromCard="1" toCard="1" fromPort="17" toPort="17"></infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/fexprof-fexprof1/fexbundle-fexbundle1" fexId="110" />
</infraHPortS>
</infraAccPortP>
<infraFuncP>
<infraAccPortGrp name="portgrp1">
<infraRsAttEntP tDn="uni/infra/attentp-attentp1" />
</infraAccPortGrp>
</infraFuncP>
<infraFexP name="fexprof1">
<infraFexBndlGrp name="fexbundle1"/>
<infraHPortS name="portsel2" type="range">
<infraPortBlk name="blk2"
fromCard="1" toCard="1" fromPort="20" toPort="20"></infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accportgrp-portgrp1"/>
</infraHPortS>
</infraFexP>
<infraAttEntityP name="attentp1">
<infraRsDomP tDn="uni/fc-vsanDom1"/>
</infraAttEntityP>
</infraInfra>
<fvAp name="app1">
<fvAEPg name="epg1">
<fvRsBd tnFvBDName="bd1" />
<fvRsDomAtt tDn="uni/fc-vsanDom1" />
<fvRsFcPathAtt tDn="topology/pod-1/paths-101/extpaths-110/pathep-[eth1/17]" vsan="vsan-11"
vsanMode="native"/>
</fvAEPg>
</fvAp>
</fvTenant>
<infraAccPortP name="pprof1">
<infraHPortS name="portsel1" type="range">
<infraPortBlk name="blk1"
fromCard="1" toCard="1" fromPort="18" toPort="18"></infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/fexprof-fexprof1/fexbundle-fexbundle1" fexId="111" />
</infraHPortS>
</infraAccPortP>
<infraFexP name="fexprof1">
<infraFexBndlGrp name="fexbundle1"/>
<infraHPortS name="portsel1" type="range">
<infraPortBlk name="blk1"
fromCard="1" toCard="1" fromPort="20" toPort="20"></infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-pc1"/>
</infraHPortS>
</infraFexP>
<infraFuncP>
<infraAccBndlGrp name="pc1">
<infraRsAttEntP tDn="uni/infra/attentp-attentp1" />
</infraAccBndlGrp>
</infraFuncP>
<infraAttEntityP name="attentp1">
<infraRsDomP tDn="uni/fc-vsanDom1"/>
</infraAttEntityP>
</infraInfra>
<fvTenant name="tenant1">
<fvCtx name="vrf1"/>
<fvAp name="app1">
<fvAEPg name="epg1">
<fvRsBd tnFvBDName="bd1" />
<fvRsDomAtt tDn="uni/fc-vsanDom1" />
<fvRsFcPathAtt tDn="topology/pod-1/paths-101/extpaths-111/pathep-[pc1]" vsan="vsan-11" vsanMode="native"
/>
</fvAEPg>
</fvAp>
</fvTenant>
<fvAp name="app1">
<fvAEPg name="epg1">
<fvRsBd tnFvBDName="bd1" />
<fvRsDomAtt tDn="uni/fc-vsanDom1" />
<fvRsFcPathAtt vsanMode="native" vsan="vsan-11"
tDn="topology/pod-1/protpaths-101-102/extprotpaths-111-111/pathep-[vpc1]" />
</fvAEPg>
</fvAp>
</fvTenant>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-pprof1" />
</infraNodeP>
<infraNodeP name="nprof2">
<infraLeafS name="leafsel2" type="range">
<infraNodeBlk name="nblk2" from_="102" to_="102"/>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-pprof2" />
</infraNodeP>
<infraAccPortP name="pprof1">
<infraHPortS name="portsel1" type="range">
<infraPortBlk name="blk1"
fromCard="1" toCard="1" fromPort="18" toPort="18">
</infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/fexprof-fexprof1/fexbundle-fexbundle1" fexId="111" />
</infraHPortS>
</infraAccPortP>
<infraAccPortP name="pprof2">
<infraHPortS name="portsel2" type="range">
<infraPortBlk name="blk2"
fromCard="1" toCard="1" fromPort="18" toPort="18">
</infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/fexprof-fexprof2/fexbundle-fexbundle2" fexId="111" />
</infraHPortS>
</infraAccPortP>
<infraFexP name="fexprof1">
<infraFexBndlGrp name="fexbundle1"/>
<infraHPortS name="portsel1" type="range">
<infraPortBlk name="blk1"
fromCard="1" toCard="1" fromPort="20" toPort="20">
</infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-vpc1"/>
</infraHPortS>
</infraFexP>
<infraFexP name="fexprof2">
<infraFexBndlGrp name="fexbundle2"/>
<infraHPortS name="portsel2" type="range">
<infraPortBlk name="blk2"
fromCard="1" toCard="1" fromPort="20" toPort="20">
</infraPortBlk>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-vpc1"/>
</infraHPortS>
</infraFexP>
<infraFuncP>
<infraAccBndlGrp name="vpc1" lagT="node">
<infraRsAttEntP tDn="uni/infra/attentp-attentp1" />
</infraAccBndlGrp>
</infraFuncP>
<infraAttEntityP name="attentp1">
<infraRsDomP tDn="uni/fc-vsanDom1"/>
</infraAttEntityP>
</infraInfra>
</polUni>
POST https://apic-ip-address/api/node/mo/uni/fabric/protpol.xml
<fabricProtPol>
<fabricExplicitGEp name="vpc-explicitGrp1101102" id="100" >
<fabricNodePEp id="101" />
<fabricNodePEp id="102" />
</fabricExplicitGEp>
</fabricProtPol>
POST https://apic-ip-address/api/node/mo/uni/infra/fcIfPol-vpc1.xml
POST https://apic-ip-address/api/node/mo/uni/infra/lacplagp-vpc1.xml
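The bodies for the two preceding posts are not included above. A minimal sketch of what they might contain, assuming an F-port Fibre Channel interface policy and an active-mode LACP policy, both named vpc1 as referenced by the policy group in the post that follows (attribute values are illustrative):
<!-- Posted to .../fcIfPol-vpc1.xml -->
<fcIfPol name="vpc1" portMode="f" trunkMode="trunk-off" speed="auto"/>
<!-- Posted to .../lacplagp-vpc1.xml -->
<lacpLagPol name="vpc1" mode="active"/>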
POST https://apic-ip-address/api/node/mo/uni/infra.xml
<infraInfra>
<infraAccPortP
name="Switch101-102_Profile_ifselector"
descr="GUI Interface Selector Generated PortP Profile: Switch101-102_Profile">
<infraHPortS name="Switch101-102_1-ports-49" type="range">
<infraPortBlk name="block1" fromPort="49" toPort="49" />
<infraRsAccBaseGrp
tDn="uni/infra/funcprof/accbundle-Switch101-102_1-ports-49_PolGrp" />
</infraHPortS>
</infraAccPortP>
<infraFuncP>
<infraAccBndlGrp name="Switch101-102_1-ports-49_PolGrp" lagT="node">
<infraRsAttEntP tDn="uni/infra/attentp-fcDom_AttEntityP" />
<infraRsFcIfPol tnFcIfPolName="vpc1" />
<infraRsLacpPol tnLacpLagPolName="vpc1" />
</infraAccBndlGrp>
</infraFuncP>
<infraNodeP
name="Switch101-102_Profile"
descr="GUI Interface Selector Generated Profile: Switch101-102_Profile">
<infraLeafS name="Switch101-102_Profile_selector_101102" type="range">
<infraNodeBlk name="single0" from_="101" to_="101" />
<infraNodeBlk name="single1" from_="102" to_="102" />
</infraLeafS>
<infraRsAccPortP
tDn="uni/infra/accportprof-Switch101-102_Profile_ifselector" />>
</infraNodeP>
</infraInfra>
POST https://apic-ip-address/api/node/mo/uni/tn-newtenant/BD-BDnew1.xml
POST https://apic-ip-address/api/node/mo/uni/tn-newtenant/ap-AP1/epg-epgNew.xml
POST https://apic-ip-address/api/node/mo/uni/tn-newtenant/ap-AP1/epg-epgNew.xml
<fvRsPathAtt
encap="vlan-1"
instrImedcy="immediate"
mode="native"
tDn="topology/pod-1/protpaths-101-102/pathep-[Switch101-102_1-ports-49_PolGrp]" />
POST https://apic-ip-address/api/node/mo/uni/tn-newtenant/BD-BD3.xml
<fvBD
name="BD3"
mac="00:22:BD:F8:19:FF"
type="fc"
unicastRoute="false" >
<fvRsCtx tnFvCtxName="vrf" />
</fvBD>
POST https://apic-ip-address/api/node/mo/uni/tn-newtenant/ap-AP1/epg-epg3.xml
POST https://apic-ip-address/api/node/mo/uni/tn-newtenant/ap-AP1/epg-epg3.xml
<fvRsFcPathAtt
vsan="vsan-3"
vsanMode="native"
tDn="topology/pod-1/paths-101/pathep-[eth1/49]" />
Object Description
<fvRsFcPathAtt> (Fibre Channel path) The Fibre Channel path specifies the vFC path to the actual
interface. Deleting each object of this type removes the deployment
from that object's associated interfaces.
<fcVsanAttrpP> (VSAN/VLAN map) The VSAN/VLAN map maps the VSANs to their associated
VLANs. Deleting this object removes the association between the
VSANs that support FCoE connectivity and their underlying
VLANs.
<fvnsVsanInstP> (VSAN pool) The VSAN pool specifies the set of VSANs available to support
FCoE connectivity. Deleting this pool removes those VSANs.
<fvnsVlanInstP> (VLAN pool) The VLAN pool specifies the set of VLANs available for VSAN
mapping. Deleting the associated VLAN pool cleans up after an
FCoE undeployment, removing the underlying VLAN entities over
which the VSAN entities ran.
<fcDomP> (VSAN or Fibre Channel domain) The Fibre Channel domain includes all the VSANs and their
mappings. Deleting this object undeploys vFC from all interfaces
associated with this domain.
<fvAEPg> (application EPG) The application EPG associated with the FCoE connectivity. If the
purpose of the application EPG was only to support FCoE-related
activity, you might consider deleting this object.
<fvAp> (application profile) The application profile associated with the FCoE connectivity. If
the purpose of the application profile was only to support
FCoE-related activity, you might consider deleting this object.
<fvTenant> (tenant) The tenant associated with the FCoE connectivity. If the purpose
of the tenant was only to support FCoE-related activity, you might
consider deleting this object.
Note If during clean up you delete the Ethernet configuration object (infraHPortS) for a vFC port, the default vFC
properties remain associated with that interface. For example, if the interface configuration for vFC NP port
1/20 is deleted, that port remains a vFC port, but with the default F port setting applied rather than the
non-default NP port setting.
The following steps undeploy FCoE-enabled interfaces and EPGs accessing those interfaces using the FCoE
protocol.
Step 1 To delete the associated Fibre Channel path objects, send a post with XML such as the following example.
The example deletes all instances of the Fibre Channel path object <fvRsFcPathAtt>.
Note Deleting the Fibre Channel paths undeploys the vFC from the ports/VSANs that used them.
Example:
https://apic-ip-address/api/mo/uni/tn-tenant1.xml
<fvTenant
name="tenant1">
<fvCtx name="vrf1"/>
<fvAp name="app1">
<fvAEPg name="epg1">
<fvRsBd tnFvBDName="bd1" />
<fvRsFcPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/39]"
vsan="vsan-11" vsanMode="native" status="deleted"/>
<fvRsFcPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/40]"
vsan="vsan-10" vsanMode="regular" status="deleted"/>
</fvAEPg>
</fvAp>
</fvTenant>
Step 2 To delete the associated VSAN/VLAN map, send a post such as the following example.
The example deletes the VSAN/VLAN map vsanattr1 and its associated <fcVsanAttrpP> object.
Example:
https://apic-ip-address/api/mo/uni/infra/vsanattrp-[vsanattr1].xml
Step 3 To delete the associated VSAN pool, send a post such as the following example.
The example deletes the VSAN pool vsanPool1 and its associated <fvnsVsanInstP> object.
Example:
https://apic-ip-address/api/mo/uni/infra/vsanns-[vsanPool1]-static.xml
Step 4 To delete the associated VLAN pool, send a post with XML such as the following example.
The example deletes the VLAN pool vlanPool1 and its associated <fvnsVlanInstP> object.
Example:
https://apic-ip-address/api/mo/uni/infra/vlanns-[vlanPool1]-static.xml
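The XML bodies for Steps 2 through 4 are not reproduced above. A minimal sketch, assuming each object is removed by posting a body with "deleted" status to the URL shown in its step (the VSAN/VLAN map in Step 2 follows the same pattern with its <fcVsanAttrpP> object):
<!-- Posted to .../vsanns-[vsanPool1]-static.xml -->
<fvnsVsanInstP name="vsanPool1" status="deleted"/>
<!-- Posted to .../vlanns-[vlanPool1]-static.xml -->
<fvnsVlanInstP name="vlanPool1" status="deleted"/>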
Step 5 To delete the associated Fibre Channel domain, send a post with XML such as the following example.
The example deletes the VSAN domain vsanDom1 and its associated <fcDomP> object.
Example:
https://apic-ip-address/api/mo/uni/fc-vsanDom1.xml
<!-- Vsan-domain -->
<fcDomP name="vsanDom1" status="deleted">
<fcRsVsanAttr tDn="uni/infra/vsanattrp-[vsanattr1]"/>
<infraRsVlanNs tDn="uni/infra/vlanns-[vlanPool1]-static"/>
<fcRsVsanNs tDn="uni/infra/vsanns-[vsanPool1]-static"/>
</fcDomP>
Step 6 Optional: If appropriate, you can delete the associated application EPG, the associated application profile, or the associated
tenant.
Example:
In the following sample, the application EPG epg1 and its associated <fvAEPg> object are deleted.
https://apic-ip-address/api/mo/uni/tn-tenant1.xml
<fvTenant
name="tenant1"/>
<fvCtx name="vrf1"/>
<fvAp name="app1">
<fvAEPg name="epg1" status= "deleted">
<fvRsBd tnFvBDName="bd1" />
<fvRsDomAtt tDn="uni/fc-vsanDom1" />
<fvRsFcPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/39]"
vsan="vsan-11" vsanMode="native" status="deleted"/>
<fvRsFcPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/40]"
vsan="vsan-10" vsanMode="regular" status="deleted"/>
</fvAEPg>
</fvAp>
</fvTenant>
Example:
In the following example, the application profile app1 and its associated <fvAp> object are deleted.
https://apic-ip-address/api/mo/uni/tn-tenant1.xml
<fvTenant
name="tenant1">
<fvCtx name="vrf1"/>
<fvAp name="app1" status="deleted"/>
</fvTenant>
Example:
In the following example, the entire tenant tenant1 and its associated <fvTenant> object are deleted.
https://apic-ip-address/api/mo/uni/tn-tenant1.xml
<fvTenant
name="tenant1" status="deleted">
<fvCtx name="vrf1"/>
<fvAp name="app1">
<fvAEPg name="epg1" status= "deleted">
<fvRsBd tnFvBDName="bd1" />
<fvRsDomAtt tDn="uni/fc-vsanDom1" />
<fvRsFcPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/39]"
vsan="vsan-11" vsanMode="native" status="deleted"/>
<fvRsFcPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/40]"
vsan="vsan-10" vsanMode="regular" status="deleted"/>
</fvAEPg>
</fvAp>
</fvTenant>
Beginning with Cisco APIC Release 4.0(2), SAN boot is supported through a FEX host interface (HIF) port
vPC, as shown in the following figure.
Figure 27: SAN Boot Topology with a FEX host interface (HIF) port vPC
Step 1 On the APIC menu bar, navigate to Fabric > Access Policies > Quickstart and click Configure an interface, PC, and
VPC.
Step 2 In the Configure an interface, PC, and VPC work area, in the vPC Switch Pairs toolbar, click + to create a switch pair.
Perform the following actions:
a) In the vPC Domain ID text box, enter a number to designate the switch pair.
b) From the Switch 1 drop-down list, select a leaf switch.
Only switches with interfaces in the same vPC policy group can be paired together.
c) From the Switch 2 drop-down list, select a leaf switch.
d) Click Save to save this switch pair.
Step 3 In the Configure an interface, PC, and vPC work area, click the large green + to select switches.
The Select Switches To Configure Interfaces work area opens with the Quick option selected by default.
Step 4 Select two switch IDs from the Switches drop-down list, and name the switch profile.
Step 5 Click the large green + again to configure the switch interfaces.
Step 6 In the Interface Type control, select vPC.
Step 7 For Interfaces, enter a single port number, such as 1/49, that will be used on both switches as vPC members.
This action creates an interface selector policy. You can accept or change the name of the policy in the Interface
Selector Name text box.
This EPG will be the Native EPG, in which the Native VLAN will be configured.
a) In the Name field, type a name for the EPG.
b) From the Bridge Domain drop-down list, select Create Bridge Domain.
c) In the Name field, type a name for the bridge domain.
d) From the Type control, select regular.
e) From the VRF drop-down list, choose the tenant VRF. If no VRF exists yet, select Create VRF, name the VRF
and click Submit.
f) Click Next, Next, and Finish to return to Create Application EPG.
g) Click Finish.
Step 18 Expand the Native EPG created in the previous step.
Step 19 Right-click Static Ports, select Deploy Static EPG On PC, VPC, or Interface and perform the following actions.
a) From the Path Type control, select Virtual Port Channel.
b) From the Path drop-down list, select the port channel policy created for vPC.
c) From the Port Encap drop-down list, select VLAN and enter the number of an Ethernet VLAN.
d) From the Deployment Immediacy control, select Immediate.
e) From the Mode control, select Access (802.1P).
f) Click Submit.
Step 20 Right-click Application EPGs, select Create Application EPG and perform the following actions.
This EPG will be the first of two EPGs, one for each SAN.
a) In the Name field, type a name for the EPG.
b) From the Bridge Domain drop-down list, select Create Bridge Domain.
c) In the Name field, type a name for the bridge domain.
d) From the Type control, select fc.
e) From the VRF drop-down list, choose the tenant VRF. If no VRF exists yet, select Create VRF, name the VRF
and click Submit.
f) Click Next, Next, and Finish to return to Create Application EPG.
g) Click Finish.
Step 21 Repeat the previous step to create a second application EPG.
This second EPG will be used for the second SAN.
Step 22 Expand one of the two SAN EPGs, right-click Fibre Channel (Paths), select Deploy Fibre Channel and perform the
following actions.
a) From the Path Type control, select Port.
b) From the Node drop-down list, select one leaf of your switch pair.
c) From the Path drop-down list, select the Ethernet port number of your VPC.
d) In the VSAN text box, type the VSAN number prefixed by "vsan-".
For example, type "vsan-300" for VSAN number 300.
e) In the VSAN Mode control, select Native.
f) Click Submit.
Step 23 Expand the other of the two SAN EPGs and repeat the previous step, selecting the other leaf of your switch pair.
In this example, VSAN 200 is bound to physical Ethernet interface 1/49 on leaf 101 and VSAN 300 is bound
to physical Ethernet interface 1/49 on leaf 102. The two interfaces are members of virtual port channel
Switch101-102_1-ports-49_PolGrp.
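Expressed in the object model, the result of Steps 22 and 23 is one Fibre Channel path binding per SAN EPG. A minimal sketch of the two bindings (posted under their respective SAN EPGs; the tenant, application profile, and EPG names are whatever you chose in the earlier steps):
<!-- Under the first SAN EPG: VSAN 200 on leaf 101 -->
<fvRsFcPathAtt vsan="vsan-200" vsanMode="native" tDn="topology/pod-1/paths-101/pathep-[eth1/49]"/>
<!-- Under the second SAN EPG: VSAN 300 on leaf 102 -->
<fvRsFcPathAtt vsan="vsan-300" vsanMode="native" tDn="topology/pod-1/paths-102/pathep-[eth1/49]"/>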
Note In the FC NPV application, the role of the ACI leaf switch is to provide a path for FC traffic between the
locally connected SAN hosts and a locally connected core switch. The leaf switch does not perform local
switching between SAN hosts, and the FC traffic is not forwarded to a spine switch.
FC NPV Benefits
FC NPV provides the following:
• Increases the number of hosts that connect to the fabric without adding domain IDs in the fabric. The
domain ID of the NPV core switch is shared among multiple NPV switches.
• FC and FCoE hosts connect to SAN fabrics using native FC interfaces.
• Automatic traffic mapping for load balancing. For newly added servers connected to NPV, traffic is
automatically distributed among the external uplinks based on current traffic loads.
• Static traffic mapping. A server connected to NPV can be statically mapped to an external uplink.
FC NPV Mode
The feature set fcoe-npv in ACI is enabled automatically by default when the first FCoE/FC configuration
is pushed.
FC Topology
The topology of various configurations supporting FC traffic over the ACI fabric is shown in the following
figure:
• Server/storage host interfaces on the ACI leaf switch can be configured to function as either native F
ports or as virtual FC (FCoE) ports.
• An uplink interface to a FC core switch can be configured as any of the following port types:
• native FC NP port
• SAN-PO NP port
• An uplink interface to a FCF switch can be configured as any of the following port types:
• virtual (vFC) NP port
• vFC-PO NP port
• N-Port ID Virtualization (NPIV) is supported and enabled by default, allowing an N port to be assigned
multiple N port IDs or Fibre Channel IDs (FCID) over a single link.
• Trunking can be enabled on an NP port to the core switch. Trunking allows a port to support more than
one VSAN. When trunk mode is enabled on an NP port, it is referred to as a TNP port.
• Multiple NP ports can be combined as a SAN port channel (SAN-PO) to the core switch. Trunking is
supported on a SAN port channel.
• FC F ports support 4/16/32 Gbps and auto speed configuration, but 8 Gbps is not supported for host
interfaces. The default speed is 'auto'.
• FC NP ports support 4/8/16/32 Gbps and auto speed configuration. The default speed is 'auto'.
• Multiple FDISC followed by FLOGI (nested NPIV) is supported with FC/FCoE host and FC/FCoE NP
links.
• SAN boot is supported for hosts directly connected by FC F ports. SAN boot is supported on FEX through
an FCoE uplink, but not through a vPC.
Traffic Maps
FC NPV supports traffic maps. A traffic map allows you to specify the external (NP uplink) interfaces that a
server (host) interface can use to connect to the core switches.
Note When an FC NPV traffic map is configured for a server interface, the server interface must select only from
the external interfaces in its traffic map. If none of the specified external interfaces are operational, the server
remains in a non-operational state.
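In the APIC object model, a traffic map is a pinning label on the server interface path plus a pinning profile that points the label at an NP uplink. A minimal sketch with illustrative names and paths (the complete procedure appears in the REST API section later in this chapter):
<fvTenant name="tenant1">
<fvAp name="app1">
<fvAEPg name="epg1">
<!-- The server (host) vFC path carries the pinning label -->
<fvRsFcPathAtt tDn="topology/pod-1/paths-104/pathep-[eth1/47]" vsan="vsan-50" vsanMode="native">
<fcPinningLbl name="label1"/>
</fvRsFcPathAtt>
</fvAEPg>
</fvAp>
<!-- The pinning profile maps the label to the NP uplink -->
<fcPinningP name="label1">
<fcRsPinToPath tDn="topology/pod-1/paths-104/pathep-[fc1/7]"/>
</fcPinningP>
</fvTenant>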
Note Redistributing a server interface causes traffic disruption to the attached end devices. Adding a member to
the existing port-channel does not trigger disruptive auto load-balance.
To avoid disruption of server traffic, you should enable disruptive load balancing only after adding a new NP uplink, and
then disable it again after the server interfaces have been redistributed.
If disruptive load balancing is not enabled, you can manually reinitialize some or all of the server interfaces
to distribute server traffic to new NP uplink interfaces.
• To link a set of servers to a specific core switch, associate the server interfaces with a set of NP uplink
interfaces that all connect to that core switch.
• Configure Persistent FC IDs on the core switch and use the traffic map feature to direct server interface
traffic onto NP uplinks that all connect to the associated core switch.
• When initially configuring traffic map pinning, you must shut the server host port before configuring
the first traffic map.
• If traffic mapping is configured for more than one uplink, when removing the traffic map through which
a host has logged in, you must first shut the host before removing the traffic map.
Note When a server is statically mapped to an external interface, the server traffic is not redistributed in the event
that the external interface goes down for any reason.
• On the NPIV core switch, enable the following features:
feature npiv
feature fport-channel-trunk
• To use an 8G uplink speed, you must configure the IDLE fill pattern on the core switch.
Note Following is an example of configuring IDLE fill pattern on a Cisco MDS switch:
interface fc2/3
switchport speed 8000
switchport mode NP
switchport fill-pattern IDLE speed 8000
no shutdown
Supported Hardware
FC NPV is supported on the N9K-C93180YC-FX switch and only the following FC SFPs are supported:
• DS-SFP-FC8G-SW — 2/4/8G (2G is not a supported FC NPV port speed)
• DS-SFP-FC16G-SW — 4/8/16G (not compatible when FC NPV port speed is 32G)
• DS-SFP-FC32G-SW — 8/16/32G (not compatible when FC NPV port speed is 4G)
Supported NPIV core switches are Cisco Nexus 5000 Series, Nexus 6000 Series, Nexus 7000 Series (FCoE),
and Cisco MDS 9000 Series Multilayer Switches.
Step 1 On the APIC menu bar, navigate to Fabric > Access Policies > Quickstart and click Configure an interface, PC, and
vPC.
Step 2 In the Configured Switch Interfaces toolbar, click + to create a switch profile. Perform the following actions:
This switch profile configures your server host ports. Another switch profile configures your uplink ports.
a) From the Switches drop-down list, choose your NPV leaf switch.
This action automatically creates a leaf switch profile. You can accept or change the name of the leaf switch profile
in the Switch Profile Name text box.
b) Click the large green + on the ports drawing to open more interface settings.
c) For Interface Type, select FC to specify Fibre Channel host interface ports (F ports).
d) For Interfaces, enter a port range for the FC ports.
Only one contiguous range of ports can be converted to FC ports. This range must be a multiple of 4 ending with
a port number that is a multiple of 4 (for example, 1-4, 1-8, and 21-32 are valid ranges).
This action creates an interface selector policy. You can accept or change the name of the policy in the Interface
Selector Name text box.
Note Port conversion from Ethernet to FC requires a reload of the switch. After the interface policy is applied,
a notification alarm appears in the GUI, prompting you to reload the switch. During a switch reload,
communication to the switch is interrupted, resulting in timeouts when trying to access the switch.
e) From the Policy Group Name drop-down list, select Create FC Interface Policy Group.
f) In the Create FC Interface Policy Group dialog box, type a name in the Name field.
g) In the Fibre Channel Interface Policy drop-down list, select Create Fibre Channel Interface Policy.
h) In the Create Fibre Channel Interface Policy dialog box, type a name in the Name field and configure the
following settings:
i) Click Submit to save the Fibre Channel interface policy and return to the Create FC Interface Policy Group
dialog box.
j) From the Attached Entity Profile drop-down list, choose Create Attachable Access Entity Profile.
The attachable entity profile option specifies the interfaces where the leaf access port policy is deployed.
k) In the Name field, enter a name for the attachable entity policy.
l) In the Domains (VMM, Physical, or External) To Be Associated To Interfaces toolbar, click + to add a domain
profile.
m) From the Domain Profile drop-down list, choose Create Fibre Channel Domain.
n) In the Name field, enter a name for the Fibre Channel domain.
o) From the VSAN Pool drop-down list, choose Create VSAN Pool.
p) In the Name field, enter a name for the VSAN pool.
q) In the Encap Blocks toolbar, click + to add a VSAN range.
r) In the Create VSAN Ranges dialog box, enter From and To VSAN numbers.
s) For Allocation Mode, select Static Allocation and click OK.
t) In the Create VSAN Ranges dialog box, click Submit.
u) In the Create Fibre Channel Domain dialog box, click Submit.
Note In the Fibre Channel Domain, when using native FC ports instead of FCoE, it is not necessary to configure
a VLAN pool or VSAN attributes.
v) In the Create Attachable Access Entity Profile dialog box, click Update to select the Fibre Channel domain
profile and click Submit.
w) In the Create FC Interface Policy Group dialog box, click Submit.
x) In the Configure Interface, PC, and vPC dialog box, click Save to save this switch profile for your server host
ports.
Note Port conversion from Ethernet to FC requires a reload of the switch. After the interface policy is applied, a
notification alarm appears in the GUI, prompting you to reload the switch. During a switch reload, communication
to the switch is interrupted, resulting in timeouts when trying to access the switch.
In Fabric > Access Policies > Switches > Leaf Switches > Profiles > name, the Fibre Channel port profile appears in
the Associated Interface Selector Profiles list in the Leaf Profiles work pane.
What to do next
• Configure a Fibre Channel uplink connection profile.
• Deploy the server ports and uplink ports in a tenant to connect to a Fibre Channel core switch.
Note This procedure can also be performed using the Configure Interface, PC, and vPC wizard.
Step 1 Expand Fabric > Access Policies > Interfaces > Leaf Interfaces > Profiles.
Step 2 Right-click Profiles and click Create Leaf Interface Profile.
Step 3 In the Create Leaf Interface Profile dialog box, perform the following steps:
a) In the Name field, enter a name for the leaf interface profile.
b) In the Interface Selectors toolbar, click + to open the Create Access Port Selector dialog box.
c) In the Name field, enter a name for the port selector.
d) In the Interface IDs field, enter a port range for the FC PC ports.
The port channel can have a maximum of 16 ports.
Only one contiguous range of ports can be converted to FC ports. This range must be a multiple of 4 ending with
a port number that is a multiple of 4 (for example, 1-4, 1-8, and 21-32 are valid ranges).
Note Port conversion from Ethernet to FC requires a reload of the switch. After the interface policy is applied,
a notification alarm appears in the GUI, prompting you to reload the switch manually. During a switch
reload, communication to the switch is interrupted, resulting in timeouts when trying to access the switch.
e) From the Interface Policy Group drop-down list, choose Create FC PC Interface Policy Group.
f) In the Name field, enter a name for the FC PC interface policy group.
g) From the Fibre Channel Interface Policy drop-down list, choose Create Fibre Channel Interface Policy.
h) In the Name field, enter a name for the FC PC interface policy.
i) In the Create Fibre Channel Interface Policy dialog box, type a name in the Name field and configure the
following settings:
j) Click Submit to save the FC PC interface policy and return to the Create FC PC Interface Policy Group dialog
box.
k) From the Port Channel Policy drop-down list, choose Create Port Channel Policy.
l) In the Name field, enter a name for the port channel policy.
The other settings in this menu can be ignored.
m) Click Submit to save the port channel policy and return to the Create FC PC Interface Policy Group dialog box.
n) From the Attached Entity Profile drop-down list, choose the existing attachable entity profile.
o) Click Submit to return to the Create Access Port Selector dialog box.
p) Click OK to return to the Create Leaf Interface Profile dialog box.
q) Click OK to return to the Leaf Interfaces - Profiles work pane.
Step 4 Expand Fabric > Access Policies > Switches > Leaf Switches > Profiles.
Step 5 Right-click the leaf switch profile that you created and click Create Interface Profile.
Step 6 In the Create Interface Profile dialog box, perform the following steps:
a) From the Interface Select Profile drop-down list, choose the leaf interface profile that you created for the port
channel.
In Fabric > Access Policies > Switches > Leaf Switches > Profiles > name, the FC port channel profile appears in the
Associated Interface Selector Profiles list in the work pane.
What to do next
Deploy the server ports and uplink ports in a tenant to connect to a Fibre Channel core switch.
Step 2 Right-click Application Profiles, click Create Application Profile, and perform the following actions:
a) In the Name field, enter a name for the application profile.
b) Click Submit.
Step 3 Expand Tenants > Tenant name > Application Profiles > name > Application EPGs
Step 4 Right-click Application EPGs, click Create Application EPG, and perform the following actions:
Step 5 In the Create Application EPG dialog box, perform the following actions:
a) In the Name field, enter a name for the application EPG.
b) Configure the following settings:
c) From the Bridge Domain drop-down list, select Create Bridge Domain.
d) In the Name field, enter a name for the bridge domain.
i) Click Submit.
j) Repeat from Step a for each Fibre Channel host port.
Step 9 Expand Tenants > Tenant name > Application Profiles > name > Application EPGs > name > Fibre Channel (Paths)
and perform the following actions:
This step deploys the uplink port channel.
a) Right click Fibre Channel (Paths) and click Deploy Fibre Channel.
b) In the Path Type control, click Direct Port Channel.
c) From the Path drop-down list, choose the uplink port channel.
d) In the VSAN field, enter the port default VSAN.
e) In the VSAN Mode control, click Native for a port VSAN or Regular for a trunk VSAN.
f) Verify that the Type is fcoe.
g) Click Submit.
h) Repeat from Step a for each Fibre Channel uplink port or port channel.
Note Before creating a pinning profile (traffic map), you must shut the server port that is to be mapped to an uplink.
Step 1 In the Fabric > Inventory > Pod n > Leaf n > Interfaces > FC Interfaces work pane, select and disable the server
interface port that is to be mapped to an uplink.
Step 2 Expand Tenants > Tenant name > Application Profiles > application profile name > Application EPGs > EPG name
> Fibre Channel (Paths) and perform the following actions:
a) Right click Fibre Channel (Paths) and click Deploy Fibre Channel.
b) In the Path Type control, click Port.
c) From the Node drop-down list, choose the leaf switch.
d) From the Path drop-down list, choose the server port that is to be mapped to a specific uplink port.
e) In the VSAN field, enter the port default VSAN.
f) In the VSAN Mode control, click Native.
g) Verify that the Type is fcoe.
h) From the Pinning Label drop-down list, choose Create Pinning Profile.
i) In the Name field, enter a name for the traffic map.
j) In the Path Type control, click Port to connect to a single NP uplink port or Direct Port Channel to connect to
an FC port channel.
If you choose Port for the path type, you must also choose the leaf switch from the Node drop-down list that
appears.
If you choose Direct Port Channel for the path type, you must also choose the FC PC you have defined in
Interface Policy Group.
k) From the Path drop-down list, choose the uplink port or port channel to which the server port will be mapped.
l) Click Submit to return to the Deploy Fibre Channel dialog box.
m) Click Submit.
Step 3 In the Fabric > Inventory > Pod n > Leaf n > Interfaces > FC Interfaces work pane, select and re-enable the server
interface port that is mapped to an uplink.
This example converts ports 1/1-12 on leaf 101 to Fibre Channel ports. The [no] form of the port type fc command
converts the ports from Fibre Channel back to Ethernet.
Note The conversion of ports takes place only after a reboot of the leaf switch.
Currently only one contiguous range of ports can be converted to FC ports, and this range must be a multiple
of 4 ending with a port number that is a multiple of 4 (for example, 1-4, 1-8, or 21-24).
A FC interface can be configured in access mode or trunk mode. To configure the FC port in access mode, use the
following command format:
Example:
To configure a FC port channel, configure a FC port interface template and apply it to FC interfaces that will be members
of the FC port-channel.
The port channel can have a maximum of 16 members.
Example:
Example:
Note The policy commands that are shown here are only examples, and are not mandatory settings.
This example applies fabric-wide FC policies and leaf-wide FC policies that are grouped into a leaf policy group lpg1 to
leaf 101.
Step 8 Create a leaf interface profile and apply a fc-policy-group to a set of FC interfaces.
Example:
Example:
Example:
Step 1 To create a VSAN pool, send a post with XML such as the following example. The example creates VSAN pool
myVsanPool1 and specifies the range of VSANs to be included as vsan-50 to vsan-60:
Example:
https://apic-ip-address/api/mo/uni/infra/vsanns-[myVsanPool1]-static.xml
<fvnsVsanInstP name="myVsanPool1" allocMode="static">
<fvnsVsanEncapBlk from="vsan-50" to="vsan-60"/>
</fvnsVsanInstP>
Step 2 To create a Fibre Channel domain, send a post with XML such as the following example. The example creates Fibre
Channel domain (VSAN domain) myFcDomain1 and associates it with the VSAN pool myVsanPool1:
Example:
https://apic-ip-address/api/mo/uni/fc-myFcDomain1.xml
<fcDomP name="myFcDomain1">
<fcRsVsanNs tDn="uni/infra/vsanns-[myVsanPool1]-static"/>
</fcDomP>
Step 3 To create an attached entity profile (AEP) for the FC ports, send a post with XML such as the following example. The
example creates the AEP myFcAEP1 and associates it with the Fibre Channel domain myFcDomain1:
Example:
https://apic-ip-address/api/mo/uni.xml
<polUni>
<infraInfra>
<infraAttEntityP name="myFcAEP1">
<infraRsDomP tDn="uni/fc-myFcDomain1"/>
</infraAttEntityP>
</infraInfra>
</polUni>
Step 4 To create a FC interface policy and a policy group for server host ports, send a post with XML. This example executes
the following requests:
• Creates a FC interface policy myFcHostIfPolicy1 for server host ports. These are F ports with no trunking.
• Creates a FC interface policy group myFcHostPortGroup1 that includes the FC host interface policy
myFcHostIfPolicy1.
• Associates the policy group to the FC interface policy to convert these ports to FC ports.
• Creates a host port profile myFcHostPortProfile.
• Creates a port selector myFcHostSelector that specifies ports in range 1/1-8.
• Creates a node profile myFcNode1 with a leaf selector myLeafSelector that specifies leaf node 104.
• Associates the host ports to the leaf node.
Example:
https://apic-ip-address/api/mo/uni.xml
<polUni>
<infraInfra>
<fcIfPol name="myFcHostIfPolicy1" portMode="f" trunkMode="trunk-off" speed="auto"/>
<infraFuncP>
<infraFcAccPortGrp name="myFcHostPortGroup1">
<infraRsFcL2IfPol tnFcIfPolName="myFcHostIfPolicy1" />
</infraFcAccPortGrp>
</infraFuncP>
<infraAccPortP name="myFcHostPortProfile">
Note When this configuration is applied, a switch reload is required to bring up the ports as FC ports.
Currently only one contiguous range of ports can be converted to FC ports, and this range must be a multiple
of 4 ending with a port number that is a multiple of 4. Examples are 1-4, 1-8, or 21-24.
Step 5 To create a FC uplink port interface policy and a policy group for uplink port channels, send a post with XML. This
example executes the following requests:
• Creates a FC interface policy myFcUplinkIfPolicy2 for uplink ports. These are NP ports with trunking enabled.
• Creates a FC interface bundle policy group myFcUplinkBundleGroup2 that includes the FC uplink interface policy
myFcUplinkIfPolicy2.
• Associates the policy group to the FC interface policy to convert these ports to FC ports.
• Creates an uplink port profile myFcUplinkPortProfile.
• Creates a port selector myFcUplinkSelector that specifies ports in range 1/9-12.
• Associates the uplink ports to leaf node 104.
Example:
https://apic-ip-address/api/mo/uni.xml
<polUni>
<infraInfra>
<fcIfPol name="myFcUplinkIfPolicy2" portMode="np" trunkMode="trunk-on" speed="auto"/>
<infraFuncP>
<infraFcAccBndlGrp name="myFcUplinkBundleGroup2">
<infraRsFcL2IfPol tnFcIfPolName="myFcUplinkIfPolicy2" />
</infraFcAccBndlGrp>
</infraFuncP>
<infraAccPortP name="myFcUplinkPortProfile">
<infraHPortS name="myFcUplinkSelector" type="range">
<infraPortBlk name="myUplinkPorts" fromCard="1" toCard="1" fromPort="9" toPort="12"
/>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/fcaccportgrp-myFcUplinkBundleGroup2" />
</infraHPortS>
</infraAccPortP>
<infraNodeP name="myFcNode1">
<infraLeafS name="myLeafSelector" type="range">
<infraNodeBlk name="myLeaf104" from_="104" to_="104" />
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-myUplinkPorts" />
</infraNodeP>
</infraInfra>
</polUni>
Note When this configuration is applied, a switch reload is required to bring up the ports as FC ports.
Currently only one contiguous range of ports can be converted to FC ports, and this range must be a multiple
of 4 ending with a port number that is a multiple of 4. Examples are 1-4, 1-8, or 21-24.
Step 6 To create the tenant, application profile, EPG and associate the FC bridge domain with the EPG, send a post with XML
such as the following example. The example creates a bridge domain myFcBD1 under a target tenant configured to
support FC and an application EPG epg1. It associates the EPG with Fibre Channel domain myFcDomain1 and a Fibre
Channel path to interface 1/7 on leaf switch 104. Each interface is associated with a VSAN.
Example:
https://apic-ip-address/api/mo/uni/tn-tenant1.xml
<fvTenant name="tenant1">
<fvCtx name="myFcVRF"/>
<fvBD name="myFcBD1" type="fc">
<fvRsCtx tnFvCtxName="myFcVRF"/>
</fvBD>
<fvAp name="app1">
<fvAEPg name="epg1">
<fvRsBd tnFvBDName="myFcBD1"/>
<fvRsDomAtt tDn="uni/fc-myFcDomain1"/>
<fvRsFcPathAtt tDn="topology/pod-1/paths-104/pathep-[fc1/1]" vsan="vsan-50" vsanMode="native"/>
</fvAEPg>
</fvAp>
</fvTenant>
Step 7 To create a traffic map to pin server ports to uplink ports, send a post with XML such as the following example. The
example creates a traffic map to pin server port vFC 1/47 to uplink port FC 1/7:
Example:
https://apic-ip-address/api/mo/uni/tn-tenant1.xml
<fvTenant name="tenant1">
<fvAp name="app1">
<fvAEPg name="epg1">
<fvRsFcPathAtt tDn="topology/pod-1/paths-104/pathep-[eth1/47]" vsan="vsan-50" vsanMode="native">
<fcPinningLbl name="label1"/>
</fvRsFcPathAtt>
</fvAEPg>
</fvAp>
</fvTenant>
https://apic-ip-address/api/mo/uni/tn-tenant1.xml
<fvTenant name="tenant1">
<fcPinningP name="label1">
<fcRsPinToPath tDn="topology/pod-1/paths-104/pathep-[fc1/7]"/>
</fcPinningP>
</fvTenant>
Note If traffic map pinning is configured for the first time, the server host port must be shut before configuring the
first traffic map.
With Cisco ACI and Cisco APIC Release 2.2(1x) and higher, you can configure 802.1Q tunnels on edge
(tunnel) ports to enable point-to-multi-point tunneling of Ethernet frames in the fabric, with Quality of Service
(QoS) priority settings. A Dot1q Tunnel transports untagged, 802.1Q tagged, and 802.1ad double-tagged
frames as-is across the fabric. Each tunnel carries the traffic from a single customer and is associated with a
single bridge domain. ACI front panel ports can be part of a Dot1q Tunnel. Layer 2 switching is done based
on Destination MAC (DMAC) and regular MAC learning is done in the tunnel. Edge-port Dot1q Tunnels
are supported on second-generation (and later) Cisco Nexus 9000 series switches with "EX" on the end of the
switch model name.
With Cisco ACI and Cisco APIC Release 2.3(x) and higher, you can also configure multiple 802.1Q tunnels
on the same core port to carry double-tagged traffic from multiple customers, each distinguished with an
access encapsulation configured for each 802.1Q tunnel. You can also disable MAC Address Learning on
802.1Q tunnels. Both edge ports and core ports can belong to an 802.1Q tunnel with access encapsulation and
disabled MAC Address Learning. Both edge ports and core ports in Dot1q Tunnels are supported on
third-generation Cisco Nexus 9000 series switches with "FX" on the end of the switch model name.
Terms used in this document may be different in the Cisco Nexus 9000 Series documents.
• If a PC or VPC is the only interface in a Dot1q Tunnel and it is deleted and reconfigured, remove the
association of the PC/VPC to the Dot1q Tunnel and reconfigure it.
• With Cisco APIC Release 2.2(x) the Ethertypes for double-tagged frames must be 0x9100 followed by
0x8100.
However, with Cisco APIC Release 2.3(x) and higher, this limitation no longer applies for edge ports,
on third-generation Cisco Nexus switches with "FX" on the end of the switch model name.
• For core ports, the Ethertypes for double-tagged frames must be 0x8100 followed by 0x8100.
• You can include multiple edge ports and core ports (even across leaf switches) in a Dot1q Tunnel.
• An edge port may only be part of one tunnel, but a core port can belong to multiple Dot1q tunnels.
• With Cisco APIC Release 2.3(x) and higher, regular EPGs can be deployed on core ports that are used
in 802.1Q tunnels.
• L3Outs are not supported on interfaces enabled for Dot1q Tunnels.
• FEX interfaces are not supported as members of a Dot1q Tunnel.
• Interfaces configured as breakout ports do not support 802.1Q tunnels.
• Interface-level statistics are supported for interfaces in Dot1q Tunnels, but statistics at the tunnel level
are not supported.
Optional. Add a description of the policy group. We recommend that you describe the purpose of the policy
group.
• In the L2 Interface Policy field, click on the down-arrow and choose the L2 Interface Policy that you previously
created.
• If you are tunneling the CDP Layer 2 Tunneled Protocol, click on the CDP Policy down-arrow, and in the policy
dialog box add a name for the policy, disable the Admin State and click Submit.
• If you are tunneling the LLDP Layer 2 Tunneled Protocol, click on the LLDP Policy down-arrow, and in the
policy dialog box add a name for the policy, disable the Transmit State and click Submit.
• Click Submit.
Step 6 To create a static binding of the tunnel configuration to a port, click on Tenant > Networking > Dot1Q Tunnels. Expand
Dot1Q Tunnels and click on the Dot1Q Tunnels policy_name previously created and perform the following actions:
a) Expand the Static Bindings table to open Create Static Binding dialog box.
b) In the Port field, select the type of port.
c) In the Node field, select a node from the drop-down.
d) In the Path field, select the interface path from the drop-down and click Submit.
Note You can use ports, port-channels, or virtual port channels for interfaces included in a Dot1q Tunnel. Detailed
steps are included for configuring ports. See the examples below for the commands to configure edge and
core port-channels and virtual port channels.
Create a Dot1q Tunnel and configure the interfaces for use in the tunnel using the NX-OS Style CLI, with
the following steps:
Note Dot1q Tunnels must include 2 or more interfaces. Repeat the steps (or configure two interfaces together), to
mark each interface for use in a Dot1q Tunnel. In this example, two interfaces are configured as edge-switch
ports, used by a single customer.
Use the following steps to configure a Dot1q Tunnel using the NX-OS style CLI:
1. Configure at least two interfaces for use in the tunnel.
2. Create a Dot1q Tunnel.
3. Associate all the interfaces with the tunnel.
SUMMARY STEPS
1. configure
2. Configure two interfaces for use in an 802.1Q tunnel, with the following steps:
3. leaf ID
4. interface ethernet slot/port
5. switchport mode dot1q-tunnel {edgePort | corePort}
6. Create an 802.1Q tunnel with the following steps:
7. leaf ID
8. interface ethernet slot/port
9. switchport tenant tenant-name dot1q-tunnel tunnel-name
10. Repeat steps 7 to 10 to associate other interfaces with the tunnel.
DETAILED STEPS
Step 4 interface ethernet slot/port Identifies the interface or interfaces to be marked as ports
in a tunnel.
Example:
apic1(config-leaf)# interface ethernet 1/13-14
Step 5 switchport mode dot1q-tunnel {edgePort | corePort} Marks the interfaces for use in an 802.1Q tunnel, and then
leaves the configuration mode. The example shows configuring some interfaces for edge port use. Repeat steps
3 to 5 to configure more interfaces for the tunnel.
Example:
apic1(config-leaf-if)# switchport mode dot1q-tunnel edgePort
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)# exit
Step 9 switchport tenant tenant-name dot1q-tunnel tunnel-name Associates the interfaces to the tunnel and exits the
configuration mode.
Example:
Example: Configuring an 802.1Q Tunnel Using Ports with the NX-OS Style CLI
The example marks two ports as edge port interfaces to be used in a Dot1q Tunnel, marks two more ports to
be used as core port interfaces, creates the tunnel, and associates the ports with the tunnel.
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/13-14
apic1(config-leaf-if)# switchport mode dot1q-tunnel edgePort
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)# leaf 102
apic1(config-leaf)# interface ethernet 1/10, 1/21
apic1(config-leaf-if)# switchport mode dot1q-tunnel corePort
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)# tenant tenant64
apic1(config-tenant)# dot1q-tunnel vrf64_tunnel
apic1(config-tenant-tunnel)# l2protocol-tunnel cdp
apic1(config-tenant-tunnel)# l2protocol-tunnel lldp
apic1(config-tenant-tunnel)# access-encap 200
apic1(config-tenant-tunnel)# mac-learning disable
apic1(config-tenant-tunnel)# exit
apic1(config-tenant)# exit
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/13-14
apic1(config-leaf-if)# switchport tenant tenant64 dot1q-tunnel vrf64_tunnel
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)# leaf 102
apic1(config-leaf)# interface ethernet 1/10, 1/21
apic1(config-leaf-if)# switchport tenant tenant64 dot1q-tunnel vrf64_tunnel
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1# configure
apic1(config)# tenant tenant64
apic1(config-tenant)# dot1q-tunnel vrf64_tunnel
apic1(config-tenant-tunnel)# l2protocol-tunnel cdp
apic1(config-tenant-tunnel)# l2protocol-tunnel lldp
apic1(config-tenant-tunnel)# access-encap 200
apic1(config-tenant-tunnel)# mac-learning disable
apic1(config-tenant-tunnel)# exit
apic1(config-tenant)# exit
apic1(config)# leaf 101
apic1(config-leaf)# interface port-channel pc1
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface ethernet 1/2-3
apic1(config-leaf-if)# channel-group pc1
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface port-channel pc1
apic1(config-leaf-if)# switchport mode dot1q-tunnel edgePort
apic1(config-leaf-if)# switchport tenant tenant64 dot1q-tunnel vrf64_tunnel
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)# leaf 102
apic1(config-leaf)# interface port-channel pc2
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface ethernet 1/4-5
apic1(config-leaf-if)# channel-group pc2
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface port-channel pc2
apic1(config-leaf-if)# switchport mode dot1q-tunnel corePort
apic1(config-leaf-if)# switchport tenant tenant64 dot1q-tunnel vrf64_tunnel
apic1# configure
apic1(config)# vpc domain explicit 1 leaf 101 102
apic1(config)# vpc context leaf 101 102
apic1(config-vpc)# interface vpc vpc1
apic1(config-vpc-if)# switchport mode dot1q-tunnel edgePort
apic1(config-vpc-if)# exit
apic1(config-vpc)# exit
apic1(config)# vpc domain explicit 1 leaf 103 104
apic1(config)# vpc context leaf 103 104
apic1(config-vpc)# interface vpc vpc2
apic1(config-vpc-if)# switchport mode dot1q-tunnel corePort
apic1(config-vpc-if)# exit
apic1(config-vpc)# exit
apic1(config)# tenant tenant64
apic1(config-tenant)# dot1q-tunnel vrf64_tunnel
apic1(config-tenant-tunnel)# l2protocol-tunnel cdp
apic1(config-tenant-tunnel)# l2protocol-tunnel lldp
apic1(config-tenant-tunnel)# access-encap 200
apic1(config-tenant-tunnel)# mac-learning disable
apic1(config-tenant-tunnel)# exit
apic1(config-tenant)# exit
apic1(config)# leaf 103
apic1(config-leaf)# interface ethernet 1/6
apic1(config-leaf-if)# channel-group vpc1 vpc
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config)# leaf 104
apic1(config-leaf)# interface ethernet 1/6
apic1(config-leaf-if)# channel-group vpc1 vpc
apic1(config-leaf-if)# exit
apic1(config-leaf)# exit
apic1(config-vpc)# interface vpc vpc1
apic1(config-vpc-if)# switchport tenant tenant64 dot1q-tunnel vrf64_tunnel
apic1(config-vpc-if)# exit
Step 1 Create a Dot1q Tunnel using the REST API with XML such as the following example.
The example configures the tunnel with the LLDP Layer 2 tunneling protocol, adds the access encapsulation VLAN, and
disables MAC learning in the tunnel.
Example:
<fvTnlEPg name="VRF64_dot1q_tunnel" qiqL2ProtTunMask="lldp" accEncap="vlan-10"
fwdCtrl="mac-learn-disable" >
<fvRsTnlpathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/13]"/>
</fvTnlEPg>
Step 2 Configure a Layer 2 Interface policy with static binding with XML such as the following example.
The example configures a Layer 2 interface policy for edge-switch ports. To configure a policy for core-switch ports,
use corePort instead of edgePort in the l2IfPol MO.
Example:
<l2IfPol name="VRF64_L2_int_pol" qinq="edgePort" />
Step 3 Apply the Layer 2 Interface policy to a Leaf Access Port Policy Group with XML such as the following example.
Example:
<infraAccPortGrp name="VRF64_L2_Port_Pol_Group" >
<infraRsL2IfPol tnL2IfPolName="VRF64_L2_int_pol"/>
</infraAccPortGrp>
Step 4 Configure a Leaf Profile with an Interface Selector with XML such as the following example:
Example:
<infraAccPortP name="VRF64_dot1q_leaf_profile" >
<infraHPortS name="vrf64_access_port_selector" type="range">
<infraPortBlk name="block2" toPort="15" toCard="1" fromPort="13" fromCard="1"/>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accportgrp-VRF64_L2_Port_Pol_Group" />
</infraHPortS>
</infraAccPortP>
Example
The following example shows the port configuration for edge ports in two posts.
Step 1 Create a Dot1q Tunnel using the REST API with XML such as the following example.
The example configures the tunnel with the LLDP Layer 2 tunneling protocol, adds the access encapsulation VLAN, and
disables MAC learning in the tunnel.
Example:
<fvTnlEPg name="WEB" qiqL2ProtTunMask=lldp accEncap="vlan-10" fwdCtrl="mac-learn-disable" >
<fvRsTnlpathAtt tDn="topology/pod-1/paths-101/pathep-[po2]"/>
</fvTnlEPg>
Step 2 Configure a Layer 2 Interface policy with static binding with XML such as the following example.
The example configures a Layer 2 interface policy for edge-switch ports. To configure a Layer 2 interface policy for
core-switch ports, use corePort instead of edgePort in the l2IfPol MO.
Example:
<l2IfPol name="testL2IfPol" qinq="edgePort"/>
Step 3 Apply the Layer 2 Interface policy to a PC Interface Policy Group with XML such as the following:
Example:
<infraAccBndlGrp name="po2" lagT="link">
<infraRsL2IfPol tnL2IfPolName="testL2IfPol"/>
</infraAccBndlGrp>
Step 4 Configure a Leaf Profile with an Interface Selector with XML such as the following:
Example:
<infraAccPortP name="PC">
<infraHPortS name="allow" type="range">
<infraPortBlk name="block2" fromCard="1" toCard="1" fromPort="10" toPort="11" />
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-po2"/>
</infraHPortS>
</infraAccPortP>
Example
The following example shows the PC configuration in two posts.
This example configures the PC ports as edge ports. To configure them as core ports, use corePort
instead of edgePort in the l2IfPol MO, in Post 1.
XML with Post 1:
<infraInfra dn="uni/infra">
<infraNodeP name="bLeaf3">
<infraLeafS name="leafs3" type="range">
<infraNodeBlk name="nblk3" from_="101" to_="101">
</infraNodeBlk>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-shipping3"/>
</infraNodeP>
<infraAccPortP name="shipping3">
<infraHPortS name="pselc3" type="range">
<infraPortBlk name="blk3" fromCard="1" toCard="1" fromPort="24" toPort="25"/>
<infraAttEntityP name="default">
</infraAttEntityP>
</infraInfra>
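Post 2 of this example (the tenant-side tunnel EPG that references the port channel) is not shown above. A minimal sketch, reusing the tunnel object from Step 1 and assuming it is created under tenant64 as in the CLI examples:
XML with Post 2:
<fvTenant name="tenant64">
<fvTnlEPg name="WEB" qiqL2ProtTunMask="lldp" accEncap="vlan-10" fwdCtrl="mac-learn-disable">
<fvRsTnlpathAtt tDn="topology/pod-1/paths-101/pathep-[po2]"/>
</fvTnlEPg>
</fvTenant>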
Step 1 Create an 802.1Q tunnel using the REST API with XML such as the following example.
The example configures the tunnel with a Layer 2 tunneling protocol, adds the access encapsulation VLAN, and disables
MAC learning in the tunnel.
Example:
<fvTnlEPg name="WEB" qiqL2ProtTunMask=lldp accEncap="vlan-10" fwdCtrl="mac-learn-disable" >
<fvRsTnlpathAtt tDn="topology/pod-1/protpaths-101-102/pathep-[po4]" />
</fvTnlEPg>
Step 2 Configure a Layer 2 interface policy with static binding with XML such as the following example.
The example configures a Layer 2 interface policy for edge-switch ports. To configure a Layer 2 interface policy for
core-switch ports, use the qinq="corePort" port type.
Example:
<l2IfPol name="testL2IfPol" qinq="edgePort"/>
Step 3 Apply the Layer 2 Interface policy to a VPC Interface Policy Group with XML such as the following:
Example:
<infraAccBndlGrp name="po4" lagT="node">
<infraRsL2IfPol tnL2IfPolName="testL2IfPol"/>
</infraAccBndlGrp>
Step 4 Configure a Leaf Profile with an Interface Selector with XML such as the following:
Example:
<infraAccPortP name="VPC">
<infraHPortS name="allow" type="range">
<infraPortBlk name="block2" fromCard="1" toCard="1" fromPort="10" toPort="11" />
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-po4"/>
</infraHPortS>
</infraAccPortP>
Example
The following example shows the VPC configuration in three posts.
This example configures the VPC ports as edge ports. To configure them as core ports, use corePort
instead of edgePort in the l2IfPol MO, in Post 2
XML with Post 1:
<polUni>
<fabricInst>
<fabricProtPol pairT="explicit">
<fabricExplicitGEp name="101-102-vpc1" id="30">
<fabricNodePEp id="101"/>
<fabricNodePEp id="102"/>
</fabricExplicitGEp>
</fabricProtPol>
</fabricInst>
</polUni>
<infraNodeP name="bLeaf2">
<infraLeafS name="leafs" type="range">
<infraNodeBlk name="nblk" from_="102" to_="102">
</infraNodeBlk>
</infraLeafS>
<infraRsAccPortP tDn="uni/infra/accportprof-shipping2"/>
</infraNodeP>
<infraAccPortP name="shipping1">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk" fromCard="1" toCard="1" fromPort="4" toPort="4"/>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-accountingLag1" />
</infraHPortS>
</infraAccPortP>
<infraAccPortP name="shipping2">
<infraHPortS name="pselc" type="range">
<infraPortBlk name="blk" fromCard="1" toCard="1" fromPort="2" toPort="2"/>
<infraRsAccBaseGrp tDn="uni/infra/funcprof/accbundle-accountingLag2" />
</infraHPortS>
</infraAccPortP>
<infraFuncP>
<infraAccBndlGrp name="accountingLag1" lagT='node'>
<infraRsAttEntP tDn="uni/infra/attentp-default"/>
<infraRsLacpPol tnLacpLagPolName='accountingLacp1'/>
<infraRsL2IfPol tnL2IfPolName="testL2IfPol"/>
</infraAccBndlGrp>
<infraAccBndlGrp name="accountingLag2" lagT='node'>
<infraRsAttEntP tDn="uni/infra/attentp-default"/>
<infraRsLacpPol tnLacpLagPolName='accountingLacp1'/>
<infraRsL2IfPol tnL2IfPolName="testL2IfPol"/>
</infraAccBndlGrp>
</infraFuncP>
<lacpLagPol name='accountingLacp1' ctrl='15' descr='accounting' maxLinks='14' minLinks='1'
mode='active' />
<l2IfPol name='testL2IfPol' qinq='edgePort'/>
<infraAttEntityP name="default">
</infraAttEntityP>
</infraInfra>
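Post 3 of this example (the tenant-side tunnel EPG that references the vPC) is likewise not shown above. A minimal sketch, again reusing the tunnel object from Step 1 and assuming tenant64:
XML with Post 3:
<fvTenant name="tenant64">
<fvTnlEPg name="WEB" qiqL2ProtTunMask="lldp" accEncap="vlan-10" fwdCtrl="mac-learn-disable">
<fvRsTnlpathAtt tDn="topology/pod-1/protpaths-101-102/pathep-[po4]"/>
</fvTnlEPg>
</fvTenant>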
Step 1 On the menu bar, choose Fabric > Inventory and click Topology, Pod, or expand Pod and choose a leaf.
Step 2 On the Topology or Pod panel, click the Interface tab.
Step 3 Click the Operation/Configuration toggle-button to display the configuration panel.
Step 4 Click + to add diagrams of leaf switches, choose one or more switches, and click Add Selected.
On the leaf-name panel Interface tab, a diagram of the switch appears automatically, after you click the
Operation/Configuration toggle-button.
Step 1 On the menu bar, click Fabric > External Access Policies.
Step 2 On the Navigation bar, click Policies > Interface > L2 Interface.
Step 3 Right-click L2 Interface, select Create L2 Interface Policy, and perform the following actions:
a) In the Name field, enter a name for the Layer 2 Interface policy.
b) Optional. Add a description of the policy. We recommend that you describe the purpose for the L2 Interface Policy.
c) To create an interface policy that enables Q-in-Q encapsulation, in the QinQ field, click doubleQtagPort.
d) Click Submit.
Step 4 Apply the L2 Interface policy to a Policy Group with the following steps:
a) Click on Fabric > External Access Policies > Interfaces > Leaf Interfaces, and expand Policy Groups.
b) Right-click Leaf Access Port, PC Interface, or VPC Interface and choose one of the following, depending on the
type of interface you are configuring for the tunnel.
• Create Leaf Access Port Policy Group
• Create PC Policy Group
• Create VPC Policy Group
c) In the resulting dialog box, enter the policy group name, choose the L2 Interface policy that you previously created,
and click Submit.
Step 5 Create a Leaf Interface Profile with the following steps:
a) Click on Fabric > External Access Policies > Interfaces > Leaf Interfaces > Profiles.
b) Right-click on Leaf Profiles, choose Create Leaf Interface Profile, and perform the following steps:
• In the Name field, type a name for the Leaf Interface Profile.
All three tasks are performed in the same area of the APIC GUI.
SUMMARY STEPS
1. In the menu bar, click Tenants > tenant-name.
2. In the Navigation pane, expand Application Profiles > application-profile-name > Application
EPGs > application-EPG-name.
3. To deploy a static EPG on an interface, PC, or VPC that has been enabled for Q-in-Q mode, perform the
following steps:
4. To statically link an EPG with a node enabled with Q-in-Q mode, perform the following steps:
5. To associate an EPG with a static endpoint, perform the following steps:
DETAILED STEPS
Step 3 To deploy a static EPG on an interface, PC, or VPC that has been enabled for Q-in-Q mode, perform the following steps:
a) Under the application EPG, right-click Static Ports and choose Deploy Static EPG on PC, VPC, or Interface.
b) Choose the path type, the node, and the path to the Q-in-Q enabled interface.
c) In the Port Encap (or Secondary VLAN for Micro-Seg) field, choose QinQ and enter the outer and inner VLAN
tags for traffic mapped to the EPG.
d) Click Submit.
Step 4 To statically link an EPG with a node enabled with Q-in-Q mode, perform the following steps:
a) Under the application EPG, right-click Static Leafs and choose Statically Link With Node.
b) In the Node field, choose the Q-in-Q-enabled switches from the list.
c) In the Encap field, choose QinQ and enter the outer and inner VLAN tags for the EPG.
d) Click Submit.
Step 5 To associate an EPG with a static endpoint, perform the following steps:
a) Under the application EPG, right-click Static EndPoints and choose Create Static EndPoint.
b) Enter the MAC address of the interface.
c) Choose the path type, node, and path to the Q-in-Q encapsulation-enabled interface.
d) Optional. Add IP addresses for the endpoint.
e) In the Encap field, choose QinQ and enter the outer and inner VLAN tags.
f) Click Submit.
SUMMARY STEPS
1. configure
2. leaf number
3. interface ethernet slot/port
4. switchport mode dot1q-tunnel doubleQtagPort
5. switchport trunk qinq outer-vlan vlan-number inner-vlan vlan-number tenant tenant-name application
application-name epg epg-name
DETAILED STEPS
Step 4 switchport mode dot1q-tunnel doubleQtagPort
Enables an interface for Q-in-Q encapsulation.
Example:
apic1(config-leaf-if)# switchport mode dot1q-tunnel doubleQtagPort
Step 5 switchport trunk qinq outer-vlan vlan-number inner-vlan vlan-number tenant tenant-name application application-name epg epg-name
Associates the interface with an EPG.
Example:
apic1(config-leaf-if)# switchport trunk qinq outer-vlan 202 inner-vlan 203 tenant tenant64
application AP64 epg EPG64
Example
The following example enables Q-in-Q encapsulation (with outer-VLAN ID 202 and inner-VLAN
ID 203) on the leaf interface 101/1/25, and associates the interface with EPG64.
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/25
apic1(config-leaf-if)# switchport mode dot1q-tunnel doubleQtagPort
apic1(config-leaf-if)# switchport trunk qinq outer-vlan 202 inner-vlan 203 tenant tenant64
application AP64 epg EPG64
SUMMARY STEPS
1. Enable an interface for Q-in-Q encapsulation and associate the interface with an EPG, with XML such
as the following example:
DETAILED STEPS
Enable an interface for Q-in-Q encapsulation and associate the interface with an EPG, with XML such as the following
example:
Example:
<polUni>
<fvTenant dn="uni/tn-tenant64" name="tenant64">
<fvCtx name="VRF64"/>
<fvBD name="BD64_1">
<fvRsCtx tnFvCtxName="VRF64"/>
<fvSubnet ip="20.0.1.2/24"/>
</fvBD>
<fvAp name="AP64">
<fvAEPg name="WEB7">
<fvRsBd tnFvBDName="BD64_1"/>
<fvRsQinqPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/25]" encap="qinq-202-203"/>
</fvAEPg>
</fvAp>
</fvTenant>
</polUni>
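As with the other REST API examples in this chapter, this payload is sent in the body of a POST message to the APIC. The following is a minimal sketch of such a request, assuming a placeholder address of apic-ip-address and an already authenticated session; the exact URL form can vary by deployment:
Example:
POST https://apic-ip-address/api/mo/uni.xml
(body: the <polUni> document shown above)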
The 40Gb to 10Gb dynamic breakout feature is supported on the access facing ports of the following switches:
• N9K-C9332PQ
• N9K-C93180LC-EX
• N9K-C9336C-FX
The 100Gb to 25Gb breakout feature is supported on the access facing ports of the following switches:
• N9K-C93180LC-EX
• N9K-C9336C-FX2
• N9K-C93180YC-FX
• In general, breakouts and port profiles (ports changed from uplink to downlink) are not supported on the
same port.
However, from Cisco APIC, Release 3.2, dynamic breakouts (both 100Gb and 40Gb) are supported on
profiled QSFP ports on the N9K-C93180YC-FX switch.
• Fast Link Failover policies are not supported on the same port with the dynamic breakout feature.
• Breakout subports can be used in the same way other port types in the policy model are used.
• When a port is enabled for dynamic breakout, other policies (except monitoring policies) on the parent
port are no longer valid.
• When a port is enabled for dynamic breakout, other EPG deployments on the parent port are no longer
valid.
• A breakout sub-port cannot be further broken out using a breakout policy group.
Note You can also configure ports for breakout in the APIC GUI by navigating to Fabric > Inventory, and clicking
Topology or Pod, or expanding Pod and clicking Leaf. Then, enable configuration and click the Interface
tab.
Procedure
Step 1 On the menu bar, choose Fabric > External Access Policies.
Step 2 In the Navigation pane, expand Interfaces and Leaf Interfaces and Profiles.
Step 3 Right-click Profiles and choose Create Leaf Interface Profile.
Step 4 Type the name and optional description, and click the + symbol in the Interface Selectors area.
Step 5 Perform the following:
a) Type a name (and optional description) for the Access Port Selector.
b) In the Interface IDs field, type the slot and port for the breakout port.
c) In the Interface Policy Group field, click the down arrow and choose Create Leaf Breakout Port Group.
d) Type the name (and optional description) for the Leaf Breakout Port Group.
e) In the Breakout Map field, choose 10g-4x or 25g-4x.
For switches supporting breakout, see Configuration of Dynamic Breakout Ports, on page 205.
f) Click Submit.
Step 6 To assign a Breakout Port to an EPG, perform the following steps:
On the menu bar, choose Tenants > Application Profiles > Application EPG. Right-click on Application EPG to
open the Create Application EPG dialog box, and perform the following steps:
a) Select the Statically Link with Leaves/Paths check box to gain access to the Leaves/Paths tab in the dialog box.
b) Complete one of the following sets of steps:
If you want to deploy the EPG on a node, then:
1. Expand the Leaves area.
2. From the Node drop-down list, choose a node.
3. In the Encap field, enter the appropriate VLAN.
4. (Optional) From the Deployment Immediacy drop-down list, accept the default On Demand or choose Immediate.
5. (Optional) From the Mode drop-down list, accept the default Trunk or choose another mode.
Step 7 To associate the Leaf Interface Profile to the leaf switch, perform the following steps:
a) Expand Switches and Leaf Switches, and Profiles.
b) Right-click Profiles and select Create Leaf Profiles.
c) Type the name and optional description of the Leaf Profile.
d) Click the + symbol on the Leaf Selectors area.
e) Type the leaf selector name and an optional description.
f) Click the down arrow on the Blocks field and choose the switch to be associated with the breakout leaf interface
profile.
g) Click the down arrow on the Policy Group field and choose Create Access Switch Policy Group.
h) Type a name and optional description for the Access Switch Policy Group.
• An APIC fabric administrator account is available that will enable creating the necessary fabric
infrastructure configurations.
• The target leaf switches are registered in the ACI fabric and available.
• The 40GE or 100GE leaf switch ports are connected with Cisco breakout cables to the downlink ports.
SUMMARY STEPS
1. configure
2. leaf ID
3. interface ethernet slot/port
4. breakout 10g-4x | 25g-4x
5. show run
6. tenant tenant-name
7. vrf context vrf-name
8. bridge-domain bridge-domain-name
9. vrf member vrf-name
10. application application-profile-name
11. epg epg-name
12. bridge-domain member bridge-domain-name
13. leaf leaf-name
14. speed interface-speed
15. show run
DETAILED STEPS
Step 2 leaf ID
Selects the leaf switch where the breakout port will be located and enters leaf configuration mode.
Example:
apic1(config)# leaf 101
Step 6 tenant tenant-name
Selects or creates the tenant that will consume the breakout ports and enters tenant configuration mode.
Example:
apic1(config)# tenant tenant64
Step 7 vrf context vrf-name
Creates or identifies the Virtual Routing and Forwarding (VRF) instance associated with the tenant and exits the configuration mode.
Example:
apic1(config-tenant)# vrf context vrf64
apic1(config-tenant-vrf)# exit
Step 9 vrf member vrf-name
Associates the VRF with the bridge domain and exits the configuration mode.
Example:
apic1(config-tenant-bd)# vrf member vrf64
apic1(config-tenant-bd)# exit
Step 10 application application-profile-name
Creates or identifies the application profile associated with the tenant and the EPG.
Example:
apic1(config-tenant)# application app64
Step 11 epg epg-name
Creates or identifies the EPG and enters EPG configuration mode.
Example:
apic1(config-tenant)# epg epg64
Step 12 bridge-domain member bridge-domain-name
Associates the EPG with the bridge domain and returns to global configuration mode.
Example:
apic1(config-tenant-app-epg)# bridge-domain member bd64
apic1(config-tenant-app-epg)# exit
apic1(config-tenant-app)# exit
apic1(config-tenant)# exit
Configure the sub ports as desired; for example, use the speed command in leaf interface mode to configure a sub port.
Step 14 speed interface-speed
Enters leaf interface mode, sets the speed of an interface, and exits the configuration mode.
Step 15 show run
After you have configured the sub ports, entering this command in leaf configuration mode displays the sub port details.
Example:
apic1(config-leaf)# show run
The port on leaf 101 at interface 1/16 is confirmed enabled for breakout with sub ports 1/16/1, 1/16/2, 1/16/3, and 1/16/4.
Example
This example configures the port for breakout:
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/16
apic1(config-leaf-if)# breakout 10g-4x
This example sets the speed for the breakout sub ports to 10G.
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/16/1
apic1(config-leaf-if)# speed 10G
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface ethernet 1/16/2
apic1(config-leaf-if)# speed 10G
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface ethernet 1/16/3
apic1(config-leaf-if)# speed 10G
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface ethernet 1/16/4
apic1(config-leaf-if)# speed 10G
apic1(config-leaf-if)# exit
This example shows the four sub ports connected to leaf 101, interface 1/16.
apic1(config-leaf)# show run
# Command: show running-config leaf 101
# Time: Fri Dec 2 00:51:08 2016
leaf 101
interface ethernet 1/16/1
speed 10G
negotiate auto
link debounce time 100
exit
interface ethernet 1/16/2
speed 10G
negotiate auto
link debounce time 100
exit
interface ethernet 1/16/3
speed 10G
negotiate auto
link debounce time 100
exit
interface ethernet 1/16/4
speed 10G
negotiate auto
link debounce time 100
exit
interface ethernet 1/16
breakout 10g-4x
exit
interface vfc 1/16
Step 1 Configure a breakout policy group for the breakout port with JSON, such as the following example:
Example:
In this example, we create an interface profile 'brkout44' with port 44 as the only port under its port selector. The port
selector points to a breakout policy group 'new-brkoutPol'.
{
"infraAccPortP": {
"attributes": {
"dn":"uni/infra/accportprof-brkout44",
"name":"brkout44",
"rn":"accportprof-brkout44",
"status":"created,modified"
},
"children":[ {
"infraHPortS": {
"attributes": {
"dn":"uni/infra/accportprof-brkout44/hports-new-brekoutPol-typ-range",
"name":"new-brkoutPol",
"rn":"hports-new-brkoutPol-typ-range",
"status":"created,modified"
},
"children":[ {
"infraPortBlk": {
"attributes": {
"dn":"uni/infra/accportprof-brkout44/hports-new-brkoutPol-typ-range/portblk-block2",
"fromPort":"44",
"toPort":"44",
"name":"block2",
"rn":"portblk-block2",
"status":"created,modified"
},
"children":[] }
}, {
"infraRsAccBaseGrp": {
"attributes":{
"tDn":"uni/infra/funcprof/brkoutportgrp-new-brkoutPol",
"status":"created,modified"
},
"children":[]
}
}
]
}
}
]
}
}
Step 2 Create a new switch profile and associate it with the port profile, previously created, with JSON such as the following
example:
Example:
In this example, we create a new switch profile 'leaf1017' with switch 1017 as the only node. We associate this new switch
profile with the port profile 'brkout44' created above. After this, the port 44 on switch 1017 will have 4 sub ports.
{
"infraNodeP": {
"attributes": {
"dn":"uni/infra/nprof-leaf1017",
"name":"leaf1017","rn":"nprof-leaf1017",
"status":"created,modified"
},
"children": [ {
"infraLeafS": {
"attributes": {
"dn":"uni/infra/nprof-leaf1017/leaves-1017-typ-range",
"type":"range",
"name":"1017",
"rn":"leaves-1017-typ-range",
"status":"created"
},
"children": [ {
"infraNodeBlk": {
"attributes": {
"dn":"uni/infra/nprof-leaf1017/leaves-1017-typ-range/nodeblk-102bf7dc60e63f7e",
"from_":"1017","to_":"1017",
"name":"102bf7dc60e63f7e",
"rn":"nodeblk-102bf7dc60e63f7e",
"status":"created"
},
"children": [] }
}
]
}
}, {
"infraRsAccPortP": {
"attributes": {
"tDn":"uni/infra/accportprof-brkout44",
"status":"created,modified"
},
"children": [] }
}
]
}
}
"dn":"uni/infra/accportprof-brkout44/hports-sel1-typ-range/subportblk-block2",
"fromPort":"44",
"toPort":"44",
"fromSubPort":"3",
"toSubPort":"3",
"name":"block2",
"rn":"subportblk-block2",
"status":"created"
},
"children":[]}
},
{
"infraRsAccBaseGrp": {
"attributes": {
"tDn":"uni/infra/funcprof/accportgrp-p1",
"status":"created,modified"
},
"children":[]}
}
]
}
}
]
}
}
Proxy ARP within the Cisco ACI fabric is different from the traditional proxy ARP. As an example of the
communication process, when proxy ARP is enabled on an EPG, if an endpoint A sends an ARP request for
endpoint B and if endpoint B is learned within the fabric, then endpoint A will receive a proxy ARP response
from the bridge domain (BD) MAC. If endpoint A sends an ARP request for endpoint B, and if endpoint B
is not learned within the ACI fabric already, then the fabric will send a proxy ARP request within the BD.
Endpoint B will respond to this proxy ARP request back to the fabric. At this point, the fabric does not send
a proxy ARP response to endpoint A, but endpoint B is learned within the fabric. If endpoint A sends another
ARP request to endpoint B, then the fabric will send a proxy ARP response from the BD MAC.
The following example describes the proxy ARP resolution steps for communication between clients VM1
and VM2:
1. VM1 to VM2 communication is desired.
3. The ACI fabric floods the proxy ARP request within the bridge domain (BD).
Figure 33: ACI Fabric Floods the Proxy ARP Request within the BD
5. VM2 is learned.
Figure 35: VM2 is Learned
Procedure
Step 3 application application-profile-name
Creates an application profile and enters the application mode.
Example:
apic1(config-tenant)# application Tenant1-App
Step 4 epg application-profile-EPG-name
Creates an EPG and enters the EPG mode.
Example:
apic1(config-tenant-app)# epg Tenant1-epg1
Examples
This example shows how to configure proxy ARP.
apic1# conf t
apic1(config)# tenant Tenant1
apic1(config-tenant)# application Tenant1-App
apic1(config-tenant-app)# epg Tenant1-epg1
apic1(config-tenant-app-epg)# proxy-arp enable
apic1(config-tenant-app-epg)#
apic1(config-tenant)#
<polUni>
<fvTenant name="Tenant1" status="">
<fvCtx name="EngNet"/>
<!-- bridge domain -->
<fvBD name="BD1">
<fvRsCtx tnFvCtxName="EngNet" />
<fvSubnet ip="1.1.1.1/24"/>
</fvBD>
<fvAp name="Tenant1_app">
<fvAEPg name="Tenant1_epg" pcEnfPref-"enforced" fwdCtrl="proxy-arp">
<fvRsBd tnFvBDName="BD1" />
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-dom9"/>
</fvAEPg>
</fvAp>
</fvTenant>
</polUni>
• For port channels and virtual port channels, the storm control values (packets per second or percentage)
apply to all individual members of the port channel. Do not configure storm control on interfaces that
are members of a port channel.
Note On switch hardware starting with the APIC 1.3(x) and switch 11.3(x) release, for
port channel configurations, the traffic suppression on the aggregated port may
be up to two times the configured value. The new hardware ports are internally
subdivided into these two groups: slice-0 and slice-1. To check the slicing map,
use the vsh_lc command show platform internal hal l2 port gpd and look
for slice 0 or slice 1 under the Sl column. If port-channel members fall on
both slice-0 and slice-1, allowed storm control traffic may become twice the
configured value because the formula is calculated based on each slice.
• When configuring by percentage of available bandwidth, a value of 100 means no traffic storm control
and a value of 0.01 suppresses all traffic.
• Due to hardware limitations and the method by which packets of different sizes are counted, the level
percentage is an approximation. Depending on the sizes of the frames that make up the incoming traffic,
the actual enforced level might differ from the configured level by several percentage points.
Packets-per-second (PPS) values are converted to percentage based on 256 bytes; see the worked example after this list.
• Maximum burst is the maximum accumulation of rate that is allowed when no traffic passes. When traffic
starts, all the traffic up to the accumulated rate is allowed in the first interval. In subsequent intervals,
traffic is allowed only up to the configured rate. The maximum supported is 65535 KB. If the configured
rate exceeds this value, it is capped at this value for both PPS and percentage.
• The maximum burst that can be accumulated is 512 MB.
• On an egress leaf switch in optimized multicast flooding (OMF) mode, traffic storm control will not be
applied.
• On an egress leaf switch in non-OMF mode, traffic storm control will be applied.
• On a leaf switch for FEX, traffic storm control is not available on host-facing interfaces.
• Traffic storm control unicast/multicast differentiation is not supported on Cisco Nexus C93128TX,
C9396PX, C9396TX, C93120TX, C9332PQ, C9372PX, C9372TX, C9372PX-E, or C9372TX-E switches.
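As a worked illustration of the 256-byte conversion rule mentioned in the guidelines above (the numbers here are illustrative only): a configured rate of 100,000 PPS corresponds to approximately 100,000 × 256 bytes × 8 bits = 204.8 Mbps, which on a 10-Gbps port is roughly 2 percent of the available bandwidth.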
Step 5 Right-click Storm Control and choose Create Storm Control Interface Policy.
Step 6 In the Create Storm Control Interface Policy dialog box, enter a name for the policy in the Name field.
Step 7 In the Configure Storm Control field, click the radio button for either All Types or Unicast, Broadcast, Multicast.
Note Selecting the Unicast, Broadcast, Multicast radio button allows you to configure Storm Control on each
traffic type separately.
Step 8 In the Specify Policy In field, click the radio button for either Percentage or Packets Per Second.
Step 9 If you chose Percentage, perform the following steps:
a) In the Rate field, enter a traffic rate percentage.
Enter a number between 0 and 100 that specifies a percentage of the total available bandwidth of the port. When
the ingress traffic is either equal to or greater than this level during a one second interval, traffic storm control drops
traffic for the remainder of the interval. A value of 100 means no traffic storm control. A value of 0 suppresses all
traffic.
b) In the Max Burst Rate field, enter a burst traffic rate percentage.
Enter a number between 0 and 100 that specifies a percentage of the total available bandwidth of the port. When
the ingress traffic is equal to or greater than this level, traffic storm control begins to drop traffic.
Note The Max Burst Rate should be greater than or equal to the value of Rate.
Step 10 If you chose Packets Per Second, perform the following steps:
a) In the Rate field, enter a traffic rate in packets per second.
During this interval, the traffic level, expressed as packets flowing per second through the port, is compared with
the traffic storm control level that you configured. When the ingress traffic is equal to or greater than the traffic
storm control level that is configured on the port, traffic storm control drops the traffic until the interval ends.
b) In the Max Burst Rate field, enter a burst traffic rate in packets per second.
During this interval, the traffic level, expressed as packets flowing per second through the port, is compared with
the burst traffic storm control level that you configured. When the ingress traffic is equal to or greater than the
traffic storm control level that is configured on the port, traffic storm control drops the traffic until the interval
ends.
c) Click Submit.
DETAILED STEPS
sd-tb2-ifc1(config-leaf-if)# storm-control
broadcast pps 5000 burst-rate 6000
sd-tb2-ifc1(config-leaf-if)# storm-control multicast
pps 7000 burst-rate 7000
sd-tb2-ifc1(config-leaf-if)# storm-control unicast
pps 8000 burst-rate 10000
sd-tb2-ifc1(config-leaf-if)#
In the body of the POST message, include the following JSON payload structure to specify the policy by
percentage of available bandwidth:
{"stormctrlIfPol":
{"attributes":
{"dn":"uni/infra/stormctrlifp-MyStormPolicy",
"name":"MyStormPolicy",
"rate":"75",
"burstRate":"85",
"rn":"stormctrlifp-MyStormPolicy",
"status":"created"
},
"children":[]
}
}
In the body of the POST message, include the following JSON payload structure to specify the policy by
packets per second:
{"stormctrlIfPol":
{"attributes":
{"dn":"uni/infra/stormctrlifp-MyStormPolicy",
"name":"MyStormPolicy",
"ratePps":"12000",
"burstPps":"15000",
"rn":"stormctrlifp-MyStormPolicy",
"status":"created"
},
"children":[]
}
}
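Either payload is sent in the body of a REST POST message to the APIC. Because the payload carries the full dn of the policy, a request of roughly the following form would create it; the APIC address is a placeholder, and the exact URL form can vary by deployment:
Example:
POST https://apic-ip-address/api/mo/uni/infra/stormctrlifp-MyStormPolicy.json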
About MACsec
MACsec is an IEEE 802.1AE standards based Layer 2 hop-by-hop encryption that provides data confidentiality
and integrity for media access independent protocols.
MACsec provides MAC-layer encryption over wired networks by using out-of-band methods for encryption
keying. The MACsec Key Agreement (MKA) Protocol provides the required session keys and manages the
required encryption keys.
The 802.1AE encryption with MKA is supported on all types of links, that is, host facing links (links between
network access devices and endpoint devices such as a PC or IP phone), or links connected to other switches
or routers.
MACsec encrypts all of the data in an Ethernet packet except for the source and destination MAC addresses.
The user also has the option to skip encryption for up to 50 bytes after the source and destination MAC addresses.
To provide MACsec services over the WAN or Metro Ethernet, service providers offer Layer 2 transparent
services such as E-Line or E-LAN using various transport layer protocols such as Ethernet over Multiprotocol
Label Switching (EoMPLS) and L2TPv3.
The packet body in an EAP-over-LAN (EAPOL) Protocol Data Unit (PDU) is referred to as a MACsec Key
Agreement PDU (MKPDU). When no MKPDU is received from a participant after 3 heartbeats (each heartbeat
is 2 seconds long), peers are deleted from the live peer list. For example, if a client disconnects, the participant
on the switch continues to operate MKA until 3 heartbeats have elapsed after the last MKPDU is received
from the client.
A node can have multiple policies deployed for more than one fabric link. When this happens, the per fabric
interface keychain and policy are given preference on the affected interface. The auto generated keychain and
associated MACsec policy are then given the least preference.
APIC MACsec supports two security modes. The must secure mode allows only encrypted traffic on the
link, while the should secure mode allows both clear and encrypted traffic on the link. Before deploying MACsec
in must secure mode, the keychain must be deployed on the affected links or the links will go down. For
example, a port can turn on MACsec in must secure mode before its peer has received its keychain, resulting
in the link going down. To address this issue, the recommendation is to deploy MACsec in should secure
mode and, once all the links are up, change the security mode to must secure.
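For example, using the NX-OS style CLI commands shown later in this chapter, a fabric MACsec parameters policy could first be deployed in should secure mode and later, after all affected links are verified up, be changed to must secure. The policy name below is a placeholder, and the should-secure keyword is assumed by analogy with the must-secure example shown later in this chapter:
Example:
apic1(config)# template macsec fabric security-policy fabmacsecpol1
apic1(config-macsec-param)# security-mode should-secure
apic1(config-macsec-param)# exit
After verifying that all affected links are up:
apic1(config)# template macsec fabric security-policy fabmacsecpol1
apic1(config-macsec-param)# security-mode must-secure
apic1(config-macsec-param)# exit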
Note Any MACsec interface configuration change will result in packet drops.
MACsec policy definition consists of configuration specific to keychain definition and configuration related
to feature functionality. The keychain definition and feature functionality definitions are placed in separate
policies. Enabling MACsec per Pod or per interface involves deploying a combination of a keychain policy
and MACsec functionality policy.
Note When using internally generated keychains, the user does not need to specify a keychain.
• The following procedure should be followed to disable/remove a MACsec policy deployed in must-secure
mode:
• Change the MACsec policy to should-secure.
• Verify that the affected interfaces are using should-secure mode.
• Disable/remove the MACsec policy.
Keychain Definition
• There should be one key in the keychain with a start time of now. If must-secure is deployed with a
keychain that doesn’t have a key that is immediately active then traffic will be blocked on that link until
the key becomes current and a MACsec session is started. If should-secure mode is being used then
traffic will be unencrypted until the key becomes current and a MACsec session has started.
• There should be one key in the keychain with an end time of infinite. When a keychain expires, then
traffic is blocked on affected interfaces which are configured for must-secure mode. Interfaces configured
for should-secure mode transmit unencrypted traffic.
• There should be overlaps in the end time and start time of keys that are used sequentially to ensure the
MACsec session stays up when there is a transition between keys (see the sketch after this list).
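The following sketch illustrates these keychain guidelines using the XML objects that appear in the fabric policy examples later in this chapter. One key is active immediately, the second key's start time overlaps the first key's end time, and the second key never expires. The names, times, and pre-shared keys are placeholders:
Example:
<macsecKeyChainPol name="fabricKC1">
  <macsecKeyPol name="Key1" keyName="A1A2A3A0"
    preSharedKey="0102030405060708090A0B0C0D0E0F100102030405060708090A0B0C0D0E0F10"
    startTime="now" endTime="2030-01-01T00:00:00"/>
  <macsecKeyPol name="Key2" keyName="A1A2A3A1"
    preSharedKey="1112131415161718191A1B1C1D1E1F201112131415161718191A1B1C1D1E1F20"
    startTime="2029-12-01T00:00:00" endTime="infinite"/>
</macsecKeyChainPol>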
Step 2 To apply the MACsec Fabric Interface Policy to a Fabric Leaf or Spine Port Policy Group, in the Navigation pane,
click Interfaces > Leaf/Spine Interfaces > Policy Groups > Spine/Leaf Port Policy Group_name. In the Work pane,
select the MACsec Fabric Interface Policy just created.
Step 3 To apply the MACsec Fabric Interface Policy to a Pod Policy Group, in the Navigation pane, click Pods > Policy
Groups > Pod Policy Group_name. In the Work pane, select the MACsec Fabric Interface Policy just created.
Step 2 To apply the MACsec Access Interface Policy to a Fabric Leaf or Spine Port Policy Group, in the Navigation pane,
click Interfaces > Leaf/Spine Interfaces > Policy Groups > Spine/Leaf Policy Group_name. In the Work pane, select
the MACsec Fabric Interface Policy just created.
Step 2 To apply the MACsec Access Parameters Policy to a Leaf or Spine Port Policy Group, in the Navigation pane, click
Interface Policies > Policy Groups > Spine/Leaf Policy Group_name. In the Work pane, select the MACsec Access
Interface Policy just created.
d) In the Start Time field, select a date for the key to become valid.
e) In the End Time field, select a date for the key to expire. Click Ok and Submit.
Note When defining multiple keys in a keychain, the keys must be defined with overlapping times in order to
assure a smooth transition from the old key to the new key. The endTime of the old key should overlap
with the startTime of the new key.
For configuring the Keychain policy through Access Policies, on the menu bar click Fabric > External Access Policies.
In the Navigation pane, click on Policies > Interface > MACsec > MACsec KeyChain Policies and right click on to
open Create MACsec Keychain Policy and perform the steps above.
Example:
apic1# configure
apic1(config)# template macsec access keychain acckeychainpol1
apic1(config-macsec-keychain)# description 'macsec key chain kc1'
apic1(config-macsec-keychain)# key 12ab
apic1(config-macsec-keychain-key)# life-time start 2017-09-19T12:03:15 end 2017-12-19T12:03:15
apic1(config-macsec-keychain-key)# psk-string 123456789a223456789a323456789abc
apic1(config-macsec-keychain-key)# exit
apic1(config-macsec-keychain)# key ab12
apic1(config-macsec-keychain-key)# life-time start now end infinite
apic1(config-macsec-keychain-key)# psk-string
Enter PSK string: 123456789a223456789a323456789abc
apic1(config-macsec-keychain-key)# exit
apic1(config-macsec-keychain)# exit
apic1(config)#
acckeychainpol1
apic1(config-macsec-if-policy)# exit
apic1(config)#
Step 4 Associate MACsec interface policy to access interfaces on leaf (or spine):
Example:
apic1# configure
apic1(config)# template macsec access interface-policy accmacsecifpol1
apic1(config-macsec-if-policy)# inherit macsec security-policy accmacsecpol1 keychain
acckeychainpol1
apic1(config-macsec-if-policy)# exit
apic1(config)#
Example:
apic1# configure
apic1(config)# template macsec fabric security-policy fabmacsecpol1
apic1(config-macsec-param)# cipher-suite gcm-aes-xpn-128
apic1(config-macsec-param)# description 'description for mac sec parameters'
apic1(config-macsec-param)# window-size 1
apic1(config-macsec-param)# sak-expiry-time 100
apic1(config-macsec-param)# security-mode must-secure
apic1(config-macsec-param)# exit
apic1(config)# template macsec fabric keychain fabkeychainpol1
apic1(config-macsec-keychain)# description 'macsec key chain kc1'
apic1(config-macsec-keychain)# key 12ab
apic1(config-macsec-keychain-key)# psk-string 123456789a223456789a323456789abc
apic1(config-macsec-keychain-key)# life-time start 2016-09-19T12:03:15 end 2017-09-19T12:03:15
apic1(config-macsec-keychain-key)# exit
apic1(config-macsec-keychain)# key cd78
apic1(config-macsec-keychain-key)# psk-string
Enter PSK string: 123456789a223456789a323456789abc
apic1(config-macsec-keychain-key)# life-time start now end infinite
apic1(config-macsec-keychain-key)# exit
apic1(config-macsec-keychain)# exit
apic1(config)#
Step 7 Associate MACsec interface policy to fabric interfaces on leaf (or spine):
Example:
apic1# configure
apic1(config)# leaf 101
apic1(config-leaf)# fabric-interface ethernet 1/52-53
apic1(config-leaf-if)# inherit macsec interface-policy fabmacsecifpol2
apic1(config-leaf-if)# exit
apic1(config-leaf)#
<fabricFuncP>
<fabricPodPGrp name = "PodPG1">
<fabricRsMacsecPol tnMacsecFabIfPolName="fabricPodPol1"/>
</fabricPodPGrp>
</fabricFuncP>
<fabricPodP name="PodP1">
<fabricPodS name="pod1" type="ALL">
<fabricRsPodPGrp tDn="uni/fabric/funcprof/podpgrp-PodPG1"/>
</fabricPodS>
</fabricPodP>
</fabricInst>
</macsecPolCont>
<macsecIfPol name="accessPol1">
<macsecRsToParamPol tDn="uni/infra/macsecpcont/paramp-accessParam1"/>
<macsecRsToKeyChainPol tDn="uni/infra/macsecpcont/keychainp-accessKC1"/>
</macsecIfPol>
<infraFuncP>
<infraAccPortGrp name = "LeTestPGrp">
<infraRsMacsecIfPol tnMacsecIfPolName="accessPol1"/>
</infraAccPortGrp>
</infraFuncP>
<infraHPathS name="leaf">
<infraRsHPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/4]" />
<infraRsPathToAccBaseGrp tDn="uni/infra/funcprof/accportgrp-LeTestPGrp" />
</infraHPathS>
</infraInfra>
Applying a MACsec fabric policy on eth1/49 of leaf-101 and eth 5/1 of spine-102:
<fabricInst>
<macsecFabPolCont>
<macsecFabParamPol name="fabricParam1" secPolicy="should-secure" replayWindow="120"
>
</macsecFabParamPol>
<macsecKeyChainPol name="fabricKC1">
<macsecKeyPol name="Key1"
preSharedKey="0102030405060708090A0B0C0D0E0F100102030405060708090A0B0C0D0E0F10"
keyName="A1A2A3A0" startTime="now" endTime="infinite"/>
</macsecKeyChainPol>
</macsecFabPolCont>
<fabricFuncP>
<fabricLePortPGrp name = "LeTestPGrp">
<fabricRsMacsecFabIfPol tnMacsecFabIfPolName="fabricPol1"/>
</fabricLePortPGrp>
<fabricLFPathS name="leaf">
<fabricRsLFPathAtt tDn="topology/pod-1/paths-101/pathep-[eth1/49]" />
<fabricRsPathToLePortPGrp tDn="uni/fabric/funcprof/leportgrp-LeTestPGrp" />
</fabricLFPathS>
<fabricSpPortP name="spine_profile">
<fabricSFPortS name="spineIf" type="range">
<fabricPortBlk name="spBlk" fromCard="5" fromPort="1" toCard="5" toPort="1" />
<fabricRsSpPortPGrp tDn="uni/fabric/funcprof/spportgrp-SpTestPGrp" />
</fabricSFPortS>
</fabricSpPortP>
</fabricSpineS>
</fabricSpineP>
</fabricInst>