ACI Fundamentals Lab Guide 2.2
March 10th 2016
Ivan Andjelkovic
Systems Engineer
[email protected]
Foreword
Welcome to the Application Centric Infrastructure (ACI) Fundamentals Lab.
This lab is created for Field Engineers, System Engineers, Architects and others who would like
to get familiar with fundamental ACI concepts and techniques using a hands-on approach.
For scaling purposes, some of the ACI setup steps are done ahead of time. All of those steps are
documented in the additional ACI Lab Setup and Connectivity document.
Combined with this ACI Fundamentals Lab Guide, these two documents contain information
needed to set up a new ACI system.
Students who would like to go through the ACI setup steps can do so on the simulator by following the Optional ACI Simulator Lab document.
Special thanks to the Business Technology Architects (BTA) whose original lab was the
foundation for this lab.
Foreword
Revision Notes
Introduction
Lab Format and Objectives
Lab Topology and Lab Preparation
Lab Access
Module 1: Student Pre-Lab setup verification
  Task 1: APIC GUI Overview
  Task 2: Verify APICs, ACI Fabric and OOB Management
  Task 3: Verify vPC connectivity to the UCS system
Module 2: Using API Inspector and Postman
  Task 1: Explore API Inspector and Postman
  Task 2: Create and Delete ACI User using GUI
  Task 3: Capture JSON script using the API Inspector
  Task 4: Create and Delete ACI User using Postman
Module 3: Building ACI Forwarding Constructs
  Task 1: Create a Tenant
  Task 2: Create a VRF and a Bridge Domain (BD) for Web Servers
  Task 3: Create a Bridge Domain (BD) for App Servers
Module 4: Configuring Application Profile
  Task 1: Create Application Network Profile (ANP)
Module 5: Configuring VMM Integration
  Task 1: Configure VMM Integration
  Task 2: Add VMs to the port group from the VMware vCenter
  Task 3: Verify and Explore ACI Contracts
Module 6: Configuring Ext. Layer 2 Connectivity
  Task 1: Create Access Policy to Nexus Switch
  Task 2: Enable and Verify App Traffic with the External L2
Module 7: Connecting Ext. Layer 3 Connectivity
  Task 1: Create BGP Route Reflector
  Task 2: Create Layer 3 External Connectivity
  Task 3: Enable App Traffic with the External L3
  Task 4: Verify External L3 Connectivity
Appendix A: Troubleshooting
  T1. Host and vCenter credentials issue
Revision Notes
The initial version 1.2 of the lab guide was based on ACI version 1.0.
Version 2.0 of the lab guide was based on the next major ACI version, 1.1.
This version 2.2 of the lab guide is based on the latest major ACI release, 1.2.
The screenshots are updated to reflect changes in the new ACI version.
The new comments in this revision are mostly related to the differences between the ACI versions. All new comments appear in blue font throughout the document.
Details about the new software and hardware features in the 1.2 release can be found in the release notes.
Introduction
“Cisco Application Centric Infrastructure (ACI) is a data center fabric that enables you to
integrate virtual and physical workloads in a highly programmable multi-hypervisor
environment that is designed for any multi-service or cloud data center.
The ACI fabric consists of discrete components that operate as routers and switches (Leaves and
Spines) but are provisioned and monitored as a single entity. The operation is like a distributed
switch and router configuration that provides advanced traffic optimization, security, and
telemetry functions, stitching together virtual and physical workloads.
The controller, called the Application Policy Infrastructure Controller (APIC), is the central
point of management of the fabric. That is the device that distributes Application Network Profile
(ANP) and other policies to the devices that are part of the fabric.”
Quote from “The Policy Driven Data Center with ACI” by Lucien Avramov, et al.
At the top level, the ACI object model is built on a group of one or more tenants, allowing the
network infrastructure administration and data flows to be segregated. Tenants can be used for
customers, business units, or groups, depending on organizational needs. For instance, an
enterprise may use one tenant for the entire organization, and a cloud provider may have
customers that use one or more tenants to represent their organizations.
Tenants can be further divided into contexts, which directly relate to Virtual Routing and
Forwarding (VRF) instances, or separate IP spaces. Each tenant can have one or more contexts,
depending on the business needs of that tenant. Contexts provide a way to further separate the
organizational and forwarding requirements for a given tenant. Because contexts use separate
forwarding instances, IP addressing can be duplicated in separate contexts for multitenancy.
Within the context, the model provides a series of objects that define the application. These
objects are endpoints (EP) and endpoint groups (EPGs) and the policies that define their
relationship (Figure 2). Note that policies in this case are more than just a set of access control
lists (ACLs) and include a collection of inbound and outbound filters, traffic quality settings,
marking rules, and redirection rules.
The figure above shows a tenant with two contexts and the applications that make up those
contexts. The EPGs shown are groups of endpoints that make up an application tier or other
logical application grouping. For example, Application B, shown expanded on the right side of
the figure, may consist of a web tier (blue), application tier (red), and database tier (orange). The
combination of EPGs and the policies that define their interaction is an Application Network
Profile in the ACI model. More details can be found here.
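For readers who like to see the model concretely, the same hierarchy can be expressed as APIC managed objects. The sketch below is illustrative only (the names are invented for this example, and the classes involved are configured hands-on in later modules): a tenant containing one context and an application profile with three EPGs.

{
  "fvTenant": {
    "attributes": { "name": "ExampleTenant" },
    "children": [
      { "fvCtx": { "attributes": { "name": "ContextA" } } },
      {
        "fvAp": {
          "attributes": { "name": "ApplicationB" },
          "children": [
            { "fvAEPg": { "attributes": { "name": "Web" } } },
            { "fvAEPg": { "attributes": { "name": "App" } } },
            { "fvAEPg": { "attributes": { "name": "DB" } } }
          ]
        }
      }
    ]
  }
}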
Industry shifts are redefining IT at all levels. On-premises IT consumption models are shifting to cloud-based services. Infrastructure as a service (IaaS) is being supplanted by applications as a
service. Separate development and operations teams are moving toward integrated development
and operations (DevOps) groups. Box-centric management models are migrating to application-
centric management models.
Business agility requires application agility, so IT teams need to provision applications in hours
instead of months. Resources need to scale up (or down) in minutes, not hours.
The future of networking with ACI is about providing a network that is deployed, monitored, and managed in a fashion that supports DevOps and a constantly changing application environment. ACI does so through reduced complexity and a common policy framework that can automate the provisioning and management of resources.
Lab Format and Objectives
This is a SELF-PACED lab. The following table describes the objectives of the different
sections found in this document. Students are absolutely welcome to experiment and explore the
various product menus and options beyond what is covered in this guide with the understanding
that the ACI fabric is a shared resource among the students. Given the use of shared
infrastructure, it is important to further note that your work can impact others.
All steps in Modules 3-7 (except Task 2 in Module 5) have a JSON script prepared. If, in the interest of time, you would like to avoid configuring certain steps, you can execute the corresponding JSON script using Postman. Details about Postman can be found in Module 2.
Each Module starts with a description of the objectives, what will be covered, and additional
relevant information.
Module 5   Integrate APIC with VMware vCenter and attach Virtual Machines to the designated Endpoint Groups   50 minutes
Module 6   Connect the ACI Fabric with an existing External Layer 2 network   40 minutes
Module 7   Connect the ACI Fabric with an existing external Layer 3 network   35 minutes
Students who would like to go through the steps that could not be part of the main lab due to scaling constraints (APIC setup, ACI fabric discovery and vPC to UCS connectivity) can complete an optional lab on the simulator by following the Optional ACI Simulator Lab document.
Lab Topology and Lab Preparation
Pay close attention to the simplified logical cabling setup between devices. The UCS system and
the Nexus switches are preconfigured following the FlexPod with VMware vSphere 5.5 and Cisco ACI Design Guide and are not the focus of today's lab.
Additional documents
- For Hardware installation:
“Application Centric Infrastructure Fabric Hardware Installation Guide”
- For first-time access, APIC setup, Fabric Initialization, Out-Of-Band (OOB) Management access and more: "Cisco APIC Getting Started Guide"
- For the details regarding the configuration done on the APIC before this lab, consult the
ACI Lab Setup and Connectivity document associated with this lab.
Lab Access
This section describes access to the lab environment.
5. Open your RDP client. For Windows machines, type mstsc in the Windows search field.
6. Type 192.168.199.(100 + X) where X is your pod number and click Connect.
Note: For example, pod 7 will have the IP address 192.168.199.107
7. Click Connect when asked "Do you trust this remote connection?".
8. Use username acistudent and password 1234Qwer to access your Windows jumpstation.
Click OK.
Note: IP address below belongs to pod 12.
You are now connected to your jump station. Skip the rest of this section, which describes how to access the jump station using clientless, browser-based access.
Clientless Access using any browser:
11. If the warning below (or similar, it is browser dependent) is displayed, click I
Understand the Risks to proceed and accept any windows that follow.
12. Enter ACIstudentX for the username and use the password supplied by your instructor
at the beginning of the lab (Note each pod has a unique username).
Note: X is your double digit pod number, for example pod 8 will use ACIstudent08
14. Click on the Jumpbox VNC bookmark to access your Windows Workstation via VNC.
Note: VNC has the best performance, and the screen resolution can be adjusted. It is
important to point out that VNC does not support Copy&Paste functionality from within
VNC to your personal workspace.
If you want to Copy&Paste details from the remote session to your computer, you can
click on the RDP bookmarks. Unfortunately, the resolution for RDP sessions is not
modifiable.
15. Click Continue at Security Warning pop-up.
17. If you are asked for various Java approvals, please approve them all.
Note: You might have to go to Configure Java, open the Security tab, and add https://dctraining.cisco.com to the list of approved sites.
Afterward, you will have to close all your browser windows for this change to take effect and start the access process again.
Contact instructor for details.
18. Select user ACIStudent and password 1234Qwer for your Windows login credentials.
You are connected to your jump station and ready to proceed with the lab.
Module 1: Student Pre-Lab setup verification
Note: This Module can be skipped and done later without affecting the rest of the lab.
To be able to scale beyond one student per lab, the following tasks were performed ahead of
time:
1) Initial Setup of the APICs
2) ACI Fabric Initialization and Discovery
3) Configuration of Out of Band (OOB) Management of ACI Nodes
4) vPC connectivity between ACI Leaves and the UCS System
Detailed steps can be found in the ACI Lab Setup and Connectivity document.
In this Module, we will get familiar with ACI GUI options while verifying that the pre-work has
been done properly.
4. Login with the User ID admin and password 1234Qwer. Choose Advanced Mode.
Note: As of version 1.2, there is an option to choose Basic Mode. Basic Mode allows us to do the most common tasks in a simpler way but limits what we can do in that mode. Due to these limitations, we will be using Advanced Mode in this lab.
5. Click on the welcome, admin section of the GUI on the far right of the screen.
6. Familiarize yourself with the options available in this pull-down menu. These include all of the AAA options, as well as the API Inspector and API Documentation.
Note: We have an entire detailed module dedicated to API Inspector.
8. This is a tool that can be used to search for items in the GUI based on Category as shown
below.
10. Notice the Inventory and Packages sub-items. We first import the L4-L7 device package (from Cisco, F5, Citrix or other 3rd-party vendors) using the Packages submenu, and then we create that device in the Inventory menu.
The Inventory menu displays the VMs, hypervisors, and virtual switches belonging to the fabric.
This menu also provides VM statistics including packet counters, byte counters, CPU usage, and
memory usage.
As of ACI version 1.2, we use the Inventory menu to do what was done under the Policies menu in prior versions. We will configure connectivity policies for virtual machines managed by Virtual Machine Manager (VMM) tools such as VMware (vCenter, vShield), Microsoft (SCVMM) or OpenStack.
13. Familiarize yourself with the options.
Note: Later in this Module we will use Inventory options extensively to verify
configurations already existing on the system.
The Fabric Policies section allows policy creation and modification for interfaces that connect
spine and leaf switches. Fabric policies can enable features such as monitoring (statistics
collection and statistics export), troubleshooting (on-demand diagnostics and SPAN), or NTP. It
also includes internal connectivity policies.
The Access Policies section allows policy creation and modification for access-edge (external-
facing) interfaces that do not connect to a spine switch. External-facing interfaces connect to
external devices such as virtual machine controllers, hypervisors, hosts, routers, or fabric
extenders (FEX). Access policies provide configuration policy for individual ports, port channels
and virtual port channels, as well as protocols such as LLDP, CDP or LACP, and features like
monitoring or diagnostics.
A Tenant is a logical container or a folder for application policies. This container can represent
an actual tenant, an organization, security zone, application or a domain. A Tenant can also just
be used for organizing information in a way that is convenient. Overall, a tenant represents a
unit of isolation from a policy perspective.
15. Notice that three Tenants are preconfigured out of the box: common, infra and mgmt.
Note1: The common tenant is preconfigured for defining policies that provide common
behavior for all the tenants in the fabric. A policy defined in the common tenant is usable
by any tenant.
Note2: In the lab, the screen might not appear as below since other tenants will be
created. To see all tenants including preconfigured ones, use the option All Tenants.
16. Click on the System menu item at the top of the APIC GUI and observe the sub-menu
options.
The Quick Start section on this page assists you in performing common and basic procedures.
The Concepts menu displays APIC online help that covers the building blocks of ACI.
Reading through the documentation in Quick Start and Concepts is a great way to start learning about ACI or to refresh your knowledge during implementation.
Dashboard provides an overview of the system health.
The Controllers menu displays property and status information about the APIC instances and
clusters.
At this point you should be comfortable navigating the top level options of the APIC GUI.
Again, we will explore most of these menu options in much greater detail throughout the
lab.
19. Click on apic1 in the left navigation pane or double-click the apic1 object in the right
content pane, then select the General tab to see the information about this APIC.
Note: Explore the rest of the information about the APIC by expanding apic1 and
selecting different tabs as time permits.
This is an internal VXLAN Tunnel Endpoint (VTEP) address of this APIC, automatically assigned based on the address pool we chose during the initial APIC setup.
20. Click on Fabric at the top of the GUI. Select the Inventory option and click on
Topology option from the menu on the left to see the whole ACI fabric topology.
Note: We can see the same information from multiple places in the GUI.
21. Expand Pod 1, click on Leaf101 and select the General tab to see information about this
leaf including the OOB IP address. Explore the rest of the information about the leaves
and spines as time permits.
Note: OOB information about ACI nodes is also available where the policy configuration
tasks are performed, at Tenants->mgmt->Node Management Addresses->Node.
24. Login with the User ID of admin and password of 1234Qwer.
25. Type the where command. Notice that you are in the bash shell.
Note1: In ACI version 1.0, to issue NX-OS style commands we would have to use the vsh command first to get to the vshell. As of version 1.1 that is not needed anymore. All NX-OS commands can be issued straight from the iBash shell.
Note2: The iBash shell is just a bash shell plus a set of custom commands.
26. Type show with a space after it, and press the Tab key twice to see all available options.
Note1: In the iBash shell, pressing the Esc key twice replaces the NX-OS/IOS '?' option.
Press the Esc key twice to see both the options and their descriptions.
Note2: Pressing the Tab key once completes the command, like in NX-OS and IOS.
Use the common NX-OS show commands to confirm that the above port channels are indeed the ones leading to the UCS. Explore Leaf 101 and other nodes as time permits.
Additional suggested commands include:
conf t (this will not work since all configuration of the nodes is done using APIC)
show port-channel summary
show cdp neighbors
show lldp neighbors
show version (Processor Board ID is the same as Serial Number in GUI)
show vpc peer-keepalive (there is no peer-link or specific peer-keepalive for vPC in ACI. The status and information about the peer are exchanged automatically via the APIC and the fabric).
show ip interface vrf management
28. We can obtain the same information about vPC from the GUI. Go to Fabric->Inventory
and expand Pod1->Leaf101->Interfaces->VPC Interfaces within the left menu and
then click on VPC Interfaces and explore options as shown below.
29. Within the same menu on the left, expand Protocols->CDP and click on Neighbors to
see the equivalent of the show cdp neighbors command in GUI.
30. To verify who the vPC peers are and what the vPC Domain ID is, go to Fabric->Access
Policies, and within the navigation pane on the left expand Switch Policies->Policies and
click on Virtual Port Channel Default.
In this module we performed a high-level overview of the APIC GUI and verified the configuration work done as part of the lab preparation.
Module 2: Using API Inspector and Postman
Note: This Module can be skipped and done later without affecting the rest of the lab.
ACI can be configured using the GUI, CLI and REST API.
Throughout the lab we are using the GUI as it is the most efficient way to learn about ACI.
It is important to note, however, that the REST API, taking advantage of XML- or JSON-based scripts, provides a way to automate and optimize configuration deployments, making rollouts significantly faster.
When you perform a task in the APIC GUI, the GUI creates and sends internal API messages to
the operating system to execute the task. By using the API Inspector, which is a built-in tool of
the APIC, you can view and copy these API messages.
Note: As a part of this lab experience, you have the flexibility to invest your time in the topics that are of most interest to you. Should you decide to do so, you can skip entire modules by using the pre-created JSON scripts.
For this lab, we created scripts by collecting and modifying data output from the API Inspector, effectively using it as a "GUI to JSON" translator.
To reduce the time needed to configure the lab steps, you can optionally execute those scripts using the Chrome application Postman.
This module will provide an overview of these two tools by using an example with the following
steps:
1) Access API Inspector and Postman
2) Create an ACI User and then delete it using GUI
3) Capture and create a JSON script using the API Inspector output corresponding to step 2
4) Create the same ACI User and then delete it using the JSON script and Postman
More information about the REST API can be found in the Cisco APIC REST API User Guide.
If you are skillful with JSON or XML, you could write the above scripts yourself by using examples from the Cisco ACI configuration guides.
For advanced automation using Python, you can download sample code from repositories such as GitHub, but that is beyond the scope of this lab.
Task 1: Explore API Inspector and Postman
1. To open API Inspector, click on the welcome, admin option in the upper right hand
corner to view the drop-down list and select Show API Inspector.
Note: A new browser window will open displaying the API Inspector.
We can monitor all APIC activities here.
The API Inspector screen will fill up quite quickly since the API Inspector captures all levels of APIC communications by default, including all API commands (POST, GET and DELETE).
Note: POST, GET and DELETE commands are at the debug level.
Filter only affects what is displayed; the API Inspector will continue collecting all information regardless of the filter setting, but simply will not display it.
Search helps us further narrow down which commands are displayed.
Leave the API Inspector browser window open.
4. Expand Lab Collections in the menu on the left and click on POST Login.
5. Overview of the Postman options 1-5.
1) Collection of previously created scripts organized in folders. In this lab, you will use
those scripts if you want to save time by using REST API instead of the GUI.
2) On the right hand side is the content of the Login script we selected. This script will
be used for authentication with the APIC.
3) Shows the URL, i.e., where in the Management Information Tree (MIT) we want to act on the object with the Login script.
4) Shows the Method, i.e., what we want to do with the object (POST, GET and DELETE are supported).
5) The content of the message (the Payload): what we want to create or modify and how.
Note: We can obtain url, Method and Payload information from the API Inspector.
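For reference, the Login script is simply a POST of the admin credentials to the APIC authentication URL. A minimal sketch of such a payload is shown below, assuming the standard aaaLogin endpoint (the script in your Lab Collections is the authoritative version; the URL would be of the form https://<apic>/api/aaaLogin.json):

{
  "aaaUser": {
    "attributes": {
      "name": "admin",
      "pwd": "1234Qwer"
    }
  }
}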
We are now going to create and delete a single user using the GUI. The equivalent JSON scripts will be captured by the API Inspector.
Make sure you keep the API Inspector window open!
Task 2: Create and Delete ACI User using GUI
6. Click on ADMIN and select AAA option.
7. Expand Security Management, right click on Local User and select Create Local User.
9. Type Student<Pod Number>Local for Login ID, and 1234Qwer for a Password and
Confirm Password and click Finish.
Note: Replace <Pod Number> with your pod number.
10. Expand Local Users, right click on the newly created user Student<Pod Number>Local
and select Delete.
All configuration we did in this task using the GUI should have been captured by the API Inspector.
In the following task we will use those captured JSON scripts.
The first output will be the POST command to create the user. Notice two of the components we need:
1) url which describes where to place the object (new user and its properties)
2) payload which tells APIC what the characteristics of the object are
Note: This is plain text, and can be copied, pasted and modified in any text editor.
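Since the captured output is shown only as a screenshot, here is a hedged sketch of what the create POST typically looks like, mirroring the delete payload shown later in this task. Your own API Inspector capture, with your pod number in the user name, is the authoritative version; the real capture will usually include additional attributes and role/domain children.

{
  "aaaUserEp": {
    "attributes": {
      "dn": "uni/userext",
      "status": "modified"
    },
    "children": [
      {
        "aaaUser": {
          "attributes": {
            "dn": "uni/userext/user-StudentPodNumberLocal",
            "name": "StudentPodNumberLocal",
            "pwd": "1234Qwer",
            "status": "created"
          },
          "children": []
        }
      }
    ]
  }
}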
13. Read through the captures and find the second POST command, which contains the information on how to delete the user. Copy the POST output for both creating and deleting the user and paste it into Notepad to prepare the JSON scripts to be sent by Postman.
Note: You should have your pod number below instead of “PodNumber” in the name.
The second POST command should be toward the bottom of the list.
The format of the above commands might be hard to read. When using Postman in your test and production environments, you can make the output more readable by using online formatting programs such as codebeautify. You should run only the payload portion through such a program. Below is an example of how the new output would look for our code.
Note: Since in this lab you do not have access to the internet, you cannot access the codebeautify site to make the JSON more readable. Below is the "beautified" JSON code, just for your reference.
"aaaUserEp": {
"attributes": {
"dn": "uni/userext",
"status": "modified"
},
"children": [
{
"aaaUser": {
"attributes": {
"dn": "uni/userext/user-StudentPodNumberLocal",
"status": "deleted"
},
"children": []
}
}
]
}
}
Next we will use JSON text scripts from this task together with Postman to create and delete the
user.
15. Go to Postman, click on POST Login on the left and click Send on the right to
authenticate with APIC.
16. Verify that POST was successful by observing the Status “200 OK” message in the
output below the Send button.
Note: For all the scripts that you will be running in the lab this message confirms
success.
17. To create a new user, enter the corresponding url and payload from the previous task,
select POST and click Send. Make sure that JSON is selected as a format.
18. Observe that the new student is created in the APIC GUI.
19. To delete the user, enter the corresponding url and payload from the previous task within
Postman and click Send.
20. Verify that the user is now removed in the APIC GUI.
The main purpose of this module was to provide you with the very basics of REST API
programming with JSON using API Inspector and Postman, so that you can use existing scripts
to skip specific tasks. That way you can have more time to focus on the parts of the lab of most
interest to you.
In today's world of automation, APIs and programmability through various orchestrator tools (UCS Director, Puppet …) are present and gaining more traction in the data center. While you will not have to become a JSON script expert, a basic understanding of this technology is very important. The second goal of this module was to provide that understanding.
Module 3: Building ACI Forwarding Constructs
In the previous modules we verified the setup of the fabric and got familiar with the GUI and
basic JSON programmability. From this module on we will focus on the management of the ACI
using the ACI policy model.
The ACI policy model enables the specification of application requirements policies.
This policy model manages the entire fabric, including infrastructure, authentication, security,
services, applications, and diagnostics.
In simplified terms, ACI brings to the data center networking environment what UCS brought to
the computing environment with service profiles and stateless hardware.
A Tenant is a logical container for application policies that enables domain-based access
control. It represents a unit of isolation from a policy perspective. These logical containers can be used to separate customers, organizations, or domains, or simply to group policies.
Note: In this lab we use the concept of Tenant to separate policies from multiple students
that are using a single ACI fabric.
The following figure depicts elements of the Tenant. In this Module we are focusing on Context
(VRF), Bridge Domain and Subnet. The other elements will be covered later in the lab.
A Bridge Domain (BD) represents a Layer 2 (L2) forwarding construct. It is a container of subnets and may be configured as an L2 flood domain/boundary, but it does not have to be. We can have one or more Bridge Domains associated with the same Context (VRF). A Bridge Domain can span multiple switches and can contain one or more Subnets.
More details about ACI Policy Model can be found in Cisco Application Centric Infrastructure
Fundamentals document.
At the end of the Module, you will create your Tenant (TenantX, where X is your pod number).
This Tenant will contain one VRF (TX_Production) and two Bridge Domains (VMData-Web
and VMData-App).
Task Summary:
Task 1: Create a Tenant
Task 2: Create a VRF for the new tenant
Task 3: Create Bridge Domains for VMData-Web and VMData-App
These constructs will be used in later lab exercises.
Note: If you want to save lab time, you can configure this module task by task using Postman. Please check Module 2 for details on how to use this tool.
1. Select Admin->AAA.
2. Expand Security Management, right click on Security Domains and select Create
Security Domain.
3. Type TenantX for a name (Replace X with your pod number) and click Submit.
Note: Screenshot below is taken for pod 5.
5. Enter TenantX (Replace X with your pod number) as your tenant's Name, check the TenantX Security Domain and click Submit.
Note: As soon as you click Submit, you will be taken to the Networking option of the
Tenant you just created.
6. Your Tenant TenantX is created.
Note1: The URL is updated to reflect that you are in your tenant object.
Note2: A quick way to get to your tenant is by clicking on its name in the tenant bar. The tenant bar shows only the most recently used tenants. Since there are many students using the same ACI fabric, your tenant's name might not be there at all times.
7. To view all tenants, select the ALL TENANTS tab. To get back to your tenant, locate
your tenant in the table and double click on it.
Note1: This is another way to get to the desired tenant if it is not on the tenant bar. Your
tenant might be on the second page of the list.
Note2: You could also perform a search by name. The search is case sensitive.
Your Tenant is successfully created and you are done with this task.
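As a reference, the equivalent REST call for this task posts an fvTenant object to the APIC. The sketch below is a simplified, hedged example using pod 5 names (the prepared Postman script for this task is the authoritative version; the security-domain association is shown here as an aaaDomainRef child):

{
  "fvTenant": {
    "attributes": {
      "name": "Tenant5",
      "status": "created"
    },
    "children": [
      {
        "aaaDomainRef": {
          "attributes": { "name": "Tenant5" }
        }
      }
    ]
  }
}

A payload like this would typically be posted to a URL of the form https://<apic>/api/mo/uni.json after authenticating, as shown in Module 2.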
Task 2: Create a VRF and a Bridge Domain (BD) for Web Servers
In this task, you will create a VRF (Virtual Routing Forwarding) instance for your tenant.
In addition, you will create a Bridge Domain (BD) for the Web Servers tier.
Note: Prior to ACI version 1.2, the term Private Network was used in the GUI. Other terms used interchangeably in the documentation are Context and Private Network Context.
A VRF is an L3 object in the ACI network and allows for separate routing instances. It can also be used for administrative L3 separation.
All subnets for a particular tenant must be associated with a VRF for that tenant (via
Bridge Domain), but a tenant can have more than one VRF.
Subnets cannot overlap within the same VRF.
8. Expand Tenant TenantX (where X is your pod number) created in the previous task and
click on the Networking option. Observe available Drag and drop options.
Note: As of ACI version 1.2, we can use the drag-and-drop method to configure the L3 Network (VRF), L2 Bridge Domain (BD), External Bridged (L2) and External Routed (L3) networks. You can use this method in production.
In this lab we will not use this method, in order to stay consistent with the previous ACI versions and to show in more detail where each object is created.
Note: Make sure you are in your Tenant!
10. Enter TX_Production (Replace X with your pod number) and click Next.
Note: We could have unchecked Create A Bridge Domain and created the BD separately.
11. Enter VMData-Web for the Bridge Domain name and click Finish.
A BD gives network engineers Layer 2 design capabilities that go well beyond what they could do with a VLAN alone.
While we can design a BD to behave like a traditional VLAN, we can also provide an optimized design that uses the ACI fabric intelligence to avoid unnecessary flooding and broadcasts.
A Bridge Domain (BD) also contains subnets. If we choose, the BD will provide the distributed default gateway for those subnets.
13. Expand Networking->Bridge Domain->VMData-Web, right click on Subnets and
select Create Subnet.
Note: Throughout the screenshots you will notice ND abbreviation like in ‘ND Proxy
Subnets’ below. ND stands for IPv6 Neighbor Discovery.
Support for IPv6 started in ACI version 1.1. IPv6 is not part of this lab.
14. Enter 192.168.10.1/24 as the Web Servers Network Gateway IP and click Submit.
Note: In ACI version 1.0, there was an additional mask field that would auto-populate based on the / notation. The mask field would be 255.255.255.0 in the example below.
Note: After you click Submit, the subnet will be displayed in the Work Pane.
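The configuration built so far in this task (the VRF, the Web Bridge Domain and its subnet) corresponds roughly to the following JSON, posted under the tenant. This is a hedged sketch using pod 5 names; the prepared Postman scripts remain the authoritative version:

{
  "fvTenant": {
    "attributes": { "name": "Tenant5" },
    "children": [
      { "fvCtx": { "attributes": { "name": "T5_Production" } } },
      {
        "fvBD": {
          "attributes": { "name": "VMData-Web" },
          "children": [
            { "fvRsCtx": { "attributes": { "tnFvCtxName": "T5_Production" } } },
            { "fvSubnet": { "attributes": { "ip": "192.168.10.1/24" } } }
          ]
        }
      }
    ]
  }
}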
We will now create a second Bridge Domain, called VMData-App, for the App Servers.
Unlike with VMData-Web, we will define the subnet while creating the VMData-App bridge domain.
There is typically more than one way to create the same object in the ACI GUI.
15. Right-click on Bridge Domains and select Create Bridge Domain.
16. Enter VMData-App for the name, for VRF select TenantX/TX_Production from the
drop down menu and click Next.
Note: By default, the L2 design is optimized. If you would like to design L2 differently (for example, to behave like a VLAN), select the customize option and modify the L2 behavior.
18. Enter 192.168.11.1/24 as the App Servers Network Gateway IP and click OK.
21. Click on Networking to verify that we have created and configured two Bridge Domains
and associated them with the VRF TX_Production.
ARP Flooding is off by default because the assumption is that hosts are not silent and will source frames, so the fabric (leaf) will learn about the endpoint.
Later in the lab we will use a Switch Virtual Interface (SVI) on a Cisco Nexus 5548 Switch to
act as an endpoint. The SVI will not source traffic and therefore the fabric will not learn the
SVI’s IP Address. When ARP flooding is disabled (which is the default), unicast routing is
performed on the target address. In our scenario, the SVI is not sending traffic so the host route
is not in the routing table and the ARP is dropped. By turning on ARP flooding the “ARP
Request” is sent out of the interface toward the N5K and when the “ARP Response” returns (i.e.
arrives at the leaf) the fabric dynamically learns about the SVI (IP address and MAC).
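If you prefer to make this change through the REST API instead of the GUI steps that follow, a sketch of the corresponding payload is shown below. This is a hedged example for pod 5; arpFlood is the fvBD attribute toggled by the ARP Flooding checkbox, and the URL would be of the form https://<apic>/api/mo/uni/tn-Tenant5/BD-VMData-App.json:

{
  "fvBD": {
    "attributes": {
      "dn": "uni/tn-Tenant5/BD-VMData-App",
      "arpFlood": "yes"
    }
  }
}

The same payload with BD-VMData-Web in the dn covers the other Bridge Domain.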
22. Click on VMData-App, select ARP Flooding checkbox and click Submit.
23. Repeat the same step for the other Bridge Domain. Click on VMData-Web, select ARP
Flooding checkbox and click Submit.
24. If Policy Usage Warning window pops up, select Submit Changes to confirm.
You created your Tenant and the basic L2 and L3 constructs for it. You are done with Module 3.
Module 4: Configuring Application Profile
In the previous Module, we were introduced to the ACI networking policy model and the concept of a Tenant, and we configured some of the Tenant's networking elements, namely the Context (also known as Private Network or VRF) and the Bridge Domain.
In this module we will focus on the Application Profile and Security Policies.
The ACI policy model is designed to provision a network based on application requirements.
An Application Profile models those application requirements. It consists of Endpoint Groups
(EPGs) and the policies that define the communication between them - Contracts.
Contracts are groups of subjects which define communication between EPGs. One EPG
provides the contract (sets the rules for communicating with it), and the other EPG consumes
the contract (gets information about the rules to access the provider EPG). An EPG can be both a
provider and consumer of multiple contracts.
Subjects build definitions of communications between EPGs. They contain filters that classify
traffic of interest and an Action that defines what to do with that traffic (similar to Access
Control Lists). There is an optional Label identifier for more complex relationship mappings.
Label Identifiers go beyond the scope of this lab.
In this module we will create a 2-Tier Application Profile. We will create two Endpoint Groups:
1) App_Servers EPG which will contain our App Server Virtual Machines introduced later
in the lab.
2) Web_Servers EPG which will contain our Web Server Virtual Machines also introduced
later in the lab.
We will define traffic we want to allow between these two EPGs via a contract App_Contract.
The final configuration for this module is depicted below where the App_Servers EPG provides
App_Contract, while the Web_Servers EPG consumes that contract as indicated by the arrows.
Note: If you want to save lab time, you can configure this module’s tasks using Postman.
Please check the Module 2 for details.
Task 1: Create Application Network Profile (ANP)
1. If you are not already in your tenant TenantX (where X is your pod number), go there by
selecting ALL TENANTS tab and double clicking on the name of your TenantX.
Note: Your tenant might be on the second page of the list.
3. Enter Name TX_AppProfile (where X is your pod number) and click + under EPGs to
create an Endpoint Group.
4. Enter name Web_Servers, for the Bridge Domain select VMData-Web. Expand the
Consumed Contract and select Create Contract.
Note: You will associate this EPG to a Virtual Machine Manager (VMM) Domain
Profile later in the lab.
6. Enter Name App_Services, click + next to Filters. Click anywhere on the Name box
where it reads “select an option” and then click on the new + sign to create a new filter.
7. Name the Filter App_Service_Ports and click + to add Entries. Enter data from the table
below and click Update.
Name ICMP
EtherType IP
IP Protocol icmp
8. Click + next to Entries to add another Filter Entry. Enter data from the table below and
click Update.
Name TCP5000
EtherType IP
IP Protocol tcp
Destination port range From/To 5000/5000
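Expressed as a REST payload, a filter with these two entries would look roughly like the following hedged sketch using the vzFilter and vzEntry classes (the prepared Postman scripts for this module are the authoritative version):

{
  "vzFilter": {
    "attributes": { "name": "App_Service_Ports" },
    "children": [
      {
        "vzEntry": {
          "attributes": {
            "name": "ICMP",
            "etherT": "ip",
            "prot": "icmp"
          }
        }
      },
      {
        "vzEntry": {
          "attributes": {
            "name": "TCP5000",
            "etherT": "ip",
            "prot": "tcp",
            "dFromPort": "5000",
            "dToPort": "5000"
          }
        }
      }
    ]
  }
}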
12. Click Submit to Create Contract.
Note: In this part of the lab we used the wizard to create the Contract, Subject and Filters. In Module 7 (L3 external connectivity), we will show how to create these objects without the wizard.
13. Expand the Consumed Contract, select the newly created App_Contract and click
Update.
14. Click + under EPGs to add the second Endpoint Group of the 2-tier application. Name
EPG App_Servers, for Bridge Domain select VMData-App, select App_Contract as
the provided contract and click Update.
16. Expand Application Profiles and click on TX_AppProfile. Click on any object in the
Application graph to see the directions of the contract.
Note: Direction of the contract is from the provider to the consumer, while the first
packet goes the opposite way.
17. Expand the App_Servers and Web_Servers EPGs and select Contracts to verify that the contractual relationship was created. The state will show as Formed.
Note: We could already see that relationship by reading the graph arrows from before.
18. The Contract and Filters are under the Tenant's Security Policies. To see the Contract and Filter we created, expand the Tenant tree as shown below.
Note: We created the Contract and Filters as part of the Application Profile wizard. We could have created them separately ahead of time. We can reuse these Security Policies in different ANPs (Application Network Profiles) later.
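For reference, the objects built in this module map onto the REST API roughly as in the sketch below: a contract whose subject references the filter, and an application profile whose EPGs attach their Bridge Domains and provide or consume the contract. This is a hedged, simplified example using pod 5 names, posted under the tenant; the prepared Postman scripts are the authoritative version.

{
  "fvTenant": {
    "attributes": { "name": "Tenant5" },
    "children": [
      {
        "vzBrCP": {
          "attributes": { "name": "App_Contract" },
          "children": [
            {
              "vzSubj": {
                "attributes": { "name": "App_Services" },
                "children": [
                  { "vzRsSubjFiltAtt": { "attributes": { "tnVzFilterName": "App_Service_Ports" } } }
                ]
              }
            }
          ]
        }
      },
      {
        "fvAp": {
          "attributes": { "name": "T5_AppProfile" },
          "children": [
            {
              "fvAEPg": {
                "attributes": { "name": "Web_Servers" },
                "children": [
                  { "fvRsBd": { "attributes": { "tnFvBDName": "VMData-Web" } } },
                  { "fvRsCons": { "attributes": { "tnVzBrCPName": "App_Contract" } } }
                ]
              }
            },
            {
              "fvAEPg": {
                "attributes": { "name": "App_Servers" },
                "children": [
                  { "fvRsBd": { "attributes": { "tnFvBDName": "VMData-App" } } },
                  { "fvRsProv": { "attributes": { "tnVzBrCPName": "App_Contract" } } }
                ]
              }
            }
          ]
        }
      }
    ]
  }
}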
In this Module we created an Application Network Profile for a simple 2-tiered application and
defined a Provider/Consumer relationship between two Endpoint Groups.
So far in the lab we have created the necessary elements of the Logical Policy Model.
In the following Module we will assign Endpoints to the Endpoint Group.
Module 5: Configuring VMM Integration
In the previous Modules, we focused on Logical Policy Model elements. We created a Tenant, an
Application Network Profile, and Network elements such as a VRF and Bridge Domains.
We defined two Endpoint Groups as a part of Application Network Profile (ANP) representing
an example of 2-tier application.
Again, Endpoints can be physical (for example, bare-metal servers or NAS) or virtual (for example, Virtual Machines).
Endpoints within the same EPG can communicate freely. For communication between endpoints belonging to different Endpoint Groups, there has to be a Contract between the two EPGs.
In this lab, we will integrate VMware vCenter with APIC and enable connectivity between two
preconfigured Virtual Servers using the Logical Policy Model elements we configured in the
previous two modules.
We will explore the impact of the Contract we created earlier to connectivity between these two
servers, given that each belongs to a different Endpoint Group.
Note: All tasks in this module have to be performed to be able to continue with the lab. If
you want to save lab time, you can configure all tasks except Task 2 using Postman. Task
2 in this module is done in the vCenter; therefore there is no Postman shortcut.
Task 1: Configure VMM Integration
In this task, you will integrate a Virtual Machine Manager (VMM) with ACI.
The table below shows the naming convention for the different objects per pod.
Note: Attachable Access Entity Profile (AEP) is created ahead of time as a part of ACI-
UCS connectivity. Details are in the ACI Lab Setup and Connectivity document.
Student   Tenant     VMM Name      Associated AEP   VLAN Pool Name
Pod1      Tenant1    T1-vCenter    UCS_Domain_01    T1-VLAN-Pool
Pod2      Tenant2    T2-vCenter    UCS_Domain_01    T2-VLAN-Pool
Pod3      Tenant3    T3-vCenter    UCS_Domain_01    T3-VLAN-Pool
Pod4      Tenant4    T4-vCenter    UCS_Domain_01    T4-VLAN-Pool
Pod5      Tenant5    T5-vCenter    UCS_Domain_01    T5-VLAN-Pool
Pod6      Tenant6    T6-vCenter    UCS_Domain_01    T6-VLAN-Pool
Pod7      Tenant7    T7-vCenter    UCS_Domain_01    T7-VLAN-Pool
Pod8      Tenant8    T8-vCenter    UCS_Domain_01    T8-VLAN-Pool
Pod9      Tenant9    T9-vCenter    UCS_Domain_01    T9-VLAN-Pool
Pod10     Tenant10   T10-vCenter   UCS_Domain_01    T10-VLAN-Pool
Pod11     Tenant11   T11-vCenter   UCS_Domain_01    T11-VLAN-Pool
Pod12     Tenant12   T12-vCenter   UCS_Domain_01    T12-VLAN-Pool
Pod13     Tenant13   T13-vCenter   UCS_Domain_01    T13-VLAN-Pool
Pod14     Tenant14   T14-vCenter   UCS_Domain_01    T14-VLAN-Pool
Pod15     Tenant15   T15-vCenter   UCS_Domain_01    T15-VLAN-Pool
Pod16     Tenant16   T16-vCenter   UCS_Domain_01    T16-VLAN-Pool
1. Select VM Networking. Right click at VMware and select Create vCenter Domain.
Note: A vCenter domain references a particular vCenter manager and a particular pool of VLANs or VXLANs that will be used.
In older ACI versions there was an additional Policies tab, which has been removed as of version 1.2.
2. Enter TX-vCenter as Name (where X is your pod number), select UCS_Domain_1 for
Associated Attachable Access Entity Profile, click on VLAN Pool and select Create
VLAN Pool.
Note: Attachable Access Entity Profile (AEP) is created ahead of time. Details are in the
ACI Lab Setup and Connectivity document. The AEP specifies which leaf ports will be
used to connect to the hosts and which VLANS are valid on those connections.
3. Enter TX-VM-VLAN-Pool as a Name (where X is your pod number) and click on + next
to Encap Blocks.
Note: Make sure that Allocation Mode is Dynamic Allocation. For VLAN Pools
associated with Virtual Domains, we want Dynamic Allocation.
4. Add the VLAN Range as per the table below and click OK
Note: VLAN range for pod5 is in the screenshot.
Pod VLAN Range Pod VLAN Range Pod VLAN Range Pod VLAN Range
Pod1 2110–2119 Pod5 2150–2159 Pod9 2190–2199 Pod13 2230–2239
Pod2 2120–2129 Pod6 2160–2169 Pod10 2200–2209 Pod14 2240–2249
Pod3 2130–2139 Pod7 2170–2179 Pod11 2210–2219 Pod15 2250–2259
Pod4 2140–2149 Pod8 2180–2189 Pod12 2220–2229 Pod16 2260–2269
5. Click Submit in the CREATE VLAN POOL dialog.
Note: The VLAN pool we just created is at Fabric->Access Policies->Pools. We could have created it there ahead of time and had it available in the pull-down menu.
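For reference, a dynamic VLAN pool like the one just created corresponds roughly to the following REST payload. This is a hedged sketch for pod 5, using the fvnsVlanInstP and fvnsEncapBlk classes; the prepared Postman scripts are the authoritative version:

{
  "fvnsVlanInstP": {
    "attributes": {
      "name": "T5-VM-VLAN-Pool",
      "allocMode": "dynamic"
    },
    "children": [
      {
        "fvnsEncapBlk": {
          "attributes": {
            "from": "vlan-2150",
            "to": "vlan-2159"
          }
        }
      }
    ]
  }
}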
6. To select a Security Domain, click +, select your Tenant's Security Domain TenantX (where X is your pod number) and click Update.
Note: You created Security Domain at the beginning of the Module 3.
8. Enter AdministratorX as a Name, for Username enter root with password 1234QwerX
(where X is your pod number) and click OK.
Note: For example, pod 7 will have password 1234Qwer7
10. Refer to the following table to assign VCENTER/VSHIELD CONTROLLER options.
Choose DVS Version 5.5 for DVS version and click OK.
Note: Enter the Datacenter name exactly as in the table below (case-sensitive; it must match exactly). You can also copy and paste the Datacenter name from the vCenter.
Name                 Datacenter   vCenter IP        Associated Credential
T1_VC_Controller Tenant1 192.168.199.201 Administrator1
T2_VC_Controller Tenant2 192.168.199.202 Administrator2
T3_VC_Controller Tenant3 192.168.199.203 Administrator3
T4_VC_Controller Tenant4 192.168.199.204 Administrator4
T5_VC_Controller Tenant5 192.168.199.205 Administrator5
T6_VC_Controller Tenant6 192.168.199.206 Administrator6
T7_VC_Controller Tenant7 192.168.199.207 Administrator7
T8_VC_Controller Tenant8 192.168.199.208 Administrator8
T9_VC_Controller Tenant9 192.168.199.209 Administrator9
T10_VC_Controller Tenant10 192.168.199.210 Administrator10
T11_VC_Controller Tenant11 192.168.199.211 Administrator11
T12_VC_Controller Tenant12 192.168.199.212 Administrator12
T13_VC_Controller Tenant13 192.168.199.213 Administrator13
T14_VC_Controller Tenant14 192.168.199.214 Administrator14
T15_VC_Controller Tenant15 192.168.199.215 Administrator15
T16_VC_Controller Tenant16 192.168.199.216 Administrator16
12. Your VM Provider Profile will appear in the vCenter Domains.
13. To verify connectivity with vCenter, expand VMware and click on Tx-vCenter. Select
Operational tab and verify that the State is Online.
Note: If status is not online, check troubleshooting section at the end of this document. If
that does not help, please contact your instructor.
14. Expand the whole tree under Tx-vCenter to verify that the vCenter inventory is discovered. Verify that the APIC created a DVS in this vCenter and notice that there is no VM-facing DVS port group.
Note: You can also verify the creation of the DVS in vCenter, which you will do in the
next task.
In this Task so far, we connected APIC with vCenter. APIC instructed vCenter to create a
Distributed Virtual Switch (DVS).
APIC will NOT instruct vCenter as to which servers it should connect to the DVS or VMware
port group. This task has to be done by the Server Administrator within vCenter, and that is what
we will do in the next task.
Next we will define what will qualify an Endpoint to belong to an Endpoint Group (EPG).
In the vCenter based server environment, this is done using VMware Port Groups. VMs
belonging to the same Port Group will belong to the same EPG.
In the previous module we created the Application Profile TX_AppProfile (where X is your pod number) that contains two EPGs. By associating each of those two EPGs with the VMM domain TX-vCenter we just created, APIC will instruct vCenter to create two corresponding port groups.
Any VM the Server Administrator connects to those port groups will correspond to an EPG member from the APIC's perspective.
16. Verify that your Tenant has TX_AppProfile created with two EPGs by expanding
Application Profiles and clicking on TX_AppProfile. Click on any object to verify
contracts provider-consumer relationship of the EPGs.
17. Expand TX_AppProfile->Application EPGs and select EPG App_Servers, choose the
Operational Tab. Notice that there are no client end-points in this EPG.
18. Expand EPG App_Servers, right click Domains (VMs and Bare-Metals) and select Add
VMM Domain Association.
19. For the VMM Domain Profile, choose your VMM Domain instance VMware/TX-vCenter (where X is your pod number), set the Deploy and Resolution Immediacies to Immediate and click Submit.
20. Verify that the STATE is formed.
Note: As soon as you click Submit, the new port group will be created in the vCenter.
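For reference, the VMM domain association added in step 19 corresponds roughly to a single relation object posted under the EPG. A hedged sketch for pod 5 is shown below (an fvRsDomAtt relation with both immediacies set to immediate; the URL would be of the form https://<apic>/api/mo/uni/tn-Tenant5/ap-T5_AppProfile/epg-App_Servers.json, and the prepared Postman scripts remain the authoritative version):

{
  "fvRsDomAtt": {
    "attributes": {
      "tDn": "uni/vmmp-VMware/dom-T5-vCenter",
      "resImedcy": "immediate",
      "instrImedcy": "immediate"
    }
  }
}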
21. Click on EPG App_Servers and select the Faults tab to verify that there were no faults.
Note: The Faults tab exists for most of the objects in ACI and is an excellent starting point for troubleshooting.
It is important not to overreact to a fault. Sometimes faults are reported to the APIC during a transition phase and get resolved by themselves. If the fault is resolved after the transition, its Lifecycle status will eventually become Clearing, which indicates that the fault is no longer active. You might encounter that case in this step.
22. Select the OPERATIONAL tab and note that there are still no Endpoints associated.
23. Expand EPG Web_Servers, right click Domains (VMs and Bare-Metals) and select
Add VMM Domain Association.
Note: You are repeating the same last few steps just for Web EPG.
24. For VMM Domain Profile choose your VMM Domain instance VMware/TX-vCenter,
set Deploy and Resolution Immediacies to Immediate and click Submit.
26. Click EPG Web_Servers, select the OPERATIONAL tab and note that there are still no
Endpoints associated.
APIC is now ready for virtual App and Web endpoints (virtual servers) to get associated with
their corresponding EPGs.
In the next task, as a Server Administrator, you will attach virtual servers to the port groups
created by APIC. That way those servers will be associated with the proper EPG.
Task 2: Add VMs to the port group from the VMware vCenter
In this task we will associate an existing host with the DVS and then attach each of the two
servers to the proper port group on that DVS.
27. From the student desktop, double click on the vSphere Client icon, use address and
credentials from the table below, and click Login. Click Ignore if the warning pops up.
Pod vCenter IP User Password Pod vCenter IP User Password
1 192.168.199.201 root 1234Qwer1 9 192.168.199.209 root 1234Qwer9
2 192.168.199.202 root 1234Qwer2 10 192.168.199.210 root 1234Qwer10
3 192.168.199.203 root 1234Qwer3 11 192.168.199.211 root 1234Qwer11
4 192.168.199.204 root 1234Qwer4 12 192.168.199.212 root 1234Qwer12
5 192.168.199.205 root 1234Qwer5 13 192.168.199.213 root 1234Qwer13
6 192.168.199.206 root 1234Qwer6 14 192.168.199.214 root 1234Qwer14
7 192.168.199.207 root 1234Qwer7 15 192.168.199.215 root 1234Qwer15
8 192.168.199.208 root 1234Qwer8 16 192.168.199.216 root 1234Qwer16
28. Navigate to Hosts and Clusters, select TenantX Datacenter (where X is your pod
number) and select the Tasks & Events tab to verify DVS and port group creation.
Note: You can also navigate to the Networking view to see if the APIC DVS and port
groups were successfully created in the previous task.
29. Verify that your host and two VMs appear under your TenantX Datacenter.
Note: There will be other VMs under your host. Ignore them.
30. Click on Hosts and Clusters and navigate to Networking to add the ESXi server to the
DVS created by APIC.
31. Expand ACI-vCenterX -> TenantX -> TX-vCenter folder (where X is your pod
number).
Note: Notice that the folder name is the same as the VMM domain that you created in the
APIC in the previous task. Also notice the two port groups created by APIC named after
the Tenant, Application Profile and EPG names.
32. Right click at DVS TX-vCenter switch and select Add Host.
The Add Host to vSphere Distributed Switch wizard will guide you through the process of
adding your server into the APIC DVS switch.
33. Select your Host and two unassigned physical adapters (vmnics 0 & 1) and click Next.
Note: Notice that two physical adapters (vmnic2 and vmnic3 in the example below) are
already assigned to vswitch0. These provide management connectivity between the ESXi
host and vCenter. Do not change these adapter mappings.
34. Click Next -> Next-> Finish.
Note: Notice the Port Groups with VLANs dynamically created from the APIC VLAN pool.
The VLANs in the screen capture belong to pod2. Specific VLAN numbers might vary.
35. Verify that the host is Connected with a VDS status of Up by selecting TX-vCenter and the Hosts tab.
37. Right click on Web_Server VM and select Edit Settings
38. Click on the Network adapter, set the Network Connection to the
TenantX|TX_AppProfile|Web_Servers Port Group and click OK.
Note: Notice that the name of the port group includes the names of the Tenant, Application Profile and Endpoint Group.
As soon as you are done with this step, vCenter will communicate with the APIC and the Endpoint will be registered with the EPG, even if the VM is powered off.
40. Click on the Network adapter, set the Network Connection to the
TenantX|TX_AppProfile|App_Servers Port Group and click OK.
41. Go to the Networking menu by selecting Hosts and Clusters, then click on each of the port groups and its Virtual Machine tab to verify that the Network settings for App-Servers and Web-Servers are applied.
42. Go back to your Tenant in the APIC to observe the registered Endpoints: Tenants->TenantX (where X is your pod number).
44. Go back to the vCenter, revert to the Hosts and Clusters view, right click on Web-ServerX and select Open Console.
46. Ping the Web Servers' Default Gateway: ping 192.168.10.1
Note: We defined the Default Gateway when we created the Bridge Domain for the 192.168.10.0/24 subnet. The Leaf is the default gateway for this subnet.
To stop the ping, press Ctrl-C.
Note: This ping should be successful. If the ping is unsuccessful, Power off (Guest Shutdown) the Linux VM (from vCenter) and then, after the shutdown is completed, Power On; retry the ping.
47. Go back to the APIC GUI and observe information about the endpoint.
Note: Since Web_Server generated traffic, the APIC discovered the Web_Server's IP address.
48. To view the endpoint's details, double-click on the Web-ServerX entry depicted above. Click Close when done.
Note: Under Interfaces (you need to scroll down) you can see the vPC Policy Group related to the ACI-UCS connectivity.
If time permits, verify that App_Server endpoint is registered to the EPG App_Servers.
In this Task, playing the role of a Server Administrator, we connected the App_Server and Web_Server from vCenter to the virtual networking environment created by the APIC.
We also verified that the endpoints were properly associated with their corresponding EPGs.
49. Go back to vCenter and the Web-Server's Console window. Type ifconfig to obtain the address of your Web-Server. The fourth octet value should be 100+X (where X is your pod number).
Note: Web-Servers are in the 192.168.10.0/24 subnet and App-Servers are in the 192.168.11.0/24 subnet. The fourth octets are the same for both servers within the same pod.
The screenshot is taken from pod2, (100+2) = 102.
50. Ping the App-ServerX ping 192.168.11.(100+X) where X is your pod number.
Note: To stop ping press Ctrl-C.
51. After that was successful, try to SSH from web-server to app-server. Type command ssh
192.168.11.100+X (where X is your pod number). Default port is port 22.
Note: SSH will time out eventually. To reduce the wait time, press Ctrl-C to interrupt.
52. Now try to SSH from the web-server to the app-server again, but use TCP port 5000:
ssh -p 5000 192.168.11.(100+X) (where X is your pod number)
Note: This time the connection is refused immediately by the app-server. That means the
packet actually made it to the App_Server.
The SSH TCP connection with destination port 5000 was allowed by the contract to reach the
App_Server, while the default port 22 was blocked since it was not listed in the contract.
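Note: If you are curious what that permit rule looks like in the object model, you can inspect
the App_Contract over the REST API from Module 2 (pod 5 shown, APIC address is a placeholder).
The request below returns the contract's subjects and the filters they reference; the JSON
fragment underneath is only an illustration of the shape of a TCP 5000 filter entry, since the
actual filter and entry names were generated when the contract was created earlier in the lab:

  GET https://<APIC-IP>/api/node/mo/uni/tn-Tenant5/brc-App_Contract.json?query-target=subtree

  {"vzEntry": {"attributes": {"etherT": "ip", "prot": "tcp",
                              "dFromPort": "5000", "dToPort": "5000"}}}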
Important for MAC-Users: Control + option/alt key combination might NOT allow
you to exit the console session. Hold down Control + Command + Option/Alt until
you are released from the console window.
We will now access the App_Server and do an SSH test. What do you expect will work?
53. From the Hosts and Clusters menu, right click on App-ServerX (where X is your pod
number) and select Open Console.
55. SSH will not work, the same as before. Let's try SSH on port 5000:
ssh -p 5000 192.168.10.(100+X) (where X is your pod number)
Note: This time, SSH on port 5000 did not work. Why? Let's check the contract in the APIC.
Note: Pinging Web-ServerX at 192.168.10.(100+X) will work as an exception.
56. Go back to your Tenant TenantX in the APIC.
60. Click on Actions and select Add Consumed Contract.
Note: We could right click on Contracts and get the same choices.
61. On the contract drop down list, select the TenantX/App_Contract created earlier and
click Submit.
Note: We could have created a new contract as well.
62. Verify that App_Servers EPG is both Providing and Consuming the App_Contract.
63. Expand Application EPGs->EPG Web_Servers, right click on Contracts and select
Add Provided Contract.
Note: Previously, we made the App_Servers EPG consume the contract. We also need the
Web_Servers EPG to provide that contract.
64. From the Contract drop-down menu, select TenantX/App_Contract and click Submit.
65. Verify that Web_Servers EPG is both Providing and Consuming the App_Contract.
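Note: The same provider/consumer bindings can be pushed with Postman instead of the GUI
(see Module 2). A minimal sketch for pod 5 that adds the consumed relation on App_Servers and
the provided relation on Web_Servers; the authoritative payload is whatever the API Inspector
captures while you perform the GUI steps:

  POST https://<APIC-IP>/api/node/mo/uni/tn-Tenant5/ap-T5_AppProfile.json
  {
    "fvAp": {
      "attributes": {"name": "T5_AppProfile"},
      "children": [
        {"fvAEPg": {"attributes": {"name": "App_Servers"},
          "children": [{"fvRsCons": {"attributes": {"tnVzBrCPName": "App_Contract"}}}]}},
        {"fvAEPg": {"attributes": {"name": "Web_Servers"},
          "children": [{"fvRsProv": {"attributes": {"tnVzBrCPName": "App_Contract"}}}]}}
      ]
    }
  }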
67. Return to the App-Server console in vSphere and try again to SSH on port 5000 to
Web-ServerX by issuing the following command: ssh -p 5000 192.168.10.(100+X)
Note: Now the behavior changes and the connection is refused immediately. The difference
is that the login is being refused by the Web-Server itself, whereas previously it was
being denied by the ACI fabric. Imagine that TCP port 5000 is a middleware service. Would
you want traffic arriving at the web server on this port? Probably not, but if you do,
make sure that the Web_Servers EPG is a provider for that service.
Module 6: Configuring Ext. Layer 2 Connectivity
In a typical ACI deployment, there will be an existing Data Center infrastructure that ACI has to
integrate with. This module shows how to integrate with that infrastructure at the Layer 2 (L2)
level. The next module will focus on the existing Layer 3 (L3) connectivity.
Additional details and options can be found in Connecting Application Centric Infrastructure
(ACI) to Outside Layer 2 and 3 Networks white paper.
We are in essence extending the bridge domain out of the ACI fabric in this module.
While for this exercise and in live production use cases it might be easier to use the Configure
an interface, PC and VPC wizard available from the Quick Start section under the Fabric-
>Access Policies menu, in the interest of ensuring that you know exactly what you are
configuring, we will configure all necessary policies and pools individually.
The overall workflow for creating an access policy for connectivity is depicted below.
Note: The arrow orientation is different from most Cisco documentation, since those
documents focus on the relationships between objects. The directions below show one
possible configuration workflow.
Interface Policies control the configuration of an individual feature: CDP on/off, LLDP on/off,
LACP mode, port speed, etc. The necessary Interface Policies were already created as part of
the initial lab preparation. Details can be found in the ACI Lab Setup and Connectivity document.
Interface Profile consists of a range of interfaces sharing similar configuration so that we can
apply the same Interface Policy Group across multiple interfaces in one place.
Switch Profile defines on which Nodes we will use a specific Interface Policy Group.
Whether you connect physical or virtual servers to the Cisco ACI fabric, you define a physical
or a virtual domain. Virtual domains reference a particular virtual machine manager and a
particular pool of VLANs or VxLANs that will be used.
Attachable Access Entity Profile (AEP) connects the concept of Domains (and corresponding
VLAN/VxLANs) with interfaces that tie to that domain.
Note: For VLAN/VxLAN to function on the specific Leaf, both the AEP and End Point
Group (EPG) have to be provisioned.
The person who administers the VLAN or VxLAN space is the Infrastructure Administrator. The
person who consumes the domain is the Tenant Administrator. The Infrastructure Administrator
associates domains with a set of ports that are entitled or expected to be connected to virtualized
servers or physical servers through an Attachable Access Entity Profile (AEP).
The Tenant Administrator is in charge of creating Application Network Profiles (ANP) and
maintaining the Tenant’s resources.
vPC Explicit Protection group defines the vPC domain ID and which switches participate in
the domain. That configuration is not needed for the single interface connectivity represented in
this module.
Note: To connect the ACI fabric to Nexus devices, typically we would need only one of each
object listed above. Due to lab scaling, up to 16 times as many objects will be created (one
set per pod).
In this Module, in Task 1 you will configure single interface (not port channel) connectivity with
the external Nexus 5548 by creating an access policy and then, in Task 2, tie the Nexus L2
Outside Bridged Domain with the Application Network Profile you created earlier in the lab to
enable the traffic between VMs and Hosts behind Nexus.
Note: If you want to save lab time, you can configure this module's task using Postman.
Please refer back to Module 2 for details on how to use this tool.
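As a reminder from Module 2, every Postman session starts with an authentication request
against the APIC; the APIC returns a token that is stored as the APIC-cookie and reused on
subsequent requests. A minimal sketch (the APIC address and password are placeholders for
your lab values):

  POST https://<APIC-IP>/api/aaaLogin.json
  {
    "aaaUser": {
      "attributes": {"name": "admin", "pwd": "<your-lab-password>"}
    }
  }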
Task 1: Create Access Policy to Nexus Switch
2. Expand Interface Policies, right click on Policy Groups and select Create Access Port
Policy Group.
Note: To check the policies that were already created, or to see the default policies, expand
the feature you want to see by clicking the + sign next to it and selecting the policy.
3. Type TX_L2_Out (where X is your pod number) for the Name, select the CDP_Enable and
LLDP_Disable policies, and click Submit.
Note: All other policies will automatically leverage default values.
Note: If the Attachable Access Entity Profile (AEP) were already created, this would be the
place to connect it. We will connect the AEP and the Interface Policy Group later, when we
create the AEP.
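Note: If you are scripting this step with Postman instead, the equivalent object is an
infraAccPortGrp referencing the pre-created CDP and LLDP interface policies. A sketch for
pod 5; class and attribute names reflect the ACI object model, so confirm them against what
API Inspector captures on your version:

  POST https://<APIC-IP>/api/node/mo/uni/infra/funcprof/accportgrp-T5_L2_Out.json
  {
    "infraAccPortGrp": {
      "attributes": {"name": "T5_L2_Out"},
      "children": [
        {"infraRsCdpIfPol":  {"attributes": {"tnCdpIfPolName":  "CDP_Enable"}}},
        {"infraRsLldpIfPol": {"attributes": {"tnLldpIfPolName": "LLDP_Disable"}}}
      ]
    }
  }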
4. Right click on Profiles and select Create Interface Profile.
5. Type ip_AccX_L2_N5K as a name (where X is your pod number) and click + to specify
Interface Selectors.
6. Enter the Interface Name and Interface ID based on the table below, Select the
TX_L2_Out Access Port Policy Group you just created (where X is your pod number)
and click OK.
Pod Name Int ID Pod Name Int ID
Pod1 Port17 1/17 Pod9 Port25 1/25
Pod2 Port18 1/18 Pod10 Port26 1/26
Pod3 Port19 1/19 Pod11 Port27 1/27
Pod4 Port20 1/20 Pod12 Port28 1/28
Pod5 Port21 1/21 Pod13 Port29 1/29
Pod6 Port22 1/22 Pod14 Port30 1/30
Pod7 Port23 1/23 Pod15 Port31 1/31
Pod8 Port24 1/24 Pod16 Port32 1/32
Note: The Screenshot below is from pod5.
7. Click SUBMIT to complete the Interface Profile.
Note: We just created the Interface Profile for a single interface that will tie to the Nexus
5548 via the port specified in the table. Next, we will define from which Switch we will
use this port.
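Note: The Postman equivalent of steps 4 through 7 is an infraAccPortP containing one
interface selector and a port block, tied back to the policy group created above. A sketch for
pod 5 (interface 1/21 per the table); substitute your own pod's names and port:

  POST https://<APIC-IP>/api/node/mo/uni/infra/accportprof-ip_Acc5_L2_N5K.json
  {
    "infraAccPortP": {
      "attributes": {"name": "ip_Acc5_L2_N5K"},
      "children": [
        {"infraHPortS": {
          "attributes": {"name": "Port21", "type": "range"},
          "children": [
            {"infraPortBlk": {"attributes": {"name": "block1",
              "fromCard": "1", "toCard": "1", "fromPort": "21", "toPort": "21"}}},
            {"infraRsAccBaseGrp": {"attributes":
              {"tDn": "uni/infra/funcprof/accportgrp-T5_L2_Out"}}}
          ]}}
      ]
    }
  }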
9. Click + next to the Associated Interface Selector Profile to associate the Interface
Profile with the Switch Profile.
10. Select Interface Profile ip_AccX_L2_N5K created in the previous steps and click
Submit.
Up until now, we have configured everything on the left-hand side of the workflow diagram
referenced at the beginning of this module.
In the final steps we will:
1) Create the VLAN Pool and the L2 Domain for Nexus connectivity.
2) Tie that L2 Domain with the Interface Policy Group association using an Attachable
Access Entity Profile (AEP).
12. Type vp_TX_L2Out (where X is your pod number) as a Name, select Static Allocation
and click + to specify the VLAN range.
13. We are using a single VLAN with the number 2000+X (where X is your pod number) per
pod. Type 2000+X in both Range fields and click OK.
Example: For pod 13, the VLAN range will be 2013-2013.
Note: The following screenshot shows input for pod5. This VLAN will operate between
the ACI Border Leaf and the Nexus 5548 Switch.
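Note: For reference, the same static VLAN pool can be created with a single Postman request.
A sketch for pod 5 (VLAN 2005); the DN embeds the pool name and its allocation mode:

  POST https://<APIC-IP>/api/node/mo/uni/infra/vlanns-[vp_T5_L2Out]-static.json
  {
    "fvnsVlanInstP": {
      "attributes": {"name": "vp_T5_L2Out", "allocMode": "static"},
      "children": [
        {"fvnsEncapBlk": {"attributes": {"from": "vlan-2005", "to": "vlan-2005"}}}
      ]
    }
  }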
16. Type TX_L2_ExtDom (where X is your pod number) as a name, select vp_TX_L2Out-
static for the VLAN pool, select TenantX for a Security Domain and click SUBMIT.
Now we will create the Attachable Access Entity Profile to connect the external bridged
domain and its VLANs with the interface properties we created earlier in this task.
17. Expand Global Policies, right click on Attachable Access Entity Profiles and select
Create Attachable Access Entity Profile.
18. Type TX_AEP as AEP name and click + to associate with the domain.
19. Select TX_L2_ExtDom as the L2 External domain we want to tie to this AEP and click
Update.
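Note: Steps 16 through 19, plus the AEP-to-policy-group binding mentioned earlier in this
task, map to three small REST objects: the external L2 domain referencing the VLAN pool, the
AEP referencing that domain, and the relation from the access port policy group to the AEP.
A sketch for pod 5 (confirm the payloads against what API Inspector captures for your version):

  POST https://<APIC-IP>/api/node/mo/uni/l2dom-T5_L2_ExtDom.json
  {"l2extDomP": {"attributes": {"name": "T5_L2_ExtDom"},
    "children": [{"infraRsVlanNs": {"attributes":
      {"tDn": "uni/infra/vlanns-[vp_T5_L2Out]-static"}}}]}}

  POST https://<APIC-IP>/api/node/mo/uni/infra/attentp-T5_AEP.json
  {"infraAttEntityP": {"attributes": {"name": "T5_AEP"},
    "children": [{"infraRsDomP": {"attributes": {"tDn": "uni/l2dom-T5_L2_ExtDom"}}}]}}

  POST https://<APIC-IP>/api/node/mo/uni/infra/funcprof/accportgrp-T5_L2_Out.json
  {"infraAccPortGrp": {"attributes": {"name": "T5_L2_Out"},
    "children": [{"infraRsAttEntP": {"attributes": {"tDn": "uni/infra/attentp-T5_AEP"}}}]}}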
This concludes the setup for the access policy to enable the connection between the ACI fabric
and the Nexus Switching infrastructure.
So far we have the EPGs that define the App_Servers and Web_Servers. To enable connectivity
to the outside L2 network for the applications, we need to create an EPG to represent the
outside L2 network and use contracts to allow classified traffic between the EPGs.
Task 2: Enable and Verify App Traffic with the External L2
In the previous Task, we provided policies for overall connectivity between ACI fabric and
Nexus Switch.
In this Task, we will define and enable connectivity from the Tenant and Application Network
Profile (ANP) perspective.
22. Go to your Tenant TenantX (where X is your pod number).
24. Verify that this Application profile consists of the two EPGs you created previously.
Note: As of ACI version 1.2, we can connect EPGs, external L2 and L3 networks, and various
existing VMM domains with contracts using a drag-and-drop method.
25. Expand Networking, right click on External Bridged Networks and select Create
Bridged Outside.
26. Enter the following information and click ADD (Do not click Next yet!).
Name: TX_L2_Out (Where X is your pod number)
External Bridged Domain: TX_L2_ExtDom
Bridge Domain: TenantX/VMData-App
Encap: See table below
Path Type: Port
Path: See table below
Note: The VLAN must be in the VLAN pool you created earlier or you will get an invalid-vlan
fault. The screenshot is from pod5.
Note: The process is simplified in ACI version 1.2 compared to older versions.
Pod Encap Path Pod Encap Path
Pod1 vlan-2001 Node-101/eth1/17 Pod9 vlan-2009 Node-101/eth1/25
Pod2 vlan-2002 Node-101/eth1/18 Pod10 vlan-2010 Node-101/eth1/26
Pod3 vlan-2003 Node-101/eth1/19 Pod11 vlan-2011 Node-101/eth1/27
Pod4 vlan-2004 Node-101/eth1/20 Pod12 vlan-2012 Node-101/eth1/28
Pod5 vlan-2005 Node-101/eth1/21 Pod13 vlan-2013 Node-101/eth1/29
Pod6 vlan-2006 Node-101/eth1/22 Pod14 vlan-2014 Node-101/eth1/30
Pod7 vlan-2007 Node-101/eth1/23 Pod15 vlan-2015 Node-101/eth1/31
Pod8 vlan-2008 Node-101/eth1/24 Pod16 vlan-2016 Node-101/eth1/32
28. To create an External EPG, click + next to the External EPG Networks.
29. For the Name, type TX_L2Out_EPG (where X is your pod number) and click OK.
We defined the external Bridged Domain, the Bridge Domain on the ACI side, and the EPG for
our external L2 connectivity within the Tenant. To enable traffic, we have to associate a
policy (contract) between the external L2 EPG and the servers' EPGs.
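Note: The Bridged Outside built in steps 25 through 29 corresponds to a single l2extOut
object inside the tenant. A sketch for pod 5 (VLAN 2005 and path Node-101/eth1/21 from the
tables above); the node/interface profile names are placeholders, and the authoritative
payload is what API Inspector records while you run the wizard:

  POST https://<APIC-IP>/api/node/mo/uni/tn-Tenant5.json
  {
    "l2extOut": {
      "attributes": {"name": "T5_L2_Out"},
      "children": [
        {"l2extRsEBd":      {"attributes": {"tnFvBDName": "VMData-App", "encap": "vlan-2005"}}},
        {"l2extRsL2DomAtt": {"attributes": {"tDn": "uni/l2dom-T5_L2_ExtDom"}}},
        {"l2extLNodeP": {"attributes": {"name": "default"}, "children": [
          {"l2extLIfP": {"attributes": {"name": "default"}, "children": [
            {"l2extRsPathL2OutAtt": {"attributes":
              {"tDn": "topology/pod-1/paths-101/pathep-[eth1/21]"}}}]}}]}},
        {"l2extInstP": {"attributes": {"name": "T5_L2Out_EPG"}}}
      ]
    }
  }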
32. Click + next to Provided Contracts, select the App_Contract and click Update.
33. Click the + next to Consumed Contracts, select the App_Contract and click Update.
Note: The EPG TX_L2Out_EPG now has a relationship with the App_Servers EPG using the
App_Contract. We can test that relationship by reaching an outside IP address.
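Note: Steps 32 and 33 translate to two contract relations on the external L2 EPG, which could
also be posted with Postman. A sketch for pod 5:

  POST https://<APIC-IP>/api/node/mo/uni/tn-Tenant5/l2out-T5_L2_Out/instP-T5_L2Out_EPG.json
  {
    "l2extInstP": {
      "attributes": {"name": "T5_L2Out_EPG"},
      "children": [
        {"fvRsProv": {"attributes": {"tnVzBrCPName": "App_Contract"}}},
        {"fvRsCons": {"attributes": {"tnVzBrCPName": "App_Contract"}}}
      ]
    }
  }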
37. Go back to your vCenter. If you are logged out, IP address and credentials are below.
38. From the Hosts and Clusters menu, right click on App-ServerX and Open Console.
39. If you were logged out, log back in using student/1234Qwer credentials and Ping
192.168.11.200
Note: 192.168.11.200 is the IP address of a loopback on the Nexus 5548, simulating a host
behind it.
Congratulations! You have successfully attached an external L2 domain to the ACI fabric.
Module 7: Configuring Ext. Layer 3 Connectivity
In the previous Module, we connected the ACI fabric to the external L2 network. Whether we
want to connect to the WAN or just need routing within our Data Center, external L3
connectivity will be needed.
In this Module, you will create an External Layer 3 connection and import networks from the
external OSPF Domain. The Nexus 5548 switch connected to Leaf2 will represent that
external OSPF source.
To enable application traffic, we will create an external L3 EPG and establish its relationship
with the other EPGs using a contract.
Note: If you want to save lab time, you can configure this module task by task using
Postman. Please refer back to Module 2 for details on how to use this tool.
In ACI version 1.1, there are a few major additions to the external L3 connectivity:
- In addition to iBGP and OSPF NSSA areas, as of 1.1 we can use EIGRP, OSPF regular areas
and eBGP for external L3 connectivity.
- As of version 1.1, the ACI fabric allows for transit routing.
Task 2: Create Layer 3 External Connectivity
In this task, you will enable External Routing using OSPF within your Tenant. We will connect
the ACI Fabric with the Nexus 5548 and create the networking constructs needed for that
connectivity, such as the interface profile, node profile, EPG and OSPF parameters.
In the following task, we will focus on the application connectivity aspect via the Application
Network Profile (ANP) to enable communication between the external L3 EPG and our
Web_Servers EPG.
2. Enter the Name N5K-L3-Out, check the OSPF checkbox and set the OSPF Area ID to 1. Select
TenantX/TX_Production (where X is your pod number) as the VRF and click + under
Nodes and Interfaces Protocol Profiles.
Note: TX_Production is our Tenant’s private network (VRF) that we created in the
Module 3.
3. For the Node Name, enter Leaf102 and click + next to Nodes.
4. Select Leaf102 (Node-102) as a Node ID, enter 1.0.0.4 for Router ID and click OK.
Note: Defining OSPF properties of the node (Leaf).
7. Refer to the following chart to select the proper interface on Leaf102 as the Path.
Enter 172.16.2.2/24 as the IP address, set the MTU to 1500 bytes and click OK.
Note: Making sure that the MTU matches the N5K configuration ensures that the OSPF
neighbor adjacency can be established.
Pod Interface Pod Interface Pod Interface Pod Interface
Pod1 1/17 Pod5 1/21 Pod9 1/25 Pod13 1/29
Pod2 1/18 Pod6 1/22 Pod10 1/26 Pod14 1/30
Pod3 1/19 Pod7 1/23 Pod11 1/27 Pod15 1/31
Pod4 1/20 Pod8 1/24 Pod12 1/28 Pod16 1/32
10. Click Next to proceed to Step 2 of the creation of External Routed Networking.
11. Click + next to External EPG Networks to create EPG for L3 External Network.
14. Click OK to complete the Create External Network part.
15. Click Finish to finish configuring External Routed Network.
16. Click on Tenant TenantX->Networking and notice the new External Routed Network
N5K-L3-Out we just created, associated with our Tenant's private network TX_Production.
The basic L3 connectivity and EPGs are created in this task. We will focus on enabling our
application to connect with the External L3 in the following task.
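Note: If you are scripting this task with Postman, the wizard you just completed produces one
l3extOut object under the tenant. The sketch below is for pod 5 (Leaf102, eth1/21,
172.16.2.2/24, OSPF area 1, VRF T5_Production). The interface profile name and the 0.0.0.0/0
external subnet are assumptions (your wizard entries may differ), and the exact OSPF area
settings come from the wizard, so treat this only as an outline of the object tree and compare
it with what API Inspector captures:

  POST https://<APIC-IP>/api/node/mo/uni/tn-Tenant5.json
  {
    "l3extOut": {
      "attributes": {"name": "N5K-L3-Out"},
      "children": [
        {"l3extRsEctx": {"attributes": {"tnFvCtxName": "T5_Production"}}},
        {"ospfExtP":    {"attributes": {"areaId": "0.0.0.1"}}},
        {"l3extLNodeP": {"attributes": {"name": "Leaf102"}, "children": [
          {"l3extRsNodeL3OutAtt": {"attributes":
            {"tDn": "topology/pod-1/node-102", "rtrId": "1.0.0.4"}}},
          {"l3extLIfP": {"attributes": {"name": "Leaf102_IfP"}, "children": [
            {"ospfIfP": {"attributes": {}}},
            {"l3extRsPathL3OutAtt": {"attributes":
              {"tDn": "topology/pod-1/paths-102/pathep-[eth1/21]",
               "ifInstT": "l3-port", "addr": "172.16.2.2/24", "mtu": "1500"}}}]}}]}},
        {"l3extInstP": {"attributes": {"name": "L3-Out-EPG"}, "children": [
          {"l3extSubnet": {"attributes": {"ip": "0.0.0.0/0"}}}]}}
      ]
    }
  }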
Task 3: Enable App Traffic with the External L3
We established general connectivity with external L3 in the previous task. In this task, the focus
is on enabling communication between Web_Servers EPG and L3 Ext_EPG via Contract.
We will start by creating several Filter entries that will be part of the Contract.
Note: We are doing a configuration very similar to what we did when we created the
App_Contract earlier in the lab, just without using the wizard.
18. Expand Security Policies, right click Filters and select Create Filter.
19. Enter Name Web_Service_Ports and click + to add the first Entry.
20. Name the Filter Entry ICMP, set EtherType to IP and IP Protocol to icmp. Click Update.
21. Click + to add the second entry. Name it SSH, set EtherType to IP and IP Protocol to tcp.
Set the Destination Port/Range to 22 by typing the value directly in both fields and click
Update.
Note: Option Stateful refers to the new distributed stateful firewall feature enforced by
Cisco AVS (Application Virtual Switch). For more details check this article.
22. Click + to add the third entry. Name it HTTP, set EtherType to IP and IP Protocol to tcp.
Set the Destination Port/Range to http from the drop-down menu and click Update.
24. Next we will create a new Contract that uses this Filter. Expand Security Policies, right
click Contracts and select Create Contract.
26. Enter Web_Subject as the Subject Name. Check both Reverse Filter Ports and Apply Both
Directions. Click + and add the TenantX/Web_Service_Ports filter you just created. Click
Update.
Next, we will associate the Contract with the EPGs. There are several ways we can do that.
So far in the lab, we associated a contract with an EPG during the creation of the application
profile in Module 4. In Module 6, we associated a contract by designating a specific EPG to be
either the provider or the consumer of the contract, or both.
In this module, we will use the drag-and-drop method available as of version 1.2.
All of the above methods accomplish the same result; the choice is a matter of personal
preference.
We want the Web_Servers EPG to dictate what kind of traffic can reach it from the outside,
i.e. to provide the Contract. The L3 External EPG will consume that contract.
30. Select TenantX/N5K-L3-Out/L3-Out-EPG for the External Network and click OK.
31. Click on the Contract icon, drag it first to the Web_Servers EPG and then to the
L3-Out-EPG.
32. Verify that L3-Out-EPG is the Consumer EPG and epg-Web_Servers is the Provider EPG.
Select the Choose An Existing Contract radio button, select TenantX/Web_Contract,
uncheck No Filter and select Web_Contract/Web_Subject as the Contract Subject.
Click OK.
33. Click on the Contract icon to verify the provider/consumer relationship and finish the
configuration by clicking Submit.
Note: As a challenge, can you find where within your tenant you can see all contracts
associated with L3-Out-EPG? Can you find L3-Out-EPG object?
So far, we configured the ACI OSPF properties to connect with the external L3 network and
configured the Application Profile to include a contract between the L3-Out and Web_Servers
EPGs.
The last step is to define which subnets will be advertised via OSPF. Since Bridge Domains
carry the subnet information, this is what we will configure next.
34. Expand Networking->Bridge Domains and click on VMData-Web. Select L3
Configuration tab and click + for Associated L3 Outs.
Note: We are binding L3 Outside routed network to VMData-Web bridge domain.
38. If Policy Usage Warning pops up, click Submit Changes.
39. Repeat the same Bridge Domain modification steps for VMData-App. Associate
TenantX/N5K-L3-Out as the L3 Out with VMData-App and advertise the subnet
192.168.11.1/24 externally.
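Note: Steps 34 through 39 amount to two small changes on each Bridge Domain: marking the
subnet as advertised externally (scope public) and adding the relation to the L3 Out. A sketch
for pod 5 showing the VMData-App side; repeat with 192.168.10.1/24 for VMData-Web:

  POST https://<APIC-IP>/api/node/mo/uni/tn-Tenant5/BD-VMData-App.json
  {
    "fvBD": {
      "attributes": {"name": "VMData-App"},
      "children": [
        {"fvSubnet":    {"attributes": {"ip": "192.168.11.1/24", "scope": "public"}}},
        {"fvRsBDToOut": {"attributes": {"tnL3extOutName": "N5K-L3-Out"}}}
      ]
    }
  }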
We are done configuring External L3 Connectivity using OSPF. In the next task we will verify
our configuration.
First we will check OSPF connectivity between the Border Leaf and Nexus 5548.
41. Expand POD 1->Leaf102->Protocols. Select OSPF and click on the Operational tab.
Verify that OSPF process for your tenant TenantX (where X is your pod number) is UP.
42. Expand OSPF and click on Ospf for VRF-TenantX: TX_Production (where X is your
pod number). Verify that the state with the external Nexus 5548 is Full.
43. Expand OSPF Databases and select Routes. Verify the presence of the routes 0.0.0.0/0,
172.16.2.0/24, and 10.0.X.1/32 (where X is your pod number).
Note: 10.0.X.1 is the N5K loopback interface simulating a remote host's IP address.
44. Return to vSphere, go to the Web_Server console and ping the IP Prefix 10.0.X.1 (where
X is your pod number). This should be successful.
Note: It might take some time for all OSPF and MP-BGP info to propagate.
45. From the App_Server console, ping the same address, 10.0.X.1 (where X is your pod
number). This ping should fail even though we did make the App_Server subnet public.
Note: Consider why this ping fails but the previous one succeeds. The answer is in a few
steps.
46. Connect to the N5K using the Putty SSH client on your desktop. Use IP address
192.168.199.11, Username: student and Password:1234Qwer.
47. Explore routing and set up of Nexus 5548. Perform testing from the Nexus side.
Note: Your access is limited to show, ping and ssh commands.
Suggested commands:
# show run ospf
# show ip route vrf tenantX (notice that both subnets 192.168.10.0/24 and
192.168.11.0/24 are in the routing table, so that is not the reason for the failed ping)
# show ip ospf interface brief vrf tenantX
# ping 192.168.10.(100+X) vrf tenantX
(X is your pod number, Example: for pod 2 fourth octet is 102)
# ssh [email protected].(100+X) vrf tenantX (password is 1234Qwer, type exit to
close ssh session)
# ssh [email protected].(100+X) vrf tenantX (this one will fail, why?)
Note: The answer for SSH is the same as it is for the failed ping. The SSH to the Web_Server
is successful because there is a contract (which includes SSH) between the External L3
EPG and the Web_Servers EPG.
Ping and SSH to the App_Server fail since there is no contract between the corresponding
EPGs. That is the nature of the whitelist model: no contract, no data traffic allowed.
48. Telnet to the non-Border Leaf 101 at 192.168.199.6 using credentials admin/1234Qwer
and explore routing from the Leaf 101 perspective.
Suggested Commands:
# show vrf all
# show ip route vrf TenantX:TX_Production (replace 2 X’s with your pod number)
Note: You can use the output of show vrf all command to specify vrf by copying and
pasting.
# show ip route vrf overlay-1
Appendix A: Troubleshooting
Cause: Occasionally, during the lab reset process, vCenter and the Host lose their credential
trust.
6. A new window will pop up. Enter the username root and the password 1234Qwer, and click
Next.