Dell EMC PowerEdge MX VMware ESXi with

SmartFabric Services
Deployment Guide
H18333.2

Abstract
This document guides the reader through the process of configuring VMware ESXi
and virtual machines on the Dell EMC PowerEdge MX platform running SmartFabric
Services. This guide describes how ESXi hypervisors communicate within the MX
platform and how traffic is passed between ESXi hypervisors inside and outside the
MX platform. Specific storage scenarios are also explained in detail that allow the reader
to deploy VMware setups.

Dell Technologies Networking Solutions

May 2021
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be trademarks of their respective owners.
Contents

Chapter 1: Overview...................................................................................................................... 5
Introduction...........................................................................................................................................................................5
SmartFabric Services....................................................................................................................................................5
Hardware overview............................................................................................................................................................. 5
Rack switches used in this guide..................................................................................................................................... 7
Servers used in this guide..................................................................................................................................................7
MX9116n topology used in this guide..............................................................................................................................8
Alternative MX5108n topology for single chassis...................................................................................................... 10

Chapter 2: Configuring Leaf Switches..........................................................................................12


Switch configuration commands.................................................................................................................................... 12

Chapter 3: Configure SmartFabric Services................................................................................. 16


Configure PowerEdge MX using OME-M.................................................................................................................... 16

Chapter 4: Configure vCenter..................................................................................................... 20


vCenter settings................................................................................................................................................................20

Chapter 5: Network Connectivity Between VMs.......................................................................... 24


MX server sled VM to remote server VM connectivity........................................................................................... 24
Configuration steps...........................................................................................................................................................24
VM to VM communication between MX server sleds.............................................................................................. 27

Chapter 6: NPG Mode Storage Attachment................................................................................. 29


Connect PowerEdge MX VMs to storage over Fibre Channel switch ................................................................29

Chapter 7: Direct Attach Storage................................................................................................. 31


Use Direct Attach Method to connect MX VMs to storage................................................................................... 31

Chapter 8: Validation from OME-M and CLI................................................................................. 32


Dell EMC PowerSwitch S5232F-ON validation......................................................................................................... 32
MX9116n FSE Fibre Channel validation commands...................................................................................................34
Validation of SmartFabric deployment using OME-M..............................................................................................35
MX9116n FSE CLI commands.........................................................................................................................................38
ESXi validation using VM to VM ping...........................................................................................................................40

Chapter 9: Troubleshooting......................................................................................................... 42
Troubleshooting VM to VM traffic................................................................................................................................42

Chapter 10: Other VMware Considerations.................................................................................. 46

Appendix A: Full Switch Mode Example........................................................................................47

Appendix B: Configure a Virtual Distributed Switch..................................................................... 51

Appendix C: Storage Considerations............................................................................................53

Appendix D: Versions Used in this Guide......................................................................................58

Appendix E: References.............................................................................................................. 60
Dell Technologies documentation................................................................................................................................. 60
OME-M and OS10 compatibility and documentation............................................................................................... 60
Support and feedback.......................................................................................................................................................61

1
Overview

Introduction
Our vision at Dell Technologies is to be the essential infrastructure company from the edge, to the core, and to the cloud. Dell
EMC Networking ensures modernization for current applications and for the emerging cloud-native world.
Dell Technologies is committed to disrupting the fundamental economics of the market with an open strategy that gives you
the freedom of choice for networking operating systems and top-tier merchant silicon. The Dell Technologies strategy enables
business transformations that maximize the benefits of collaborative software and standards-based hardware, including lowered
costs, flexibility, freedom, and security. Dell Technologies provides further customer enablement through validated deployment
guides which demonstrate these benefits while maintaining a high standard of quality, consistency, and support.
NOTE: This guide may contain language that is not consistent with Dell's current guidelines. Dell plans to update this guide in subsequent releases to revise the language accordingly.

SmartFabric Services
As part of the MX platform, the Dell EMC SmartFabric OS10 network operating system includes a fully integrated network
automation and orchestration solution called SmartFabric Services.
A SmartFabric is a logical entity that contains a collection of physical resources (such as servers and switches) and logical resources (such as networks, templates, and uplinks). On PowerEdge MX, switches in SmartFabric mode operate as a simple Layer 2 input/output aggregation device, which enables interoperability with equipment from other network vendors. A SmartFabric provides I/O aggregation, plug-and-play fabric deployment, and a single interface to manage all switches in the fabric as a single logical switch. This guide demonstrates the use of SmartFabric Services alongside VMware ESXi with a deployment example.
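For reference, the operating mode of an MX I/O module can be confirmed from the OS10 CLI; the same command and output appear in the validation chapter of this guide.

MX9116n-A1# show switch-operating-mode

Switch-Operating-Mode : Smart Fabric Mode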

Hardware overview
NOTE: Verify that the software requirements provided in Appendix D are met before performing the steps provided in this
document.

Dell EMC PowerEdge MX


The Dell EMC PowerEdge MX is a unified, high-performance data center infrastructure. It provides the agility, resiliency, and
efficiency to optimize a wide variety of traditional and new, emerging data center workloads and applications. With its kinetic
architecture and agile management, the PowerEdge MX platform dynamically configures the compute, storage, and fabric,
increases team effectiveness, and accelerates operations. The responsive design delivers the innovation and longevity that
customers need for their IT and digital business transformations.
As part of the PowerEdge MX platform, the Dell EMC SmartFabric OS10 network operating system includes SmartFabric
Services. SmartFabric Services is a network automation and orchestration solution that is fully integrated with the PowerEdge
MX platform.

Figure 1. Dell EMC PowerEdge MX

Dell EMC Networking MX9116n Fabric Switching Engine (FSE)


The Dell EMC Networking MX9116n Fabric Switching Engine (FSE) is a scalable, high-performance, low latency 25 GbE switch
purpose-built for the PowerEdge MX platform. In addition to sixteen internal 25 GbE ports, the MX9116n FSE also provides two
100 GbE QSFP28 ports, two 100 GbE QSFP28 unified ports, and twelve 2x100 GbE QSFP28 Double Density (QSFP28-DD) ports. In this
document, two MX9116n FSEs are used as I/O modules connected to the S5232F-ON leaf switches. The two 100 GbE QSFP28
unified ports can be configured for native Ethernet or native Fibre Channel.

Figure 2. Dell EMC Networking MX9116n Fabric switching engine (FSE)

Dell EMC Networking MX5108n Ethernet switch


The Dell EMC Networking MX5108n Ethernet switch is targeted at small PowerEdge MX platform deployments of one or
two chassis. While not a scalable switch, it still provides high-performance and low latency with a non-blocking switching
architecture. In addition to eight internal 25 GbE ports, the MX5108n provides one 40 GbE QSFP+ port, two 100 GbE QSFP28 ports, and four 10 GbE RJ45 BASE-T ports. In this document, two MX5108n switches may be used as I/O modules connected to the S5232F-ON leaf switches.

Figure 3. Dell EMC Networking MX5108n Ethernet switch

Dell EMC Networking MX7116n Fabric Expander Module (FEM)


The Dell EMC Networking MX7116n Fabric Expander Module (FEM) acts as an Ethernet repeater, taking signals from attached
compute sleds and repeating them to the associated lane on the external QSFP28-DD connector. The MX7116n FEM provides
two QSFP28-DD interfaces, each providing eight 25 Gbps connections to the chassis.

There is no operating system or switching ASIC on the MX7116n FEM. Updates for the MX7116n FEM are packaged with
OME-M updates. Also, there is no management or user interface, which makes the MX7116n FEM maintenance-free.
The MX7116n FEM must be connected to an MX9116n FSE or another supported switch, where all logical operations occur, in order to function. The PowerEdge MX7000 chassis supports up to four MX7116n FEM units in Fabric A, Fabric B, or both.

Figure 4. Dell EMC Networking MX7116n Fabric expander module (FEM)

Rack switches used in this guide


Dell EMC PowerSwitch S5232F-ON
The Dell EMC PowerSwitch S5232F-ON is a 1U, multilayer switch with thirty-two 100 GbE QSFP28 ports and two 10 GbE SFP+
ports. This guide uses two S5232F-ON switches in each rack as leaf switches.

Figure 5. Dell EMC PowerSwitch S5232F-ON

Dell EMC PowerSwitch S3048-ON


The Dell EMC PowerSwitch S3048-ON is a 1U switch with 48x 1 GbE BASE-T ports and 4x 10 GbE SFP+ ports. This guide uses
one S3048-ON switch in each rack for out-of-band (OOB) management traffic.

Figure 6. Dell EMC PowerSwitch S3048-ON

Servers used in this guide


Dell EMC PowerEdge MX740c
The Dell EMC PowerEdge MX740c is a two-socket, full-height, single-width compute sled with impressive performance and
scalability. It is ideal for dense virtualization environments and can serve as a foundation for collaborative workloads. An
MX7000 chassis supports up to eight MX740c compute sleds.

Figure 7. Dell EMC PowerEdge MX740c

Dell EMC PowerEdge R730xd


The Dell EMC PowerEdge R730xd is a 2-RU, two-socket server platform. The R730xd allows up to thirty 2.5" SSDs or HDDs
with SAS, SATA, and NVMe support. In this guide, three R730xd servers are used for vCenter and VMs.

Figure 8. Dell EMC PowerEdge R730xd

MX9116n topology used in this guide


The Logical topology and Physical layout and wiring figures show the logical and physical topologies that are validated for the examples in this guide.

Figure 9. Logical topology

Two MX7000 chassis are in the Multi-Chassis Management group. The MX9116n FSEs are connected to MX7116n FEMs. The MX9116n FSEs form a VLTi connected on QSFP28-DD ports 1/1/37 and 1/1/39. The uplinks from the MX9116n FSEs are connected to the S5232F-ON switches using ports 1/1/41 and 1/1/42.
The Dell EMC PowerSwitch S5232F-ON switches form a VLTi connected on ports 1/1/29 and 1/1/31. Port group breakout is configured on port 1/1/5 (100 GbE) to create 4x 25 GbE ports. Each S5232F-ON switch is then connected to three R730xd servers using 25 GbE connections on ports 1/1/5:1, 1/1/5:2, and 1/1/5:3. See Physical layout and wiring for additional details on the physical wiring that is used.
When SmartFabric mode is used, the VLTi between the two MX9116n FSEs shown in this diagram requires 200 Gbps of bandwidth for VLT to function.

Figure 10. Physical layout and wiring

Alternative MX5108n topology for single chassis


The figure below shows an alternate logical topology for a single-chassis environment using two Dell EMC Networking MX5108n
Ethernet switches.

Figure 11. Alternate logical topology using MX5108n in a single chassis

Using MX5108n in a single chassis


Two MX5108n I/O modules are connected to the MX740c compute sleds using internal connections. The MX5108n switches form a VLTi connected on ports 1/1/9 and 1/1/10. Port 1/1/11 is used for the uplink and is configured for breakout from 100 GbE to 4x 25 GbE. The MX5108n is then connected to the upstream S5232F-ON switch on ports 1/1/11:1 and 1/1/11:2.
The Dell EMC PowerSwitch S5232F-ON switches are connected in a VLTi using ports 1/1/29 and 1/1/31. Port group breakout is configured on port 1/1/5 from 100 GbE to 4x 25 GbE. Each S5232F-ON switch is then connected to the three R730xd servers using 25 GbE connections on ports 1/1/5:1, 1/1/5:2, and 1/1/5:3.
While the examples provided in this guide do not use MX5108n switches, the same steps can be used to configure this scenario.

2
Configuring Leaf Switches
This section covers the switch configuration for the Dell EMC PowerSwitch S5232F-ON switches running SmartFabric OS10 in
Full Switch mode. In the Logical topology, the S5232F-ON switches are connected to the Dell EMC Networking MX9116n FSE
in the PowerEdge MX chassis, and to Dell PowerEdge R730xd servers. Run the commands in the following sections to complete
the configuration of both leaf switches. The port numbers used in the configuration commands correspond to those shown in
Physical layout and wiring.

Switch configuration commands


Use the commands provided in this section to configure the hostname, OOB management IP address, and default gateway.

General settings
NOTE: The default spanning tree settings are used in this deployment. In SmartFabric OS10, RPVST+ is enabled by default, and RPVST+ VLAN priority numbers start at 32769. If required, modify the spanning tree settings for your environment. It is recommended that LLDP remain enabled on each interface.
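If the defaults must be changed, the following is a minimal sketch of adjusting the RPVST+ priority for one VLAN from the OS10 CLI; the VLAN and the priority value (a multiple of 4096) are examples only, and the result can be checked with the show spanning-tree brief command shown in Chapter 8.

! Example only: make this leaf the RPVST+ root for VLAN 1611; repeat per VLAN as required
spanning-tree vlan 1611 priority 4096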

S5232F-Leaf1A

configure terminal
hostname S5232F-Leaf1A
interface mgmt1/1/1
no ip address
ip address 100.67.XX.XX/24
no shutdown
management route 0.0.0.0/0 100.67.XX.XX

S5232F-Leaf1B

configure terminal
hostname S5232F-Leaf1B
interface mgmt1/1/1
no ip address
ip address 100.67.YY.YY/24
no shutdown
management route 0.0.0.0/0 100.67.YY.YY

Configure VLTi
Configure the VLT between the switches using the following commands. VLT configuration involves setting a discovery interface range and discovering the VLT peer across the VLTi. Within the vlt-domain configuration, the peer leaf switch is set as the backup destination.

S5232F-Leaf1A

interface range ethernet1/1/29-1/1/31
description VLTi
no shutdown
no switchport

vlt-domain 1
backup destination 100.67.YY.YY
discovery-interface ethernet1/1/29-1/1/31

S5232F-Leaf1B

interface range ethernet1/1/29-1/1/31
description VLTi
no shutdown
no switchport

vlt-domain 1
backup destination 100.67.XX.XX
discovery-interface ethernet1/1/29-1/1/31



Configure VLANs
Run the commands in this section to configure the VLANs. These commands allow you to create the ESXi Management VLAN,
assign a unique IP address on each switch, configure the Virtual Router Redundancy Protocol (VRRP) to provide gateway
redundancy, and set the VRRP priority.
NOTE: The switch with the largest priority value becomes the primary VRRP router. Assign the same virtual address to
both switches.
Similar commands are then used to create the vMotion, vSAN, and VM Network VLANs.

S5232F-Leaf1A

interface vlan1611
description ESXi_Mgmt
no shutdown
mtu 9216
ip address 172.16.11.253/24
vrrp-group 11
priority 150
virtual-address 172.16.11.254

interface vlan1612
description vMotion
no shutdown
mtu 9216
ip address 172.16.12.253/24
vrrp-group 12
priority 150
virtual-address 172.16.12.254

interface vlan1613
description vSAN
no shutdown
mtu 9216
ip address 172.16.13.253/24
vrrp-group 13
priority 150
virtual-address 172.16.13.254

interface vlan1614
description Web_VM_Network
no shutdown
mtu 9216
ip address 172.16.14.253/24
vrrp-group 14
priority 150
virtual-address 172.16.14.254

interface vlan1615
description APP_VM_Network
no shutdown
mtu 9216
ip address 172.16.15.253/24
vrrp-group 15
priority 150
virtual-address 172.16.15.254

interface vlan1616
description DB_VM_Network
no shutdown
mtu 9216
ip address 172.16.16.253/24
vrrp-group 16
priority 150
virtual-address 172.16.16.254

S5232F-Leaf1B

interface vlan1611
description ESXi_Mgmt
no shutdown
mtu 9216
ip address 172.16.11.252/24
vrrp-group 11
priority 100
virtual-address 172.16.11.254

interface vlan1612
description vMotion
no shutdown
mtu 9216
ip address 172.16.12.252/24
vrrp-group 12
priority 100
virtual-address 172.16.12.254

interface vlan1613
description vSAN
no shutdown
mtu 9216
ip address 172.16.13.252/24
vrrp-group 13
priority 100
virtual-address 172.16.13.254

interface vlan1614
description Web_VM_Network
no shutdown
mtu 9216
ip address 172.16.14.252/24
vrrp-group 14
priority 100
virtual-address 172.16.14.254

interface vlan1615
description APP_VM_Network
no shutdown
mtu 9216
ip address 172.16.15.252/24
vrrp-group 15
priority 100
virtual-address 172.16.15.254

interface vlan1616
description DB_VM_Network
no shutdown
mtu 9216
ip address 172.16.16.252/24
vrrp-group 16
priority 100
virtual-address 172.16.16.254



Configure the switch interconnect and VLTi
Once the SmartFabric has been created on the PowerEdge MX system, create port-channel 1 on the S5232F-ON leaf
switches to connect to the MX9116n FSE.

NOTE: Ports 1/1/1 and 1/1/3 on the S5232F-ON switches are added to the port channel in the next section.

Use the switchport mode trunk command to enable the port channel to carry traffic for multiple VLANs, and allow all
VLANs on the port channel.
The VLTi uses 100 GbE interfaces; in this example, ports 1/1/29 and 1/1/31 are used on both switches. Add each interface connecting to the MX environment to the port channel as an LACP active member using the channel-group 1 mode active command.

S5232F-Leaf1A

interface port-channel 1
description To_MX_Chassis
no shutdown
switchport access vlan 1
switchport mode trunk
switchport trunk allowed vlan 1611-1616
mtu 9216
vlt-port-channel 1

interface ethernet1/1/29
description To_Leaf_1B
no shutdown
no switchport
mtu 9216

interface ethernet1/1/31
description To_Leaf_1B
no shutdown
no switchport
mtu 9216

S5232F-Leaf1B

interface port-channel 1
description To_MX_Chassis
no shutdown
switchport access vlan 1
switchport mode trunk
switchport trunk allowed vlan 1611-1616
mtu 9216
vlt-port-channel 1

interface ethernet1/1/29
description To_Leaf_1A
no shutdown
no switchport
mtu 9216

interface ethernet1/1/31
description To_Leaf_1A
no shutdown
no switchport
mtu 9216

Configure interfaces
In this topology, interface 1/1/5 on both leaf switches is connected to two R730xd servers and one R730xd jump box. Interfaces 1/1/1 and 1/1/3 are connected to the MX9116n FSEs on interfaces 1/1/41 and 1/1/42.
Configure port-group breakout for interface 1/1/5 by using the commands below.

S5232F-Leaf1A# configure terminal


S5232F-Leaf1A(config)# port-group 1/1/2
S5232F-Leaf1A(conf-pg-1/1/2)# mode eth 25g-4x

Use the switchport mode trunk command to enable ports to carry traffic for multiple VLANs. Configure the ports as trunk
(tagged) ports on VLANs 1611 through 1616 (the ESXi Management, vMotion, vSAN, and VM Network VLANs).
Interfaces 1/1/1 and 1/1/3 are added into the port channel as LACP active members.
When the configuration is complete, exit configuration mode and save the configuration with the end and write memory
commands.

S5232F-Leaf1A and S5232F-Leaf1B (the configuration is identical on both switches)

interface ethernet1/1/5:1
description R730xd_Management
no shutdown
fec off
switchport access vlan 1
switchport mode trunk
switchport trunk allowed vlan 1611-1616
mtu 9216

interface ethernet1/1/5:2
description R730xd
no shutdown
fec off
switchport access vlan 1
switchport mode trunk
switchport trunk allowed vlan 1611-1616
mtu 9216

interface ethernet1/1/5:3
description Jumpbox
no shutdown
fec off
switchport access vlan 1611
mtu 9216

interface ethernet1/1/1
description To_MX_Chassis
no shutdown
channel-group 1 mode active
no switchport
mtu 9216

interface ethernet1/1/3
description To_MX_Chassis
no shutdown
channel-group 1 mode active
no switchport
mtu 9216

end
write memory
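As a quick check after saving the configuration, the validation commands described in Chapter 8 can be run on each leaf switch, for example:

S5232F-Leaf1A# show interface status | grep up
S5232F-Leaf1A# show vlt 1
S5232F-Leaf1A# show port-channel summary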



3
Configure SmartFabric Services

Configure PowerEdge MX using OME-M


Before configuring the PowerEdge MX, review the SmartFabric mode requirements, guidelines, and restrictions in the Dell EMC
PowerEdge MX Networking Deployment Guide.
For more information about physical links between MX9116n FSE and MX7116n FEM, see the Dell EMC PowerEdge MX
Networking Deployment Guide.
For a scalable fabric using more than one MX chassis, the chassis must be in a Multi-Chassis Management (MCM) group.
Create the MCM group before configuring SmartFabric.
Follow the steps mentioned below to configure and deploy SmartFabric Services using the OME-M console.

Create VLANs
For the example topology used in this guide, VLANs 1611 through 1616 are created for ESXi management, vMotion, vSAN, and
the VM data network. The following table provides details for each VLAN.
NOTE: See the Dell EMC PowerEdge MX Networking Deployment Guide for instructions on how to create VLANs in
OME-M.

VLAN ID   VLAN name   Description                    Network address   Gateway address
1611      ESXi_Mgmt   ESXi host in-band management   172.16.11.0/24    172.16.11.254
1612      vMotion     VM migration                   172.16.12.0/24    172.16.12.254
1613      vSAN        Storage                        172.16.13.0/24    172.16.13.254
1614      web         VM data network                172.16.14.0/24    172.16.14.254
1615      app         VM data network                172.16.15.0/24    172.16.15.254
1616      db          VM data network                172.16.16.0/24    172.16.16.254

NOTE: For information about network type and QoS group settings, see the Networks and automated QoS section of the
Dell EMC PowerEdge MX Networking Deployment Guide.
The general steps to create VLANs from OME-M are:
1. OME-M console
2. Configuration
3. VLANs
NOTE: With OME-M 1.20.10 and earlier, the VLANs option is called Networks.
4. Define
For each VLAN, you must provide a Name, Description (optional), VLAN ID, and Network Type.

Create SmartFabric
To create a SmartFabric using the OME-M console, follow the instructions that are mentioned in the SmartFabric Creation
section in the Dell EMC PowerEdge MX Networking Deployment Guide.
For the example shown in Logical topology, two chassis are in an MCM group. Choose 2x MX9116n in different chassis as design
type for this scenario.



The SmartFabric deployment takes about 20 minutes to complete. During this time, the related IOMs reload, the operating mode of the IOMs changes to SmartFabric, and the SmartFabric is created.
The general steps to create the SmartFabric in OME-M are:
1. OME-M console
2. Devices
3. Fabric
4. Add Fabric
5. Provide Name
6. Select the Design Type
NOTE: If applicable, the uplink port speed breakout should be configured before creating uplinks. To change the port speed
or breakout configuration, see the Configure uplink port speed or breakout section in the Dell EMC PowerEdge MX
Networking Deployment Guide. No port breakout was used in this example.

Create uplinks
To create an Ethernet - No Spanning Tree uplink from the MX9116n FSEs to the S5232F-ON leaf switches, see the Create Ethernet - No Spanning Tree uplink section in the Dell EMC PowerEdge MX Networking Deployment Guide.
After the uplinks are created, the SmartFabric creates an uplink object and the fabric status turns green.
The general steps for creating uplinks in OME-M are:
1. OME-M console
2. Devices
3. Fabric
4. Select Fabric
5. Add Uplink
When creating the uplinks in OME-M, you must provide a Name, Description, set the Uplink Type as Ethernet - No
Spanning Tree, select the ports, and then select the tagged and untagged VLANs.
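After the uplink is created, it can also be checked from the MX9116n FSE CLI. This is a hedged sketch; the show smartfabric uplinks command is available in recent SmartFabric OS10 releases, and show lldp neighbors (shown in Chapter 8) confirms the physical connections to the leaf switches.

MX9116n-A1# show smartfabric uplinks
MX9116n-A1# show lldp neighbors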

Create server template


A server template contains parameters that are extracted from a server and allows these parameters to be quickly applied to
multiple compute sleds in the chassis. A server template contains all server settings for a specific deployment type including
BIOS, iDRAC, RAID, NIC/CNA, and so on. The template is captured from a reference server and can then be deployed to
multiple servers simultaneously. Select a reference server before you create the template. The server template also allows an
administrator to associate VLANs to compute sleds.
To create a server template, see the Create a server template section in the Dell EMC PowerEdge MX Networking
Deployment Guide.
The general steps for creating a server template in OME-M are:
1. OME-M console
2. Configuration
3. Templates
NOTE: With OME-M 1.20.10 and earlier, the Templates option is called Deploy.
4. Create Template
From Reference Device, add the Name and Description. Select the Reference device and Configuration Elements.

Associate network with server template


Created templates must be associated with the networks. In this scenario, VLANs 1611 through 1616 are configured as tagged
networks. This in turn applies the settings to other compute sleds when the templates are deployed.
To associate the networks with a template, see the Dell EMC PowerEdge MX Networking Deployment Guide.
The general steps to associate networks with templates in OME-M are:
1. OME-M console



2. Configuration
3. Templates
NOTE: With OME-M 1.20.10 and earlier, the Templates section is called Deploy.
4. Select server template
5. Edit network
6. Select tagged and untagged networks

Deploy server template


Deploy server templates to apply the settings and parameters to the compute sleds. After a server template deployment, the
interfaces on the switch are updated automatically. SmartFabric configures each interface with untagged and tagged VLANs
along with associated QoS settings.
To deploy the server template, follow the instructions in the Deploy a server template section in the Dell EMC PowerEdge MX
Networking Deployment Guide.
The general steps for deploying a server template in OME-M are:
1. OME-M console
2. Configuration
3. Templates
NOTE: With OME-M 1.20.10 and earlier, the Templates option is called Deploy.
4. Select server template
5. Deploy Template
6. Select the compute sleds on which the server template must be deployed.

Profiles with Server template deployment


OME-M 1.30.00 adds support for Profiles. OME-M creates and automatically assigns a profile once the server template is
deployed successfully. You can edit, assign, unassign, and delete the profile.
NOTE: For more information, see the Profiles with Server template deployment section in the Dell EMC PowerEdge
MX Networking Deployment Guide.
1. OME-M console
2. Configuration
3. Profiles
This page displays the profiles that are created with server template deployment.

Create a profile
If the server template is not deployed, OME-M allows the user to create server profiles and apply them to a compute sled or slot.
To create a profile, follow the instructions in the Create a Profile section in the Dell EMC PowerEdge MX Networking Deployment Guide.
To create a profile in OME-M:
1. OME-M console
2. Configuration
3. Profiles
4. Create profile
Select the server template to which the profile is attached, then enter the Name Prefix, Description, and Profile Count for the profile.



Edit a profile
The Edit Profile feature allows the user to change the profile name, network options, iDRAC management IP, target attributes, and unassigned virtual identities. Users can edit the profile characteristics that are unique to the device or slot.
To edit a profile, follow the instructions in the Edit a profile section in Dell EMC PowerEdge MX Networking Deployment Guide.
The general steps for editing a profile in OME-M are:
1. OME-M console
2. Configuration
3. Profiles
4. Click Edit > Edit Profile to select the profile to edit.
You can edit the Name Prefix, Description, and other target attributes of the profile.

Assign a profile
The Assign a profile function allows you to assign and deploy a profile on a target device.
To assign a profile, follow the instructions in the Assign a profile section in Dell EMC PowerEdge MX Networking Deployment
Guide.
The general steps for assigning a profile in OME-M are:
1. OME-M console
2. Configuration
3. Profiles
4. Select the profile to assign and then click Assign.
5. Select the compute sleds on which the profiles must be deployed.

Unassign a profile
You can unassign a profile that has been assigned to a target server.
To unassign a profile, follow the instructions in the Unassign a profile section in the Dell EMC PowerEdge MX Networking
Deployment Guide.
To unassign a profile, perform the following steps:
1. OME-M console
2. Configuration
3. Profiles
4. Select the profile to unassign, then click Unassign.
NOTE: To delete a profile, follow the instructions in the Dell EMC PowerEdge MX Networking Deployment Guide.



4
Configure vCenter

vCenter settings
The information in this section addresses the following topics:
● Data center creation
● Creation of a cluster
● Adding a host
● Virtual machine creation
● Configuration of a cluster
● vDS creation
Rack servers can be added to vCenter after the SmartFabric is deployed and the uplink is created. The example environment
in this guide has three PowerEdge rack servers that are connected to the S5232F-ON leaf switches. The rack servers are in a
vSphere cluster named Compute.
The links that are provided in the table below contain instructions on how to configure the VMware environment. These steps were used to set up the example topologies in this guide. The VMware configurations in this guide are only examples of how to set up the environment.
NOTE: The VMware instructions referenced in the following table may differ from the examples that are provided in this guide. For this guide, VMware vSphere Hypervisor (ESXi) version 7.0 was used.

Table 1. Documents used to configure VMware environment


Steps                              Reference
Create a data center               VMware vSphere Documentation: Create Data Centers
Create a cluster                   VMware vSphere Documentation: Creating Clusters
Configure a cluster                VMware vSphere Documentation: Configure a Cluster in the vSphere Client
Add a host                         VMware vSphere Documentation: Add a Host
Create a virtual machine           VMware vSphere Documentation: Creating a Virtual Machine in the vSphere Client
Create vDS and set up networking   VMware vSphere Documentation: Setting Up Networking with vSphere Distributed Switches

Figure 12. vCenter Settings

NOTE: For more information about configuring VMware vSphere, see the Organizing Your Inventory section within the
VMware vSphere Product Documentation repository.
The MX compute sleds can now communicate with the rack servers and the vCenter vDS. An administrator also joins the MX compute sleds to the vSphere cluster.
The best practice is to configure a vDS with six distributed port groups, one for each VLAN.
NOTE: The example that is provided in vDS distributed port group configuration only shows one port group for all the
hypervisors.

Figure 13. vDS distributed port group configuration

NOTE: For each port group in this vDS example, both of the uplinks are active and the Route based on physical NIC
load-balancing method is used as recommended in the VMware Validated Design Documentation. A detailed vCenter
configuration is beyond the scope of this document. For more information about vCenter configuration, see the VMware
vSphere Documentation.

NIC teaming
NIC teaming can be used to connect a virtual switch to multiple physical NICs on a host to increase network bandwidth and to
provide redundancy. If there is adapter failure or a network outage, a NIC team can distribute the traffic between its members
and provide passive failover. NIC teaming policies are set at the virtual switch or port group level for a vSphere Standard
Switch, and at a port group or port level for a vSphere Distributed Switch.
The links that are provided in the table below contain instructions on how to configure the wanted NIC Teaming environment
that is based on the switch type being used.

Table 2. Documents used to configure NIC teaming


Steps                Reference
Distributed switch   VMware vSphere Documentation: Configure NIC Teaming, Failover, and Load Balancing on a Distributed Port Group or Distributed Port
Standard switch      VMware vSphere Documentation: Configure NIC Teaming, Failover, and Load Balancing on a vSphere Standard Switch or Standard Port Group
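The teaming and failover policy that is currently applied on a host can also be checked from the ESXi command line. The sketch below assumes a standard switch named vSwitch0; for a distributed switch, the policy is viewed and set from vCenter as described in the table above.

# vSwitch0 is an example name; substitute the vSwitch used in your environment
esxcli network vswitch standard policy failover get -v vSwitch0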

NOTE: If using NPAR, also see the NIC teaming guidelines section in the Dell EMC PowerEdge MX Networking Deployment
Guide.
In this deployment example, if NIC teaming is defined in VMware vSphere, then a teaming configuration is not needed in the PowerEdge MX environment in SmartFabric mode. From the OME-M console, select Configuration > Templates > Select Server Template > Edit Network, and then select the NO Teaming option.
NOTE: If LACP teaming is selected at VMware vSphere, LACP teaming needs to be selected in the PowerEdge MX
environment on OME-M to provide proper synchronization.
Depending on the deployment environment, if NIC teaming is set in VMware vSphere and one uplink goes down, the connection is switched to another uplink in the team. When the down connection comes back up, it is immediately restored. An unstable connection can therefore cause packet loss. In VMware ESXi 6.7 and later, the user can edit the delay threshold to avoid this packet loss. The default value is 100 ms; however, Dell Technologies recommends changing the value to 30000 ms. To modify the TeamPolicyUpDelay setting from VMware vSphere, select ESXi Host > Configure > System > Advanced System Settings > Net.TeamPolicyUpDelay and edit the value to 30000 ms.
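The same setting can also be read and changed from the ESXi command line; this is a sketch of the equivalent esxcli calls and should be validated against your ESXi release.

# Read the current value, then set the recommended 30000 ms delay
esxcli system settings advanced list -o /Net/TeamPolicyUpDelay
esxcli system settings advanced set -o /Net/TeamPolicyUpDelay -i 30000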

5
Network Connectivity Between VMs
The scenarios in this chapter describe the steps to obtain network connectivity between VMs. The two traditional scenarios are:
● MX compute sled VM to remote server VM connectivity
● MX compute sled VM to MX compute sled VM connectivity
The scenarios in this section provide the steps that are required in order to achieve connectivity.

NOTE: The content within Chapter 8 can be used to validate the configuration.

MX server sled VM to remote server VM connectivity

Figure 14. Connection of VMs outside of PowerEdge MX chassis

The PowerEdge MX chassis and rack servers are connected through leaf switches. VMs are deployed on each of the three
compute devices as shown in the figure above. This scenario consists of a Dell EMC PowerEdge MX740c compute sled deployed
in each PowerEdge MX chassis, and a Dell EMC PowerEdge R730xd rack server.
All the management links use the same ESXi-MGMT VLAN. The ESXi host VMs use the corresponding VLANs for VM to VM
connectivity. The example in the section below shows the VLAN and IP allocation information that is used.

Configuration steps
VLAN and IP allocation information
All ESXi hosts, whether running on the MX compute sleds or on the rack servers, are in the same ESXi_Mgmt VLAN. In this example, management VLAN 1611 is deployed. vMotion, vSAN, and the ESXi host VM networks are deployed on VLANs 1612 through 1616.
NOTE: For more information about the VLANs and the IP allocations that are used in this guide, see the Configure PowerEdge MX using OME-M section.



Configure and set up VMware vCenter
NOTE: In this example, Windows Server 2016 and VMware vCenter Server Appliance 7.0 are used.

For the scenarios in this guide, the VMware vCenter Server Appliance (VCSA) is deployed on the same server as the DNS server and jump box, which serves as a centralized, multipurpose workstation, as shown in the figure below.
To configure the Dell EMC PowerEdge R730xd server with the required services and install the VMware vCenter server
appliance, perform the following steps:
1. Upgrade the server with the latest BIOS and NIC driver.
NOTE: See Appendix D for more information.
2. Using the same subnet IP, configure both NIC ports on the server. This configuration is used for DNS.
3. From the Server Manager in Microsoft Windows, perform the following actions:
● Create a DNS server
● Create the forward and reverse lookup zones
● Assign host records as shown in the figure below
NOTE: For detailed information, see the User Guide specific for your DNS server.

Figure 15. Example DNS settings


4. Install and configure VMware vCenter Server Appliance 7.0 or the latest version. For more information, see Appendix D.

Create and configure ESXi hosts


After the VMware vCenter server appliance is installed and host information has been successfully added to the DNS server,
perform the following steps to create and configure the ESXi host using VMware vCenter.
1. From the Hosts and Clusters tab, right-click the vSphere client listing and select Create New Datacenter.
2. Right-click Datacenter and then select Add host.
3. Enter the IP address or hostname as configured in the DNS server, then enter the username and password of the ESXi host.
4. Follow the prompts to verify the host summary and other information.
5. From the Ready to Complete screen, click Finish.



Figure 16. Listing of ESXi hosts created

NOTE: Repeat the steps provided in this section to create additional ESXi hosts that are specific for your environment.

Create and configure VMs


To create and configure the ESXi host virtual machine (VM), perform the following steps:
1. Right-click ESXi host and select New Virtual Machine.
2. Select New Virtual Machine and click Next.
3. Provide an appropriate name for the VM, select the desired Datacenter, and click Next.
4. Select ESXi Host, where the VM must be created, and click Next.
5. Select the appropriate Datastore for the VM and click Next.
6. Select the compatible ESXi version and click Next.
7. Select the Operating System family and wanted version.
8. Click to select the network port-group and designate the wanted hard-disk and memory size.
NOTE: Browse for vDS during step 8, when selecting Network Port-Group for Compute host VM. Select MGMT-vDS
as a Network Port-Group for MGMT host VM.
9. Verify the selections and if there are no changes to make, click Finish.
10. Click to select the newly created VM then select Launch Remote Console to install and configure the operating system.
NOTE: This example uses a vDS. For details on the creation and configuration of a vDS, see Appendix B.



Figure 17. Listing of VMs created

NOTE: Repeat the steps provided in this section to create and configure additional VMs for the same host or for other
ESXi hosts.

NOTE: To configure PowerEdge MX through OME-M, see the Dell EMC PowerEdge MX Networking Deployment Guide.

VM to VM communication between MX server sleds


There are two distinctive scenarios for MX server sled VM to MX server sled VM connectivity:
● MX sled VM to MX sled VM in the same chassis
● MX sled VM to MX sled VM in separate chassis in the same MCM group
To obtain network connectivity for both scenarios, use the configuration steps that are provided in the Configuration steps
section. A logical topology is provided for each example below.



VMs on different compute sleds in the same MX chassis
This scenario consists of two VMs on separate MX compute sleds.

Figure 18. VM to VM connectivity within a chassis

The same information within the Configuration Steps section is used for this scenario. The same steps are also used in the other
VM to VM network connectivity example scenarios in this guide.

VMs on different MX chassis in the same MCM group


To establish connectivity between VMs in two different MX chassis within the same MCM group, follow the instructions that are provided within this section. First, configure the MX chassis in the same MCM group, in which one chassis serves as the lead chassis and the other chassis serve as member chassis.
NOTE: For information about MCM groups and physical links between the MX9116n FSE and MX7116n FEM chassis, see the
Dell EMC PowerEdge MX Networking Deployment Guide.
With an MCM group configured, use the same steps that are provided in the Configuration Steps section to obtain connectivity
between the VMs in this example.

Figure 19. VM to VM connectivity in an MCM group
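Before running the full validation in Chapter 8, connectivity can be spot-checked from an ESXi host or from the guest operating systems. The sketch below assumes the vMotion VMkernel interface is vmk1 and targets the leaf switch VLAN 1612 address used in this guide; adjust both for your environment, or simply ping between two VMs on the same VM network VLAN.

# vmk1 and the target address are examples from this guide's addressing plan
vmkping -I vmk1 172.16.12.253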



6
NPG Mode Storage Attachment

Connect PowerEdge MX VMs to storage over Fibre Channel switch
A common storage connectivity method using an NPIV Proxy Gateway (NPG) may be used when connecting VMs in a PowerEdge MX compute sled to a storage area network hosting a storage array.
NPG mode is simple to implement on the MX9116n FSE and requires very little configuration. In NPG mode, the MX9116n FSE converts FCoE from the server to native FC and aggregates the traffic into an uplink. The NPG switch is effectively transparent to the FC SAN, which "sees" the hosts themselves. NPG mode allows for larger SAN deployments that aggregate the I/O traffic at the NPG switch. The figure below shows the topology and ports used on the MX9116n FSE to set up this scenario.

Topology

Figure 20. Connection of PowerEdge MX VMs to storage over a FC switch

NOTE: Brocade switches were used in the validation of the brownfield example above. Other FC switches may be used as
well.

NOTE: The MX5108n Ethernet switch does not support this feature.

To configure the NPG scenario in SmartFabric mode and allow VM access, perform the following steps:
1. Connect the MX9116n FSE to the FC SAN.
NOTE: Verify that the cables do not criss-cross between the switches.
2. Configure the MX SmartFabric as described in the Configure SmartFabric section. This information is also used to configure
the NPG settings.
3. Configure vCenter as described in Chapter 4.
4. Use the steps provided in the Configuration steps section to complete the configuration.
NOTE: This example uses a Virtual Distributed Switch. See Appendix B for information on how to create and configure a Virtual Distributed Switch.

NOTE: In NPG mode, the external FC switches provide the fabric services, and the MX I/O modules act as NPIV proxy gateways. The configuration of the existing FC switches is beyond the scope of this guide.



NOTE: To configure the PowerEdge MX using OME-M, see the Dell EMC PowerEdge MX Networking Deployment Guide.
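After the SmartFabric, vCenter, and VM configuration is complete, the FCoE sessions and FC fabric logins can be confirmed from the MX9116n FSE using the commands covered in Chapter 8, for example:

MX9116n-A1# show fcoe sessions
MX9116n-A1# show fc ns switch
MX9116n-A1# show vfabric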



7
Direct Attach Storage

Use Direct Attach Method to connect MX VMs to storage
Direct Attach mode, or F_port mode, allows the VMs in a PowerEdge MX compute sled to access FC storage that is directly connected to the MX9116n switch. In Direct Attach mode, the switch supports the required services, such as name server and zoning, that are typical of FC switches.
This scenario demonstrates directly attaching to a Dell EMC Unity 500F storage array. MX9116n FSE universal ports 44:1 and
44:2 are required for the FC connections and operate in F_port mode. This mode is what allows an FC storage array to be
connected directly to the MX9116n FSE. The uplink type enables F_port functionality on the MX9116n unified ports, converting
FCoE traffic to native FC traffic and passing that traffic to a directly attached FC storage array. The figure below shows the
topology and ports that are used on the MX9116n FSE to set up this scenario.

Topology

Figure 21. Connection of PowerEdge MX VMs to storage using Direct Attach

To configure the Direct Attach storage scenario in SmartFabric mode and allow VM access, perform the following steps:

NOTE: The MX5108n Ethernet switch does not support this feature.

1. Cable the storage array to the MX9116n FSE as shown in Connection of PowerEdge MX VMs to storage using Direct Attach.
2. Configure the MX SmartFabric as described in Chapter 3. The information in this section is also used to configure the Direct
Attach (F_port) settings.
3. Configure vCenter as described in Chapter 4 and in Chapter 5.
4. Use the steps provided in the Configuration steps section to complete the configuration.
NOTE: This example uses a Virtual Distributed Switch. For details on how to create and configure a Virtual Distributed Switch, see Appendix B.

NOTE: See Appendix C for information about the Unity configuration steps.
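In Direct Attach (F_port) mode, the MX9116n FSE also provides zoning. The following is a minimal zoning sketch from the OS10 CLI; the zone and zone set names are examples, the WWPNs are taken from the sample Chapter 8 output, and the vfabric ID must match the fabric created for your deployment.

! Zone and zone set names are examples; replace the WWPNs with those from your environment
fc zone Zone_ESXi01_UnitySPA
member wwn 20:01:06:3c:f9:a4:cd:00
member wwn 50:06:01:60:47:e4:1b:19
fc zoneset ZoneSet1
member Zone_ESXi01_UnitySPA
vfabric 1
zoneset activate ZoneSet1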



8
Validation from OME-M and CLI
After the switches are configured and devices are connected, validate the network configuration from the switch CLI and the
OME-M. The validation can also be done through testing the connectivity between end hosts. This section provides a list of the
most common commands and their output for the examples that are used in this guide.

Dell EMC PowerSwitch S5232F-ON validation


Use the CLI commands in this section to validate the leaf switch configuration.

show interface status


The show interface status | grep up command is used to verify that the required interfaces are up, and that the
links are established at the appropriate speeds.

S5232F-Leaf1A# show interface status | grep up


Port Description Status Speed Duplex Mode Vlan Tagged-Vlans
Eth 1/1/1 To MX Chassis up 100G full -
Eth 1/1/3 To MX Chassis up 100G full -
Eth 1/1/5:1 R730xd_Management up 25G full T 1 1611-1616
Eth 1/1/5:2 R730xd up 25G full T 1 1611-1616
Eth 1/1/5:3 Jumpbox up 25G full A 1611
Eth 1/1/29 VLTi up 100G full -
Eth 1/1/31 VLTi up 100G full -

show vlt
The show vlt 1 command validates the VLT configuration status when the VLTi Link Status is up. The role of one switch in
the VLT pair is primary, and its peer switch is assigned the secondary role.

S5232-Leaf1A# show vlt 1


Domain ID : 1
Unit ID : 1
Role : secondary
Version : 2.3
Local System MAC address : 54:bf:64:e5:67:41
Role priority : 32768
VLT MAC address : 54:bf:64:dd:ae:40
IP address : fda5:74c8:b79e:1::1
Delay-Restore timer : 90 seconds
Peer-Routing : Enabled
Peer-Routing-Timeout timer : 0 seconds
VLTi Link Status
port-channel1000 : up

VLT Peer Unit ID System MAC Address Status IP Address Version


----------------------------------------------------------------------------
2 54:bf:64:dd:ae:40 up fda5:74c8:b79e:1::2 2.3

show port-channel summary


The show port-channel summary command is used to view port channel numbers, interfaces used, and status. In
SmartFabric OS10, the VLTi is automatically configured as a static LAG using port channel 1000. Ports 1/1/29 and 1/1/31
are port channel members, and (P) indicates each is up and active.



Port channel 1 is also created for the interfaces that are connected to the MX9116n FSEs. As shown below, the ports that are added to the port channel are ports 1/1/1 and 1/1/3.

S5232F-Leaf1A# show port-channel summary

Group Port-Channel Type Protocol Member Ports


------------------------------------------------------------------
1 port-channel1 (U) Eth DYNAMIC 1/1/1(P) 1/1/3(P)
1000 port-channel1000 (U) Eth STATIC 1/1/29(P) 1/1/31(P)

show vlan
The show vlan command is used to view interfaces that are assigned to each VLAN and whether the interfaces are access/
untagged (A) or tagged (T). Port channel 1000 is the VLTi. VLAN ID 4094 is reserved as an internal control VLAN for the VLT
domain, and it is not user configurable. VLANs 1611 to 1616 are also shown which are ESXi management, vMotion, vSAN, and VM
network VLANs.

S5232F-Leaf1A# show vlan


NUM Status Q Ports
* 1 Active A Eth1/1/2, Eth1/1/4,1/1/5:1-1/1/5:2,1/1/5:4,1/1/6-1/1/28, 1/1/30,
1/1/33-1/1/34
A Po1,1000
1611 Active T Eth1/1/5:1-1/1/5:2
T Po1,1000
A Eth1/1/5:3
1612 Active T Eth1/1/5:1-1/1/5:2
T Po1,1000
1613 Active T Eth1/1/5:1-1/1/5:2
T Po1,1000
1614 Active T Eth1/1/5:1-1/1/5:2
T Po1,1000
1615 Active T Eth1/1/5:1-1/1/5:2
T Po1,1000
1616 Active T Eth1/1/5:1-1/1/5:2
T Po1,1000
4094 Active T Po1000

show lldp neighbors


The show lldp neighbors command provides information about connected devices. In this case, ethernet1/1/1 and ethernet1/1/3 connect to the two MX9116n FSEs. The remaining links, ethernet1/1/29 and ethernet1/1/31, represent the VLTi connection.

S5232F-Leaf1A# show lldp neighbors


Loc PortID Rem Host Name Rem Port Id Rem Chassis Id
--------------------------------------------------------------------------------------
ethernet1/1/1 MX9116N-A1 ethernet1/1/41 20:04:0f:01:cc:c7
ethernet1/1/3 MX9116N-A2 ethernet1/1/41 20:04:0f:22:10:80
ethernet1/1/29 S5232-Leaf1B ethernet1/1/29 54:bf:64:dd:ae:40
ethernet1/1/31 S5232-Leaf1B ethernet1/1/31 54:bf:64:dd:ae:40
ethernet1/1/5:1 Not Advertised 24:8a:07:29:24:ca 24:8a:07:29:24:cc
ethernet1/1/5:2 Not Advertised 24:8a:07:3b:f5:14 24:8a:07:3b:f5:16
ethernet1/1/5:3 BroadcomDual25G 00:0a:f7:a9:44:3e 00:0a:f7:a9:44:3e

show spanning-tree brief


The show spanning-tree brief command validates that STP is enabled on the leaf switches.
NOTE: Ensure that the STP root is one of the upstream switches. Do not designate MX I/O modules as the STP root. For
troubleshooting information, see Chapter 9.

S5232F-Leaf1A# show spanning-tree brief


Spanning tree enabled protocol rapid-pvst



VLAN 1
Executing IEEE compatible Spanning Tree Protocol
Root ID Priority 8192, Address 2004.0f01.ccc7
Root Bridge hello time 2, max age 20, forward delay 15
Bridge ID Priority 8192, Address 54bf.64e5.6741
Configured hello time 2, max age 20, forward delay 15
Flush Interval 200 centi-sec, Flush Invocations 5
Flush Indication threshold 5

(Output Truncated)

MX9116n FSE Fibre Channel validation commands


show fc switch
The show fc switch command verifies the switch mode (F-Port or NPG mode) for FC traffic.

MX9116n-A1# show fc switch

Switch Mode : NPG


Switch WWN : 10:00:20:04:0f:21:d4:80

show fc ns switch
The show fc ns switch command shows all device ports logged in to the fabric. In the example output below, three ports are logged in to the switch: two storage ports and one CNA port.

MX9116n-A1# show fc ns switch

Total number of devices = 3

Switch Name 10:00:20:04:0f:0c:82:ae


Domain Id 1
Switch Port fibrechannel1/1/44:2
FC-Id 01:00:00
Port Name 50:06:01:60:47:e4:1b:19
Node Name 50:06:01:60:c7:e0:1b:19
Class of Service 8
Symbolic Port Name UNITY::::SPA12::FC::::::
Symbolic Node Name UNITY::::SPB::FC::::::
Port Type N_PORT
Registered with NameServer Yes
Registered for SCN Yes

Switch Name 10:00:20:04:0f:0c:82:ae


Domain Id 1
Switch Port fibrechannel1/1/44:1
FC-Id 01:01:00
Port Name 50:06:01:68:47:e4:1b:19
Node Name 50:06:01:60:c7:e0:1b:19
Class of Service 8
Symbolic Port Name UNITY::::SPB12::FC::::::
Symbolic Node Name UNITY::::SPB::FC::::::
Port Type N_PORT
Registered with NameServer Yes
Registered for SCN Yes

Switch Name 10:00:20:04:0f:0c:82:ae


Domain Id 1
Switch Port ethernet1/71/3
FC-Id 01:03:00
Port Name 20:01:06:3c:f9:a4:cd:00
Node Name 20:00:06:3c:f9:a4:cd:00
Class of Service 8
Symbolic Port Name



Symbolic Node Name
Port Type N_PORT
Registered with NameServer Yes
Registered for SCN Yes

show fcoe sessions


The show fcoe sessions command shows active FCoE sessions. The output includes MAC addresses, Ethernet interfaces,
the FCoE VLAN ID, FC IDs, and WWPNs of logged-in CNAs.

MX9116n-A1# show fcoe sessions

Enode MAC Enode Interface FCF MAC FCF interface VLAN FCoE MAC
FC-ID PORT WWPN
------------------------------------------------------------------
06:3c:f9:a4:cd:03 Eth 1/1/1 20:04:0f:21:d4:fa Fc 1/1/44:2 30
0e:fc:00:01:04:01 01:04:01 20:01:06:3c:f9:a4:cd:03

show vfabric
The show vfabric command output provides a variety of information including the default zone mode, the active zone set,
and interfaces that are members of the vfabric.

MX9116n-A1# show vfabric

Fabric Name New vfabric


Fabric Type NPG
Fabric Id 1
Vlan Id 30
FC-MAP 0xEFC00
Vlan priority 3
FCF Priority 128
FKA-Adv-Period Enabled,8
Config-State ACTIVE
Oper-State UP
==========================================
Members
fibrechannel1/1/44:1
fibrechannel1/1/44:2
ethernet1/1/1

Validation of SmartFabric deployment using OME-M


This section contains the steps to validate the SmartFabric deployment using the OME-M console and the MX9116n FSE CLI
commands.

View MCM group topology


Use the OME-M console to show the physical cabling of the SmartFabric. To view the group topology from the OME-M console,
click View Topology in the left navigation panel, select the listing of the lead chassis, and then click Show Wiring.



Figure 22. Topology view of the MCM group

View SmartFabric status


Use the OME-M console to show overall health of the SmartFabric. To view the health of the SmartFabric from the OME-M
console, click Devices>Fabric, and then click to expand the details section of the created SmartFabric.



Figure 23. Status view of the SmartFabric

The Overview tab shows the current inventory such as switches, servers, and interconnects between the MX9116n FSEs in the
fabric.

Figure 24. Overview tab

The Topology tab shows the wiring and VLTi created by the SmartFabric mode.

Figure 25. Topology tab



MX9116n FSE CLI commands
This section contains the CLI commands to validate the MX9116n FSE.

show switch-operating-mode


Use the show switch-operating-mode command to display the current operating mode.

MX9116n-A1# show switch-operating-mode


Switch-Operating-Mode : Smart Fabric Mode

show discovered-expanders
The show discovered-expanders command is only available on the MX9116n FSE. This command shows the service tags of the
MX7116n FEMs attached to the MX9116n FSE, along with the associated port-group and virtual slot.

MX9116n-A1# show discovered-expanders


Service Model Type Chassis Chassis-slot Port-group Virtual
tag service-tag Slot-Id
------------------------------------------------------------------
CBJWLN2 MX7116n FEM CF54XM2 A1 1/1/1 71

show unit-provision
The show unit-provision command is only available on the MX9116n FSE. The command shows the unit ID, the provisioned name,
and the discovered name of the MX7116n FEM that is attached to the MX9116n FSE.

MX9116n-A1# show unit-provision


Node ID | Unit ID | Provision Name | Discovered Name | State |
---------+---------+---------------------------------+-------|
1 | 71 | CBJWLN2 | CBJWLN2 | up |
(output truncated)

show smartfabric personality


The show smartfabric personality command is used on a node to view the SmartFabric Services personality that is
configured. The possible values are PowerEdge MX, Isilon, VxRail, and L3 fabric.

show smartfabric cluster


The show smartfabric cluster command is used to see whether the node is part of a cluster. The command shows the
cluster information for the node, such as the cluster domain ID, virtual IP address, role, and service tag. It can also be used to
verify the role of the node as either BACKUP or MASTER.

MX9116n-A1# show smartfabric cluster

----------------------------------------------------------
CLUSTER DOMAIN ID : 97
VIP : fde1:53ba:e9a0:de14:0:5eff:fe00:197
ROLE : BACKUP
SERVICE-TAG : 87QLMR2
----------------------------------------------------------



show smartfabric cluster member
The show smartfabric cluster member command is used to see the member details of the cluster. This command
shows the cluster member information such as the service tag, IP address, status, role, node type, and the service tag of the
chassis to which the node belongs.

MX9116n-A1# show smartfabric cluster member


Service-tag Status Role Type Chassis-Service-Tag Chassis-Slot
------------------------------------------------------------------
87QLMR2 ONLINE BACKUP MX9116n CBMXLN2 A1
87QMMR2 ONLINE MASTER MX9116n CF54XM2 A2

show smartfabric details


The show smartfabric details command shows all the details of the configured fabric. The command also displays which
nodes are part of the fabric, the fabric status, and the design type associated with the fabric.

MX9116n-A1# show smartfabric details


----------------------------------------------------------
Name : SmartFabric1
Description :
ID : b16b835e-9c46-4c2b-b1ed-a11269bdea3e
Design Type : 2xMX9116n_Fabric_Switching_Engines_in_different_chassis
Validation Status: VALID
VLTi Status : VALID
Placement Status : VALID
Nodes : 87QLMR2, 87QMMR2
----------------------------------------------------------

show smartfabric uplinks


The show smartfabric uplinks command is used to verify the uplinks that are configured across the nodes in the fabric.
The command shows the name, description, ID, media type, native VLAN, configured interfaces, and the network profile that is
associated with the fabric.

MX9116n-A1# show smartfabric uplinks


---------------------------------------------------------
Name : Uplink01
Description :
ID : ffa4bdfd-fd4a-4301-877a-860c93f9df39
Media Type : ETHERNET
Native Vlan : 1
Untagged-network :
Networks : ec1c6d5e-3945-41c1-92d2-371e5215c911
Configured-Interfaces : 87QLMR2:ethernet1/1/41, 87QLMR2:ethernet1/1/42,
87QMMR2:ethernet1/1/41, 87QMMR2:ethernet1/1/42

show smartfabric networks


The show smartfabric networks command is used to view the various network profiles that are configured. The
command displays the VLANs that are configured, the QoS Priority, and the network type for each network profile.

MX9116n-A1# show smartfabric networks


Name Type QosPriority Vlan

------------------------------------------------------------------
web GENERAL_PURPOSE SILVER 1614
db GENERAL_PURPOSE SILVER 1616
VLAN001 GENERAL_PURPOSE BRONZE 1
app GENERAL_PURPOSE SILVER 1615
vMotion VM_MIGRATION PLATINUM 1612



ESXi_Mgmt HYPERVISOR_MANAGEMENT PLATINUM 1611
vSAN STORAGE_DATA_REPLICATION PLATINUM 1613

ESXi validation using VM to VM ping


Validation between VM on R730xd Rack server and VM on MX740c
compute sled
In this scenario, vCenter is deployed on the R730xd system. VMs are deployed on the MX compute sleds and on the R730xd server.
The network is established on tagged VLANs 1611 through 1616, which are used for ESXi management, vMotion, vSAN, and VM
networks.
After the successful installation of the guest operating system on each VM, assign an IP address from the private IP network to
each VM in the corresponding subnet. The topology can be validated end to end by pinging from a VM on the MX740c to a VM
on the R730xd server.
The following figure shows the VM on the R730xd server with an assigned IP of 10.10.10.13 and the VM on the MX740c with an
assigned IP of 10.10.10.11.
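A quick way to run this check is from the guest operating system of one of the VMs, using the example addresses above. The
addresses and interface names are specific to this example environment.

ping 10.10.10.13

If the vMotion or vSAN vmkernel networks also need to be verified, the vmkping utility can be run from the ESXi host shell, for
example vmkping -I vmk1 <destination vmkernel IP>, where the vmk interface name depends on the host configuration.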

Figure 26. Pinging between the PowerEdge MX VM and the rack server VM

Validation between VMs on two MX740c compute sleds


This scenario pings from one VM on an MX740c to another VM on an MX740c to validate traffic inside the MX chassis. The
chassis are in the MCM group. The VMs deployed on the MX740c compute sleds are assigned the IP addresses 10.10.10.11 and
10.10.10.12.
The ping is shown below.



Figure 27. Example of pinging between VMs in a chassis



9
Troubleshooting

Troubleshooting VM to VM traffic
This chapter discusses various issues that may be encountered while configuring the examples in this guide. A problem
statement is given for each scenario, followed by one or more possible solutions. For additional troubleshooting guidance,
see the Dell EMC PowerEdge MX Networking Deployment Guide.

Table 3. Troubleshooting issues and resolutions

Problem: Dropped packets between VMs for 15 seconds after switch reboots

Description: Rebooting the MX9116n FSE on the PowerEdge MX chassis while traffic is passing between VMs deployed on an
MX740c compute sled and VMs outside the MX chassis deployed on an R730xd rack server causes 3 to 5 request timeouts and
dropped packets for up to 15 seconds.

Solution: This is caused by the MX9116n FSE on the PowerEdge MX chassis becoming the Spanning Tree root. To resolve, make
an upstream switch the STP root, not the MX9116n FSE. In the topology described here, the switch with the lower priority
number is more likely to become the STP root. Run the commands detailed in the SmartFabric OS10 User Guide to make an
upstream switch the STP root. See OME-M and OS10 compatibility and documentation to find the appropriate version.

Problem: NPAR issues on a virtual switch

Description: When enabling NPAR on QLogic 41xxx series adapters (for example, the QLogic 41262) with either a standard
virtual switch or a distributed virtual switch on the ESXi operating system, connecting multiple NIC-enabled partitions from a
single physical adapter port to the same virtual switch is not a supported configuration. This can cause unexpected issues on
the network.

Solution: Implement a supported configuration by attaching one NIC-enabled partition per physical adapter port to any single
virtual switch, and use multiple distributed port groups in the case of a distributed virtual switch to separate network
workloads. If multiple NIC-enabled partition ports are required per physical adapter port, then separate virtual switches are
needed, and each virtual switch would have one NIC-enabled partition port per physical adapter port.

Problem: Port flapping with LLDP advertisements enabled on vDS

Description: When MX9116n FSEs are connected to MX7116n FEMs in an MCM group, port flapping has been observed when a
user enables LLDP (Link Layer Discovery Protocol) under Discovery and selects Advertise under Operation in the distributed
virtual switch settings in an end-to-end ESXi scenario.
NOTE: This issue is not seen when using CDP.

Solution: It is recommended to not enable LLDP in the distributed virtual switch settings, as this can cause more than the
expected number of LLDP neighbors (one for the iDRAC and one for the adapter port) on the switch interface. To disable the
discovery protocol under the vDS, select the vDS under the Networking listing, choose Configure, and then click Edit. Click the
Advanced listing and, from the Type drop-down, disable the Discovery Protocol or select CDP as appropriate for the topology.
NOTE: For an example of the settings to select, see the vDS Edit Settings screen.

Figure 28. vDS Edit Settings screen

Problem: FCoE session will not establish

Description: In a scenario where the MX9116n FSE is connected to a PowerEdge MX740c or MX840c compute sled with a
QLogic 41262 mezzanine adapter configured to use FCoE, the FCoE session may not establish or re-establish when a connecting
IOM is reset. The MX9116n includes two universal QSFP28 ports, each of which can break out to 4x16Gb or 4x32Gb Fibre
Channel interfaces, for a total of 8 Fibre Channel Forwarder (FCF) ports. This issue is most often seen in MX9116n FSE
configurations with 4 or more FCF ports configured in a vFabric. In the ESXi vmkernel.log, this issue is most easily found as
repeated qfcoe_ctlr_recv_adv events with no "libfcoe: hostX: FIP selected Fibre-Channel Forwarder" event, where X is a
number corresponding to an FCoE vmhba adapter.

Solution: To resolve this, for ESXi 6.5/6.7, update qedf to 1.3.42.0 or newer, as well as the associated qedentv/qedrntv and
qedil/qedi drivers, to versions matching the Dell EMC qualified combinations.

Problem: Inconsistencies found in VMNIC enumerations with NPAR

Description: In a vDS environment, it is important that vmnic enumeration is consistent between hosts in order to avoid
misconfiguration. In NPAR setups, there is a risk of unaligned PCI-to-vmnic enumeration depending on which NPAR
configuration existed the first time ESXi booted.

Solution: Make sure to configure the final NPAR configuration (with the server template) before booting ESXi for the first
time. If the order of MAC addresses is in sequence, then the order is most likely correct. Another possible solution is to
reconfigure in accordance with the How VMware ESXi determines the order in which names are assigned to devices KB article.

Problem: MX7116n FEMs are not discovered when creating a SmartFabric

Description: In the PowerEdge MX chassis, the Link Layer Discovery Protocol (LLDP) advertisements from the blade NICs may
not be visible to the IOMs. Running the "show lldp neighbors" command from the IOM does not list the NICs. In the blade
iDRAC, the status of the NICs is shown as Unknown, and the Switch Connection ID and Switch Port Connection ID are shown as
Not Applicable. This issue may prevent the MX7116n from being discovered when creating a SmartFabric.
NOTE: For an example of the IOM not listing the NIC, see NICs not listed in show lldp neighbors output. For an example of the
Unknown NIC status in iDRAC, see Unknown NIC status in iDRAC.

Solution: To resolve the issue, consider the following points in the end-to-end ESXi scenarios described here:
1. It is recommended to not enable LLDP in the distributed virtual switch settings. LLDP is not a currently supported
   discovery protocol on a distributed virtual switch in ESXi on the MX platform.
2. Make sure to disable Beacon Probing and revert to Link status only on all port groups. This can be done under port group
   settings > Teaming and failover.
   NOTE: To see an example of the Teaming and failover section and the options available, see Teaming and failover settings.
3. If the NICs are configured for jumbo frames, try turning this off.
4. Set up traffic filtering (ACL) to drop LLDP packets in the ingress and egress directions. Ensure that the same ACL does
   not exist on any physical switch or virtual switch to which the SmartFabric is expected to be interconnected.

Figure 29. NICs not listed in show lldp neighbors output

Figure 30. Unknown NIC status in iDRAC

Figure 31. Teaming and failover settings
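In addition to the OME-M, vSphere Client, and SmartFabric OS10 steps above, a few supporting checks can be run from the ESXi
shell for the FCoE and VMNIC enumeration issues listed in Table 3. These are standard ESXi commands rather than steps taken
from this guide.

esxcli network nic list
(lists the vmnics with their MAC addresses, which helps confirm NPAR-related vmnic enumeration order)

esxcli software vib list | grep qed
(lists the installed qed* driver packages so the versions can be compared against the Dell EMC qualified combinations)

grep -i fip /var/log/vmkernel.log
(searches for repeated FIP events that are not followed by an FCF selection)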

10
Other VMware Considerations
OpenManage integration for VMware vCenter
OpenManage Integration for VMware vCenter (OMIVV) is designed to streamline the management processes in your data
center environment by allowing you to use VMware vCenter Server to manage your full server infrastructure - both physical and
virtual.
OMIVV supports the following:
● PowerEdge MX modular infrastructure
● PowerEdge MX740c and PowerEdge MX840c servers
● Chassis and Server management using unified Chassis Management IP
● Standalone and Multi-Chassis Configuration deployment modes.
For more information on OMIVV, see the OpenManage Integration for VMware vCenter KB article.

vRealize
In addition to the integration for vCenter, OMIVV includes rights for a server management pack for vRealize Operations
Manager. This leverages the vCenter integration to provide new dashboards, giving hardware relationship mapping into the
vRealize Operations data for better troubleshooting and combined reporting.
For additional information on vRealize Operations, see the VMware vRealize Operations product page.

VMmark
VMmark is a free tool used to measure the performance, scalability, and power consumption of virtualization platforms.
The VMmark benchmark:
● Allows accurate and reliable benchmarking of virtual data center performance and power consumption.
● Allows comparison of the performance and power consumption of different virtualization platforms.
● Can be used to determine the performance effects of changes in hardware, software, or configuration within the
virtualization environment.
For additional information on VMmark, see the VMmark product page.



A
Full Switch Mode Example
The Dell EMC Networking MX9116n Fabric Switching Engine (FSE) operates in one of two modes:
● Full Switch mode (Default) - All switch-specific SmartFabric OS10 capabilities are available.
● SmartFabric mode - Switches operate as a Layer 2 I/O aggregation fabric and are managed through the OpenManage
Enterprise-Modular (OME-M) console.
This section describes the Full Switch mode configuration for the scenarios in this guide.

Full switch mode


In Full Switch mode, all SmartFabric OS10 features and functions supported by the hardware are available to the user. The MX
switch operates the same way as any other SmartFabric OS10 switch. Configuration is primarily done through the CLI; however,
the following items can be configured or managed using the OME-M GUI:
● Initial switch deployment, which includes configuration of the hostname, password, SNMP, and NTP.
● Set ports administratively up or down, and configure the MTU.
● Monitor health, logs, alerts, and events.
● Update the SmartFabric OS10 software.
● View the physical topology.
● Switch power management.
Full Switch mode is typically used when a desired feature or function is not available when operating in SmartFabric Services
mode. For more information about Dell EMC SmartFabric OS10 operations, find the appropriate version of the SmartFabric OS10
User Guide in the OME-M and OS10 compatibility and documentation table.

Ethernet switch configuration


The following section outlines example configuration commands issued to the Dell EMC Networking MX9116n switches. The
switches start at their factory default setting. Find the appropriate version of the SmartFabric OS10 User Guide in the OME-M
and OS10 compatibility and documentation table for more information.
Below are the steps to configure the MX9116n switches:
1. Set the switch hostname and management IP address.
2. Configure the VLT between the switches.
3. Configure the VLANs.
4. Configure the port channels to connect to the upstream switches.
5. Configure the ports connected to the upstream switches.
Use the following commands to set the hostname, and to configure the OOB management interface and default gateway.

MX9116n-A1 MX9116n-A2

configure terminal configure terminal

hostname MX9116n-A1 hostname MX9116n-A2

interface mgmt 1/1/1 interface mgmt 1/1/1


no ip address dhcp no ip address dhcp
no shutdown no shutdown
ip address 100.67.XX.XX/24 ip address 100.67.YY.YY/24

management route 0.0.0.0/0 100.67.XX.XX management route 0.0.0.0/0 100.67.YY.YY



Configure the VLT between switches using the following commands. VLT configuration involves setting a discovery interface
range and discovering the VLT peer in the VLTi.
NOTE: The default speed of port-groups 1/1/10 through 1/1/12 is 100g-2x (QSFP28-DD 200 GbE connection), which means that
ports 1/1/35 through 1/1/40 are all 100 GbE ports. Only port-groups 1/1/11 and 1/1/12 (ports 1/1/37 through 1/1/40) are used
in this example.

MX9116n-A1 MX9116n-A2

interface range ethernet1/1/37-1/1/40 interface range ethernet1/1/37-1/1/40


description VLTi description VLTi
no shutdown no shutdown
no switchport no switchport

vlt-domain 1 vlt-domain 1
backup destination 100.67.YY.YY backup destination 100.67.XX.XX
discovery-interface ethernet discovery-interface ethernet
1/1/37-1/1/40 1/1/37-1/1/40
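After the VLTi interfaces are up, the VLT domain state can be checked from either switch. This verification step is a general
SmartFabric OS10 suggestion rather than part of the original example; confirm the command output against the SmartFabric OS10
User Guide for the release in use.

MX9116n-A1# show vlt 1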

Configure the required VLANs on each switch. In this example, VLAN 10 is used.

MX9116n-A1 MX9116n-A2

interface vlan10 interface vlan10


description To-Leaf_1 description To-Leaf_2
no shutdown no shutdown
mtu 9216 mtu 9216

Configure the port channels that connect to the upstream switches. The LACP protocol is used to create the dynamic LAG.
Trunk ports allow tagged VLANs to traverse the trunk link. In this example, the trunk is configured to allow VLAN 10.

MX9116n-A1 MX9116n-A2

interface port-channel1 interface port-channel1


description To-Leaf_1 description To-Leaf_2
no shutdown no shutdown
switchport mode trunk switchport mode trunk
switchport trunk allowed vlan 10 switchport trunk allowed vlan 10
vlt-port-channel 1 vlt-port-channel 1

interface ethernet1/1/41 interface ethernet1/1/41


description To-Leaf_1 description To-Leaf_1
no shutdown no shutdown
no switchport no switchport
channel-group 1 mode active channel-group 1 mode active

interface ethernet1/1/42 interface ethernet1/1/42


description To-Leaf_2 description To-Leaf_2
no shutdown no shutdown
no switchport no switchport
channel-group 1 mode active channel-group 1 mode active

end end
write memory write memory
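Once the configuration is saved, the uplink port channel and VLAN state can be confirmed with standard SmartFabric OS10 show
commands. This verification step is an addition to the original example.

MX9116n-A1# show port-channel summary
MX9116n-A1# show vlan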

FCoE switch configuration


The following section outlines the Fibre Channel configuration commands issued to the Dell EMC Networking MX9116n switches.
The initial configuration of the switch included host name, management interface IP address, VLAN and VLT configuration
described in Ethernet Switch Configuration section of this chapter. Find the appropriate version of the SmartFabric OS10 User
Guide in the OME-M and OS10 compatibility and documentation table for more information.
Below are the steps of FCoE configuration on the MX9116n switches in F-Port mode and NPG mode:



1. Enable the FC and DCBx feature.
2. Configure the Class and Policy maps.
3. Configure the FC Alias, FC Zone, and the FC Zoneset.
4. Configure the vFabric, interfaces facing downstream switches, if any, and the Fibre Channel interfaces that face the FC
switch.
Use the following commands to enable the FC and DCBx features on the switch.

MX9116n-A1 MX9116n-A2

feature fc {NPG, domain-id 100} feature fc {NPG, domain-id 100}

dcbx enable dcbx enable

NOTE: The FC feature has two options, NPG mode and F-Port mode.

Use the following commands to configure the class maps and policy maps.

MX9116n-A1 MX9116n-A2

class-map type network-qos class_Dot1p_3 class-map type network-qos class_Dot1p_3


match qos-group 3 match qos-group 3

trust dot1p-map map_Dot1pToGroups trust dot1p-map map_Dot1pToGroups


qos-group 3 dot1p 3 qos-group 3 dot1p 3

qos-map traffic-class map_GroupsToQueues qos-map traffic-class map_GroupsToQueues


queue 3 qos-group 3 queue 3 qos-group 3

policy-map type network-qos policy-map type network-qos


policy_Input_PFC policy_Input_PFC

class class_Dot1p_3 class class_Dot1p_3


pause pause
pfc-cos 3 pfc-cos 3

policy-map type queuing policy-map type queuing


policy_Output_BandwidthPercent policy_Output_BandwidthPercent

class map_ETSQueue_3 class map_ETSQueue_3


bandwidth percent 98 bandwidth percent 98

system qos system qos


trust-map dot1p map_Dot1pToGroups trust-map dot1p map_Dot1pToGroups
qos-map traffic-class map_GroupsToQueues qos-map traffic-class map_GroupsToQueues
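Optionally, the class map, policy map, and system QoS settings applied above can be reviewed before moving on to the FC
configuration. These show commands are a general OS10 sketch rather than steps from this guide; confirm the exact command
names in the SmartFabric OS10 User Guide for the release in use.

MX9116n-A1# show class-map
MX9116n-A1# show policy-map
MX9116n-A1# show qos system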

The following commands are used to configure FC Alias, FC Zone and FC Zoneset on the switch for the Initiator (Server
NIC/CNA) and target (Unity).

MX9116n-A1 MX9116n-A2

fc alias A1 fc alias A2
member wwn 20:01:06:3c:f9:a5:cd:00 member wwn 20:01:06:3c:f9:a5:cd:01
fc alias SPA0 fc alias SPA1
member wwn 50:06:01:66:47:e0:1b:19 member wwn 50:06:01:67:47:e0:1b:19
fc alias SPB0 fc alias SPB1
member wwn 50:06:01:6e:47:e0:1b:19 member wwn 50:06:01:6f:47:e0:1b:19

fc zone Zone-A1 fc zone Zone-A2


member alias-name A1 member alias-name A2
member alias-name SPA0 member alias-name SPA1
member alias-name SPB0 member alias-name SPB1



MX9116n-A1 MX9116n-A2

fc zoneset ZSA1 fc zoneset ZSA2


member Zone-A1 member Zone-A2

The following commands are used to configure vFabric, and the FCoE and FC interfaces.

MX9116n-A1 MX9116n-A2

vfabric 101 vfabric 102


vlan 40 vlan 30
fcoe fcmap 0xEFC00 fcoe fcmap 0xEFC01
zoneset activate ZSA1 zoneset activate ZSA2

interface ethernet1/1/41 interface ethernet1/1/41


no shutdown no shutdown
switchport access vlan 1 switchport access vlan 1
flowcontrol receive off flowcontrol receive off
flowcontrol transmit off flowcontrol transmit off
priority-flow-control mode on priority-flow-control mode on
service-policy input type network-qos service-policy input type network-qos
policy_Input_PFC policy_Input_PFC
service-policy output type queuing service-policy output type queuing
policy_Output_BandwidthPercent policy_Output_BandwidthPercent
ets mode on ets mode on
vfabric 101 vfabric 102
! !
interface ethernet1/1/42 interface ethernet1/1/42
no shutdown no shutdown
switchport access vlan 1 switchport access vlan 1
flowcontrol receive off flowcontrol receive off
flowcontrol transmit off flowcontrol transmit off
priority-flow-control mode on priority-flow-control mode on
service-policy input type network-qos service-policy input type network-qos
policy_Input_PFC policy_Input_PFC
service-policy output type queuing service-policy output type queuing
policy_Output_BandwidthPercent policy_Output_BandwidthPercent
ets mode on ets mode on
vfabric 101 vfabric 102

interface fibrechannel1/1/44:1 interface fibrechannel1/1/44:1


no shutdown no shutdown
vfabric 101 vfabric 102
! !
interface fibrechannel1/1/44:2 interface fibrechannel1/1/44:2
no shutdown no shutdown
vfabric 101 vfabric 102
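After the vfabric is applied to the Ethernet and Fibre Channel interfaces, the same validation commands shown earlier in this
guide can be used in Full Switch mode to confirm the configuration, for example:

MX9116n-A1# show vfabric
MX9116n-A1# show fc ns switch
MX9116n-A1# show fcoe sessions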



B
Configure a Virtual Distributed Switch
Standard vs distributed switches
There are two choices for virtual switches that can be used in your VMware environment: the standard switch and the virtual
distributed switch (vDS). The vDS is used in most cases because of the additional features offered over the standard switch.
The examples provided in this guide were configured with a vDS. The basic steps for creating the vDS are provided below.
NOTE: For detailed information on the features of each virtual switch type, refer to Chapters 2 and 3 of VMware: vSphere
Networking.

NOTE: To create a standard switch, see the Create a vSphere Standard Switch KB article.

Create and configure vDS


The examples provided in this guide utilized vDS. The following steps were used to create and configure the vDS:
1. Open vCenter.
2. Under the Network tab, right-click Host cluster, select Distributed Switch, and then select New Distributed Switch.
3. Name the new Virtual Distributed Switch and click Next.
4. Select the ESXi version for the Distributed Switch and click Next.
5. Select the number of uplinks and name the Distributed Port Group. Click Next.
6. Verify all the information and click Finish.
For additional information on creating a vDS, see Table 1.

Add hosts to vDS


Below are the steps to add and manage hosts and VMs to a vDS.
1. Right-click on Virtual Distributed Switch and select Add and Manage hosts.
2. Select the option Add host and click Next.
3. Click New hosts, select all compute hosts, and click OK.
4. Right-click the VM, select Edit Settings, and select the vDS as the Network Port Group.
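To confirm from an individual host that it is attached to the vDS, the distributed switch membership can also be listed from
the ESXi shell. This is a general ESXi command rather than a step taken from this guide.

esxcli network vswitch dvs vmware list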



Figure 32. Managing the vDS



C
Storage Considerations

Using Raw Device Mapping (RDM)


An RDM provides several benefits and should be used when connecting to raw storage devices.
Using RDMs, you can:
● Use vMotion to migrate virtual machines using raw LUNs.
● Add raw LUNs to virtual machines using the vSphere Web Client.
● Use file system features such as distributed file locking, permissions, and naming.
● Enable direct communication between the VM and the disk.
For more information about RDM, see the VMware Docs: Raw Device Mapping guide.
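For reference, an RDM mapping file can also be created from the ESXi shell with vmkfstools; RDMs are more commonly added
through the vSphere Web Client as described above. The device identifier and paths below are placeholders and must be replaced
with values from the actual environment.

vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/vm1/rdm1.vmdk
(creates a physical compatibility RDM pointer file)

vmkfstools -r /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/vm1/rdm1.vmdk
(creates a virtual compatibility RDM pointer file)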

Configure Unity
For the storage connectivity examples in Chapter 6 and Chapter 7, a Dell EMC Unity 500F storage array is used. Below are the
steps of how to configure and setup Unity.
1. Enter the management IP address of the Unity into a web browser.
2. Provide username and password.
3. Under the System tab, click System View and select Enclosures. From the drop-down menu, select DPE. Verify ports are
connected, up, and in a healthy state.
4. Under the Storage tab, click Pools. Click the plus (+) sign at the top left corner to create a Pool. Assign pool name, select a
drive, select storage capacity and click Finish.
5. Under the Access tab click Hosts. Click the plus (+) sign at the top left corner. From the drop-down menu select the host
option.
6. Assign an appropriate host name, host description (optional) and select the Operating System for the host.

Figure 33. Creation of a host in Unity
7. Select the Initiator from the top window, if it was discovered automatically. Otherwise add the Initiator information
manually by clicking the plus (+) sign button.
8. In the WWN tab, provide a CNA Virtual WWN, then the CNA Virtual WWPN, separating them with a colon.

Figure 34. Discovery of initiator in Unity

NOTE: To find the CNA WWN information, go to OME-M and click Devices, select Compute, and then click the
desired server sled. From the Hardware tab, select Network Devices. Expand NIC/CNA to find the port information
along with the NIC/CNA “WWN” information. It will look similar to what is shown in this example.
9. Double-click the host. Assign a Host server management IP.

Figure 35. Assign host MGMT IP address
10. Create LUNs (Logical Unit Numbers) with the desired disk size, as appropriate for the specific environment.

Figure 36. Create LUNs for host in Unity


11. The initiator path is discovered automatically.

Figure 37. Discover Initiator Path of host in Unity

D
Versions Used in this Guide
The following tables include the hardware, software, and firmware used to configure and validate the scenarios and topologies
in this guide.

Validated components
Dell EMC PowerSwitch

Table 4. Dell EMC PowerSwitch systems


Qty   Item                                                     Operating system version
2     Dell EMC PowerSwitch S5232F-ON leaf switches             10.5.2.3
1     Dell EMC PowerSwitch S3048-ON OOB management switch      10.5.2.3

Dell EMC PowerEdge MX7000 chassis and components

Table 5. Dell EMC PowerEdge MX7000 chassis and components


Qty Item Version
2 Dell EMC PowerEdge MX7000 chassis -
4 Dell EMC PowerEdge MX740c sled (2 per chassis) A01
4 Dell EMC PowerEdge M9002m modules (2 per chassis) 1.30.00
2 Dell EMC Networking MX9116n FSE (1 per chassis) 10.5.2.4
2 Dell EMC Networking MX7116n FEM (1 per chassis) N/A
2 Dell EMC Networking MX5108n 10.5.2.4

Minimum software and firmware requirements

Table 6. Minimum software and firmware requirements for MX9116n FSE


Software            Minimum release requirement
Operating system    10.5.2.4
ONIE                v3.35.1.1-15
BIOS                v3.35.0.1-4
CPLD system         v0xd

Table 7. Minimum software and firmware requirements for MX5108


Software            Minimum release requirement
Operating system    10.5.2.4
ONIE                v3.35.1.1-15
BIOS                v3.35.0.1-4
CPLD system         v0xd

Dell EMC PowerEdge MX740c compute sled details



Table 8. Dell EMC PowerEdge MX740c compute sled details
Qty per sled   Item                                                               Firmware version
1 QLogic QL41262HMKR (25G) mezzanine CNA 15.20.16
2 Intel(R) Xeon(R) Silver 4114 CPU @ 2.20 GHz -
12 16 GB DDR4 DIMMs (192 GB total) -
1 Boot Optimized Storage Solution (BOSS) Controller w/ 2x 240 GB SATA SSDs 2.6.13.3020
1 PERC H730P MX 25.5.5.0005
3 600 GB SAS HDD -
- Dell OS Driver Pack 19.12.05
- BIOS 2.10.2
- iDRAC with Lifecycle Controller 4.40.10.00

VMware details

Table 9. VMware details


Item Version
VMware vCenter Server Appliance (VCSA) 7.0.2 (17694817)
VMware-VMvisor-Installer (ESXi) 7.0.2 (17630552)
VMware Remote Console (VMRC) 12.0.0
VMware Virtual Distributed Switch (vDS) 7.0.2



E
References

Dell Technologies documentation


The following Dell Technologies documentation provides additional and relevant information. Access to some of these
documents will depend on your login credentials. If you do not have access to a document, contact your Dell Technologies
representative.


Dell EMC PowerEdge MX Networking Deployment Guide


Dell EMC PowerEdge MX SmartFabric and Cisco ACI Integration Guide
Dell EMC PowerEdge MX Management Module Cabling
PowerEdge MX I/O Guide
Dell EMC Networking Solutions
Dell EMC Storage Compatibility Matrix for SC Series, PS Series and FS Series
Manuals and documents for Dell EMC PowerEdge MX7000
Manuals and documents for Dell EMC Networking MX9116n
Manuals and documents for Dell EMC Networking MX5108
Manuals and documents for Dell EMC PowerEdge MX740c
Manuals and documents for Dell EMC PowerSwitch S3048-ON
Dell EMC Networking Layer 3 Leaf-Spine Deployment and Best Practices with OS10
Management Networks for Dell EMC Networking
What is the OpenManage Integration for VMware vCenter
OpenManage Integration for VMware vCenter v4.3.0
Raw Device Mapping (RDM) in VMware

OME-M and OS10 compatibility and documentation


The following table shows the compatibility matrix of OME-M and OS10, and links to User Guides and Release Notes for all
versions.

Table 10. OME-M and OS10 compatibility and documentation


OME-M version   OS10 version   OS10 User Guide   OS10 Release Notes a   OME-M Chassis User Guide   OME-M Release Notes
1.10.00 10.5.0.1 Page Link Link Page Link Link
PDF Link PDF Link

1.10.20 10.5.0.5 Page Link Link Page Link Link


PDF Link PDF Link

1.20.00 10.5.0.7 Page Link Link Page Link Link


PDF Link PDF Link

1.20.10 10.5.1.6 Page Link Link Page Link Link
PDF Link PDF Link

1.30.00 10.5.2.4 Page Link Link Page Link Link


PDF Link PDF Link

a. To access the OS10 Release Notes, you must log in to your Dell Digital Locker account.

Support and feedback


For technical support, visit https://www.dell.com/support or call (USA) 1-800-945-3355.
We encourage readers to provide feedback on the quality and usefulness of this publication by sending an email to
[email protected].
