Cisco UCS B200 M3 Blade Server Spec Sheet
This product has been discontinued.
OVERVIEW
Delivering performance, versatility and density without compromise, the Cisco UCS B200 M3 Blade Server
addresses the broadest set of workloads, from IT and web infrastructure through distributed database.
The enterprise-class Cisco UCS B200 M3 blade server extends the capabilities of Cisco’s Unified Computing
System portfolio in a half-width blade form factor. The Cisco UCS B200 M3 harnesses the power of the latest
Intel® Xeon® E5-2600 v2 and E5-2600 series processor family CPUs with up to 768 GB of RAM (using 32 GB
DIMMs), 2 drives, and up to 80 Gbps throughput connectivity.
DETAILED VIEWS
Blade Server Front View
Figure 2 is a detailed front view of the Cisco UCS B200 M3 Blade Server.
Notes...
1. For information about the KVM local I/O cable that plugs into the console connector (a cable is included with
every Cisco UCS 5100 Series blade server chassis accessory kit), see ORDER OPTIONAL KVM LOCAL I/O CABLE*
on page 37.
NOTE: The B200 M3 blade server requires UCS Manager (UCSM) to operate as
part of the UCS system.
■ The B200 M3 with E5-2600 CPUs requires UCSM 2.0.2(q) or later
■ The B200 M3 with E5-2600 v2 CPUs requires UCSM 2.1.3 or later
Capability/Feature Description
Chassis The UCS B200 M3 Blade Server mounts in a Cisco UCS 5100 series blade
server chassis
CPU One or two Intel® E5-2600 v2 or E5-2600 series processor family CPUs
Memory 24 total slots for registered ECC DIMMs for up to 768 GB total memory
capacity (B200 M3 configured with 2 CPUs using 32 GB DIMMs)
Adapters ■ One connector for Cisco’s VIC 1340 or 1240 adapter, which provides Ethernet and Fibre Channel over Ethernet (FCoE)
■ One connector for various types of Cisco adapters, Emulex or QLogic adapters, and Cisco UCS Storage Accelerator adapters
Storage controller ■ SAS/SATA support
■ RAID 0 and 1 and JBOD
Internal storage devices Up to two optional, front-accessible, hot-swappable 2.5-inch small form
factor (SFF) SAS or SATA solid-state disks (SSDs) or hard disk drives (HDDs).
An internal USB 2.0 port is also supported. A 4 GB USB 2.0 device is available
from Cisco.
Video The Cisco Integrated Management Controller (CIMC) provides video using the Matrox G200e video/graphics controller.
Power subsystem Integrated in the Cisco UCS 5100 series blade server chassis
Fans Integrated in the Cisco UCS 5100 series blade server chassis
Integrated management processor The built-in Cisco Integrated Management Controller (CIMC) GUI or CLI interface enables you to monitor the server inventory, health, and system event logs.
Cisco UCS Diagnostics for Cisco UCS B-Series Blade Servers The Cisco UCS Blade Server Diagnostics tool enables you to verify the health of the hardware components on your servers. The diagnostics tool provides a variety of tests to exercise and stress the various hardware subsystems on the Cisco UCS Blade Servers, such as memory and CPU. You can use the tool to run a sanity check on the state of your Cisco UCS Blade Servers after you fix or replace a hardware component. You can also use this tool to run comprehensive burn-in tests before you deploy a new Cisco UCS Blade Server in your production environment.
See the following links for more information:
User Guide:
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/ucs_di
agnostics/b_UCS_Blade_Server_Diagnostics_User_Guide.html
ISO Download:
http://www.cisco.com/cisco/software/navigator.html
UCSB-B200-M3 UCS B200 M3 Blade Server w/o CPU, memory, HDD, VIC 1340 or 1240 adapter, or
mezzanine adapters
The base Cisco UCS B200 M3 blade server does not include the following components. They must
be selected during product ordering:
■ CPUs
■ Memory
■ Disk drives
■ Cisco adapters (such as the VIC 1340, VIC 1240, VIC 1380, VIC 1280, and Port Expander)
■ Emulex and QLogic network adapters
■ Cisco UCS Storage Accelerators (such as the Fusion-io and LSI Logic adapter)
NOTE: Use the steps on the following pages to order servers with the
configurable components that you want configured in your servers.
■ Intel Xeon E5-2600 v2 and E5-2600 series processor family CPUs. See the following link for
instructions on how to upgrade your server from Intel Xeon E5-2600 to Intel Xeon E5-2600 v2
CPUs as well as how to upgrade to 1866-MHz DIMMs (supported on E5-2600 v2 CPUs):
http://www.cisco.com/en/US/docs/unified_computing/ucs/hw/CPU/IVB/install/IVB-B.html
Table 3 Supported Intel CPUs: E5-2600 v2 and E5-2600 Series Processor Family CPUs
Supported Configurations
■ Choose one CPU from any one of the rows of Table 3 on page 9.
■ Choose two identical CPUs from any one of the rows of Table 3 on page 9.
Caveats
■ The B200 M3 configured with 1 CPU provides limited network connectivity options. The
following restrictions apply:
— A virtual interface card (VIC), the VIC 1340 or 1240, must always be installed in the VIC 1340/1240 adapter slot.
■ DIMMs
— Clock speed: 1866, 1600, or 1333 MHz
— Ranks per DIMM: 1, 2, or 4
— Operational voltage: 1.35 or 1.5 V
— Registered
■ DDR3 ECC registered DIMMs (RDIMMs) or load-reduced DIMMs (LRDIMMs)
■ Memory is organized with four memory channels per CPU, with up to three DIMMs per channel (DPC), as shown in Figure 3. Maximum memory capacity is 768 GB (B200 M3 configured with 2 CPUs and 32 GB DIMMs); a worked capacity calculation follows this list.
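The maximum capacity quoted above is simple arithmetic on the channel layout. The following Python sketch (illustrative only, not a Cisco tool) reproduces the calculation:

# Illustrative arithmetic only; values taken from the memory description above.
CPUS = 2                 # B200 M3 configured with 2 CPUs
CHANNELS_PER_CPU = 4     # four memory channels per CPU
DIMMS_PER_CHANNEL = 3    # up to three DIMMs per channel (DPC)
LARGEST_DIMM_GB = 32     # 32 GB LRDIMM

slots = CPUS * CHANNELS_PER_CPU * DIMMS_PER_CHANNEL          # 24 DIMM slots
max_capacity_gb = slots * LARGEST_DIMM_GB                    # 768 GB

print(f"{slots} DIMM slots, {max_capacity_gb} GB maximum")   # 24 DIMM slots, 768 GB maximum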
Select the memory configuration and whether or not you want the memory mirroring option. The supported memory DIMMs in the UCS B200 M3 and the mirroring option are listed in Table 4.
DIMM Options
Product ID (PID)   PID Description   Voltage   Ranks/DIMM
Notes...
1. Dual voltage DIMM (operates at 1.5 V when the BIOS is set for memory performance mode (default), or 1.35 V when the BIOS is set for power-savings mode).
The DDR3 DIMMs that have been discontinued but are still supported are shown in Table 5.
DIMM Options
Product ID (PID)   PID Description   Voltage   Ranks/DIMM
UCS-MR-1X041RY-A 4 GB DDR3-1600-MHz RDIMM/PC3-12800/1R/x4/1.35V 1.5/1.35 V 1
UCS-MR-1X082RX-A 8 GB DDR3-1333-MHz RDIMM/PC3-10600/2R/x4/1.35V 1.5/1.35 V 2
UCS-MR-1X082RZ-A 8 GB DDR3-1866-MHz RDIMM/PC3-14900/2R/x4/1.5 1.5 V 2
Supported Configurations
■ Without memory mirroring: select from 1 to 12 DIMMs for CPU 1 (note that there are 12 DIMM slots available).
■ With memory mirroring: select 2, 4, 6, 8, or 12 DIMMs for CPU 1. The DIMMs will be placed by the factory as shown in Table 6:
Table 6 DIMM Placement With Memory Mirroring (B200 M3 configured with 1 CPU)
Number of DIMMs per CPU   DIMM Placement in Banks for CPU 1 (with memory mirroring implemented)
2    2 DIMMs in Bank 0 (A0, B0)
4    4 DIMMs in Bank 0 (A0, B0, C0, D0)
6 (see note 1)    3 DIMMs in Bank 0 (A0, B0, C0) and 3 DIMMs in Bank 1 (A1, B1, C1)
8    4 DIMMs in Bank 0 (A0, B0, C0, D0) and 4 DIMMs in Bank 1 (A1, B1, C1, D1)
12   4 DIMMs in Bank 0 (A0, B0, C0, D0), 4 DIMMs in Bank 1 (A1, B1, C1, D1), and 4 DIMMs in Bank 2 (A2, B2, C2, D2)
Notes...
1. Not recommended (for performance reasons)
■ Select the memory mirroring option (N01-MMIRROR) as shown in Table 4 on page 13.
■ Without memory mirroring: select from 1 to 12 DIMMs per CPU (note that there are 12 DIMM slots per CPU).
■ With memory mirroring: select 2, 4, 6, 8, or 12 DIMMs per CPU. The DIMMs will be placed by the factory as shown in Table 7:
Table 7 DIMM Placement With Memory Mirroring (B200 M3 configured with 2 CPUs)
Number of DIMMs per CPU   DIMM Placement in Banks (with memory mirroring implemented)
2    CPU 1: 2 DIMMs in Bank 0 (A0, B0); CPU 2: 2 DIMMs in Bank 0 (E0, F0)
4    CPU 1: 4 DIMMs in Bank 0 (A0, B0, C0, D0); CPU 2: 4 DIMMs in Bank 0 (E0, F0, G0, H0)
6 (see note 1)    CPU 1: 3 DIMMs in Bank 0 (A0, B0, C0) and 3 DIMMs in Bank 1 (A1, B1, C1); CPU 2: 3 DIMMs in Bank 0 (E0, F0, G0) and 3 DIMMs in Bank 1 (E1, F1, G1)
8    CPU 1: 4 DIMMs in Bank 0 (A0, B0, C0, D0) and 4 DIMMs in Bank 1 (A1, B1, C1, D1); CPU 2: 4 DIMMs in Bank 0 (E0, F0, G0, H0) and 4 DIMMs in Bank 1 (E1, F1, G1, H1)
12   CPU 1: 4 DIMMs in Bank 0 (A0, B0, C0, D0), 4 DIMMs in Bank 1 (A1, B1, C1, D1), and 4 DIMMs in Bank 2 (A2, B2, C2, D2); CPU 2: 4 DIMMs in Bank 0 (E0, F0, G0, H0), 4 DIMMs in Bank 1 (E1, F1, G1, H1), and 4 DIMMs in Bank 2 (E2, F2, G2, H2)
Notes...
1. Not recommended (for performance reasons)
■ Select the memory mirroring option (N01-MMIRROR) as shown in Table 4 on page 13.
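For quick reference, the factory placement in Table 6 and Table 7 can be expressed as a lookup keyed by DIMMs per CPU. The Python sketch below is illustrative only (the names are hypothetical); the bank assignments are restated from the tables and apply to either CPU, with CPU 1 using slots A-D and CPU 2 using slots E-H.

# Bank assignments from Table 6 / Table 7 (memory mirroring).
# Keys are DIMMs per CPU; values list the banks the factory populates.
MIRRORED_PLACEMENT = {
    2:  ["Bank 0 (x2)"],                     # A0, B0 (CPU 1) / E0, F0 (CPU 2)
    4:  ["Bank 0 (x4)"],                     # A0-D0 / E0-H0
    6:  ["Bank 0 (x3)", "Bank 1 (x3)"],      # not recommended for performance reasons
    8:  ["Bank 0 (x4)", "Bank 1 (x4)"],
    12: ["Bank 0 (x4)", "Bank 1 (x4)", "Bank 2 (x4)"],
}

def mirrored_banks(dimms_per_cpu: int) -> list:
    """Return the banks the factory populates for a mirrored configuration."""
    if dimms_per_cpu not in MIRRORED_PLACEMENT:
        raise ValueError("With mirroring, select 2, 4, 6, 8, or 12 DIMMs per CPU")
    return MIRRORED_PLACEMENT[dimms_per_cpu]

print(mirrored_banks(8))   # ['Bank 0 (x4)', 'Bank 1 (x4)']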
NOTE: System performance is optimized when the DIMM type and quantity are equal
for both CPUs, and when all channels are filled equally across the CPUs in the server.
Caveats
■ System speed is dependent on how many DIMMs are populated per channel. See Table 8 for
details.
Table 8 DIMM Speed (MHz) as a Function of DIMM Type, Operating Voltage, and DIMMs per Channel (DPC)
In summary: 1333-MHz DIMMs run at 1333 MHz at 1 or 2 DPC and drop to 1066 MHz at 3 DPC. 1600-MHz DIMMs run at 1600 MHz at 1 or 2 DPC on CPUs that support 1600-MHz memory (otherwise 1333 MHz) and drop to 1066 MHz at 3 DPC (1333 MHz in some E5-2600 v2 cases; see note 3). 1866-MHz DIMMs operate at 1.5 V only (1.35 V is not applicable) and run at 1866, 1600, or 1333 MHz at 1 or 2 DPC depending on CPU capability, dropping to 1066 or 1333 MHz at 3 DPC.
Notes...
1. NA = not applicable
2. These DIMMs operate at 1333 MHz instead of 1600 MHz when used with any E5-2600 CPUs. They operate at 1600
MHz when used with E5-2600 v2 CPUs that support 1600- and 1866-MHz speeds.
3. For E5-2600 v2 CPUs, 8-GB 1600-MHz DIMMs at 3 DIMMs per channel currently are set to operate at 1066 MHz instead of 1333 MHz.
4. 1866-MHz DIMMs are only offered and supported with E5-2600 v2 CPU-configured servers.
■ For optimum performance, do not mix DIMMs with different frequencies. If you mix DIMM
frequencies, the system defaults to the lower frequency.
■ Do not mix RDIMMs with LRDIMMs
■ DIMMs for CPU 1 and CPU 2 (when populated) must always be configured identically.
■ Memory mirroring reduces the amount of available memory by 50% (quantity of DIMMs must
be even for mirroring).
■ Starting with UCSM 2.0.4, DIMMs run in memory performance mode (1.5 V) by BIOS default, which yields faster memory speeds than when the BIOS is set for the memory to run in power-savings mode. Memory speed is dependent on factors such as:
— CPU choice
— DIMM choice
— DIMM population (how many DIMMs per channel are populated)
— BIOS setting
For the DIMMs to run in power-savings mode (1.35 V, if the DIMM supports this), change the BIOS setting to power-savings mode.
■ With 3 RDIMMs populated per channel, memory always runs at 1.5 V regardless of whether the BIOS is set to power-savings mode (1.35 V) or performance mode (1.5 V).
■ With 3 LRDIMMs populated per channel, memory can operate at 1.5 V or 1.35 V, depending
on the BIOS setting.
For more information regarding memory, see CPUs and DIMMs on page 39.
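The caveats above lend themselves to a short illustrative sketch. The helper names below are hypothetical; the rules encoded (mixed frequencies fall back to the lowest speed, mirroring halves usable capacity, and 3 RDIMMs per channel force 1.5 V operation) are restated from this section, and Table 8 remains the authoritative speed reference.

# Illustrative only: encodes the memory caveats stated above, not a Cisco utility.

def effective_speed_mhz(dimm_speeds_mhz: list) -> int:
    """If DIMM frequencies are mixed, the system defaults to the lowest frequency."""
    return min(dimm_speeds_mhz)

def usable_capacity_gb(total_capacity_gb: int, mirroring: bool) -> float:
    """Memory mirroring reduces the amount of available memory by 50%."""
    return total_capacity_gb / 2 if mirroring else total_capacity_gb

def rdimm_voltage(dimms_per_channel: int, bios_power_savings: bool) -> float:
    """With 3 RDIMMs per channel, memory always runs at 1.5 V regardless of the BIOS setting."""
    if dimms_per_channel == 3:
        return 1.5
    return 1.35 if bios_power_savings else 1.5

print(effective_speed_mhz([1866, 1600]))           # 1600
print(usable_capacity_gb(256, mirroring=True))     # 128.0
print(rdimm_voltage(3, bios_power_savings=True))   # 1.5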
NOTE: The UCS B200 M3 blade server meets the external storage target and switch
certifications as described in the following link:
http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperabilit
y/matrix/Matrix8.html#wp323852
Choose Drives
NOTE: 4K format drives are not qualified or supported at this time with B-Series UCS
servers.
Product ID (PID)   PID Description   Drive Type   Capacity
HDDs
12 Gbps Drives
UCS-HD600G15K12G 600 GB 12G SAS 15K RPM SFF HDD SAS 600 GB
UCS-HD450G15K12G 450 GB 12G SAS 15K RPM SFF HDD SAS 450 GB
UCS-HD300G15K12G 300 GB 12G SAS 15K RPM SFF HDD SAS 300 GB
UCS-HD12TB10K12G 1.2 TB 12G SAS 10K RPM SFF HDD SAS 1.2 TB
UCS-HD900G10K12G 900 GB 12G SAS 10K RPM SFF HDD SAS 900 GB
UCS-HD600G10K12G 600 GB 12G SAS 10K RPM SFF HDD SAS 600 GB
UCS-HD300G10K12G 300 GB 12G SAS 10K RPM SFF HDD SAS 300 GB
SSDs
12 Gbps Drives
UCS-SD800G12S4-EP 800 GB 2.5 inch Enterprise Performance 12G SAS SSD (10X endurance) (Samsung 1635) SAS 800 GB
UCS-SD400G12S4-EP 400 GB 2.5 inch Enterprise Performance 12G SAS SSD (10X endurance) (SanDisk Lightning) SAS 400 GB
6 Gbps Drives
UCS-SD120GBKS4-EV 120 GB 2.5 inch Enterprise Value 6G SATA SSD SATA 120 GB
UCS-SD960GBKS4-EV 960 GB 2.5 inch Enterprise Value 6G SATA SSD (Samsung PM863) SATA 960 GB
UCS-SD800G0KS2-EP 800 GB SAS 2.5" Enterprise Performance SSD (Samsung 1625) SAS 800 GB
UCS-SD400G0KS2-EP 400 GB SAS 2.5" Enterprise Performance SSD (Samsung 1625) SAS 400 GB
UCS-SD240GBKS4-EV 240 GB 2.5 inch Enterprise Value 6G SATA SSD SATA 240 GB
NOTE: The integrated RAID controller supports hard disk drives (HDDs) or solid state
drives (SSDs). Write cache is not implemented. SSDs, which are an order of magnitude
faster than HDDs, are recommended for applications requiring high-speed local storage.
Supported Configurations
Caveats
■ For RAID configurations, if you select two drives, they must be identical in type (HDD or SSD) and capacity.
■ For JBOD configurations, if you select two drives, you can mix and match any combination of HDD and SSD, regardless of capacity.
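A minimal sketch of the two-drive rules above (hypothetical helper, not part of any Cisco ordering tool):

# Illustrative check of the two-drive caveats above.

def valid_drive_pair(mode: str, type1: str, cap1_gb: int, type2: str, cap2_gb: int) -> bool:
    """RAID pairs must be identical in type (HDD or SSD) and capacity; JBOD allows any mix."""
    if mode.upper() == "RAID":
        return type1 == type2 and cap1_gb == cap2_gb
    return True  # JBOD: mix and match any combination, regardless of capacity

print(valid_drive_pair("RAID", "SSD", 400, "SSD", 400))  # True
print(valid_drive_pair("RAID", "SSD", 400, "HDD", 600))  # False
print(valid_drive_pair("JBOD", "SSD", 400, "HDD", 600))  # True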
Cisco developed Virtual Interface Cards (VICs) to provide flexibility to create multiple NIC and
HBA devices. The VICs also support adapter Fabric Extender and Virtual Machine Fabric Extender
technologies.
Emulex and QLogic Converged Network Adapters (CNAs) consolidate Ethernet and Storage (FC)
traffic on the Unified Fabric by supporting FCoE.
Cisco UCS Storage Accelerator adapters are designed specifically for the Cisco UCS B-Series M3
blade servers and integrate seamlessly to improve performance and relieve I/O bottlenecks.
NOTE: There are two slots on the server. One accommodates Cisco, Emulex, and
QLogic I/O adapters or Cisco Storage Accelerator adapters as well as other options,
and one is a dedicated slot for the VIC 1340 or 1240 adapter only. Table 10 shows
which adapters plug into each of the two slots. Only the VIC 1340 or 1240 adapter
plugs into the VIC 1340/1240 adapter slot. All other adapters plug into the
mezzanine adapter slot.
NOTE: You must have a B200 M3 configured with 2 CPUs to support cards that plug
into the mezzanine connector. The VIC 1340 or 1240 adapter is supported on both 1-
and 2-CPU configured systems.
The supported mezzanine adapters in the UCS B200 M3 are listed in Table 10.
Product ID (PID)   PID Description   Connector
UCSB-MLOM-40G-03   Cisco UCS VIC 1340 adapter for M3 blade servers. Plugs into the dedicated VIC 1340/1240 slot only. Only the VIC 1340 or VIC 1240 can be plugged into the slot.   VIC 1340/1240
UCSB-MLOM-40G-01   Cisco UCS VIC 1240 adapter for M3 blade servers. Plugs into the dedicated VIC 1340/1240 slot only. Only the VIC 1340 or VIC 1240 can be plugged into the slot.   VIC 1340/1240
UCS-VIC-M83-8P   Cisco UCS VIC 1380 dual 40Gb capable Virtual Interface Card   Mezzanine
UCS-VIC-M82-8P   Cisco UCS VIC 1280 dual 40Gb capable Virtual Interface Card   Mezzanine
UCSB-F-FIO-1600MS   UCS 1600 GB Fusion ioMemory3 SX Scale line for B-Series   Mezzanine
UCSB-F-FIO-1300MP   UCS 1300 GB Fusion ioMemory3 PX Performance line for B-Series   Mezzanine
Expander Option
UCSB-MLOM-PT-01   Cisco UCS Port Expander Card. This is a hardware option to enable an additional 4 ports of the VIC 1340 or 1240, bringing the total capability of the VIC 1340/1240 to dual 4 x 10 GbE.   Mezzanine
Notes...
1. Do not mix Fusion io storage accelerator families. That is, do not mix “MP” or “MS” (ioMemory3) with “M”
(ioDrive2) family cards.
Supported Configurations
For a B200 M3 configured with 1 CPU, the supported configurations are listed in Table 11.
Choose one configuration.
Fabric Extender Compatibility   Adapter in VIC 1340/1240 Slot   Adapter in Mezzanine Slot   Ports   Reference
For a B200 M3 configured with 2 CPUs, the supported configurations are listed in Table 12.
Choose one configuration.
Fabric Extender Compatibility   Adapter in VIC 1340/1240 Slot   Adapter in Mezzanine Slot   Ports   Reference
2304 (PIDs UCS-IOM-2304)
2304   VIC 1340 or 1240   None   4 x 10 Gb   Figure 17 on page 60
2304   VIC 1340 or 1240   VIC 1380 (with 1340) or VIC 1280 (with 1240)   8 x 10 Gb   Figure 18 on page 61
2304   VIC 1340 or 1240   Emulex or QLogic I/O adapter, or Cisco UCS Storage Accelerator   6 x 10 Gb or 4 x 10 Gb   Figure 19 on page 61
2304   VIC 1340 or 1240   Port Expander Card   8 x 10 Gb   Figure 20 on page 62
2304   None   Emulex or QLogic I/O adapter (see note 1)   2 x 10 Gb   Figure 21 on page 62
2208XP (PIDs UCS-IOM-2208XP, UCS-IOM2208-16FET)
2208XP   VIC 1340 or 1240   None   4 x 10 Gb   Figure 22 on page 63
2208XP   VIC 1340 or 1240   VIC 1380 (with 1340) or VIC 1280 (with 1240)   8 x 10 Gb   Figure 23 on page 64
2208XP   VIC 1340 or 1240   Emulex or QLogic I/O adapter, or Cisco UCS Storage Accelerator   6 x 10 Gb or 4 x 10 Gb   Figure 24 on page 64
2208XP   VIC 1340 or 1240   Port Expander Card   8 x 10 Gb   Figure 25 on page 65
2208XP   None   Emulex or QLogic I/O adapter (see note 2)   2 x 10 Gb   Figure 26 on page 65
2204XP (PIDs UCS-IOM-2204XP, UCS-IOM2204-8FET)
2204XP   VIC 1340 or 1240   None   2 x 10 Gb   Figure 27 on page 66
2204XP   VIC 1340 or 1240   VIC 1380 (with 1340) or VIC 1280 (with 1240)   4 x 10 Gb   Figure 28 on page 67
2204XP   VIC 1340 or 1240   Emulex or QLogic I/O adapter, or Cisco UCS Storage Accelerator   4 x 10 Gb   Figure 29 on page 67
2204XP   VIC 1340 or 1240   Port Expander Card   4 x 10 Gb   Figure 30 on page 68
2204XP   None   Emulex or QLogic I/O adapter (see note 1)   2 x 10 Gb   Figure 31 on page 68
2104XP (PID N20-I6584)
2104XP   VIC 1340 or 1240   None   2 x 10 Gb   Figure 32 on page 69
2104XP   VIC 1340 or 1240   Cisco UCS Storage Accelerator   2 x 10 Gb   Figure 33 on page 69
Notes...
1. Specifically, the UCSB-MEZ-QLG-03 (QLogic) or UCSB-MEZ-ELX-03 (Emulex) adapter
2. Specifically, the UCSB-MEZ-QLG-03 (QLogic) or UCSB-MEZ-ELX-03 (Emulex) adapter
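For quick reference, the port counts in Table 12 for configurations that include a VIC 1340 or 1240 can be restated as a lookup. The Python sketch below is illustrative only; it does not cover the Emulex/QLogic or Storage Accelerator rows.

# Total 10-Gb ports per blade, restated from Table 12 (VIC 1340 or 1240 installed).
PORTS_10GB = {
    # (fabric_extender, mezzanine_option): total 10-Gb ports
    ("2304",   "none"):          4,
    ("2304",   "VIC 1380/1280"): 8,
    ("2304",   "Port Expander"): 8,
    ("2208XP", "none"):          4,
    ("2208XP", "VIC 1380/1280"): 8,
    ("2208XP", "Port Expander"): 8,
    ("2204XP", "none"):          2,
    ("2204XP", "VIC 1380/1280"): 4,
    ("2204XP", "Port Expander"): 4,
    ("2104XP", "none"):          2,
}

print(PORTS_10GB[("2208XP", "Port Expander")] * 10, "Gb total")  # 80 Gb total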
To help ensure that your operating system is compatible with the adapter you have selected,
please check the Hardware Compatibility List at this URL:
http://www.cisco.com/en/US/products/ps10477/prod_technical_reference_list.html
Caveats
■ If a VIC 1340 or 1240 adapter is not installed, you must choose an Emulex or QLogic I/O adapter to be installed in the mezzanine slot (see also NEBS Compliance on page 50).
■ Note that you cannot mix VIC 13xx Series adapters with VIC 12xx Series adapters. For example, if you install a VIC 1340, you cannot install a VIC 1280. In this case, you must install a VIC 1380. Also, the VIC 13xx Series adapters are compatible with systems implementing 6200 and 6300 Series Fabric Interconnects, but 61xx Series FIs are not supported. All FIs are supported with the VIC 12xx Series adapters. 61xx/21xx Series FIs/FEXs are supported on the B200 M3, but only with the VIC 12xx Series adapters. A sketch that checks these rules follows.
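The sketch below is illustrative only; the function and messages are hypothetical, and the rules are restated from the caveats above and from the CPU and mezzanine notes earlier in this document.

# Illustrative validation of the adapter caveats stated above.
from typing import List, Optional

def check_adapter_config(cpus: int, vic: Optional[str], mezz: Optional[str]) -> List[str]:
    """Return a list of rule violations for a proposed B200 M3 adapter configuration."""
    problems = []
    if vic is None and mezz not in ("Emulex", "QLogic"):
        problems.append("Without a VIC 1340/1240, an Emulex or QLogic I/O adapter is required")
    if cpus == 1 and vic is None:
        problems.append("A 1-CPU B200 M3 must always have a VIC 1340 or 1240 installed")
    if cpus == 1 and mezz is not None:
        problems.append("Mezzanine cards require a 2-CPU configuration")
    if vic and mezz and {vic, mezz} & {"VIC 1340", "VIC 1380"} and {vic, mezz} & {"VIC 1240", "VIC 1280"}:
        problems.append("Do not mix VIC 13xx Series adapters with VIC 12xx Series adapters")
    return problems

print(check_adapter_config(2, "VIC 1340", "VIC 1280"))
# ['Do not mix VIC 13xx Series adapters with VIC 12xx Series adapters']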
NOTE: The module used in this system conforms to TPM v1.2 and 2.0, as defined by
the Trusted Computing Group (TCG). It is also SPI-based.
NOTE: Dual card support (mirroring) is supported with UCS Manager 2.2.x and later.
Supported Configurations
(1) Select one or two Cisco Flexible Flash secure digital cards
NOTE: A clearance of 0.950 inches (24.1 mm) is required for the USB device to be
inserted and removed (see the following figure).
NOTE: When the Cisco 4GB USB key is purchased with a server, it is pre-installed
into the internal USB port and held firmly in place with a clip to protect it from
shock and vibration during shipment and transportation. This clip also prevents the
USB key from undergoing shock and vibration during ongoing customer operational
use.
http://www.cisco.com/en/US/products/ps10477/prod_technical_reference_list.html
Table 17 OS Media
If you have noncritical implementations and choose to have no service contract, the following
coverage is supplied:
For support of the entire Unified Computing System, Cisco offers the Cisco Unified Computing
Support Service. This service provides expert software and hardware support to help sustain
performance and high availability of the unified computing environment. Access to Cisco
Technical Assistance Center (TAC) is provided around the clock, from anywhere in the world.
For UCS blade servers, there is Smart Call Home, which provides proactive, embedded
diagnostics and real-time alerts. For systems that include Unified Computing System Manager,
the support service includes downloads of UCSM upgrades. The Unified Computing Support
Service includes flexible hardware replacement options, including replacement in as little as
two hours. There is also access to Cisco's extensive online technical resources to help maintain
optimal efficiency and uptime of the unified computing environment. You can choose a desired
service listed in Table 18.
For faster parts replacement than is provided with the standard Cisco Unified Computing System
warranty, Cisco offers the Cisco Unified Computing Warranty Plus Service. You can choose from
several levels of advanced parts replacement coverage, including onsite parts replacement in as
little as four hours. Warranty Plus provides remote access any time to Cisco support
professionals who can determine if a return materials authorization (RMA) is required. You can
choose a service listed in Table 19.
Product ID (PID)   Service Level GSP   On Site?   Description
Cisco Partner Support Service (PSS) is a Cisco Collaborative Services service offering that is
designed for partners to deliver their own branded support and managed services to enterprise
customers. Cisco PSS provides partners with access to Cisco's support infrastructure and assets
to help them:
■ Expand their service portfolios to support the most complex network environments
■ Lower delivery costs
■ Deliver services that increase customer loyalty
Partner Unified Computing Support Options enable eligible Cisco partners to develop and
consistently deliver high-value technical support that capitalizes on Cisco intellectual assets.
This helps partners to realize higher margins and expand their practice.
PSS is available to all Cisco PSS partners, but requires additional specializations and
requirements. For additional information, see the following URL:
www.cisco.com/go/partnerucssupport
Partner Support Service for UCS provides hardware and software support, including triage
support for third party software, backed by Cisco technical resources and level three support.
Product ID (PID)   Service Level GSP   On Site?   Description
CON-PSJ1-B200M3 PSJ1 No UCS SUPP PSS 8X5XNBD UCS B200 M3 Blade Server
CON-PSJ2-B200M3 PSJ2 No UCS SUPP PSS 8X5X4 UCS B200 M3 Blade Server
CON-PSJ3-B200M3 PSJ3 No UCS SUPP PSS 24X7X4 UCS B200 M3 Blade Server
CON-PSJ4-B200M3 PSJ4 No UCS SUPP PSS 24X7X2 UCS B200 M3 Blade Server
Partner Support Service for UCS Hardware Only provides customers with replacement parts in as
little as two hours. See Table 21.
Product ID (PID)   Service Level GSP   On Site?   Description
Combined Services makes it easier to purchase and manage required services under one
contract. SMARTnet services for UCS help increase the availability of your vital data center
infrastructure and realize the most value from your unified computing investment. The more
benefits you realize from the Cisco Unified Computing System (Cisco UCS), the more important
the technology becomes to your business. These services allow you to:
Product ID (PID)   Service Level GSP   On Site?   Description
CON-NCF2-B200M3 NCF2 No CMB SPT SVC 24X7X2 UCS B200 M3 Blade Server
CON-NCF2P-B200M3 NCF2P Yes CMB SPT SVC 24X7X2OS UCS B200 M3 Blade Server
CON-NCF4P-B200M3 NCF4P Yes CMB SPT SVC 24X7X4OS UCS B200 M3 Blade Server
CON-NCF4S-B200M3 NCF4S Yes CMB SPT SVC 8X5X4OS UCS B200 M3 Blade Server
CON-NCFCS-B200M3 NCFCS Yes CMB SPT SVC 8X5XNBDOS UCS B200 M3 Blade Server
CON-NCFE-B200M3 NCFE No CMB SPT SVC 8X5X4 UCS B200 M3 Blade Server
CON-NCFP-B200M3 NCFP No CMB SPT SVC 24X7X4 UCS B200 M3 Blade Server
CON-NCFT-B200M3 NCFT No CMB SPT SVC 8X5XNBD UCS B200 M3 Blade Server
With the Cisco Unified Computing Drive Retention (UCDR) Service, you can obtain a new disk
drive in exchange for a faulty drive without returning the faulty drive. In exchange for a Cisco
replacement drive, you provide a signed Certificate of Destruction (CoD) confirming that the
drive has been removed from the system listed, is no longer in service, and has been destroyed.
Sophisticated data recovery techniques have made classified, proprietary, and confidential
information vulnerable, even on malfunctioning disk drives. The UCDR service enables you to
retain your drives and ensures that the sensitive data on those drives is not compromised, which
reduces the risk of any potential liabilities. This service also enables you to comply with
regulatory, local, and federal requirements.
If your company has a need to control confidential, classified, sensitive, or proprietary data, you
might want to consider one of the Drive Retention Services listed in Table 23.
NOTE: Cisco does not offer a certified drive destruction service as part of this
service.
Service Program Name   Service Description   Service Level GSP   Service Level   Product ID (PID)
For more service and support information, see the following URL:
http://www.cisco.com/en/US/services/ps2961/ps10312/ps10321/Cisco_UC_Warranty_Support_DS.pdf
For a complete listing of available services for Cisco Unified Computing System, see this URL:
http://www.cisco.com/en/US/products/ps10312/serv_group_home.html
The KVM local I/O cable ordering information is listed in Table 24.
NOTE: *The blade chassis ships with the KVM local I/O cable.
SUPPLEMENTAL MATERIAL
System Board
A top view of the UCS B200 M3 system board is shown in Figure 5.
Notes...
1. A USB device installed in this connector must have sufficient clearance to easily slide in and out. The Cisco USB
device clearance is 0.950 inches (24.1 mm), which is a sufficient amount of clearance.
Physical Layout
Memory is organized as shown in Figure 6.
The DIMM and CPU physical layout is shown in Figure 7. The 12 DIMM slots on the left are controlled by CPU 1 and the 12 DIMM slots on the right are controlled by CPU 2.
■ For optimum performance, populate at least one DIMM per memory channel per CPU.
■ Do not mix RDIMMs with LRDIMMs.
■ Each channel has three DIMM slots (for example, channel A = slots A0, A1, and A2).
— A channel can operate with one, two, or three DIMMs installed.
— If a channel has only one DIMM, populate slot 0 first (the blue slot).
■ When both CPUs are installed, populate the DIMM slots of each CPU identically.
— Fill bank 0 blue slots in the channels first: A0, E0, B0, F0, C0, G0, D0, H0
— Fill bank 1 black slots in the channels second: A1, E1, B1, F1, C1, G1, D1, H1
— Fill bank 2 white slots in the channels third: A2, E2, B2, F2, C2, G2, D2, H2
■ Observe the DIMM rules shown in Table 25. A slot fill-order sketch follows the table.
DIMM Parameter   DIMMs in the Same Channel   DIMMs in the Same Bank1
DIMM Capacity
RDIMM = 16, 8, or 4 GB   DIMMs in the same channel (for example, A0, A1, A2) can have different capacities.   For best performance, DIMMs in the same bank (for example, A0, B0, C0, D0) should have the same capacity.
LRDIMM = 32 GB   You cannot mix 32 GB LRDIMMs with any RDIMM.   You cannot mix 32 GB LRDIMMs with any RDIMM.
DIMM Speed
1866, 1600, or 1333 MHz2   DIMMs will run at the lowest speed of the DIMMs/CPUs installed.   DIMMs will run at the lowest speed of the DIMMs/CPUs installed.
DIMM Type
RDIMMs or LRDIMMs   You cannot mix LRDIMMs with RDIMMs.   You cannot mix LRDIMMs with RDIMMs.
DIMMs per Channel (DPC)   See Table 8 on page 16 for valid LRDIMM and RDIMM 1 DPC and 2 DPC memory configurations.   See Table 8 on page 16 for valid LRDIMM and RDIMM 3 DPC configurations.
Notes...
1. Although you can have different DIMM capacities in the same bank, this will result in less than optimal
performance. For optimal performance, all DIMMs in the same bank should be identical.
2. Only 1866-, 1600-, and 1333-MHz DIMMs are currently available for the B200 M3 server.
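The blue/black/white fill order above can be written out explicitly. The helper below is a hypothetical illustration that simply replays the slot order listed in the population bullets for a 2-CPU server; it is not a Cisco tool.

# Slot fill order for a 2-CPU B200 M3, restated from the population bullets above.
BANK0 = ["A0", "E0", "B0", "F0", "C0", "G0", "D0", "H0"]  # blue slots, filled first
BANK1 = ["A1", "E1", "B1", "F1", "C1", "G1", "D1", "H1"]  # black slots, filled second
BANK2 = ["A2", "E2", "B2", "F2", "C2", "G2", "D2", "H2"]  # white slots, filled third
FILL_ORDER = BANK0 + BANK1 + BANK2

def slots_to_populate(total_dimms: int) -> list:
    """Return the first total_dimms slots in the documented fill order (both CPUs populated)."""
    if not 1 <= total_dimms <= len(FILL_ORDER):
        raise ValueError("total_dimms must be between 1 and 24")
    return FILL_ORDER[:total_dimms]

print(slots_to_populate(8))   # ['A0', 'E0', 'B0', 'F0', 'C0', 'G0', 'D0', 'H0']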
DIMMs per CPU   CPU 1 slots   CPU 2 slots
1   A0   E0
2   A0, B0   E0, F0
Many memory configurations are possible. For best results, follow Table 27 when populating 1866- and
1600-MHz DIMMs for Intel Xeon E5-2600 v2 CPUs and Table 28 on page 44 when populating 1600-MHz DIMMs
for Intel Xeon E5-2600 CPUs.
NOTE: These tables list only some recommended and suggested configurations.
There are numerous other possible configurations supported by Cisco. Cisco supports
all mixing and population configurations of the Cisco DIMMs as long as the mixing
does not violate the few fundamental rules noted in this document.
Table 27 Recommended Memory Configurations for Intel Xeon E5-2600 v2 CPUs (with 1866- and 1600-MHz DIMMs)1
Notes...
1. Rows marked in yellow indicate best performance.
2. 1866-MHz 4 GB DIMMs are not offered.
3. For E5-2600 v2 CPUs, 8-GB 1600-MHz DIMMs at 3 DIMMs per channel currently are set to operate at 1066 MHz instead of 1333 MHz.
Table 28 Recommended Memory Configurations for Intel Xeon E5-2600 CPUs (with 1600-MHz DIMMs)1
4 x 32 GB — — 4 x 32 GB — — 1600 8
Notes...
1. Rows marked in yellow indicate best performance.
2. When using this configuration, the BIOS runs the memory at 1.5 V (performance mode). The BIOS will not allow
this configuration to run at 1.35 V.
CPU 1 DIMMs   Total DIMMs for CPU 1   CPU 1 Capacity   CPU 2 DIMMs   Total DIMMs for CPU 2   CPU 2 Capacity   Total Capacity for 2 CPUs
1 x 8 GB 1 8 GB 1 x 8 GB 1 8 GB 16 GB
1 x 16 GB 1 16 GB 1 x 16 GB 1 16 GB 32 GB
2 x 4 GB 2 8 GB 2 x 4 GB 2 8 GB 16 GB
4 x 4 GB 4 16 GB 4 x 4 GB 4 16 GB 32 GB
2 x 8 GB 2 16 GB 2 x 8 GB 2 16 GB 32 GB
8 x 4 GB 8 32 GB 8 x 4 GB 8 32 GB 64 GB
4 x 8 GB 4 32 GB 4 x 8 GB 4 32 GB 64 GB
8 x 4 GB 8 32 GB 8 x 4 GB 8 32 GB 64 GB
1 x 32 GB 1 32 GB 1 x 32 GB 1 32 GB 64 GB
5 x 8 GB 5 40 GB 5 x 8 GB 5 40 GB 80 GB
3 x 16 GB 3 48 GB 3 x 16 GB 3 48 GB 96 GB
4x8 GB + 4x4 GB1 8 48 GB 4x8 GB + 4x4 GB 8 48 GB 96 GB
7 x 8 GB 7 56 GB 7 x 8 GB 7 56 GB 112 GB
4 x 16 GB 4 64 GB 4 x 16 GB 4 64 GB 128 GB
8 x 8 GB 8 64 GB 8 x 8 GB 8 64 GB 128 GB
2 x 32 GB 2 64 GB 2 x 32 GB 2 64 GB 128 GB
5 x 16 GB 5 80 GB 5 x 16 GB 5 80 GB 160 GB
4x16 GB + 4x4 GB1 8 80 GB 4x16 GB + 4x4 GB 8 80 GB 160 GB
4x8 GB + 4x16 GB1 8 96 GB 4x8 GB + 4x16 GB 8 96 GB 192 GB
3 x 32 GB 3 96 GB 3 x 32 GB 3 96 GB 192 GB
7 x 16 GB 7 112 GB 7 x 16 GB 7 112 GB 224 GB
8 x 16 GB 8 128 GB 8 x 16 GB 8 128 GB 256 GB
4 x 32 GB 4 128 GB 4 x 32 GB 4 128 GB 256 GB
8 x 32 GB 8 256 GB 8 x 32 GB 8 256 GB 512 GB
Notes...
1. When using this configuration, the BIOS runs the memory at 1.5 V (performance mode). The BIOS will not allow
this configuration to run at 1.35 V.
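The per-CPU and total capacities in the table above are simple sums. The following sketch is an illustrative calculation for the 4 x 8 GB + 4 x 4 GB row:

# Capacity arithmetic for one row of the recommended-configuration table above.

def capacity_gb(dimms_per_cpu: list) -> int:
    """Capacity for one CPU, given the sizes (in GB) of its DIMMs."""
    return sum(dimms_per_cpu)

cpu1 = [8, 8, 8, 8, 4, 4, 4, 4]   # 4 x 8 GB + 4 x 4 GB
cpu2 = list(cpu1)                 # CPU 2 populated identically

per_cpu = capacity_gb(cpu1)       # 48 GB
total = per_cpu + capacity_gb(cpu2)

print(per_cpu, "GB per CPU,", total, "GB total")   # 48 GB per CPU, 96 GB total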
UCSB-HS-01-EP= CPU Heat Sink for UCS B200 M3 and B420 M31
UCS-CPU-LPCVR= CPU load plate dust cover (for unpopulated CPU sockets)
UCS-CPU-EP-PNP= Pick n place CPU tools for M3/EP 10/8/6/4/2 core CPUs (Green)3
UCS-CPU-EP2-PNP= Pick n place CPU tools for M3/EP v2 12 core CPUs (Purple)4
UCSX-HSCK= UCS Processor Heat Sink Cleaning Kit (when replacing a CPU)3
Notes...
1. This part is included/configured with your UCS server (in some cases, as determined by the configuration of
your server).
2. This part is included/configured with the UCS 5108 blade server chassis.
3. This part is included with the purchase of each optional or spare Intel Xeon E5-2600 CPU processor kit.
4. This part is included with the purchase of each optional or spare Intel Xeon E5-2600 v2 CPU processor kit.
Upgrading your Server from Intel Xeon E5-2600 to Intel Xeon E5-2600 v2 CPUs (or
downgrading from Intel Xeon E5-2600 v2 to Intel Xeon E5-2600 CPUs)
See the following link:
http://www.cisco.com/en/US/docs/unified_computing/ucs/hw/CPU/IVB/install/IVB-B.html
http://www.cisco.com/en/US/docs/unified_computing/ucs/hw/chassis/install/B200M3.html#wp1070500
http://www.cisco.com/en/US/docs/unified_computing/ucs/hw/chassis/install/B200M3.html#wp1034829
Instructions for using this tool set are found at the following link:
http://www.cisco.com/en/US/docs/unified_computing/ucs/hw/chassis/install/B200M3.html#wp1070500
NOTE: When you purchase a spare CPU, the Pick n Place Toolkit is included.
http://www.cisco.com/en/US/docs/unified_computing/ucs/hw/chassis/install/B200M3.html#wp1070500
CAUTION: Use only the thermal grease specified for this server
(UCS-CPU-GREASE=). This thermal grease comes in a blue-tipped syringe
and is to be used in all servers except the C220 M3 and C240 M3 servers,
which use thermal grease in a red-tipped syringe (UCS-CPU-GREASE2=).
Thermal grease in the red-tipped syringe is only to be used for the C220
M3 and C240 M3 servers and has different thermal conductivity
properties, which may cause overheating if used in the B200 M3 server.
Likewise, thermal grease in the blue-tipped syringe is only to be used for
all UCS servers other than the C220 M3 and C240 M3 (such as the B200 M3
server) and has different thermal conductivity properties, which may
cause overheating if used in the C220 M3 and C240 M3 servers.
NOTE: When you purchase a spare CPU, the thermal grease with syringe applicator
is included.
http://www.cisco.com/en/US/docs/unified_computing/ucs/hw/chassis/install/B200M3.html#wp1034378
http://www.cisco.com/en/US/docs/unified_computing/ucs/hw/chassis/install/B200M3.html#wp1010281
NOTE: When you purchase a spare CPU, the CPU cleaning kit is included.
NEBS Compliance
When configured with the specific set of components shown in Table 31, the UCS B200 M3 blade server
meets Network Equipment Building System (NEBS) Level 1 and Level 3 compliance.
Component Category   Description   Product ID (PID)
CPUs   Up to two CPUs: Intel Xeon E5-2658 v2, 2.40 GHz, 95 W, 10C/25MB Cache   UCS-CPU-E52658B
Mezzanine Cards Cisco UCS VIC 1340 adapter for M3 blade servers UCSB-MLOM-40G-03
Network Connectivity
This section explains how the UCS B200 M3 server connects to Fabric Interconnects using the network
adapters in the UCS B200 M3 blade server and the Fabric Extender modules in the UCS 5108 blade server
chassis. The UCS B200 M3 server plugs into the front of the UCS 5108 blade server chassis. The Fabric
Extender modules plug into the back of the UCS 5108 series blade server chassis. A midplane connects the
UCS B200 M3 blade server to the Fabric Extenders. Figure 8 shows an example configuration where 2 x 10G
KR ports are routed from the VIC 1340 or 1240 adapter to the Fabric Extender modules and the remaining 2
x 10G KR ports are routed from the mezzanine adapter to the Fabric Extender modules.
Note that you cannot mix VIC 13xx Series adapters with VIC 12xx Series adapters. For example, if you install
a VIC 1340, you cannot install a VIC 1280. In this case, you must install a VIC 1380. Also, the VIC 13xx Series
adapters are compatible with systems implementing 6200 and 6300 Series Fabric Interconnects, but 61xx
Series FIs are not supported. All FIs are supported with the VIC 12xx Series adapters. 61xx/21xx Series
FIs/FEXs are supported on the B200 M3, but only with the VIC 12xx Series adapters.
The server accommodates two types of network adapters. One is the Cisco VIC 1340 or 1240 adapter. The
other is a Cisco adapter, Emulex or QLogic I/O adapter, or Cisco Storage Accelerator adapter. The VIC
1340/1240 is the only adapter that can be used in the VIC 1340/1240 slot. All other types of adapters plug
into the mezzanine slot.
■ Cisco VIC 1340 or 1240 adapter. This adapter plugs into the VIC 1340/1240 slot and is natively capable of 4x10Gb ports and 256 PCIe devices. The capabilities of the adapter can easily be expanded by using the Port Expander Card in the mezzanine slot.
■ Cisco VIC 1380 or 1280 Mezzanine adapter. This adapter plugs into the mezzanine slot and is
capable of 4x10Gb ports in the UCS B200 M3 server, depending on the Fabric Extender
chosen (see Table 12 on page 23) and 256 PCIe devices.
■ Cisco Port Expander Card. This I/O expander plugs into the mezzanine slot and enables full
second-generation VIC functionality with the VIC 1340 or 1240. Using the Port Expander Card
with the VIC 1340 or 1240 allows you to have 8 ports of 10Gb each (depending on the Fabric
Extender option chosen - see Table 12 on page 23).
■ QLogic and Emulex adapters
■ Cisco Storage Accelerators. These flash storage devices do not have network connectivity;
instead they provide independent high-speed storage controlled by CPU 2. See Table 10 on
page 21 for descriptions.
There are two groups of four ports on the VIC 1340 or 1240. Two ports of the first group and two ports of the
second group are wired through the UCS 5108 Blade Server chassis to Fabric Extender A and Fabric Extender
B. The other two ports of each group are wired to the mezzanine slot, as represented in Figure 9.
The number of ports available at the mezzanine adapter depends on the type of mezzanine adapter that is
plugged into the mezzanine slot on the system board. The maximum number of ports is four. The VIC 1340
or 1240 senses the type of adapter plugged into the mezzanine slot. In the event a Port Expander Card
occupies the mezzanine slot, the four 10G KR ports between the adapters are used for port expansion;
otherwise, they are unused.
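The port-group behavior just described can be sketched as follows. The function is hypothetical and assumes the 2304/2208XP case in which both FEX-facing ports of each group are available; the wiring itself is as stated above.

# Illustrative model of the VIC 1340/1240 port-group wiring described above.
from typing import Optional

def ports_to_each_fex(mezzanine: Optional[str]) -> int:
    """10G KR ports reaching each Fabric Extender from the VIC 1340/1240 (2304/2208XP case)."""
    base = 2  # two ports of each group are wired through the chassis to FEX A and FEX B
    if mezzanine == "Port Expander":
        return base + 2   # the ports routed to the mezzanine slot are used for port expansion
    return base           # otherwise those ports are unused (a VIC 1380/1280 adds its own ports)

for mezz in (None, "Port Expander"):
    print(mezz, "->", ports_to_each_fex(mezz) * 10, "Gbps per Fabric Extender")
# None -> 20 Gbps per Fabric Extender
# Port Expander -> 40 Gbps per Fabric Extender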
Mezzanine Adapters
There are multiple options for the mezzanine slot:
NOTE: In a B200 M3 configured with 2 CPUs, if a UCS B200 M3 blade server does not
have a VIC 1340 or 1240 installed, the mezzanine slot is required to have a QLogic or
Emulex I/O adapter installed to provide I/O connectivity. In a B200 M3 configured
with 1 CPU, however, a VIC 1340 or 1240 must always be installed.
■ Cisco adapters
— VIC 1340
— VIC 1240
— VIC 1380
— VIC 1280
— Port Expander Card
The following sections explain the various I/O options that are possible with the different Fabric Extenders
(Cisco UCS 2208XP, 2204XP, and 2104XP) and the VIC 1340 or 1240 and mezzanine adapters.
The options shown in Figure 10 and Figure 11 demonstrate how the server uses these options:
In Figure 10, two ports from the VIC 1340 or 1240 are channeled to 2304 Fabric Extender A and two are
channeled to 2304 Fabric Extender B. The result is 20 Gbps of bandwidth to each Fabric Extender.
Figure 10 Option 1 - VIC 1340/1240 to UCS 2304 Fabric Extender (no mezzanine adapter)
In Figure 11, two ports from the VIC 1340/1240 are channeled to 2304 Fabric Extender A and two are
channeled to 2304 Fabric Extender B. The Port Expander Card installed in the mezzanine slot acts as a
pass-through device to channel two ports to each of the Fabric Extenders. The result is 40 Gbps of
bandwidth to each Fabric Extender.
Figure 11 Option 2 - VIC 1340/1240 and Port Expander Card to UCS 2304 Fabric Extender
The options shown in Figure 12 and Figure 13 demonstrate how the server uses these options:
In Figure 12, two ports from the VIC 1340 or 1240 are channeled to 2208XP Fabric Extender A and two are
channeled to 2208XP Fabric Extender B. The result is 20 Gbps of bandwidth to each Fabric Extender.
Figure 12 Option 1 - VIC 1340/1240 to UCS 2208XP Fabric Extender (no mezzanine adapter)
In Figure 13, two ports from the VIC 1340/1240 are channeled to 2208XP Fabric Extender A and two are
channeled to 2208XP Fabric Extender B. The Port Expander Card installed in the mezzanine slot acts as a
pass-through device to channel two ports to each of the Fabric Extenders. The result is 40 Gbps of
bandwidth to each Fabric Extender.
Figure 13 Option 2 - VIC 1340/1240 and Port Expander Card to UCS 2208XP Fabric Extender
The options shown in Figure 14 and Figure 15 demonstrate how the server uses these options:
In Figure 14, one port from the VIC 1340/1240 is channeled to 2204XP Fabric Extender A and one is
channeled to 2204XP Fabric Extender B. The result is 10 Gbps of bandwidth to each Fabric Extender.
Figure 14 Option 1 - VIC 1340/1240 to UCS 2204XP Fabric Extender (no mezzanine adapter)
In Figure 15, one port from the VIC 1340/1240 is channeled to 2204XP Fabric Extender A and one is
channeled to 2204XP Fabric Extender B. The Port Expander Card installed in the mezzanine slot acts as a
pass-through device to channel one port to each of the Fabric Extenders. The result is 20 Gbps of bandwidth
to each Fabric Extender.
Figure 15 Option 2 - VIC 1340/1240 and Port Expander Card to UCS 2204XP Fabric Extender
In Figure 16, one port from the VIC 1340/1240 is channeled to 2104XP Fabric Extender A and one is
channeled to 2104XP Fabric Extender B. The result is 10 Gbps of bandwidth to each Fabric Extender.
Figure 16 Option 1 - VIC 1340/1240 to UCS 2104XP Fabric Extender (no mezzanine adapter)
The options shown in Figure 17 through Figure 21 demonstrate how the server uses these options:
NOTE: A Cisco Storage Accelerator adapter may also be plugged into the mezzanine
adapter. There is no network connectivity for this kind of adapter; instead it
provides high-speed storage to the system and is controlled by CPU 2.
In Figure 17, two ports from the VIC 1340/1240 are channeled to 2304 Fabric Extender A and two are
channeled to 2304 Fabric Extender B. The result is 20 Gbps of bandwidth to each Fabric Extender.
Figure 17 Option 1 - VIC 1340/1240 to UCS 2304 Fabric Extender (no mezzanine adapter)
In Figure 18, two ports from the VIC 1340/1240 are channeled to 2304 Fabric Extender A and two are
channeled to 2304 Fabric Extender B. The VIC 1380/1280 installed in the mezzanine slot also channels two
ports to each of the Fabric Extenders. The result is 40 Gbps of bandwidth to each Fabric Extender.
Figure 18 Option 2 - VIC 1340/1240 and VIC 1380/1280 to UCS 2304 Fabric Extender
In Figure 19, two ports from the VIC 1340/1240 are channeled to 2304 Fabric Extender A and two are
channeled to 2304 Fabric Extender B. The adapter installed in the mezzanine slot also channels one port to
each of the Fabric Extenders. The result is 30 Gbps of bandwidth to each Fabric Extender.
Figure 19 Option 3 - VIC 1340/1240 and Emulex or QLogic I/O Adapter to UCS 2304 Fabric Extender
In Figure 20, two ports from the VIC 1340/1240 are channeled to 2304 Fabric Extender A and two are
channeled to 2304 Fabric Extender B. The Port Expander Card installed in the mezzanine slot acts as a
pass-through device to channel two ports to each of the Fabric Extenders. The result is 40 Gbps of
bandwidth to each Fabric Extender.
Figure 20 Option 4 - VIC 1340/1240 and Port Expander Card to UCS 2304 Fabric Extender
In Figure 21, there is no VIC 1340/1240 adapter installed. In this case, a network adapter must be installed
in the mezzanine slot. Port A and B of the mezzanine adapter connect to the Fabric Extenders, providing 10
Gbps per port.
Figure 21 Option 5 - Emulex/QLogic I/O Adapter to UCS 2304 Fabric Extender (no VIC 1340/1240 adapter)
The options shown in Figure 22 through Figure 26 demonstrate how the server uses these options:
NOTE: A Cisco Storage Accelerator adapter may also be plugged into the mezzanine
adapter. There is no network connectivity for this kind of adapter; instead it
provides high-speed storage to the system and is controlled by CPU 2.
In Figure 22, two ports from the VIC 1340/1240 are channeled to 2208XP Fabric Extender A and two are
channeled to 2208XP Fabric Extender B. The result is 20 Gbps of bandwidth to each Fabric Extender.
Figure 22 Option 1 - VIC 1340/1240 to UCS 2208XP Fabric Extender (no mezzanine adapter)
In Figure 23, two ports from the VIC 1340/1240 are channeled to 2208XP Fabric Extender A and two are
channeled to 2208XP Fabric Extender B. The VIC 1380/1280 installed in the mezzanine slot also channels
two ports to each of the Fabric Extenders. The result is 40 Gbps of bandwidth to each Fabric Extender.
Figure 23 Option 2 - VIC 1340/1240 and VIC 1380/1280 to UCS 2208XP Fabric Extender
In Figure 24, two ports from the VIC 1340/1240 are channeled to 2208XP Fabric Extender A and two are
channeled to 2208XP Fabric Extender B. The adapter installed in the mezzanine slot also channels one port
to each of the Fabric Extenders. The result is 30 Gbps of bandwidth to each Fabric Extender.
Figure 24 Option 3 - VIC 1340/1240 and Emulex or QLogic I/O Adapter to UCS 2208XP Fabric Extender
In Figure 25, two ports from the VIC 1340/1240 are channeled to 2208XP Fabric Extender A and two are
channeled to 2208XP Fabric Extender B. The Port Expander Card installed in the mezzanine slot acts as a
pass-through device to channel two ports to each of the Fabric Extenders. The result is 40 Gbps of
bandwidth to each Fabric Extender.
Figure 25 Option 4 - VIC 1340/1240 and Port Expander Card to UCS 2208XP Fabric Extender
In Figure 26, there is no VIC 1340/1240 adapter installed. In this case, a network adapter must be installed
in the mezzanine slot. Port A and B of the mezzanine adapter connect to the Fabric Extenders, providing 10
Gbps per port.
Figure 26 Option 5 - Emulex/QLogic I/O Adapter to UCS 2208XP Fabric Extender (no VIC 1340/1240 adapter)
The options shown in Figure 27 through Figure 31 demonstrate how the server uses these options:
NOTE: A Cisco Storage Accelerator adapter may also be plugged into the mezzanine
adapter. There is no network connectivity for this kind of adapter; instead it
provides high-speed storage to the system and is controlled by CPU 2.
In Figure 27, one port from the VIC 1340/1240 is channeled to 2204XP Fabric Extender A and one is
channeled to 2204XP Fabric Extender B. The result is 10 Gbps of bandwidth to each Fabric Extender.
Figure 27 Option 1 - VIC 1340/1240 to UCS 2204XP Fabric Extender (no mezzanine adapter)
In Figure 28, one port from the VIC 1340/1240 is channeled to 2204XP Fabric Extender A and one is
channeled to 2204XP Fabric Extender B. The VIC 1380/1280 installed in the mezzanine slot also channels
one port to each of the Fabric Extenders. The result is 20 Gbps of bandwidth to each Fabric Extender.
Figure 28 Option 2 - VIC 1340/1240 and VIC 1380/1280 to UCS 2204XP Fabric Extender
In Figure 29, one port from the VIC 1340/1240 is channeled to 2204XP Fabric Extender A and one is
channeled to 2204XP Fabric Extender B. The adapter installed in the mezzanine slot also channels one port
to each of the Fabric Extenders. The result is 20 Gbps of bandwidth to each Fabric Extender.
Figure 29 Option 3 - VIC 1340/1240 and Emulex/QLogic I/O Adapter to UCS 2204XP Fabric Extender
In Figure 30, one port from the VIC 1340/1240 is channeled to 2204XP Fabric Extender A and one is
channeled to 2204XP Fabric Extender B. The Port Expander Card installed in the mezzanine slot acts as a
pass-through device to channel one port to each of the Fabric Extenders. The result is 20 Gbps of bandwidth
to each Fabric Extender.
Figure 30 Option 4 - VIC 1340/1240 and Port Expander Card to UCS 2204XP Fabric Extender
In Figure 31, there is no VIC 1340/1240 adapter installed. In this case, a network adapter must be installed
in the mezzanine slot. Port A and B of the mezzanine adapter connect to the Fabric Extenders, providing 10
Gbps per port.
Figure 31 Option 5 - Emulex/QLogic I/O Adapter to UCS 2204XP Fabric Extender (no VIC 1340/1240 adapter)
In Figure 32, one port from the VIC 1340/1240 is channeled to 2104XP Fabric Extender A and one is
channeled to 2104XP Fabric Extender B. The result is 10 Gbps of bandwidth to each Fabric Extender. With
this option, no adapter is located in the mezzanine connector.
Figure 32 Option 1 - VIC 1340/1240 to UCS 2104XP Fabric Extender (no mezzanine adapter)
In Figure 33, one port from the VIC 1340/1240 is channeled to 2104XP Fabric Extender A and one is
channeled to 2104XP Fabric Extender B. The result is 10 Gbps of bandwidth to each Fabric Extender. The
Cisco Storage Accelerator adapter is located in the mezzanine connector as an independent device
controlled by CPU 2.
Figure 33 Option 2 - VIC 1340/1240 to UCS 2104XP Fabric Extender (Cisco Storage Accelerator installed)
TECHNICAL SPECIFICATIONS
Dimensions and Weight
Parameter Value
Power Specifications
For configuration-specific power specifications, use the Cisco UCS Power Calculator at:
http://ucspowercalc.cisco.com