Cisco UCS C240 M6 Server Installation and Service Guide
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
© 2021, 2022, 2023 Cisco Systems, Inc. All rights reserved.
CONTENTS
CHAPTER 1 Overview 1
Overview 1
External Features 4
PCIe Risers 23
Summary of Server Features 29
Serviceable Component Locations 40
Front-Panel LEDs 62
Rear-Panel LEDs 65
Internal Diagnostic LEDs 67
Preparing For Component Installation 68
Required Equipment For Service Procedures 68
Shutting Down and Removing Power From the Server 68
Shutting Down Using the Power Button 68
Shutting Down Using The Cisco IMC GUI 69
Bias-Free Documentation
Note The documentation set for this product strives to use bias-free language. For purposes of this
documentation set, bias-free is defined as language that does not imply discrimination based on age,
disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and
intersectionality. Exceptions may be present in the documentation due to language that is hardcoded in
the user interfaces of the product software, language used based on standards documentation, or language
that is used by a referenced third-party product.
Introduction
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL
ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND
RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED
WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL
RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT
ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND
ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE
SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE
FOR A COPY.
The following information is for FCC compliance of Class A devices: This equipment has been tested and
found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC rules. These limits
are designed to provide reasonable protection against harmful interference when the equipment is operated
in a commercial environment. This equipment generates, uses, and can radiate radio-frequency energy and,
if not installed and used in accordance with the instruction manual, may cause harmful interference to radio
communications. Operation of this equipment in a residential area is likely to cause harmful interference, in
which case users will be required to correct the interference at their own expense.
The following information is for FCC compliance of Class B devices: This equipment has been tested and
found to comply with the limits for a Class B digital device, pursuant to part 15 of the FCC rules. These limits
are designed to provide reasonable protection against harmful interference in a residential installation. This
equipment generates, uses and can radiate radio frequency energy and, if not installed and used in accordance
with the instructions, may cause harmful interference to radio communications. However, there is no guarantee
that interference will not occur in a particular installation. If the equipment causes interference to radio or
television reception, which can be determined by turning the equipment off and on, users are encouraged to
try to correct the interference by using one or more of the following measures:
• Reorient or relocate the receiving antenna.
• Increase the separation between the equipment and receiver.
• Connect the equipment into an outlet on a circuit different from that to which the receiver is connected.
• Consult the dealer or an experienced radio/TV technician for help.
Modifications to this product not authorized by Cisco could void the FCC approval and negate your authority
to operate the product.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the
University of California, Berkeley (UCB) as part of UCB’s public domain version of the UNIX operating
system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE
OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS. CISCO AND THE
ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING,
WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE
AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE
PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL,
CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST
PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE
THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY
OF SUCH DAMAGES.
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual
addresses and phone numbers. Any examples, command display output, network topology diagrams, and
other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses
or phone numbers in illustrative content is unintentional and coincidental.
All printed copies and duplicate soft copies of this document are considered uncontrolled. See the current
online version for the latest version.
Cisco has more than 200 offices worldwide. Addresses and phone numbers are listed on the Cisco website at
www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and
other countries. To view a list of Cisco trademarks, go to this URL: https://www.cisco.com/c/en/us/about/
legal/trademarks.html. Third-party trademarks mentioned are the property of their respective owners. The use
of the word partner does not imply a partnership relationship between Cisco and any other company. (1721R)
Overview
The Cisco UCS C240 M6 is a 2U rack server that can operate either as a standalone server or as part of the Cisco Unified Computing System (Cisco UCS).
The Cisco UCS C240 M6 servers support up to two 3rd Gen Intel® Xeon® Scalable Processors, in either one- or two-CPU configurations.
The servers support:
• 16 DIMM slots per CPU for 3200-MHz DDR4 DIMMs, in capacities of up to 256 GB per DIMM.
• A maximum of 8 TB or 12 TB of memory for a dual-CPU configuration, populated with either:
• 32 × 256 GB DDR4 DIMMs (8 TB), or
• 16 × 256 GB DDR4 DIMMs plus 16 × 512 GB Intel® Optane™ Persistent Memory Modules (DCPMMs) (12 TB).
• The servers have different supported drive configurations depending on whether they are configured with large form factor (LFF) or small form factor (SFF) front-loading drives.
• The C240 M6 12 LFF supports midplane-mounted storage of up to four LFF HDDs.
• Up to two M.2 SATA SSDs for server boot, installed in a boot-optimized RAID controller carrier.
• Rear storage risers (two slots each).
• One rear PCIe riser (three slots).
• Internal slot for a 12 G SAS RAID controller with SuperCap for write-cache backup, or for a SAS HBA.
• Network connectivity through either a dedicated modular LAN over motherboard (mLOM) slot that accepts a 14xx/15xx series Cisco virtual interface card (VIC) or a third-party NIC. These options are in addition to the Intel X550 10GBASE-T mLOM ports built into the server motherboard.
• One mLOM/VIC card provides 10/25/40/50/100 Gbps. The following mLOMs are supported:
• Cisco UCS VIC 15238 Dual Port 40/100G QSFP28 mLOM (UCSC-M-V5D200G) supports:
• a x16 PCIe Gen4 Host Interface to the rack server
• two 40G/100G QSFP28 ports
• 4GB DDR4 Memory, 3200 MHz
• Integrated blower for optimal ventilation
• Cisco UCS VIC 15428 Quad Port CNA MLOM (UCSC-M-V5Q50G) supports:
• a x16 PCIe Gen4 Host Interface to the rack server
• four 10G/25G/50G SFP56 ports
• 4GB DDR4 Memory, 3200 MHz
• Integrated blower for optimal ventilation
• Cisco UCS VIC 1467 Quad Port 10/25G SFP28 mLOM (UCSC-M-V25-04) supports:
• a x16 PCIe Gen3 Host Interface to the rack server
• four 10G/25G SFP28 ports
• 2GB DDR3 Memory, 1866 MHz
• The following virtual interface cards (VICs) are supported, in addition to some third-party NICs:
• Cisco UCS VIC 1455 quad port 10/25G SFP28 PCIe (UCSC-PCIE-C25Q-04=)
• Cisco UCS VIC 1495 Dual Port 40/100G QSFP28 CNA PCIe (UCSC-PCIE-C100-04=)
• The server can be configured with a SATA Interposer card. If your server uses a SATA Interposer card, a maximum of eight SATA-only drives can be configured. These drives can be installed only in slots 1–8.
• Drive bays 5–12 support SAS/SATA SSDs or HDDs only; no NVMe.
• Optionally, the rear-loading drive bays support four 2.5-inch SAS/SATA or NVMe drives.
Note To order the GPU Ready configuration through the Cisco online ordering
and configuration tool, you must select the GPU air duct PID to enable GPU
ready configuration. Follow the additional rules displayed in the tool. For
additional information, see GPU Card Configuration Rules, on page 192.
External Features
This topic shows the external features of the different configurations of the server.
For definitions of LED states, see Front-Panel LEDs, on page 62.
9 Drive bays, front loading:
Drive bays 1–24 support front-loading SFF SAS/SATA drives.
Drive bays 1 through 4 can support SAS/SATA hard drives and solid-state drives (SSDs) or NVMe PCIe drives. Any number of NVMe drives up to four can reside in these slots.
Drive bays 5–24 support SAS/SATA hard drives and SSDs only.
Drive bays are numbered 1 through 24, with bay 1 as the leftmost bay.
10 KVM connector (used with a KVM cable that provides one DB-15 VGA, one DB-9 serial, and two USB 2.0 connectors)
9 Drive bays, front loading:
Drive bays 1–12 support front-loading SFF SAS/SATA drives.
Drive bays 1 through 4 can support SAS/SATA hard drives and solid-state drives (SSDs) as well as NVMe PCIe drives. Any number of NVMe drives up to four can reside in these slots.
Drive bays 5–12 support SAS/SATA hard drives and SSDs only.
Drive bays are numbered 1 through 24, with bay 1 as the leftmost bay.
Note: If the server has a SATA Interposer card, a maximum of 8 SATA drives is supported, in slots 1 through 8.
10 Drive bays 13 through 24 are blocked off with sheet metal.
Cisco UCS C240 M6 Server 12 SAS/SATA Drives (Plus Optical) Front Panel Features
The following figure shows the front panel features of the Cisco UCS C240 M6S, which is the small form-factor (SFF), 12-drive version of the server. Front-loading drives can be mixed and matched in slots 1 through 4 to support up to four SFF NVMe or SFF SAS/SATA drives. UCS C240 M6 servers with any number of NVMe drives must be dual-CPU systems.
This configuration can support up to 4 optional SAS/SATA drives in the rear PCIe slots.
For definitions of LED states, see Front-Panel LEDs, on page 62.
Figure 3: Cisco UCS C240 M6 Server 12 SAS/SATA Plus Optical Drive, Front Panel Features
9 Drive status LEDs
10 Drive bays:
Drive bays 1–12 support front-loading SFF drives. Drive bays 1 through 4 can support SAS/SATA hard drives and solid-state drives (SSDs) as well as NVMe PCIe drives; any number of NVMe drives up to four can reside in these slots. Drive bays 13–24 are blocked off with sheet metal.
Note: If the server has a SATA Interposer card, a maximum of 8 SATA drives is supported, in slots 1 through 8.
11 KVM connector (used with a KVM cable that provides one DB-15 VGA, one DB-9 serial, and two USB 2.0 connectors)
12 Optional optical DVD drive, installed horizontally.
9 Drive bays 1–12 support front-loading SFF NVMe drives only. Drive bays are numbered 1 through 12, with bay 1 as the leftmost bay.
10 Drive bays 13–24 are blocked off with sheet metal.
9 Drive status LEDs
10 Drive bays 1–24 support front-loading SFF NVMe drives. Drive bays are numbered 1 through 24, with bay 1 as the leftmost bay.
9 KVM connector (used with a KVM cable that provides one DB-15 VGA, one DB-9 serial, and two USB 2.0 connectors)
10 Drive bays 1–12 support front-loading LFF SAS-only drives. Drive bays are numbered 1 through 12, with bay 1 as the leftmost top bay and bay 12 as the rightmost bottom bay.
1 Riser 1A
2 Riser 2A
3 Riser 3A or 3C
Cisco UCS C240 M6 Server 12 SAS/SATA Drive Rear Panel, I/O Centric
The Cisco UCS C240 M6 12 SAS/SATA SFF version has a rear configuration option, for either I/O (I/O
Centric) or Storage (Storage Centric) with the I/O Centric version of the server offering PCIe slots and the
Storage Centric version of the server offering drive bays.
Note This version of server has an option for a DVD drive on the front of the server. The rear panel shown
here is the same for both the standard server and the DVD drive version of the server.
The following illustration shows the rear panel features for the I/O Centric version of the Cisco UCS C240
M6S.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs, on page 65.
1 Riser 1A
2 Riser 2A
3 Riser 3A or 3C
Cisco UCS C240 M6 Server 24 NVMe Drive Rear Panel, I/O Centric
The Cisco UCS C240 M6 24 NVMe version has a rear configuration option, for either I/O (I/O Centric) or
Storage (Storage Centric) with the I/O Centric version of the server offering PCIe slots and the Storage Centric
version of the server offering drive bays.
The following illustration shows the rear panel features for the I/O Centric version of the Cisco UCS C240
M6SN.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs, on page 65.
1 Riser 1A
2 Riser 2A
Cisco UCS C240 M6 Server 12 NVMe Drive Rear Panel, I/O Centric
The Cisco UCS C240 M6 24 NVMe version has a rear configuration option, for either I/O (I/O Centric) or
Storage (Storage Centric) with the I/O Centric version of the server offering PCIe slots and the Storage Centric
version of the server offering drive bays.
The following illustration shows the rear panel features for the I/O Centric version of the Cisco UCS C240
M6N.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs, on page 65.
The following table shows the riser options for this version of the server.
1 Riser 1A
2 Riser 2A
3 Riser 3B or 3C
Riser Options
Riser 3: Riser 3B has two PCIe slots that can support two SFF NVMe drives. This riser is controlled by CPU 2.
• Slot 7 (drive bay 103), x4
• Slot 8 (drive bay 104), x4
1 Riser 1A
2 Riser 2A
3 Riser 3B or 3C
Cisco UCS C240 M6 Server 24 NVMe Drive Rear Panel, Storage Centric
The Cisco UCS C240 M6 24 NVMe SFF version has a rear configuration option, for either I/O (I/O Centric)
or Storage (Storage Centric) with the I/O Centric version of the server using PCIe slots and the Storage Centric
version of the server offering drive bays.
The following illustration shows the rear panel features for the Storage Centric version of the Cisco UCS
C240 M6SN.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs, on page 65.
Cisco UCS C240 M6 Server 12 NVMe Drive Rear Panel, Storage Centric
The Cisco UCS C240 M6 12 NVMe SFF version has a rear configuration option, for either I/O (I/O Centric)
or Storage (Storage Centric) with the I/O Centric version of the server using PCIe slots and the Storage Centric
version of the server offering drive bays.
The following illustration shows the rear panel features for the Storage Centric version of the Cisco UCS
C240 M6N.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs, on page 65.
1 Riser 1B
2 Riser 2A
3 Riser 3B
PCIe Risers
The following different PCIe riser options are available.
Riser 1 Options
This riser supports two options, Riser 1A and 1B.
1 PCIe slot 1: full height, ¾ length, x8, NCSI
2 PCIe slot 2: full height, full length, x16, NCSI, GPU capable
Riser 2
This riser supports one option, Riser 2A.
1 PCIe slot 4: full height, ¾ length, x8
2 PCIe slot 5: full height, full length, x16
Riser 3
This riser supports three options, 3A, 3B, and 3C.
1 PCIe slot 7: full height, full length, x8
2 PCIe slot 8: full height, full length, x16
3 Edge Connectors
1 PCIe slot 7: drive bay 103, x4
2 PCIe slot 8: drive bay 104, x4
3 Edge Connectors
Feature Description
Front Panel: One front-panel keyboard/video/mouse (KVM) connector that is used with the KVM breakout cable. The breakout cable provides two USB 2.0, one VGA, and one DB-9 serial connector.
Front Panel: The front panel controller provides status indications and control buttons.
InfiniBand: The PCIe bus slots in this server support the InfiniBand architecture.
Expansion Slots: For the SFF versions of the server, three riser slots are supported with the following options:
• Riser 1A (3 PCIe slots)
• Riser 1B (2 drive bays)
• Riser 2A (3 PCIe slots)
• Riser 3A (2 PCIe slots)
• Riser 3B (2 drive bays)
• Riser 3C (1 full-length, double-wide GPU)
Internal Storage Devices:
• UCSC-C240-M6S:
• Up to 12 SFF SAS/SATA hard drives (HDDs) or
SAS/SATA solid state drives (SSDs)
• Optionally, up to four SFF NVMe PCIe SSDs. These drives must be placed in front drive bays 1, 2, 3, and 4 only, and can be mixed with SAS/SATA drives. The rest of the bays (5–12) can be populated with SAS/SATA SSDs or HDDs. Two CPUs are required in a server that has any number of NVMe drives.
• Optionally, one front-facing DVD drive
• Optionally, up to two SFF rear-facing
SAS/SATA/NVMe drives
• If using a SATA Interposer, up to 8 SATA-only drives
can be installed (slots 1-8 only).
• UCSC-C240-M6SX:
• Up to 24 front SFF SAS/SATA hard drives (HDDs) or
SAS/SATA solid state drives (SSDs).
• Optionally, up to four front SFF NVMe PCIe SSDs.
These drives must be placed in front drive bays 1, 2, 3,
and 4 only. The rest of the bays (5 - 24) can be
populated with SAS/SATA SSDs or HDDs. Two CPUs
are required in a server that has any number of NVMe
drives.
• Optionally, up to four SFF rear-facing
SAS/SATA/NVMe drives
• UCSC-C240-M6N:
• Up to 12 front NVMe (only) drives
• Optionally, up to 2 rear NVMe (only) drives
• Two CPUs are required in a server that has any number
of NVMe drives.
• UCSC-C240-M6SN:
• Up to 24 front NVMe drives (only).
• Optionally, up to 2 rear NVMe drives (only)
• Two CPUs are required when choosing NVMe SSDs
• Other Storage:
• A mini-storage module connector on the motherboard supports a boot-optimized RAID controller carrier that holds up to two SATA M.2 SSDs. Mixing different capacity SATA M.2 SSDs is not supported.
Storage Controllers: One SATA Interposer board, one 12G RAID controller, or one or two 12G SAS HBAs plug into a dedicated slot.
• SATA Interposer board:
• AHCI support for up to eight SATA-only drives (slots 1–8)
• Supported only on the UCSC-C240-M6S server
Modular LAN over Motherboard (mLOM) slot: The dedicated mLOM slot on the motherboard can flexibly accommodate the following cards:
• Cisco Virtual Interface Cards (VICs)
• Quad Port Intel i350 1GbE RJ45 Network Interface Card (NIC)
Feature Description
Front Panel: The front panel controller provides status indications and control buttons.
InfiniBand: The PCIe bus slots in this server support the InfiniBand architecture.
Expansion Slots:
• Riser 1B (1 PCIe slot reserved for a drive controller, and 2 HDD slots)
• Riser 2A (3 PCIe slots)
• Riser 3B (2 HDD slots)
Internal Storage Devices: Large form factor (LFF) drives, with a 12-drive backplane. The server can hold a maximum of:
• 12 LFF 3.5-inch front-loading SAS-only hard drives (HDDs)
• As an option, up to four 3.5-inch mid-plane SAS-only LFF HDDs
• Optionally, up to four rear-facing SAS/SATA HDDs/SSDs, or up to four rear-facing NVMe PCIe SSDs
Storage Controllers: The 12G RAID controller or 12G SAS HBA plugs into slot 1 (bottom slot) of riser 1B.
• Cisco M6 12G SAS RAID Controller with 4GB FBWC:
• RAID support (RAID 0, 1, 5, 6, 10, 50, and 60) and SRAID
• Supports up to 32 internal SAS/SATA drives
• Plugs into drive slot 1 of riser 1B
Modular LAN over Motherboard (mLOM) slot: The dedicated mLOM slot on the motherboard can flexibly accommodate the following cards:
• Cisco Virtual Interface Cards (VICs)
• Quad Port Intel i350 1GbE RJ45 Network Interface Card (NIC)
7 PCIe riser 3 (PCIe slots 7 and 8, numbered from bottom to top), with the following options:
• 3A (Default Option)—Slots 7 (x16 mechanical, x8 electrical) and 8 (x16 mechanical, x8 electrical). Both slots can accept a full-height, full-length GPU card.
• 3B (Storage Option)—Slots 7 (x24 mechanical, x4 electrical) and 8 (x24 mechanical, x4 electrical). Both slots can accept 2.5-inch SFF universal HDDs.
• 3C (GPU Option)—Slot 7 (x16 mechanical, x16 electrical) and slot 8 empty (NCSI support limited to one slot at a time). Slot 7 can support a full-height, full-length GPU card.
8 PCIe riser 2 (PCIe slots 4, 5, and 6, numbered from bottom to top), with the following option:
• 2A (Default Option)—Slot 4 (x24 mechanical, x8 electrical) supports a full-height, ¾-length card; slot 5 (x24 mechanical, x16 electrical) supports a full-height, full-length GPU card; slot 6 (x16 mechanical, x8 electrical) supports a full-height, full-length card.
The Technical Specifications Sheets for all versions of this server, which include supported component part
numbers, are at Cisco UCS Servers Technical Specifications Sheets (scroll down to Technical Specifications).
Note Before you install, operate, or service a server, review the Regulatory Compliance and Safety Information
for Cisco UCS C-Series Servers for important safety information.
Warning To prevent the system from overheating, do not operate it in an area that exceeds the maximum
recommended ambient temperature of: 35° C (95° F).
Statement 1047
Warning The plug-socket combination must be accessible at all times, because it serves as the main
disconnecting device.
Statement 1019
Warning This product relies on the building’s installation for short-circuit (overcurrent) protection. Ensure
that the protective device is rated not greater than: 250 V, 15 A.
Statement 1005
Warning Installation of the equipment must comply with local and national electrical codes.
Statement 1074
Warning This unit is intended for installation in restricted access areas. A restricted access area can be
accessed only through the use of a special tool, lock, and key, or other means of security.
Statement 1017
Caution To ensure proper airflow it is necessary to rack the servers using rail kits. Physically placing the units
on top of one another or “stacking” without the use of the rail kits blocks the air vents on top of the
servers, which could result in overheating, higher fan speeds, and higher power consumption. We
recommend that you mount your servers on rail kits when you are installing them into the rack because
these rails provide the minimal spacing required between the servers. No additional spacing between
the servers is required when you mount the units using rail kits.
Caution Avoid uninterruptible power supply (UPS) types that use ferroresonant technology. These UPS types
can become unstable with systems such as the Cisco UCS, which can have substantial current draw
fluctuations from fluctuating data traffic patterns.
• Ensure that there is adequate space around the server to allow for accessing the server and for adequate
airflow. The airflow in this server is from front to back.
• Ensure that the air-conditioning meets the thermal requirements listed in the Environmental Specifications,
on page 175.
• Ensure that the cabinet or rack meets the requirements listed in the Rack Requirements, on page 45.
• Ensure that the site power meets the power requirements listed in the Power Specifications, on page 176.
If available, you can use an uninterruptible power supply (UPS) to protect against power failures.
Rack Requirements
The rack must be of the following type:
• A standard 19-in. (48.3-cm) wide, four-post EIA rack, with mounting posts that conform to English
universal hole spacing, per section 1 of ANSI/EIA-310-D-1992.
• The rack-post holes can be square 0.38-inch (9.6 mm), round 0.28-inch (7.1 mm), #12-24 UNC, or #10-32
UNC when you use the Cisco-supplied slide rails.
• The minimum vertical rack space per server must be two rack units (RUs), equal to 3.5 in. (88.9 mm).
Warning To prevent bodily injury when mounting or servicing this unit in a rack, you must take special
precautions to ensure that the system remains stable. The following guidelines are provided to
ensure your safety:
This unit should be mounted at the bottom of the rack if it is the only unit in the rack.
When mounting this unit in a partially filled rack, load the rack from the bottom to the top with
the heaviest component at the bottom of the rack.
If the rack is provided with stabilizing devices, install the stabilizers before mounting or servicing
the unit in the rack.
Statement 1006
c) Install the second inner rail to the opposite side of the server.
Step 2 Open the front securing plate on both slide-rail assemblies. The front end of the slide-rail assembly has a spring-loaded
securing plate that must be open before you can insert the mounting pegs into the rack-post holes.
On the outside of the assembly, push the green-arrow button toward the rear to open the securing plate.
1 Front mounting pegs
3 Securing plate, shown pulled back to the open position
b) Push the mounting pegs into the rack-post holes from the outside-front.
c) Press the securing plate release button, marked PUSH. The spring-loaded securing plate closes to lock the pegs in
place.
d) Adjust the slide-rail length, and then push the rear mounting pegs into the corresponding rear rack-post holes. The
slide rail must be level front-to-rear.
The rear mounting pegs enter the rear rack-post holes from the inside of the rack post.
e) Attach the second slide-rail assembly to the opposite side of the rack. Ensure that the two slide-rail assemblies are at
the same height and are level front-to-back.
f) Pull the inner slide rails on each assembly out toward the rack front until they hit the internal stops and lock in place.
Step 4 Insert the server into the slide rails:
Caution This server can weigh up to 64 pounds (29 kilograms) when fully loaded with components. We recommend
that you use a minimum of two people or a mechanical lift when lifting the server. Attempting this procedure
alone could result in personal injury or equipment damage.
a) Align the rear ends of the inner rails that are attached to the server sides with the front ends of the empty slide rails
on the rack.
b) Push the inner rails into the slide rails on the rack until they stop at the internal stops.
c) Slide the inner-rail release clip toward the rear on both inner rails, and then continue pushing the server into the rack
until its front slam-latches engage with the rack posts.
Figure 9: Inner-Rail Release Clip
Step 5 (Optional) Secure the server in the rack more permanently by using the two screws that are provided with the slide rails.
Perform this step if you plan to move the rack with servers installed.
With the server fully pushed into the slide rails, open a hinged slam latch lever on the front of the server and insert a
screw through the hole that is under the lever. The screw threads into the static part of the rail on the rack post and prevents
the server from being pulled out. Repeat for the opposite slam latch.
Note The cable management arm (CMA) is reversible left-to-right. To reverse the CMA, see Reversing the
Cable Management Arm (Optional), on page 50 before installation.
Step 1 With the server pushed fully into the rack, slide the CMA tab of the CMA arm that is farthest from the server onto the
end of the stationary slide rail that is attached to the rack post. Slide the tab over the end of the rail until it clicks and
locks.
Figure 10: Attaching the CMA to the Rear Ends of the Slide Rails
1 CMA tab on arm farthest from server attaches to end of stationary outer slide rail.
3 CMA tab on width-adjustment slider attaches to end of stationary outer slide rail.
Step 2 Slide the CMA tab that is closest to the server over the end of the inner rail that is attached to the server. Slide the tab over the end of the rail until it clicks and locks.
Step 3 Pull out the width-adjustment slider that is at the opposite end of the CMA assembly until it matches the width of your
rack.
Step 4 Slide the CMA tab that is at the end of the width-adjustment slider onto the end of the stationary slide rail that is attached
to the rack post. Slide the tab over the end of the rail until it clicks and locks.
Step 5 Open the hinged flap at the top of each plastic cable guide and route your cables through the cable guides as desired.
Step 1 Rotate the entire CMA assembly 180 degrees, left-to-right. The plastic cable guides must remain pointing upward.
Step 2 Flip the tabs at the ends of the CMA arms so that they point toward the rear of the server.
Step 3 Pivot the tab that is at the end of the width-adjustment slider. Depress and hold the metal button on the outside of the tab
and pivot the tab 180 degrees so that it points toward the rear of the server.
Figure 11: Reversing the CMA
Note This section describes how to power on the server, assign an IP address, and connect to server management
when using the server in standalone mode.
Connection Methods
There are two methods for connecting to the system for initial setup:
• Local setup—Use this procedure if you want to connect a keyboard and monitor directly to the system
for setup. This procedure can use a KVM cable (Cisco PID N20-BKVM) or the ports on the rear of the
server.
• Remote setup—Use this procedure if you want to perform setup through your dedicated management
LAN.
Note To configure the system remotely, you must have a DHCP server on the
same network as the system. Your DHCP server must be preconfigured
with the range of MAC addresses for this server node. The MAC address
is printed on a label that is on the pull-out asset tag on the front panel. This
server node has a range of six MAC addresses assigned to the Cisco IMC.
The MAC address printed on the label is the beginning of the range of six
contiguous MAC addresses.
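For example, if your management LAN uses an ISC DHCP server, the reservation might look like the following sketch. The host name, MAC address, and IP address are placeholders, not values from this document:

host c240-m6-cimc {
    # Placeholder MAC; read the actual value from the pull-out asset tag.
    # The Cisco IMC owns a block of six contiguous MAC addresses starting at that value.
    hardware ethernet 70:0f:6a:00:00:01;
    fixed-address 192.168.10.50;
}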
Step 1 Attach a power cord to each power supply in your server, and then attach each power cord to a grounded AC power outlet.
If you are using DC power supplies, see Installing DC Power Supplies (First Time Installation).
Wait for approximately two minutes to let the server boot to standby power during the first bootup. You can verify system
power status by looking at the system Power Status LED on the front panel. The system is in standby power mode when
the LED is amber.
Step 2 Connect a USB keyboard and VGA monitor to the server using one of the following methods:
• Connect an optional KVM cable (Cisco PID N20-BKVM) to the KVM connector on the front panel. Connect your
USB keyboard and VGA monitor to the KVM cable.
• Connect a USB keyboard and VGA monitor to the corresponding connectors on the rear panel.
Step 3 Power on the server by pressing the Power button, and then press F8 when prompted during bootup to open the Cisco IMC Configuration Utility.
Step 4 Continue with Setting Up the System With the Cisco IMC Configuration Utility, on page 54.
Note To configure the system remotely, you must have a DHCP server on the same network as the system.
Your DHCP server must be preconfigured with the range of MAC addresses for this server node. The
MAC address is printed on a label that is on the pull-out asset tag on the front panel. This server node
has a range of six MAC addresses assigned to the Cisco IMC. The MAC address printed on the label is
the beginning of the range of six contiguous MAC addresses.
Step 1 Attach a power cord to each power supply in your server, and then attach each power cord to a grounded AC power outlet.
If you are using DC power supplies, see Installing DC Power Supplies (First Time Installation).
Wait for approximately two minutes to let the server boot to standby power during the first bootup. You can verify system
power status by looking at the system Power Status LED on the front panel. The system is in standby power mode when
the LED is amber.
Step 2 Plug your management Ethernet cable into the dedicated management port on the rear panel.
Step 3 Allow your preconfigured DHCP server to assign an IP address to the server node.
Step 4 Use the assigned IP address to access and log in to the Cisco IMC for the server node. Consult with your DHCP server
administrator to determine the IP address.
Note The default username for the server is admin. The default password is password.
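Once the Cisco IMC has an address, you can also reach its CLI over SSH to confirm the network settings before continuing. The session below is a representative sketch; the IP address is a placeholder, and scope and field names can vary slightly by Cisco IMC release:

ssh admin@192.168.10.50
server# scope cimc
server /cimc # scope network
server /cimc/network # show detail

The show detail output lists the NIC mode, the MAC address, and the current IPv4 settings.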
Step 5 From the Cisco IMC Server Summary page, click Launch KVM Console. A separate KVM console window opens.
Step 6 From the Cisco IMC Summary page, click Power Cycle Server. The system reboots.
Step 7 Select the KVM console window.
Note The KVM console window must be the active window for the following keyboard actions to work.
Step 8 When prompted, press F8 to enter the Cisco IMC Configuration Utility. This utility opens in the KVM console window.
Note The first time that you enter the Cisco IMC Configuration Utility, you are prompted to change the default
password. The default password is password. The Strong Password feature is enabled.
Step 9 Continue with Setting Up the System With the Cisco IMC Configuration Utility, on page 54.
Step 1 Set the NIC mode to choose which ports to use to access Cisco IMC for server management:
• Shared LOM EXT (default)—This is the shared LOM extended mode, the factory-default setting. With this mode,
the Shared LOM and Cisco Card interfaces are both enabled. You must select the default Active-Active NIC
redundancy setting in the following step.
In this NIC mode, DHCP replies are returned to both the shared LOM ports and the Cisco card ports. If the system
determines that the Cisco card connection is not getting its IP address from a Cisco UCS Manager system because
the server is in standalone mode, further DHCP requests from the Cisco card are disabled. Use the Cisco Card
NIC mode if you want to connect to Cisco IMC through a Cisco card in standalone mode.
• Shared LOM—The 1-Gb/10-Gb Ethernet ports are used to access Cisco IMC. You must select either the
Active-Active or Active-standby NIC redundancy setting in the following step.
• Dedicated—The dedicated management port is used to access Cisco IMC. You must select the None NIC redundancy
setting in the following step.
• Cisco Card—The ports on an installed Cisco UCS Virtual Interface Card (VIC) are used to access the Cisco IMC.
You must select either the Active-Active or Active-standby NIC redundancy setting in the following step.
See also the required VIC Slot setting below.
• VIC Slot—Only if you use the Cisco Card NIC mode, you must select this setting to match where your VIC is
installed. The choices are Riser1, Riser2, or Flex-LOM (the mLOM slot).
• If you select Riser1, you must install the VIC in slot 2.
• If you select Riser2, you must install the VIC in slot 5.
• If you select Flex-LOM, you must install an mLOM-style VIC in the mLOM slot.
Step 2 Set the NIC redundancy to your preference. This server has three possible NIC redundancy settings:
• None—The Ethernet ports operate independently and do not fail over if there is a problem. This setting can be
used only with the Dedicated NIC mode.
• Active-standby—If an active Ethernet port fails, traffic fails over to a standby port. Shared LOM and Cisco Card
modes can each use either Active-standby or Active-active settings.
• Active-active (default)—All Ethernet ports are utilized simultaneously. The Shared LOM EXT mode must use
only this NIC redundancy setting. Shared LOM and Cisco Card modes can each use either Active-standby or
Active-active settings.
Step 3 Choose whether to enable DHCP for dynamic network settings, or to enter static network settings.
Note Before you enable DHCP, you must preconfigure your DHCP server with the range of MAC addresses for
this server. The MAC address is printed on a label on the rear of the server. This server has a range of six
MAC addresses assigned to Cisco IMC. The MAC address printed on the label is the beginning of the range
of six contiguous MAC addresses.
Step 10 (Optional) Enable auto-negotiation of port settings or set the port speed and duplex mode manually.
Note Auto-negotiation is applicable only when you use the Dedicated NIC mode. Auto-negotiation sets the port
speed and duplex mode automatically based on the switch port to which the server is connected. If you disable
auto-negotiation, you must set the port speed and duplex mode manually.
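If you prefer to set these parameters from the Cisco IMC CLI rather than the F8 utility, the following session is a sketch of the equivalent commands, assuming Dedicated NIC mode with a static address. All values shown are placeholders, and command availability can vary by Cisco IMC release:

server# scope cimc
server /cimc # scope network
server /cimc/network # set nic-mode dedicated
server /cimc/network # set nic-redundancy none
server /cimc/network # set dhcp-enabled no
server /cimc/network # set v4-addr 192.168.10.50
server /cimc/network # set v4-netmask 255.255.255.0
server /cimc/network # set v4-gateway 192.168.10.1
server /cimc/network # commit

Committing a NIC mode or address change can drop the current management session; reconnect at the new address.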
What to do next
Use a browser and the IP address of the Cisco IMC to connect to the Cisco IMC management interface. The
IP address is based upon the settings that you made (either a static address or the address assigned by your
DHCP server).
Note The factory default username for the server is admin. The default password is password.
To manage the server, see the Cisco UCS C-Series Integrated Management Controller GUI Configuration
Guide or the Cisco UCS C-Series Servers Integrated Management Controller CLI Configuration Guide for
instructions on using those interfaces for your Cisco IMC release. The links to the configuration guides are
in the Cisco Integrated Management Controller documentation.
This server has the following NIC mode settings that you can choose from:
• Shared LOM EXT (default)—This is the shared LOM extended mode, the factory-default setting. With
this mode, the Shared LOM and Cisco Card interfaces are both enabled. You must select the default
Active-Active NIC redundancy setting in the following step.
In this NIC mode, DHCP replies are returned to both the shared LOM ports and the Cisco card ports. If
the system determines that the Cisco card connection is not getting its IP address from a Cisco UCS
Manager system because the server is in standalone mode, further DHCP requests from the Cisco card
are disabled. Use the Cisco Card NIC mode if you want to connect to Cisco IMC through a Cisco card
in standalone mode.
• Shared LOM—The 1-Gb/10-Gb Ethernet ports are used to access Cisco IMC. You must select either the
Active-Active or Active-standby NIC redundancy setting in the following step.
• Dedicated—The dedicated management port is used to access Cisco IMC. You must select the None
NIC redundancy setting in the following step.
• Cisco Card—The ports on an installed Cisco UCS Virtual Interface Card (VIC) are used to access the
Cisco IMC. You must select either the Active-Active or Active-standby NIC redundancy setting in the
following step.
See also the required VIC Slot setting below.
• VIC Slot—Only if you use the Cisco Card NIC mode, you must select this setting to match where your
VIC is installed. The choices are Riser1, Riser2, or Flex-LOM (the mLOM slot).
• If you select Riser1, you must install the VIC in slot 2.
• If you select Riser2, you must install the VIC in slot 5.
• If you select Flex-LOM, you must install an mLOM-style VIC in the mLOM slot.
This server has the following NIC redundancy settings that you can choose from:
• None—The Ethernet ports operate independently and do not fail over if there is a problem. This setting
can be used only with the Dedicated NIC mode.
• Active-standby—If an active Ethernet port fails, traffic fails over to a standby port. Shared LOM and
Cisco Card modes can each use either Active-standby or Active-active settings.
• Active-active (default)—All Ethernet ports are utilized simultaneously. The Shared LOM EXT mode
must use only this NIC redundancy setting. Shared LOM and Cisco Card modes can each use either
Active-standby or Active-active settings.
Caution When you upgrade the BIOS firmware, you must also upgrade the Cisco IMC firmware to the same
version, or the server does not boot. Do not power off the server until the BIOS and Cisco IMC firmware
versions match.
Cisco provides the Cisco Host Upgrade Utility to assist with simultaneously upgrading the BIOS, Cisco
IMC, and other firmware to compatible levels.
The server uses firmware obtained from and certified by Cisco. Cisco provides release notes with each firmware
image. There are several possible methods for updating the firmware:
• Recommended method for firmware update: Use the Cisco Host Upgrade Utility to simultaneously
upgrade the Cisco IMC, BIOS, and component firmware to compatible levels.
See the Cisco Host Upgrade Utility Quick Reference Guide for your firmware release at the documentation
roadmap link below.
• You can upgrade the Cisco IMC and BIOS firmware by using the Cisco IMC GUI interface.
See the Cisco UCS C-Series Rack-Mount Server Configuration Guide.
• You can upgrade the Cisco IMC and BIOS firmware by using the Cisco IMC CLI interface.
See the Cisco UCS C-Series Rack-Mount Server CLI Configuration Guide.
For links to the documents listed above, see the Cisco UCS C-Series Documentation Roadmap.
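Before and after an upgrade, confirm that the running BIOS and Cisco IMC versions match the levels listed in the release notes. The following Cisco IMC CLI check is a representative sketch; exact command paths vary by Cisco IMC release, so verify them against the CLI configuration guide for your release:

server# show cimc detail    (reports the running Cisco IMC firmware version)
server# show bios detail    (reports the running BIOS version)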
Step 1 Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.
Step 2 Use the arrow keys to select the BIOS menu page.
Step 3 Highlight the field to be modified by using the arrow keys.
Step 4 Press Enter to select the field that you want to change, and then modify the value in the field.
Step 5 Press the right arrow key until the Exit menu screen is displayed.
Step 6 Follow the instructions on the Exit menu screen to save your changes and exit the setup utility (or press F10). You can
exit without saving changes by pressing Esc.
You must enter your Cisco IMC credentials to authenticate the connection.
• To switch from Cisco IMC CLI to host serial, press Esc+8.
Note You cannot switch to Cisco IMC CLI if the serial-over-LAN (SOL) feature
is enabled.
• After a session is created, it is shown in the CLI or web GUI by the name serial.
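As an example, serial over LAN can be enabled from the Cisco IMC CLI and then used to attach to the host serial console. This is a sketch; 115200 baud is one common choice, and scope names can vary by Cisco IMC release:

server# scope sol
server /sol # set enabled yes
server /sol # set baud-rate 115200
server /sol # commit
server /sol # exit
server# connect host

After connect host succeeds, the session appears in the CLI or web GUI under the name serial, as noted above.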
Note Any mouse or keyboard that is connected to the KVM cable is disconnected
when you enable Smart Access USB.
• You can use USB 3.0-based devices, but they will operate at USB 2.0 speed.
• We recommend that the USB device have only one partition.
• The file system formats supported are: FAT16, FAT32, MSDOS, EXT2, EXT3, and EXT4. NTFS
is not supported.
• The front-panel KVM connector has been designed to switch the USB port between Host OS and BMC.
• Smart Access USB can be enabled or disabled using any of the BMC user interfaces. For example, you
can use the Cisco IMC Configuration Utility that is accessed by pressing F8 when prompted during
bootup.
• Enabled: the front-panel USB device is connected to the BMC.
• Disabled: the front-panel USB device is connected to the host.
• If no management network is available for connecting remotely to Cisco IMC, a Device Firmware Update (DFU) shell over a serial cable can be used to generate and download technical support files to the USB device that is attached to the front-panel USB port.
Front-Panel LEDs
Figure 12: Front Panel LEDs
2 SAS/SATA drive activity LED:
• Off—There is no hard drive in the hard drive tray (no access, no fault).
• Green—The hard drive is ready.
• Green, blinking—The hard drive is reading or writing data.
1 NVMe SSD drive fault LED:
Note: NVMe solid-state drive (SSD) drive tray LEDs have different behavior than SAS/SATA drive trays.
• Off—The drive is not in use and can be safely removed.
• Green—The drive is in use and functioning properly.
• Green, blinking—The driver is initializing following insertion, or the driver is unloading following an eject command.
• Amber—The drive has failed.
• Amber, blinking—A drive Locate command has been issued in the software.
Rear-Panel LEDs
Figure 13: Rear Panel LEDs
2 1-Gb/10-Gb Ethernet link speed (on both LAN1 and LAN2):
• Off—Link speed is 100 Mbps.
• Amber—Link speed is 1 Gbps.
• Green—Link speed is 10 Gbps.
3 1-Gb/10-Gb Ethernet link status (on both LAN1 and LAN2):
• Off—No link is present.
• Green—Link is active.
• Green, blinking—Traffic is present on the active link.
6 Power supply status (one LED on each power supply unit). AC power supplies:
• Off—No AC input (12 V main power off, 12 V
standby power off).
• Green, blinking—12 V main power off; 12 V standby
power on.
• Green, solid—12 V main power on; 12 V standby
power on.
• Amber, blinking—Warning threshold detected but 12
V main power on.
• Amber, solid—Critical error detected; 12 V main
power off (for example, over-current, over-voltage,
or over-temperature failure).
8 SAS/SATA drive activity LED:
• Off—There is no hard drive in the hard drive tray (no access, no fault).
• Green—The hard drive is ready.
• Green, blinking—The hard drive is reading or writing data.
7 NVMe SSD drive fault LED:
Note: NVMe solid-state drive (SSD) drive tray LEDs have different behavior than SAS/SATA drive trays.
• Off—The drive is not in use and can be safely removed.
• Green—The drive is in use and functioning properly.
• Green, blinking—The driver is initializing following insertion, or the driver is unloading following an eject command.
• Amber—The drive has failed.
• Amber, blinking—A drive Locate command has been issued in the software.
1 Fan module fault LEDs (one on the top of each fan module):
• Amber—Fan has a fault or is not fully seated.
• Green—Fan is OK.
3 DIMM fault LEDs (one behind each DIMM socket on the motherboard):
These LEDs operate only when the server is in standby power mode.
• Amber—DIMM has a fault.
• Off—DIMM is OK.
Caution After a server is shut down to standby power, electric current is still present in the server. To completely
remove power, you must disconnect all power cords from the power supplies in the server, as directed
in the service procedures.
You can shut down the server by using the front-panel power button or the software management
interfaces.
• Emergency shutdown—Press and hold the Power button for 4 seconds to force the main power off and immediately
enter standby mode.
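The software interfaces follow the same graceful/forced distinction. For example, from the Cisco IMC CLI (a representative sketch; confirm the exact commands for your Cisco IMC release):

server# scope chassis
server /chassis # power shutdown    (graceful, OS-mediated shutdown to standby power)
server /chassis # power off         (forced power off)

Each power command asks for confirmation before changing the server's power state.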
Step 3 If a service procedure instructs you to completely remove power from the server, disconnect all power cords from the
power supplies in the server.
a) If the cover latch is locked, slide the lock sideways to unlock it.
When the latch is unlocked, the handle pops up so that you can grasp it.
b) Lift on the end of the latch so that it pivots vertically to 90 degrees.
c) Simultaneously, slide the cover back and lift the top cover straight up from the server and set it aside.
Step 2 Replace the top cover:
a) With the latch in the fully open position, place the cover on top of the server about one-half inch (1.27 cm) behind
the lip of the front cover panel.
b) Slide the cover forward until the latch makes contact.
c) Press the latch down to the closed position. The cover is pushed forward to the closed position as you push down the
latch.
d) Lock the latch by sliding the lock button sideways to the left.
Locking the latch ensures that the latch handle does not protrude when you install the server.
• Hot-plug replacement—For OS-informed replacement (VMD is disabled), you must take the component offline before removing it. This applies to the following component:
• NVMe PCIe solid-state drives (SSDs)
Step 3 Grasp the left and right detents and lift the air duct out of the chassis.
Note You might need to slide the air duct towards the back of the server while lifting the air duct up.
What to do next
When you are done servicing the server, install the air duct. See Installing the Air Duct, on page 74.
Step 3 When the air duct is correctly seated, attach the server's top cover.
The server top cover should sit flush so that the metal tabs on the top cover match the indents in the top edges of the air
duct.
Warning Blank faceplates and cover panels serve three important functions: they prevent exposure to
hazardous voltages and currents inside the chassis; they contain electromagnetic interference
(EMI) that might disrupt other equipment; and they direct the flow of cooling air through the
chassis. Do not operate the system unless all cards, faceplates, front covers, and rear covers are
in place.
Statement 1029
Caution When handling server components, handle them only by carrier edges and use an electrostatic discharge
(ESD) wrist-strap or other grounding device to avoid damage.
Tip You can press the unit identification button on the front panel or rear panel to turn on a flashing, blue
unit identification LED on both the front and rear panels of the server. This button allows you to locate
the specific server that you are servicing when you go to the opposite side of the rack. You can also
activate these LEDs remotely by using the Cisco IMC interface.
7 PCIe riser 3 (PCIe slots 7 and 8, numbered from bottom to top), with the following options:
• 3A (Default Option)—Slots 7 (x16 mechanical, x8 electrical) and 8 (x16 mechanical, x8 electrical). Both slots can accept a full-height, full-length GPU card.
• 3B (Storage Option)—Slots 7 (x24 mechanical, x4 electrical) and 8 (x24 mechanical, x4 electrical). Both slots can accept 2.5-inch SFF universal HDDs.
• 3C (GPU Option)—Slot 7 (x16 mechanical, x16 electrical) and slot 8 empty (NCSI support limited to one slot at a time). Slot 7 can support a full-height, full-length GPU card.
8 PCIe riser 2 (PCIe slots 4, 5, and 6, numbered from bottom to top), with the following option:
• 2A (Default Option)—Slot 4 (x24 mechanical, x8 electrical) supports a full-height, ¾-length card; slot 5 (x24 mechanical, x16 electrical) supports a full-height, full-length GPU card; slot 6 (x16 mechanical, x8 electrical) supports a full-height, full-length card.
The Technical Specifications Sheets for all versions of this server, which include supported component part
numbers, are at Cisco UCS Servers Technical Specifications Sheets (scroll down to Technical Specifications).
Note You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because
they are hot-swappable.
To replace rear-loading SAS/SATA drives, see Replacing Rear-Loading SAS/SATA Drives, on page 81.
• Cisco UCS C240 M6 12 SAS/SATA plus optical drive—SFF drives, with 12-drive backplane and DVD
drive option.
• Front-loading drive bays 1—12 support 2.5-inch SAS/SATA drives.
• Optionally, front-loading drive bays 1 through 4 can support 2.5-inch NVMe SSDs.
• Cisco UCS C240 M6 12 LFF SAS/SATA—Large form-factor (LFF) drives, with 12-drive backplane.
• Front-loading drive bays 1—12 support 3.5-inch SAS-only drives.
• Optionally, up to 4 mid-plane mounted SAS-only HDDs can be supported.
• Optionally, rear drive bays can support up to 4 SFF SAS/SATA or NVMe drives.
Figure 17: Large Form-Factor Drive (12-Drive) Version, Drive Bay Numbering
Note For diagrams of which drive bays are controlled by particular controller
cables on the backplane, see Storage Controller Cable Connectors and
Backplanes, on page 186.
• Front-loading drives are hot-pluggable, but each drive requires a 10-second delay between hot removal and hot insertion.
• Keep an empty drive blanking tray in any unused bays to ensure proper airflow.
• You can mix SAS/SATA hard drives and SAS/SATA SSDs in the same server. However, you cannot
configure a logical volume (virtual drive) that contains a mix of hard drives and SSDs. That is, when
you create a logical volume, it must contain all SAS/SATA hard drives or all SAS/SATA SSDs.
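For example, with a 12G SAS RAID controller managed through StorCLI, each virtual drive would be built from drives of a single media type. The following sketch is hypothetical; the controller index, enclosure ID, and slot numbers are placeholders for your own inventory:

storcli /c0 show
    # Lists enclosures, drive slots, and each drive's media type (HDD or SSD).
storcli /c0 add vd type=raid5 drives=252:1-3
    # Creates a RAID 5 virtual drive from three drives that are all HDDs or all SSDs.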
Step 1 Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.
Step 2 Go to the Boot Options tab.
Step 3 Set UEFI Boot Options to Enabled.
Step 4 Under Boot Option Priorities, set your OS installation media (such as a virtual DVD) as your Boot Option #1.
Step 5 Go to the Advanced tab.
Step 6 Select LOM and PCIe Slot Configuration.
Step 7 Set the PCIe Slot ID: HBA Option ROM to UEFI Only.
Step 8 Press F10 to save changes and exit the BIOS setup utility. Allow the server to reboot.
Step 9 After the OS installs, verify the installation:
a) Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.
b) Go to the Boot Options tab.
c) Under Boot Option Priorities, verify that the OS you installed is listed as your Boot Option #1.
Step 1 Use a web browser and the IP address of the server to log into the Cisco IMC GUI management interface.
Step 2 Navigate to Server > BIOS.
Step 3 Under Actions, click Configure BIOS.
Step 4 In the Configure BIOS Parameters dialog, select the Advanced tab.
Step 5 Go to the LOM and PCIe Slot Configuration section.
Step 6 Set the PCIe Slot: HBA Option ROM to UEFI Only.
Step 7 Click Save Changes. The dialog closes.
Step 8 Under BIOS Properties, set Configured Boot Order to UEFI.
Step 9 Under Actions, click Configure Boot Order.
Step 10 In the Configure Boot Order dialog, click Add Local HDD.
Step 11 In the Add Local HDD dialog, enter the information for the 4K sector format drive and make it first in the boot order.
Step 12 Save changes and reboot the server. The changes you made will be visible after the system reboots.
Note You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because
they are hot-swappable.
Step 1 Remove the drive that you are replacing or remove a blank drive tray from the bay:
a) Press the release button on the face of the drive tray.
b) Grasp and open the ejector lever and then pull the drive tray out of the slot.
c) If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift
the drive out of the tray.
Step 2 Install a new drive:
a) Place a new drive in the empty drive tray and install the four drive-tray screws.
b) With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.
c) Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.
Note You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because
they are hot-swappable.
• UCS C240 M6 8 SAS/SATA plus optical—SFF drives, with 8-drive backplane and DVD drive option.
• Hardware RAID—Rear drive bays support SAS or NVMe drives
• Intel® Virtual RAID on CPU—Rear drive bays support NVMe drives only.
• The rear drive bay numbering follows the front-drive bay numbering in each server version:
• 8-drive server—rear bays are numbered bays 9 and 10.
• 12-drive server—rear bays are numbered bays 13 and 14.
• 24-drive server—rear bays are numbered bays 25 and 26.
Note You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because
they are hot-swappable.
Step 1 Remove the drive that you are replacing or remove a blank drive tray from the bay:
a) Press the release button on the face of the drive tray.
b) Grasp and open the ejector lever and then pull the drive tray out of the slot.
c) If you are replacing an existing drive, remove the four drive-tray screws that secure the drive to the tray and then lift
the drive out of the tray.
Step 2 Install a new drive:
a) Place a new drive in the empty drive tray and install the four drive-tray screws.
b) With the ejector lever on the drive tray open, insert the drive tray into the empty drive bay.
c) Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.
Figure 19: Replacing a Drive in a Drive Tray
Step 2 Grasp the handle for the mid-mount drive cage, and swing the cage cover open.
When the cage cover is open, it will be pointing up at a 90-degree angle.
Step 3 Grasping the cage cover handle, pull up on the drive cage until the bottom row of drives clears the top of the server.
When pulling on the mid-mount drive cage, it will arc upward.
Step 4 Grasp the drive handle and pull the drive out of the mid-mount drive cage.
Step 5 Orient the drive so that the handle is at the bottom and align it with its drive bay.
Step 6 Holding the drive level, slide it into the drive bay until it connects with the midplane.
Step 7 Push down on the drive cage so that it seats into the server.
Step 8 Grasp the handle and close the server cage cover.
Note Make sure that the server cage cover is completely closed, and the server cage is completely seated in the server.
When the server cage is completely seated, its top is flush with the fans and rear PCI riser cages.
Before submitting the drive to the RMA process, it is a best practice to reseat the drive. If the UBAD error is false, reseating the drive can clear it. If successful, reseating the drive reduces inconvenience, cost, and service interruption, and maximizes your server uptime.
Note Reseat the drive only if a UBAD error occurs. Other errors are transient, and you should not attempt
diagnostics and troubleshooting without the assistance of Cisco personnel. Contact Cisco TAC for
assistance with other drive errors.
Caution This procedure might require powering down the server. Powering down the server will cause a service
interruption.
• When reseating the drive, allow 20 seconds between removal and reinsertion.
Step 1 Attempt a hot reseat of the affected drive(s). Choose the appropriate option:
a) For a front-loading drive, see Replacing a Front-Loading SAS/SATA Drive, on page 80
b) For a rear-loading drive, see Replacing a Rear-Loading SAS/SATA Drive, on page 82
c) For a mid-mount drive, see Replacing Mid-Mounted SAS/SATA Drives (LFF Server), on page 83
Note While the drive is removed, it is a best practice to perform a visual inspection. Check the drive bay to ensure
that no dust or debris is present. Also, check the connector on the back of the drive and the connector on
the inside of the server for any obstructions or damage.
Also, when reseating the drive, allow 20 seconds between removal and reinsertion.
Step 2 During bootup, watch the drive's LEDs to verify correct operation.
See Status LEDs and Buttons, on page 61.
Step 3 If the error persists, cold reseat the drive, which requires a server power down. Choose the appropriate option:
a) Use your server management software to gracefully power down the server.
See the appropriate Cisco management software documentation.
b) If server power down through software is not available, you can power down the server by pressing the power button.
See Status LEDs and Buttons, on page 61.
c) Reseat the drive as documented in Step 1.
d) When the drive is correctly reseated, restart the server, and check the drive LEDs for correct operation as documented
in Step 2.
Step 4 If hot and cold reseating the drive (if necessary) does not clear the UBAD error, choose the appropriate option:
a) Contact Cisco Systems for assistance with troubleshooting.
b) Begin an RMA of the errored drive.
• Hot-plug support must be enabled in the system BIOS. If you ordered the system with NVMe drives,
hot-plug support is enabled at the factory.
• You cannot control NVMe PCIe SSDs with a SAS RAID controller because NVMe SSDs interface with
the server via the PCIe bus.
• You can combine NVMe SSDs in the same system, but they must all be from the same partner brand. For example, mixing two Intel NVMe SFF 2.5-inch SSDs with two HGST SSDs is an invalid configuration.
• UEFI boot is supported in all supported operating systems. Hot-insertion and hot-removal are supported in all supported operating systems except VMware ESXi.
Step 1 Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.
Step 2 Navigate to Advanced > PCI Subsystem Settings > NVMe SSD Hot-Plug Support.
Step 3 Set the value to Enabled.
Step 4 Save your changes and exit the utility.
Step 1 Use a browser to log in to the Cisco IMC GUI for the server.
Step 2 Navigate to Compute > BIOS > Advanced > PCI Configuration.
Step 3 Set NVME SSD Hot-Plug Support to Enabled.
Step 4 Save your changes.
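For scripted deployments, the same BIOS token can in principle be set through the Redfish BIOS resource. The sketch below follows the standard Redfish pattern of writing pending changes to the Bios/Settings resource; the attribute name NvmeSsdHotPlugSupport is an assumption for illustration, so list the attributes on your server to find the exact token your Cisco IMC release uses.
import requests

CIMC_HOST = "https://10.0.0.10"        # placeholder address
session = requests.Session()
session.auth = ("admin", "password")   # placeholder credentials
session.verify = False

# Discover the BIOS resource from the Systems collection rather than
# hard-coding a system ID.
systems = session.get(f"{CIMC_HOST}/redfish/v1/Systems").json()
system_uri = systems["Members"][0]["@odata.id"]
bios_uri = session.get(f"{CIMC_HOST}{system_uri}").json()["Bios"]["@odata.id"]

# Print the attribute names so you can confirm the real hot-plug token.
attrs = session.get(f"{CIMC_HOST}{bios_uri}").json()["Attributes"]
print(sorted(attrs))

# Per the Redfish specification, pending BIOS changes are written to the
# Settings resource and take effect on the next reboot.
# "NvmeSsdHotPlugSupport" is a hypothetical token name.
resp = session.patch(
    f"{CIMC_HOST}{bios_uri}/Settings",
    json={"Attributes": {"NvmeSsdHotPlugSupport": "Enabled"}},
)
resp.raise_for_status()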
Note OS-surprise removal is not supported. OS-informed hot-insertion and hot-removal are supported on all
supported operating systems except VMware ESXi.
Note OS-informed hot-insertion and hot-removal must be enabled in the system BIOS. See Enabling Hot-Plug
Support in the System BIOS, on page 88.
Connectors are keyed, and they are different at each end of the cable to prevent improper installation. The
backplane connector IDs are silk screened onto the interior of the server.
For this task, you need the appropriate cables.
Step 4 Orient the NVMe C cable correctly, lower it into place, and attach both ends.
Step 6 Orient the NVMe D cable correctly, lower it into place, and attach both ends.
Note The NVMe D cable lies on top of the NVMe C cable.
Step 7 If drives are installed in any of slots 1 through 4, look at the drive LEDs to verify correct operation.
See Front-Panel LEDs, on page 62.
Step 8 When drives are successfully booted up to runtime, reinstall the fan tray.
See Installing the Fan Tray, on page 100.
• The rear drive bay numbering follows the front-drive bay numbering in each server version:
• 12-drive server—rear bays are numbered bays 103 and 104.
• 24-drive server—rear bays are numbered bays 101 through 104.
• You can combine NVMe 2.5-inch SSDs in the same system, but they must all be from the same partner brand. For example, mixing two Intel NVMe SFF 2.5-inch SSDs with two HGST SSDs is an invalid configuration.
• UEFI boot is supported in all supported operating systems. Hot-insertion and hot-removal are supported in all supported operating systems except VMware ESXi.
Note OS-surprise removal is not supported. OS-informed hot-insertion and hot-removal are supported on all
supported operating systems except VMware ESXi.
Note OS-informed hot-insertion and hot-removal must be enabled in the system BIOS. See Enabling Hot-Plug
Support in the System BIOS, on page 88.
Tip There is a fault LED on the top of each fan module. This LED lights green when the fan is correctly seated and operating properly. The LED lights amber when the fan has a fault or is not correctly seated.
Caution You do not have to shut down or remove power from the server to replace fan modules because they are hot-swappable. However, to maintain proper cooling, do not operate the server for more than one minute with any fan module removed.
Step 1 Remove the screws that secure the fan tray to the chassis.
a) Locate the screws that secure the fan tray to the server.
b) Using a #2 Phillips screwdriver, loosen the screws.
Step 2 Disconnect the fan tray cable from the fan tray, leaving the motherboard connection in place.
Step 3 Remove the fan tray from the server.
a) Grasp the handles at the top of the fan tray.
b) Holding the fan tray level and making sure that the fan tray cable does not obstruct removal, lift the fan tray up until
it is removed from the chassis.
What to do next
Reinsert the fan tray into the chassis. See Installing the Fan Tray, on page 100.
• Two different form factors exist for heatsinks: a low profile and a high profile. The server can be ordered with either, but you cannot mix high-profile and low-profile heatsinks in the same server. A single server must have all of one type.
The CPU and heatsink installation procedure is different depending on the type of heatsink used in your
server.
• Low profile (UCSC-HSLP-M6), which has 4 T30 Torx screws on the main heatsink, and 2
Phillips-head screws on the extended heatsink.
This heat sink is required for servers that contain one or more GPUs.
This heat sink is not supported on C240 M6 LFF servers.
See also Additional CPU-Related Parts to Order with RMA Replacement CPUs, on page 111.
Step 1 Choose the appropriate method to loosen the securing screws, based on whether the CPU has a high-profile or low-profile heatsink.
c) Grasp the CPU and heatsink along the edge of the carrier and lift the CPU and heatsink off of the motherboard.
Caution While lifting the CPU assembly, make sure not to bend the heatsink fins. Also, if you feel any resistance
when lifting the CPU assembly, verify that the rotating wires are completely in the unlocked position.
d) Go to step 3.
Step 2 Remove the CPU.
a) Using a #2 Phillips screwdriver, loosen the two Phillips head screws for the extended heatsink.
b) Using a T30 Torx driver, loosen the four Torx securing nuts.
c) Push the rotating wires towards each other to move them to the unlocked position.
Caution Make sure that the rotating wires are as far inward as possible. When fully unlocked, the bottom of the
rotating wire disengages and allows the removal of the CPU assembly. If the rotating wires are not fully
in the unlocked position, you can feel resistance when attempting to remove the CPU assembly.
d) Grasp the CPU and heatsink along the edge of the carrier and lift the CPU and heatsink off of the motherboard.
Caution While lifting the CPU assembly, make sure not to bend the heatsink fins. Also, if you feel any resistance
when lifting the CPU assembly, verify that the rotating wires are completely in the unlocked position.
e) Go to step 3.
Step 3 Put the CPU assembly on a rubberized mat or other ESD-safe work surface.
When placing the CPU on the work surface, the heatsink label should be facing up. Do not rotate the CPU assembly
upside down.
Step 5 Detach the CPU from the CPU carrier by disengaging CPU clips and using the TIM breaker.
a) Turn the CPU assembly upside down, so that the heatsink is pointing down.
c) Lower the TIM breaker into the u-shaped securing clip to allow easier access to the CPU carrier.
Note Make sure that the TIM breaker is completely seated in the securing clip.
d) Gently pull up on the outer edge of the CPU carrier (2) so that you can disengage the second pair of CPU clips near
both ends of the TIM breaker.
Caution Be careful when flexing the CPU carrier! If you apply too much force, you can damage the CPU carrier. Flex the carrier only enough to release the CPU clips. Make sure to watch the clips while performing this step so that you can see when they disengage from the CPU carrier.
e) Gently pull up on the outer edge of the CPU carrier so that you can disengage the pair of CPU clips (3 in the following
illustration) which are opposite the TIM breaker.
f) Grasp the CPU carrier along the short edges and lift it straight up to remove it from the heatsink.
b) Flip the CPU and carrier right-side up so that the words PRESS HERE are visible.
c) Align the posts on the fixture and the pin 1 locations on the CPU carrier and the fixture (1 in the following illustration).
d) Lower the CPU and CPU carrier onto the fixture.
Step 7 Use the provided cleaning kit (UCSX-HSCK) to remove all of the thermal interface barrier (thermal grease) from the
CPU, CPU carrier, and heatsink.
Important Make sure to use only the Cisco-provided cleaning kit, and make sure that no thermal grease is left on any
surfaces, corners, or crevices. The CPU, CPU carrier, and heatsink must be completely clean.
What to do next
Choose the appropriate option:
• If you will be installing a CPU, go to Installing the CPUs and Heatsinks, on page 107.
• If you will not be installing a CPU, verify that a CPU socket cover is installed. This option is valid only
for CPU socket 2 because CPU socket 1 must always be populated in a runtime deployment.
Step 1 Remove the CPU socket dust cover (UCS-CPU-M6-CVR=) on the server motherboard.
a) Push the two vertical tabs inward to disengage the dust cover.
b) While holding the tabs in, lift the dust cover up to remove it.
Step 2 Grasp the CPU fixture on the edges labeled PRESS, lift it out of the tray, and place the CPU assembly on an ESD-safe
work surface.
Step 3 Apply new TIM.
Note The heatsink must have new TIM on the heatsink-to-CPU surface to ensure proper cooling and performance.
• If you are installing a new heatsink, it is shipped with a pre-applied pad of TIM. Go to step 4.
• If you are reusing a heatsink, you must remove the old TIM from the heatsink and then apply new TIM to the CPU
surface from the supplied syringe. Continue with step a below.
a) Apply the Bottle #1 cleaning solution that is included with the heatsink cleaning kit (UCSX-HSCK=), as well as the spare CPU package, to the old TIM on the heatsink and let it soak for at least 15 seconds.
b) Wipe all of the TIM off the heatsink using the soft cloth that is included with the heatsink cleaning kit. Be careful to
avoid scratching the heatsink surface.
c) Completely clean the bottom surface of the heatsink using Bottle #2 to prepare the heatsink for installation.
d) Using the syringe of TIM provided with the new CPU (UCS-CPU-TIM=), apply 1.5 cubic centimeters (1.5 ml) of
thermal interface material to the top of the CPU. Use the pattern shown in the following figure to ensure even coverage.
Caution Use only the correct heatsink for your CPU. Heatsink UCSC-HSHP-240M6= is for servers with no GPU.
Heatsink UCSC-HSLP-M6= is for servers with GPUs installed.
• For a CPU with a low-profile heatsink, set the T30 Torx driver to 12 in-lb of torque and tighten the 4 securing
nuts to secure the CPU to the motherboard (3) first. Then, set the torque driver to 6 in-lb of torque and tighten
the two Phillips head screws for the extended heatsink (4).
Note The following items apply to CPU replacement scenarios. If you are replacing a system chassis and
moving existing CPUs to the new chassis, you do not have to separate the heatsink from the CPU. See
Additional CPU-Related Parts to Order with RMA Replacement System Chassis, on page 112.
Caution Use only the correct heatsink for your CPUs to ensure proper cooling. There
are two different heatsinks, a low profile (UCSC-HSLP-M6) which is used
with GPUs, and a high-profile (UCSC-HSHP-240M6) for servers without
GPUs.
• Scenario 3—You have a damaged CPU carrier (the plastic frame around the CPU):
• CPU Carrier: UCS-M6-CPU-CAR=
• #1 flat-head screwdriver (for separating the CPU from the heatsink)
• Heatsink cleaning kit (UCSX-HSCK=)
One cleaning kit can clean up to four CPUs.
• Thermal interface material (TIM) kit for M6 servers (UCS-CPU-TIM=)
One TIM kit covers one CPU.
A CPU heat sink cleaning kit is good for up to four CPU and heat sink cleanings. The cleaning kit contains
two bottles of solution, one to clean the CPU and heat sink of old TIM and the other to prepare the surface of
the heat sink.
New heat sink spares come with a pre-applied pad of TIM. It is important to clean any old TIM off of the
CPU surface prior to installing the heat sinks. Therefore, even when you are ordering new heat sinks, you
must order the heat sink cleaning kit.
Note Unlike previous generation CPUs, the M6 server CPUs do not require you to separate the heatsink from
the CPU when you move the CPU-heatsink assembly. Therefore, no additional heatsink cleaning kit or
thermal-interface material items are required.
• The only tool required for moving a CPU/heatsink assembly is a T-30 Torx driver.
Caution DIMMs and their sockets are fragile and must be handled with care to avoid damage during installation.
Caution Cisco does not support third-party DIMMs. Using non-Cisco DIMMs in the server might result in system
problems or damage to the motherboard.
Note To ensure the best server performance, it is important that you are familiar with memory performance
guidelines and population rules before you install or replace DIMMs.
• CPU 1 supports channels P1 A1, P1 A2, P1 B1, P1 B2, P1 C1, P1 C2, P1 D1, P1 D2, P1 E1, P1 E2,
P1 F1, P1 F2, P1 G1, P1 G2, P1 H1, and P1 H2.
• CPU 2 supports channels P2 A1, P2 A2, P2 B1, P2 B2, P2 C1, P2 C2, P2 D1, P2 D2, P2 E1, P2 E2,
P2 F1, P2 F2, P2 G1, P2 G2, P2 H1, and P2 H2.
• Each channel has two DIMM sockets (for example, channel A = slots A1, A2).
• In a single-CPU configuration, populate the channels for CPU1 only (P1 A1 through P1 H2).
• For optimal performance, populate DIMMs in the order shown in the following table, depending on the
number of CPUs and the number of DIMMs per CPU. If your server has two CPUs, balance DIMMs
evenly across the two CPUs as shown in the table.
Number of DIMMs per CPU | CPU 1 Blue #1 Slots | CPU 1 Black #2 Slots | CPU 2 Blue #1 Slots | CPU 2 Black #2 Slots
1 | A1 | - | A1 | -
12 | A1, C1, D1, E1, G1, H1 | A2, C2, D2, E2, G2, H2 | A1, C1, D1, E1, G1, H1 | A2, C2, D2, E2, G2, H2
16 | All populated (A1 through H1) | All populated (A2 through H2) | All populated (A1 through H1) | All populated (A2 through H2)
Table 16: DIMM Plus Intel Optane Persistent Memory 200 Series Memory Population Order
Total Number of DIMMs per CPU | DDR4 DIMM Slot | Intel Optane Persistent Memory 200 Series DIMM Slot
8+4 DIMMs | A1, B1, C1, D1, E1, F1, G1, H1 | A2, C2, E2, G2
8+8 DIMMs | A1, B1, C1, D1, E1, F1, G1, H1 | A2, B2, C2, D2, E2, F2, G2, H2
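To make the population order usable from a provisioning or audit script, the table rows above can be encoded directly. The following Python sketch is illustrative only; it covers just the DIMM counts shown in the table, and the function and structure names are assumptions, not part of any Cisco tool.
# Encodes the DIMM population table above. Each entry maps DIMMs-per-CPU
# to (blue #1 slots, black #2 slots); the same pattern applies to each CPU.
POPULATION_ORDER = {
    1:  (["A1"], []),
    12: (["A1", "C1", "D1", "E1", "G1", "H1"],
         ["A2", "C2", "D2", "E2", "G2", "H2"]),
    16: ([f"{ch}1" for ch in "ABCDEFGH"],
         [f"{ch}2" for ch in "ABCDEFGH"]),
}

def slots_for(dimms_per_cpu: int, cpu: int) -> list[str]:
    """Return the populated slots for one CPU, prefixed P1/P2 as in this guide."""
    blue, black = POPULATION_ORDER[dimms_per_cpu]
    return [f"P{cpu} {slot}" for slot in blue + black]

print(slots_for(12, 1))  # ['P1 A1', 'P1 C1', ..., 'P1 H1', 'P1 A2', ..., 'P1 H2']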
Memory Mirroring
The CPUs in the server support memory mirroring only when an even number of channels are populated with
DIMMs. If one or three channels are populated with DIMMs, memory mirroring is automatically disabled.
Memory mirroring reduces the amount of memory available by 50 percent because only one of the two
populated channels provides data. The second, duplicate channel provides redundancy.
Replacing DIMMs
Identifying a Faulty DIMM
Each DIMM socket has a corresponding DIMM fault LED, directly in front of the DIMM socket. See Internal
Diagnostic LEDs, on page 67 for the locations of these LEDs. When the server is in standby power mode,
these LEDs light amber to indicate a faulty DIMM.
a) Align the new DIMM with the empty slot on the motherboard. Use the alignment feature in the DIMM slot to correctly
orient the DIMM.
b) Push down evenly on the top corners of the DIMM until it is fully seated and the ejector levers on both ends lock
into place.
c) Replace the top cover to the server.
d) Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
Caution DCPMMs and their sockets are fragile and must be handled with care to avoid damage during installation.
Note To ensure the best server performance, it is important that you are familiar with memory performance
guidelines and population rules before you install or replace DCPMMs.
Intel Optane DC Persistent Memory Module Population Rules and Performance Guidelines
This topic describes the rules and guidelines for maximum memory performance when using Intel Optane
DC persistent memory modules (DCPMMs) with DDR4 DRAM DIMMs.
Configuration Rules
Observe the following rules and guidelines:
• To use DCPMMs in this server, two CPUs must be installed.
• The DCPMMs run at 2666 MHz. If you have 2933 MHz RDIMMs or LRDIMMs in the server and you
add DCPMMs, the main memory speed clocks down to 2666 MHz to match the speed of the DCPMMs.
• Each DCPMM draws 18 W sustained, with a 20 W peak.
• When using DCPMMs in a server:
• The DDR4 DIMMs installed in the server must all be the same size.
• The DCPMMs installed in the server must all be the same size and must have the same SKU.
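These rules are mechanical enough to check before a maintenance window. The sketch below encodes them in Python; the data model and the part-number string are invented for illustration and simply mirror the bullets above.
from dataclasses import dataclass

@dataclass
class Dcpmm:
    size_gb: int
    sku: str

def validate_dcpmm_config(cpu_count: int, ddr4_sizes_gb: list[int],
                          dcpmms: list[Dcpmm]) -> int:
    """Check the population rules and return the resulting memory clock (MHz)."""
    if cpu_count != 2:
        raise ValueError("DCPMMs require two installed CPUs")
    if len(set(ddr4_sizes_gb)) > 1:
        raise ValueError("all DDR4 DIMMs must be the same size")
    if len({(d.size_gb, d.sku) for d in dcpmms}) > 1:
        raise ValueError("all DCPMMs must be the same size and the same SKU")
    # DCPMMs run at 2666 MHz, so 2933 MHz RDIMMs/LRDIMMs clock down to match.
    return 2666

# Example: 16 x 32 GB DDR4 plus 4 DCPMMs ("PMEM-128G-SKU" is a placeholder).
clock = validate_dcpmm_config(2, [32] * 16, [Dcpmm(128, "PMEM-128G-SKU")] * 4)
print(f"Main memory runs at {clock} MHz")
print(f"Sustained DCPMM power: {4 * 18} W (20 W peak each)")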
Note DCPMM configuration is always applied to all DCPMMs in a region, including a replacement DCPMM.
You cannot provision a specific replacement DCPMM on a preconfigured server.
Caution If you cannot safely view and access the component, remove the server from the rack.
c) Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
d) Remove the air baffle that covers the front ends of the DIMM slots to provide clearance.
Caution If you are moving DCPMMs with active data (persistent memory) from one server to another as in an RMA
situation, each DCPMM must be installed to the identical position in the new server. Note the positions of
each DCPMM or temporarily label them when removing them from the old server.
e) Locate the DCPMM that you are removing, and then open the ejector levers at each end of its DIMM slot.
Step 2 Install a new DCPMM:
Note Before installing DCPMMs, see the population rules for this server: Intel Optane DC Persistent Memory Module
Population Rules and Performance Guidelines, on page 116.
a) Align the new DCPMM with the empty slot on the motherboard. Use the alignment feature in the DIMM slot to
correctly orient the DCPMM.
b) Push down evenly on the top corners of the DCPMM until it is fully seated and the ejector levers on both ends lock
into place.
c) Reinstall the air baffle.
d) Replace the top cover to the server.
e) Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
Step 3 Perform post-installation actions:
• If the existing configuration is in 100% Memory mode, and the new DCPMM is also in 100% Memory mode (the
factory default), the only action is to ensure that all DCPMMs are at the latest, matching firmware level.
• If the existing configuration is fully or partly in App-Direct mode and new DCPMM is also in App-Direct mode,
then ensure that all DCPMMs are at the latest matching firmware level and also re-provision the DCPMMs by
creating a new goal.
• If the existing configuration and the new DCPMM are in different modes, then ensure that all DCPMMs are at the
latest matching firmware level and also re-provision the DCPMMs by creating a new goal.
There are a number of tools for configuring goals, regions, and namespaces.
• To use the server's BIOS Setup Utility, see Server BIOS Setup Utility Menu for DCPMM, on page 118.
• To use Cisco IMC or Cisco UCS Manager, see the Cisco UCS: Configuring and Managing Intel Optane DC Persistent
Memory Modules guide.
Caution Potential data loss: If you change the mode of a currently installed DCPMM from App Direct to Memory
Mode, any data in persistent memory is deleted.
DCPMMs can be configured by using the server's BIOS Setup Utility, Cisco IMC, Cisco UCS Manager, or
OS-related utilities.
The server BIOS Setup Utility includes menus for DCPMMs. They can be used to view or configure DCPMM
regions, goals, and namespaces, and to update DCPMM firmware.
To open the BIOS Setup Utility, press F2 when prompted onscreen during a system boot.
The DCPMM menu is on the Advanced tab of the utility:
Advanced > Intel Optane DC Persistent Memory Configuration
• Update firmware
• Configure security
You can enable security mode and set a password so that the DCPMM configuration is locked.
When you set a password, it applies to all installed DCPMMs. Security mode is disabled by default.
• Configure data policy
• Regions: Displays regions and their persistent memory types. When using App Direct mode with
interleaving, the number of regions is equal to the number of CPU sockets in the server. When using
App Direct mode without interleaving, the number of regions is equal to the number of DCPMMs in the
server.
From the Regions page, you can configure memory goals that tell the DCPMM how to allocate resources.
• Create goal config
• Namespaces: Displays namespaces and allows you to create or delete them when persistent memory is
used. Namespaces can also be created when creating goals. A namespace provisioning of persistent
memory applies only to the selected region.
Existing namespace attributes such as the size cannot be modified. You can only add or delete namespaces.
• Total capacity: Displays the total DCPMM resource allocation across the server.
3. Select Update.
Note The Cisco IMC firmware does not include an out-of-band management interface for the M.2 drives
installed in the M.2 version of this mini-storage module (UCS-MSTOR-M2). The M.2 drives are not
listed in Cisco IMC inventory, nor can they be managed by Cisco IMC. This is expected behavior.
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server, on
page 68.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 4 Locate the mini-storage module carrier in its socket between PCIe risers 2 and 3.
Step 5 Using a Phillips screwdriver, loosen each of the captive screws and lift the M.2 riser out of the server.
Step 6 Remove a carrier from its socket:
a) Using a Phillips screwdriver, loosen the screw that holds the module to the carrier.
b) Push outward on the securing clips that hold each end of the carrier.
c) Lift both ends of the carrier to disengage it from the socket on the motherboard.
d) Set the carrier on an anti-static surface.
Step 7 Install a carrier to its socket:
a) Position the carrier over the socket, with the carrier's connector facing down. Two alignment pegs must match with two holes on the carrier.
b) Gently push down the socket end of the carrier so that the two pegs go through the two holes on the carrier.
c) Push down on the carrier so that the securing clips click over it at both ends.
Step 8 Replace the top cover to the server.
Step 9 Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
Step 1 Power off the server and then remove the mini-storage module carrier from the server as described in Replacing a
Mini-Storage Module Carrier, on page 120.
Step 2 Remove an M.2 SSD:
a) Use a #1 Phillips-head screwdriver to remove the single screw that secures the M.2 SSD to the carrier.
b) Remove the M.2 SSD from its socket on the carrier.
Step 3 Install a new M.2 SSD:
a) Insert the new M.2 SSD connector-end into the socket on the carrier with its label side facing up.
b) Press the M.2 SSD flat against the carrier.
c) Install the single screw that secures the end of the M.2 SSD to the carrier.
Step 4 Install the mini-storage module carrier back into the server and then power it on as described in Replacing a Mini-Storage
Module Carrier, on page 120.
Warning There is danger of explosion if the battery is replaced incorrectly. Replace the battery only with the same
or equivalent type recommended by the manufacturer. Dispose of used batteries according to the
manufacturer’s instructions.
[Statement 1015]
Warning Recyclers: Do not shred the battery! Make sure you dispose of the battery according to appropriate
regulations for your country or locale.
The real-time clock (RTC) battery retains system settings when the server is disconnected from power. The
battery type is CR2032. Cisco supports the industry-standard CR2032 battery, which can be purchased from
most electronic stores.
a) Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server,
on page 68.
b) Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
c) Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
d) Remove PCIe riser 1 from the server to provide clearance to the RTC battery socket that is on the motherboard. See
Replacing a PCIe Riser, on page 127.
e) Locate the horizontal RTC battery socket.
f) Remove the battery from the socket on the motherboard. Gently pry the securing clip to the side to provide clearance,
then lift up on the battery.
Step 2 Install a new RTC battery:
a) Insert the battery into its socket and press down until it clicks in place under the clip.
Note The positive side of the battery marked “3V+” should face up.
b) Replace PCIe riser 1 to the server. See Replacing a PCIe Riser, on page 127.
c) Replace the top cover to the server.
d) Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
This section includes procedures for replacing AC and DC power supply units.
Caution Do not mix PSU types in the same server. Both PSUs must be the same type and wattage.
Wattage | Servers Supported | Notes
1050 W AC | All UCS C240 M6 models | One power supply is mandatory; one more can be added for 1+1 redundancy, as long as both power supplies are the same.
1050 W DC | All UCS C240 M6 models | One power supply is mandatory; one more can be added for 1+1 redundancy, as long as both power supplies are the same.
1600 W AC | All UCS C240 M6 models | One power supply is mandatory; one more can be added for 1+1 redundancy, as long as both power supplies are the same.
2300 W AC | All UCS C240 M6 models | One power supply is mandatory; one more can be added for 1+1 redundancy, as long as both power supplies are the same.
Note If you have ordered a server with power supply redundancy (two power supplies), you do not have to
power off the server to replace a power supply because they are redundant as 1+1.
Note Do not mix power supply types or wattages in the server. Both power supplies must be identical.
Caution DO NOT interchange power supplies of Cisco UCS C240 M5 servers and Cisco UCS C240 SD M5
servers with the power supplies of the Cisco UCS C240 M6 server.
Step 1 Remove the power supply that you are replacing or a blank panel from an empty bay:
a) Perform one of the following actions:
• If your server has only one power supply, shut down and remove power from the server as described in Shutting
Down and Removing Power From the Server, on page 68.
• If your server has two power supplies, you do not have to shut down the server.
b) Remove the power cord from the power supply that you are replacing.
c) Grasp the power supply handle while pinching the release lever toward the handle.
d) Pull the power supply out of the bay.
Step 2 Install a new power supply:
a) Grasp the power supply handle and insert the new power supply into the empty bay.
b) Push the power supply into the bay until the release lever locks.
c) Connect the power cord to the new power supply.
d) Only if you shut down the server, press the Power button to boot the server to main power mode.
Note This procedure is for replacing DC power supplies in a server that already has DC power supplies
installed. If you are installing DC power supplies to the server for the first time, see Installing DC Power
Supplies (First Time Installation), on page 125.
Warning A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.
Statement 1022
Warning This product requires short-circuit (overcurrent) protection, to be provided as part of the building
installation. Install only in accordance with national and local wiring regulations.
Statement 1045
Warning Installation of the equipment must comply with local and national electrical codes.
Statement 1074
Note If you are replacing DC power supplies in a server with power supply redundancy (two power supplies),
you do not have to power off the server to replace a power supply because they are redundant as 1+1.
Note Do not mix power supply types or wattages in the server. Both power supplies must be identical.
Step 1 Remove the DC power supply that you are replacing or a blank panel from an empty bay:
a) Perform one of the following actions:
• If you are replacing a power supply in a server that has only one DC power supply, shut down and remove power
from the server as described in Shutting Down and Removing Power From the Server, on page 68.
• If you are replacing a power supply in a server that has two DC power supplies, you do not have to shut down
the server.
b) Remove the power cord from the power supply that you are replacing. Lift the connector securing clip slightly and
then pull the connector from the socket on the power supply.
c) Grasp the power supply handle while pinching the release lever toward the handle.
d) Pull the power supply out of the bay.
Step 2 Install a new DC power supply:
a) Grasp the power supply handle and insert the new power supply into the empty bay.
b) Push the power supply into the bay until the release lever locks.
c) Connect the power cord to the new power supply. Press the connector into the socket until the securing clip clicks
into place.
d) Only if you shut down the server, press the Power button to boot the server to main power mode.
Figure 26: Replacing DC Power Supplies
Note This procedure is for installing DC power supplies to the server for the first time. If you are replacing
DC power supplies in a server that already has DC power supplies installed, see Replacing DC Power
Supplies, on page 124.
Warning A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.
Statement 1022
Warning This product requires short-circuit (overcurrent) protection, to be provided as part of the building
installation. Install only in accordance with national and local wiring regulations.
Statement 1045
Warning Installation of the equipment must comply with local and national electrical codes.
Statement 1074
Note Do not mix power supply types or wattages in the server. Both power supplies must be identical.
Caution As instructed in the first step of this wiring procedure, turn off the DC power source from your facility’s
circuit breaker to avoid electric shock hazard.
Step 1 Turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.
Note The required DC input cable is Cisco part CAB-48DC-40A-8AWG. This 3-meter cable has a 3-pin connector
on one end that is keyed to the DC input socket on the power supply. The other end of the cable has no connector
so that you can wire it to your facility’s DC power.
Step 2 Wire the non-terminated end of the cable to your facility’s DC power input source.
Step 3 Connect the terminated end of the cable to the socket on the power supply. The connector is keyed so that the wires align
for correct polarity and ground.
Step 4 Restore DC power at your facility’s circuit breaker.
Step 5 Press the Power button to boot the server to main power mode.
Figure 27: Replacing DC Power Supplies
Step 6 See Grounding for DC Power Supplies, on page 126 for information about additional chassis grounding.
When using a DC power supply, additional grounding of the server chassis to the earth ground of the rack is
available. Two screw holes for use with your dual-hole grounding lug and grounding wire are supplied on the
chassis rear panel.
Note The grounding points on the chassis are sized for 10-32 screws. You must provide your own screws, grounding lug, and grounding wire. The grounding lug must be a dual-hole lug that fits 10-32 screws. The grounding cable that you provide must be 14 AWG (2 mm), minimum 60°C wire, or as permitted by the local code.
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server, on
page 68.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 4 Remove the PCIe riser that you are replacing:
a) Grasp the flip-up handle on the riser and the blue forward edge, and then lift up evenly to disengage its circuit board
from the socket on the motherboard. Set the riser on an antistatic surface.
b) If the riser has a card installed, remove the card from the riser. See Replacing a PCIe Card, on page 129.
Step 5 Install a new PCIe riser:
Note The PCIe risers are not interchangeable. If you plug a PCIe riser into the wrong socket, the server will not boot.
Riser 1 must plug into the motherboard socket labeled “RISER1.” Riser 2 must plug into the motherboard
socket labeled “RISER2.”
a) If you removed a card from the old PCIe riser, install the card to the new riser. See Replacing a PCIe Card, on page
129.
b) Position the PCIe riser over its socket on the motherboard and over its alignment slots in the chassis.
c) Carefully push down on both ends of the PCIe riser to fully engage its circuit board connector with the socket on the
motherboard.
Step 6 Replace the top cover to the server.
Step 7 Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
Note Cisco supports all PCIe cards qualified and sold by Cisco. PCIe cards not qualified or sold by Cisco are
the responsibility of the customer. Although Cisco will always stand behind and support the C-Series
rack-mount servers, customers using standard, off-the-shelf, third-party cards must go to the third-party
card vendor for support if any issue with that particular card occurs.
• LFF server—Slots 1 (Reserved), 2 (x4), and 3 (x4). All slots are controlled by CPU 1.
• Riser 2 contains PCIe slots 4, 5 and 6 and is available in the following different options:
• SFF server, I/O-Centric—Slots 4 (x8), 5 (x16), and 6 (x8). All slots are controlled by CPU 2.
• SFF server, Storage Centric—Slots 4, 5, and 6 do not support storage devices in the SFF model of
the server.
• LFF server—Slots 4 (x8), 5 (x16), and 6 (x8). All slots are controlled by CPU 2.
• Riser 3 contains PCIe slots 7 and 8 and is available in the following different options:
• SFF server, I/O-Centric—Slots 7 (x8) and 8 (x8) for SATA/SAS models. Slots 7 and 8 are controlled
by CPU 2 for SATA/SAS servers.
Slots 7 and 8 are not supported for NVMe-only models.
• SFF server, Storage Centric—Slots 7 (x4) and 8 (x4) for drive bays in 24-drive and 12-drive SAS/SATA versions of the server. All slots are controlled by CPU 2.
Slots 7 and 8 are not supported for NVMe-only models.
• LFF server—Slots 7 (x4) and 8 (x4) for drive bays. All slots are controlled by CPU 2.
Note If you are installing a Cisco UCS Virtual Interface Card, there are prerequisite considerations. See Cisco
Virtual Interface Card (VIC) Considerations, on page 130.
Note RAID controller cards install into a dedicated motherboard socket. See Replacing a SAS Storage
Controller Card (RAID or HBA), on page 133.
Note For instructions on installing or replacing double-wide GPU cards, see GPU Installation, on page 191.
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server, on
page 68.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 4 Remove the PCIe card that you are replacing:
a) Remove any cables from the ports of the PCIe card that you are replacing.
b) Use two hands to flip up and grasp the blue riser handle and the blue finger grip area on the front edge of the riser,
and then lift straight up.
c) On the bottom of the riser, push the release latch that holds the securing plate, and then swing the hinged securing
plate open.
d) Open the hinged card-tab retainer that secures the rear-panel tab of the card.
e) Pull evenly on both ends of the PCIe card to remove it from the socket on the PCIe riser.
If the riser has no card, remove the blanking panel from the rear opening of the riser.
c) Ensure that the card’s rear panel tab sits flat against the riser rear-panel opening and then close the hinged card-tab
retainer over the card’s rear-panel tab.
d) Swing the hinged securing plate closed on the bottom of the riser. Ensure that the clip on the plate clicks into the
locked position.
e) Position the PCIe riser over its socket on the motherboard and over the chassis alignment channels.
f) Carefully push down on both ends of the PCIe riser to fully engage its connector with the sockets on the motherboard.
Step 6 Replace the top cover to the server.
Step 7 Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
Figure 29: PCIe Riser Card Securing Mechanisms
Note If you use the Cisco Card NIC mode, you must also make a VIC Slot setting that matches where your
VIC is installed. The options are Riser1, Riser2, and Flex-LOM. See NIC Mode and NIC Redundancy
Settings, on page 56 for more information about NIC modes.
If you want to use the Cisco UCS VIC card for Cisco UCS Manager integration, see also the Cisco UCS
C-Series Server Integration with Cisco UCS Manager Guides for details about supported configurations,
cabling, and other requirements.
VIC | How Many Supported in Server | Slots That Support VICs | Primary Slot For Cisco UCS Manager Integration | Primary Slot For Cisco Card NIC Mode | Minimum Cisco IMC Firmware
• A total of 3 VICs are supported in the server: 2 PCIe style, and 1 mLOM style.
Note Single wire management is supported on only one VIC at a time. If multiple
VICs are installed on a server, only one slot has NCSI enabled at a time.
For single wire management, priority goes to the MLOM slot, then slot 2,
then slot 5 for NCSI management traffic. When multiple cards are installed,
connect the single-wire management cables in the priority order mentioned
above.
• The primary slot for a VIC card in PCIe riser 1 is slot 2. The secondary slot for a VIC card in PCIe riser
1 is slot 1.
Note The NCSI protocol is supported in only one slot at a time in each riser. If
a GPU card is present in slot 2, NCSI automatically shifts from slot 2 to
slot 1.
• The primary slot for a VIC card in PCIe riser 2 is slot 5. The secondary slot for a VIC card in PCIe riser
2 is slot 4.
Note The NCSI protocol is supported in only one slot at a time in each riser. If
a GPU card is present in slot 5, NCSI automatically shifts from slot 5 to
slot 4.
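One plausible way to express the slot-priority rules above in code is sketched below. It is illustrative only: the function name and inputs are invented, and the relative priority of a riser's secondary slot versus the other riser's primary slot is an assumption drawn from the MLOM, slot 2, slot 5 ordering described above.
def ncsi_slot(vic_slots: set[str], gpu_slots: set[int]) -> str | None:
    """vic_slots holds entries such as 'mLOM', 'slot1', 'slot2', 'slot4', 'slot5'."""
    if "mLOM" in vic_slots:
        return "mLOM"
    # Riser 1: primary slot 2, secondary slot 1. Riser 2: primary slot 5, secondary slot 4.
    for primary, secondary in ((2, 1), (5, 4)):
        # NCSI lives on the riser's primary slot unless a GPU occupies it,
        # in which case it shifts to the secondary slot.
        candidate = secondary if primary in gpu_slots else primary
        if f"slot{candidate}" in vic_slots:
            return f"slot{candidate}"
    return None

print(ncsi_slot({"slot1", "slot5"}, gpu_slots={2}))  # slot1 -- NCSI shifted from slot 2 to slot 1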
Note If your mLOM card is a Cisco UCS Virtual Interface Card (VIC), see Cisco Virtual Interface Card (VIC)
Considerations, on page 130 for more information and support details.
Note For servers running in standalone mode only: After you replace controller hardware
(UCSC-RAID-M6T, UCSC-RAID-M6HD, UCSC-RAID-M6SD, UCSC-SAS-M6T, or
UCSC-SAS-M6HD), you must run the Cisco UCS Host Upgrade Utility (HUU) to update the controller
firmware, even if the firmware Current Version is the same as the Update Version. Running HUU is
necessary to program any controller specific values to the storage controller for the specific server. If
you do not run HUU, the storage controller may not be discovered.
See the HUU guide for your Cisco IMC release for instructions on downloading and using the utility to bring
server components to compatible levels: HUU Guides.
b) For each Storage controller card, grasp the connector for the rear-drive cable, and disconnect it from the card.
You can leave the other end of the rear-drive cable attached.
1 SAS cable connection on Storage controller card
2 SAS cable that connects to rear drives in Riser 3B
5 Ribbon cables connecting Storage controller cards to motherboard
6 SAS cable that connects to rear drives in Riser 1B
d) Grasp each card tray by the handle and lift the Storage controller cards out of the chassis.
What to do next
Reinsert the dual Storage controller cards. Go to Installing the Dual Storage Controller Cards, on page 137.
d) Holding the card tray by the handle, keep the tray level and lower it into the
server.
e) Using a #2 Phillips head screwdriver, tighten the screws at the edges of each tray.
f) Gently push the handle of the tray towards the front of the server.
This step seats each Storage controller card into its socket on the interior wall. You might feel some resistance as the
card meets the socket. This resistance is normal.
What to do next
Perform other maintenance tasks, if needed, or replace the top cover and restore facility power.
1 Storage controller card connector for rear drives (Riser 3B)
2 SAS/SATA cable for rear drives
c) Using both hands, grasp the tray's handle, and keeping the Storage controller card tray level, lift it out of the chassis.
What to do next
Reinsert the Storage controller card. Go to Installing the Storage Controller Card, on page 144.
e) Using a #2 Phillips head screwdriver, tighten the screws at the edges of the tray.
f) Using both hands, make sure to apply equal pressure to both sides of the handle, and gently push the handle of the
tray towards the front of the server.
This step seats the Storage controller card into its sockets on the interior wall. You might feel some resistance as the
card meets the socket. This resistance is normal.
What to do next
Perform other maintenance tasks, if needed, or replace the top cover and restore facility power.
Verify Cabling
After installing a Storage controller card, the cabling between the card(s) and rear drives should be as follows.
• For a 24-drive server, verify the following:
• the SAS/SATA cable is connected to the controller card and Riser 3B
• the SAS/SATA cable is connected to the controller card and Riser 1B
• both ribbon cables are connected to the controller card and the motherboard
Note The only version of this server that supports the SATA interposer card is the SFF, 12-drive version (UCSC-C240-M6S).
For software-based storage control for the front-loading drives, the server requires a SATA interposer card that plugs into a dedicated socket on the motherboard (the same socket used for SAS storage controllers). The interposer card sits between the front-loading drive bays and the fan tray and supports up to eight SATA drives (in bays 1 through 8).
Note You cannot use a hardware RAID controller card and the embedded software RAID controller to control
front drives at the same time. See Storage Controller Considerations, on page 183 for details about RAID
support.
a) Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server,
on page 68.
b) Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
c) Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 2 Remove any existing SATA interposer card from the server:
Note A SATA interposer card for this server is preinstalled inside a carrier-frame that helps to secure the card to the
inner chassis wall. You do not have to remove this carrier frame from the existing card.
d) Making sure that the card is disconnected from its socket, grasp the handle and lift straight up to remove the interposer
from the server.
d) Keeping the card level, push the handle forward to seat the interposer card into its socket on the interior wall.
e) Using a #2 Phillips screwdriver, tighten the captive screws.
The Supercap provides approximately three years of backup for the disk write-back cache DRAM in the case
of a sudden power loss by offloading the cache to the NAND flash.
Note The Cisco Boot-Optimized M.2 RAID Controller is not supported when the server is used as a
compute-only node in Cisco HyperFlex configurations.
• The minimum version of Cisco IMC and Cisco UCS Manager that supports this controller is 4.0(4).
• This controller supports RAID 1 (single volume) and JBOD mode.
• A SATA M.2 drive in slot 1 (the top) is the first SATA device; a SATA M.2 drive in slot 2 (the underside)
is the second SATA device.
• The name of the controller in the software is MSTOR-RAID.
• A drive in Slot 1 is mapped as drive 253; a drive in slot 2 is mapped as drive 254.
• When using RAID, we recommend that both SATA M.2 drives are the same capacity. If different capacities are used, the smaller capacity of the two drives is used to create a volume and the rest of the drive space is unusable (see the capacity sketch after this list).
JBOD mode supports mixed-capacity SATA M.2 drives.
• Hot-plug replacement is not supported. The server must be powered off.
• Monitoring of the controller and installed SATA M.2 drives can be done using Cisco IMC and Cisco
UCS Manager. They can also be monitored using other utilities such as UEFI HII, PMCLI, XMLAPI,
and Redfish.
• Updating firmware of the controller and the individual drives:
• For standalone servers, use the Cisco Host Upgrade Utility (HUU). Refer to the HUU Documentation.
• For servers integrated with Cisco UCS Manager, refer to the Cisco UCS Manager Firmware
Management Guide.
• The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not supported.
• If you replace a single SATA M.2 drive that was part of a RAID volume, rebuild of the volume is
auto-initiated after the user accepts the prompt to import the configuration. If you replace both drives of
a volume, you must create a RAID volume and manually reinstall any OS.
• We recommend that you erase drive contents before creating volumes on used drives from another server.
The configuration utility in the server BIOS includes a SATA secure-erase function.
• The server BIOS includes a configuration utility specific to this controller that you can use to create and
delete RAID volumes, view controller properties, and erase the physical drive contents. Access the utility
by pressing F2 when prompted during server boot. Then navigate to Advanced > Cisco Boot Optimized
M.2 RAID Controller.
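As referenced in the considerations above, two of the controller behaviors are easy to capture in a short sketch: the fixed slot-to-drive-number mapping and the RAID 1 capacity rule for mismatched drives. The names below are assumptions for illustration only.
# Slot 1 (top) maps to drive 253; slot 2 (underside) maps to drive 254.
SLOT_TO_DRIVE_ID = {1: 253, 2: 254}

def raid1_usable_gb(slot1_gb: int, slot2_gb: int) -> int:
    """RAID 1 mirrors, so the usable capacity is the smaller of the two drives."""
    return min(slot1_gb, slot2_gb)

print(SLOT_TO_DRIVE_ID[1])        # 253
print(raid1_usable_gb(240, 960))  # 240 -- the remaining 720 GB is unusable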
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server, on
page 68.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 4 Remove a controller from its motherboard socket:
a) Locate the controller in its socket between PCIe risers 2 and 3.
Figure 30: Cisco Boot-Optimized M.2 RAID Controller on Motherboard
b) Using a #2 Phillips screwdriver, loosen the captive screws and remove the M.2 module.
c) At each end of the controller board, push outward on the clip that secures the carrier.
d) Lift both ends of the controller to disengage it from the carrier.
a) Use a #1 Phillips-head screwdriver to remove the single screw that secures the M.2 drive to the carrier.
b) Lift the M.2 drive from its socket on the carrier.
c) Position the replacement M.2 drive over the socket on the controller board.
d) Angle the M.2 drive downward and insert the connector-end into the socket on the carrier. The M.2 drive's label must
face up.
e) Press the M.2 drive flat against the carrier.
f) Install the single screw that secures the end of the M.2 SSD to the carrier.
g) Turn the controller over and install the second M.2 drive.
Figure 31: Cisco Boot-Optimized M.2 RAID Controller, Showing M.2 Drive Installation
c) Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 2 Remove an existing intrusion switch:
a) Disconnect the intrusion switch cable from the socket on the motherboard.
b) Use a #1 Phillips-head screwdriver to loosen and remove the single screw that holds the switch mechanism to the
chassis wall.
c) Slide the switch mechanism straight up to disengage it from the clips on the chassis.
Step 3 Install a new intrusion switch:
a) Slide the switch mechanism down into the clips on the chassis wall so that the screw holes line up.
b) Use a #1 Phillips-head screwdriver to install the single screw that secures the switch mechanism to the chassis wall.
c) Connect the switch cable to the socket on the motherboard.
Step 4 Replace the cover to the server.
Step 5 Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
TPM Considerations
• This server supports either TPM version 1.2 or TPM version 2.0.
• Field replacement of a TPM is not supported; you can install a TPM after-factory only if the server does
not already have a TPM installed.
• If there is an existing TPM 1.2 installed in the server, you cannot upgrade to TPM 2.0. If there is no
existing TPM in the server, you can install TPM 2.0.
• If the TPM 2.0 becomes unresponsive, reboot the server.
Note Field replacement of a TPM is not supported; you can install a TPM after-factory only if the server does
not already have a TPM installed.
This topic contains the following procedures, which must be followed in this order when installing and enabling
a TPM:
1. Installing the TPM Hardware
2. Enabling the TPM in the BIOS
3. Enabling the Intel TXT Feature in the BIOS
Note For security purposes, the TPM is installed with a one-way screw. It cannot be removed with a standard
screwdriver.
Note You must set a BIOS Administrator password before performing this procedure. To set this password,
press the F2 key when prompted during system boot to enter the BIOS Setup utility. Then navigate to
Security > Set Administrator Password and enter the new password twice as prompted.
Step 1 Reboot the server and watch for the prompt to press F2.
Step 2 When prompted, press F2 to enter the BIOS Setup utility.
Step 3 Verify that the prerequisite BIOS values are enabled:
a) Choose the Advanced tab.
b) Choose Intel TXT(LT-SX) Configuration to open the Intel TXT(LT-SX) Hardware Support window.
c) Verify that the following items are listed as Enabled:
• VT-d Support (default is Enabled)
• VT Support (default is Enabled)
• TPM Support
• TPM State
Note For Recyclers Only! This procedure is not a standard field-service option. This procedure is for recyclers
who will be reclaiming the electronics for proper disposal to comply with local eco design and e-waste
regulations.
To remove the printed circuit board assembly (PCBA), the following requirements must be met:
• The server must be disconnected from facility power.
• The server must be removed from the equipment rack.
• The server's top cover must be removed. See Removing the Server Top Cover, on page 69.
Figure 32: Screw Locations for Removing the UCS C240 M6 PCBA
2 Boot Cisco IMC from alternate image: CN3 pins 1-2
Default: Open. Place the jumper shunt over the pins to close the circuit.
3 System Firmware Secure Erase: CN3 pins 3-4
Default: Open. Place the jumper shunt over the pins to close the circuit.
6 Recover BIOS: SW12 switch 5
Default setting: Off. Recovery mode: On.
7 Clear CMOS: SW12 switch 9
Default setting: Off. Recovery mode: On. Gently push the switch to the right, which is the On position.
Caution Clearing the CMOS removes any customized settings and might result in data loss. Make a note of any
necessary customized settings in the BIOS before you use this clear CMOS procedure.
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server,
on page 68.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 4 Using your finger, gently push the SW12 switch 9 to the side marked ON.
Step 5 Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode,
indicated when the Power LED on the front panel is amber.
Step 6 Return the server to main power mode by pressing the Power button on the front panel. The server is in main power
mode when the Power LED is green.
Note You must allow the entire server to reboot to main power mode to complete the reset. The state of the switch
cannot be determined without the host CPU running.
Step 7 Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the
server to remove all power.
Step 8 Remove the top cover from the server.
Step 9 Using your finger, gently push switch 9 to its original position (OFF).
Note If you do not reset the switch to its original position (OFF), the CMOS settings are reset to the defaults every
time you power-cycle the server.
Step 10 Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the
server by pressing the Power button.
Note As indicated by the message shown above, there are two procedures for recovering the BIOS. Try
procedure 1 first. If that procedure does not recover the BIOS, use procedure 2.
Step 1 Download the BIOS update package and extract it to a temporary location.
Step 2 Copy the contents of the extracted recovery folder to the root directory of a USB drive. The recovery folder contains the
bios.cap file that is required in this procedure.
Note The bios.cap file must be in the root directory of the USB drive. Do not rename this file. The USB drive must
be formatted with either the FAT16 or FAT32 file system.
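If you prepare the drive on a Linux workstation, the steps might look like the following sketch; the device name /dev/sdb1 is an assumption, so confirm it with lsblk before formatting:
lsblk                                # identify the USB drive partition (assumed /dev/sdb1 here)
sudo mkfs.vfat -F 32 /dev/sdb1       # format the partition as FAT32
sudo mount /dev/sdb1 /mnt
sudo cp recovery/bios.cap /mnt/      # bios.cap must be in the root directory; do not rename it
sudo umount /mnt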
Step 3 Insert the USB drive into a USB port on the server.
Step 4 Reboot the server.
Step 5 Return the server to main power mode by pressing the Power button on the front panel.
The server boots with the updated BIOS boot block. When the BIOS detects a valid bios.cap file on the USB drive, it
displays this message:
Found a valid recovery file...Transferring to Cisco IMC
System would flash the BIOS image now...
System would restart with recovered image after a few seconds...
Step 6 Wait for the server to complete the BIOS update, and then remove the USB drive from the server.
Note During the BIOS update, Cisco IMC shuts down the server and the screen goes blank for about 10 minutes.
Do not unplug the power cords during this update. Cisco IMC powers on the server after the update is complete.
Using the BIOS Recovery Switch (SW12, Switch 5) and bios.cap File
You can use this switch to make the server boot from a recovery BIOS.
You will find it helpful to refer to the location of the SW12 switch block. See Service Headers and Jumpers, on page
165.
Step 1 Download the BIOS update package and extract it to a temporary location.
Step 2 Copy the contents of the extracted recovery folder to the root directory of a USB drive. The recovery folder contains
the bios.cap file that is required in this procedure.
Note The bios.cap file must be in the root directory of the USB drive. Do not rename this file. The USB drive must
be formatted with either the FAT16 or FAT32 file system.
Step 3 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server,
on page 68. Disconnect power cords from all power supplies.
Step 4 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 5 Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 6 Using your finger, gently slide the SW12 switch 5 to the ON position.
Step 7 Reconnect AC power cords to the server. The server powers up to standby power mode.
Step 8 Insert the USB thumb drive that you prepared in Step 2 into a USB port on the server.
Step 9 Return the server to main power mode by pressing the Power button on the front panel.
The server boots with the updated BIOS boot block. When the BIOS detects a valid bios.cap file on the USB drive, it
displays this message:
Found a valid recovery file...Transferring to Cisco IMC
System would flash the BIOS image now...
System would restart with recovered image after a few seconds...
Step 10 Wait for the server to complete the BIOS update, and then remove the USB drive from the server.
Note During the BIOS update, Cisco IMC shuts down the server and the screen goes blank for about 10 minutes.
Do not unplug the power cords during this update. Cisco IMC powers on the server after the update is complete.
Step 11 After the server has fully booted, power off the server again and disconnect all power cords.
Step 12 Using your finger, gently slide the switch back to its original position (OFF).
Note If you do not reset the switch to its original position (OFF), after recovery completion you see the prompt,
“Please remove the recovery jumper.”
Step 13 Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the
server by pressing the Power button.
Using the Clear BIOS Password Switch (SW12, Switch 6)
You can use this switch to clear the BIOS password.
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server,
on page 68. Disconnect power cords from all power supplies.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 4 Using your finger, gently slide the SW12 switch 6 to the ON position.
Step 5 Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode,
indicated when the Power LED on the front panel is amber.
Step 6 Return the server to main power mode by pressing the Power button on the front panel. The server is in main power
mode when the Power LED is green.
Note You must allow the entire server to reboot to main power mode to complete the reset. The state of the switch
cannot be determined without the host CPU running.
Step 7 Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the
server to remove all power.
Step 8 Remove the top cover from the server.
Step 9 Reset the switch to its original position (OFF).
Note If you do not reset the switch to its original position (OFF), the BIOS password is cleared every time you
power-cycle the server.
Step 10 Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the
server by pressing the Power button.
Using the Boot Alternate Cisco IMC Image Header (CN3, Pins 1-2)
You can use this Cisco IMC debug header to force the system to boot from an alternate Cisco IMC image.
You will find it helpful to refer to the location of the CN3 header. See Service Headers and Jumpers, on page
165.
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server,
on page 68. Disconnect power cords from all power supplies.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 4 Install a two-pin jumper across CN3 pins 1 and 2.
Step 5 Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode,
indicated when the Power LED on the front panel is amber.
Step 6 Return the server to main power mode by pressing the Power button on the front panel. The server is in main power
mode when the Power LED is green.
Note When you next log in to Cisco IMC, you see a message similar to the following:
'Boot from alternate image' debug functionality is enabled.
CIMC will boot from alternate image on next reboot or input power cycle.
Note If you do not remove the jumper, the server will boot from an alternate Cisco IMC image every time that you
power cycle the server or reboot Cisco IMC.
Step 7 To remove the jumper, press the Power button to shut down the server to standby power mode, and then remove AC
power cords from the server to remove all power.
Step 8 Remove the top cover from the server.
Step 9 Remove the jumper that you installed.
Step 10 Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the
server by pressing the Power button.
Using the System Firmware Secure Erase Header (CN3, Pins 3-4)
You can use this header to securely erase system firmware from the server.
You will find it helpful to refer to the location of the CN3 header. See Service Headers and Jumpers, on page
165.
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server,
on page 68. Disconnect power cords from all power supplies.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 4 Install a two-pin jumper across CN3 pins 3 and 4.
Step 5 Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode,
indicated when the Power LED on the front panel is amber.
Step 6 Return the server to main power mode by pressing the Power button on the front panel. The server is in main power
mode when the Power LED is green.
Note You must allow the entire server to reboot to main power mode to complete the reset. The state of the jumper
cannot be determined without the host CPU running.
Step 7 Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the
server to remove all power.
Step 8 Remove the top cover from the server.
Step 9 Remove the jumper that you installed.
Step 10 Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the
server by pressing the Power button.
Server Specifications
This appendix lists the physical, environmental, and power specifications for the server.
• Physical Specifications, on page 173
• Environmental Specifications, on page 175
• Power Specifications, on page 176
Physical Specifications
The following figure shows the height, width, and depth of the chassis as measured to different locations.
The following table lists additional physical specifications for the server versions.
Environmental Specifications
As a Class A2 product, the server has the following environmental specifications.
Description Specification
Temperature, Extended Operating 5°C to 40°C (41°F to 104°F) with no direct sunlight.
Humidity condition: uncontrolled, not to exceed 50% RH starting condition.
Derate the maximum temperature by 1°C (1.8°F) per every 305 meters of altitude above 900 m.
Humidity (RH), operating 10% to 90% and 28°C (82.4°F) maximum dew-point temperature, non-condensing environment.
Minimum to be the higher (more moisture) of -12°C (10.4°F) dew point or 8% relative humidity.
Maximum to be 24°C (75.2°F) dew point or 90% relative humidity.
Humidity (RH), non-operating (when the server is stored or transported) 5% to 93% relative humidity, non-condensing, with a maximum wet-bulb temperature of 28°C across the 20°C to 40°C dry-bulb range.
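For example, at 2,120 m of altitude the server is 1,220 m above the 900 m threshold, so the maximum extended operating temperature derates by 1,220 / 305 = 4°C, from 40°C to 36°C.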
Power Specifications
Note Do not mix power supply types or wattages in the server. Both power supplies must be identical.
You can get more specific power information for your exact server configuration by using the Cisco UCS
Power Calculator:
http://ucspowercalc.cisco.com
The power specifications for the supported power supply options are listed in the following sections.
Note For the 80PLUS platinum certification documented in the following table, you can find test results at
https://www.clearesult.com/80plus/.
This section lists the specifications for each 2300 W AC power supply (Cisco part number UCSC-PSU1-2300).
Parameter Specification
Maximum Input at Nominal Input Voltage (W): 1338 / 1330 / 2490 / 2480
Maximum Input at Nominal Input Voltage (VA): 1351 / 1343 / 2515 / 2505
Note Only the approved power cords or jumper power cords listed below are supported.
R2XX-DMYMPWRCORD (NA / NA) No power cord; PID option for ordering the server with no power cord
Note For SFF, 12-drives version only: Do not mix controller types in the server. Do not use the embedded
SATA controller and a hardware-based RAID controller card to control front-loading drives at the same
time. This combination is not supported and could result in data loss.
This server supports the RAID and HBA controller options and cable requirements shown in the following
table.
Storage Adapter (PID) Product Name Supported Server Maximum Number of Drives Supported Supported RAID Type Cache Size (GB)
Note For servers running in standalone mode only: After you replace controller hardware
(UCSC-RAID-M6T, UCSC-RAID-M6HD, UCSC-RAID-M6SD, UCSC-SAS-M6T, or
UCSC-SAS-M6HD), you must run the Cisco UCS Host Upgrade Utility (HUU) to update the controller
firmware, even if the firmware Current Version is the same as the Update Version. Running HUU is
necessary to program any controller-specific values to the storage controller for the specific server. If
you do not run HUU, the storage controller may not be discovered.
See the HUU guide for your Cisco IMC release for instructions on downloading and using the utility to bring
server components to compatible levels: HUU Guides.
Cisco M6 12G Modular SAS RAID Controller or HBA For Up To 16 Drives (UCSC-RAID-M6T)
The drive support differs by server version, as described in the following sections. These controllers are
supported only in these server versions:
• SFF 12-Drives, SAS/SATA
• SFF 12-Drives NVMe
Cisco 12G Modular SAS RAID Controller or HBA For Up To 28 Drives (UCSC-RAID-M6SD)
This controller is supported only in these server versions:
• SFF 24-Drives SAS/SATA
• SFF 24-Drives NVMe
This HW RAID or HBA option can control up to 24 front-loading SAS/SATA drives in this server version,
plus 2 rear-loading SAS/SATA drives.
1. Connect a SAS/SATA cable from the small Slimline connector on the RAID card.
2. Connect a SAS/SATA cable to the Riser 3B connector on the PCIe Riser 3 cage.
3. Connect a SAS/SATA cable from the second small Slimline connector on the RAID card.
4. Connect a SAS/SATA cable to the Riser 1B connector on the PCIe Riser 1 cage.
Cisco 12G Modular SAS RAID Controller or HBA For Up To 32 Drives (UCSC-RAID-M6HD)
LFF 12-Drives
This HW RAID or HBA option can control up to 12 front-loading SAS/SATA drives in this server version,
plus 2 rear-loading SAS/SATA drives, and up to 4 optional mid-mount drives. This option is supported only
for the LFF drive version of the server.
To connect the RAID card to the front-loading drives, connect the split cable (Y cable) as follows:
1. Connect the single end (1) of the SAS/SATA cable to the RAID card.
2. Connect each of the dual-end connectors (2 and 3) to the two front backplane connectors.
To connect the RAID card to the front drives and the mid-mount drives:
1. Connect one end of the SAS/SATA cable from the RAID card to the midplane connector.
*The NVMe server supports only 2 double-wide GPUs or 4 single-wide GPUs since it supports only two
risers.
• All GPU cards must be procured from Cisco because of a unique SBIOS ID that is required by CIMC
and UCSM.
• Do not mix different brands or models of GPU cards in the server.
• NVIDIA Single Wide GPUs are supported:
• The GPUs can be populated in Riser 1A slots 2 (x16) and 3 (x8), Riser 2 slots 5 (x16) and 6 (x8),
and Riser 3C slot 7 (x16).
• Each server can support a maximum of five of these GPUs.
• GPUs are not supported in Riser 1B or Riser 3B. Riser 3B cannot mechanically accept a GPU.
• The UCSC-C240-M6S and UCSC-C240-M6SX servers support one full-height, full-length, double-wide
GPU (PCIe slot 7 only) in Riser 3C.
• The UCSC-C240-M6N and UCSC-C240-M6SN servers do not support any GPU in slot 7 of Riser 3C.
• UCSM managed servers are discoverable only if a PCIe VIC card is installed in slot 1 or slot 4 or an
mLOM VIC card is installed in the mLOM slot. If you install double-width GPUs, they must be located
in slots 2, 5, or 7. Therefore, if two GPUs are installed, UCSM managed servers are discoverable only
if you install a VIC in slot 1, slot 4, or the mLOM slot. The server can support 2 PCIe VICs and 1 mLOM
VIC along with 2 or 3 GPUs.
• Use the UCS power calculator at the following link to determine the power needed based on your server
configuration: http://ucspowercalc.cisco.com
If you need to change this setting, enter the BIOS Setup Utility by pressing F2 when prompted during
bootup.
• If the server is integrated with Cisco UCS Manager and is controlled by a service profile, this setting is
enabled by default in the service profile when a GPU is present.
To change this setting manually, use the following procedure.
Step 1 Refer to the Cisco UCS Manager configuration guide (GUI or CLI) for your release for instructions on configuring service
profiles:
Cisco UCS Manager Configuration Guides
Step 2 Refer to the chapter on Configuring Server-Related Policies > Configuring BIOS Settings.
Step 3 In the section of your profile for PCI Configuration BIOS Settings, set Memory Mapped IO Above 4GB Config to one of
the following:
• Disabled—Does not map 64-bit PCI devices to 64 GB or greater address space.
• Enabled—Maps I/O of 64-bit PCI devices to 64 GB or greater address space.
• Platform Default—The policy uses the value for this attribute contained in the BIOS defaults for the server. Use
this only if you know that the server BIOS is set to use the default enabled setting for this item.
UCSC-RIS1A-240M6 Riser 1A
UCSC-RIS2A-240M6 Riser 2A
UCSC-RIS3C-240M6 Riser 3C
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server, on
page 68.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 4 Remove the single-wide GPU card that you are replacing:
a) Use two hands to flip up and grasp the blue riser handle and the blue finger grip area on the front edge of the riser,
and then lift straight up.
b) On the bottom of the riser, push the release latch that holds the securing plate, and then swing the hinged securing
plate open.
c) Open the hinged card-tab retainer that secures the rear-panel tab of the card.
d) Disconnect the Y cable (UCSC-CBL-240M6) from the GPU ends (the black 8-pin connectors).
If you are inserting another GPU, you can leave the opposite end of the cable (the white connector) attached. If you
are not inserting another GPU, you can leave the cable in place or completely remove it.
e) Pull evenly on both ends of the single-wide GPU card to remove it from the socket on the PCIe riser.
If the riser has no card, remove the blanking panel from the rear opening of the riser.
Ambient temperature limits: Rear HDD 30°C; GPU A100 and GPU A10 28°C.
The NVIDIA GPU card might be shipped with two power cables: a straight cable and a Y-cable. The straight
cable is used for connecting power to the GPU card in this server; do not use the Y-cable, which is used for
connecting the GPU card in external devices only.
The supported NVIDIA GPU requires a C240 M5 NVIDIA Cable (UCS-P100CBL-240M5).
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server, on
page 68.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing the Server Top Cover, on page 69.
Step 4 Remove an existing GPU card:
a) Disconnect any existing cable from the GPU card.
b) Use two hands to grasp the metal bracket of the PCIe riser and lift straight up to disengage its connector from the
socket on the motherboard. Set the riser on an antistatic surface.
c) On the bottom of the riser, press down on the clip that holds the securing plate.
d) Swing open the hinged securing plate to provide access.
e) Open the hinged plastic retainer that secures the rear-panel tab of the card.
f) Disconnect the GPU card's power cable from the power connector on the PCIe riser.
g) Pull evenly on both ends of the GPU card to remove it from the socket on the PCIe riser.
Figure 36: PCIe Riser Card Securing Mechanisms
Step 5 Install a new GPU card:
a) Align the GPU card with the socket on the riser, and then gently push the card's edge connector into the socket. Press
evenly on both corners of the card to avoid damaging the connector.
b) Connect the GPU power cable. The straight power cable connectors are color-coded. Connect the cable's black
connector into the black connector on the GPU card and the cable's white connector into the white GPU POWER
connector on the PCIe riser.
Caution Do not reverse the straight power cable. Connect the black connector on the cable to the black connector
on the GPU card. Connect the white connector on the cable to the white connector on the PCIe riser.
e) Position the PCIe riser over its socket on the motherboard and over the chassis alignment channels.
f) Carefully push down on both ends of the PCIe riser to fully engage its connector with the sockets on the motherboard.
At the same time, align the GPU front support bracket (on the front end of the GPU card) with the securing latch that
is on the server's air baffle.
Step 6 Insert the GPU front support bracket into the latch that is on the air baffle:
a) Pinch the latch release tab and hinge the latch toward the front of the server.
b) Hinge the latch back down so that its lip closes over the edge of the GPU front support bracket.
c) Ensure that the latch release tab clicks and locks the latch in place.
Replacing a Heatsink
For GPU configurations, the correct heatsink is the low-profile heatsink (UCSC-HSLP-M6), which has 4 T30
Torx screws on the main heatsink and 2 Phillips-head screws on the extended heatsink. High-profile heatsinks
(UCSC-HSHP-240M6) cannot be used with a GPU.
Use the following procedures to replace the heatsink on a GPU.
• Removing a Heat Sink, on page 199
• Installing a Heatsink, on page 202
a) Use two hands to flip up and grasp the blue riser handle and the blue finger grip area on the front edge of the riser,
and then lift straight up.
b) On the bottom of the riser, push the release latch that holds the securing plate, and then swing the hinged securing
plate open.
c) Open the hinged card-tab retainer that secures the rear-panel tab of the card.
a) Use two hands to grasp the metal bracket of the PCIe riser and lift straight up to disengage its connector from the
socket on the motherboard. Set the riser on an antistatic surface.
b) On the bottom of the riser, press down on the clip that holds the securing plate.
c) Swing open the hinged securing plate to provide access.
d) Open the hinged plastic retainer that secures the rear-panel tab of the card.
e) Disconnect the GPU card's power cable from the power connector on the PCIe riser.
f) Pull evenly on both ends of the GPU card to remove it from the socket on the PCIe riser.
c) Push the rotating wires towards each other to move them to the unlocked position.
Caution Make sure that the rotating wires are as far inward as possible. When fully unlocked, the bottom of the
rotating wire disengages and allows the removal of the CPU assembly. If the rotating wires are not fully
in the unlocked position, you can feel resistance when attempting to remove the CPU assembly.
d) Grasp the CPU and heatsink along the edge of the carrier and lift the CPU and heatsink off of the motherboard.
Caution While lifting the CPU assembly, make sure not to bend the heatsink fins. Also, if you feel any resistance
when lifting the CPU assembly, verify that the rotating wires are completely in the unlocked position.
What to do next
Install a low profile heatsink (UCSC-HSLP-M6) onto the GPU. See Installing a Heatsink, on page 202.
Installing a Heatsink
Use this procedure to install a low-profile heatsink (UCSC-HSLP-M6) on a GPU.
a) Apply the Bottle #1 cleaning solution that is included with the heatsink cleaning kit (UCSX-HSCK=), as well as the
spare CPU package, to the old TIM on the heatsink and let it soak for at least 15 seconds.
b) Wipe all of the TIM off the heatsink using the soft cloth that is included with the heatsink cleaning kit. Be careful to
avoid scratching the heatsink surface.
c) Completely clean the bottom surface of the heatsink using Bottle #2 to prepare the heatsink for installation.
d) Using the syringe of TIM provided with the new CPU (UCS-CPU-TIM=), apply 1.5 cubic centimeters (1.5 ml) of
thermal interface material to the top of the CPU. Use the pattern shown in the following figure to ensure even coverage.
Figure 39: Thermal Interface Material Application Pattern
Caution Use only the correct heatsink for your CPU. CPU 1 uses heatsink UCSB-HS-M6-R and CPU 2 uses heatsink
UCSB-HS-M6-F.
e) Set the T30 Torx driver to 12 in-lb of torque and tighten the 4 securing nuts to secure the CPU to the motherboard
(3) first.
f) Set the torque driver to 6 in-lb of torque and tighten the two Phillips head screws for the extended heatsink (4).
Step 3 Select your License Server’s MAC address from the Server host ID pull-down.
Note It is important to use the same Ethernet ID consistently to identify the server when generating licenses on
NVIDIA’s Licensing Portal. NVIDIA recommends that you select one entry for a primary, non-removable
Ethernet interface on the platform.
Step 3 Use the License Server Configuration menu to install the .bin file that you generated earlier.
a) Click Choose File.
b) Browse to the license .bin file that you want to install and click Open.
c) Click Upload.
The license file is installed on your License Server. When installation is complete, you see the confirmation message,
“Successfully applied license file to license server.”
Step 1 Open the NVIDIA Control Panel using one of the following methods:
• Right-click on the Windows desktop and select NVIDIA Control Panel from the menu.
• Open Windows Control Panel and double-click the NVIDIA Control Panel icon.
Step 2 In the NVIDIA Control Panel left-pane under Licensing, select Manage License.
The Manage License task pane opens and shows the current license edition being used. The GRID software automatically
selects the license edition based on the features that you are using. The default is Tesla (unlicensed).
Step 3 If you want to acquire a license for GRID Virtual Workstation, under License Edition, select GRID Virtual Workstation.
Step 4 In the License Server field, enter the address of your local GRID License Server. The address can be a domain name or
an IP address.
Step 5 In the Port Number field, enter the port number or leave it set to the default used by the server, which is 7070.
Step 6 Select Apply.
The system requests the appropriate license edition from your configured License Server. After a license is successfully
acquired, the features of that license edition are enabled.
Note After you configure licensing settings in the NVIDIA Control Panel, the settings persist across reboots.
Step 2 Edit the ServerUrl line with the address of your local GRID License Server.
The address can be a domain name or an IP address. See the example file below.
Step 3 Append the port number (default 7070) to the end of the address with a colon. See the example file below.
Step 4 Edit the FeatureType line with the integer for the license type. See the example file below.
• GRID vGPU = 1
• GRID Virtual Workstation = 2
The service automatically acquires the license edition that you specified in the FeatureType line. You can confirm this
in /var/log/messages.
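A minimal gridd.conf sketch, assuming a hypothetical license server at 192.0.2.10 on the default port and the GRID Virtual Workstation license type:
# /etc/nvidia/gridd.conf (path can vary by driver release)
ServerUrl=192.0.2.10:7070
FeatureType=2
You can then watch for the license-acquisition message with, for example, grep -i gridd /var/log/messages.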
Note After you configure licensing settings in the gridd.conf file, the settings persist across reboots.
Using gpumodeswitch
The command line utility gpumodeswitch can be run in the following environments:
• Windows 64-bit command prompt (requires administrator permissions)
• Linux 32/64-bit shell (including Citrix XenServer dom0) (requires root permissions)
Note Consult NVIDIA product release notes for the latest information on compatibility with compute and
graphic modes.
• --gpumode graphics
Switches to graphics mode. Switches mode of all supported GPUs in the server unless you specify
otherwise when prompted.
• --gpumode compute
Switches to compute mode. Switches mode of all supported GPUs in the server unless you specify
otherwise when prompted.
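For example, from a root shell on Linux, a typical sequence is sketched below; confirm the exact options against the NVIDIA gpumodeswitch documentation for your release:
gpumodeswitch --listgpumodes      # record the current mode of each supported GPU
gpumodeswitch --gpumode compute   # switch all supported GPUs to compute mode when prompted
After the switch completes, reboot the server as described in the following note.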
Note After you switch GPU mode, reboot the server to ensure that the modified resources of the GPU are
correctly accounted for by any OS or hypervisor running on the server.
Note You must do this procedure before you update the NVIDIA drivers.
Step 1 Install your hypervisor software on a computer. Refer to your hypervisor documentation for the installation instructions.
Step 2 Create a virtual machine in your hypervisor. Refer to your hypervisor documentation for instructions.
Step 3 Install the GPU drivers to the virtual machine. Download the drivers from either:
• NVIDIA Enterprise Portal for GRID hypervisor downloads (requires NVIDIA login):
https://nvidia.flexnetoperations.com/
• NVIDIA public driver area: http://www.nvidia.com/Download/index.aspx
• AMD: http://support.amd.com/en-us/download