Configure and Customize
OS
Version 3.x
Rev 03
November 2019
Copyright © 2019 Dell Inc. or its subsidiaries. All rights reserved.
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS.” DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property
of their respective owners. Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.DellEMC.com
Figures
Tables
Preface
Tuning considerations
VxFlex OS system changes
Using the set_performance_parameters utility for MDM and SDS
Caching Updates for VxFlex OS 2.x/3.x
Read RAM Cache settings for SDS
Read Flash Cache settings for SDS
Jumbo Frames and the potential impact on performance
Optimize Linux
Change the GRUB template for Skylake CPUs
Optimize ESXi
Optimize the SVM
Optimizing VM guests
I/O scheduler
Paravirtual SCSI controller
VxFlex OS Performance Parameters
RAID controller virtual disk settings
Apply Performance Profiles to system components
As part of an effort to improve its product lines, Dell EMC periodically releases revisions of its
software and hardware. Therefore, some functions described in this document might not be
supported by all versions of the software or hardware currently in use. The product release notes
provide the most up-to-date information on product features.
Contact your Dell EMC technical support professional if a product does not function properly or
does not function as described in this document.
Note: This document was accurate at publication time. Go to Dell EMC Online Support
(https://support.emc.com) to ensure that you are using the latest version of this document.
Previous versions of Dell EMC VxFlex OS were marketed under the name Dell EMC ScaleIO.
Similarly, previous versions of Dell EMC VxFlex Ready Node were marketed under the name Dell
EMC ScaleIO Ready Node.
References to the old names in the product, documentation, or software, etc. will change over
time.
Note: Software and technical aspects apply equally, regardless of the branding of the product.
Related documentation
The release notes for your version include the latest information for your product.
The following Dell EMC publication sets provide information about your VxFlex OS or VxFlex Ready
Node product:
l VxFlex OS software (downloadable as VxFlex OS Software <version> Documentation set)
l VxFlex Ready Node with AMS (downloadable as VxFlex Ready Node with AMS Documentation
set)
l VxFlex Ready Node no AMS (downloadable as VxFlex Ready Node no AMS Documentation
set)
l VxRack Node 100 Series (downloadable as VxRack Node 100 Series Documentation set)
You can download the release notes, the document sets, and other related documentation from
Dell EMC Online Support.
Typographical conventions
Dell EMC uses the following type style conventions in this document:
Bold: Used for names of interface elements, such as names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user specifically selects or clicks).
Technical support
Go to Dell EMC Online Support and click Service Center. You will see several options for
contacting Dell EMC Technical Support. Note that to open a service request, you must have a
valid support agreement. Contact your Dell EMC sales representative for details about
obtaining a valid support agreement or with questions about your account.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall quality of
the user publications. Send your opinions of this document to [email protected].
This section describes the tasks required to get started with your VxFlex OS system.
You can log in to VxFlex OS via the GUI or by using the CLI.
Post-deployment tasks, as well as the required configuration and customization workflows, can be performed through the GUI or the CLI.
Log in
To access the CLI, you must first log in to the management system using a terminal application.
If the CLI and the MDM do not reside on the same server, add the --mdm_ip parameter to all CLI
commands.
In a non-clustered environment, use the MDM IP address. In a clustered environment, use the IP
addresses of the master and slave MDMs, separated by a comma. For example:
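A hypothetical example with placeholder IP addresses for the master and slave MDMs:
scli --mdm_ip 192.168.1.10,192.168.1.11 --login --username admin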
When using LDAP, include the LDAP domain in the command. For more information on LDAP, see
VxFlex OS User Roles and LDAP Usage Technical Notes. For example:
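A hypothetical example, assuming the LDAP domain is supplied as part of the username:
scli --login --username admin@example.com --ldap_authentication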
The default user created during setup is the SuperUser, with the admin username.
login
Log the specified user into the management system. Every user must log in before performing CLI
commands.
When a user is authenticated by the system, all commands will be executed with the respective
role until a logout is performed, or until the session expires, by reaching one of the following
timeouts:
l Maximum session length (default: 8 hours)
l Session idle time (default: 10 minutes)
Syntax
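A sketch of the command form, reconstructed from the parameters listed below (optional flags in brackets):
scli --login --username <USERNAME> [--password <PASSWORD>] [--ldap_authentication | --native_authentication] [--approve_certificate_once | --approve_certificate] [--accept_banner_by_scripts_only]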
Parameters
--username
Username
--password
User password. If you do not type your password, you will be prompted to do so.
Note: In Linux, to prevent the password from being recorded in the history log, leave out
the password flag and enter the password interactively.
--ldap_authentication
Log in using the LDAP authentication method. LDAP authentication parameters should be
configured and LDAP authentication method should be set.
--native_authentication
Log in using the native authentication method (default).
--approve_certificate_once
One-time approval of the MDM's certificate (without adding the certificate to the truststore)
--approve_certificate
Automatic approval of the MDM's certificate for the next commands (adds the certificate to
the truststore)
--accept_banner_by_scripts_only
Examples
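For instance, logging in as the default admin user (when the password flag is omitted, you are prompted for the password):
scli --login --username admin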
Note: During installation using the VxFlex OS Installer or the VMware plug-in, the password for
the admin user is reset, and you should log in with the new password. If you installed VxFlex
OS manually, after logging in the first time with the default password (admin), you must
change the password and log in again. Once that is accomplished, the admin user can create
additional users.
When logging in, if a login banner has been configured and enabled in your system, you are
prompted to press any key, after which the banner is displayed. To continue, enter "q" to quit the
login banner, and then enter "y" to approve the banner.
logout
Log the current user out of the system.
Syntax
scli --logout
Example
scli --logout
This section contains activities that are performed after deployment of a VxFlex OS system.
Add and map volumes (Mandatory): Adding and mapping volumes can be performed using various VxFlex OS management tools.
Create a Lockbox, and add the MDM credentials to it (Optional): If you want to use SNMP, ESRS, LDAPS, or the Auto Collect Logs feature, your system must have a Lockbox. Recommended best practice is to create the Lockbox during installation with the VxFlex OS Installer when selecting Set advanced options. Only when a Lockbox has not been created during installation should you manually create a Lockbox.
Define LDAP users (if using VxFlex OS with LDAP) (Optional): For detailed information about setting up VxFlex OS with LDAP, see "Configure LDAP users".
Enable SNMP and configure the SNMP trap receiver (Optional): If SNMP was not configured during installation using the advanced options in the VxFlex OS Installer, it can be configured after deployment.
Enable FIPS compliance (Optional): You can enable the OpenSSL Federal Information Processing Standards (FIPS) compliance implementation in the MDM for communication from the SDSs and the SDCs to the MDM. For instructions on how to enable the OpenSSL FIPS compliance implementation, see "Enable OpenSSL FIPS compliance."
Procedure
1. Create a lockbox:
Note: From system version 2.5 and later, the installation process assigns a random
passphrase to this property, and it is highly recommended not to configure or use this
property, because it could create a security breach.
Windows example:
C:\Program Files\EMC\ScaleIO\Gateway\bin\FOSGWTool.bat --set_mdm_credentials --mdm_user admin --mdm_password Scaleio123
Customer support
Dell EMC provides immediate support via early detection of issues that are communicated by the
Secure Remote Services (SRS) gateway or email alert notifications. You may choose which
method best suits your business requirements.
For SRS configuration, refer to "Register VxFlex OS system to SRS". For information on how to
configure the email alert notifications, refer to "Email notifications".
9. Click the Register to SRS button to register the VxFlex OS system to the SRS.
Configure SRS
Enable Secure Remote Support (SRS) for remote support.
Before you begin
Ensure that you have:
l One or more IP addresses of the SRS gateway servers. Note that SRS does not currently
support IPv6.
l SRS username and password.
l VxFlex OS Gateway IP address, username, and password.
l The VxFlex OS Management IP address to be used as the Connect-In IP address. It must be an
IP address that is accessible from the SRS gateway (for example, in case of NAT).
Location of the gatewayUser.properties file, by VxFlex OS Gateway installation:
Linux: /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes
At the very end of the command output, the token is displayed. For example:
"YWRtaW46MTQ4NjQyMzM2jg4MzpmOTdkYzczOTRjMGRjNzFjNT1jNTljMzJiM2U5NWJjNmFh
NA"
f. Optionally, use the same command as the previous step to add the root certificate to the
truststore.
3. Restart the VxFlex OS Gateway service.
Option Description
Windows: Restart the EMC ScaleIO Gateway service
Linux: Type the command
4. Register VxFlex OS on the SRS gateway. In command line, use the following FOSGWTool
command:
where --ESRS_gateway_user is the user name used for EMC support (typically, an email address), --ESRS_gateway_password is its corresponding password, and --connect_in_ip is the MGMT IP address of the current Master MDM.
Results
SRS registration is complete.
Email notifications
Service Requests may be created via an email alert notification that is triggered by alerts sent from
the VxFlex OS system.
VxFlex OS can send an email when alerts are triggered from the system. The email is sent from the
VxFlex OS Gateway via a designated SMTP server to the SRS which then notifies Dell EMC
customer support.
Note: The default setting after an upgrade is SRS. So if you upgrade your system and the
notification method property was set to email, you must update the setting after the upgrade.
There are two ways to configure the email notifications feature:
l Configure the gatewayUser.properties file and then restart the gateway service
l Configure and start the gateway service using the REST API
Note: Verify that a lockbox is configured. If it is not, refer to "Create the lockbox" for steps on
how to manually create a lockbox. After creating the lockbox, you must add MDM credentials
to the lockbox.
Results
Email notification is now set up to alert customer support with any issues from the VxFlex OS
system.
where:
l <JSESSION_ID> is the JSESSIONID returned in the response of the
j_spring_security_check command. For more information, see "Working with the
(IM) REST API" in the VxFlex OS REST API Reference Guide.
l <IM_IP_ADDRESS> is the IP address of the VxFlex OS Installer
l <NOTIFICATION_METHOD> is one of the options of the notificationMethod
detailed above.
notifications/email/actions/setEmailSID?type=<SID_TYPE>&sid=<SID>&smtpServer=<SMTP_SERVER>&username=<USERNAME>&password=<PASSWORD>&authenticate=<AUTHENTICATE>
where:
l <JSESSION_ID> is the JSESSIONID returned in the response of the
j_spring_security_check command. For more information, see "Working with the
(IM) REST API" in the VxFlex OS REST API Reference Guide.
l <IM_IP_ADDRESS> is the IP address of the VxFlex OS Installer
l <SID_TYPE> is the type of sender identity. See the options available for type above
l <SID> is the email address you wish to use for the sender identity
l <SMTP_SERVER> is the SMTP server address, if using SMTP authentication
l <USERNAME> is the username for SMTP authentication
l <PASSWORD> is the password for SMTP authentication
l <AUTHENTICATE> indicates whether you want to use SMTP authentication
where:
l <JSESSION_ID> is the JSESSIONID returned in the response of the
j_spring_security_check command. For more information, see "Working with the
(IM) REST API" in the VxFlex OS REST API Reference Guide.
l <IM_IP_ADDRESS> is the IP address of the VxFlex OS Installer
Note: You must stop the email notification feature before changing its configuration.
After configuring the email notification feature, start the feature so that it begins
sending out email notifications. Any time the feature is stopped, it needs to be started
using this command. The email notification feature automatically restarts after a reboot.
where:
l <JSESSION_ID> is the JSESSIONID returned in the response of the
j_spring_security_check command. For more information, see "Working with the
(IM) REST API" in the VxFlex OS REST API Reference Guide.
l <IM_IP_ADDRESS> is the IP address of the VxFlex OS Installer
where:
l <JSESSION_ID> is the JSESSIONID returned in the response of the
j_spring_security_check command. For more information, see "Working with the
(IM) REST API" in the VxFlex OS REST API Reference Guide.
l <IM_IP_ADDRESS> is the IP address of the VxFlex OS Installer
l <SID_TYPE> is the type of sender identity. See the options available for type above
l <SID> is the email address you wish to use for the sender identity
3. Add users:
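A hypothetical scli example of adding a user (the username and role are illustrative; see the CLI Reference Guide for the valid roles):
scli --add_user --username monitor_user --user_role Monitor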
4. (Optional) Log in with the new user and then change its password:
This is optional because the VxFlex OS GUI, CLI, or REST will enforce a password change if
a user logs in with the original password.
4. Use the CLI to set the system to mixed authentication method, LDAP and native:
Configure SNMP
Configure Simple Network Management Protocol (SNMP) for error reporting, if it was not
configured during installation.
Before you begin
Ensure that a lockbox has already been created and that the MDM credentials have been added to
it.
About this task
Enable the SNMP feature in the gatewayUser.properties file.
Procedure
1. Use a text editor to open the gatewayUser.properties file, located in the following
directory on the VxFlex OS Installer/VxFlex OS Gateway server:
l Linux: /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes
l Windows: C:\Program Files\EMC\ScaleIO\Gateway\webapps\ROOT\WEB-INF
\classes\
2. Locate the parameter features.enable_snmp and edit it as follows:
features.enable_snmp=true
3. To add the trap receiver IP address or host name (to configure trap receivers using host
names see "Configure Dynamic Host Name resolution for SNMP in VxFlex OS"), edit the
parameter snmp.traps_receiver_ip.
The SNMP trap receivers’ IP address parameter supports up to two comma-separated or
semi-colon-separated host names or IP addresses.
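For example, the edited lines might look like the following (the addresses are placeholders):
features.enable_snmp=true
snmp.traps_receiver_ip=10.10.10.50;10.10.10.51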
4. You can optionally change the following parameters:
Option Description
snmp.sampling_frequency The MDM sampling period. The default is 30.
snmp.resend_frequency The frequency of resending existing traps. The default is 0,
which means that traps for active alerts are sent every
sampling cycle.
cat /proc/sys/crypto/fips_enabled
c. To create more than one volume, select Create multiple volumes and type the number
of volumes in the Copies box.
d. Type a name in the Name box.
e. To start from a specific number other than 1, type it in the Start numbering at box.
This number will be the first number in the series that will be appended to the volume
name.
f. Type a number in the Size box, representing the volume size in GB (allocation granularity
is 8 GB).
g. Select Thick (default) or Thin provisioning options.
h. Enable or disable the Read RAM Cache feature by selecting or clearing Use Read RAM
Cache.
i. Click OK.
3. Map volumes to an SDC:
a. In the Frontend > Volumes view, select the volumes.
b. From the Command menu or context-sensitive menu, select Map Volumes.
The Map Volumes window is displayed, showing a list of the volumes that will be
mapped.
c. In the Select Nodes panel, select one or more SDCs to which you want to map the
volumes.
2. Add a volume:
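A hypothetical scli example of adding a volume (the names and size are placeholders):
scli --add_volume --protection_domain_name pd1 --storage_pool_name sp1 --size_gb 16 --volume_name vol01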
You can use the VxFlex OS GUI or the CLI --query_all command to see the installed
nodes and storage.
4. Get the host UUID by running the following command:
xe host-list
5. Edit the file /etc/lvm/lvm.conf by editing the lines that start with types, and adding
"scini", 16 inside the square brackets.
Example:
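The edited line should look similar to the following (this matches the lvm.conf example shown later in this guide):
types = [ "scini", 16 ]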
7. Use the retrieved host UUID while running the sr-create command.
Note: VxFlex OS provides a unique ID to each volume. It is highly recommended to use
the unique ID when running on XenServer. For example, the VxFlex OS volume name in
Note: To add a shared storage repository, the following conditions must be fulfilled:
l All nodes in the XenServer Center Storage Pool must be installed with SDC.
l The VxFlex OS volume to be used as the shared SR must be mapped to all SDCs in the
Storage Pool.
Results
You can now start using your storage.
Mount VxFlex OS
The exposed VxFlex OS volumes are connected to the servers via the network. To configure
mounting options of VxFlex OS devices, follow the instructions for your specific Linux-based
operating system, below.
About this task
Use persistent device names since the /dev/sciniX names are not guaranteed to persist
between reboots. How to use persistent device names is described in full in "Associating volumes
with physical disks".
To mount VxFlex OS:
Procedure
1. Determine the /dev/disk/by-id correlation to /dev/sciniX:
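For example, list the persistent device names and the /dev/sciniX devices they point to (this command is also referenced in "Map volumes" later in this guide):
ls -l /dev/disk/by-id/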
2. Mount the volume:
mount /dev/disk/by-id/<EMC-vol-id> <mount_point>
Example:
mount /dev/disk/by-id/emc-vol-7ec27ef55b8f2108-85a0f0330000000a /mnt_scinia
3. To make the mount command persistent, edit the /etc/fstab file according to the
instructions for your operating system:
l RHEL 6.x:
a. In /etc/fstab, use a text editor to add the VxFlex OS mount lines:
/dev/disk/by-id/emc-vol-7ec27ef55b8f2108-85a0f0330000000a /mnt_scinia ext4 defaults 0 0
mount /mnt_scinia
l RHEL 7.x:
In /etc/fstab, use a text editor to add _netdev to the VxFlex OS mount lines.
Example:
/dev/disk/by-id/emc-vol-7ec27ef55b8f2108-85a0f0330000000a /mnt_scinia ext4 defaults,_netdev 0 0
Ensure that you comply with the _netdev option and the syntax rules for your file system, as described in the man page.
l SLES:
In /etc/fstab, use a text editor to add nofail to the VxFlex Ready Node mount lines.
Example:
/dev/disk/by-id/emc-vol-7ec27ef55b8f2108-85a0f0330000000a /mnt_scinia ext3 nofail 0 0
Ensure that you comply with the nofail option and the syntax rules for your file system, as described in the man page.
This output shows the Volume ID and name, and other volume information.
This output shows the scini volume name and the volume ID.
By matching the volume ID in both outputs, you can match the operating system names, sciniX,
with the VxFlex OS volume name.
For example:
l scinia = fac22a6300000000 = vol0
l scinic = fac22a6400000001 = vol1
Alternatively, run the sg_inq /dev/sciniX SCSI query command (requires that the sg3_utils package be installed on the Linux host). The result of this command includes the EMC volume ID at the bottom of the output, as illustrated:
Note:
The product identification remains as ScaleIO (not VxFlex OS).
types = [ "scini", 16 ]
2. When VxFlex OS scini devices are used, add the following filter:
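One possible filter line, shown for illustration only (this pattern accepts only scini devices; adjust the expressions if LVM must also scan other devices in your environment):
filter = [ "a|^/dev/scini.*|", "r|.*|" ]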
3. After it is configured, the lvmdiskscan command should yield results similar to the following:
CuAt:
name = "scinid0"
attribute = "vol_id"
value = "e120a92d00000000"
type = "R"
generic = "D"
rep = "s"
nls_index = 22
[root@cnode02 /]#odmget -q "name like scinid* and attribute=vol_id" CuAt
CuAt:
name = "scinid2"
attribute = "vol_id"
value = "e120a92f00000002"
type = "R"
generic = "D"
rep = "s"
nls_index = 22
CuAt:
name = "scinid8"
attribute = "vol_id"
value = "e120a93500000008"
type = "R"
generic = "D"
rep = "s"
nls_index = 22
CuAt:
name = "scinid0"
attribute = "vol_id"
value = "e120a92d00000000"
type = "R"
generic = "D"
rep = "s"
nls_index = 22
You can get information for a single volume, by using this command:
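For example, to query a single device (the device name is illustrative; it follows the pattern shown in the output above):
odmget -q "name=scinid0 and attribute=vol_id" CuAt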
2. Match the value of the value field with the VxFlex OS volume ID.
This section provides an overview of the different tasks required to enhance VxFlex OS
performance.
Upgrades
For any VxFlex OS upgrade from 2.x to 3.x, profile parameters are preserved during the upgrade. If
Fine Granularity is configured and the profile was compact, it changes to high performance. For a
clean 3.x install, high performance is the default. Users should implement performance tunings by
following the guidelines described in this document.
Tuning considerations
In version 3.x, the high_performance profile is the default. A Medium Granularity configuration can be changed to the compact profile, which requires less memory and CPU resources but may have some impact on the performance of the setup. With a Fine Granularity configuration, including a mixed Fine and Medium Granularity configuration, only the high_performance profile is allowed.
The main difference between the high_performance (default) and compact profiles is the amount of server resources (CPU and memory) that is consumed. A high_performance profile (or configuration) will always consume more resources.
This document describes commands using the VxFlex OS command line interface (scli) to quickly and easily modify the desired performance profile.
Users will achieve optimum performance by always setting the performance profile to high_performance. A complete list of parameters comparing the default and high_performance profiles is available in "Performance Parameters".
scli --query_performance_parameters

scli --set_performance_parameters --all_sds --all_sdc --apply_to_mdm --profile high_performance

scli --set_performance_parameters --all_sds --all_sdc --apply_to_mdm --profile compact

scli --query_performance_parameters --print_all
To view full parameter settings of a specific SDS (this also shows the MDM settings), execute the command:
scli --query_performance_parameters --sds_name <NAME> --print_all
To view full parameter settings of a specific SDC (this also shows the MDM settings), execute the command:
scli --query_performance_parameters --sdc_name <NAME> --print_all
Note: Refer to the "Performance Parameters" for a list containing all default and performance
profile parameters.
The following table summarizes information about the caching modes provided by the system.
RAM Read Cache (RMCache): Read-only caching performed by server RAM. RAM Read cache, the fastest type of caching, uses RAM that is allocated for caching. Its size is limited to the amount of allocated RAM. RMCache should only be used for HDD pools. Default: Disabled.
Read Flash Cache (RFCache): Read-only caching performed by one or more dedicated SSD devices or flash PCI drives in the server. Read Flash Cache uses the full capacity of SSD or flash PCI devices (up to eight) to provide a larger footprint of read-only LRU (least recently used) based caching resources for the SDS. This type of caching reacts quickly to workload changes to speed up HDD read performance. Default: Disabled.
DAS Cache: Read and write-back caching performed by one or more dedicated SSD devices in the server. DAS Cache uses the full capacity of one or more SSD devices to provide a large footprint of both read and write-back caching resources to the SDS. This caching mode moves "hot" (active) chunks of data from HDDs to cache, for read and write buffering. For write-back caching, the write is temporarily written to the SSD, which is much faster than an HDD, allowing faster response of the SDS to write acknowledgment. One SSD device can accelerate several HDDs (in DAS Cache they are called "Volumes"). Striping the cache on two devices is not supported in the VxFlex Ready Node solution. Default: Disabled.
Note: If a fault occurs in the caching device before the writes have been offloaded, all the HDD devices cached by DAS Cache acquire failed status, and a rebuild process commences in VxFlex OS. Once the rebuild is over, the caching disk can be replaced, all caching has stopped in the storage pool, and the HDD members in the storage pool can be cleared of errors.
l scli --set_rmcache_usage --protection_domain_name <domain NAME> --storage_pool_name <pool NAME> --use_rmcache [--dont_use_rmcache]
l scli --enable_sds_rmcache [--disable_sds_rmcache] --sds_name <NAME>
2. Set the RFcache parameters. (Recommendation: these parameters have a great impact on performance; therefore, use the defaults.)
Note: The default settings are: Passthrough mode = Write_Miss, Page Size = 64 KB, Max IO size = 128 KB.
3. Enable acceleration of a Storage Pool—accelerate all SDS devices that are in the pool:
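A hypothetical example, assuming the scli RFcache commands follow the same pattern as the RMcache commands shown above:
scli --set_rfcache_usage --protection_domain_name <domain NAME> --storage_pool_name <pool NAME> --use_rfcache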
where, network_ID is the ID from the output in the previous step; in this case, the ID is 17.
4. In the Advanced tab of the Adapter Properties dialog for your vendor and driver, change
the value of Jumbo Packet to 9000, as illustrated in the following figure:
Figure 3 Adapter properties
5. Click OK.
The network connection may disconnect briefly during this phase.
6. Verify that the configuration is working, by typing the command:
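One common way to verify jumbo frames on Windows is a ping that forbids fragmentation; 8972 bytes of payload plus headers corresponds to a 9000-byte MTU (the destination address is a placeholder, not necessarily the command from the original procedure):
ping 192.168.1.20 -f -l 8972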
b. Create VMKernel with jumbo frames support by typing the following commands:
a. esxcfg-vswitch -d
Note: Changing Jumbo Frames on a vSwitch will not change the VMkernel MTU size. For older vCenter versions, check to ensure that the MTU setting has been changed. If it has not, users may need to delete and recreate the VMkernel port. For newer vCenter versions, modify the MTU of the VMkernel port by using the vSphere Web Client.
Procedure
1. Edit the /etc/sysconfig/network/ifcfg-<NIC_NAME> file.
2. Add the parameter MTU=9000 to the file.
3. To apply the changes, type:
4. Execute the ifconfig command again to confirm that the settings have been changed.
5. To test the configuration, type:
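For example, a ping that forbids fragmentation can confirm the 9000-byte MTU end to end (the destination address is a placeholder):
ping -M do -s 8972 192.168.1.20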
Optimize Linux
When using SSD devices, it is recommended that you modify the devices' I/O scheduler.
Type the following on each server, for each SDS device:
For example:
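A sketch using the NOOP scheduler recommended later in this guide (sdX is a placeholder for the SDS device):
echo noop > /sys/block/sdX/queue/scheduler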
When CPUs have more than twelve physical cores and more performance is required from the node (for example, when CloudLink is being used for SW encryption, or when Fine Granularity is being used and more performance per node is required), type the following:
Note: To make these changes persistent after reboot, either create a script that runs on boot,
or change the kernel default scheduler via kernel command line.
1. Edit the GRUB configuration file:
vim /etc/default/grub
2. Find the GRUB_CMDLINE_LINUX configuration option and append the following to the line:
intel_idle.max_cstate=1 intel_pstate=disable
Example:
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root
rd.lvm.lv=rhel/swap rhgb intel_idle.max_cstate=1
intel_pstate=disable quiet"
3. Regenerate the GRUB configuration:
grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
Optimize ESXi
To improve I/O concurrency, users may increase the per-device queue length value on a per-datastore basis. The per-device queue length is referred to as "No of outstanding IOs with competing worlds" in the display output.
Use the following command to increase the queue length:
where <Outstanding IOs> can be a number ranging from 32 to 16384 (the default is 32). We recommend increasing the queue length to 256.
Example:
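A hypothetical command, assuming the standard esxcli syntax for setting the outstanding I/O value on a device (the device identifier is a placeholder):
esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx -O 256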
2. From the Resources tab of the Virtual Machine Properties window, select Reserve all
guest memory (All locked).
The Virtual Machine Properties window is displayed:
Optimizing VM guests
Select the VM guest to optimize.
I/O scheduler
When using SSD devices, it is recommended that you modify the devices' I/O scheduler.
Type the following on each server, for each SDS device:
Example:
Note: For most Linux distributions, NOOP is not the default. Different Linux versions have
different default values. For RHEL7 and SLES 11/12, the default value is deadline, but for older
versions, the default is CFQ.
2. Re-run vmware-config-tools.pl:
vmware-config-tools.pl
You can add up to 4 PVSCSI controllers per guest. Allocating different VxFlex OS volumes to
different PVSCSI controllers can help realize the maximum potential of guest performance.
It is strongly recommended that you review this VMware Knowledge Base article (article 2053145)
so that you can make educated decisions regarding PVSCSI values.
MDM mdm_sds_snapshot_used_capacity_threshold 1 99 50 50
SDS mdm_number_sockets_per_sds_ip 1 8 1 2
SDS sds_number_network_umt 2 16 4 8
SDS sds_number_os_threads 1 32 4 8
SDS sds_number_sockets_per_sds_ip 1 8 1 4
SDS sds_number_io_buffers 1 10 2 5
SDC sdc_number_sockets_per_sds_ip 1 8 1 2
SDC sdc_number_network_os_threads 1 10 2 8
SDC sdc_number_non_io_os_threads 1 16 3 3
l SDCs
l MDM cluster
Note: After changing the performance profile of an SDS (on an SVM), you must perform
manual memory allocation on the SVM, as described in the VxFlex OS Deployment Guide.
To apply a profile to system components, perform the following steps:
Procedure
1. Depending on the system component that you want to configure, in either the Backend >
Storage or Frontend > SDCs view, navigate to, and select the desired objects.
Note: If you want to apply the Performance Profile to MDMs, select the System object.
2. Right-click the object and select Set Performance Profile for XXX, where XXX represents
one of the following:
l MDMs
l All SDSs
l SDS
l All SDCs
l SDC
This section provides answers to common questions or scenarios when working with the VxFlex OS system.
Chapter 5, "Backend"
Chapter 8, "Security"
The following topics describe how to create volumes from devices added to SDS nodes, and then
to map the volumes to SDC nodes. Devices may have been added during, or after, the installation
process.
l Volumes
l Configuring volumes, volume trees, SDCs, and snapshots
l Migrating V-Trees
l Set V-Tree compression mode
l Snapshot Policies
l Apply Performance Profiles to system components
l Volumes in the vSphere environment
l Add an external SDC to an existing VxFlex OS system
l SDC operations
Volumes
You can define, configure and manage volumes in the VxFlex OS system.
Add volumes
Add volumes to a system.
Before you begin
There must be at least three SDS nodes in the system and there must be sufficient capacity
available.
Note: For the minimum size of an SDS, see "System Requirements" in the Getting to Know guide.
About this task
Adding and mapping volumes is a necessary part of the getting started process; applications cannot access the volumes until it is done. In addition, you may create additional volumes and map them as part of the maintenance of the virtualization layer.
You can configure the caching option when creating the volumes, or you can change the Read
RAM Cache feature later. If you want to enable the caching feature, ensure that the feature is also
enabled in the backend of the system, for the corresponding Storage Pool and SDSs. For more
information, see "Change Read RAM Cache volume settings".
Define volume names according to the following rules:
l Contains less than 32 characters
l Contains only alphanumeric and punctuation characters
l Is unique within the object type
VxFlex OS objects are assigned a unique ID that can be used to identify the object in CLI
commands. You can retrieve the ID via a query, or via the object’s property sheet in the VxFlex OS
GUI. It is highly recommended to give each volume a meaningful name associated with its
operational role.
To add one or multiple volumes, perform these steps:
Procedure
1. In any of the Frontend > Volumes views, navigate to the Storage Pool to which you want to
add the volume, and select it.
2. From the Command menu or context-sensitive menu, select Add Volume.
3. In the Add Volume window, if you want to create more than one volume, select Create
multiple volumes and type the number of volumes you would like to add in the Copies box.
l If you type 1, only one volume will be created (optional—can be left blank).
l If you type a number greater than 1, the characters %i% will be added to the Name box,
and multiple volumes will be created, accordingly.
The volumes will be named and numbered automatically, starting from 1. If you want the
numbering to start from a different number, type it in the Start numbering at box, as
described in step 5. The remaining options in the window will be assigned to all the
volumes created in this operation.
4. Type a name for the volume:
l If you are adding one volume, enter the name in the Name box.
l If you are adding multiple volumes, enter the base name in the Base name box.
The volumes will all be created with the same name, and a number will be appended
instead of the characters %i%. These characters can be positioned anywhere in the
name. The names that will be created are displayed in the right pane of the window, as
shown in the figure later in this topic.
5. If you want the numbering to start from a specific number other than 1, type it in the Start
numbering at box.
This number will be the first number in the series that will be appended to the volume name.
For example, if the Name is Vol%i% and the Start numbering at value is 100, the name of
the first volume created will be Vol100, and the second volume will be Vol101, and so on.
6. Type a number in the Size box, representing the volume size in GB (basic allocation
granularity is 8 GB).
7. Select either Thick (default) or Thin provisioning options.
8. If you want to enable the RMcache feature (disabled by default), select Use RMcache.
9. Click OK.
Note: The progress of the operation is displayed at the bottom of the window. It is
recommended to keep the window open until the operation is completed, and until you
can see the result of the operation.
Remove volumes
Remove volumes from Storage Pools.
Before you begin
Ensure that the volume you are removing is not mapped to any SDCs. If it is, unmap it before
removing. For information, see "Unmap volumes". Also, ensure that the volume is not the source
volume of any Snapshot Policy. You must first remove the volume from the Snapshot Policy before
you can remove the volume.
About this task
If you want to remove a volume’s related snapshots, or just the snapshots, see "Remove
Snapshots".
Best practice is to avoid deleting volumes or snapshots while the MDM cluster is being upgraded,
to avoid causing a Data Unavailability status.
Note: Removal of a volume erases all the data on the corresponding volume.
Procedure
1. In any of the views of the Frontend > Volumes view, navigate to the volume or volumes you
want to remove, and select them.
2. From the Command menu or context-sensitive menu, select Remove.
The Remove Volumes window is displayed, showing a list of the volumes that will be
removed.
3. Click OK.
The progress of the operation is displayed at the bottom of the window. It is recommended
to keep the window open until the operation is completed, and until you can see the result of
the operation.
4. In the Overwrite volume content dialog box, in the Password Confirmation field, enter
the admin password.
5. Click OK.
Results
The content from the volume is overwritten.
The Snapshot Volume window is displayed, showing the volumes for which snapshots will
be created.
3. In the Index box, type the number that you want to append to the snapshot names.
4. If you want the snapshots to belong to a consistency group, ensure that the Create
Consistency Group check box is selected.
5. Click OK.
The progress of the operation is displayed at the bottom of the window. It is recommended
to keep the window open until the operation is completed, and until you can see the result of
the operation.
Remove snapshots
You can remove a volume together with its snapshots, or remove individual snapshots.
About this task
Before removing a volume or snapshots, you must ensure that they are not mapped to any SDCs.
If they are, unmap them before removing them. Snapshots are unmapped in the same way as
volumes are unmapped. For information, see "Unmap volumes".
Best practice is to avoid deleting volumes or snapshots while the MDM cluster is being upgraded,
to avoid causing a Data Unavailability status.
Note: Removal of a volume or snapshot erases all the data on the corresponding volume or
snapshot.
Procedure
1. In the Frontend > Volumes > V-Trees view, navigate to the volume from which you want to
remove snapshots, and select it.
2. From the Command menu or context-sensitive menu, select one of the following options:
l To remove both the parent volume and all volumes that were created as snapshots of the
specified volume or one of its descendants, select Remove with Descendants
l To retain the parent volume, and remove only its snapshots, select Remove
Descendants Only
The Remove Volumes window is displayed, showing a list of the objects that will be
removed.
3. Click OK.
The progress of the operation is displayed at the bottom of the window. It is recommended
to keep the window open until the operation is completed, and until you can see the result of
the operation.
Map volumes
Map one or more volumes to SDCs.
About this task
Mapping exposes the volume to the specified SDC, effectively creating a block device on the SDC.
For Linux devices, the scini device name can change on reboot. It is recommended to mount a
mapped volume to the VxFlex Ready Node unique ID, a persistent device name, rather than to the
scini device name.
To identify the unique ID, run the ls -l /dev/disk/by-id/ command. For more information,
see "Associate VxFlex OS volumes with physical disks". You can also identify the unique ID using
VMware. In the VMware management interface, the device is called EMC Fibre Channel Disk,
followed by an ID number starting with the prefix eui.
Note: You can't map a volume if the volume is an auto snapshot that is not locked.
Unmap volumes
Unmap one or more volumes from SDCs.
Procedure
1. In any of the Frontend > Volume views, navigate to the volumes, and select them.
2. From the Command menu or context-sensitive menu, select Unmap Volumes.
The Unmap Volumes window is displayed, showing a list of the volumes that will be
unmapped.
3. If you want to exclude some SDCs from the unmap operation, in the Select Nodes panel,
select one or more SDCs for which you want to retain mapping.
You can use the search box to find SDCs
4. Click Unmap Volumes.
The progress of the operation is displayed at the bottom of the window. It is recommended
to keep the window open until the operation is completed, and until you can see the result of
the operation.
The Increase Volumes’ Size window is displayed, showing a list of the volumes that will be
modified.
4. In the New Size box, type a number representing the new volume size in GB (basic
allocation granularity is 8 GB).
5. Click OK.
The progress of the operation is displayed at the bottom of the window. It is recommended
to keep the window open until the operation is completed, and until you can see the result of
the operation.
5. In the Select Nodes panel, select the SDCs to which you want to apply the changes.
6. Click Set Limits.
The progress of the operation is displayed at the bottom of the window. It is recommended
to keep the window open until the operation is completed, and until you can see the result of
the operation.
Procedure
1. In the Frontend > Volumes > V-Trees view, navigate to the snapshot whose consistency
group you want to remove, and select the snapshot.
2. From the Command menu or context-sensitive menu, select Remove Consistency Group.
The Remove Consistency Group window is displayed, showing the selected snapshot.
3. Click OK.
The progress of the operation is displayed at the bottom of the window. It is recommended
to keep the window open until the operation is completed, and until you can see the result of
the operation.
also describes setting the SDC restriction mode and how to approve SDCs before mapping
volumes.
Set the system's restricted SDC mode using the VxFlex OS GUI
Use the VxFlex OS GUI to set the restricted SDC mode.
About this task
The system's restricted SDC mode can also be set using the CLI or the REST API. For details, see
the CLI Reference Guide or the REST API Reference Guide.
Note: In a system that has been upgraded and already has volumes mapped to SDCs, if you
want to enable restricted SDC mode, you must first approve the SDCs and only then enable
restricted SDC mode.
Procedure
1. In the Frontend > SDCs view, right-click on the System and select Set Restricted SDC
Mode.
2. In the Set Restricted SDC Mode dialog box, select one of the following options and click
OK.
l No restriction
l GUID restriction
l Approved IP restriction
Results
The restricted SDC mode is set for the system. If you enabled restricted SDC mode by selecting
either GUID restriction or Approved IP restriction, you must configure approved
SDCs before you can map volumes.
2. In the Approve SDC window, verify that the SDCs listed are the ones you want to approve
and click OK.
Results
The SDCs are approved and you can map volumes. The Approved IPs column in the Frontend >
SDCs displays which SDCs are approved.
Migrating V-Trees
Migrating a V-Tree allows you to move the V-Tree to a different Storage Pool.
Migrating a V-Tree frees up capacity in the source Storage Pool. You can migrate, for example, from an HDD Storage Pool to an SSD Storage Pool, or to a Storage Pool with different attributes (for example, from thin to thick). For more information, see "Volume Tree" in the Getting to Know Guide.
There are several motives for migrating a V-Tree to a different Storage Pool:
l In order to move the volumes to a Storage Pool with a different performance tier
l To move to a different Storage Pool or Protection Domain due to multitenancy
l To decrease the capacity of a system by moving out of a specific Storage Pool
l To change from a thin-provisioned volume to a thick-provisioned volume or the reverse
l In order to move the volumes from a Medium Granularity Storage Pool to a Fine Granularity
Storage Pool
l To clear a Protection Domain for maintenance and then return the volumes to it
During V-tree migration, you can run other tasks such as creating snapshots, deleting snapshots,
and entering maintenance mode.
Note: You cannot create snapshots when migrating from a Medium Granularity Storage Pool
to a Fine Granularity Storage Pool.
When the user requests a V-Tree migration, the MDM first estimates whether the destination
Storage Pool has enough capacity for the migration to complete successfully. The MDM bases the
estimation on its information of the current capacity of the V-Tree. If there is insufficient capacity
at the destination based on that estimate, migration does not start. (An advanced option allows
you to force the migration even if there is insufficient capacity at the destination, with the
intention to increase the capacity as required during the migration.) The MDM does not reserve
the estimated capacity at the destination (since the capacity of the source volume can grow
during migration and the reserved capacity does not guarantee success). The MDM does not hold
onto source capacity once it has been migrated, but releases it immediately.
Use the following table to understand which V-Tree migrations are possible and under what
specific conditions:
V-Tree migration can take a long time, depending on the size of the V-Tree and the system
workload. During migration, the V-Tree is fully available for user I/O. V-Tree migration is done
volume block by volume block. When a single block has completed its migration, the capacity of
the block at the source becomes available, and it becomes active in the destination Storage Pool.
During migration, the V-Tree has some of its blocks active in the source Storage Pool and the
remaining blocks active in the destination Storage Pool.
Note: You can speed up the migration by adjusting the volume migration I/O priority (QoS). The default favors applications, with one concurrent I/O and 10 MB/sec per device. Increasing the 10 MB/sec limit increases the migration speed in most cases. The maximum value that can be reached is 25 MB/sec. The faster the migration, the higher the impact might be on applications. To avoid significant impact, the value of concurrent I/O operations should not be increased.
When migrating from a Medium Granularity Storage Pool to a Fine Granularity Storage Pool,
volumes must be zero padded. For more information on zero padding, see "Storage Pools" in the
Getting to Know Guide.
You can pause a V-Tree migration at any time, in the following ways:
l Gracefully: To allow all data blocks currently being migrated to finish before pausing.
l Forcefully: To stop the migration of all blocks currently in progress.
Once paused, you can choose to resume the V-Tree migration, or to roll back the migration and
have all volume blocks returned to the original Storage Pool.
V-Tree migration can also be paused internally by the system. System pauses happen when a
rebuild operation begins at either the source or destination Storage Pool. If the migration is paused
due to a rebuild operation, it remains paused until the rebuild ends. If the system encounters a
communication error that prevents the migration from proceeding, it pauses the migration and
periodically tries to resume it. After a configurable number of attempts to resume the migration,
the migration remains paused and no additional retries will be attempted. You can manually resume
migrations that were internally paused by the system.
Concurrent V-Tree migrations are allowed in the system. These migrations are prioritized by the
order in which they were invoked, or by manually assigning the migration to the head or the tail of
the migration queue. You can update the priority of a migration while it is being run. The system
strives to adhere to the priority set by the user, but it is possible that volume blocks belonging to
migrations lower in priority are run before ones that are higher in priority. This can happen when a
Storage Pool that is involved in migrating a higher priority block is busy with other incoming
migrations and the Storage Pools involved in lower priority migrations are available to run the
migration.
5. Optionally, expand Advanced to select one or several of the following advanced options:
Option Description
Add migration at the head of the migration queue: Give this vTree migration the highest priority in the migration queue.
Ignore destination capacity: Allow the migration to start regardless of whether there is enough capacity at the destination.
Enable compression: Compression is done by applying a compression algorithm to the data. For more information on compression mode, see "V-Trees" in the Getting to Know guide.
Convert vTree from...: Convert a thin-provisioned vTree to thick-provisioned, or vice versa, at the destination, depending on the provisioning of the source volume.
Note: SDCs with a version earlier than v3.0 do not fully support converting a thick-provisioned vTree to a thin-provisioned vTree during migration; after migration, the vTree will be thin-provisioned, but the SDC will not be able to trim it. These volumes can be trimmed by unmapping and then remapping them, or by rebooting the SDC. The SDC version will not affect capacity allocation, and a vTree converted from thick to thin provisioning will be reduced in size accordingly in the system.
Save current vTree provisioning state during migration: The provisioning state is returned to its original state before the migration took place.
Procedure
1. From Frontend > Volumes > V-Tree Migration view or V-Tree Capacity Utilization view,
drill down to the relevant volume.
2. Right-click on the volume and select Set V-Tree Compression Mode.
The Set V-Tree Compression Mode dialog box is displayed.
3. Select the Enable Compression check-box.
4. Click OK.
5. In the Are you sure? dialog box, click OK.
6. Click Close.
Results
Compression mode is enabled for the V-Tree.
Snapshot Policies
Snapshot Policies enable you to define policies where you can configure the number of snapshots
to take at a given time for one or more volumes.
The snapshots are taken according to the rule defined. You can define the time interval between two rounds of snapshots, as well as the number of snapshots to retain, in a multi-level structure. For example, take snapshots every x minutes/hours/days/weeks. There are one to six levels, with the first level having the most frequent snapshots.
Example:
Rule: Take snapshots every 60 minutes
Retention Levels:
l 24 snapshots
l 7 snapshots
l 4 snapshots
After defining the parameters, you must then select the source volume to add to the Snapshot
Policy. You can add multiple source volumes to a Snapshot Policy, but a specific volume can only
be the source volume of a single policy. Only one volume per VTree may be used as a source
volume of a policy (any policy). For more information on V-Trees, see "Snapshots" in the Getting
to Know Guide.
When you remove the source volume from the policy, you must choose how to handle auto-snapshots. Snapshots created by the policy are referred to as auto-snapshots. Your selection depends on whether there are locked auto-snapshots.
l If the source volume has no auto snapshots, it doesn’t matter if you select Detach auto
snapshots or Remove auto snapshots.
l If the source volume has auto snapshots but none of them are locked, you can choose to
detach all snapshots. They become regular snapshots as if the user created them manually. If
you select Remove auto snapshots, they are deleted.
l If the source volume has locked auto snapshots, you can choose to detach all snapshots. They
become regular snapshots, as if the user created them manually. If you remove them, those
that are not locked are removed, while the auto snapshots which are locked are detached.
Procedure
1. From Frontend > Volumes, select the Snapshot Policy view.
2. Click Add New Policy.
The New Policy dialog box is displayed.
3. Enter a name in the Policy Name box.
4. Select the Create Paused Policy check box to put policy on hold.
5. Enter the time interval to take snapshots in the Take snapshot every box.
6. Enter the number of snapshots to keep according to the time interval defined in the
Retention Levels box.
7. Click Create.
8. Select one or more volumes to add to the snapshot policy.
9. Click Add and then Close.
Lock/unlock auto-snapshots
You can lock/unlock auto-snapshots.
About this task
Procedure
1. From Frontend > Volumes, select the Snapshot Policy view.
2. Select the relevant policy and from the Command menu or context-sensitive menu, select
View/Edit Policy details.
3. From the Source Volumes pane, click the Auto Snapshots grouped by Storage Pools or
the Auto Snapshots grouped by time button .
4. Drill down to the relevant snapshot and then right-click and select Lock Snapshot or
Unlock Snapshot.
The Lock/Unlock Auto Snapshots dialog is displayed.
5. Click OK to confirm you would like to lock/unlock the selected Snapshots.
2. Right-click the object and select Set Performance Profile for XXX, where XXX represents
one of the following:
l MDMs
l All SDSs
l SDS
l All SDCs
l SDC
Procedure
1. From the Storage Pools screen, click Actions > Create volume.
The Create Volume dialog appears.
2. Enter the following information:
l Volume name: Enter a name for the new volume.
l Number of volumes to create: Enter the number of volumes to create. Multiple volumes
appear as volume_name-X.
l Volume size: Enter the size of the volume. This must be in multiples of 8 GB.
l Volume provisioning: Select thick or thin provisioning.
l Use RAM Read Cache: Select to enable RAM Read Cache for the created volumes. Use
of RAM Read Cache is determined by the policy for the Storage Pool and the volume.
l Obfuscation: Select whether the volume should be obfuscated.
Map volumes
About this task
Manually map volumes after they have been created, from the Volumes screen.
Procedure
1. From the Volumes screen, select a volume to map, then choose Actions > Map a volume.
2. In the Map Volume to ESXs dialog box, select the clusters or ESXis to which this volume
should be mapped.
3. To configure the LUN identifier manually, select Manually configure LUN identifier to and
enter the identifier ID.
4. Click OK.
Unmap a volume
About this task
You can use the VxFlex OS plug-in to unmap a volume from an ESXi.
Procedure
1. From the Volumes screen, select the volume to unmap, and click Actions > Unmap volume.
2. In the Unmap Volume from ESXis dialog box, select the ESXis or clusters from which to
unmap the volume, then click OK.
Install the SDC on an ESXi server and connect it to VxFlex OS using esxcli
Install the SDC with the appropriate parameters to connect it to an existing VxFlex OS system.
This procedure is relevant both for adding more SDCs to an existing system, and for adding SDCs
to a 2-layer system during initial deployment activities.
Before you begin
Ensure that you have:
l The virtual IP address or MDM IP address of the existing system. If an MDM virtual IP address
is not in use, obtain the IP addresses of all the MDM managers.
l Login credentials for the intended SDC host
l The required installation software package for your SDC's operating system (available from the
zipped software packages that can be downloaded from the Customer Support site)
l A GUID string for the SDC. These strings can be generated by tools that are freely available
online. The GUID needs to conform to OSF DCE 1.1. The expected format is xxxxxxxx-xxxx-
xxxx-xxxx-xxxxxxxxxxxx where each x can be a digit (0–9) or a letter (a–f).
About this task
The following procedure explains how to manually install an external SDC on an ESXi server using
esxcli in command line. Alternatively, you can install the external SDC using the vSphere VxFlex OS
plug-in.
Note: This procedure requires two server reboots.
Procedure
1. On the ESXi on which you are installing the SDC, set the acceptance level:
where <SERVER_NAME> is the ESXi on which you are installing the SDC.
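A sketch of the acceptance-level command, assuming the standard esxcli syntax (the PartnerSupported level shown here is an assumption; verify the level required for your SDC package):
esxcli --server <SERVER_NAME> software acceptance set --level=PartnerSupported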
2. Install the SDC:
where
l <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP addresses or the
virtual IP address of the MDM
l <XXXXXX> is the user-generated GUID string
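A sketch of the installation commands, assuming the standard esxcli syntax and the module parameter names commonly used for the scini driver (the offline bundle path and parameter names are assumptions; the module parameters are typically set between the two reboots mentioned in this procedure):
esxcli software vib install -d <full_path_to_SDC_offline_bundle.zip>
esxcli system module parameters set -m scini -p "IoctlIniGuidStr=<XXXXXX> IoctlMdmIPStr=<LIST_VIP_MDM_IPS>"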
OS Modifications required
Ubuntu/hLinux/OL: Before installing VxFlex OS, ensure that you have followed the required preparation procedures relating to various types of Linux operating systems.
CoreOS: Before installing VxFlex OS, ensure that you have followed the required preparation procedures relating to various types of Linux operating systems.
l VxFlex OS component CoreOS packages are delivered as TAR files. Before installing, perform the following:
Procedure
1. Install the GPG key on every server on which SDC will be installed. From the VxFlex OS
installation folder, run the following command on every server:
l CoreOS
MDM_IP=<LIST_VIP_MDM_IPS> ./<SDC_PACKAGE_FILE_NAME>.bsx
where
l <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP addresses or the
virtual IP address of the MDM
Results
The SDC is installed on the Linux server and is connected to VxFlex OS.
After you finish
In newly deployed systems, perform the post-deployment tasks described in this guide. It is highly
recommended to run the VxFlex OS system analysis tool to analyze the system immediately after
deployment, before you provision volumes, and before using the system in production.
In existing systems, map volumes to the new SDCs that you added to the system.
where
l <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP addresses or the
virtual IP address of the MDM.
l <SDC_PATH> is the path where the SDC installation package is located.
The SDC package is in a format similar to this: EMC-ScaleIO-sdc-3.0-
X.<build>.aix7.aix7.2.ppc.rpm
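The installation command itself is not shown above. Assuming the AIX package follows the same MDM_IP environment-variable convention as the Linux SDC packages, it would look roughly like the following (the package file name is illustrative):
MDM_IP=<LIST_VIP_MDM_IPS> rpm -i <SDC_PATH>/EMC-ScaleIO-sdc-3.0-X.<build>.aix7.aix7.2.ppc.rpm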
Results
The SDC is installed on the AIX server and is connected to VxFlex OS.
After you finish
In newly deployed systems, perform the recommended post-deployment tasks described in this
guide. It is highly recommended to run the VxFlex OS system analysis tool to analyze the system
immediately after deployment, before you provision volumes, and before using the system in
production.
In existing systems, map volumes to the new SDCs that you added to the system.
Procedure
1. On the Windows server on which you are installing the SDC, run the installation command
from the command line (a sketch of the command format appears after this procedure):
where
l <SDC_PATH> is the path where the SDC installation package is located
l <LIST_VIP_MDM_IPS> is a comma-separated list of the MDM IP addresses or the
virtual IP address of the MDM
2. Get permission to reboot the Windows server, and perform a reboot to load the SDC driver
on the server.
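A sketch of the installation command referenced in step 1, assuming the standard msiexec syntax used for the Windows SDC package (the package file name is illustrative; verify it against your downloaded package):
msiexec /i <SDC_PATH>\EMC-ScaleIO-sdc-3.0-X.<build>-x64.msi MDM_IP=<LIST_VIP_MDM_IPS>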
Results
The SDC is installed on the Windows server and is connected to VxFlex OS.
After you finish
In newly deployed systems, perform the post-deployment tasks described in this guide. It is highly
recommended to run the VxFlex OS system analysis tool to analyze the system immediately after
deployment, before you provision volumes, and before using the system in production.
In existing systems, map volumes to the new SDCs that you added to the system.
SDC operations
Many SDC operations use drv_cfg, a local command-line utility that affects only the client on
which the SDC is running. Possible SDC operations include updating the SDC driver
with IP changes, detecting new volumes, querying volumes, loading a configuration file, adding an
MDM, modifying an MDM IP address, enabling support of PDL state, and more.
Note:
On ESXi, GUID and MDM lists are stored as module parameters, and not in a
drv_cfg.txt file. To modify these parameters, use esxcli commands.
/etc/init.d/scini restart
Command
drv_cfg --rescan
Note:
This is not a CLI command, but rather an executable that is run on the SDC server.
Syntax
/opt/emc/scaleio/sdc/bin/drv_cfg --rescan
Description/Notes
Volumes are always exposed to the operating system as devices with the prefix scini (such
as /dev/scinia, /dev/scinib and so on). Unique names can be found under /dev/disk/by-
id/.
VxFlex OS periodically scans the system to detect new volumes. You can initiate a scan for the
most up-to-date status on a particular SDC node. This command is unique because it is not a CLI
command, but rather a command issued on the specific SDC.
Location of drv_cfg command:
l Linux: /opt/emc/scaleio/sdc/bin/drv_cfg
l Windows: C:\Program Files\emc\scaleio\sdc\bin\drv_cfg
l ESXi: refer to the vSphere Client VMWare guidelines on how to detect new storage. If
troubleshooting is needed, contact customer support.
For further details on how to set the mounting options, see "Mount VxFlex OS".
Command
drv_cfg --query_vols
Note:
This is not a CLI command, but rather an executable that is run on the SDC server.
Syntax
/opt/emc/scaleio/sdc/bin/drv_cfg --query_vols
Description/Notes
This utility retrieves information about all known active volume objects in kernel mode. You can use
this utility to determine which volumes are mapped, and the ID of each volume in VxFlex OS.
Location of drv_cfg command:
l Linux: /opt/emc/scaleio/sdc/bin/drv_cfg
l Windows: C:\Program Files\emc\scaleio\sdc\bin\drv_cfg
l ESXi: Refer to "Update the SDC parameters in VMware based HCI or Compute node".
Example
/opt/emc/scaleio/sdc/bin/drv_cfg --query_vols
Command
drv_cfg --query_tgts
Note:
This is not a CLI command, but rather an executable that is run on the SDC server.
Syntax
/opt/emc/scaleio/sdc/bin/drv_cfg --query_tgts
Description/Notes
This utility retrieves information about all known active tgt objects (SDSs) in kernel mode.
Location of drv_cfg command:
l Linux: /opt/emc/scaleio/sdc/bin/drv_cfg
l Windows: C:\Program Files\emc\scaleio\sdc\bin\drv_cfg
l ESXi: Refer to "Update the SDC parameters in VMware based HCI or Compute node".
Example
/opt/emc/scaleio/sdc/bin/drv_cfg --query_tgts
Command
drv_cfg --query_guid
Note:
This is not a CLI command, but rather an executable that is run on the SDC server.
Syntax
/opt/emc/scaleio/sdc/bin/drv_cfg --query_guid
Description/Notes
This utility retrieves the unique ID of the kernel module. The utility can be used to verify that all
SDC GUIDs in the system are unique.
Location of drv_cfg command:
l Linux: /opt/emc/scaleio/sdc/bin/drv_cfg
l Windows: C:\Program Files\emc\scaleio\sdc\bin\drv_cfg
l ESXi: Refer to "Update the SDC parameters in VMware based HCI or Compute node".
Note:
If the SDC was removed and reinstalled, the GUID of the SDC will be different from its original
GUID. In such a case, you may need to remove the SDC, if two SDCs now have the same
GUID.
Example
/opt/emc/scaleio/sdc/bin/drv_cfg --query_guid
Command
drv_cfg --query_mdms
Note:
This is not a CLI command, but rather an executable that is run on the SDC server.
Syntax
/opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms
Description/Notes
This utility retrieves information about all known MDM objects in kernel mode. This utility is
typically used to determine to which MDM an SDC is connected.
Location of drv_cfg command:
l Linux: /opt/emc/scaleio/sdc/bin/drv_cfg
l Windows: C:\Program Files\emc\scaleio\sdc\bin\drv_cfg
l ESXi: Refer to "Update the SDC parameters in VMware based HCI or Compute node".
Example
/opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms
Command
drv_cfg --load_cfg_file
Note:
This is not a CLI command, but rather an executable that is run on the SDC server.
This command cannot be used on ESXi servers. Instead, follow the steps described in "Modify
parameters on ESXi servers".
Syntax
/opt/emc/scaleio/sdc/bin/drv_cfg
--load_cfg_file <FILE_NAME>
Description/Notes
This utility reads a configuration file containing MDM IP addresses, and calls the kernel to connect
to them.
Location of drv_cfg command:
l Linux: /opt/emc/scaleio/sdc/bin/drv_cfg
l Windows: C:\Program Files\emc\scaleio\sdc\bin\drv_cfg
l ESXi: Refer to "Update the SDC parameters in VMware based HCI or Compute node".
The configuration file that is loaded when using the drv_cfg --load_cfg_file utility is not
persistent; when you restart the SDC, the changes will be lost.
To make the changes persistent, update the MDM IP addresses on every server that exposes
VxFlex OS volumes to applications, by running the following command with the --file parameter:
/opt/emc/scaleio/sdc/bin/drv_cfg --mod_mdm_ip
--ip <EXISTING_MDM_IP_ADDRESS> --new_mdm_ip
<NEW_MDM_IP_ADDRESSES> --file <CONFIG_FILE_NAME>
Example
/opt/emc/scaleio/sdc/bin/drv_cfg
--load_cfg_file /etc/emc/scaleio/drv_cfg.txt
Command
drv_cfg --add_mdm
Note:
This is not a CLI command, but rather an executable that is run on the SDC server.
This command cannot be used on ESXi servers. Instead, follow the steps described in
"Modifying parameters on ESXi servers" in the VxFlex Ready Node AMS User Guide.
Syntax
/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm
--ip <MDM_IP_ADDRESS_LIST>
Description/Notes
This utility calls the kernel module to connect to an MDM. This command is typically used in cases
where an SDC is connected to more than one VxFlex OS system.
Location of drv_cfg command:
l Linux: /opt/emc/scaleio/sdc/bin/drv_cfg
l Windows: C:\Program Files\emc\scaleio\sdc\bin\drv_cfg
l ESXi: Refer to "Update the SDC parameters in VMware based HCI or Compute node".
Note:
Extending your VxFlex OS system with another MDM requires that you update all SDCs in your
system with the new MDM IP address. Run the drv_cfg utility with the --mod_mdm_ip
option (see "Modifying an MDM IP address using drv_cfg"), and to make the change
persistent, use the --file parameter. In addition, any additional objects or systems which
interface with the MDM must also be updated. For more information, see "Modifying an
MDM's management IP address" in the VxFlex OS CLI Reference Guide.
Parameters
Parameter Description
Optional:
Parameter Description
Example
/opt/emc/scaleio/sdc/bin/drv_cfg --add_mdm
--ip 10.100.22.20,10.100.22.30
--file /etc/emc/scaleio/drv_cfg.txt
Command
drv_cfg --mod_mdm_ip
Note:
This is not a CLI command, but rather an executable that is run on the SDC server. This
command cannot be used on ESXi servers. Instead, follow the steps described in "Modify
parameters in ESXi servers".
Syntax
/opt/emc/scaleio/sdc/bin/drv_cfg --mod_mdm_ip
--ip <EXISTING_MDM_IP_ADDRESS>
--new_mdm_ip <NEW_MDM_IP_ADDRESSES> [--file <CONFIG_FILE_NAME>]
["‑""‑"only_cfg]
Description/Notes
This utility calls the kernel to modify an MDM’s IP address list. It is typically used in cases when an
MDM IP address has changed, or when MDMs are added to or removed from the system. The
command must be run on every SDC in the system. To bring the changes into effect, a server
restart is required.
Location of drv_cfg command:
l Linux: /opt/emc/scaleio/sdc/bin/drv_cfg
l Windows: C:\Program Files\emc\scaleio\sdc\bin\drv_cfg
l ESXi: Refer to "Update the SDC parameters in VMware based HCI or Compute node".
Note:
Extending your VxFlex OS system with another MDM requires that you update all SDCs in your
system with the new MDM IP address.
Parameters
Parameter Description
Parameter Description
Optional:
Example
/opt/emc/scaleio/sdc/bin/drv_cfg --mod_mdm_ip
--ip 10.100.20.20
--new_mdm_ip 10.100.20.20,10.100.20.30,10.100.20.40
where:
l <<PREVIOUS_MODULE_PARAMS>> is any previous module parameters being used for
this ESXi host.
l <TIMEOUT_VALUE> is the timeout time in milliseconds. Its value can be 1000-3600000
(default is 60000) and including it in the command is optional.
ENV{ID_WWN}="%c"
defaults {
retain_attached_hw_handler "no"
}
You can configure and monitor the storage and devices in the VxFlex OS system.
Configuring storage
Add and remove storage devices, and configure storage features.
Configuring capacity
Add, remove, and configure capacity.
The following topics explain how to add, remove, activate, and inactivate capacity, activate
devices, clear device errors, and set device capacity limits.
The Dashboard Capacity tile, some Backend table views (such as Capacity Usage,
Configuration), and Property Sheets help you to better understand the amount of raw capacity
and net free capacity currently available in the system.
Add SDSs
SDSs and their devices can be added to a system one by one, or in bulk operations, using the Add
SDS command. You can associate up to eight IP addresses to each SDS. By default, performance
tests are performed on the added devices, and the results are saved in the system.
Before you begin
Ensure that at least one suitable Storage Pool is defined in the required Protection Domain. If you
want to add Acceleration devices now, ensure that at least one Acceleration Pool is defined in the
Protection Domain, as well.
All devices in a Storage Pool must be the same media type. Ensure that you know the type of
devices that you are adding to the system. Ensure that the Storage Pool to which you are adding
devices is configured to receive that media type.
About this task
Device data is erased when devices are added to an SDS. When adding a device to an SDS, VxFlex
OS will check that the device is clear before adding it. An error will be returned, per device, if it is
found not to be clear. A device that has been used in the past can be added to the SDS by using
the Force Device Takeover option. When this option is used, any data that was previously
saved on the device will be lost. For more information, see "Configuring direct attached storage" in
the Getting to Know VxFlex OS Guide.
Fields that contain orange exclamation marks are mandatory.
You can assign a name to the SDS, as well as to the devices. This name can assist in future object
identification. This can be particularly helpful for SDS devices, because the name will remain
constant, even if the path changes. SDS and device names must meet the following requirements:
l Contain fewer than 32 characters
l Contain only alphanumeric and punctuation characters
l Be unique within the object type
Note: Devices can be tested before going online. Various testing options are available in the
Advanced area of the window (default: Test and Activate).
Note: Acceleration settings can be configured later, using the Backend > Devices By Pools
view.
Note: You cannot enable zero padding after adding the devices. For more information, see
"Storage Pools" in the Getting to Know VxFlex OS Guide.
Procedure
1. In one of the Backend views, navigate to the desired Protection Domain, and select it.
2. From the Command menu or context-sensitive menu, choose the Add option, then Add
SDS.
roles in the relevant check boxes. Click the Add icon to add more rows to the table.
5. Use the New Devices table to add storage devices. Click the Add device icon to add rows to the table. You
must add at least one storage device to the new SDS at this stage. Add a new row for each
device, and enter the required parameters in each row.
You can add more devices later.
Note: If you want to add an SDS without any devices, you can do so using the CLI. To
run CLI commands, you must be logged in. Actual command syntax is operating-system
dependent. For more information, see the VxFlex OS CLI Reference Guide.
6. Optionally, add acceleration devices to the SDS, using the Acceleration Devices table. Click
the Add device icon to add rows to the table. Add a new row for each device, and enter the required
parameters in each row.
7. The Advanced option provides additional items, such as device testing, forced device
takeover, and Read RAM Cache acceleration options. Optionally, click its Expand button to
display and configure them (recommended for advanced users only).
8. Click OK.
The progress of the operation is displayed at the bottom of the dialog box. It is
recommended to keep it open until the operation is completed, and until you can see the
result of the operation.
9. Click Close.
If you chose the Test only advanced option, activate the devices as described in "Activate
devices".
Activate devices
Activate a device that was inactivated, or that was added to a system using the Test only
option.
About this task
Use the Activate Device command in the following situations:
l Storage devices were added to the system using the Test only option for Device Tests, and
successfully passed the tests.
l Storage devices were inactivated, and you want to bring them back online.
Procedure
1. In one of the Backend views, navigate to the device or devices in the table, and select the
corresponding rows.
2. Right-click and select Activate.
job queue are shown as Pending. If a job in the queue will take a long time, and you do not want to
wait, you can cancel the operation using the Abort button in the Remove command window (if you
left it open), or using the Abort command from the Command menu.
The Remove command deletes the specified objects from the system. Use the Remove command
with caution.
Procedure
1. In the Backend > Storage view, navigate to the desired object in the table, and select its
row.
2. Right-click the row and select the desired Remove command.
In the confirmation window, click OK. The progress of the operation is displayed at the
bottom of the window. It is recommended to keep the window open until the operation is
completed, and until you can see the result of the operation. For some objects, an Abort
button is available in the window, which can be used if you decide to abort the operation.
There is also an Abort command accessible from the Command menu.
3. Click Close.
Note: The Read RAM Cache features are advanced features, and it is usually
recommended to accept the default values. You can configure these features
later, if necessary, using the Configure Read RAM Cache command. For
more information about Read RAM Cache features, see "Managing RAM read
cache".
b. Select the required Write Handling Mode: Cached or Passthrough.
l Fine Granularity
a. Select the relevant option from the Acceleration Pool list.
b. Select the Enable Compression check box, to enable compression.
6. Select the Use Inflight Checksum checkbox to enable validation of the checksum value of
in-flight data reads and writes.
7. Click OK.
if no capacity is added or removed. For example, during a recovery from an SDS or device
failure, some rebalance activity may be needed to ensure optimal balancing.
To enable or disable Rebuild and Rebalance features, perform these steps:
Procedure
1. In the Backend > Storage view, navigate to, and select the desired Storage Pools.
2. Right-click the Storage Pool and select Enable/Disable Rebuild/Rebalance.
The Enable or Disable Rebuild and Rebalance window is displayed.
3. Select or clear the options that you require (selected=enable; clear=disable), and click OK.
Procedure
1. In one of the Backend views, navigate to the device in the table, and select its row.
2. Right-click the row and select Clear Device Errors.
To use Read RAM Cache, you need to configure settings at two levels:
l Storage Pool—controls Read RAM Cache for all the SDSs in the selected Storage Pool.
Caching can be enabled or disabled, and either Cached (default) or Passthrough Write
Handling modes can be selected. When Read RAM Cache is enabled in a Storage Pool, the
feature is enabled at Storage Pool level. However, caching must also be set to Enabled in each
SDS in the Storage Pool. Caching will only begin once storage devices have been added to the
SDSs. It is possible to enable RAM caching for a Storage Pool and then disable caching on one
or more SDSs individually.
l Per SDS—controls Read RAM Cache for one or more SDSs. Caching can be enabled or
disabled for the specified SDS, and the capacity allocated for caching on an SDS can be
specified. Caching will only begin after one or more storage devices are added to the SDSs.
Ensure that the feature is also enabled at Storage Pool level.
Note: By default, Read RAM Cache is disabled for all volumes. You can enable it per volume from
the Frontend > Volumes view.
Note: Only I/Os that are multiples of 4k bytes can be cached.
6. Click OK.
Procedure
1. In the Backend > Storage view, navigate to, and select the desired Storage Pools.
2. Right-click the Storage Pools and select Reset Background Device Scanner Counters.
The Reset Background Device Scanner Counters window is displayed. The right pane of
the window shows the Storage Pools that you are configuring.
3. Select or clear the option that you require, or both options (selected=enable; clear=disable).
4. Click OK.
Procedure
1. Perform one of the following:
Option Description
To configure counter parameters for all SDCs, Protection Domains, or Storage Pools in the
system: In the Backend > Storage view, select the System icon.
To configure counter parameters for a specific Protection Domain or Storage Pool: In the
Backend > Storage view, navigate to, and select the desired Protection Domain or
Storage Pool.
2. From the Command menu or context-sensitive menu, select Set Oscillating Failure
Properties.
3. Perform one of the following:
Option Description
For system level: In the For All box, select an option: SDCs, Protection Domains, or
Storage Pools.
For a Protection Domain or a Storage Pool: Go to the next step.
4. In the Counter Type box, select a counter. Options vary, depending on the item selected in
the previous step.
5. In the Window Type box, select an option for the sliding window interval: Short, Medium or
Long.
Option Description
If you want to remove the selected counter definition from the system: Select the Remove
the counter check box.
If you want to modify the threshold for the selected counter definition: Enter a number in
the fields for:
l failures (the maximum number of failures per time interval before reporting begins)
l seconds (the number of seconds per time interval)
7. Click OK.
The currently configured counter parameters are displayed in the corresponding Property
Sheet, in the Oscillating Failure Parameters section.
3. In the Policy structure page, under Datastore specific rules, select Enable rules for VxFlex
OS VVols storage.
4. In the VxFlex OS VVols rules page, define storage placement rules for the target VVols
datastore.
In the Placement tab, from the Tier drop-down menu, select the appropriate storage policy.
5. In the Storage compatibility page, review the list of Storage Pools that match the policy
you selected in the previous page.
6. In the Review and finish page, review the storage policy settings and click Finish.
Results
The new VM storage policy compatible with VVols appears on the storage policy list. You can now
associate this policy with a virtual machine, or designate the policy as default.
c. If you want to force entry into Maintenance Mode even though there is insufficient
space or degraded/failed capacity, select the corresponding check box:
l Force Insufficient Space—allow entry into maintenance mode, even without enough
available capacity
l Force Degraded or Failed—allow entry into maintenance mode, even with degraded
or failed data
d. Click OK.
The status area at the bottom of the window indicates when the operation is complete.
Once the SDS is in Maintenance Mode, this will be indicated both on the Dashboard, and in
Backend tables and Property Sheets, using the Maintenance Mode symbol and color code
(green).
2. To put an SDS back into regular service (cancel Maintenance Mode), perform these steps:
a. In the Backend > Storage view, navigate to, and select the desired SDS.
b. From the Command menu or context-sensitive menu, select Exit Maintenance Mode.
The Exit Maintenance Mode window is displayed.
c. If you want to force exit from Maintenance Mode even though there is a failed SDS,
select the Force Failed SDS check box.
d. Click OK.
The status area at the bottom of the window indicates when the operation is complete.
Once the operation has been successfully completed, the SDS returns to normal operation,
and data deltas collected on other SDSs during the maintenance period are copied back to
the SDS.
2. Right-click the Storage Pool and select Settings > Set I/O Priority.
3. Select Favor Application I/O for Rebalance, Rebuild and Migration, and click OK.
I/O prioritization
About this task
Priority can be given to different types of I/Os in the system. The number of concurrent Rebuild
and Rebalance jobs can be configured, and bandwidth for Rebalance jobs can be configured. You
can also control I/O priority for Migration. Refer to "V-Tree migration" for more information. If the
Dynamic Bandwidth Throttling option is selected, additional items can be configured, such
as Application IOPS threshold, Application bandwidth threshold, and
Application threshold quiet period. Default values for these features are provided in
the VxFlex OS CLI Reference Guide.
NOTICE These features affect system performance, and should only be configured by an
advanced user.
Configure I/O prioritization for Rebuild, Rebalance and Migration by performing these steps:
Procedure
1. In the Backend > Storage view, navigate to, and select the desired Storage Pool.
2. Right-click the Storage Pool and select Settings > Set I/O Priority.
3. Select the desired options and edit values, and click OK.
Configuring acceleration
VxFlex OS supports different types of acceleration to enhance storage performance. Depending
on your system, you can configure VxFlex OS for acceleration using NVDIMM, RFcache or
RMcache.
Procedure
1. In the Backend > Devices view, select the required Protection Domain.
2. Right-click the Protection Domain and select Add Acceleration Pool.
The Add Acceleration Pool window is displayed.
3. Enter a name in the Acceleration Pool Name box.
4. Select a pool type.
l For Fine Granularity data layout, select NVDIMM. You must have at least one NVDIMM
installed in order to select this option.
l For Medium Granularity data layout, select SSD. You must have at least one SSD
installed that can be used for the RFcache feature in order to select this option.
5. Click Add Devices and then click the Add device icon to add a row to the New Devices
table.
6. Enter the following information in the relevant row of the table:
l In the Path cell, enter the location of the acceleration device
l In the Name cell, enter the name of the acceleration device
l From the SDS drop-down list, select the relevant SDS
8. If you want to add more devices, click the Add device icon again and configure the fields in
the new row.
9. Click OK.
Results
The Acceleration Pool has been created, and acceleration devices have been added to it.
After you finish
For RFcache Acceleration Pools, ensure that caching is enabled, using the Configure Caching >
Set Read Flash Cache Policy command. This feature can be enabled at Protection Domain,
Storage Pool, or SDS level.
When adding NVDIMMs to a node, the enumeration of the DAX devices may change in the node.
When adding a new NVDIMM DAX path to the acceleration pool, it fails with the error:
Also, if the current SDS device includes the name field, you must update it to match the new path
so that it is reflected in the GUI. For example:
The following steps describe how to update from two to four NVDIMMs:
1. Gracefully shut down the node (place the SDS in maintenance mode and move the application
workload to another node).
2. Add two NVDIMMs.
3. Boot up the node.
4. Log in and run Exit SDS Maintenance Mode from the GUI or CLI.
5. Run the following command:
6. For each existing NVDIMM DAX mount point run the command:
7. Run ndctl to create the DAX device. Repeat for each new device.
8. Add the /dev/daxX.X device name to the Acceleration Pool.
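As a rough illustration of step 7 only (assuming ndctl is installed and that the new NVDIMMs map to regions region2 and region3; these region names are examples, not values from this guide):
ndctl create-namespace --mode=devdax --region=region2
ndctl create-namespace --mode=devdax --region=region3
Each command prints the namespace and the chardev (daxX.X) that was created; use that /dev/daxX.X path in step 8.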
Procedure
1. From Backend > Devices, select an SDS.
2. At the top right side of the window, ensure that the display is set to By Pools.
3. Expand the desired Storage Pool, right-click the SDS where the device is installed, and
select Add Acceleration Device.
The Add acceleration device to SDS dialog box is displayed.
4. In the table, add the following information:
l In the Path cell, enter the location of the acceleration device
6. If you want to add more devices, click the Add device icon again and configure the fields in
the new row.
7. Click OK.
Serial Number
Namespace
Locator: A7
Serial Number: 17496594
Locator: B7
Serial Number: 174965AC
3. Find the serial number in the output and record it in the NVDIMM information table.
4. Display the correlation between the ID and NMEM device name of each NVDIMM mounted
on the server:
{
"dev": "nmem1",
"id": "802c-0f-1722-174965ac",
"handle": 4097,
"phys_id": 4370,
"health": {
"health_state": "ok",
"temperature_celsius": 255,
"life_used_percentage": 30
}
}
{
"dev": "nmem0",
"id": "802c-0f-1722-17496594",
"handle": 1,
"phys_id": 4358,
"health": {
"health_state": "ok",
"temperature_celsius": 255,
"life_used_percentage": 30
}
}
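A listing like the one above can typically be produced with the ndctl DIMM query; this is shown here as an assumption, and the exact command used in this procedure may differ:
ndctl list --dimms --health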
5. In the output from the previous step, find the device (dev) with the id that partially
correlates with the serial number you discovered previously for the failed device.
For example:
l The NVDIMM output displays serial number 16492521 for the NVDIMM device.
l In the previous step, the output displays the ID of device nmem0 as
802c-0f-1746-802c-0f-1711-16492521.
6. Record the NMEM name in the Device name row of the NVDIMM information table.
7. Correlate between the NMEM DIMM and the namespace/DAX device:
{
"dev": "nmem1",
"id": "802c-0f-1722-174965ac",
"handle": 4097,
"phys_id": 4370,
"health": {
"health_state": "ok",
"temperature_celsius": 255,
"life_used_percentage": 30
}
}
{
"dev": "nmem0",
"id": "802c-0f-1722-17496594",
"handle": 1,
"phys_id": 4358,
"health": {
"health_state": "ok",
"temperature_celsius": 255,
"life_used_percentage": 30
}
}
{
"dev": "namespace1.0",
"mode": "raw",
"size": 17179869184,
"sector_size": 512,
"blockdev": "pmem1",
"numa_node": 1
}
{
"dev": "namespace0.0",
"mode": "raw",
"size": 17179869184,
"sector_size": 512,
"blockdev": "pmem0",
"numa_node": 0
}
8. In the output displayed in the previous step, locate the namespace that correlates with the
NMEM name and DIMM serial number, and record it in the NVDIMM information table.
In the above example, nmem0's namespace is namespace0.0.
9. Destroy the default namespace that was created for the replacement NVDIMM, using the
namespace discovered in the previous step:
For example, if the replacement NVDIMM maps to namespace0.0, the command is:
10. Create a new, raw nmem device using the region associated with namespace of the failed
device, as recorded in the NVDIMM information table:
For example, if the NVDIMM you replaced mapped to region 0, the command is:
11. Convert the namespace device to the acceleration device name of type /dev/daxX.X:
or
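As a hedged sketch of steps 9 through 11, using the namespace0.0 and region0 values from the example above (the guide lists two alternative conversion commands for step 11; one common ndctl form is shown here as an assumption, so verify against your ndctl version before running):
ndctl destroy-namespace namespace0.0 --force
ndctl create-namespace --mode=raw --region=region0
ndctl create-namespace --mode=devdax --force --reconfig=namespace0.0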
12. Record the acceleration device name in the NVDIMM information table.
13. Run the namespace-to-dax-device correlation command to find the DAX device name
of the replacement NVDIMM:
{
"dev": "namespace1.0",
"mode": "devdax",
"map": "dev",
"size": 16909336576,
"uuid": "c59d6a2d-7eeb-4f32-b27a-9960a327e734",
"daxregion": {
"id": 1,
"size": 16909336576,
"align": 4096,
"devices": [
{
"chardev": "dax1.0",
"size": 16909336576
}
]
},
"numa_node": 1
}
{
"dev": "namespace0.0",
"mode": "devdax",
"map": "dev",
"size": 16909336576,
"uuid": "eff6429c-706f-469e-bab4-a0d34321c186",
"daxregion": {
"id": 0,
"size": 16909336576,
"align": 4096,
"devices": [
{
"chardev": "dax0.0",
"size": 16909336576
}
]
},
"numa_node": 0
}
The DAX device name appears in the output as the chardev value.
In the example output above, the DAX device name is dax0.0.
14. Record the DAX device name in the NVDIMM information table.
15. Find the full acceleration device path:
/dev/daxX.X
For example:
/dev/dax0.0
16. Record the acceleration device path in the NVDIMM information table.
Results
You are now ready to add the DAX device to the NVDIMM Acceleration Pool.
where:
<USERNAME> is the user name used to query the MDM
<PASSWORD> is the user's password
<CLI_BIN> is the location of the VxFlex OS CLI on the server
<MDM_IP_ADDRESSES> is a comma-separated list of MDM IP addresses
(This message may appear several times, depending on the number of SDSs with
insufficient NVDIMM capacity.)
l System is ready for upgrade
If the output is System is ready for upgrade, no further actions are required now.
Your system contains enough NVDIMM capacity to support Fine Granularity storage
acceleration in future software versions.
If the output is Your system has insufficient NVDIMM capacity on SDS
{XXX} to support future version upgrades. The required total NVDIMM
capacity for the upgrade is {YYY}. Contact your account manager for
more information., continue to the next step.
3. Use one of the following methods to determine which SDSs need more NVDIMM capacity.
Option Procedure
CLI a. Prepare a list of the SDSs in your system. You can use the --query_all_sds
command to collect this information. For example:
scli --query_all_sds
b. Using the CLI, run the following command for every SDS that uses NVDIMM
acceleration for Fine Granularity storage:
For example:
c. If the Used value is greater than the Capacity value, as shown in the output
example above, more NVDIMM capacity is required in order to upgrade the
system.
d. Make a note of all the SDSs where more NVDIMM capacity is required.
GUI a. In the GUI, open the Monitor > Alerts view, and look for alerts for insufficient
NVDIMM capacity for future version upgrades.
b. Prepare a list of all the SDSs where these alerts occur.
2. Alternatively, you can calculate NVDIMM capacity and RAM capacity using the following
formulas:
Note:
The calculation is in binary MiB, GiB, and TiB
Backend tables and Property Sheets, using the Maintenance Mode symbol and color code
(green).
3. Select the SVM, and from the Basic Tasks pane select Shut down the virtual machine.
4. In the VxFlex OS GUI Alerts view, verify that you received an alert that the SDS is
disconnected. If the SVM is a cluster member, also verify that you received an alert that the
MDM cluster is being degraded.
2 31 GB
4 62 GB
6 93 GB
Note:
vCenter does not allow adding more than three virtual NVDIMM devices. If you are
expanding NVDIMM capacity, add a second NVDIMM device based on the rules
above.
d. If additional RAM is needed (see Calculate required NVDIMM and RAM capacity for FG
SDS on page 131), modify the memory size to match the necessary value.
e. Click OK.
2. In the vCenter client view, expand the server and select the Storage VM (SVM). Power-on
the SVM manually.
3. Using the VxFlex OS GUI, verify the following:
a. In the Monitor > Alerts view, verify that no SDS disconnect messages appear.
Note: After the SVM has powered on, it might take approximately 10-20 seconds for
the SDS to power up and remove the disconnection alerts.
b. If the node was an MDM cluster member, in the Dashboard > Management, verify that
the cluster is no longer degraded and that no alert on a degraded cluster appears.
Note: If "could not connect to HOST" alerts appear, wait a few minutes for the alerts
to disappear.
4. In the Backend > Storage > By SDSs view, right-click the SDS and select Exit
Maintenance Mode.
5. In the Action window, click OK.
6. Wait for the rebuild/rebalance operations to complete.
The SVM is now operational and you can add the NVDIMM capacity using the next set of
steps.
7. Create a namespace on the NVDIMM:
a. Connect using SSH to the SVM.
b. Run the following:
c. Perform these steps for creating a namespace for every node with an NVDIMM device.
8. Create an Acceleration Pool for the NVDIMM devices:
a. Connect using SSH to the Master MDM.
b. Use the SCLI to create the Acceleration Pool:
c. For each SDS with NVDIMM, add the NVDIMM devices to the Acceleration Pool:
9. Create a Storage Pool for SSD devices accelerated by NVDIMM Acceleration Pool with Fine
Granularity data layout:
a. Connect using SSH to the Master MDM and run the following SCLI command:
10. Add SSD devices to the Fine Granularity Storage Pool that you created.
11. Set the Spare Capacity for the Fine Granularity Storage pool based on the number of nodes
of equal capacity, allowing for at least one node to fail.
Ten nodes of 20-TB SSD capacity each use a 10% Spare policy.
3. Double-click the icon of the failed memory device to display more information about the
failed NVDIMM.
5. Record the slot number of the failed NVDIMM device in the NVDIMM information table.
Note: If the slot number points to a regular DIMM as faulty, use the DIMM replacement
procedure instead.
6. From the Dell console main window, select System > Inventory > Hardware Inventory.
7. Expand the entry for the relevant DIMM.
The console displays information regarding the DIMM you identified in the previous steps.
The DIMM's PrimaryStatus should appear as Degraded.
8. Using SSH, log in to the Linux server.
9. View information for the faulty DIMM:
Locator: A7
Serial Number: 16492521
Locator: B7
Serial Number: 1649251B
The example output displays the DIMM's Type Detail as Non-Volatile, signifying that it is an
NVDIMM. The output also displays the NVDIMM serial numbers.
10. In the command output, find the Locator and Serial Number, and record their values in
the NVDIMM information table.
11. Display the list of DIMMs mounted on the server:
[
{
"dev": "nmem1",
"id": "802c-0f-1711-1649251b",
"handle": 4097,
"phys_id": 4370,
"state": "disabled",
"health": {
"health_state": "ok",
"temperature_celsius": 255,
"life_used_percentage": 32
}
},
{
"dev": "nmem0",
"id": "802c-0f-1711-",
"handle": 1,
"phys_id": 4358,
"state": "disabled",
"health": {
"health_state": "ok",
"temperature_celsius": 255,
"life_used_percentage": 32
}
}
]
12. In the output from the previous step, find the device (dev) with the id that partially
correlates with the serial number you discovered previously for the failed device.
For example:
l The NVDIMM output displays serial number 16492521 for the NVDIMM device.
l In the previous step, the output displays the ID of device nmem0 as
802c-0f-1746-802c-0f-1711-16492521.
13. Record the NMEM name in the Device name row of the NVDIMM information table.
14. To help correlate nmem mapping, region, and namespace configuration information, enter:
{
"dev": "region1",
"size": 17179869184,
"available_size": 0,
"max_available_extent": 0,
"type": "pmem",
"numa_node": 1,
"mappings": [
{
"dimm": "nmem1",
"offset": 0,
"length": 17179869184,
"position": 0
}
],
"persistence_domain": "unknown",
"namespaces": [
{
"dev": "namespace1.0",
"mode": "devdax",
"map": "dev",
"size": 16909336576,
"uuid": "0a438fbc-91e4-427d-8068-1f26330d85cc",
"daxregion": {
"id": 1,
"size": 16909336576,
"align": 4096,
"devices": [
{
"chardev": "dax1.0",
"size": 16909336576
}
]
},
"numa_node": 1
}
]
}
{
"dev": "region0",
"size": 17179869184,
"available_size": 0,
"max_available_extent": 0,
"type": "pmem",
"numa_node": 0,
"mappings": [
{
"dimm": "nmem0",
"offset": 0,
"length": 17179869184,
"position": 0
}
],
"persistence_domain": "unknown",
"namespaces": [
{
"dev": "namespace0.0",
"mode": "devdax",
"map": "dev",
"size": 16909336576,
"uuid": "38cbd555-3f5b-4f4f-8d83-bf77db75553d",
"daxregion": {
"id": 0,
"size": 16909336576,
"align": 4096,
"devices": [
{
"chardev": "dax0.0",
"size": 16909336576
}
]
},
"numa_node": 0
}
]
}
The order of the list in the output reflects the order for correlating between an NMEM
DIMM and a namespace/DAX device. For example, in the above output:
l The first block of devices is the DIMMs. Under each DIMM, there is an NMEM device
grouping. In the example output, the device groupings are nmem1 and nmem0. Each
bracket contains a device.
l Each grouping also displays the region corresponding to the NMEM device group. In the
example output, the regions are region1 and region0.
l The order of the NMEM devices correlates to the namespace grouping that follows, in
which nmem1 correlates to namespace1.0 and nmem0 correlates to namespace0.0.
l The output displaying the namespace also includes the DAX device name (chardev),
which is displayed as daxX.X.
15. In the output from the previous step, locate the namespace and subsequent DAX device
name (chardev) that correlates with the NMEM and DIMM serial number displayed in the
output in Step 11.
In the above example, where nmem0's namespace is namespace0.0, the DAX device name is
dax0.0.
16. Record the device's region, namespace and DAX device name in the NVDIMM information
table.
Results
You have discovered the region, namespace and DAX device name for the storage devices that
interact with the failed NVDIMM (or, in the case of a failed NVDIMM battery, all NVDIMMs
mounted on the server). You can now remove these storage devices from the NVDIMM
Acceleration Pool and FG Storage Pool.
Remove the storage devices from VxFlex OS in a Linux system
Remove the storage devices that interact with the failed NVDIMM from the relevant VxFlex OS FG
Storage Pool and Acceleration Pool in a Linux-based system.
Before you begin
Ensure that you have admin rights for accessing the VxFlex OS GUI. If necessary, the customer
can give you the credentials.
About this task
Note: In the following task, the term "failed NVDIMM" also refers to all NVDIMMs mounted on
the server in cases of a failed NVDIMM battery or failed system board.
Procedure
1. Log in to the VxFlex OS GUI as an admin user.
2. Go to Backend > Storage > Acceleration view.
3. Expand the SDSs in the Protection Domains.
4. In the Accelerated On column, find the FG Storage Pool that contains the DAX devices you
discovered in the previous task and recorded in the NVDIMM information table.
For example, if you discovered DAX device name dax0.0, in the Accelerated On column look
for /dev/dax0.0.
In the following image, /dev/dax0.0 is located in FG Storage Pool sp2_FG. It is also located
in Acceleration Pool accp_for_sp2_NVDIMM.
5. Record the name of the FG Storage Pool, the Acceleration Pool, the storage devices, and
the DAX devices in the relevant rows in the NVDIMM information table.
In the above example, the information you need to record is:
l Storage Pool: sp2_FG
l Acceleration Pool: accp_for_sp2_NVDIMM
l Storage devices: /dev/sdb, /dev/sdc
l Acceleration devices: /dev/dax0.0, /dev/dax1.0.
6. Remove the storage devices you identified in the previous step from the relevant Storage
Pool:
Note: Ensure that you remove only the storage devices impacted by the failed NVDIMM.
a. Navigate to Backend > Devices view and find the SDS with the storage devices you
identified in the previous step.
b. Right-click the storage devices you identified in the previous step, select Remove, and
then click OK.
Note: You can select all the relevant storage devices and remove them
simultaneously.
A confirmation message appears when the process is complete, and a rebuild/rebalance
operation may be triggered.
c. In GUI Dashboard view, wait until the rebuild/rebalance operation is complete and all
counters are at 0.
7. Remove the acceleration devices corresponding to the failed NVDIMM from the relevant
Acceleration Pool:
a. In Backend > Devices view in the relevant Storage Pool, right-click the acceleration
devices you identified previously, select Remove, and then click OK.
A confirmation message appears when the process is complete.
8. If you are replacing the NVDIMM battery, perform the previous steps for every NVDIMM
module mounted on the server.
Results
The acceleration devices and storage devices associated with the faulty NVDIMM have been
removed from the Acceleration Pool and Storage Pool.
EMC-ScaleIO-xcache-x.x-x.0.slesxx.x.x86_64
rpm -i EMC-ScaleIO-xcache-x.x-x.0.slesxx.x.x86_64
6. Click OK.
7. Repeat the previous steps for every SDS on which you want to enable RFcache.
2. Open the following SDS file with a text editor, and change the port number shown there to
the new port number:
Operating System Task
Linux Run the script: /opt/emc/scaleio/sds/bin/open_firewall_port.sh
Windows From the command line, run the batch file: C:\Program Files\EMC\scaleio\sds\bin\open_firewall_port.bat
pkill sds
For example, for an SDS called "sds198" where the new port number is 7071, type:
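Assuming the standard scli command for updating an SDS port on the MDM (this command name and its parameters are an assumption; check the VxFlex OS CLI Reference Guide for the exact syntax in your version), the example would look roughly like:
scli --modify_sds_port --sds_name sds198 --new_sds_port 7071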
Note: If you modify the SDS port on the MDM first, instead of following the above
procedure, I/O errors might be encountered.
Configure VxFlex OS to use VMware's vSphere API for Storage Awareness (VASA).
vVols in VxFlex OS
When the VASA provider is installed in the VxFlex OS system, you can use and manage vVols using
VxFlex OS.
Once the VASA provider is registered, you can create vVols. For more information, see: https://
docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.storage.doc/
GUID-8D9CC612-6B5A-436D-BB63-67DDFED19747.html and https://docs.vmware.com/en/
VMware-vSphere/6.7/com.vmware.vsphere.storage.doc/GUID-8D9CC612-6B5A-436D-
BB63-67DDFED19747.html.
By default, vVols are not visible in the VxFlex OS GUI. In order to have the vVols displayed, you
must change the system preferences to show externally managed volumes. For more information
on changing system preferences, see Customize system preferences on page 212. Once the
externally managed volumes are visible, you can see the vVols listed in the VxFlex OS GUI
Frontend view. Note that management of the vVols from the VxFlex OS GUI is prohibited.
Logs are located in the following directory: /opt/emc/scaleio/vasa/logs/. For support, you
can use the script /opt/emc/scaleio/vasa/get_vasa_info.sh, which creates a tar.gz
archive in the /tmp directory. It includes the following diagnostic information on VASA:
2. In the vSphere Web Client, navigate to the Hosts and Clusters inventory object.
3. Locate the vCenter inventory object and select it.
l In vSphere 6.0: Click the Manage tab, and click Storage Providers.
l In vSphere 6.5: Click the Configure tab, and click More > Storage Providers.
4. Click the Register a Storage Provider icon.
5. In the New Storage Provider dialog, define the connection information for the storage
provider:
a. Enter a name for the VASA provider.
b. Enter the VASA URL: https://<VASA_FQDN>/version.xml where <VASA_FQDN> is
the fully-qualified domain name of the VASA, as registered in the DNS server.
c. Enter the MDM admin username and password.
3. In the Packages tab, upload all VxFlex OS packages, per the host OS.
4. In the Install tab, select the edited CSV file, and select Add to existing system from the
drop-down menu.
5. Click Upload installation CSV.
6. Start the installation, and monitor as normal.
3. In the Select Installation screen, select Add servers to a registered VxFlex OS system,
and select the system you want to extend.
4. In the Select Management Components screen, select 5-node mode.
5. In the Manager MDM and Tie Breaker MDM fields, select the nodes to add to the cluster.
Note:
It is not recommended to use single mode in production systems, except in temporary
situations.
The following rules are true regardless of the circumstances:
l To remove a cluster member, you first need to make it a standby, then remove the standby. To
add a member to a cluster, you first make it a standby, then add the standby to the cluster.
l The cluster must always have 5, 3, or 1 members, never any other amount. For a further
understanding of this subject, see "The MDM cluster" in the Architecture section of Getting to
Know VxFlex OS.
Proceed to the section that describes your environment:
n "Replace a cluster member by adding a new server"
n "Replace a cluster member without adding a new server to the cluster"
scli --query_cluster
# scli --query_cluster
Cluster:
Mode: 5_node, State: Normal, Active: 5/5, Replicas: 3/3
Master MDM:
Name: mdm17, ID: 0x5d07497754427fd0
IPs: 10.3.1.17, 192.168.1.17, Management IPs: 10.3.1.17, Port: 9011
Version: 2.0.972
Slave MDMs:
Name: mdm19, ID: 0x26ee566356362451
IPs: 10.3.1.19, 192.168.1.19, Management IPs: 10.3.1.19, Port: 9011
Status: Normal, Version: 2.0.972
Name: mdm18, ID: 0x5843c4d16d8f1082
IPs: 10.3.1.18, 192.168.1.18, Management IPs: 10.3.1.18, Port: 9011
Status: Normal, Version: 2.0.972
Tie-Breakers:
Name: mdm179, ID: 0x7380b70e2f73d346
IPs: 10.3.1.179, 192.168.1.179, Port: 9011
Status: Normal, Version: 2.0.972
Name: mdm20, ID: 0x6dfe1c5f4062b5b3
4. You can see the result of the command by running the following command:
scli --query_cluster
# scli --query_cluster
Cluster:
Mode: 5_node, State: Normal, Active: 5/5, Replicas: 3/3
...
Tie-Breakers:
Name: mdm179, ID: 0x7380b70e2f73d346
IPs: 10.3.1.179, 192.168.1.179, Port: 9011
Status: Normal, Version: 2.0.972
Name: mdm20, ID: 0x6dfe1c5f4062b5b3
IPs: 192.168.1.20, 10.3.1.20, Port: 9011
Status: Normal, Version: 2.0.972
Standby MDMs:
Name: mdm57, ID: 0x073e4c8b1d20d124, Tie Breaker
IPs: 10.3.1.57, 192.168.1.57, Port: 9011
mdm57 has been added as a standby MDM. When it is a standby MDM, it can be added to
the cluster.
5. Replace the current mdm179 with the standby mdm57 by running the following command:
change to a 3-node cluster and then add the member back to the cluster and reassign it to its new
role.
In the following example, we are removing the current server whose IP address is 10.3.1.179,
currently a Tie Breaker member of a 5-node MDM cluster. To retain a majority in the MDM cluster,
we must also remove one of the Slave MDMs in the cluster, in this case the MDM whose IP
address is 10.3.1.19. This process can be used to replace any role in the MDM cluster.
Procedure
1. Verify that the current server (179) is not the Master MDM:
scli --query_cluster
# scli --query_cluster
Cluster:
Mode: 5_node, State: Normal, Active: 5/5, Replicas: 3/3
Master MDM:
Name: mdm17, ID: 0x5d07497754427fd0
IPs: 10.3.1.17, 192.168.1.17, Management IPs: 10.3.1.17, Port: 9011
Version: 2.0.972
Slave MDMs:
Name: mdm19, ID: 0x26ee566356362451
IPs: 10.3.1.19, 192.168.1.19, Management IPs: 10.3.1.19, Port: 9011
Status: Normal, Version: 2.0.972
Name: mdm18, ID: 0x5843c4d16d8f1082
IPs: 10.3.1.18, 192.168.1.18, Management IPs: 10.3.1.18, Port: 9011
Status: Normal, Version: 2.0.972
Tie-Breakers:
Name: mdm179, ID: 0x7380b70e2f73d346
IPs: 10.3.1.179, 192.168.1.179, Port: 9011
Status: Normal, Version: 2.0.972
Name: mdm20, ID: 0x6dfe1c5f4062b5b3
IPs: 192.168.1.20, 10.3.1.20, Port: 9011
Status: Normal, Version: 2.0.972
scli --query_cluster
# scli --query_cluster
Cluster:
Mode: 3_node, State: Normal, Active: 3/3, Replicas: 2/2
...
Slave MDMs:
Name: mdm18, ID: 0x5843c4d16d8f1082
IPs: 10.3.1.18, 192.168.1.18, Management IPs: 10.3.1.18, Port: 9011
Status: Normal, Version: 2.0.972
Tie-Breakers:
Name: mdm20, ID: 0x6dfe1c5f4062b5b3
IPs: 192.168.1.20, 10.3.1.20, Port: 9011
Status: Normal, Version: 2.0.972
Standby MDMs:
Name: mdm19, ID: 0x26ee566356362451, Manager
IPs: 10.3.1.19, 192.168.1.19, Management IPs: 10.3.1.19, Port: 9011
Name: mdm179, ID: 0x7380b70e2f73d346, Tie Breaker
IPs: 10.3.1.179, 192.168.1.179, Port: 9011
The cluster has been changed to 3-node mode, as a Slave MDM (mdm19) and a Tie Breaker MDM
(mdm179) have been removed and are now standby MDMs.
Now that the current server is a standby MDM, it can be removed from VxFlex OS.
scli --query_cluster
Cluster:
Mode: 3_node, State: Normal, Active: 3/3, Replicas: 2/2
...
Tie-Breakers:
Name: mdm20, ID: 0x6dfe1c5f4062b5b3
IPs: 192.168.1.20, 10.3.1.20, Port: 9011
Status: Normal, Version: 2.0.972
Standby MDMs:
Name: mdm19, ID: 0x26ee566356362451, Manager
IPs: 10.3.1.19, 192.168.1.19, Management IPs: 10.3.1.19, Port: 9011
8. Add the current server (57) back to the system as a standby MDM, and assign it the name
mdm57:
scli --query_cluster
Cluster:
Mode: 3_node, State: Normal, Active: 3/3, Replicas: 2/2
...
Tie-Breakers:
Name: mdm20, ID: 0x6dfe1c5f4062b5b3
IPs: 192.168.1.20, 10.3.1.20, Port: 9011
Status: Normal, Version: 2.0.972
Standby MDMs:
Name: mdm19, ID: 0x26ee566356362451, Manager
IPs: 10.3.1.19, 192.168.1.19, Management IPs: 10.3.1.19, Port: 9011
Name: mdm57, ID: 0x13c925450656db74, Tie Breaker
IPs: 10.3.1.57, 192.168.1.57, Port: 9011
The server mdm57 is now a standby MDM, so it can be promoted to the MDM cluster.
10. Switch to 5-node cluster by adding the standby MDMs to the cluster:
scli --query_cluster
Cluster:
Mode: 5_node, State: Normal, Active: 5/5, Replicas: 3/3
Master MDM:
Name: mdm17, ID: 0x5d07497754427fd0
IPs: 10.3.1.17, 192.168.1.17, Management IPs: 10.3.1.17, Port: 9011
Version: 2.0.972
Slave MDMs:
Name: mdm18, ID: 0x5843c4d16d8f1082
IPs: 10.3.1.18, 192.168.1.18, Management IPs: 10.3.1.18, Port: 9011
Status: Normal, Version: 2.0.972
Name: mdm19, ID: 0x26ee566356362451
IPs: 10.3.1.19, 192.168.1.19, Management IPs: 10.3.1.19, Port: 9011
Status: Normal, Version: 2.0.972
Tie-Breakers:
Name: mdm20, ID: 0x6dfe1c5f4062b5b3
IPs: 192.168.1.20, 10.3.1.20, Port: 9011
Status: Normal, Version: 2.0.972
Name: mdm57, ID: 0x13c925450656db74
IPs: 10.3.1.57, 192.168.1.57, Port: 9011
Status: Normal, Version: 2.0.972
12. When changing an MDM IP address, it is mandatory to update and restart all the SDCs
in the system as well.
a. Update the IP addresses:
Windows:
Linux:
/opt/emc/scaleio/sdc/bin/drv_cfg --mod_mdm_ip
--ip <EXISTING_MDM_IP_ADDRESS>
--new_mdm_ip <NEW_MDM_IP_ADDRESSES>
Linux:
/opt/emc/scaleio/sdc/bin/drv_cfg --query_mdms
Retrieved 1 mdm(s)
MDM-ID 043925027bbed30e SDC ID 28c5479b00000000 INSTALLATION ID
7214f7ca647c185b IPs [0]-9.4.4.12 [1]-9.4.4.11
VxFlex OS Installer Add virtual IP addresses only. For details, see the deployment
documentation.
vSphere Web plug-in Add virtual IP addresses only. For details, see the VxFlex OS User Guide.
CLI Add, modify, and remove virtual IP addresses. For details, see the VxFlex OS CLI
Reference Guide.
REST API Add, modify, and remove virtual IP addresses. For details, see the VxFlex OS REST
API Reference Guide.
scli --query_cluster
Cluster status is returned, where you can identify the Master, the Slave, and the Tie
Breaker.
2. Switch to single cluster mode:
5. Add the MDM as standby with its IP addresses (including the additional IP addresses):
For example:
6. Add the Tie Breaker as standby with its IP addresses (including the additional IP addresses):
For example:
scli --query_cluster
Cluster status is returned, where you can check that the cluster is configured and operating
as expected.
9. Switch MDM ownership to verify cluster functionality:
For example:
scli --query_cluster
Cluster status is returned, where you can check that the cluster is operating as expected.
11. Add IP addresses for the Master MDM (presently Slave MDM) by following steps 2, 3, 5, 7,
and 8.
12. Optional: Switch MDM ownership back to the original MDM:
Action Command
sysctl net.ipv6.conf.<interface_name>.dad_transmits=0
4. For each MDM for which you want to set a virtual IP address, enter a virtual IP address and the
NIC to which it will be mapped. For each new virtual IP address, enter the virtual IP address
and NIC name for each MDM to which it will be mapped.
With the VxFlex OS Installer, you can configure NIC names that contain the following
characters only: a-z, A-Z, 0-9. If a NIC name contains the "-" or "_" character (for example,
eth-01), do not use the VxFlex OS Installer. Configure this IP address with the CLI
modify_virtual_ip_interfaces command and the
--new_mdm_virtual_ip_interface <INTF> parameter.
5. Click Set Virtual IPs.
Results
The virtual IP address is configured and all of the SDCs are updated with the new virtual IP
address. See section VxFlex OS plug-in for information on "Configuring virtual IP addresses -
VxFlex OS plug-in".
The following topics describe how to configure security for the VxFlex OS system.
Replace the default self-signed security certificate with your own trusted
certificate
Create your own trusted certificate, and then replace the default certificate with the one that you
created.
Procedure
1. Find the location of keytool on your server, and open it.
It is a part of the Java (JRE or JDK) installation on your server, in the bin directory. For
example:
l C:\Program Files\Java\jdk1.8.0_25\bin\keytool.exe
l /usr/bin/keytool
2. Generate your RSA private key:
a. If you want to define a password, add the following parameters to the command. Use the
same password for both parameters.
Note: Specify a directory outside the VxFlex OS Gateway installation directory for
the newly created keystore file. This will prevent it from being overwritten when the
VxFlex OS Gateway is upgraded or reinstalled.
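A sketch of a key-generation command with keytool, assuming standard keytool options (the alias, key size, and keystore path are placeholders, and the -storepass/-keypass flags are assumed to be the password parameters referred to in step a):
keytool -genkeypair -alias <YOUR_ALIAS> -keyalg RSA -keysize 2048 -keystore <PATH_OUTSIDE_GATEWAY_DIR>/keystore.jks -storepass <PASSWORD> -keypass <PASSWORD>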
3. If you already have a Certificate Signing Request (CSR), skip this step.
If you need a CSR, generate one by typing the following command. (If you did not define a
keystore password in the previous step, omit the password flags.)
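A sketch of a CSR generation command with keytool, using the same alias and keystore as above (all placeholder values are illustrative):
keytool -certreq -alias <YOUR_ALIAS> -file <YOUR_ALIAS>.csr -keystore <PATH_OUTSIDE_GATEWAY_DIR>/keystore.jks -storepass <PASSWORD> -keypass <PASSWORD>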
If a message appears saying that the root is already in the system-wide store, import it
anyway.
6. Import the intermediate certificates, by typing the command. (If you did not define a
keystore password, omit the password flags.)
You must provide a unique alias name for every intermediate certificate that you upload with
this step.
7. Install the SSL Certificate under the same alias that the CSR was created from
(<YOUR_ALIAS> in previous steps), by typing the command (if you did not define a
keystore password, omit the password flags):
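Steps 5 through 7 typically use keytool -importcert; a hedged sketch with illustrative aliases and certificate file names (not the exact commands from this guide):
keytool -importcert -trustcacerts -alias root_ca -file <ROOT_CERT>.crt -keystore <PATH_OUTSIDE_GATEWAY_DIR>/keystore.jks -storepass <PASSWORD>
keytool -importcert -trustcacerts -alias intermediate_1 -file <INTERMEDIATE_CERT>.crt -keystore <PATH_OUTSIDE_GATEWAY_DIR>/keystore.jks -storepass <PASSWORD>
keytool -importcert -trustcacerts -alias <YOUR_ALIAS> -file <SIGNED_CERT>.crt -keystore <PATH_OUTSIDE_GATEWAY_DIR>/keystore.jks -storepass <PASSWORD>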
Replace the default self-signed security certificate with your own self-signed
certificate
Replace the default self-signed security certificate with your own self-signed security certificate.
About this task
Procedure
1. Find the location of keytool on your server, and open it.
It is usually a part of the Java (JRE or JDK) installation on your server, in the bin directory.
For example:
l C:\Program Files\Java\jdk1.7.0_25\bin\keytool.exe
l /usr/bin/keytool
a. If you want to define a password, add the following parameters to the command. Use the
same password for both parameters.
Note: Specify a directory outside the VxFlex OS Gateway installation directory for
the newly created keystore file. This will prevent it from being overwritten when the
VxFlex OS Gateway is upgraded or reinstalled.
Results
Replacement of the security certificate is complete.
2. After the upgrade is complete, copy these files back to their original location.
To enable certificate verification, add the following parameters to the file
/etc/cinder/cinder_scaleio.config on the Cinder node:
verify_server_certificate=true
server_certificate_path=<PATH_TO_PEM_FILE>
Procedure
1. Create a keystore file (.JKS):
l VxFlex OS Gateway—The VxFlex OS Gateway maintains the SSL certificates for itself and for
the following components:
n SNMP
n REST API
n VxFlex OS Installer
l vSphere plug-in
l VxFlex OS GUI
l CLI
2. When using the CLI, on the first connection to the MDM, the CLI will display the MDM's
certificate and will prompt the user to approve the certificate.
Upon approval, the trusted certificate will be saved.
3. When using the VxFlex OS GUI, approve the MDM certificate at login, and then approve
other certificates using the System Settings menu, Renew Certificates option.
a. Linux: /opt/emc/scaleio/mdm/cfg
b. Windows: C:\Program Files\emc\scaleio\mdm\cfg
To run CLI commands, you must be logged in. Actual command syntax is operating-system
dependent. For more information, see the VxFlex OS CLI Reference Guide.
3. Submit the CSR file created in the previous step to your Certificate Authority.
The Certificate Authority must sign your CSR and return two files to you:
where --mdm_ip is the IP address of the Master MDM, and --local_mdm_ip is the IP
address of the MDM where you want to change the certificate.
If the remote read-only feature is enabled on the MDM, add --skip_cli_command to the
command, and later, while logged in with security permissions, run the command
scli --replace_mdm_security_files.
Note:
This step changes the MDM certificate, and might cause a brief single point of failure
period (switch ownership).
7. For all external components that will communicate with the MDM (VxFlex OS GUI, CLI,
vSphere Plugin, REST, VxFlex OS Installer) add the Trusted or Root certificate from the
Certificate Authority to each component.
The Trusted/Root certificate must be added to the file called truststore.jks, using
Keytool.
For more information, see "Using Keytool to add certificates to external components".
8. When using the CLI, on the first connection to the MDM, the CLI will display a message
similar to the following:
Add the Trusted/Root certificate using the --add_certificate command. For more
information, see the VxFlex OS CLI Reference Guide.
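For example, assuming the Trusted/Root certificate was saved locally as a PEM file (the path is illustrative, and the --certificate_file parameter name should be verified against the VxFlex OS CLI Reference Guide):
scli --add_certificate --certificate_file /tmp/root_ca.pem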
VxFlex OS Gateway
l Linux:
/opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/certificates
l Windows (64-bit):
C:\Program Files\EMC\ScaleIO\Gateway\webapps\ROOT\WEB-INF\classes\certificates
VxFlex OS GUI
l Linux:
/opt/emc/scaleio/gui/certificates
l Windows:
C:\Users\[user_name]\AppData\Roaming\EMC\scaleio\certificates
vSphere
l Linux:
$HOME/.vmware/scaleio/certificates
l Windows:
C:\Users\[user_name]\AppData\Roaming\VMware\scaleio\certificates\truststore.jks
C:\Windows\System32\config\systemprofile\AppData\Roaming\VMware\scaleio\certificates
Using Keytool
Use the Java Keytool utility to modify or view the content of the trust store file. The remainder of
this topic lists some useful Keytool commands. Keytool is a part of the Java (JRE or JDK)
installation and can be found in the bin directory. You can add -storepass changeit to all
commands that require a password. The password for the trust store is "changeit" (the Java default).
Note:
The certificate alias must be unique in the trust store file. We usually use the certificate's full
subject.
For example: givenname=mdm, ou=asd, o=emc, l=hopkinton, st=massachusetts, c=us,
cn=centos-6.4-adi5
l List the certificates in the trust store:
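For example, assuming the trust store file truststore.jks is in the current directory and uses the default Java password noted above:
keytool -list -v -keystore truststore.jks -storepass changeit
Similarly, a Trusted/Root certificate can be added to the trust store with an import command such as the following (the alias and certificate file name are illustrative):
keytool -importcert -trustcacerts -alias mdm_root_ca -file root_ca.crt -keystore truststore.jks -storepass changeit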
l TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
l TLS_DHE_DSS_WITH_AES_128_GCM_SHA256
l TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
l TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
l TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
l TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
l TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
l TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
l TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256
l TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256
l TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256
l TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256
l TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
l TLS_RSA_WITH_AES_256_CBC_SHA256
Note: In order to use curl with VxFlex OS Gateway v2.0.0.3 and higher on a
server running RHEL 6, upgrade the NSS package to 3.21.0 (using YUM).
Option: Create (or modify) a new banner
a. Create a text file (or modify an existing file) with the message that you want to display in the
login banner.
b. Run the following command:
You can enable or disable the option of preemptive acceptance. By default, preemptive acceptance
is enabled and the login banner can be bypassed using a CLI command.
Before you begin
To enable or disable the preemptive acceptance option, you must have administrative rights.
Procedure
1. Log in to VxFlex OS:
4. Enter the admin password to the MDM in the MDM admin password box.
5. Enter the user name in the LDAP User Name box.
6. Enter the password to access the LDAP server in the LDAP password box.
7. Select Force LDAP authentication mode to force users to enter the LDAP User Name and
LDAP password when logging in to the VxFlex OS system.
8. Click Change LIAs authentication method to LDAP.
Results
Use LDAP credentials to log in to the system.
Procedure
1. In the web browser, go to the IP address of your system's VxFlex OS Gateway.
2. Log in to the VxFlex OS Gateway.
3. From the Maintain tab, click Security Settings and select Add LDAP Server.
The Add LDAP Server to VxFlex OS system dialog box is displayed.
4. Enter the admin password to the MDM in the MDM admin password box.
5. Enter the URI of the LDAP server in the Server URI box.
6. Enter the LDAP group identifier that you can retrieve from the LDAP server in the Group
box.
7. Enter the LDAP BaseDN identifier that you can retrieve from the LDAP server in the
BaseDN box. Run the query on the server to retrieve the Base DN.
8. Click Add LDAP Server.
User roles
The authorization permissions of each user role are defined differently for local authentication, and
for LDAP authentication. Although the role names are similar, the permissions granted to them are
not.
User roles defined in the LDAP domain are mutually exclusive, with no overlap—with the exception
of the Configurator role. If you want to give an LDAP user permission to perform both monitoring
and configuration roles, for example, assign that user to both the Backend/Frontend Configurator
and Monitor LDAP groups.
The Configurator and Super User roles do not exist at all for LDAP.
The following table describes the permissions that can be defined for local domain users and for
LDAP domain users.
Table columns: User role, Query, Configure parameters, Configure user credentials.
set_user_authentication_method
Set the user authentication method for the system.
WARNING Use this command with caution. The operation is complex to roll back.
Note: For details about setting up LDAP, refer to the VxFlex OS User Roles and LDAP Usage
Technical Notes.
Syntax
Parameters
--ldap_authentication
LDAP-based authentication method where users are managed on an LDAP-compliant server.
Configure LDAP service and LDAP user before switching to this authentication method.
--native_authentication
Native authentication method where users are managed locally in the system
--native_and_ldap_authentication
A hybrid authentication method. Both LDAP and Native users may log in to the system after it
is set.
--i_am_sure
Skip the safety questions for command execution. (For example: “This could damage the
stored data. Are you sure?”)
Example
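For example, a sketch that switches the system to the hybrid method, using only the parameters listed above:
scli --set_user_authentication_method --native_and_ldap_authentication --i_am_sure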
add_user
Add a user to the system. A randomly generated password for the created user is returned.
This command is available only to administrator users.
Each user name should conform to the following rules:
1. Contains fewer than 32 characters.
2. Contains only alphanumeric and punctuation characters (when punctuation characters are
used, you may need to enclose the name in " or ' characters so that it is accepted).
3. Is unique within the object type.
Syntax
Parameters
--username <NAME>
User name to add to the system
Example
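For example, using only the parameter listed above (the user name is illustrative; the full command typically also specifies the new user's role, which is not shown in this excerpt, so check the VxFlex OS CLI Reference Guide):
scli --add_user --username backup_admin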
delete_user
Delete the specified user from the system.
This command is available only to administrator users.
Syntax
Parameters
--user_id <ID>
ID of the user to be deleted
--username <NAME>
Username of the user to be deleted
Example
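For example (the user name is illustrative):
scli --delete_user --username backup_admin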
modify_user
Modify the user role of the specified user in the system.
This command is available only to administrator users.
Syntax
Parameters
--user_id <ID>
User ID of the user to modify
Note: The user ID is displayed when you create the user. To find this ID at a later time, use
the query_user command.
--username <NAME>
User name of the user to modify
Example
query_users
Display all the users defined in the system, with their roles and user ID.
Syntax
scli --query_users
Parameters
None.
Example
scli --query_users
query_user
Display information about the specified user.
This command is available only to administrator users.
Syntax
Parameters
--user_id <ID>
User's ID number
Note: The user ID is displayed when you create the user. To find this ID at a later time, use
the query_user command.
--username <NAME>
Name of the user
Example
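For example (the user name is illustrative):
scli --query_user --username backup_admin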
reset_password
Generate a new password for the specified user. The user must change the password again after
logging in with the generated password.
This command is available only to administrator users.
Syntax
Parameters
--user_id <ID>
User ID of the user whose password will be reset
Note: The user ID is displayed when you create the user. To find this ID at a later time, use
the query_user command.
--username <NAME>
User name of the user whose password will be reset
Example
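For example (the user name is illustrative):
scli --reset_password --username backup_admin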
set_password
Change the password of the user currently logged in to the system.
This command is available only to administrator users.
Syntax
Parameters
--old_password <OLD_PASSWORD>
User's current password
--new_password <NEW_PASSWORD>
User's new password
Note: In Linux, to prevent the password from being recorded in the history log, omit the
old_password or new_password flag and enter the password interactively.
Example
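For example, a sketch using the parameters listed above (on Linux, omit one of the flags to enter that password interactively, as noted above):
scli --set_password --old_password <OLD_PASSWORD> --new_password <NEW_PASSWORD>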
Password rules
The password must conform to the following rules:
1. Contains between six and 31 characters.
2. Contains characters from at least three of the following groups: [a-z], [A-Z], [0-9], special
characters (!@#$ …).
3. Is different from the current password.
disable_admin
Disables the default Superuser.
The Superuser is the default user for setting up the system, and has all the privileges of all user
roles. In some cases you may need to disable the Superuser in order to ensure that all users are
associated with specific user roles.
Note: To re-enable the Superuser, use the reset_admin command.
Syntax
scli --disable_admin
[--i_am_sure]
Parameters
--i_am_sure
Skip the safety questions for command execution.
Example
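For example, skipping the safety question:
scli --disable_admin --i_am_sure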
2. In the body of the file, type the text Reset Admin, and save the file.
3. From the CLI, run the reset_admin command:
scli --reset_admin
Results
The admin user password is reset to admin.
reset_admin
Reset the default Superuser.
Reset the password of the default admin user with Superuser permissions.
Syntax
scli --reset_admin
[--i_am_sure]
Parameters
--i_am_sure
Skip the safety questions for command execution.
Example
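For example, skipping the safety question:
scli --reset_admin --i_am_sure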
groupadd admin
passwd non_root
When prompted, enter the new password and then confirm it by entering it again.
4. Open the /etc/sudoers file for editing.
vi /etc/sudoers
5. Search the sudoers file for "## Same thing without a password".
6. In the line below the search result, add the text %admin ALL=(ALL) NOPASSWD: ALL to
the file.
7. Search the sudoers file for "Defaults requiretty", and replace it with Defaults !requiretty.
8. Exit the vi editor by typing the following command to exit: :wq!
9. Create a hidden directory in the non_root user's home directory to store the SSH
configuration.
mkdir /home/non_root/.ssh
10. Copy the SSH configuration from the root user to the non_root user's directory.
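Which files make up the SSH configuration depends on your environment; a typical sketch that copies the root user's authorized_keys file and fixes ownership and permissions is shown below:
cp /root/.ssh/authorized_keys /home/non_root/.ssh/authorized_keys
chown -R non_root:non_root /home/non_root/.ssh
chmod 700 /home/non_root/.ssh
chmod 600 /home/non_root/.ssh/authorized_keys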
The following topics describe how to configure fault reporting features in the VxFlex OS system.
l General................................................................................................................................ 192
l Configure SNMP properties after deployment..................................................................... 192
l Configure Dynamic Host Name resolution for SNMP in VxFlex OS...................................... 192
l Configure VxFlex OS Gateway properties............................................................................ 194
General
SNMP traps are implemented as part of the VxFlex OS Gateway, using SNMP v2. UDP transport is
used for SNMP, and the default port for trap communication is 162. The SNMP feature is disabled
by default. If you want to use the SNMP feature, enable it by editing the
gatewayUser.properties file. For more information, see "Configure SNMP properties after
deployment" in the Configure and Customize Guide.
The SNMP trap sender uses a proprietary/custom MIB called scaleio.mib. This MIB file is
located on the VxFlex OS Gateway server, in the webapps/ROOT/WEB-INF/classes folder
under the VxFlex OS Gateway installation directory. All traps are sent using a single notification
type with a unique identification number (OID). All the SNMP traps contain variable bindings for
severity; alert type, which is the alert classification text; the ID of the source object for which the
alert was created; and an action code, which is the event number.
When using HP OpenView, ensure that the Dell EMC MIB file is loaded together with the VxFlex
OS MIB file, or save the Dell EMC MIB file in the same directory as the VxFlex OS MIB file.
The alerts are calculated based on MDM polling. A trap will be sent the first time that an event
occurs, and will be sent again if the resend interval has passed and the alert is still in effect. The
resend frequency parameter can be configured using the Settings window in the VxFlex OS GUI.
Only SNMP traps are supported, and are initiated by the VxFlex OS SNMP traps manager.
GET/SET operations are not supported (or more specifically, GET/GET NEXT/GET BULK/SET/INFORM/RESPONSE).
In addition to SNMP traps, alert messages are also displayed in the VxFlex OS GUI.
To enable SNMP-based fault reporting, both the VxFlex OS Gateway and the SNMP trap
receivers must be configured. Traps can be sent to up to two SNMP trap receivers. The VxFlex OS
Gateway service must be restarted after configuration.
l On the VxFlex OS Gateway, add the host name of the SNMP trap receiver to the appropriate
parameter in the gatewayUser.properties file
l On the DNS server, configure the SNMP trap receiver properties, in order to support dynamic
host name resolution
l On the DNS server, reduce the "Time To Live" (TTL) setting for the SNMP trap receiver
Procedure
1. On the VxFlex OS Gateway, open the gatewayUser.properties file.
l From Linux: /opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/gatewayUser.properties
2. Add the host name of the SNMP trap receiver to the property
snmp.traps_receiver_ip=. If there is more than one IP address or host name, use a
comma-separated list.
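For example, the resulting line in gatewayUser.properties might look like the following (the host names are illustrative):
snmp.traps_receiver_ip=trap-receiver1.example.com,trap-receiver2.example.com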
3. Save the file and restart the VxFlex OS Gateway service.
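On a Linux VxFlex OS Gateway, the restart typically looks like the following (the service name may vary by version and platform):
service scaleio-gateway restart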
4. Verify that traps are being received at all configured trap receiver hosts.
5. On the DNS server, configure dynamic host name resolution support for the trap receiver.
6. On the DNS server, reduce the TTL setting for the trap receiver. For example, on Windows,
perform the following:
a. Open the DNS manager window.
b. Click View, and select the Advanced option.
c. Right-click the trap receiver, and select Properties. A window similar to the following is
displayed:
d. At the bottom of the window, in the Time to live (TTL) field, change the value from one
hour to one or two seconds.
e. Click OK.
Results
Dynamic Host Name resolution configuration is complete.
Results
Configuration is complete.
The following topics describe how to add or remove components from the VxFlex OS system, or
rename components.
2. In the Upload CSV stage, browse to the updated CSV file, and select Add to existing sys.
3. Upload the CSV, and continue as normal.
2. In the Select Installation screen, select Add servers to a registered VxFlex OS system,
and select the system you want to extend.
3. Continue with the deployment steps, adding the new nodes.
You can skip steps that do not need to be changed.
Note: When adding components, the wizard adjusts the displayed screens to options
that are relevant to the current VxFlex OS system.
Remove VxFlex OS
You can remove VxFlex OS components and the vSphere plug-in from servers.
To uninstall VxFlex OS, use the VxFlex OS Installer. This requires that the LIA be installed on all
nodes to be changed.
When removing RFcache (the xcache package) on a Windows server, a server restart is
necessary after the removal.
To unregister the vSphere plug-in, see "Unregistering the VxFlex OS plug-in".
6. Enter the MDM password, select to reboot servers (optional), and click Uninstall.
7. To monitor the uninstallation progress, click Monitor.
8. When the uninstallation is complete, click Mark operation completed.
Renaming objects
About this task
Object names are used to identify the objects in the VxFlex OS GUI, and can also be used to
specify objects in CLI commands. You can view an object’s name in its Property Sheet, in the
Identity section.
Note: It is not possible to rename a Read Flash Cache device using this command.
The following topics describe how to configure logs in the VxFlex OS system.
The following topics describe how to configure LIA in the VxFlex OS system.
lia_token=5
lia_enable_install=1
lia_enable_uninstall=1
lia_enable_configure_fetch_logs=1
To restrict which VxFlex OS Gateway IP addresses can access the LIA, add those IP addresses
to this line in the conf.txt file:
lia_trusted_ips=<IP_ADDRESS_1>,<IP_ADDRESS_2>
To set this during LIA installation, set the TRUSTED_IPS environment variable. For example:
Example:
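A sketch of setting the variable inline when installing the LIA package on Linux (the IP addresses are illustrative and <LIA_PACKAGE> stands for the actual LIA RPM file name):
TRUSTED_IPS=192.168.1.200,192.168.1.201 rpm -i <LIA_PACKAGE>.rpm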
b. Query the MDM for the installation ID by running the following command:
pkill lia
Item: Dashboard I/O workload: Average calculation will include the last n seconds (default: 10 seconds)
Description: Controls the time period used when averages are computed and displayed by the VxFlex OS GUI.
Item: Show Property Sheet in advanced mode
Description: Displays additional details in Property Sheets:
l Capacity section—Snapshot Capacity Reserved
l Rebuild/Rebalance—Data Movement Jobs
l RAM Read Cache—Cache Evictions, Cache Entry, and Cache Skip tables
These details are usually only relevant for advanced users and technical support purposes.
Configure components
You can configure settings for VxFlex OS from the VxFlex OS plug-in.
There are two levels of component configurations:
l Basic: The basic configurations are all performed the same way. The process is described just
once.
l Advanced: Each advanced configuration setting has a unique dialog box, which is described in
"Configuring components-advanced".
The following table lists the activities you can perform and categorizes each as basic or advanced:
Object activity and level (the "Access from this screen" column is not reproduced here):
l Unregister a system: Basic
l Configure virtual IPs: Advanced
l Remove a Protection Domain: Basic
l Remove a Storage Pool: Basic
l Remove a device from an SDS: Basic
l Remove a volume (must be unmapped first): Basic
l Remove a device from an SDS: Basic
a. For Read RAM Cache to work on a volume, both the volume and its Storage Pool must have
the feature enabled.
b. When defining Fault Sets, you must follow the guidelines described in "Fault Sets". Failure
to do so may prevent creation of volumes.
Configure components—basic
Create a Protection Domain in the VxFlex OS system from the VxFlex OS plug-in.
About this task
Basic configuration activities are all performed in a similar manner. All activities are performed
from the Actions menu in each screen and by entering simple information.
The following is an example of how to create a Protection Domain from the VxFlex OS Systems
screen.
Procedure
1. From the VxFlex OS Systems screen, click Actions > Create Protection Domain:
Note: You can also click the action icons in the menu or right-click the item to
choose options from a list.
2. In the Create Protection Domain dialog box, enter a name for the Protection Domain, and
then click OK.
The process is similar for the rest of the basic activities.
Note: If you intend to enable zero padding on a Storage Pool, you must do so before you
add any devices to the Storage Pool. For more information, see "Storage Pools" in the
Getting to Know Guide.
Configuring components—advanced
This section describes how to use the VxFlex OS vSphere plug-in to perform activities that require
a little more attention.
Procedure
1. From the main VxFlex OS plug-in window, click Register VxFlex OS system.
2. Enter the following information, then click OK:
a. Master MDM IP: The IP address of the existing system's Master MDM
b. User name: The username of the existing system
c. Password: The password of the existing system
d. To enable the use of devices that may have been part of a previous VxFlex OS
system, select Allow the take over of devices with existing signature.
e. Click Assign.
3. Confirm the action by typing the VxFlex OS password.
4. When the add operation is complete, click Close.
Results
The devices are added.
The following tasks describe how to manually perform certain tasks in your VxFlex OS
environment.
Linux /etc/vmware/vsphere-client/vc-packages/vsphere-client-serenity
Linux /etc/vmware/vsphere-client/vc-packages/scaleio
9. After you have logged in to the vSphere web client to complete the registration and you see
e. Exit MM.
2. In the output of the command, find the existing GUID and MDM IP addresses.
For example, in the output excerpt below, the GUID and IP addresses are marked in bold:
IoctlIniGuidStr string
39b89295-5cfc-4a42-bf89-4cc7e55a1e5b Ini Guid, for example:
12345678-90AB-CDEF-1234-567890ABCDEF
IoctlMdmIPStr string
9.99.101.22,9.99.101.23 Mdms IPs, IPs for MDM in same cluster should be
comma-separated. To configure more than one cluster use '+' to separate
between IPs. For example:
10.20.30.40,50.60.70.80+11.22.33.44. Max 1024 characters
where <GUID> is the existing SDC GUID that you identified in the previous step, and
<MDM_IPS> is the list of MDM IP addresses. A maximum of 1024 characters is allowed.
a. To replace the old MDM IP addresses with new MDM IP addresses, omit the old
addresses from the command.
b. To add MDM IP addresses to the existing IP addresses, type both the existing IP
addresses and the new IP addresses in the command.
MDM IP addresses for MDMs in same cluster must be comma-separated. To configure
more than one cluster, use '+' to separate between IP addresses in different clusters. For
example:
6. In the output of the command, find the existing GUID and MDM IP addresses.
For example, in the output excerpt below, the GUID and IP addresses are marked in bold:
IoctlIniGuidStr string
39b89295-5cfc-4a42-bf89-4cc7e55a1e5b Ini Guid, for example:
12345678-90AB-CDEF-1234-567890ABCDEF
where <NEW_GUID> is the new SDC GUID, and <MDM_IPS> is the list of MDM IP
addresses that you identified in the previous step. You must include these IP addresses in
the command.
For example:
Use the VxFlex OS Gateway to run scripts that patch hosts' operating systems in a safe and
orchestrated manner.
c. MDMs
Note: The script will not run on a Master MDM. A switch-over MDM command will be
run before the script is run on a node that is currently the Master MDM. The script will not be
run in parallel on multiple MDMs.
3. SDS enters Maintenance Mode.
4. The script runs on the host.
5. The host reboots (if configured to do so).
6. The validation script runs on the server (if configured to do so).
7. SDS exits Maintenance Mode.
os.patching.is.upload.needed=true
os.patching.patch.script.source.path=/opt/patch_script
os.patching.verification.script.source.path=/opt/verification_script
os.patching.is.upload.needed=true
os.patching.patch.script.source.path=C:\\temp\\patch_script
os.patching.verification.script.source.path=C:\\temp\\verification_patch
2. In your browser, navigate to the IP address of the VxFlex OS Gateway, and log in.
3. Click the Maintain tab.
4. Enter the IP address and login credentials for the Master MDM, and for LIA.
5. At the bottom right of the screen, click Retrieve system topology.
The system topology is displayed.
6. Click System Logs & Analysis, and select the Run Script on Hosts option.
The Run script on hosts dialog box is displayed.
7. Enter the MDM password again.
8. For Running script on options, select one of the following:
l Entire System: Run the script on all MDM and SDS nodes in the system. If you choose this
option, you can also choose whether to run the script at the same time on SDSs that belong to
different Protection Domains. To do so, select the check box for In parallel on different
Protection Domains.
l Protection Domain: Run the script on MDM and SDS nodes in a single Protection Domain.
Select the required Protection Domain from the drop-down list.
l Fault Set: Run the script on MDM and SDS nodes in a single Fault Set. Select the required
Fault Set from the drop-down list.
l SDS: Run the script on a single SDS. Select the required SDS from the drop-down list.
9. For Running configuration, select the Stop process on script failure option, if desired.
If problems occur, see the troubleshooting notes following these steps.
10. In the Script time-out box, enter the number of minutes that should elapse before the
VxFlex OS Installer stops waiting for a response about the running script, and prints a
timeout message.
11. In the Verification script box, select one of the following:
l Run a verification script after the script: select Run. If a reboot is performed, the verification
script is executed after the reboot.
l Do not run a verification script after the script: select Do not run.
12. In the Post script action box, select one of the following:
l Reboot the server after execution of the script: select Reboot.
l Do not reboot the server after execution of the script: select Do not reboot.
Results
Upgrade of the CentOS operating system on all SVMs except for the VxFlex OS Gateway is now
complete. Upgrade the VxFlex OS Gateway using the steps described in "Deploy and replace the
VxFlex OS Gateway SVM operating system using the VxFlex OS plug-in" in the Upgrade VxFlex OS
Guide.