IBM Spectrum Copy Data Management™ 2.2.13.0
User's Guide
07/30/2021
Table of Contents
Audience and Purpose 9
IBM Spectrum Copy Data Management Overview 10
IBM Spectrum Copy Data Management Workflow 11
User Administration and Security Management 14
Installation and Setup 18
Deployment Checklist 19
Access and Default Credentials 20
System Requirements 23
VMware vSphere Privileges 28
File System Requirements 32
InterSystems Caché Requirements 38
Oracle Requirements 45
SAP HANA Requirements 55
Microsoft SQL Server Requirements 61
AWS Requirements 66
Install IBM Spectrum Copy Data Management as a Virtual Appliance 67
Start IBM Spectrum Copy Data Management 69
Dashboard 71
Sites & Providers 73
Sites & Providers Overview 74
Add a Site 76
Edit a Site 77
Delete a Site 78
Register a Provider 79
View a Provider 100
Edit a Provider 103
Unregister a Provider 105
Add Credentials to a Virtual Machine 106
IBM Spectrum Copy Data Management Workflow
Register a Provider
Add providers, such as application servers; DellEMC, IBM, or NetApp storage devices; or VMware ESX resources, to the Inventory by registering them.
1. Click the Configure tab and the Sites & Providers view.
2. In the Provider Browser pane, right-click a provider category and then click Register. The Register dialog opens.
3. Populate the fields in the dialog, including name, host address, port, user name, and password.
4. Select one or more resources to catalog from the list of available providers.
5. Select the options for your job, and enter notification settings. If notification options are enabled, an email message with information about the status of each task is sent when the job completes.
6. Optionally, select one or more defined schedules for your job and save the job definition.
Run a Job
A job that is based on an Inventory job definition discovers object information, catalogs it, and populates the
IBM Spectrum Copy Data Management database.
1. Click the Jobs tab.
2. Select the job to run by clicking in the row containing the job name.
3. From the Operations drop-down menu, select Start, or right-click the job name and select Start.
Generate Reports
Run a report to summarize information about cataloged nodes as well as the data and resources that reside
on them.
1. Click the Report tab and the Reports view.
2. Select one of the predefined reports to run by clicking in the row containing the job name.
3. Click Run to run the report using default parameters.
4. Optionally, select a predefined report from the Report Browser pane and select report parameter values
in the Parameters pane.
5. Click Run. The customized report data is returned in the Report pane.
Backup
Create copies of your data. The RPO and copy data parameters are defined in an SLA Policy, which is then
applied to the Backup job definition along with a specified activation time to meet your copy data criteria.
3. Select providers to copy or protect, as well as a storage workflow that meets your copy data criteria.
4. Complete the job definition including notification and other options. Save the job definition.
5. Run the backup job. The selected source data is copied to the destination in accordance with the defined
job parameters.
Restore
Leverage Copy Data Management technology for testing, cloning, and recovering copy data.
1. Click the Jobs tab.
RELATED TOPICS:
- Start IBM Spectrum Copy Data Management on page 69
- Sites & Providers Overview on page 74
- Start, Pause, and Hold a Job Session on page 170
- Search for Objects on page 355
- Report Overview on page 369
User Administration and Security Management
Note that IBM Spectrum Copy Data Management uses FIPS compliant encryption algorithms.
LDAP/S is used for communication with the LDAP directory server. For backend processes, protection is secured via HTTPS authentication to the storage system and ESXi.
IBM Spectrum Copy Data Management identifies the following types of sensitive data: native user credentials,
DellEMC, IBM, NetApp, and Pure Storage FlashArray storage system credentials, VMware/ESX host
credentials, and user credentials.
Security Management
Security management identifies the interfaces that manage the security functions in the IBM Spectrum Copy
Data Management application. Only an authenticated, authorized user can configure the security functions.
Examples of security management include adding users, assigning roles, configuring IBM Spectrum Copy
Data Management to use LDAP, and configuring IBM Spectrum Copy Data Management to use HTTPS.
Following are the security management functions in IBM Spectrum Copy Data Management:
- Adding, editing, and deleting a user
- Assigning roles to a user
- Configuring authentication mode
- Configuring LDAP
- Importing certificates
- Configuring HTTPS
Encryption
IBM Spectrum Copy Data Management provides encryption solutions for complete security. The solution
includes certificates, use of HTTPS, and safe storage of passwords in the database. Sensitive data such as
data in transit is encrypted or transported using SSL and HTTPS. User credentials such as passwords are
safely stored in the IBM Spectrum Copy Data Management database. Obtaining and storing this sensitive
data constitutes the basic function of the IBM Spectrum Copy Data Management application. This data is
subject to the user data security requirements.
Ports

The following ports are used by IBM Spectrum Copy Data Management:

Port | Protocol                             | Description
25   | smtp (Simple Mail Transfer Protocol) | Non-SSL connection for Simple Mail Service used by IBM Spectrum Copy Data Management
443  | smtp (Simple Mail Transfer Protocol) | SSL connection for Simple Mail Service used by IBM Spectrum Copy Data Management
RELATED TOPICS:
- Register a Provider on page 79
- Role-Based Access Control Overview on page 109
Installation and Setup
Deployment Checklist
Following are the pre-deployment, deployment, and post-deployment procedures. This checklist is for IBM
Spectrum Copy Data Management deployment to a VMware appliance host.
Pre-Deployment Checklist
Deployment Checklist
3. Launch IBM Spectrum Copy Data Management to set a new Super User password. See Start IBM Spectrum Copy Data Management on page 69.
Post-Deployment Checklist

Access and Default Credentials
Interface: IBM Spectrum Copy Data Management Command Line / SSH or Console
Username: administrator
Password: ecxadLG235
The administrator account may be used to access the IBM Spectrum Copy Data Management server through SSH or through console access. This account may also be used to access the Administrative Console (above).
Note: This account does not have sudo access. If elevated privileges are required, use the root account.
Interface: Application Server - SSH or Console (IBM Spectrum Copy Data Management Agent User)
Suggested Username: ecxagent
When adding a database or file system, you must create an agent user on that system. For more information, see the appropriate topic, "Sample Configuration of an IBM Spectrum Copy Data Management Agent User" for the file system or application. This account is only needed on the client systems, not on the IBM Spectrum Copy Data Management OVA.
System Requirements
Ensure that you have the required system configuration and browser to deploy and run IBM Spectrum Copy
Data Management.
Upgrades to new versions of IBM Spectrum Copy Data Management will continue to use the previous network
configuration found at this location: /opt/vmware/share/vami/vami_config_net.
For initial deployment, configure your virtual appliance to meet the following recommended minimum requirements:
- 64-bit dual core machine
- 48 GB memory

The appliance has three virtual disks that total 400 GB storage:
- 50 GB for operating system and application, which includes 16 GB for the swap partition, 256 MB for the boot partition, and the remainder for the root partition
- 100 GB for configuration data related to jobs, events, and logs
- 250 GB for Inventory data
It is recommended that the IBM Spectrum Copy Data Management appliance, storage arrays, hypervisors
and application servers in your environment use NTP. If the clocks on the various systems are significantly out
of sync, you may experience errors during application registration, Inventory, Backup, or Restore jobs. For
more information about identifying and resolving timer drift, see the following VMware knowledge base article:
Time in virtual machine drifts due to hardware timer drift.
Browser Support
Run IBM Spectrum Copy Data Management from a computer that has access to the installed virtual
appliance.
IBM Spectrum Copy Data Management was tested and certified against the following web browsers. Note that
newer versions may be supported.
- Microsoft Edge 20.10240
- Firefox 49.0
- Chrome 53.0.27
If your resolution is below 1024 x 768, some items may not fit on the window. Pop-up windows must be
enabled in your browser to access the Help system and some IBM Spectrum Copy Data Management
operations.
- IBM FlashCopy Manager systems running IBM Tivoli® Storage FlashCopy® Manager version 4.1.3 and later
Note: IBM providers must be registered by an IBM user with administrator-level privileges.
IBM Spectrum Copy Data Management 2.2.7 and later supports NetApp MetroCluster configurations running
ONTAP 9.x and later. After successful completion of MetroCluster Switchover or Switchback operations,
mirror or vault relationships must be reestablished through an IBM Spectrum Copy Data Management job.
See Create a Backup Job Definition - NetApp ONTAP on page 239.
Clustered Data ONTAP providers must be registered with a cluster administrator account. Cluster peering
must be enabled. Peer relationships enable communication between SVMs. See NetApp ONTAP's Cluster
and Vserver Peering Express Guide.
Note: Ensure that TLS protocol is enabled on the NetApp storage system by setting the tls.enable option
to ON. For TLS to take effect on HTTPS, ensure that the httpd.admin.ssl.enable option is also set to
ON. See Enabling or disabling TLS on NetApp's Support site.
NetApp ONTAP File Inventory Job Requirements:
IBM Spectrum Copy Data Management uses SnapDiff in the NetApp ONTAP file level jobs to perform
cataloging based on snapshot differences. SnapDiff cataloging is supported on storage system models
running the following versions of Data ONTAP:
The following options must be enabled on the volume of the NetApp storage system to catalog:
- create_ucode and convert_ucode - These options are turned off by default. See Related Topics.
- Inode to Pathname - The Inode to Pathname function creates relationships between file names and relative paths. If Inode to Pathname is disabled on a volume, you must enable it, then delete existing snapshots on the volume. When new snapshots are created with Inode to Pathname enabled, the volume can be cataloged.
Internationalization Requirements
- The language code must be set and the UTF-8 variant must be specified on the NetApp storage system. For example, en_US.UTF-8. Only the English locale for vol0 for UTF-8 is supported. See Related Topics.
- The IBM Spectrum Copy Data Management application and documentation are available in English only. However, cataloging, searching, and reporting functions support international metadata.
VMware Requirements
- vSphere 6.0 and 6.0 update and patch levels
- vSphere 6.5 and 6.5 update and patch levels
- vSphere 6.7 and 6.7 update and patch levels
- vSphere 7.0 update and patch levels
Ensure the latest version of VMware Tools is installed in your environment. IBM Spectrum Copy Data
Management was tested against VMware Tools 9.10.0.
RELATED TOPICS:
- Install IBM Spectrum Copy Data Management as a Virtual Appliance on page 67
- File System Requirements on page 32
VMware vSphere Privileges
Datastore
- Allocate space
- Browse datastore
- Low level file operations
- Remove datastore
- Remove file
- Update virtual machine files

Distributed switch
- Port configuration operation
- Port setting operation

Folder
- Create folder

Global
- Cancel task
Host > Configuration
- Storage partition configuration

Network
- Assign network

Resource
- Apply recommendation
- Assign a vApp to resource pool
- Assign virtual machine to resource pool
- Migrate powered off virtual machine
- Migrate powered on virtual machine
- Query vMotion
vApp
- Add virtual machine
- Assign resource pool
- Assign vApp
- Create
- Delete
- Power Off
- Power On
- Rename
- Unregister
- vApp resource configuration
File System Requirements
Supported Platforms/Configurations

Operating Systems:
- Red Hat Enterprise Linux 6.5+
- SUSE Linux Enterprise Server 12.0+
- AIX 6.1+
- AIX 7.1+
- Windows Server 2012 R2 [1]
- Windows Server 2016 [1]
- Windows Server 2019 [1]

Server Types:
- Physical [4]

Storage Configuration:
- Fibre Channel [6]
- iSCSI [6]

Storage Systems:
- HPE Nimble Storage 5.2 and later [6] [7]
- IBM Spectrum Accelerate 11.5.3 and later
- IBM Spectrum Virtualize Software 7.3 and later/8.1.2 and later:
  - IBM SAN Volume Controller [8]
  - IBM Storwize [8]
  - IBM FlashSystem V9000 and 9100 [8]
- Pure Storage running Pure APIs 1.5 and later:
  - FlashArray//c
  - FlashArray//m
  - FlashArray//x
  - FlashArray 4xx series
- DellEMC Unity:
  - EMC Unity 300, 400, 500, 600 (All-Flash and Hybrid Flash)
  - EMC UnityVSA
  - EMC VNXe 1600 running version 3.1.3+
  - EMC VNXe 3200 running version 3.1.1+
[5] On AIX LPAR/VIO servers, data mount points must reside on disks attached to the server using NPIV.
Virtual SCSI disks are not supported.
[6] HPE Nimble Storage must be version 5.2 or later to support iSCSI and Fibre Channel.
[7] The make permanent option is not available for HPE Nimble Storage using physical disks.
[8] On IBM Systems Storage, condense is run during maintenance jobs.
Software
- The bash and sudo packages must be installed. Sudo must be version 1.7.6p2 or above. Run sudo -V to check the version.
- Python version 2.6.x or 2.7.x must be installed.
- AIX only: If data resides on IBM Spectrum Accelerate storage, the IBM Storage Host Attachment Kit (also known as IBM XIV Host Attachment Kit) must be installed on the server.
- RHEL/OEL/CentOS 6.x only: Ensure the util-linux-ng package is up-to-date by running yum update util-linux-ng. Depending on your version or distribution, the package may be named util-linux.
- RHEL/OEL/CentOS 7.3 and above: A required Perl module, Digest::MD5, is not installed by default. Install the module by running yum install perl-Digest-MD5.
- Linux only: If data resides on LVM volumes, ensure the LVM version is 2.02.118 or later. Run lvm version to check the version and run yum update lvm2 to update the package if necessary.
- Linux only: If data resides on LVM volumes, the lvm2-lvmetad service must be disabled as it can interfere with IBM Spectrum Copy Data Management's ability to mount and resignature volume group snapshots/clones.

  Run the following commands to stop and disable the service:

  systemctl stop lvm2-lvmetad
  systemctl disable lvm2-lvmetad

  Additionally, disable lvmetad in the LVM config file. Edit the file /etc/lvm/lvm.conf and set:

  use_lvmetad = 0
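The lvm.conf change above can be scripted. The following is a minimal sketch demonstrated on a temporary copy so it is safe to run anywhere; on a real server you would point LVM_CONF at /etc/lvm/lvm.conf after taking a backup:

```shell
# Demonstration on a temporary copy; set LVM_CONF=/etc/lvm/lvm.conf on a real
# server (after backing the file up). A stock file contains "use_lvmetad = 1".
LVM_CONF=$(mktemp)
printf '    use_lvmetad = 1\n' > "$LVM_CONF"

# Flip the setting in place, preserving any leading whitespace.
sed -i 's/^\([[:space:]]*\)use_lvmetad = 1/\1use_lvmetad = 0/' "$LVM_CONF"

grep 'use_lvmetad' "$LVM_CONF"   # should now show: use_lvmetad = 0
```

Remember to reboot or regenerate the initramfs if your distribution caches lvm.conf there; consult your distribution's documentation.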
Connectivity
- The SSH service must be running on port 22 on the server and any firewalls must be configured to allow IBM Spectrum Copy Data Management to connect to the server using SSH. The SFTP subsystem for SSH must also be enabled.
- The server can be registered using a DNS name or IP address. DNS names must be resolvable by IBM Spectrum Copy Data Management.
- In order to mount clones/copies of data, IBM Spectrum Copy Data Management automatically maps and unmaps LUNs to the servers. Each server must be preconfigured to connect to the relevant storage systems at that site.
  - For Fibre Channel, the appropriate zoning must be configured beforehand.
  - For iSCSI, the servers must be configured beforehand to discover and log in to the targets on the storage servers.
Authentication
- The application server must be registered in IBM Spectrum Copy Data Management using an operating system user that exists on the server (referred to as the "IBM Spectrum Copy Data Management agent user" for the rest of this topic).
- During registration you must provide either a password or a private SSH key that IBM Spectrum Copy Data Management will use to log in to the server.
- For password-based authentication, ensure the password is correctly configured and that the user can log in without facing any other prompts, such as prompts to reset the password.
- For key-based authentication, ensure the public SSH key is placed in the appropriate authorized_keys file for the IBM Spectrum Copy Data Management agent user.
  - Typically, the file is located at /home/<username>/.ssh/authorized_keys
  - Typically, the .ssh directory and all files under it must have their permissions set to 600.
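The key placement can be scripted. Below is a minimal sketch using a scratch directory and a placeholder key so it is safe to run as-is; on a real server, AGENT_HOME would be the agent user's home directory (for example /home/ecxagent) and PUBKEY the user's real public key:

```shell
# AGENT_HOME and PUBKEY are placeholders for illustration only.
AGENT_HOME=$(mktemp -d)
PUBKEY='ssh-ed25519 AAAAC3...example cdm-agent'

# Create .ssh with owner-only access, append the key, then lock the file down.
mkdir -p "$AGENT_HOME/.ssh"
chmod 700 "$AGENT_HOME/.ssh"
printf '%s\n' "$PUBKEY" >> "$AGENT_HOME/.ssh/authorized_keys"
chmod 600 "$AGENT_HOME/.ssh/authorized_keys"

ls -ld "$AGENT_HOME/.ssh" "$AGENT_HOME/.ssh/authorized_keys"
```

When running against a real home directory, also confirm the files are owned by the agent user, since sshd rejects keys in files writable by other users.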
Privileges

The IBM Spectrum Copy Data Management agent user must have the following privileges:
- Privileges to run commands as root and other users using sudo. IBM Spectrum Copy Data Management requires this for various tasks such as discovering storage layouts and mounting and unmounting disks.
  - The sudoers configuration must allow the IBM Spectrum Copy Data Management agent user to run commands without a password.
  - The !requiretty setting must be set.
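For illustration only, a sudoers entry meeting both requirements might look like the following, assuming a hypothetical agent user named ecxagent; see the sample configuration topic for the product's exact recommendation:

```
# /etc/sudoers.d/ecxagent (hypothetical example)
Defaults:ecxagent !requiretty
ecxagent ALL=(ALL) NOPASSWD: ALL
```

Validate any sudoers change with visudo -c before ending your session.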
For examples on creating a new user with the necessary privileges, see Sample Configuration of an IBM
Spectrum Copy Data Management Agent User on page 36.
Restore Jobs
When performing a database or filesystem restore from an XFS filesystem, the restore process may fail if the
"xfsprogs" package version on the destination server is between 3.2.0 and 4.1.9. To resolve the issue,
upgrade xfsprogs to version 4.2.0 or above.
Revert Jobs
When performing File System revert jobs, the following considerations must be met for jobs to successfully complete:
- Ensure that the databases are on independent storage. The underlying storage volume for the databases being reverted should not contain data for other databases and should not contain a datastore that is shared by other virtual machines (VMs) or by other databases not being reverted.
- Ensure that the production databases are not on VMDK disks that are part of a VMware VM snapshot.
- Shut down any application databases on the volumes to be reverted prior to running the revert.
- When the revert action completes, take appropriate steps to restore the applications to use the old data that is now contained on the volume(s) that were reverted.
InterSystems Caché Requirements
Supported Platforms/Configurations
[1] IBM Spectrum Copy Data Management was tested against Caché 2015.3, Caché 2017, and Caché 2018.
[2] See System Requirements on page 23 for supported VMware vSphere versions.
[3] Select the Physical provider type when registering the provider in IBM Spectrum Copy Data Management.
Note that NetApp ONTAP and DellEMC storage systems are not supported. All data files and log files for a
given instance should reside on either a pRDM or virtual disk.
[4] InterSystems Caché servers registered as virtual must have VMware Tools installed and running.
[5] Supported platforms: IBM Power Systems (Little Endian).
[6] On IBM Systems Storage, condense is run during maintenance jobs.
Cluster Support
For cluster configurations, register the application server in IBM Spectrum Copy Data Management using the
cluster IP address. IBM Spectrum Copy Data Management will utilize the cluster IP during application
discovery and snapshot operations. In addition, the user configured in IBM Spectrum Copy Data Management with the cluster must be present on all cluster nodes and should preferably use the same UID and GID across all nodes.
Additionally, on each cluster node, create the file /etc/guestapps.conf with the following contents:
[unixagent]
overrideHostname = <CLUSTER NAME>
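Creating the file can be scripted per node. Below is a sketch that writes the contents to a temporary file for safety; on a real cluster node the target would be /etc/guestapps.conf, and CLUSTER_NAME is a placeholder for the name the cluster is registered under:

```shell
# CLUSTER_NAME is a placeholder; on a real node, redirect to /etc/guestapps.conf.
CLUSTER_NAME='cacheclus1'
CONF=$(mktemp)
printf '[unixagent]\noverrideHostname = %s\n' "$CLUSTER_NAME" > "$CONF"
cat "$CONF"
```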
Software
- The bash and sudo packages must be installed. Sudo must be version 1.7.6p2 or above. Run sudo -V to check the version.
- Python version 2.6.x or 2.7.x must be installed.
- AIX only: If data resides on IBM Spectrum Accelerate storage, the IBM Storage Host Attachment Kit (also known as IBM XIV Host Attachment Kit) must be installed on the server.
- RHEL/OEL/CentOS 6.x only: Ensure the util-linux-ng package is up-to-date by running yum update util-linux-ng. Depending on your version or distribution, the package may be named util-linux.
- RHEL/OEL/CentOS 7.3 and above: A required Perl module, Digest::MD5, is not installed by default. Install the module by running yum install perl-Digest-MD5.
- Linux only: If data resides on LVM volumes, ensure the LVM version is 2.02.118 or later. Run lvm version to check the version and run yum update lvm2 to update the package if necessary.
- Linux only: If data resides on LVM volumes, the lvm2-lvmetad service must be disabled as it can interfere with IBM Spectrum Copy Data Management's ability to mount and resignature volume group snapshots/clones.

  Run the following commands to stop and disable the service:

  systemctl stop lvm2-lvmetad
  systemctl disable lvm2-lvmetad

  Additionally, disable lvmetad in the LVM config file. Edit the file /etc/lvm/lvm.conf and set:

  use_lvmetad = 0
Connectivity
- The SSH service must be running on port 22 on the server and any firewalls must be configured to allow IBM Spectrum Copy Data Management to connect to the server using SSH. The SFTP subsystem for SSH must also be enabled.
- The server can be registered using a DNS name or IP address. DNS names must be resolvable by IBM Spectrum Copy Data Management.
- In order to mount clones/copies of data, IBM Spectrum Copy Data Management automatically maps and unmaps LUNs to the servers. Each server must be preconfigured to connect to the relevant storage systems at that site.
  - For Fibre Channel, the appropriate zoning must be configured beforehand.
  - For iSCSI, the servers must be configured beforehand to discover and log in to the targets on the storage servers.
Authentication
- The application server must be registered in IBM Spectrum Copy Data Management using an operating system user that exists on the server (referred to as the "IBM Spectrum Copy Data Management agent user" for the rest of this topic).
- During registration you must provide either a password or a private SSH key that IBM Spectrum Copy Data Management will use to log in to the server.
- For password-based authentication, ensure the password is correctly configured and that the user can log in without facing any other prompts, such as prompts to reset the password.
- For key-based authentication, ensure the public SSH key is placed in the appropriate authorized_keys file for the IBM Spectrum Copy Data Management agent user.
  - Typically, the file is located at /home/<username>/.ssh/authorized_keys
  - Typically, the .ssh directory and all files under it must have their permissions set to 600.
OS-level authentication must be enabled on the InterSystems Caché server. From the InterSystems Caché
interface, navigate to:
System Administration > Security > System Security > Authentication/CSP Session Options, then
select Allow Operating System authentication.
Next, navigate to System Administration > Security > Services > %Service_Terminal, then enable
Operating System if the option is not already enabled.
The user identity associated with IBM Spectrum Copy Data Management registration must have sufficient
privileges to invoke some system commands, such as those used to find disk information, as well as to run
InterSystems Caché’s "csession" command without asking for a database user name and password. To
enable this feature, use the InterSystems Caché instance owner user after adding the user to "sudoers" file.
The IBM Spectrum Copy Data Management agent user must have the following privileges:
- Privileges to run commands as root and other users using sudo. IBM Spectrum Copy Data Management requires this for various tasks such as discovering storage layouts and mounting and unmounting disks.
  - The sudoers configuration must allow the IBM Spectrum Copy Data Management agent user to run commands without a password.
  - The !requiretty setting must be set.
For examples on creating a new user with the necessary privileges, see Sample Configuration of an IBM
Spectrum Copy Data Management Agent User on page 42.
Backup Jobs
InterSystems Caché backup jobs occur at the instance level. Multiple instances can be added to a single
Backup job definition, and all instances on a given host are automatically discovered.
Note that it is possible to scan in an InterSystems Caché backup failover member instance or an async
member instance and run snapshots against the mirror copy instead of the primary failover member.
Virtual instances can be selected for backup (such as "cachecentos6/cache_2015.3"), but it is recommended
to select Caché instances explicitly. If virtual instances are selected, the associated job definition must be
adjusted when the Caché instances are upgraded to later versions.
Restore Jobs
When performing a database or filesystem restore from an XFS filesystem, the restore process may fail if the
"xfsprogs" package version on the destination server is between 3.2.0 and 4.1.9. To resolve the issue,
upgrade xfsprogs to version 4.2.0 or above.
Caché software is not required on the target host. However, the target host should have similar specifications
to the source host, including operating system and processor.
Note that the following users and groups must be created on the target host: instance owner, effective user for
InterSystems Caché superserver and its jobs, effective group for InterSystems Caché processes, and a group
that has permissions to start and stop InterSystems Caché instances. The user and group IDs should match
those on the source host. The instance will be brought up using the same mount points as those found on the
source machine, so ensure these mounts are not in use on the target.
When restoring to a target with running InterSystems Caché instances, the instances display as valid targets.
Note that IBM Spectrum Copy Data Management will not interact with these instances, but instead bring up a
new instance using mapped mount points. When restoring to a target with no prior InterSystems Caché
instances, IBM Spectrum Copy Data Management creates a placeholder that acts as a restore target named
cache_general. Note that cache_general should only be used as a restore target and should not be selected
for backup.
Single InterSystems Caché databases can be restored through an Instant Disk Restore job, which mounts
physical volumes on the target machine. Granular recovery can then be performed through InterSystems
Caché commands.
- Run the Caché instant restore job in IBM Spectrum Copy Data Management. Running the instant restore job will create the additional directories /boot and /root under the /mnt directory. Additionally, a /cache subdirectory will be created in the /mnt/root directory.
- Move the /cache directory created in the second step to another location and copy /cache from /mnt/root to the / location.

  $ mv /cache /<some other location>
  $ cp -r /mnt/root/cache /

- Bring up the restored Caché instance by issuing the following commands:

  $ cd /cache/bin
  $ ccontrol start cache
RELATED TOPICS:
- Register a Provider on page 79
- Identities Overview on page 134
- System Requirements on page 23
Oracle Requirements
Before registering each Oracle server in IBM Spectrum Copy Data Management, ensure it meets the following
requirements.
Versions:
- Oracle 19c configured as Standalone or RAC [17]

Physical:
- Operating Systems:
  - Red Hat Enterprise Linux / CentOS / Oracle Linux 6.5+ [9]
  - Red Hat Enterprise Linux / CentOS / Oracle Linux 7.0+ [9]
  - SUSE Linux Enterprise Server 11.0 SP4+ [9]
  - SUSE Linux Enterprise Server 12.0+ [9]
- Storage Configuration:
  - Fibre Channel [18]
  - iSCSI [18]
  - NFS

Virtual (VMware) [6, 7, 10, 11, 12]:
- Operating Systems:
  - Red Hat Enterprise Linux / CentOS / Oracle Linux 6.5+ [9]
  - Red Hat Enterprise Linux / CentOS / Oracle Linux 7.0+ [9]
  - SUSE Linux Enterprise Server 11.0 SP4+ [9]
  - SUSE Linux Enterprise Server 12.0+ [9]
- Storage Configuration:
  - Physical RDM backed by Fibre Channel or iSCSI disks attached to ESX [18]
  - VMDK (dependent as well as independent disks) on VMFS datastores (Fibre Channel / iSCSI) or NFS datastores [18]
  - iSCSI disks directly attached to the guest operating system [18]
  - NFS share mapped directly to the guest

Storage Systems:
- IBM FlashSystem A9000/A9000R [22]
- IBM XIV storage systems [22]
- NetApp ONTAP storage systems running the following versions [14][15]:
  - Data ONTAP 8.1.0, 8.2.0 and later
  - Data ONTAP 9.x
  - Clustered Data ONTAP 8.1, 8.2, 8.3, 9.4, 9.5, 9.6 and later
- Pure Storage running Pure APIs 1.5 and later:
  - FlashArray//c
  - FlashArray//m
  - FlashArray//x
  - FlashArray 4xx series
[1] For Oracle 12c multitenant databases, IBM Spectrum Copy Data Management supports protection and
recovery of the container database, including all pluggable databases under it. Granular recovery of specific
PDBs can be performed via Instant Disk Restore recovery combined with RMAN.
[2] Standalone databases protected by IBM Spectrum Copy Data Management can be recovered to the same
or another standalone server as well as to a RAC installation. When recovering from standalone to RAC, if the
source database uses Automatic Storage Management, then it will be successfully recovered to all nodes in
the destination cluster. If the source database uses non-ASM storage, the database will be mounted only on
the first node in the destination RAC. Source disks for standalone databases restoring to a virtual RAC
environment must be thick provisioned.
RAC databases protected by IBM Spectrum Copy Data Management can be recovered to the same or
another RAC installation as well as to a standalone server. In order to recover a RAC database to a
standalone server, the destination server must have Grid Infrastructure installed and an ASM instance must
be running.
[3] Oracle Data Guard primary and secondary databases support inventory and backup operations. Oracle
Data Guard databases will always be restored as primary databases without any Data Guard configurations
enabled.
[4] Oracle Flex ASM is not supported.
[5] RAC database recoveries are not server pool-aware. IBM Spectrum Copy Data Management can recover
databases to a RAC, but not to specific server pools.
[6] IBM Spectrum Copy Data Management supports recovering databases from a source physical server to a
destination virtual server by provisioning disks as physical RDMs. Similarly, IBM Spectrum Copy Data
Management can recover databases from a source virtual server that uses physical RDM to a destination
physical server. However, source databases on VMDK virtual disks can only be recovered to another virtual
server and not to a physical server.
[7] IBM Spectrum Copy Data Management does not support VADP-based protection of virtual Oracle servers.
Oracle data must reside directly on one of the supported storage systems listed above.
[8] On AIX LPAR/VIO servers, Oracle data must reside on disks attached to the server using NPIV. Virtual
SCSI disks are not supported.
[9] Linux LVM volumes containing Oracle data must use LVM version 2.02.118 or above. On SLES 11 SP4,
this version of LVM may not be available through the official repositories, in which case, databases running on
or recovered to SLES 11 SP4 systems must use ASM or non-LVM filesystems only.
[10] Masking and DevOps recoveries are not supported on virtual servers.
[11] See System Requirements on page 23 for supported VMware vSphere versions.
[12] Oracle servers registered as virtual must have VMware Tools installed and running.
[13] Oracle 12c multithreaded configurations are not supported.
[14] Data masking is not supported for Oracle in NetApp storage environments. Masking is not supported on
source databases on NFS (a copy of which will be cloned and masked) or on source databases on replica
copies. Instead of the default mirror copy, you must select a snapshot copy as a replication source.
[15] NetApp systems running in 7-Mode are not supported.
[16] The ORACLE_HOME directory may reside on ACFS. The data and log directories are required to be on
ACFS. IBM Spectrum Copy Data Management does not protect the Oracle installation that is contained in the
ORACLE_HOME directory.
[17] Oracle 19c standalone is supported for AIX 7.2. Oracle 19c RAC is supported for both IBM and Pure
storage systems.
Additional Requirements:
l Oracle database data and the flash recovery area (FRA) must reside on supported storage systems. IBM
Spectrum Copy Data Management can back up archived logs to a supported storage system if they are
not already on one.
l During an Instant Database Restore, failures may occur if the new name specified for the restored
database differs from an existing database name only by a numerical suffix. For clustered instances of
Oracle databases, the appliance always uses the global database name in the UI. During the inventory and
restore processes, individual instances using the numerical suffixes of the cluster must be correlated to
the global database name. The ambiguity arises when, for example, "Production12" is discovered: is this
instance 12 of the "Production" database, instance 2 of the "Production1" database, or a database named
"Production12"?
[18] HPE Nimble Storage must be version 5.2 or later to support iSCSI and Fibre Channel.
[19] Oracle PIT (point-in-time) recovery is not supported with HPE Nimble Storage.
[20] The make permanent option is not available for HPE Nimble Storage using physical disks.
[21] Oracle ACFS on AIX 7.2 is supported. Oracle ACFS on AIX is not supported for flashcopy incremental
backups and iSCSI protocol.
[22] On IBM Systems Storage, condense is run during maintenance jobs.
For more information about Oracle requirements, see Oracle Database Support FAQ on page 572.
Oracle Support for VMware Virtual Machines
For Oracle servers running as VMware virtual machines, UUID must be enabled to perform Oracle-based
Backup functions. To enable, power off the guest machine through the vSphere client, then select the guest
and click Edit Settings. Select Options, then General under the Advanced section. Select Configuration
Parameters..., then find the disk.EnableUUID parameter. If set to FALSE, change the value to TRUE. If the
parameter is not available, add it by clicking Add Row, set the value to TRUE, then power on the guest.
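If you prefer to script this change, the same parameter can be set from the command line. The sketch below assumes the open-source govc CLI (from the VMware govmomi project) is installed and configured with your vCenter credentials; the VM name "oradb01" is a placeholder.

```shell
# Hypothetical example: enable disk.EnableUUID on a guest named "oradb01"
# using govc. The guest must be powered off before the parameter is changed.
govc vm.power -off oradb01
govc vm.change -vm oradb01 -e disk.enableUUID=TRUE
govc vm.power -on oradb01
```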
Oracle support for VMware virtual machines requires Oracle data/logs to be stored on VMDK virtual disks or
physical RDMs. Virtual RDM disks are not supported. The VMDKs must reside on a datastore created on
LUNs from supported storage systems. Similarly, the physical RDMs must be backed by LUNs from
supported storage systems.
For Oracle RAC clustered nodes running prior to vSphere 6.0, virtual machines cannot use a virtual SCSI
controller whose SCSI Bus Sharing option is set to None. This is required to ensure that IBM Spectrum Copy
Data Management can hot-add shared virtual disks to the cluster nodes. For vSphere 6.0 and above, this
requirement does not apply. Instead, if an existing shared SCSI controller is not found on vSphere 6.0 and
above, IBM Spectrum Copy Data Management automatically enables the "multi-writer" sharing option for
each shared virtual disk.
Restore Jobs
When performing a database or filesystem restore from an XFS filesystem, the restore process may fail if the
"xfsprogs" package version on the destination server is between 3.2.0 and 4.1.9. To resolve the issue,
upgrade xfsprogs to version 4.2.0 or above.
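As a quick pre-check, the affected range can be tested with GNU version sort. This is a hypothetical helper, not part of the product; obtain the installed version from your package manager (for example, rpm -q --qf '%{VERSION}' xfsprogs on RPM-based distributions).

```shell
# Hypothetical helper: succeeds if the given version lies in the affected
# 3.2.0 - 4.1.9 range (inclusive). Relies on GNU sort's -V version ordering:
# the middle line of the sorted triple equals $1 exactly when $1 is in range.
in_bad_range() {
  mid=$(printf '%s\n' "$1" 3.2.0 4.1.9 | sort -V | sed -n 2p)
  [ "$mid" = "$1" ]
}
in_bad_range 4.0.1 && echo "affected: upgrade xfsprogs to 4.2.0 or above"
```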
Software
l The bash and sudo packages must be installed. Sudo must be version 1.7.6p2 or above. Run sudo -V to
check the version.
l Python version 2.6.x or 2.7.x must be installed.
l AIX only: If Oracle data resides on IBM Spectrum Accelerate storage, the IBM Storage Host Attachment
Kit (also known as IBM XIV Host Attachment Kit) must be installed on the Oracle server.
l RHEL/OEL/CentOS 6.x only: Ensure the util-linux-ng package is up-to-date by running yum update
util-linux-ng. Depending on your version or distribution, the package may be named util-linux.
l RHEL/OEL/CentOS 7.3 and above: A required Perl module, Digest::MD5, is not installed by default.
Install the module by running yum install perl-Digest-MD5.
l Linux only: If Oracle data resides on LVM volumes, ensure the LVM version is 2.02.118 or later. Run
lvm version to check the version and run yum update lvm2 to update the package if necessary.
l Linux only: If Oracle data resides on LVM volumes, the lvm2-lvmetad service must be disabled as it can
interfere with IBM Spectrum Copy Data Management's ability to mount and resignature volume group
snapshots/clones.
Run the following commands to stop and disable the service:
$ systemctl stop lvm2-lvmetad
$ systemctl disable lvm2-lvmetad
Additionally, disable lvmetad in the LVM config file. Edit the file /etc/lvm/lvm.conf and set:
use_lvmetad = 0
Connectivity
l The SSH service must be running on port 22 on the server and any firewalls must be configured to allow
IBM Spectrum Copy Data Management to connect to the server using SSH. The SFTP subsystem for
SSH must also be enabled.
l The server can be registered using a DNS name or IP address. DNS names must be resolvable by IBM
Spectrum Copy Data Management.
l When registering Oracle RAC nodes, register each node using its physical IP or name. Do not use a
virtual name or Single Client Access Name (SCAN).
l In order to mount clones/copies of Oracle data, IBM Spectrum Copy Data Management automatically
maps and unmaps LUNs to the Oracle servers. Each server must be preconfigured to connect to the
relevant storage systems at that site.
o For Fibre Channel, the appropriate zoning must be configured beforehand.
o For iSCSI, the Oracle servers must be configured beforehand to discover and log in to the targets on
the storage servers.
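For iSCSI, the pre-configuration typically amounts to a one-time discovery and login from each Oracle server. The following sketch assumes the open-iscsi initiator tools are installed and uses a documentation-only portal address (192.0.2.10); substitute your storage system's portal.

```shell
# Sketch: discover and log in to iSCSI targets on the storage system so that
# IBM Spectrum Copy Data Management can later map LUNs to this server.
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -p 192.0.2.10 --login
# Make the login persistent across reboots:
iscsiadm -m node -p 192.0.2.10 --op update -n node.startup -v automatic
```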
Authentication
l The Oracle server must be registered in IBM Spectrum Copy Data Management using an operating
system user that exists on the Oracle server (referred to as "IBM Spectrum Copy Data Management agent
user" for the rest of this topic).
l During registration you must provide either a password or a private SSH key that IBM Spectrum Copy
Data Management will use to log in to the server.
l For password-based authentication ensure the password is correctly configured and that the user can log
in without facing any other prompts, such as prompts to reset the password.
l For key-based authentication ensure the public SSH key is placed in the appropriate authorized_keys file
for the IBM Spectrum Copy Data Management agent user.
o Typically, the file is located at /home/<username>/.ssh/authorized_keys
o Typically, the .ssh directory must have its permissions set to 700 and the files under it to 600.
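For key-based registration, the setup can be sketched as follows. The user name "cdmagent" matches the sudoers example later in this topic, but both it and the key file name are placeholders; run as root, or as the agent user with the chown removed.

```shell
# Sketch: install the appliance's public key for the agent user with the
# expected permissions. "cdmagent" and appliance_id_rsa.pub are placeholders.
AGENT_HOME=/home/cdmagent
mkdir -p "$AGENT_HOME/.ssh"
chmod 700 "$AGENT_HOME/.ssh"                  # directory needs the execute bit
cat appliance_id_rsa.pub >> "$AGENT_HOME/.ssh/authorized_keys"
chmod 600 "$AGENT_HOME/.ssh/authorized_keys"
chown -R cdmagent:cdmagent "$AGENT_HOME/.ssh"
```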
Privileges
The IBM Spectrum Copy Data Management agent user must have the following privileges:
l Privileges to run commands as root and other users using sudo. IBM Spectrum Copy Data Management
requires this for various tasks such as discovering storage layouts and mounting and unmounting disks.
o The sudoers configuration must allow the IBM Spectrum Copy Data Management agent user to run
commands without a password.
o The !requiretty setting must be set, that is, sudo must not require a TTY for the agent user.
o The env_keep setting must allow the ORACLE_HOME and ORACLE_SID environment variables to
be retained.
l Privileges to read the Oracle inventory. IBM Spectrum Copy Data Management requires this to discover
and collect information about Oracle homes and databases.
o To achieve this, the IBM Spectrum Copy Data Management agent user must belong to the Oracle
inventory group, typically named oinstall.
o The agent user must have read permissions for all paths under the ORACLE_HOME directory. This
includes privileges to any Oracle home directory pointed to by HOMELOC or any other environment
variables.
l SYSDBA privileges for database instances. IBM Spectrum Copy Data Management needs to perform
database tasks like querying instance details, hot backup, RMAN cataloging, as well as starting/stopping
instances during recovery.
o To achieve this, the IBM Spectrum Copy Data Management agent user must belong to the OSDBA
operating system group, typically named dba.
o In the case of multiple Oracle homes each with a different OSDBA group, the IBM Spectrum Copy
Data Management agent user must belong to each group.
l SYSASM privileges, if Automatic Storage Management (ASM) is installed. IBM Spectrum Copy Data
Management needs to perform storage tasks like querying ASM disk information, as well as renaming,
mounting, and unmounting diskgroups.
o To achieve this, the IBM Spectrum Copy Data Management agent user must belong to the OSASM
operating system group, typically named asmadmin.
l Shell user limits for the IBM Spectrum Copy Data Management agent user must be the same as those for
the user that owns the Oracle home, typically named oracle. Refer to Oracle documentation for
requirements and instructions on setting shell limits. Run ulimit -a as both the oracle user and the IBM
Spectrum Copy Data Management agent user and ensure their settings are identical.
For examples on creating a new user with the necessary privileges, see Sample Configuration of an IBM
Spectrum Copy Data Management Agent User on page 53.
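As a rough sketch of the group memberships described above (user and group names are the typical defaults, not mandated by the product; see the referenced sample configuration for the full procedure):

```shell
# Hypothetical agent user setup on Linux; group names assume the typical
# defaults oinstall (inventory), dba (OSDBA), and asmadmin (OSASM).
useradd -m cdmagent
usermod -a -G oinstall,dba,asmadmin cdmagent   # on AIX, omit -a
# Compare shell limits with the Oracle home owner; they must be identical:
su - oracle   -c 'ulimit -a'
su - cdmagent -c 'ulimit -a'
```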
To ensure that the configuration file is readable by the agent user, run the following command:
chmod 644 /etc/guestapps_oraHomes.conf
Database Discovery
IBM Spectrum Copy Data Management discovers Oracle installations and databases by looking through the
files /etc/oraInst.loc and /etc/oratab, as well as the list of running Oracle processes. If the files are
not present in their default location, the "locate" utility must be installed on the system so that IBM Spectrum
Copy Data Management can search for alternate locations of these files.
IBM Spectrum Copy Data Management discovers databases and their storage layouts by connecting to
running instances and querying the locations of their datafiles, log files, etc. In order for IBM Spectrum Copy
Data Management to correctly discover databases during cataloging and copy operations, databases must be
in "MOUNTED," "READ ONLY," or "READ WRITE" mode. IBM Spectrum Copy Data Management cannot
discover or protect database instances that are shut down.
Databases must be started using a server parameter file (spfile). IBM Spectrum Copy Data Management does
not support copy operations for databases that are started using a text-based parameter file (pfile).
Additionally, IBM Spectrum Copy Data Management creates aliases/symbolic links with names that follow a
consistent pattern. To ensure that ASM is able to discover the disks mapped by IBM Spectrum Copy Data
Management, you must update the ASM_DISKSTRING parameter to add this pattern.
Linux:
IBM Spectrum Copy Data Management creates udev rules for each disk to set the appropriate ownership and
permissions. The udev rules also create symbolic links of the form /dev/ecx-asmdisk/<diskId> that point to the
appropriate device under /dev.
To ensure the disks are discoverable by ASM, add the following pattern to your existing ASM_DISKSTRING:
/dev/ecx-asmdisk/*
AIX:
IBM Spectrum Copy Data Management creates a device node (using mknod) of the form /dev/ecx_asm<diskId>
that points to the appropriate hdisk under /dev. IBM Spectrum Copy Data Management also sets the
appropriate ownership and permissions for this new device.
To ensure that the disks are discoverable by ASM, add the following pattern to your existing
ASM_DISKSTRING: /dev/ecx_asm*
Notes:
l If the existing value of the ASM_DISKSTRING is empty, you may have to first set it to an appropriate value
that matches all existing disks, then append the value above.
l If the existing value of the ASM_DISKSTRING is broad enough to discover all disks (for example,
/dev/*), you may not need to update it.
l Refer to Oracle documentation for details about retrieving and modifying the ASM_DISKSTRING
parameter.
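For example, assuming the existing disk string is /dev/sd* (substitute your own current value), the pattern can be appended from the Grid Infrastructure home as the grid user. This is a sketch of one way to set the parameter, not the only one.

```shell
# Sketch: append the Linux pattern to ASM_DISKSTRING via SQL*Plus, connected
# to the ASM instance with SYSASM privileges. '/dev/sd*' is a placeholder
# for the value that matches your existing disks.
sqlplus -S / as sysasm <<'EOF'
SHOW PARAMETER asm_diskstring;
ALTER SYSTEM SET asm_diskstring = '/dev/sd*', '/dev/ecx-asmdisk/*' SCOPE=BOTH;
EOF
```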
Note: On AIX, omit the append argument (-a) when using the usermod command.
l Place the following lines at the end of your sudoers configuration file, typically /etc/sudoers. If your existing
sudoers file is configured to import configuration from another directory (for example, /etc/sudoers.d), you
can also place the lines in a new file in that directory:
Defaults:cdmagent !requiretty
Defaults:cdmagent env_keep+="ORACLE_HOME"
Defaults:cdmagent env_keep+="ORACLE_SID"
cdmagent ALL=(ALL) NOPASSWD:ALL
User's Guide SAP HANA Requirements
Supported Platforms/Configurations
[Table fragment; the recoverable entries from the supported platforms table are:]
l Virtual (VMware) [1, 3, 4]:
o Red Hat Enterprise Linux 6.5+
o Red Hat Enterprise Linux 7.0+
o SUSE Linux Enterprise Server 12.0+
l Supported disks:
o Physical RDM backed by Fibre Channel or iSCSI disks attached to ESX [2, 6]
o VMDK (dependent as well as independent disks) on VMFS datastores backed by Fibre Channel or
iSCSI disks attached to ESX [6]
o iSCSI disks directly attached to the guest operating system [2, 6]
l Supported storage:
o IBM FlashSystem A9000/A9000R [10]
o IBM XIV storage systems [10]
o Pure Storage running Pure APIs 1.5 and later: FlashArray//c, FlashArray//m, FlashArray//x,
FlashArray 4xx series
[1] See System Requirements on page 23 for supported VMware vSphere versions.
[2] Select the Physical provider type when registering the provider in IBM Spectrum Copy Data Management.
Note that NetApp ONTAP and DellEMC storage systems are not supported.
[3] SAP HANA servers registered as virtual must have VMware Tools installed and running.
[4] Supported platforms: Intel based x86, IBM Power Systems (Little Endian)
[5] Single tenant configurations can be automatically protected using storage snapshots.
[6] HPE Nimble Storage must be version 5.2 or later to support iSCSI and Fibre Channel.
[7] SAP HANA revert is not supported with HPE Nimble Storage.
[8] The make permanent option is not available for HPE Nimble Storage using physical disks.
[9] Only the xfs filesystem is supported for SAP HANA data and logs locations.
[10] On IBM Systems Storage, condense is run during maintenance jobs.
[11] IBM Spectrum Copy Data Management supports protection and recovery of multitenant databases on
SAP HANA 2.0 SPS 04 and SPS 05.
Prerequisites
l The SAP HANA Client must be installed on your SAP HANA machine.
l Create a symbolic link to the SAP HANA Client installation directory through the following command:
ln -s <installation directory of SAP HANA Client> /opt/hana.
For example, if SAP HANA Client is installed in /hana/shared/<SID>/hdbclient, you would enter
the following: ln -s /hana/shared/<SID>/hdbclient/ /opt/hana.
l For SAP HANA 2.0 SPS 04 and SPS 05, the hdbcli module must be installed. The hdbcli module
should only be installed after the complete installation of the HANA client. The module may be extracted
from <path to directory>/hdbclient/hdbcli-<version>.tar.gz.
l Log backups require that the log backup option is enabled on the SAP HANA system. Additionally, the
Universal Destination Directory for log backups must match the directory that is configured on the SAP
HANA system when you enable log backups.
l Each SAP HANA system has a system ID (SID). It is good practice to have the SID in the path. For
example, if /hana/logbackup is the mount point, create these directories:
mkdir /hana/logbackup/<SID>
mkdir /hana/logbackup/<SID>/catalog
To ensure that database logs can be saved, permissions need to be appropriately specified for the
created directories:
chown --reference=/hana/shared/<SID>/HDB01 /hana/logbackup/<SID>
chmod --reference=/hana/shared/<SID>/HDB01 /hana/logbackup/<SID>
chown --reference=/hana/shared/<SID>/HDB01 /hana/logbackup/<SID>/catalog
chmod --reference=/hana/shared/<SID>/HDB01 /hana/logbackup/<SID>/catalog
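Assuming the client is installed under /hana/shared/<SID>/hdbclient as in the example above, the hdbcli module can be installed with pip directly from the shipped archive. This is a sketch; pip accepts an sdist tarball as an install argument, and <SID> and <version> remain placeholders for your system ID and client version.

```shell
# Sketch: install the hdbcli Python module from the HANA client directory
# after the HANA client installation has completed.
sudo pip install /hana/shared/<SID>/hdbclient/hdbcli-<version>.tar.gz
```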
Software
l The bash and sudo packages must be installed. Sudo must be version 1.7.6p2 or above. Run sudo -V to
check the version.
l Python version 2.6.x or 2.7.x must be installed.
l RHEL/OEL/CentOS 6.x only: Ensure the util-linux-ng package is up-to-date by running yum update
util-linux-ng. Depending on your version or distribution, the package may be named util-linux.
l RHEL/OEL/CentOS 7.3 and above: A required Perl module, Digest::MD5, is not installed by default.
Install the module by running yum install perl-Digest-MD5.
l SLES only: The Python pip package must be installed. Follow these steps to install the pip module
on SLES:
$ sudo zypper addrepo https://download.opensuse.org/repositories/Cloud:Tools/SLE_12_SP4/Cloud:Tools.repo
$ sudo zypper refresh
$ sudo zypper install python-pip
l Linux only: If data resides on LVM volumes, ensure the LVM version is 2.02.118 or later. Run lvm
version to check the version and run yum update lvm2 to update the package if necessary.
l Linux only: If data resides on LVM volumes, the lvm2-lvmetad service must be disabled as it can
interfere with IBM Spectrum Copy Data Management's ability to mount and resignature volume group
snapshots/clones.
Run the following commands to stop and disable the service:
systemctl stop lvm2-lvmetad
systemctl disable lvm2-lvmetad
Additionally, disable lvmetad in the LVM config file. Edit the file /etc/lvm/lvm.conf and set:
use_lvmetad = 0
Connectivity
l The SSH service must be running on port 22 on the server and any firewalls must be configured to allow
IBM Spectrum Copy Data Management to connect to the server using SSH. The SFTP subsystem for
SSH must also be enabled.
l The server can be registered using a DNS name or IP address. DNS names must be resolvable by IBM
Spectrum Copy Data Management.
l In order to mount clones/copies of data, IBM Spectrum Copy Data Management automatically maps and
unmaps LUNs to the servers. Each server must be preconfigured to connect to the relevant storage
systems at that site.
o For Fibre Channel, the appropriate zoning must be configured beforehand.
o For iSCSI, the servers must be configured beforehand to discover and log in to the targets on the
storage servers.
Authentication
l The application server must be registered in IBM Spectrum Copy Data Management using an operating
system user that exists on the server (referred to as "IBM Spectrum Copy Data Management agent user"
for the rest of this topic).
l During registration you must provide either a password or a private SSH key that IBM Spectrum Copy
Data Management will use to log in to the server.
l For password-based authentication ensure the password is correctly configured and that the user can log
in without facing any other prompts, such as prompts to reset the password.
l For key-based authentication ensure the public SSH key is placed in the appropriate authorized_keys file
for the IBM Spectrum Copy Data Management agent user.
o Typically, the file is located at /home/<username>/.ssh/authorized_keys
o Typically, the .ssh directory must have its permissions set to 700 and the files under it to 600.
Registration
When registering a SAP HANA provider in IBM Spectrum Copy Data Management, note the following:
l The format for the port number is 3<instance number>15. So, for example, if the instance number is 07,
then enter the following port number: 30715.
l The System Credentials are the credentials required to access the system. These credentials are used to
log in to the system and perform restore operations.
l The Database Credentials are used for querying the database and performing copy operations. The user
must have sufficient privileges to access the database.
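The port convention can be reproduced with simple shell string interpolation; the instance number 07 below is only an example.

```shell
# Illustrates the 3<instance number>15 port convention for SAP HANA
# registration; instance numbers are two digits, so instance 07 yields 30715.
instance=07
port="3${instance}15"
echo "$port"   # prints 30715
```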
Privileges
The IBM Spectrum Copy Data Management agent user must have the following privileges:
l Privileges to run commands as root and other users using sudo. IBM Spectrum Copy Data Management
requires this for various tasks such as discovering storage layouts and mounting and unmounting disks.
o The sudoers configuration must allow the IBM Spectrum Copy Data Management agent user to run
commands without a password.
o The !requiretty setting must be set, that is, sudo must not require a TTY for the agent user.
For examples on creating a new user with the necessary privileges, see SAP HANA Requirements on page
55.
Restore Jobs
When performing a database or filesystem restore from an XFS filesystem, the restore process may fail if the
"xfsprogs" package version on the destination server is between 3.2.0 and 4.1.9. To resolve the issue,
upgrade xfsprogs to version 4.2.0 or above.
Revert Jobs
When performing revert jobs on SAP HANA, special considerations must be met for jobs to complete
successfully:
l SAP HANA data and the operating system (OS) filesystem must be on separate datastores.
l Ensure that the databases are on independent storage. The underlying storage volume for the databases
being reverted should not contain data for other databases and should not contain a datastore that is
shared by other virtual machines (VMs) or by other databases not being reverted.
l Ensure that the production databases are not on VMDK disks that are part of a VMware VM snapshot.
l All VM snapshots must be removed from the SAP HANA server prior to executing the revert function.
l Production databases will be automatically shut down during the revert.
When creating a job, you will need to set the default revert action for the job. Only SAP HANA can utilize the
revert database functionality. This behavior is controlled through the Revert Database option during the job
creation process:
l Enabled – Always reverts the database.
l Disabled – Never reverts the database.
l User Selection – Allows the user to make the determination to revert the database when the job session
is pending.
RELATED TOPICS:
l Register a Provider on page 79
l Identities Overview on page 134
l System Requirements on page 23
User's Guide Microsoft SQL Server Requirements
[Table fragment; the recoverable entries from the supported platforms table are:]
l SQL 2014 on Windows Server 2012 R2 [1]
l SQL 2014 on Windows Server 2016 [9] [10] [11]
l SQL 2016 on Windows Server 2012 R2 [1]
l SQL 2016 on Windows Server 2016
l SQL 2017 on Windows Server 2019
l SQL 2019 on Windows Server 2019
l Physical [7] [8], connected via Fibre Channel [12] or iSCSI [12]
l Supported storage:
o IBM Spectrum Virtualize Software 7.3 and later/8.1.2 and later: IBM SAN Volume Controller [15],
IBM Storwize [15], IBM FlashSystem V9000 and 9100 [15]
o Pure Storage running Pure APIs 1.5 and above: FlashArray//c, FlashArray//m, FlashArray//x,
FlashArray 4xx series
l DellEMC Unity:
o EMC Unity 300, 400, 500, 600 (All-Flash and Hybrid Flash)
o EMC UnityVSA
o EMC VNXe 1600 running version 3.1.3+
o EMC VNXe 3200 running version 3.1.1+
[2] Note that Cluster Shared Volumes (CSV) are not supported.
[3] See System Requirements for supported VMware vSphere versions.
[4] Select the Physical provider type when registering the provider in IBM Spectrum Copy Data Management.
Recoveries require direct access to storage. Note that NetApp ONTAP and DellEMC storage systems are not
supported.
[5] vRDMs are supported through VM Replication jobs.
[6] Independent disks are supported only if the underlying storage utilizes supported storage systems.
Register the SQL resource as Physical when configuring the provider in IBM Spectrum Copy Data
Management. Note that independent disks do not allow snapshots to be taken in VMware virtual scenarios.
The above listed IBM Spectrum Accelerate, IBM Spectrum Virtualize, and Pure Storage FlashArrays are
supported for physical registration.
[7] When registering physical SQL servers, it is recommended to register via the DNS server. The IBM
Spectrum Copy Data Management appliance must be resolvable and routable by the DNS server; the
physical SQL server will communicate back to IBM Spectrum Copy Data Management through DNS.
[8] Recovery for target servers registered as Physical provider types requires direct access to storage.
[9] Any Windows node with iSCSI or Fibre Channel access to the storage can be selected as a proxy server,
provided that the node is not part of the original cluster. It is recommended to select a standalone virtual or
physical Windows node as a proxy server.
[10] For physical SQL servers you must allow outgoing connections to port 8443 on the IBM Spectrum Copy
Data Management appliance from the SQL server.
[11] Dynamic disks are not supported.
[12] HPE Nimble Storage must be version 5.2 or later to support iSCSI and Fibre Channel.
[13] The make permanent option is not available for HPE Nimble Storage using physical disks.
[14] SQL PIT (point-in-time) recovery is not supported with HPE Nimble Storage.
SQL servers residing on any storage can also be protected to supported storage systems through
VM Replication jobs.
For both physical and virtual SQL environments, point-in-time recoveries beyond the last snapshot taken are
incompatible with workflows utilizing more than one Site. In a virtual environment, the SQL server, associated
vCenter, and storage must be registered to the same site. In a physical environment, the SQL server and
storage must be registered to the same site.
[15] On IBM Systems Storage, condense is run during maintenance jobs.
For more information about Microsoft SQL Server requirements, see Microsoft SQL Server Support FAQ on
page 592.
SQL Support for VMware Virtual Machines
UUID must be enabled to perform Microsoft SQL-based backup functions. To enable, power off the guest
machine through the vSphere client, then select the guest and click Edit Settings. Select Options, then
General under the Advanced section. Select Configuration Parameters..., then find the disk.EnableUUID
parameter. If set to FALSE, change the value to TRUE. If the parameter is not available, add it by clicking Add
Row, set the value to TRUE, then power on the guest.
The virtual machine must use SCSI disks only; dynamic disks are not supported.
The latest VMware Tools must be installed on the virtual machine node.
In-Memory OLTP Requirements and Limitations
In-Memory OLTP is a memory-optimized database engine used to improve database application
performance, supported in SQL 2014 and 2016. Note the following IBM Spectrum Copy Data Management
requirements and limitations for In-Memory OLTP usage:
l The maximum restore file path must be less than 256 characters, which is a SQL requirement. If the
original path exceeds this length, consider using a customized restore file path to reduce the length.
l The metadata that can be restored is subject to VSS and SQL restore capabilities.
The virtual machine node DNS name must be resolvable and routable from the IBM Spectrum Copy Data
Management appliance.
The user identity must have sufficient rights to install and start the IBM Spectrum Copy Data Management
Tools Service on the node. This includes "Log on as a service" rights. For more information about the "Log on
as a service" right, see https://technet.microsoft.com/en-us/library/cc794944.aspx.
The default security policy uses the Windows NTLM protocol, and the user identity format follows the default
domain\Name format.
You must manually create a directory to store VSS provider logs when running IBM Spectrum Copy Data
Management 2.2.6 and earlier. Create the following directory structure on the SQL server: c:\temp\CDM\logs
Kerberos Requirements
Kerberos-based authentication can be enabled through a configuration file on the IBM Spectrum Copy Data
Management appliance. This will override the default Windows NTLM protocol.
For Kerberos-based authentication only, the user identity must be specified in the username@FQDN format.
The username must be able to authenticate using the registered password to obtain a ticket-granting ticket
(TGT) from the key distribution center (KDC) on the domain specified by the fully qualified domain name.
Kerberos authentication also requires that the clock skew between the Domain Controller and the IBM
Spectrum Copy Data Management appliance is less than 5 minutes. Note that the default Windows NTLM
protocol is not time dependent.
Privileges
On the SQL server, the system login credential must have public and sysadmin permissions enabled, plus
permission to access cluster resources in a SQL AlwaysOn environment. If one user account is used for all
SQL functions, a Windows login must be enabled for the SQL server, with public and sysadmin permissions
enabled.
Every SQL instance can use a specific user account to access the resources of that particular SQL instance.
To perform log backups, the SQL user registered with IBM Spectrum Copy Data Management must have the
sysadmin permission enabled to manage SQL server agent jobs. If the SQL server agent service user is the
default NT user, the agent will use that account to enable/access log backup jobs.
RELATED TOPICS:
l Microsoft SQL Server Support FAQ on page 592
l Register a Provider on page 79
l Identities Overview on page 134
l System Requirements on page 23
AWS Requirements
Review the following requirements and prerequisites for configuring an AWS provider for use with IBM
Spectrum Copy Data Management.
IBM Spectrum Copy Data Management supports the on-premises cached storage gateway. An OVA is
downloaded, deployed, and activated through a wizard.
See docs.aws.amazon.com/storagegateway/latest/userguide/on-premises-gateway-common.html for more
information.
For cache and upload buffer settings, best practices are defined in the following Amazon AWS topics:
docs.aws.amazon.com/storagegateway/latest/userguide/managing-cache-common.html
docs.aws.amazon.com/storagegateway/latest/userguide/managing-upload-buffer-common.html
Once the storage gateway is created with AWS and vCenter, register the AWS provider in IBM Spectrum
Copy Data Management. IBM Spectrum Copy Data Management will automatically discover the storage
gateway when a job is created. Note that 32 volumes are supported per storage gateway.
RELATED TOPICS:
l Register a Provider on page 79
l Identities Overview on page 134
l System Requirements on page 23
User's Guide Install IBM Spectrum Copy Data Management as a Virtual Appliance
67
User's Guide Install IBM Spectrum Copy Data Management as a Virtual Appliance
and virtual disk files are stored on the datastore. Select a datastore large enough to accommodate the
virtual machine and all of its virtual disk files. Click Next.
6. Select a disk format to store the virtual disks. It is recommended that you select thick provisioning, which
is preselected for optimized performance. Thin provisioning requires less disk space, but may impact
performance. Click Next.
7. Select networks for the deployed template to use. Several networks on the ESX server may be
available; view them by clicking Destination Networks. Select a destination network that allows you to
define the appropriate IP address allocation for the virtual machine deployment. Click Next.
8. Enter network properties for the virtual machine's default gateway, DNS, IP address, and netmask.
Leave the fields blank to retrieve settings from a DHCP server; in that case, the virtual machine must
have access to a DHCP server on the configured destination network. Click Next.
9. Review your template selections. Click Finish to exit the wizard and to start deployment of the OVF
template. Deployment might take significant time.
10. After OVF template deployment completes, power on your newly created virtual machine. This can be
done from vSphere Client.
Note: The virtual machine must remain powered on for the IBM Spectrum Copy Data Management
application to be accessible.
11. Make a note of the IP address of the newly created virtual machine. This is needed to log on to the
application. Find the IP address in vSphere Client by clicking your newly created virtual machine and
looking in the Summary tab.
Note: You must allow several minutes for IBM Spectrum Copy Data Management to initialize completely.
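The static network properties entered in step 8 (IP address, netmask, default gateway) can be sanity-checked before deployment. A pre-check using Python's standard ipaddress module — a convenience sketch, not part of the product:

```python
import ipaddress

def static_config_ok(ip: str, netmask: str, gateway: str) -> bool:
    """Return True when the IP and gateway fall in the same subnet
    implied by the netmask (a sanity check, not full validation)."""
    try:
        network = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
        return ipaddress.ip_address(gateway) in network
    except ValueError:
        # malformed address or netmask
        return False
```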
NEXT STEPS:
l Set your local time zone. See Set Time Zone on page 525.
l After the first use, you can enable additional users to log on by linking to an LDAP server.
l Start IBM Spectrum Copy Data Management and begin using it from any supported
web browser. See Start IBM Spectrum Copy Data Management on page 69.
RELATED TOPICS:
l Start IBM Spectrum Copy Data Management on page 69
Start IBM Spectrum Copy Data Management
NEXT STEPS:
l After the first use, enable additional users to log on by adding native users or linking to
an LDAP server. See Role-Based Access Control Overview on page 109.
l Add storage systems and virtual machine resources to the IBM Spectrum Copy Data
Management database. See Register a Provider on page 79 and Jobs Overview on
page 169.
l Search or browse for objects that match certain criteria. See Search for Objects on
page 355 and Browse Inventory on page 366.
l Generate reports with predefined or customized parameters. See Report Overview on
page 369.
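Since the appliance can take several minutes to initialize, scripted environments often poll before logging on. A generic readiness-polling helper, sketched here with a placeholder probe callable (for example, an HTTPS check against the appliance IP address — not an IBM API):

```python
import time

def wait_until_ready(probe, attempts: int = 30, delay_seconds: float = 10.0) -> bool:
    """Call probe() until it returns True or attempts are exhausted.
    probe is any zero-argument callable, e.g. a placeholder HTTPS
    check against the appliance IP (not an IBM-provided function)."""
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            time.sleep(delay_seconds)
    return False
```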
Dashboard
The dashboard displays an overview of your IBM Spectrum Copy Data Management environment. Quickly
review the status of your jobs and recently run reports.
To re-enable closed widgets and arrange them in their default positions, click Show All Widgets. To
collapse all widgets, click Collapse All Widgets. To expand all widgets, click Expand All Widgets.
Failed - Indicates the job session did not successfully complete due to mixed task statuses.
Aborted - Indicates the job session did not successfully complete due to a reset, reboot, or
Skipped - Indicates that a volume was not cataloged. See the Task tab for more information about
skipped jobs.
Stopped - Indicates the job was stopped using the Stop button.
NEXT STEPS:
l Review report details from the My Reports widget, such as available parameters and
field definitions. Reports can be downloaded as HTML files, Adobe PDFs, Microsoft
Excel spreadsheets, and Microsoft Word files.
RELATED TOPICS:
l Sites & Providers Overview on page 74
l Role-Based Access Control Overview on page 109
l Jobs Overview on page 169
l Monitor a Job Session on page 172
l Search Overview on page 354
l Report Overview on page 369
Sites & Providers Overview
A site is a user-defined grouping of providers that is generally based on location, to help quickly identify and
interact with data created through Copy Data Management jobs.
Sites are assigned when registering providers. When creating Backup and Restore jobs, sites clearly identify
where your data is replicated by location.
Providers are physical servers that host objects and attributes. Once a provider is registered in IBM Spectrum
Copy Data Management, cataloging, searching, and reporting can be performed.
Supported provider types are:
l Amazon Web Services (AWS). Supported types include Amazon Simple Storage Service (S3) cloud
storage.
l Application servers. Supported application database types include InterSystems Caché, Oracle,
SAP HANA, and SQL. Use the File System application type to register file systems for physical servers
running Windows, Linux, and AIX. See associated application requirements for supported storage types.
l DellEMC storage systems. Supported types include DellEMC Unity.
l IBM storage systems. Supported types include IBM Spectrum Accelerate, IBM Spectrum Protect
Snapshot, and IBM Spectrum Virtualize.
l LDAP servers. Register an LDAP server to enable LDAP users to be provisioned through a group import.
LDAP also supports authentication using the sAMAccountName Windows user naming attribute or an
associated e-mail address. Unlike storage systems and VMware servers, LDAP servers are not
cataloged.
l NetApp ONTAP storage systems. Supported types include NetApp ONTAP 7-Mode and Cluster-Mode.
l SQL and Oracle application servers. Supported storage platforms for Oracle include IBM storage systems
running IBM Spectrum™ Virtualize Software version 7.3 and later/8.1.2 and later, including IBM SAN
Volume Controller, IBM Storwize, and IBM FlashSystem V9000 and 9100 systems. Supported
SQL Server versions include SQL Server 2012, SQL Server 2014, and SQL Server 2016, standalone and
AlwaysOn. Supported operating system platforms include Windows 2012 R2 and Windows 2016 running
on a vSphere VM using a VMDK configuration.
l HPE Nimble Storage systems.
l Pure Storage systems. Supported storage platforms include Pure Storage FlashArray.
l SMTP hosts. Register an SMTP server to enable email notifications from IBM Spectrum Copy Data
Management. Unlike storage systems and VMware servers, SMTP servers are not cataloged.
l VMware servers. Supported types include vCenter and ESX/ESXi hosts.
Adding a provider requires specifying the user name and password of the provider.
Note: Users that register providers, such as storage devices, or add resources to IBM Spectrum Copy Data
Management, such as jobs or customized reports, will have full access to interact with those providers or
resources regardless of role-based access control restrictions. For example, if a user's permission allows
them to register NetApp providers, they will also be able to view, edit, and unregister the NetApp providers
that they registered, even if the necessary permissions are not assigned to them through role-based access
control.
RELATED TOPICS:
l Register a Provider on page 79
l View a Provider on page 100
l Edit a Provider on page 103
l Unregister a Provider on page 105
Add a Site
Create sites to define a grouping of providers based on their location in your IBM Spectrum Copy Data
Management environment. Once sites are created in IBM Spectrum Copy Data Management, they can be
applied to your providers.
Click View Relationship to view the resources that are assigned to the site.
To add a site:
1. Click the Configure tab. On the Views pane, select Sites & Providers , then select the Sites tab.
The Sites pane opens.
2. In the Sites pane, click New . The Create Site dialog opens.
NEXT STEPS:
l Assign sites to new and existing providers. See Register a Provider on page 79 and Edit
a Provider on page 103.
RELATED TOPICS:
l Edit a Site on page 77
l Delete a Site on page 78
Edit a Site
Revise site names and descriptions to reflect location changes in your IBM Spectrum Copy Data Management
environment.
To edit a site:
1. Click the Configure tab. On the Views pane, select Sites & Providers , then select the Sites tab.
The Sites pane opens.
2. In the Sites pane, select the site to edit by clicking in the row containing the site name.
3. Click Edit . The Edit Site dialog opens.
NEXT STEPS:
l Assign sites to new and existing providers. See Register a Provider on page 79 and Edit
a Provider on page 103.
RELATED TOPICS:
l Add a Site on page 76
l Delete a Site on page 78
Delete a Site
Delete a site when it becomes obsolete.
A site cannot be deleted if it is assigned to a provider. On the Sites pane, click View Relationship to view the
providers that are assigned to the site. Re-assign your providers to different sites before deleting.
To delete a site:
1. Click the Configure tab. On the Views pane, select Sites & Providers , then select the Sites tab.
The Sites pane opens.
2. In the Sites pane, select the site to delete by clicking in the row containing the site name.
3. Click Delete . A confirmation dialog box displays.
NEXT STEPS:
l Assign sites to new and existing providers. See Register a Provider on page 79 and Edit
a Provider on page 103.
RELATED TOPICS:
l Add a Site on page 76
l Edit a Site on page 77
Register a Provider
Providers are physical servers that host objects and attributes. Once a provider is registered in IBM Spectrum
Copy Data Management, cataloging, searching, and reporting can be performed.
Supported provider types are:
l Amazon Web Services (AWS). Supported types include Amazon Simple Storage Service (S3) cloud
storage.
l Application servers. Supported application database types include InterSystems Caché, Oracle,
SAP HANA, and SQL. Use the File System application type to register file systems for physical servers
running Windows, Linux, and AIX. See associated application requirements for supported storage types.
l DellEMC storage systems. Supported types include DellEMC Unity.
l IBM storage systems. Supported types include IBM Spectrum Accelerate, IBM Spectrum Protect
Snapshot, and IBM Spectrum Virtualize.
l LDAP servers. Register an LDAP server to enable LDAP users to be provisioned through a group import.
LDAP also supports authentication using the sAMAccountName Windows user naming attribute or an
associated e-mail address. Unlike storage systems and VMware servers, LDAP servers are not
cataloged.
l NetApp ONTAP storage systems. Supported types include NetApp ONTAP 7-Mode and Cluster-Mode.
l SQL and Oracle application servers. Supported storage platforms for Oracle include IBM storage systems
running IBM Spectrum™ Virtualize Software version 7.3 and later/8.1.2 and later, including IBM SAN
Volume Controller, IBM Storwize, and IBM FlashSystem V9000 and 9100 systems. Supported
SQL Server versions include SQL Server 2012, SQL Server 2014, and SQL Server 2016, standalone and
AlwaysOn. Supported operating system platforms include Windows 2012 R2 and Windows 2016 running
on a vSphere VM using a VMDK configuration.
l HPE Nimble Storage systems.
l Pure Storage systems. Supported storage platforms include Pure Storage FlashArray.
l SMTP hosts. Register an SMTP server to enable email notifications from IBM Spectrum Copy Data
Management. Unlike storage systems and VMware servers, SMTP servers are not cataloged.
l VMware servers. Supported types include vCenter and ESX/ESXi hosts.
Adding a provider requires specifying the user name and password of the provider.
Storage providers can be automatically cataloged after registration. If the Run Inventory job after
registration option is selected, IBM Spectrum Copy Data Management creates a high-level Inventory job and
automatically catalogs the objects on the provider.
Note: Ensure that TLS protocol is enabled on the NetApp storage system by setting the tls.enable option
to ON. For TLS to take effect on HTTPS, ensure that the httpd.admin.ssl.enable option is also set to
ON. See Enabling or disabling TLS on NetApp's Support site.
Note: If an associated provider is unregistered before, during, or after a Backup or Restore job executes, the
job fails with a task framework error. If the unregistered providers are re-registered in IBM Spectrum Copy
Data Management, new Backup or Restore jobs must be defined for the providers.
Name
A user-defined name for the file system or volume. This can be the same as the host name or it can be
a meaningful name that is used within your organization to refer to the provider. Provider names must
be unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Port
The communications port of the provider you are adding. The default Windows port is 5985, and the
default Linux and AIX port is 22.
OS Type
Select the file system or volume's operating system type. Available options include Windows, Linux,
and AIX.
System Credential
Select or create your file system or volume's credentials. See Identities Overview on page 134.
6. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
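The OS-dependent port defaults described above (5985 for Windows, 22 for Linux and AIX) can be captured in a small lookup; a sketch, with only the port numbers taken from this guide:

```python
# defaults from the Port field description above
DEFAULT_PORTS = {"Windows": 5985, "Linux": 22, "AIX": 22}

def default_port(os_type: str) -> int:
    """Default communications port for a file system provider's OS type."""
    try:
        return DEFAULT_PORTS[os_type]
    except KeyError:
        raise ValueError(f"unsupported OS type: {os_type}") from None
```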
Name
A user-defined name for the InterSystems Caché server. This can be the same as the host name or it
can be a meaningful name that is used within your organization to refer to the provider. Provider names
must be unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Port
The communications port of the provider you are adding.
Run Inventory job after registration
If selected, IBM Spectrum Copy Data Management creates a high-level Inventory job and
automatically catalogs the objects on the provider. Note that the Inventory job may take considerable
time to complete.
Type
Select a Virtual or Physical InterSystems Caché server type. Select Virtual if the InterSystems Caché
server is a VMware virtual machine.
If selecting Virtual, enter the vCenter location of the InterSystems Caché application server in the
vCenter field.
OS Type
Select the InterSystems Caché server's operating system type. Available options include Windows,
Linux, and AIX.
Authentication
IBM Spectrum Copy Data Management connects to the InterSystems Caché server as a local
operating system user through an SSH key or password. See Identities Overview on page 134.
To use an SSH key, select Key, enter a username and select or create an SSH key.
To use a password, select Password, then select or create a Local credential.
System Credential:
Select or create your InterSystems Caché credentials. See Identities Overview on page 134.
6. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
To troubleshoot an application server after registration, use the Test & Configure option. This option
verifies communication with the server, tests DNS settings between the IBM Spectrum Copy Data
Management appliance and the server, and installs an IBM Spectrum Copy Data Management agent on
the server. From the Provider Browser pane, right-click the application server, then click Test
& Configure .
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
Name
A user-defined name for the Oracle server. This can be the same as the host name or it can be a
meaningful name that is used within your organization to refer to the provider. Provider names must be
unique.
Host Address
A resolvable IP address or a resolvable path and machine name. When registering an Oracle RAC
cluster, register each node using its physical IP or name. Do not register a virtual name or SCAN
(Single Client Access Name).
Run Inventory job after registration
If selected, IBM Spectrum Copy Data Management creates a high-level Inventory job and
automatically catalogs the objects on the provider. Note that the Inventory job may take considerable
time to complete.
Type
Select a Virtual or Physical Oracle server type. Select Virtual if the Oracle server is a VMware virtual
machine. Note that AIX virtual servers should be registered as Physical.
If selecting Virtual, enter the vCenter location of the Oracle application server in the vCenter field.
Authentication
IBM Spectrum Copy Data Management connects to the Oracle server as a local operating system user
through an SSH key or password. See Identities Overview on page 134.
To use an SSH key, select Key, enter a username and select or create an SSH key.
To use a password, select Password, then select or create a Local credential.
6. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
To troubleshoot an application server after registration, use the Test & Configure option. This option
verifies communication with the server, tests DNS settings between the IBM Spectrum Copy Data
Management appliance and the server, and installs an IBM Spectrum Copy Data Management agent on
the server. From the Provider Browser pane, right-click the application server, then click Test
& Configure .
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
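The Host Address field requires a resolvable IP address or machine name. A quick resolvability pre-check with Python's standard library — a convenience sketch, not part of the product:

```python
import socket

def is_resolvable(host_address: str) -> bool:
    """Return True if the host address resolves via DNS or the hosts file."""
    try:
        socket.getaddrinfo(host_address, None)
        return True
    except socket.gaierror:
        return False
```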
A user-defined provider location, created in the Sites & Providers view on the Configure tab.
Name
A user-defined name for the SAP HANA server. This can be the same as the host name or it can be a
meaningful name that is used within your organization to refer to the provider. Provider names must be
unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Port
The communications port of the provider you are adding. The format for the port number is 3<instance
number>15. So, for example, if the instance number is 07, then enter the following port number: 30715.
Run Inventory job after registration
If selected, IBM Spectrum Copy Data Management creates a high-level Inventory job and
automatically catalogs the objects on the provider. Note that the Inventory job may take considerable
time to complete.
vCenter
The vCenter location of the SAP HANA application server.
System Credential / Database Credential(s):
Select or create your SAP HANA credentials. See Identities Overview on page 134.
6. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
To troubleshoot an application server after registration, use the Test & Configure option. This option
verifies communication with the server, tests DNS settings between the IBM Spectrum Copy Data
Management appliance and the server, and installs an IBM Spectrum Copy Data Management agent on
the server. From the Provider Browser pane, right-click the application server, then click Test
& Configure .
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
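The 3&lt;instance number&gt;15 port format described above can be computed directly; a minimal sketch:

```python
def hana_port(instance_number: int) -> int:
    """SAP HANA communications port: 3<two-digit instance number>15,
    so instance 07 maps to 30715, as in the example above."""
    if not 0 <= instance_number <= 99:
        raise ValueError("instance number must be 00-99")
    return int(f"3{instance_number:02d}15")
```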
1. Click the Configure tab. On the Views pane, select Sites & Providers , then select the Providers
tab.
2. In the Provider Browser pane, select Application Server.
3. Right-click Application Server. Then click Register . The Register Application Server dialog opens.
Name
A user-defined name for the SQL server. This can be the same as the host name or it can be a
meaningful name that is used within your organization to refer to the provider. Provider names must be
unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Port
The communications port of the provider you are adding. The default port is 5985.
In previous versions of IBM Spectrum Copy Data Management, the default SQL port was 1433.
SQL providers that were registered in previous versions using port 1433 continue to function in
newer versions of IBM Spectrum Copy Data Management.
Run Inventory job after registration
If selected, IBM Spectrum Copy Data Management creates a high-level Inventory job and
automatically catalogs the objects on the provider. Note that the Inventory job may take considerable
time to complete.
Type
Select a Virtual or Physical SQL server type. Select Virtual if the SQL server is a VMware virtual
machine.
If selecting Virtual, enter the vCenter location of the SQL application server in the vCenter field.
vCenter
The vCenter location of the virtual SQL application server.
System Credential:
Select or create your SQL credentials. See Identities Overview on page 134.
Note: For Kerberos-based authentication only, the user identity must be specified in the
username@FQDN format. The username must be able to authenticate using the registered password
to obtain a ticket-granting ticket (TGT) from the key distribution center (KDC) on the domain specified
by the fully qualified domain name.
6. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
To troubleshoot an application server after registration, use the Test & Configure option. This option
verifies communication with the server, tests DNS settings between the IBM Spectrum Copy Data
Management appliance and the server, and installs an IBM Spectrum Copy Data Management agent on
the server. From the Provider Browser pane, right-click the application server, then click Test
& Configure .
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
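For Kerberos-based authentication, the identity must use the username@FQDN format. A minimal shape check — a sketch that validates only the format, not whether the KDC will actually issue a TGT:

```python
def looks_like_kerberos_identity(identity: str) -> bool:
    """True when identity has the username@FQDN shape: a non-empty
    user part and a dotted domain part."""
    user, sep, domain = identity.partition("@")
    return bool(user) and sep == "@" and "." in domain and not domain.startswith(".")
```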
3. Right-click AWS . Then click Register . The Register Amazon Web Services dialog opens.
Name
A user-defined name for the AWS provider. This can be the same as the host name or it can be a
meaningful name that is used within your organization to refer to the provider. Provider names must be
unique.
Region
Select an associated AWS region.
Comment
Optional provider description.
Access Key
Select or create your AWS access key. See Identities Overview on page 134.
5. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
1. Click the Configure tab. On the Views pane, select Sites & Providers , then select the Providers
tab.
2. In the Provider Browser pane, select DellEMC Unity .
3. Right-click DellEMC Unity . Then click Register . The Register dialog opens.
Name
A user-defined name for the DellEMC provider. This can be the same as the host name or it can be a
meaningful name that is used within your organization to refer to the provider. Provider names must be
unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Comment
Optional provider description.
Run Inventory job after registration
If selected, IBM Spectrum Copy Data Management creates a high-level Inventory job and
automatically catalogs the objects on the provider. Note that the Inventory job may take considerable
time to complete.
Credentials
Select or create your DellEMC Unity credentials. See Identities Overview on page 134.
Note: If upgrading from a previous version of IBM Spectrum Copy Data Management in which a
username and password were entered during the provider registration process, an Identity is
automatically created for the provider.
5. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
3. Right-click DellEMC Unity . Then click Discover. The Discover DellEMC Unity Providers dialog
opens.
4. Enter an IP address or range of IP addresses associated with your DellEMC Unity storage systems in
the IP Address field.
5. Click Discover. Discovered DellEMC Unity providers display.
6. Select providers to register along with universal custom parameters such as Site and credentials. To
select individual parameters for each provider, click the parameters in the Custom Parameters field.
Select specific Sites, credentials, ports and SSL parameters for each provider. If Run Inventory job
after registration is selected, IBM Spectrum Copy Data Management creates a high-level Inventory job
and automatically catalogs the objects on the provider.
7. Click Register. IBM Spectrum Copy Data Management adds the providers to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
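Discovery accepts an IP address or a range of addresses. Assuming a dashed range notation such as 192.168.1.10-192.168.1.12 (the exact field format is not specified in this guide), expanding a range into individual addresses can be sketched with the standard ipaddress module:

```python
import ipaddress

def expand_ip_range(spec: str) -> list:
    """Expand "a.b.c.d" or "a.b.c.d-w.x.y.z" into individual addresses."""
    start_text, _, end_text = spec.partition("-")
    start = ipaddress.IPv4Address(start_text.strip())
    end = ipaddress.IPv4Address(end_text.strip()) if end_text else start
    if end < start:
        raise ValueError("range end precedes range start")
    # IPv4 addresses convert to and from integers, so a range is a simple loop
    return [str(ipaddress.IPv4Address(value))
            for value in range(int(start), int(end) + 1)]
```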
3. Right-click IBM Spectrum Accelerate . Then click Register . The Register dialog opens.
Name
A user-defined name for the IBM provider. This can be the same as the host name or it can be a
meaningful name that is used within your organization to refer to the provider. Provider names must be
unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Comment
Optional provider description.
Run Inventory job after registration
If selected, IBM Spectrum Copy Data Management creates a high-level Inventory job and
automatically catalogs the objects on the provider. Note that the Inventory job may take considerable
time to complete.
Credentials
Select or create your IBM Spectrum Accelerate credentials. See Identities Overview on page 134.
5. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
Note: IBM providers utilize port 22 for communication with IBM Spectrum Copy Data Management.
3. Right-click IBM Spectrum Protect Snapshot . Then click Register . The Register dialog opens.
Name
A user-defined name for the IBM provider. This can be the same as the host name or it can be a
meaningful name that is used within your organization to refer to the provider. Provider names must be
unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Comment
Optional provider description.
Run Inventory job after registration
If selected, IBM Spectrum Copy Data Management creates a high-level Inventory job and
automatically catalogs the objects on the provider. Note that the Inventory job may take considerable
time to complete.
Note: An automatic IBM Spectrum Protect Snapshot catalog job fails if associated IBM storage and
vCenter providers are not registered. Ensure associated resources are registered before the automatic
catalog job begins.
Credentials
Select or create your IBM Spectrum Protect credentials. See Identities Overview on page 134.
Note: When registering an IBM Spectrum Protect system, you must use credentials for the tdpvmware
user.
Note: If upgrading from a previous version of IBM Spectrum Copy Data Management in which a
username and password were entered during the provider registration process, an Identity is
automatically created for the provider.
5. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
Note: IBM providers utilize port 22 for communication with IBM Spectrum Copy Data Management.
3. Right-click IBM Spectrum Virtualize . Then click Register . The Register dialog opens.
Name
A user-defined name for the IBM provider. This can be the same as the host name or it can be a
meaningful name that is used within your organization to refer to the provider. Provider names must be
unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Comment
Optional provider description.
Run Inventory job after registration
If selected, IBM Spectrum Copy Data Management creates a high-level Inventory job and
automatically catalogs the objects on the provider. Note that the Inventory job may take considerable
time to complete.
Credentials
Select or create your IBM Spectrum Virtualize credentials. See Identities Overview on page 134.
Note: If upgrading from a previous version of IBM Spectrum Copy Data Management in which a
username and password were entered during the provider registration process, an Identity is
automatically created for the provider.
5. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
Note: IBM providers utilize port 22 for communication with IBM Spectrum Copy Data Management.
3. Right-click LDAP . Then click Register . The Register LDAP Server dialog opens.
Name
A user-defined name for the NetApp storage system. This can be the same as the host name or it can
be a meaningful name that is used within your organization to refer to the provider. Provider names
must be unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Port
The communications port of the provider you are adding. Select the Use SSL check box to enable an
encrypted Secure Sockets Layer connection. The typical default port is 80 for non-SSL connections or
443 for SSL connections.
Comment
Optional provider description.
Run Inventory job after registration
If selected, IBM Spectrum Copy Data Management creates a high-level Inventory job and
automatically catalogs the objects on the provider. Note that the Inventory job may take considerable
time to complete.
Credentials
Select or create your NetApp ONTAP credentials. See Identities Overview on page 134.
Note: If upgrading from a previous version of IBM Spectrum Copy Data Management in which a
username and password were entered during the provider registration process, an Identity is
automatically created for the provider.
5. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
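The default-port rule above (80 for non-SSL connections, 443 for SSL) reduces to a single conditional; a trivial sketch:

```python
def netapp_default_port(use_ssl: bool) -> int:
    """Typical default port for a NetApp ONTAP provider connection:
    443 when Use SSL is selected, otherwise 80 (per the Port field above)."""
    return 443 if use_ssl else 80
```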
3. Right-click NetApp ONTAP, then click Discover. The Discover NetApp ONTAP Providers dialog opens.
4. Enter an IP address or range of IP addresses associated with your NetApp ONTAP storage systems in
the IP Address field. Enter Simple Network Management Protocol settings such as the community name
and protocol version in the SNMP Options fields. The default community name is "public."
5. Click Discover. Discovered NetApp ONTAP providers display.
6. Select providers to register along with universal custom parameters such as Site and credentials. To
select individual parameters for each provider, click the parameters in the Custom Parameters field.
Select specific Sites, credentials, ports and SSL parameters for each provider. If Run Inventory job
after registration is selected, IBM Spectrum Copy Data Management creates a high-level Inventory job
and automatically catalogs the objects on the provider.
7. Click Register. IBM Spectrum Copy Data Management adds the providers to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
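To see what a range-based discovery sweep involves, the sketch below expands a range specification into the individual addresses that would be probed (for example, via SNMP with the "public" community). The "start-end" syntax and the helper itself are assumptions for illustration; enter ranges in the Discover dialog using the product's own format.

```python
import ipaddress

def expand_ip_range(spec):
    """Expand '10.0.0.5' or '10.0.0.5-10.0.0.8' into a list of addresses.

    The 'start-end' range syntax is an assumption for illustration only;
    the Discover dialog defines its own accepted range format.
    """
    if "-" in spec:
        start_s, end_s = spec.split("-", 1)
        start = ipaddress.IPv4Address(start_s.strip())
        end = ipaddress.IPv4Address(end_s.strip())
        # Iterate over the integer values of the two endpoints, inclusive.
        return [str(ipaddress.IPv4Address(a)) for a in range(int(start), int(end) + 1)]
    return [str(ipaddress.IPv4Address(spec.strip()))]
```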
3. Right-click HPE Nimble Storage, then click Register. The Register dialog opens.
Name
A user-defined name for the HPE Nimble Storage provider. This can be the same as the host name or it
can be a meaningful name that is used within your organization to refer to the provider. Provider names
must be unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Port
The communications port of the provider you are adding. Enter the port number 5392. Select the Use
SSL check box to enable an encrypted Secure Socket Layer connection.
Credentials
Select or create your HPE Nimble Storage credentials. See Identities Overview on page 134.
5. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
Name
A user-defined name for the Pure provider. This can be the same as the host name or it can be a
meaningful name that is used within your organization to refer to the provider. Provider names must be
unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Port
The communications port of the provider you are adding. Select the Use SSL check box to enable an
encrypted Secure Socket Layer connection. The typical default port is 80 for non-SSL connections or
443 for SSL connections.
Credentials
Select or create your Pure credentials. See Identities Overview on page 134.
5. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
1. Click the Configure tab. On the Views pane, select Sites & Providers, then select the Providers tab.
2. In the Provider Browser pane, select SMTP.
3. Right-click SMTP, then click Register. The Register SMTP Server dialog opens.
3. Right-click VMware, then click Register. The Register VMware Server dialog opens.
Name
A user-defined name for the VMware server. This can be the same as the host name or it can be a
meaningful name that is used within your organization to refer to the provider. Provider names must be
unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Port
The communications port of the provider you are adding. Select the Use SSL check box to enable an
encrypted Secure Socket Layer connection. The typical default port is 80 for non-SSL connections or
443 for SSL connections.
Credentials
Select or create your VMware credentials. See Identities Overview on page 134.
Note: If upgrading from a previous version of IBM Spectrum Copy Data Management in which a
username and password were entered during the provider registration process, an Identity is
automatically created for the provider.
Comment
Optional provider description.
Run Inventory job after registration
If selected, IBM Spectrum Copy Data Management creates a high-level Inventory job and
automatically catalogs the objects on the provider. Note that the Inventory job may take considerable
time to complete.
5. Click OK. IBM Spectrum Copy Data Management first confirms a network connection and then adds the
provider to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
To add credentials to a registered virtual machine, see Add Credentials to a Virtual Machine on page 106.
3. Right-click VMware, then click Discover. The Discover VMware Providers dialog opens.
4. Enter an IP address or range of IP addresses associated with your VMware providers in the IP Address
field.
5. Click Discover. Discovered VMware providers display.
6. Select providers to register along with universal custom parameters such as Site and credentials. To
select individual parameters for each provider, click the parameters in the Custom Parameters field.
Select specific Sites, credentials, ports and SSL parameters for each provider. If Run Inventory job
after registration is selected, IBM Spectrum Copy Data Management creates a high-level Inventory job
and automatically catalogs the objects on the provider.
7. Click Register. IBM Spectrum Copy Data Management adds the providers to the database.
If a message appears indicating that the connection is unsuccessful, review your entries. If your entries
are correct and the connection is unsuccessful, contact a system administrator to review the connections.
To add credentials to a registered virtual machine, see Add Credentials to a Virtual Machine on page 106.
NEXT STEPS:
• Once providers are available in IBM Spectrum Copy Data Management and associated with a site, assign them to a resource pool. See Configure Resource Pools on page 111.
• Add credentials to virtual machines in a VMware environment. See Add Credentials to a Virtual Machine on page 106.
RELATED TOPICS:
• Sites & Providers Overview on page 74
• View a Provider on page 100
• Edit a Provider on page 103
• Unregister a Provider on page 105
View a Provider
Navigate through the Provider Browser to view a list of registered providers and the resources that reside on
those providers. The Provider Browser scans the actual provider and returns native properties.
Use the Provider Details window to examine registered providers. The information presented depends on the
type of provider selected.
LDAP Providers
Select an LDAP Server.
The LDAP Provider Details provides information about the LDAP server and the user accounts it manages.
The General tab provides information about the selected server including host address, port, and the use of
SSL. The Users tab provides a list of all the user accounts configured on the server.
NetApp Providers
Select a volume within the NetApp ONTAP provider.
The NetApp ONTAP Provider Details provides information about the data storage in the selected volume. The
General tab provides information about the selected volume including type, state, storage usage, reserve
usage, and file usage. The gauges display the percentage of usage. Green represents data and blue
represents available space. The Qtrees tab lists qtrees stored in the selected volume and their status. The
Snapshots tab lists the snapshots stored in the selected volume.
IBM Providers
Select a volume within the IBM provider.
The IBM Provider Details provides information about the data storage in the selected volume. The General
tab provides information about the selected volume including the capacity, associated storage pool name, and
mirrored copies synchronization rate.
DellEMC Providers
Select a volume within the DellEMC Unity provider.
The DellEMC Provider Details provides information about the data storage in the selected volume. The
General tab provides information about the selected volume including the capacity, associated storage pool
name, and mirrored copies synchronization rate.
SMTP Providers
Select an SMTP server.
The SMTP Provider Details provides information about the selected server including host address and port.
VMware Providers
Select a VMware vCenter within the VMware provider.
The General tab provides information about the selected vCenter including host address and software
version. Use the Hosts, VApps, and VMs tabs to view a list of the virtual machine hosts, virtual appliances,
and virtual machines that are configured on the selected vCenter.
The Datacenters tab provides a list of the datacenters configured on the vCenter. Use the Datacenters tab to
view the list of Datastores, Hosts, and VMs filtered by the selected datacenter.
The Datastores tab lists the datastores configured on the vCenter. Select a datastore to view the list of Hosts
and VMs filtered by the selected datastore.
Tip: Periodically closing tabs helps simplify navigation and browsing. To close multiple tabs, right-click a tab
then select Close Tab, Close Other Tabs, or Close All Tabs.
Oracle Providers
Select an Oracle server.
The Oracle Provider Details provides information about the selected server including name and host address.
AWS Providers
Select an AWS provider.
The AWS Provider Details provides information about the selected server including region, name, type, and
associated storage gateways.
NEXT STEPS:
• Run an Inventory job against storage providers that have not been recently cataloged. See Jobs Overview on page 169.
RELATED TOPICS:
• Sites & Providers Overview on page 74
• Register a Provider on page 79
• Edit a Provider on page 103
• Unregister a Provider on page 105
Edit a Provider
Revise the properties of a provider as needed.
Name
A user-defined name for the provider. This can be the same as the host name or it can be a meaningful
name that is used within your organization to refer to the provider. Provider names must be unique.
Host Address
A resolvable IP address or a resolvable path and machine name.
Port
The communications port of the provider you are adding. Select the Use SSL check box to enable an
encrypted Secure Socket Layer connection. The typical default port is 80 for non-SSL connections or
443 for SSL connections.
Credentials
Select or create your credentials. See Identities Overview on page 134.
Comment
Optional provider description.
5. Click OK when you are satisfied that the job-specific information is correct.
NEXT STEPS:
• If the storage provider you edited has not recently been cataloged, catalog it. See Jobs Overview on page 169.
RELATED TOPICS:
• Sites & Providers Overview on page 74
• Register a Provider on page 79
• View a Provider on page 100
• Unregister a Provider on page 105
Unregister a Provider
Unregister a registered provider if you do not want to run reports against it, search for objects on it, or create a
job.
Note: A provider cannot be deleted if it is assigned to a Resource Pool. Remove your providers from
Resource Pools before deleting.
Best Practice: Ensure the provider you unregister is not associated with any defined job. To view job
definitions associated with a provider, click the Configure tab. On the Views pane, select Sites
& Providers, then select the Providers tab. Right-click a provider from the Provider Browser view, then
select View Relationship. All associated job definitions that interact with the provider display.
Note: If an associated provider is unregistered before, during, or after a Backup or Restore job executes, the
job fails with a task framework error. If the unregistered providers are re-registered in IBM Spectrum Copy
Data Management, new Backup or Restore jobs must be defined for the providers.
To unregister a provider:
1. Click the Configure tab. On the Views pane, select Sites & Providers, then select the Providers tab.
2. In the Provider Browser pane, browse to the desired provider and select it.
3. Right-click the provider, then click Unregister. A confirmation dialog box opens.
4. Confirm unregistration. The provider is unregistered.
RELATED TOPICS:
• Sites & Providers Overview on page 74
• Register a Provider on page 79
• View a Provider on page 100
• Edit a Provider on page 103
6. Enter the username, password and an optional description in the Comment field.
7. Select the credential type. Options include System and SQL.
8. If entering SQL credentials, enter the name of the SQL instance in the Instance Name field.
9. To apply System credentials to application instances (for example, SQL instances), enable the Use
System Credentials for apps option. Note that System credentials are always required. If your
application instances use credentials that differ from your System credentials, you must repeat the above
procedure for each application instance using different Instance Names.
3. Enter a wildcard to search for virtual machines available on the VMware provider. For example, vm* or
vm[1-50].
4. Select virtual machines with universal credentials.
5. Enter the universal credential information for the virtual machines, along with the credential type and
instance name if applicable.
6. To apply System credentials to application instances (for example, SQL instances), enable the Use
System Credentials for apps option. Note that System credentials are always required. If your
application instances use credentials that differ from your System credentials, you must repeat the above
procedure for each application instance.
7. Click Close to exit the Manage Virtual Machines dialog.
Existing credentials for multiple virtual machines can also be updated through the Manage VMs feature.
To update credentials for an individual virtual machine, click Manage in the row containing the virtual
machine.
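As an illustration of how wildcards such as vm* or vm[1-50] can be evaluated against virtual machine names, the hypothetical helper below treats "[low-high]" as a numeric range and falls back to shell-style matching otherwise. The product defines its own wildcard grammar, which may differ from this interpretation.

```python
import fnmatch
import re

def match_vm_names(pattern, names):
    """Filter VM names against a wildcard such as 'vm*' or 'vm[1-50]'.

    Interpreting '[low-high]' as an inclusive numeric range is an
    assumption for illustration; the product's grammar is authoritative.
    """
    m = re.fullmatch(r"(.*)\[(\d+)-(\d+)\](.*)", pattern)
    if m:
        prefix, low, high, suffix = m.groups()
        # Build the explicit set of names the numeric range covers.
        wanted = {f"{prefix}{i}{suffix}" for i in range(int(low), int(high) + 1)}
        return [n for n in names if n in wanted]
    # No numeric range: fall back to shell-style wildcard matching.
    return fnmatch.filter(names, pattern)

vms = ["vm1", "vm25", "vm51", "web01"]
```

With this interpretation, "vm*" matches every name beginning with "vm", while "vm[1-50]" matches only vm1 through vm50.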
RELATED TOPICS:
• Register a Provider on page 79
• Create an Inventory Job Definition - VMware on page 202
Role-Based Access Control Overview
Note: Users that register providers, such as storage devices, or add resources to IBM Spectrum Copy Data
Management, such as jobs or customized reports, will have full access to interact with those providers or
resources regardless of role-based access control restrictions. For example, if a user's permission allows
them to register NetApp providers, they will also be able to view, edit, and unregister the NetApp providers
that they registered, even if the necessary permissions are not assigned to them through role-based access
control.
Configure role-based access control in the Access Control view on the Configure tab.
Resource Pools - A resource pool defines the resources that will be made available to an account. Every
provider added to IBM Spectrum Copy Data Management, such as storage devices and LDAP servers, can
be included in a resource pool, along with individual IBM Spectrum Copy Data Management functions and
screens. This gives you the ability to fine-tune the experience of a user. For example, a resource pool could
include only storage devices associated with a single vendor, with access to only the IBM Spectrum Copy
Data Management search and reporting functionality. When the resource pool is associated with a role and an
account, the account user will only see the screens associated with search and reporting, and will only have
access to the storage devices defined in the resource pool. See Configure Resource Pools on page 111.
Roles - Roles define the actions that can be performed on the resources defined in a resource pool. A
resource pool defines the providers that will be made available to an account, such as storage devices, and
resources, such as IBM Spectrum Copy Data Management functions and screens; a role sets the permissions
to interact with the resources defined in the resource pool. For example, if a resource pool is created that
includes IBM Spectrum Copy Data Management Backup and Restore jobs, the role will determine how a user
can interact with the jobs. Permissions can be set to allow a user to create, view, and run the Backup and
Restore jobs defined in a resource pool, but not delete them. Similarly, permissions can be set to create
administrator accounts, allowing a user to create and edit other accounts, set up sites and resources, and
interact with all of the available IBM Spectrum Copy Data Management features. See Configure Roles on
page 114.
Accounts - An account associates a resource pool with a role. To enable a user to log on to IBM Spectrum
Copy Data Management and use its functions, you must first add the user to IBM Spectrum Copy Data
Management as a native user or as part of an imported group of LDAP users, then assign resource pools and
roles to the user account. The account will have access to the resources and features defined in the resource
pool as well as the permissions to interact with the resources and features defined in the role. See Configure
Accounts on page 117.
RELATED TOPICS:
• Configure Resource Pools on page 111
• Configure Roles on page 114
• Configure Accounts on page 117
• VMware Admin Role-Based Access Control Configuration on page 120
• NetApp ONTAP Admin Role-Based Access Control Configuration on page 122
• IBM Admin Role-Based Access Control Configuration on page 124
Configure Resource Pools
3. Click the Step 1: Providers tab. From the list of available sites and providers, select one or more
providers to add to the resource pool. Note that your providers are grouped into sites, which allows you to
add entire sites to the resource pool, or specific providers within the site. Individual storage virtual
machines and VMware datacenters can also be selected for use with a resource pool.
4. Click the Step 2: Jobs tab. Select one or more job types, individual custom jobs, schedules, and scripts
to include in the resource pool.
5. Click the Step 3: Reports tab. Select one or more report types or individual reports to include in the
resource pool.
6. Click the Step 4: Applications tab. Select one or more application servers to include in the resource
pool.
7. Click the Step 5: Identities tab. Select one or more keys and credentials to include in the resource pool.
8. Click the Step 6: Access Control tab. Select security options that will be configurable by accounts
associated with this resource pool. Available options include All Roles, All Accounts, All Resource Pools,
and All SLA Policies. For example, if All Resource Pools is selected in this step, users associated with
this resource pool can create, view, edit, and delete Resource Pools, if paired with the necessary
"resourcepool" permission, set on the Roles pane.
9. Click the Step 7: Screens tab. Select the IBM Spectrum Copy Data Management screens to include in
the resource pool.
10. Click the Step 8: Finish tab. Enter a name for your resource pool and a meaningful description. When
you are satisfied that the entered information is correct, click Finish. The resource pool appears on the
All Resources pane and can be applied to new and existing accounts.
4. Update the resources and IBM Spectrum Copy Data Management features to assign to the resource
pool.
5. Click Finish. The revisions are applied to the resource pool.
NEXT STEPS:
• Create roles to define the actions that can be performed by the user of an account associated with a resource pool. Roles are used to define permissions to interact with the resources defined in the resource pool. See Configure Roles on page 114.
RELATED TOPICS:
• Configure Roles on page 114
• Configure Accounts on page 117
Configure Roles
A role is a component of the role-based access system, and is used to define the actions that can be
performed by the user of an account associated with a resource pool. A resource pool defines the resources
that will be made available to an account, such as storage devices and IBM Spectrum Copy Data
Management features; a role sets the permissions to interact with the resources defined in the resource pool.
For example, if a resource pool is created that includes IBM Spectrum Copy Data Management Backup and
Restore jobs, the role will determine how a user can interact with the jobs. Permissions can be set to allow a
user to create, view, and run the Backup and Restore jobs defined in a resource pool, but not delete them.
Similarly, permissions can be set to create administrator accounts, allowing a user to create and edit other
accounts, set up sites and providers, and interact with all of the available IBM Spectrum Copy Data
Management features.
Add a Role
1. Click the Configure tab. On the Views pane, select Access Control, then select the Roles tab.
2. In the All Roles pane, click New. The New Role dialog opens.
To set bulk permissions for multiple roles, select the check boxes next to the role names, then click the
Add Permissions drop-down menu and select permissions to apply to the selected roles.
Edit a Role
Revise a role to change the resources and permissions assigned to the role. Updated role settings take effect
once accounts associated with the role log in to IBM Spectrum Copy Data Management.
Note: The SYSADMIN and USER roles cannot be edited.
1. Click the Configure tab. On the Views pane, select Access Control, then select the Roles tab.
2. In the All Roles pane, select the role to edit by clicking in the row containing the role name.
3. Click Edit. The Edit Role dialog opens.
Delete a Role
Delete a role when it becomes obsolete.
A role cannot be deleted if it is assigned to an account. On the All Roles pane, click View Relationship to
view the accounts that are associated with the role. Re-assign your accounts to different roles before deleting.
Note: The SYSADMIN, Read Only, and Create Only roles cannot be deleted.
1. Click the Configure tab. On the Views pane, select Access Control, then select the Roles tab.
2. In the All Roles pane, select the role to delete by clicking in the row containing the role name.
3. Click Delete. A confirmation dialog box displays.
NEXT STEPS:
• Create an account. An account associates resource pools and roles with a user. Accounts can be native to IBM Spectrum Copy Data Management or can be imported as an LDAP group. See Configure Accounts on page 117.
RELATED TOPICS:
• Configure Resource Pools on page 111
• Configure Accounts on page 117
Configure Accounts
An account is a component of the role-based access system, and is used to associate resource pools and
roles with a user. To enable a user to log on to IBM Spectrum Copy Data Management and use its functions,
you must first add the user to IBM Spectrum Copy Data Management as a native user or as part of an
imported group of LDAP users, then assign a resource pool and a role to the user account. The account will
have access to the resources defined by the resource pool as well as the permissions to interact with the
resources defined in the role.
Note that if multiple roles are assigned to a resource pool during account configuration, all permissions
associated with the roles will be available to the account.
3. In the New Account pane, click Create Native User. The New Account dialog opens.
4. Enter a user name and password for the account.
5. Select one or more resource pools to add to the account.
6. Select roles to associate with each resource pool.
7. Click Finish. The account appears on the Accounts pane.
1. Click the Configure tab. On the Views pane, select Access Control, then select the Accounts tab.
2. Click New.
3. In the New Account pane, click Import LDAP Group. The New Account dialog opens and a list of
available LDAP groups displays.
4. Select one or more LDAP groups to assign to the selected account.
5. Select one or more resource pools to add to the account.
6. Select roles to associate with each resource pool.
7. Click Finish. The account appears on the Accounts pane.
Edit an Account
Revise an account to edit the username, password, and associated resource pools and roles. Updated account
settings take effect once the account logs in to IBM Spectrum Copy Data Management.
1. Click the Configure tab. On the Views pane, select Access Control, then select the Accounts tab.
2. In the Accounts pane, select the account to edit by clicking in the row containing the account name.
3. Click Edit. The Edit Account dialog opens.
4. Set a new username, password and select new resource pools and roles to assign to the account.
5. Click OK. The revisions are applied to the account.
Delete an Account
Delete an account to remove access to all IBM Spectrum Copy Data Management functions.
1. Click the Configure tab. On the Views pane, select Access Control, then select the Accounts tab.
2. In the Accounts pane, select the account to delete by clicking in the row containing the account name.
3. Click Delete. A confirmation dialog box displays.
4. Confirm deletion. The account is deleted.
NEXT STEPS:
• Ensure the user has access to the appropriate IBM Spectrum Copy Data Management resources as well as the necessary permissions to interact with the resources. See Configure Resource Pools on page 111 and Configure Roles on page 114.
RELATED TOPICS:
• Configure Resource Pools on page 111
• Configure Roles on page 114
VMware Admin Role-Based Access Control Configuration
Providers Tab
1. Set up the Providers screen to include the root level for all VMware resources, the IBM Spectrum
Virtualize resources, and the IBM Spectrum Protect Snapshot resources.
Note: The VMware administrator must manage IBM Spectrum Protect Snapshot resources because
access to VMware resources is required.
2. Select a specific SMTP server.
Jobs Tab
3. Select the root level of all VMware related jobs.
4. Select the root level of the IBM Spectrum Protect Snapshot Inventory job.
5. Select the root Report job.
6. Select the root All SLA Policies.
7. Select the root All Schedules.
Reports Tab
8. Under Protection Compliance, select the VMware related reports.
9. Under Storage Utilization, select the VMware related reports.
Access Control Tab
10. No Security Resources will be assigned to this role.
Screens Tab
11. Select all available Screens except Logs. The Logs function contains audit logs that this user should not
have access to.
Role Configuration
13. Select the permissions listed above.
14. Name and submit the Role.
RELATED TOPICS:
• Configure Resource Pools on page 111
• Configure Roles on page 114
• Configure Accounts on page 117
NetApp ONTAP Admin Role-Based Access Control Configuration
Providers Tab
1. Set up the Providers screen to include the root level for all NetApp resources.
2. Select a specific SMTP server.
Jobs Tab
3. Select the root level of all NetApp related jobs.
4. Select the root Report job.
5. Select the root All SLA Policies.
6. Select the root All Schedules.
Reports Tab
7. Select the root File Analytics tree.
8. Under Protection Compliance, select the NetApp RPO and NetApp Protection Usage.
9. Select the root Storage Protection tree.
10. Under Storage Utilization, select the NetApp related reports.
Screens Tab
12. Select all available Screens except Logs. The Logs function contains audit logs that this user should not
have access to.
13. Name and submit the Resource Pool.
Role Configuration
RELATED TOPICS:
• Configure Resource Pools on page 111
• Configure Roles on page 114
• Configure Accounts on page 117
IBM Admin Role-Based Access Control Configuration
Providers Tab
1. Set up the Providers screen to include the root level for all IBM storage resources.
Note: IBM Spectrum Accelerate resources are managed by the VMware Admin Role.
2. Select a specific SMTP server.
Jobs Tab
3. Select the root level of IBM Inventory, IBM Backup, and IBM Restore jobs.
4. Select the root Report job.
5. Select the root All SLA Policies.
6. Select the root All Schedules.
Reports Tab
7. Under Protection Compliance, select the IBM RPO Compliance report.
8. Under Storage Utilization, select the IBM related reports.
Screens Tab
10. Select all available Screens except Logs. The Logs function contains audit logs that this user should not
have access to.
11. Name and submit the Resource Pool.
Role Configuration
12. Select the permissions listed above.
13. Name and submit the Role.
14. Create a new account and link it to the newly created IBM Admin Resource Pool and Role.
RELATED TOPICS:
• Configure Resource Pools on page 111
• Configure Roles on page 114
• Configure Accounts on page 117
HPE Nimble Storage Admin Role-Based Access Control Configuration
Providers Tab
1. Set up the Providers screen to include the root level for all HPE Nimble Storage resources.
2. Select a specific SMTP server.
Jobs Tab
3. Select the root level of all HPE Nimble Storage related jobs.
4. Select the root Report job.
5. Select the root All SLA Policies.
6. Select the root All Schedules.
Reports Tab
7. Under Protection Compliance, select the HPE Nimble Storage RPO Compliance.
8. Under Storage Utilization, select the HPE Nimble Storage related reports.
Screens Tab
10. Select all available Screens except Logs. The Logs function contains audit logs that this user should not
have access to.
11. Name and submit the Resource Pool.
Role Configuration
12. Select the permissions listed above.
13. Name and submit the Role.
RELATED TOPICS:
• Configure Resource Pools on page 111
• Configure Roles on page 114
• Configure Accounts on page 117
Pure Storage FlashArray Admin Role-Based Access Control Configuration
Providers Tab
1. Set up the Providers screen to include the root level for all Pure Storage resources.
2. Select a specific SMTP server.
Jobs Tab
3. Select the root level of all Pure Storage related jobs.
4. Select the root Report job.
5. Select the root All SLA Policies.
6. Select the root All Schedules.
Reports Tab
7. Under Protection Compliance, select the Pure Storage FlashArray RPO Compliance.
8. Under Storage Utilization, select the Pure Storage related reports.
Screens Tab
10. Select all available Screens except Logs. The Logs function contains audit logs that this user should not
have access to.
11. Name and submit the Resource Pool.
Role Configuration
12. Select the permissions listed above.
13. Name and submit the Role.
RELATED TOPICS:
• Configure Resource Pools on page 111
• Configure Roles on page 114
• Configure Accounts on page 117
Configure Tenants
A tenant is a grouping of resources and users that are administered by a tenant administrator. An IBM
Spectrum Copy Data Management administrator creates tenants, assigns resources to be made available to
the tenants, and creates the tenant administrator. The tenant administrator can then further control and
restrict resources for users in the tenant group, as well as add additional users to the tenant through LDAP.
Tenants can be assigned shared resources, but in most cases would not have access to the resources or
users of other tenants. Only IBM Spectrum Copy Data Management administrators and tenant administrators
can configure a tenant; tenant users cannot configure a tenant.
A resource pool and a role determine the IBM Spectrum Copy Data Management resources and actions
available within a tenant. A built-in Tenant role may be selected, which gives tenant users the ability to register
resources, create job definitions, and perform other predefined IBM Spectrum Copy Data Management tasks.
To log in to the tenant, use the following format: tenant name/user name. For example, if the tenant is named
"tenant1," a user with the username "tenant_user" would log in by entering the following in the IBM Spectrum
Copy Data Management username field: tenant1/tenant_user.
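As an illustration, the tenant-qualified login splits on the slash into the tenant name and the user name. This minimal shell sketch reuses the example names from the paragraph above; it is not part of the product:

```shell
# Example only: split a tenant-qualified IBM Spectrum Copy Data Management
# login of the form "tenant name/user name" into its two parts.
login="tenant1/tenant_user"
tenant="${login%%/*}"   # text before the first slash
user="${login##*/}"     # text after the last slash
echo "tenant=$tenant user=$user"
```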
To ensure tenant administrators and users can only view job definitions associated with their tenant, you must
assign the Create permission, not the View permission, for jobs in the Select the roles/permissions for the
resource pool step. Assigning the View permission gives tenant administrators and users full access to all
jobs in the Resource Pool, including jobs that are not associated with the tenant. By granting only Create
permissions for jobs, tenant administrators and users can create their own tenant-specific jobs. Tenant
administrators can always view the jobs created by their tenant users, regardless of assigned permissions.
Add a Tenant
1. Click the Configure tab. On the Views pane, select Access Control , then select the Tenants tab.
2. In the Enter Tenant Info section, enter a name for the tenant in the Tenant Name field as well as a
tenant administrator name in the Tenant Admin Name field. Enter and confirm a password for the tenant
administrator.
3. In the Select resource pools section, select one or more resource pools to add to the tenant.
4. In the Select the roles/permissions for the resource pool section, click Click to select roles to
assign roles to the selected resource pools. Note that a built-in Tenant role may be selected, which gives
tenant users the ability to register resources, create job definitions, and perform other predefined IBM
Spectrum Copy Data Management tasks.
5. When you are satisfied that the entered information is correct, click Finish. The tenant appears on the All
Tenants pane and the administrator account can log in to the newly created tenant using the following
format: tenant name/tenant admin name.
Edit a Tenant
Revise a tenant to change the associated resource pools and permissions. Updated tenant settings take
effect once accounts associated with the tenant log in.
1. Click the Configure tab. On the Views pane, select Access Control , then select the Tenants tab.
2. Select the tenant to edit by clicking in the row containing the tenant name.
3. Click Edit . The Update Tenant Info editor displays.
4. Update the Tenant Name, Tenant Admin Name, and resource pools associated with the tenant.
5. Click Finish. The revisions are applied to the tenant.
Delete a Tenant
Delete a tenant when it becomes obsolete. Note that before deletion, associated jobs and resources must be
cleaned up through the Maintenance job. The Maintenance job removes resources and associated objects
created by IBM Spectrum Copy Data Management when a job in a pending state is deleted. The cleanup
procedure reclaims space on your storage devices, cleans up your IBM Spectrum Copy Data Management
catalog, and removes related snapshots.
1. Click the Configure tab. On the Views pane, select Access Control , then select the Tenants tab.
2. Select the tenant to delete by clicking in the row containing the tenant name.
3. Click Delete . A confirmation dialog box displays.
RELATED TOPICS:
l Best Practices for Configuring Tenants on page 131
l Configure Resource Pools on page 111
Best Practices for Configuring Tenants
To assign resources to a tenant without granting the tenant users the ability to modify or delete the
resources:
Create a new resource pool, and assign resources to be made available to the tenant in a resource pool on
the Resource Pools tab. On the Tenants tab, select the newly created resource pool and assign the Read
Only permission. This allows a tenant user to view the resources defined in the resource pool, but not modify
or delete them.
To assign permissions to a tenant that allows tenant users to create new job definitions and reports,
but prevents them from viewing existing IBM Spectrum Copy Data Management job definitions:
Create a new resource pool, and assign resources to be made available to the tenant in a resource pool on
the Resource Pools tab. On the Tenants tab, select the newly created resource pool and assign the Create
Only permission. This allows a tenant user to create new job definitions and reports, but prevents them from
viewing existing IBM Spectrum Copy Data Management job definitions.
To assign resources to a tenant and allow the tenant users to create and run job sessions, reports
and perform searches:
Create a new resource pool, and assign resources to be made available to the tenant in a resource pool on
the Resource Pools tab. On the Tenants tab, select the newly created resource pool and assign the Read
Only and Create Only permissions. This allows a tenant user to create as well as run new job sessions and
reports.
General Recommendations
l For a tenant admin, it is recommended to create two resource pools. In the first resource pool, add the
providers to be made available to the tenant, and assign a Read Only permission to the resource pool. In
the second resource pool, assign jobs, security, and screens, and assign the Create Only permission.
Once complete, assign both resource pools to the tenant.
l When configuring a resource pool for a tenant user, it is recommended to exclude the security resources
found in the Step 5: Security step and the Home and Logs resources found in the Step 6: Screens step.
These resources contain general IBM Spectrum Copy Data Management information that may not apply to
the tenant user.
l The built-in All Resources resource pool should not be assigned to a tenant as it includes all of the
resources in the IBM Spectrum Copy Data Management system.
l Selecting higher level objects instead of specific resources and assigning the View, Edit, and Delete
permissions may cause tenants to see resources from other tenants. Add lower-level resources to ensure
the tenants can only see objects assigned to the tenant.
RELATED TOPICS:
l Configure Tenants on page 129
l Configure Resource Pools on page 111
Identities
The topics in the following section cover adding SSH keys and adding, editing, and deleting credentials.
Identities Overview
Some features in IBM Spectrum Copy Data Management require credentials and keys to access your
providers. For example, IBM Spectrum Copy Data Management connects to the Oracle servers as the local
operating system user specified during registration in order to perform tasks like cataloging, data protection,
and data restores. IBM Spectrum Copy Data Management also logs into local database and ASM instances
as this user through password-less OS authentication. Therefore, the user must have all the privileges IBM
Spectrum Copy Data Management needs to perform its tasks.
Credentials and keys are configured through the Identities view.
RELATED TOPICS:
l Add a Key on page 135
l Add a Credential on page 139
Add a Key
Some features in IBM Spectrum Copy Data Management require credentials and keys to access your
providers. For example, IBM Spectrum Copy Data Management connects to the Oracle servers as the local
operating system user specified during registration in order to perform tasks like cataloging, data protection,
and data restores. IBM Spectrum Copy Data Management also logs into local database and ASM instances
as this user through password-less OS authentication. Therefore, the user must have all the privileges IBM
Spectrum Copy Data Management needs to perform its tasks.
IBM Spectrum Copy Data Management connects to Oracle servers as a local operating system user through
a password or an SSH key. To use a key, enter a username and select or create an SSH key. When using a
key, the username must exist as a local user on the Oracle server. For password-based authentication, the
password must be correctly configured for the appropriate user on the Oracle server. For key-based
authentication, the public key must be placed in the authorized_keys file for the appropriate user on the Oracle
server. See Oracle Requirements on page 45.
Amazon Web Services (AWS) access keys and secret keys are configured through the AWS Management
Console and then added to IBM Spectrum Copy Data Management.
The procedures below describe how to add keys and register associated Oracle or AWS providers.
Add an SSH key through the Generate a keypair for me method and register an associated provider
1. In IBM Spectrum Copy Data Management, click the Configure tab. On the Views pane, select
Identities , then the Keys tab.
2. Click Add . The Create Key dialog opens.
3. Select SSH as the key type and enter a key name in the Name field.
4. Select Generate a keypair for me as the creation type and enter an optional comment. Click OK. A
public key is generated and displays in the Create Key dialog. Copy the key. See the following steps to
use this key to register an Oracle provider.
5. On the Oracle server, execute cd ~/.ssh while logged in as the Oracle user assigned to IBM Spectrum
Copy Data Management. Paste and save the generated public key to the authorized_keys file.
6. In IBM Spectrum Copy Data Management, click the Configure tab. On the Views pane, select Sites
& Providers . The Provider Browser opens.
7. Right-click Oracle in the Provider Browser, then click Register . The Register Oracle Server dialog
opens.
8. Select a Site, enter a Name and Host Address.
9. Select Key as the Authentication type. Enter the Oracle username, then select the key created in Step 2
in the Key field. Click OK.
Add an SSH key through the I will provide a keypair method and register an associated provider
SSH keypairs can be generated on the IBM Spectrum Copy Data Management appliance through the
command-line interface (CLI), or generated on another compatible server and then imported onto the
appliance as needed. In some circumstances, creating and adding a private/public SSH keypair generated
on another host may be desirable.
Note: Generally, private keys should not be generated on a client server and then transferred to the IBM
Spectrum Copy Data Management appliance. Take appropriate security measures to protect the secrecy of
the private keys; loss or exposure of an SSH private key outside of the SSH host can severely compromise
the security of communications using the SSH protocol, so copying private keys between systems is not
recommended. If a new SSH keypair is needed, it is strongly advised that you follow the procedure in the
Add an SSH key through the Generate a keypair for me method and register an associated provider
topic, so that the IBM Spectrum Copy Data Management appliance generates the keypair and only the public
key is copied to the intended host. If there is a special need to generate a keypair on another host, use the
procedure outlined below and take appropriate security measures to create, secure, and enter the private
key.
1. Identify a machine that has SSH installed; it will be used to generate the new SSH keypair. Log in to the
identified machine and launch a terminal.
2. In the terminal, generate an SSH keypair by executing the ssh-keygen command:
$ ssh-keygen
3. When prompted, enter the full path name where the keypair will be saved. The ssh-keygen command
suggests a default file, typically /home/<user_account>/.ssh/id_rsa, where <user_account> is the
account used to log in to this system. Use the default only if a key has not yet been generated; otherwise,
accepting the default may overwrite an existing SSH keypair. Any valid path name can be used for the new
SSH key, for example /home/<user_account>/newkey. If a key with the default name already exists, the
message shown below is displayed. Only overwrite existing key files if you intend to do so; press n and
enter a different file name to avoid overwriting an existing keypair.
/home/<user_account>/.ssh/id_rsa already exists. Overwrite (y/n)?
4. Supply a passphrase and press Enter, or press Enter for no passphrase.
5. When prompted, re-enter the passphrase to confirm it.
6. The key generation produces two files: one with the path name supplied in the previous steps for the
private key, and another ending in .pub for the public key. Using the default naming, these are id_rsa
and id_rsa.pub. The generated public key (id_rsa.pub) must be transferred to the server to which the
IBM Spectrum Copy Data Management appliance will connect; in this example, an Oracle server.
Transfer the public key to the Oracle server. For the remainder of this procedure, it is assumed that the
keypair is saved in the default location, /home/<user_account>/.ssh/, using the default file names. If
the keypair is created using a different file name, use that file name in the steps that follow.
7. On the server to which the IBM Spectrum Copy Data Management appliance will connect and to which
the public key has been copied, the key (id_rsa.pub) must be appended to the user’s authorized_
keys file. The authorized_keys file is generally found in the user’s SSH directory, for example
/home/<user_account>/.ssh/authorized_keys. If the authorized_keys file does not exist, consult
the operating system’s documentation for the procedure for properly creating this file. If the file exists,
append the contents of the public key to the authorized_keys file. If this is not being done from the
account that contains the authorized_keys file, it may be necessary to enter the su command to switch
to that user. The step below assumes that you are logged into the server with the account that contains
the authorized_keys file:
$ cat id_rsa.pub >> authorized_keys
Note: This process can be automated using the ssh-copy-id program from the computer used to
generate the key. Consult the vendor’s documentation for details on usage of this program.
8. Log in to the IBM Spectrum Copy Data Management appliance.
9. Click on the Configure tab. On the Views pane, select Identities , and then click on the Keys tab.
10. Click Add . The Create Key dialog opens.
11. Select SSH as the key type and enter a name for the key in the Name field.
12. Select I will provide a keypair as the creation type.
13. On the server where the SSH keypair was generated, locate the private key (id_rsa). For example, the
key generated by this process is in the following directory: /home/<user_account>/.ssh/. Copy the
contents of the private key (id_rsa) to the IBM Spectrum Copy Data Management appliance into the
Private Key field in the Create Key dialog.
14. (Optional) It is highly recommended to copy the public key (id_rsa.pub) into the Public Key field.
15. (Optional) Enter a helpful comment so that the usage of the key can be easily recalled.
16. Click OK. The key is added to the IBM Spectrum Copy Data Management appliance.
17. Once the key has been added to the IBM Spectrum Copy Data Management appliance, the server to
which the IBM Spectrum Copy Data Management appliance will connect needs to be registered. In this
example, an Oracle server is used. In the IBM Spectrum Copy Data Management appliance, click on the
Configure tab.
18. On the Views pane, select Sites & Providers . The Provider Browser opens.
19. Right-click Oracle in the Provider Browser dialog and then click Register . The Register Oracle
Server dialog opens.
20. Select a Site, enter a name in the Name field and a host address in the Host Address field.
21. Select Key as the Authentication type. Enter the username of the user account whose authorized_keys
file received the public key in Step 7 on the host to which the IBM Spectrum Copy Data Management
appliance will connect. In this example, this is the Oracle username.
22. Select the key created in Step 10 in the Key field. Click OK.
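For reference, the key-generation and installation steps above can be sketched end to end in a throwaway directory. The path, key comment, and empty passphrase here are demo assumptions, not requirements; in practice the private key stays on the generating host and only the public key is appended on the target server:

```shell
# Demo of the keypair workflow in a scratch directory (safe to run as-is).
WORK="$(mktemp -d)"

# Steps 2-4: generate a keypair at a non-default path (empty passphrase for demo only)
ssh-keygen -t rsa -N "" -f "$WORK/newkey" -C "example comment" >/dev/null

# Step 7: on the target server, append the public key to authorized_keys
# with the permissions sshd expects.
mkdir -p "$WORK/.ssh"
chmod 700 "$WORK/.ssh"
cat "$WORK/newkey.pub" >> "$WORK/.ssh/authorized_keys"
chmod 600 "$WORK/.ssh/authorized_keys"
```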
Add an Amazon Web Services (AWS) key and register an associated provider
1. Create your AWS access key and secret key through the AWS Management Console. Make note of the
access and secret keys, which will be used later in this procedure. See Managing Access Keys for IAM
Users.
2. In IBM Spectrum Copy Data Management, click the Configure tab. On the Views pane, select
Identities , then the Keys tab.
3. Click Add . The Create Key dialog opens.
4. Select AWS as the key type and enter a key name in the Name field.
5. Enter the access key and secret key created in Step 1 in the Access and Secret fields. Enter an optional
comment. See the following steps to use this key to register an AWS provider.
6. In IBM Spectrum Copy Data Management, click the Configure tab. On the Views pane, select Sites
& Providers . The Provider Browser opens.
7. Right-click AWS in the Provider Browser, then click Register . The Register Amazon Web Services
dialog opens.
RELATED TOPICS:
l Add a Credential on page 139
l Oracle Requirements on page 45
l Register a Provider on page 79
Add a Credential
Add a credential
1. Click the Configure tab. On the Views pane, select Identities , then the Credentials tab.
2. Select a credential type in the Type field. Available options include System and Oracle.
3. Enter a name for the credential in the Name field.
4. Enter your login information for the associated provider in the Username and Password fields. For
example, if creating a credential for an Oracle database, enter the login information associated with the
Oracle database.
5. Enter an optional comment, then click OK. The credential appears on the Credentials pane and can be
applied to new and existing storage providers.
Edit a Credential
Revise a credential to change the associated username and password.
1. Click the Configure tab. On the Views pane, select Identities , then the Credentials tab.
2. In the Credentials pane, select the credential to edit by clicking in the row containing the credential name.
3. Click Edit . The Edit Credential dialog opens.
4. Revise the Username and Password fields as needed, then click OK. The revisions are applied to the
credential.
Delete a Credential
1. Click the Configure tab. On the Views pane, select Identities , then the Credentials tab.
2. In the Credentials pane, select the credential to delete by clicking in the row containing the credential
name.
3. Click Delete . A confirmation dialog box displays.
RELATED TOPICS:
l Identities Overview on page 134
l Register a Provider on page 79
Configure SLA Policies
CONSIDERATIONS:
l Note that a VADP-based VM Replication of a virtual machine with a vRDM will convert the
vRDM to a VMDK. When restoring the virtual machine, consider the size requirements
of the virtual machine in addition to the vRDM when selecting a destination datastore.
The original datastore may not have enough free space to store the converted vRDM.
3. Select a type of policy to create based on your storage provider. Select AWS to create an Amazon Web
Services Backup job containing VM copies. Select NetApp ONTAP to create a NetApp ONTAP Backup
job containing snapshots, VM copies, mirrors, and vaults. Select IBM to create an IBM Backup job
containing FlashCopies, Global Mirrors with Change Volumes, and VM Copies. Select DellEMC Unity to
create a DellEMC Unity Backup job containing snapshots and replication copies. VMware Backup jobs
support IBM, DellEMC Unity, and NetApp ONTAP SLA Policies, depending on your storage provider.
4. Enter a name and a meaningful description of the SLA Policy.
Configure AWS SLA Policies
To add a VM Replication sub-policy to an AWS SLA Policy:
1. Select the source icon and define the recovery point objective to determine the minimum frequency
and interval with which backups must be made. In the Frequency field select Minutes, Hourly, Daily,
Weekly, or Monthly, then set the interval in the Interval field. The lowest available frequency is five
minutes.
Note: Edits to the frequency and interval of an SLA Policy apply to all associated job schedules.
2. Click Add VM Copy .
3. In the VM Replication Destination pane select an AWS cloud destination from the list of available
resources as the VM Replication destination, along with an associated storage gateway. Note that
storage gateways are discovered based on the region of the AWS provider.
4. In the Options pane set the VM Replication sub-policy options.
Keep Copies
After a certain number of copies are created for a resource, older copies are purged from the
gateway. Enter the age of the copies to purge in the Days field, or the number of copies to keep in
the Copies (maximum) field.
Target Volume Prefix Label
Enter an optional label to identify the target volume. This label is added as a prefix to the volume
name created by the job.
Note: Volume prefix labels must contain only alphanumeric characters and underscores. Labels
cannot begin with numeric characters.
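The prefix-label rules above (alphanumeric characters and underscores only, no leading digit) can be checked with a simple pattern. This helper is an illustration only, not part of the product:

```shell
# Sketch: validate a proposed prefix label against the documented rules.
is_valid_label() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z_][A-Za-z0-9_]*$'
}

if is_valid_label "nightly_copy"; then echo "nightly_copy: ok"; fi
if ! is_valid_label "1copy"; then echo "1copy: rejected (leading digit)"; fi
if ! is_valid_label "copy-a"; then echo "copy-a: rejected (hyphen)"; fi
```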
3. In the VM Replication Destination pane select a DellEMC host destination from the list of available
resources as the VM Replication destination, along with an associated storage pool. If no storage
pool is selected, the storage pool with the largest amount of space available is chosen by default.
4. In the Options pane set the VM Replication sub-policy options.
Keep Snapshots
After a certain number of snapshot instances are created for a resource, older instances are
purged from the storage controller. Enter the age of the snapshot instances to purge in the Days
field, or the number of instances to keep in the Snapshots field.
Snapshot Prefix Label
Enter an optional label to identify the snapshot. This label is added as a prefix to the snapshot
name created by the job.
Note: Snapshot labels must contain only alphanumeric characters and underscores.
Name
Enter an optional label to replace the default snapshot sub-policy label displayed in IBM Spectrum
Copy Data Management. The default initial label is VM Copy0.
Access Type
Select the access type for file-based storage. Available access types include Hidden .ckpt folder
(read-only) and Shares, based on CIFS or NFS mounting.
Destination storage limit in GB / Destination volumes limit
Specify quotas for storage usage and the number of volumes created on the destination for all jobs
utilizing the SLA Policy.
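The Keep Snapshots option above retains instances either by age or by count. A small sketch of the age-based mode, using an invented list of snapshot ages purely for illustration:

```shell
# Sketch: age-based retention. Snapshots older than DAYS would be purged.
DAYS=10
ages="1 3 7 14 30"   # hypothetical snapshot ages, in days
for a in $ages; do
  if [ "$a" -gt "$DAYS" ]; then
    echo "purge snapshot aged ${a}d"
  else
    echo "keep snapshot aged ${a}d"
  fi
done
```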
1. Select the source icon and define the recovery point objective to determine the minimum frequency
and interval with which backups must be made. In the Frequency field select Minutes, Hourly, Daily,
Weekly, or Monthly, then set the interval in the Interval field. The lowest available frequency is five
minutes.
Note: Edits to the frequency and interval of an SLA Policy apply to all associated job schedules.
2. Click Add Replication .
3. In the Replication Destination pane select a DellEMC host destination from the list of available
resources as the Replication destination, along with an associated storage pool. If no storage pool is
selected, the storage pool with the largest amount of space available is chosen by default.
4. In the Options pane set the Replication sub-policy options.
Keep Snapshots
After a certain number of snapshot instances are created for a resource, older instances are
purged from the storage controller. Enter the age of the snapshot instances to purge in the Days
field, or the number of instances to keep in the Snapshots field.
Name
Enter an optional name to replace the default Replication sub-policy name displayed in IBM
Spectrum Copy Data Management. The default initial name is Replication0.
Keep Source Volume name for target volume
Enable to retain the source volume name for copy data generated by IBM Spectrum Copy Data
Management.
Target Volume Prefix Label
Enter an optional label to identify the target volume. This label is added as a prefix to the volume
name created by the job.
Note: Volume prefix labels must contain only alphanumeric characters and underscores. Labels
cannot begin with numeric characters.
Snapshot Prefix Label
Enter an optional label to identify the snapshot. This label is added as a prefix to the snapshot
name created by the job.
Note: Snapshot labels must contain only alphanumeric characters and underscores.
Synchronization
Specify the time in minutes in which the change volumes will be synchronized with a consistent
copy of the data. If a copy does not complete in the cycle period, the next cycle period will not start
until the copy is complete. Synchronization can also be initiated manually.
Destination storage limit in GB / Destination volumes limit
Specify quotas for storage usage and the number of volumes created on the destination for all jobs
utilizing the SLA Policy.
3. In the Destination pane select an IBM host destination from the list of available resources as the VM
Replication destination, along with an associated storage pool. If no storage pool is selected, the
storage pool with the largest amount of space available is chosen by default. To select the original
target destination, select Use Original.
4. In the Options pane set the VM Replication sub-policy options.
Keep Snapshots
After a certain number of snapshot instances are created for a resource, older instances are
purged from the storage controller. Enter the age of the snapshot instances to purge in the Days
field, or the number of instances to keep in the Snapshots field.
Target Volume Prefix Label
Enter an optional label to identify the target volume. This label is added as a prefix to the volume
name created by the job.
Note: Volume prefix labels must contain only alphanumeric characters and underscores. Labels
cannot begin with numeric characters.
Snapshot Prefix Label
Enter an optional label to identify the snapshot. This label is added as a prefix to the snapshot
name created by the job.
Note: Snapshot labels must contain only alphanumeric characters and underscores.
Name
Enter an optional label to replace the default snapshot sub-policy label displayed in IBM Spectrum
Copy Data Management. The default initial label is VM Copy0.
Protocol
If more than one storage protocol is available, select the protocol to take priority in the job.
Available protocols include iSCSI and Fibre Channel.
Full Copy Method
Select the full copy method. Available full copy methods include Clone or VADP-based VM
Replication.
Destination storage limit in GB / Destination volumes limit
Specify quotas for storage usage and the number of volumes created on the destination for all jobs
utilizing the SLA Policy.
Keep Snapshots
After a certain number of snapshot instances are created for a resource, older instances are
purged from the storage controller. Enter the age of the snapshot instances to purge in the Days
field, or the number of instances to keep in the Snapshots field.
Name
Enter an optional name to replace the default FlashCopy sub-policy name displayed in IBM
Spectrum Copy Data Management. The default initial name is FlashCopy0.
FlashCopy Volume Prefix
Enter an optional label to identify the FlashCopy. This label is added as a prefix to the FlashCopy
name created by the job.
Note: FlashCopy labels must contain only alphanumeric characters and underscores.
Do not stretch FlashCopy (Enhanced Stretch Cluster Feature)
Enable this option to prevent the creation of stretched FlashCopies. This option applies only to
volumes that are stretched on storage systems that have the Enhanced Stretch Cluster feature
enabled.
To add a Global Mirror with Change Volumes sub-policy to an IBM Spectrum Virtualize
SLA Policy:
1. Select the source icon and define the recovery point objective to determine the minimum frequency
and interval with which backups must be made. In the Frequency field select Minutes, Hourly, Daily,
Weekly, or Monthly, then set the interval in the Interval field. The lowest available frequency is five
minutes.
Note: Edits to the frequency and interval of an SLA Policy apply to all associated job schedules.
2. Click Add Global Mirror with Change Volumes .
3. In the Global Mirror with Change Volumes Destination pane select an IBM host destination from the
list of available resources as the Global Mirror destination, along with an associated storage pool. If
no storage pool is selected, the storage pool with the largest amount of space available is chosen by
default. To select the original target destination, select Use Original.
4. In the Options pane set the Global Mirror with Change Volumes sub-policy options.
Keep Snapshots
After a certain number of snapshot instances are created for a resource, older instances are
purged from the storage controller. Enter the age of the snapshot instances to purge in the Days
field, or the number of instances to keep in the Snapshots field.
Name
Enter an optional name to replace the default Global Mirror sub-policy name displayed in IBM
Spectrum Copy Data Management. The default initial name is Global Mirror0.
Keep Source Volume name for target volume
Enable to retain the source volume name for copy data generated by IBM Spectrum Copy Data
Management.
Volume Prefix Label
Enter an optional label to identify the volume. This label is added as a prefix to the volume name
created by the job and cannot be edited after the job is submitted.
Note: Volume prefix labels must contain only alphanumeric characters and underscores. Labels
cannot begin with numeric characters.
Cycle Period (seconds)
Specify the time in which the change volumes will be refreshed with a consistent copy of the data.
If a copy does not complete in the cycle period, the next cycle period will not start until the copy is
complete. The range of possible values is 60 through 86400. The default is 300.
Global Mirror Volume Prefix
Enter an optional label to identify the Global Mirror. This label is added as a prefix to the Global
Mirror name created by the job.
Note: Global Mirror labels must contain only alphanumeric characters and underscores.
Destination storage limit in GB / Destination volumes limit
Specify quotas for storage usage and the number of volumes created on the destination for all jobs
utilizing the SLA Policy.
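The Cycle Period option above accepts values from 60 through 86400 seconds, with a default of 300. A quick range check, shown as an illustration only:

```shell
# Sketch: validate a Global Mirror cycle period against the documented range.
check_cycle_period() {
  if [ "$1" -ge 60 ] && [ "$1" -le 86400 ]; then
    echo "$1: valid"
  else
    echo "$1: out of range (60-86400)"
  fi
}

check_cycle_period 300
check_cycle_period 30
```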
3. In the Destination pane select an IBM host destination from the list of available resources as the VM
Replication destination, along with an associated storage pool. If no storage pool is selected, the
storage pool with the largest amount of space available is chosen by default. To select the original
target destination, select Use Original.
4. In the Options pane set the VM Replication sub-policy options.
Keep Snapshots
After a certain number of snapshot instances are created for a resource, older instances are
purged from the storage controller. Enter the age of the snapshot instances to purge in the Days
field, or the number of instances to keep in the Snapshots field.
Golden Snapshot
Enable to create a golden snapshot on a thin provisioned pool, which cannot be deleted through
an automated process.
Note that when space is limited, IBM Spectrum Accelerate snapshots are deleted after associated
jobs complete. In a thin provisioned pool, the golden snapshot option ensures the snapshot will
not be deleted. This option is not compatible with thick provisioned pools, and may lead to the loss
of data if the SLA Policy snapshot is expired prematurely by the storage system to reclaim space.
Target Volume Prefix Label
Enter an optional label to identify the target volume. This label is added as a prefix to the volume
name created by the job.
Note: Volume prefix labels must contain only alphanumeric characters and underscores. Labels
cannot begin with numeric characters.
Snapshot Prefix Label
Enter an optional label to identify the snapshot. This label is added as a prefix to the snapshot
name created by the job.
Note: Snapshot labels must contain only alphanumeric characters and underscores.
Name
Enter an optional label to replace the default snapshot sub-policy label displayed in IBM Spectrum
Copy Data Management. The default initial label is VM Copy0.
Protocol
If more than one storage protocol is available, select the protocol to take priority in the job.
Available protocols include iSCSI and Fibre Channel.
Full Copy Method
Select the full copy method. Available full copy methods include Clone or VADP-based VM
Replication.
Destination storage limit in GB / Destination volumes limit
Specify quotas for storage usage and the number of volumes created on the destination for all jobs
utilizing the SLA Policy.
2. Click Add Snapshot. In the Options pane, set the snapshot sub-policy options.
Keep Snapshots
After a certain number of snapshot instances are created for a resource, older instances are
purged from the storage controller. Enter the age of the snapshot instances to purge in the Days
field, or the number of instances to keep in the Snapshots field.
Golden Snapshot
Enable to create a golden snapshot on a thin provisioned pool, which cannot be deleted through
an automated process.
Note that when space is limited, IBM Spectrum Accelerate snapshots are deleted after associated
jobs complete. In a thin provisioned pool, the golden snapshot option ensures the snapshot will
not be deleted. This option is not compatible with thick provisioned pools, and may lead to the loss
of data if the SLA Policy snapshot is expired prematurely by the storage system to reclaim space.
Snapshot Prefix Label
Enter an optional label to identify the snapshot. This label is added as a prefix to the snapshot
name created by the job.
Note: Snapshot labels must contain only alphanumeric characters and underscores.
Name
Enter an optional name to replace the default snapshot sub-policy name displayed in IBM
Spectrum Copy Data Management. The default initial name is Snapshot0.
Keep Source Snapshots / Keep Destination Snapshots
After a certain number of snapshot instances are created for a
resource, older instances are purged from the Source and Destination. In the Keep Source
Snapshots and Keep Destination Snapshots fields, enter the age of the snapshot instances to
purge in the Days field, or the number of instances to keep in the Snapshots (maximum) field.
Golden Snapshot
Enable to create a golden snapshot on a thin provisioned pool, which cannot be deleted through
an automated process.
Note that when space is limited, IBM Spectrum Accelerate snapshots are deleted after associated
jobs complete. In a thin provisioned pool, the golden snapshot option ensures the snapshot will
not be deleted. This option is not compatible with thick provisioned pools, and may lead to the loss
of data if the SLA Policy snapshot is expired prematurely by the storage system to reclaim space.
Name
Enter an optional name to replace the default Replication sub-policy name displayed in IBM
Spectrum Copy Data Management. The default initial name is Replication0.
Keep Source Volume name for target volume
Enable to retain the source volume name for copy data generated by IBM Spectrum Copy Data
Management.
Target Volume Prefix Label
Enter an optional label to identify the target volume. This label is added as a prefix to the volume
name created by the job.
Note: Volume prefix labels must contain only alphanumeric characters and underscores. Labels
cannot begin with numeric characters.
Snapshot Prefix Label
Enter an optional label to identify the snapshot. This label is added as a prefix to the snapshot
name created by the job.
Note: Snapshot labels must contain only alphanumeric characters and underscores.
Synchronization
Specify the time in minutes in which the change volumes will be synchronized with a consistent
copy of the data. If a copy does not complete in the cycle period, the next cycle period will not start
until the copy is complete. Synchronization can also be initiated manually.
Destination storage limit in GB / Destination volumes limit
Specify quotas for storage usage and the number of volumes created on the destination for all jobs
utilizing the SLA Policy.
1. Select the source icon and define the recovery point objective to determine the minimum frequency
and interval with which backups must be made. In the Frequency field select Minutes, Hourly, Daily,
Weekly, or Monthly, then set the interval in the Interval field. The lowest available frequency is five
minutes.
Note: Edits to the frequency and interval of an SLA Policy apply to all associated job schedules.
2. Click Add Snapshot. In the Options pane, set the snapshot sub-policy options.
Keep Snapshots
After a certain number of snapshot instances are created for a resource, older instances are
purged from the storage controller. Enter the age of the snapshot instances to purge in the Days
field, or the number of instances to keep in the Snapshots field.
Disable system snapshot policy
Disables all the system snapshot jobs on the storage volumes.
Snapshot Prefix Label
Enter an optional label to identify the snapshot. This label is added as a prefix to the snapshot
name created by the job.
Note: Snapshot labels must contain only alphanumeric characters and underscores.
Name
Enter an optional name to replace the default snapshot sub-policy name displayed in IBM
Spectrum Copy Data Management. The default initial name is Snapshot0.
3. In the VM Replication Destination pane select an SVM from the list of available resources as the VM
Replication destination, along with an associated aggregate. If no aggregate is selected, the
aggregate with the largest amount of space available is chosen by default.
4. In the Options pane set the VM Replication sub-policy options.
Keep Snapshots
After a certain number of snapshot instances are created for a resource, older instances are
purged from the storage controller. Enter the age of the snapshot instances to purge in the Days
field, or the number of instances to keep in the Snapshots field.
Disable system snapshot policy
Disables all the system snapshot jobs on the storage volumes.
Snapshot Prefix Label
Enter an optional label to identify the snapshot. This label is added as a prefix to the snapshot
name created by the job.
Note: Snapshot labels must contain only alphanumeric characters and underscores.
Name
Enter an optional label to replace the default snapshot sub-policy label displayed in IBM Spectrum
Copy Data Management. The default initial label is VM Copy0.
Storage Efficiency (Deduplication)
Enable or disable storage efficiency. Storage efficiency uses data deduplication to store the
maximum amount of data while consuming less space.
Destination Datastore Type
Set the destination datastore type. Available datastore types include NFS and VMFS.
NFS - A NetApp volume will be created for NFS access and the target datastore will be created on
that NFS share.
VMFS - A NetApp volume will be created and a LUN will be created on the volume. The LUN
will be mapped to the ESX host and formatted for VMFS. A VMFS datastore will be
created on the LUN.
Note: Volume prefix labels must contain only alphanumeric characters and underscores. Labels
cannot begin with numeric characters.
Storage Efficiency (Deduplication)
Enable or disable snapshot storage efficiency. Storage efficiency uses data deduplication to store
the maximum amount of data while consuming less space.
Throttle
Set the transfer throughput in KB per second between the source and the destination, which
controls the number of parallel transfers that can take place.
Destination storage limit in GB / Destination volumes limit
Specify quotas for storage usage and the number of volumes created on the destination for all jobs
utilizing the SLA Policy.
1. Select the source icon and define the recovery point objective to determine the minimum frequency
and interval with which backups must be made. In the Frequency field select Minutes, Hourly, Daily,
Weekly, or Monthly, then set the interval in the Interval field. The lowest available frequency is five
minutes.
Note: Edits to the frequency and interval of an SLA Policy apply to all associated job schedules.
2. Click Add VM Copy.
3. In the VM Replication Destination pane select a Pure Storage FlashArray host destination from the
list of available resources as the VM Replication destination.
4. In the Options pane set the VM Replication sub-policy options.
Keep Snapshots
After a certain number of snapshot instances are created for a resource, older instances are
purged from the storage controller. Enter the age of the snapshot instances to purge in the Days
field, or the number of instances to keep in the Snapshots field.
Target Volume Prefix Label
Enter an optional label to identify the target volume. This label is added as a prefix to the volume
name created by the job.
Note: Volume prefix labels must contain only alphanumeric characters and underscores. Labels
cannot begin with numeric characters.
Snapshot Prefix Label
Enter an optional label to identify the snapshot. This label is added as a prefix to the snapshot
name created by the job.
Note: Snapshot labels must contain only alphanumeric characters and underscores.
Name
Enter an optional label to replace the default snapshot sub-policy label displayed in IBM Spectrum
Copy Data Management. The default initial label is VM Copy0.
Protocol
If more than one storage protocol is available, select the protocol to take priority in the job.
Available protocols include iSCSI and Fibre Channel.
Full Copy Method
Select the full copy method. Available methods include Clone or VADP-based VM Replication.
1. Select the source icon and define the recovery point objective to determine the minimum frequency
and interval with which backups must be made. In the Frequency field select Minutes, Hourly, Daily,
Weekly, or Monthly, then set the interval in the Interval field. The lowest available frequency is five
minutes.
Note: Edits to the frequency and interval of an SLA Policy apply to all associated job schedules.
2. Click Add Snapshot. In the Options pane, set the snapshot sub-policy options.
Keep Snapshots
After a certain number of snapshot instances are created for a resource, older instances are
purged from the storage controller. Enter the age of the snapshot instances to purge in the Days
field, or the number of instances to keep in the Snapshots field.
Snapshot Prefix Label
Enter an optional label to identify the snapshot. This label is added as a prefix to the snapshot
name created by the job.
Note: Snapshot labels must contain only alphanumeric characters and underscores.
Name
Enter an optional name to replace the default snapshot sub-policy name displayed in IBM
Spectrum Copy Data Management. The default initial name is Snapshot0.
3. If your Pure Storage FlashArray supports CloudSnap functionality, you can add a snapshot offload,
which creates an offload copy on an S3 cloud storage target or NFS share. This option is only
available after a snapshot sub-policy is added. Click Add Snapshot Offload, or right-click the
snapshot sub-policy and click Add Snapshot Offload. In the Options pane, set the snapshot
offload sub-policy options.
Keep Snapshots
After a certain number of snapshot instances are created for a resource, older instances are
purged from the storage controller. Enter the age of the snapshot instances to purge in the Days
field, or the number of instances to keep in the Snapshots field.
Best Practice: The IBM Spectrum Copy Data Management user interface indicates that a Pure
Storage FlashArray CloudSnap offload job has completed even though the transfer may still be
occurring in the background; how long the transfer takes depends on network speed. Consider
setting an age as the retention for offload copies when using the Pure Storage FlashArray
CloudSnap functionality. Doing so ensures that a sufficient amount of time passes for data to be
transferred to the S3 storage target or NFS share before it has to be condensed out from a
backup. This is particularly important if several offload jobs are run in quick succession.
Offload Target
Select Cloud from the menu as the offload target.
Snapshot Prefix Label
Enter an optional label to identify the snapshot. This label is added as a prefix to the snapshot
name created by the job.
Note: Snapshot labels must contain only alphanumeric characters and underscores.
Name
Enter an optional name to replace the default snapshot sub-policy name displayed in IBM
Spectrum Copy Data Management. The default initial name is Snapshotoffload0.
4. In the Replication Destination pane select a Pure Storage FlashArray host destination from the list of
available resources as the Replication destination.
5. In the Options pane set the Replication sub-policy options.
Keep Source Snapshots / Keep Destination Snapshots
A Pure Storage replication sub-policy provides snapshots to both a Source, or Primary location,
and a Destination, or Replication location. After a certain number of snapshot instances are
created for a resource, older instances are purged from the Source and Destination. In the Keep
Source Snapshots and Keep Destination Snapshots fields, enter the age of the snapshot
instances to purge in the Days field, or the number of instances to keep in the Snapshots
(maximum) field.
Name
Enter an optional name to replace the default Replication sub-policy name displayed in IBM
Spectrum Copy Data Management. The default initial name is Replication0.
Snapshot Prefix Label
Enter an optional label to identify the snapshot. This label is added as a prefix to the snapshot
name created by the job.
Note: Snapshot labels must contain only alphanumeric characters and underscores.
6. When you are satisfied that the SLA Policy-specific information is correct, click Finish. The SLA Policy
appears on the All SLA Policies pane and can be applied to new and existing Backup job definitions.
NEXT STEPS:
RELATED TOPICS:
l Create a Backup Job Definition - DellEMC Unity on page 228
l Create a Backup Job Definition - IBM Spectrum Virtualize on page 235
l Create a Backup Job Definition - NetApp ONTAP on page 239
l Create a Backup Job Definition - Pure Storage FlashArray on page 246
l Create a Backup Job Definition - VMware on page 250
Configure Scripts
Prescripts and postscripts are scripts that can be run before or after Backup and Restore jobs run, both at a
job-level and before or after snapshots are captured. A script can consist of one or many commands, such as
a shell script for Linux-based virtual machines or Batch and PowerShell scripts for Windows-based virtual
machines.
Scripts can be created locally, uploaded to your environment through the Scripts pane, then applied to job
definitions. In a Windows environment, if your application supports VSS, the Backup job triggers the VSS
application quiesce logic if the Make these VMs application/file system consistent option is enabled when
creating the VMware Backup job. However, for applications that don’t support VSS, or on Linux virtual
machines, pre and post snapshot scripts can be used to quiesce your application for the snapshot backup.
Note: If adding a script to a Windows-based File System job definition, the user running the script must have
the "Log on as a service" right enabled, which is required for running prescripts and postscripts. For more
information about the "Log on as a service" right, see https://technet.microsoft.com/en-
us/library/cc794944.aspx.
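As a sketch of the pre-snapshot quiesce pattern described above, a minimal Linux shell prescript might look like the following. The quiesce_app function and the commented freeze step are hypothetical stand-ins for your application's own flush/freeze tooling:

```shell
#!/bin/sh
# Sketch of a Linux pre-snapshot script. "quiesce_app" and the commented
# freeze step are hypothetical stand-ins for your application's own
# flush/freeze tooling (for example, a database's "begin backup" mode).
quiesce_app() {
  sync                    # flush file system buffers to disk
  # application-specific freeze command would go here
  echo "quiesced"         # log line visible in the job's script output
}
quiesce_app
```

A matching post-snapshot script would reverse the freeze (thaw the application) once the snapshot has been captured.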
Upload a Script
Supported scripts include shell scripts for Linux-based virtual machines and Batch and PowerShell scripts for
Windows-based virtual machines. Scripts must be created using the operating system's associated file
format.
1. Click the Configure tab. On the Views pane, select Scripts.
2. Click Upload. The Upload Script dialog opens.
3. In the Script field, browse for a local script to upload, then click Open.
4. Enter an optional comment, then click OK. The script appears on the Scripts pane and can be applied to
supported jobs.
Replace a Script
Upload a revised version of a script.
1. Click the Configure tab. On the Views pane, select Scripts.
2. In the Scripts pane, select the script to replace by clicking in the row containing the script name.
3. Click Replace. The Update Script dialog opens.
4. In the Script field, browse for a local updated script to upload, then click Open.
5. Enter an optional comment, then click OK. The revised script appears on the Scripts pane and can be
applied to supported jobs.
Delete a Script
Delete a script when it becomes obsolete. If you first remove a script from its associated job definitions,
you can delete the script immediately. If a job definition is deleted while the script is still assigned to it,
you must run the Maintenance job before you can delete the script.
1. Click the Configure tab. On the Views pane, select Scripts.
2. In the Scripts pane, select the script to delete by clicking in the row containing the script name.
3. Click Delete. A confirmation dialog box opens.
RELATED TOPICS:
l Job Definition Overview on page 174
l Using State and Status Arguments in Postscripts on page 343
l Return Code Reference on page 562
Schedules
The topics in the following section cover creating, editing, and deleting schedules.
Create a Schedule
A schedule is a set of rules for triggering a job. Create a schedule to apply to one or more jobs. Once applied,
the job sessions are run as defined by the parameters of the schedule.
Best Practice: Overlapping schedules may slow down your network. Decrease the strain on your network by
configuring multiple schedules to run at different times or days of the week.
To create a schedule:
1. Click the Configure tab. On the Views pane, select Schedules.
NEXT STEPS:
l Assign the schedule to a new or existing job definition. See Jobs Overview on page 169
and Edit a Job Definition on page 351.
RELATED TOPICS:
l Edit a Schedule on page 165
l Delete a Schedule on page 167
Edit a Schedule
Revise a schedule to change the timetable for running a job session. Because a single schedule can be
applied to multiple jobs, all jobs associated with the schedule you are editing are impacted.
Best Practice: Overlapping schedules may slow down your network. Decrease the strain on your network by
configuring multiple schedules to run at different times or days of the week.
To edit a schedule:
1. Click the Configure tab. On the Views pane, select Schedules.
2. Select the schedule to edit by clicking in the row containing the schedule name.
3. Revise fields on the Properties pane:
a. In Name, enter a descriptive schedule name. By default, your schedule parameters are added to the
schedule description.
b. Select the frequency for running the job session:
Once to schedule a job session to run once. In Trigger date, select the date for the
job session to run. In Time of day, select a starting time.
Hourly to schedule a job session to run hourly. In Interval, select the number of hours between
job sessions. In Time of day and Starts, select a starting time and date. If applicable, enter an
expiration date in Expires.
Daily to schedule a job session to run daily or every few days. In Interval, select the number of
days between job sessions. In Time of day and Starts, select a starting time and date. If
applicable, enter an expiration date in Expires.
Weekly to schedule a job session to run weekly or every few weeks. In Day of week, select the
day of the week for the job session to run during the week. In Time of day and Starts, select a
starting time and date. If applicable, enter an expiration date in Expires.
Monthly to schedule a job session to run monthly or every few months. In Day of month, select
the day or days of the month for the job session to run during the month. In Time of day and
Starts, select a starting time and date. If applicable, enter an expiration date in Expires.
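As an illustration of how an interval-based trigger such as Hourly behaves, the next run time can be sketched as follows (a simplified model, not the product's scheduler; times are epoch seconds):

```shell
# Simplified model of an Hourly trigger: given the schedule start, the
# interval in hours, and the current time (all epoch seconds), print
# when the next session triggers.
next_hourly_run() {
  start="$1"; interval_h="$2"; now="$3"
  step=$((interval_h * 3600))
  if [ "$now" -le "$start" ]; then
    echo "$start"                   # schedule has not started yet
  else
    elapsed=$((now - start))
    cycles=$((elapsed / step + 1))  # next whole interval after "now"
    echo $((start + cycles * step))
  fi
}
```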
NEXT STEPS:
l Assign the edited schedule to a new or existing job definition. See Jobs Overview on
page 169 and Edit a Job Definition on page 351.
RELATED TOPICS:
l Create a Schedule on page 163
l Delete a Schedule on page 167
Delete a Schedule
Delete a schedule from the application if it is not used to trigger jobs.
To delete a schedule:
1. Click the Configure tab. On the Views pane, select Schedules.
2. Select the schedule to delete by clicking in the row containing the schedule name.
3. Click Delete. A confirmation dialog box opens.
RELATED TOPICS:
l Create a Schedule on page 163
l Edit a Schedule on page 165
Jobs
The topics in the following section cover defining, editing, and deleting job definitions, as well as descriptions
of job types.
Jobs Overview
From the Jobs tab you can create and edit job definitions. A job definition is a user-defined set of tasks and
rules. Once a job definition is added to IBM Spectrum Copy Data Management, it can be combined with a
schedule or trigger to create a job. There are several job types including Inventory, Backup, Restore, Reports,
and Scripts.
You can also start, monitor, stop, and resume job sessions from the Jobs tab. From this pane, you can
also view all scheduled and unscheduled job sessions, and start jobs before scheduled run times.
Note: If the IBM Spectrum Copy Data Management virtual appliance is shut down while jobs are in progress,
the jobs will automatically restart once the virtual appliance is back online. By default, jobs that began running
30 minutes before the appliance restarted will automatically restart.
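One reading of that restart window can be sketched as follows (an illustration of the rule as stated, not product code; times are epoch seconds):

```shell
# Illustration of the restart rule: a job is eligible for automatic
# restart if it began within the given window (default 30 minutes)
# before the appliance came back online. Times are epoch seconds.
restarts_after_reboot() {
  job_start="$1"; appliance_restart="$2"; window_min="${3:-30}"
  cutoff=$((appliance_restart - window_min * 60))
  [ "$job_start" -ge "$cutoff" ] && [ "$job_start" -le "$appliance_restart" ]
}
```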
In addition, you can view the job details. Select a job to view the current job status, the job schedule, and
control the activity for the selected job.
Select a job and click View Last Run Session to view the duration and completion status of the most
recent run of a selected job.
Start, Pause, and Hold a Job Session
2. Select the job to run by clicking in the row containing the job name.
3. Click Start, or right-click the job name and select Start. A confirmation dialog box opens.
Note: If a job session has multiple run options, such as running a job session in Test, Recovery, or Clone
mode, you will be prompted to select a job session type.
4. Click Yes. The job session runs.
5. In the Activity pane, click the job name to view the job session details, including the job session's start
date and time, duration, description, status through a progress bar, and associated messages.
2. Select a running job to stop by clicking in the row containing the job name.
3. From the More Actions drop-down menu, select Pause, or right-click the job name and
select Pause. A confirmation dialog box opens.
2. Select the job to suspend by clicking in the row containing the job name.
3. From the More Actions drop-down menu, select Hold Schedule, or right-click the job name and select
Hold Schedule to hold the job. The job session status changes to Held, and all future scheduled
instances of the job will not run until released.
4. From the More Actions drop-down menu, select Release Schedule, or right-click the job name and
select Release Schedule to release the job.
2. Select the job session to cancel by clicking in the row containing the job name.
3. From the More Actions drop-down menu, select Cancel to cancel the job. The job session status
changes to Canceled.
NEXT STEPS:
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l Once the job session completes, review cataloged data through the Search and Report
tabs. See Browse Inventory on page 366 and Report Overview on page 369.
RELATED TOPICS:
l Create a Schedule on page 163
Monitor a Job Session
Failed - Indicates the job session did not successfully complete due to mixed task statuses.
Aborted - Indicates the job session did not successfully complete due to a reset, reboot, or
Skipped - Indicates that a volume was not cataloged. See the Task tab for more information.
2. Click the drop-down arrow in the header of the Status or Last Run Status columns.
3. Select Filters, then choose a filter criteria.
1. Click the Jobs tab and select a job by clicking in the row containing the job name.
2. The Activity / History pane displays. If the job is currently running, its status displays in the Activity tab.
To review the history of previous runs of the job, click the History tab, then click the session for details.
The following panes open:
Tasks
Displays a task-by-task view of the job session, including start and end times, duration, and status. It
also displays details of the underlying tasks that take place during the job session, including the task’s
type, duration, and status.
Details
Displays an overview of the job definition, including the job name, sources, options, and notification
settings.
Log
Displays the job log, which can be used for troubleshooting purposes.
RELATED TOPICS:
l Start, Pause, and Hold a Job Session on page 170
l Collect Logs For Troubleshooting on page 527
Job Definition Overview
Job Types
Inventory jobs
Inventory jobs interrogate storage systems to gather and record metadata about high-level objects and
files. You can select one or more providers of the same type in a single job definition for cataloging.
Backup and Restore jobs
IBM Spectrum Copy Data Management utilizes automated Copy Data Management workflows for
replicating and intelligently reusing snapshots, vaults, and mirrors. Backup and Restore jobs offer control
over testing and cloning use cases, instant recovery, and full disaster recovery. Through Backup and
Restore jobs, you can:
l Copy data from a variety of storage providers to multiple locations.
l Reuse and recover resources from snapshots, vaults, mirrors, and other copies and replicas.
l Support use cases for automated data protection, recovery, DevOps, Dev/Test, data and database
validation with data masking, through the use of automated Instant Disk Restore, Instant VM Restore,
volume, and file restore functionalities.
Report jobs
A Report job is a System job that summarizes information about cataloged providers and the data and
other resources that reside on them.
Script jobs
A Script job defines a set of commands to run on the IBM Spectrum Copy Data Management appliance.
Use the script job to add functionality to IBM Spectrum Copy Data Management. A script can consist of one
or many commands, such as a shell script.
Maintenance job
The Maintenance job removes resources and associated objects created by IBM Spectrum Copy Data
Management when a job in a pending state is deleted. The cleanup procedure reclaims space on your
storage devices, cleans up your IBM Spectrum Copy Data Management catalog, and removes related
snapshots.
RELATED TOPICS:
l Configure SLA Policies on page 140
l Create a Schedule on page 163
l Monitor a Job Session on page 172
Inventory Jobs
l Create an Inventory Job Definition - Database on page 178
l Create an Inventory Job Definition - File System on page 181
l Create an Inventory Job Definition - DellEMC Unity on page 183
l Create an Inventory Job Definition - IBM Spectrum Accelerate on page 185
l Create an Inventory Job Definition - IBM Spectrum Protect Snapshot on page 187
l Create an Inventory Job Definition - IBM Spectrum Virtualize on page 189
l Create an Inventory Job Definition - NetApp ONTAP Storage on page 191
l Create an Inventory Job Definition - NetApp ONTAP File on page 194
l Create an Inventory Job Definition - HPE Nimble Storage on page 197
l Create an Inventory Job Definition - Pure Storage FlashArray on page 200
l Create an Inventory Job Definition - VMware on page 202
Backup Jobs
l Create a Backup Job Definition - InterSystems Caché on page 205
l Create a Backup Job Definition - SAP HANA on page 208
l Create a Backup Job Definition - Oracle on page 212
l Create a Backup Job Definition - SQL on page 219
l Create a Backup Job Definition - File System on page 224
l Create a Backup Job Definition - DellEMC Unity on page 228
l Create a Backup Job Definition - IBM Spectrum Accelerate on page 231
l Create a Backup Job Definition - IBM Spectrum Virtualize on page 235
l Create a Backup Job Definition - NetApp ONTAP on page 239
l Create a Backup Job Definition - HPE Nimble Storage on page 242
l Create a Backup Job Definition - Pure Storage FlashArray on page 246
l Create a Backup Job Definition - VMware on page 250
Restore Jobs
l Create a Restore Job Definition - InterSystems Caché on page 260
l Create a Restore Job Definition - SAP HANA on page 265
l Create a Restore Job Definition - Oracle on page 272
l Create a Restore Job Definition - SQL on page 278
l Create a Restore Job Definition - File System on page 285
l Create a Restore Job Definition - DellEMC Unity on page 289
l Create a Restore Job Definition - IBM Spectrum Accelerate on page 295
l Create a Restore Job Definition - IBM Spectrum Virtualize on page 301
l Create a Restore Job Definition - NetApp ONTAP on page 307
l Create a Restore Job Definition - HPE Nimble Storage on page 314
l Create a Restore Job Definition - Pure Storage FlashArray on page 320
l Create a Restore Job Definition - VMware on page 325
System Jobs
l Maintenance Job on page 345
l Create a Report Job Definition on page 346
l Create a Script Job Definition on page 348
Inventory Jobs
The topics in the following section cover Inventory job definitions.
Create an Inventory Job Definition - Database
CONSIDERATIONS:
l If an Oracle Inventory job runs at the same time or short period after an Oracle Backup
job runs, copy errors may occur due to temporary mounts that are created during the
Backup job. As a best practice, schedule Oracle Inventory jobs so that they do not
overlap with Oracle Backup jobs.
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
6. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum concurrent tasks
Set the maximum number of concurrent cataloging tasks that can be performed on the provider.
Number of catalog instances to keep
After a certain number of job runs for a given job, older objects for that job are purged from the
Inventory. Enter the number of job runs for which high-level objects are to be retained.
7. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
8. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description displays.
Note: To create and select a new schedule, click the Configure tab, then select Schedules .
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
9. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create an Inventory Job Definition - File System
CONSIDERATIONS:
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
6. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum concurrent tasks
Set the maximum number of concurrent cataloging tasks that can be performed on the provider.
Number of catalog instances to keep
After a certain number of job runs for a given job, older objects for that job are purged from the
Inventory. Enter the number of job runs for which high-level objects are to be retained.
7. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for a recipient of status notifications, then click Add to add it to the list.
8. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
9. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs immediately.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create an Inventory Job Definition - DellEMC Unity
CONSIDERATIONS:
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
6. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum concurrent tasks
Set the maximum number of concurrent cataloging tasks that can be performed on the provider.
Number of catalog instances to keep
After a certain number of job runs for a given job, older DellEMC Unity objects for that job are purged
from the Inventory. Enter the number of job runs for which high-level DellEMC Unity objects are to be
retained.
7. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for a recipient of status notifications, then click Add to add it to the list.
8. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
9. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs immediately.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create an Inventory Job Definition - IBM Spectrum Accelerate
CONSIDERATIONS:
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
6. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum concurrent tasks
Set the maximum number of concurrent cataloging tasks that can be performed on the provider.
Number of catalog instances to keep
After a certain number of job runs for a given job, older IBM Spectrum Accelerate objects for that job
are purged from the Inventory. Enter the number of job runs for which high-level IBM Spectrum
Accelerate objects are to be retained.
7. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for a recipient of status notifications, then click Add to add it to the list.
8. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
9. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs immediately.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create an Inventory Job Definition - IBM Spectrum Protect Snapshot
CONSIDERATIONS:
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
6. To edit options before creating the job definition, click Advanced. Set the job definition options.
7. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for a recipient of status notifications, then click Add to add it to the list.
8. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
9. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs immediately.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create an Inventory Job Definition - IBM Spectrum Virtualize
CONSIDERATIONS:
l IBM providers utilize port 22 for communication with IBM Spectrum Copy Data
Management.
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
6. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum concurrent tasks
Set the maximum number of concurrent cataloging tasks that can be performed on the provider.
Number of catalog instances to keep
After a certain number of job runs for a given job, older IBM objects for that job are purged from the
Inventory. Enter the number of job runs for which high-level IBM objects are to be retained.
7. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for a recipient of status notifications, then click Add to add it to the list.
8. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
9. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs immediately.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create an Inventory Job Definition - NetApp ONTAP Storage
CONSIDERATIONS:
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
2. Click New, then select Inventory NetApp ONTAP Storage. The job editor opens.
6. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum concurrent tasks
Set the maximum number of concurrent cataloging tasks that can be performed on the provider.
Connection timeout (secs)
To run a catalog job, the application needs to connect with the resource. If there is no response within a
certain time limit, it times out and the job session fails. Enter the number of seconds to wait before
timing out.
Number of catalog instances to keep
After a certain number of job runs for a given job, older NetApp ONTAP objects for that job are purged
from the Inventory. Enter the number of job runs for which high-level NetApp ONTAP objects are to be
retained.
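The connection timeout described above amounts to a bounded connection attempt: if the provider does not answer within the limit, the session fails. A minimal Python sketch of that behavior (the host, port, and function name are hypothetical, not part of the product):

```python
import socket

def probe_provider(host, port, timeout_secs=30):
    """Return True if the provider answers within the limit, else False.
    A real catalog job session would fail with a timeout error instead."""
    try:
        with socket.create_connection((host, port), timeout=timeout_secs):
            return True
    except (socket.timeout, OSError):
        # No response within timeout_secs, or the connection was refused.
        return False
```

A larger timeout tolerates slow providers at the cost of a longer wait before a failed session is reported.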
7. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for a recipient of status notifications, then click Add to add it to the list.
8. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
9. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs immediately.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create an Inventory Job Definition - NetApp ONTAP File
CONSIDERATIONS:
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
Best Practice: Create a schedule before creating a job definition so that you can easily add the schedule to
the job definition.
2. Click New, then select Inventory NetApp ONTAP Files. The job editor opens.
4. From the list of available volumes, select one or more volumes to catalog. For Cluster-Mode providers, the
SVM name appears in parentheses after the volume name. To view the number of files on the selected
volume, hover your cursor over the volume name.
5. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
6. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum concurrent tasks
Set the maximum number of concurrent cataloging tasks that can be performed on the provider.
Skip root volume
Select this option to avoid cataloging the root volume of your NetApp ONTAP objects.
Skip unsupported volumes
Select this option to avoid cataloging unsupported volumes, such as volumes that are offline. Selecting
this option also avoids cataloging volumes with i2p disabled when Catalog all available snapshots is
set to Yes and Traversal Method is set to SnapDiff.
Connection timeout (secs)
To run a catalog job, the application needs to connect with the resource. If there is no response within a
certain time limit, it times out and the job session fails. Enter the number of seconds to wait before
timing out.
Number of catalog instances to keep
After a certain number of job runs for a given job, older low-level NetApp ONTAP objects for that job
are purged from the Inventory. Enter the number of job runs for which low-level NetApp ONTAP objects
are to be retained. Note that by default, the old data for the job is purged from the IBM Spectrum Copy
Data Management Inventory after the newer data is cataloged.
Condense catalog before run
Select this option to purge older low-level NetApp ONTAP objects from the IBM Spectrum Copy Data
Management Inventory before newer data is cataloged through a NetApp ONTAP File Inventory job.
Condense catalog after failed run
Select this option to purge NetApp ONTAP objects from the IBM Spectrum Copy Data Management
Inventory for the unsuccessful run of the NetApp ONTAP File Inventory job.
Traversal Method
This option indicates the methodology to employ when cataloging snapshots. IBM Spectrum Copy
Data Management honors your preference if it is supported for the particular system configuration. If
the selected preference is not supported for your system configuration, the operation fails.
SnapDiff. IBM Spectrum Copy Data Management performs cataloging based on snapshot differences.
If SnapDiff is selected, the default number of files requested in each query is 256 and the default
maximum number of volumes that are simultaneously cataloged is 8.
Filewalk. IBM Spectrum Copy Data Management retrieves owner information for files and folders for
volumes that have CIFS shares. This option is used in conjunction with the IBM Spectrum Copy Data
Management Filewalker tool. Once cataloging completes, searchable owner information is added to
the Inventory. See the Filewalker documentation for more information.
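Conceptually, SnapDiff-style cataloging applies only the entries that changed between two snapshots, while a filewalk re-enumerates everything. The sketch below illustrates that distinction with hypothetical data structures (each snapshot listing is a dict of file path to metadata); it is not the actual cataloging engine.

```python
def snapdiff_update(catalog, base_snapshot, new_snapshot):
    """Apply only the differences between two snapshot listings."""
    changed = {p: m for p, m in new_snapshot.items()
               if base_snapshot.get(p) != m}          # new or modified files
    deleted = set(base_snapshot) - set(new_snapshot)  # files that were removed
    catalog.update(changed)
    for path in deleted:
        catalog.pop(path, None)
    return catalog

def filewalk_update(catalog, snapshot):
    """Full walk: replace the catalog with a complete enumeration."""
    catalog.clear()
    catalog.update(snapshot)
    return catalog
```

This is why the difference-based method scales with the amount of change between snapshots rather than with the total number of files on the volume.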
Base snapshot for catalog
For file cataloging, a base snapshot on the storage system is required.
Create a new snapshot before cataloging. A new snapshot is created when the job session begins
and is deleted after cataloging. This is the preferred method if sufficient resources are available.
Use latest snapshot. A new snapshot is not created; instead the latest healthy snapshot available for
that volume is used to assist with file level cataloging. You can exclude certain snapshots from the set
of snapshots IBM Spectrum Copy Data Management selects from by entering character patterns for
those snapshots. Use a comma to separate each pattern.
Note: To catalog a volume that does not support snapshot creation, such as a SnapMirror volume,
select Use latest snapshot.
Catalog all available snapshots
Yes. Catalogs snapshots on the volume in addition to the base snapshot. IBM Spectrum Copy Data
Management examines additional snapshots ensuring the Inventory has the latest data. You can
exclude certain snapshots from the set of snapshots IBM Spectrum Copy Data Management catalogs
by entering character patterns for those snapshots. Use a comma to separate each pattern.
No. Catalogs only the base snapshot.
Note: By cataloging all available snapshots, you can view multiple versions of your files through a file-
level search. Additional versions are available on the file's properties pane.
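The comma-separated character patterns used above to exclude snapshots (both when choosing a base snapshot and when cataloging all available snapshots) can be pictured as shell-style wildcards. The sketch below assumes glob semantics, which is an assumption for illustration only; the product's exact matching rules are not specified in this section.

```python
from fnmatch import fnmatch

def eligible_snapshots(snapshot_names, exclude_patterns):
    """Drop any snapshot whose name matches one of the
    comma-separated exclusion patterns."""
    patterns = [p.strip() for p in exclude_patterns.split(",") if p.strip()]
    return [name for name in snapshot_names
            if not any(fnmatch(name, pat) for pat in patterns)]
```

For example, the pattern string "snapmirror*, weekly*" would leave only snapshots whose names match neither pattern.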
7. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for a recipient of status notifications, then click Add to add it to the list.
8. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
9. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs immediately.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Create an Inventory Job Definition - NetApp ONTAP Storage on page 191
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
l Search and Filter Guidelines on page 557
User's Guide Create an Inventory Job Definition - HPE Nimble Storage
l At least one HPE Nimble Storage provider must be associated with an HPE Nimble
Storage Inventory job definition. Before defining an Inventory job, add HPE Nimble
Storage providers. See Register a Provider on page 79.
l For email notifications, at least one SMTP server must be configured. Before defining a
job, add SMTP resources. See Register a Provider on page 79.
CONSIDERATIONS:
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
6. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum concurrent tasks
Set the maximum number of concurrent cataloging tasks that can be performed on the provider.
Number of catalog instances to keep
After a certain number of job runs for a given job, older HPE Nimble Storage objects for that job are
purged from the Inventory. Enter the number of job runs for which high-level HPE Nimble Storage
objects are to be retained.
7. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for a recipient of status notifications, then click Add to add it to the list.
8. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
9. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs immediately.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create an Inventory Job Definition - Pure Storage FlashArray
CONSIDERATIONS:
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
6. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum concurrent tasks
Set the maximum number of concurrent cataloging tasks that can be performed on the provider.
Number of catalog instances to keep
After a certain number of job runs for a given job, older Pure Storage objects for that job are purged
from the Inventory. Enter the number of job runs for which high-level Pure Storage objects are to be
retained.
7. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for a recipient of status notifications, then click Add to add it to the list.
8. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
9. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs immediately.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create an Inventory Job Definition - VMware
CONSIDERATIONS:
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
6. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum concurrent tasks
Set the maximum number of concurrent cataloging tasks that can be performed on the provider.
Connection timeout (secs)
To run a catalog job, the application needs to connect with the resource. If there is no response within a
certain time limit, it times out and the job session fails. Enter the number of seconds to wait before
timing out.
7. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for a recipient of status notifications, then click Add to add it to the list.
8. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
9. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs immediately.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Backup Jobs
Backup Jobs
The topics in the following section cover Backup job definitions as well as creating VMware Backup job
proxies.
User's Guide Create a Backup Job Definition - InterSystems Caché
CONSIDERATIONS:
l InterSystems Caché backup jobs occur at the instance level.
6. Click the job definition's associated Schedule Time field and select Enable Schedule to set a time to run
the SLA Policy. If a schedule is not enabled, run the job on demand through the Jobs tab. Repeat as
necessary to add additional SLA Policies to the job definition.
If configuring more than one SLA Policy in a job definition, select the Same as workflow option to trigger
multiple SLA Policies to run concurrently.
Note: Only SLA Policies with the same RPO frequencies can be linked through the Same as workflow
option. Define an RPO frequency when creating an SLA Policy.
7. To create the job definition using default options, click Create Job. The job runs as defined by your
triggers, or can be run manually from the Jobs tab.
8. To edit options before creating the job definition, click Advanced. Set the job definition options.
Skip IA Mount points and/or databases
Enable to skip Instant Disk Restore objects. By default, this option is enabled.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
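As a concrete illustration, a minimal Linux job-level post-script might look like the following sketch. The log file name is hypothetical; a real script should finish with exit code 0 on success (see Return Code Reference on page 562).

```shell
# Minimal hypothetical job-level post-script for a Linux host.
# Parameters configured in the job definition arrive as positional
# arguments ("$@"); this sketch simply records each invocation.
log=./cdm_postscript.log                    # hypothetical log location
printf 'post-script ran with args: %s\n' "$*" >> "$log"
# A real script should end with exit 0 on success; nonzero return
# codes are treated as script failures (see Return Code Reference).
```
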
9. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email addresses of the recipients of status email notifications. Click Add to add each address to the list.
10. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your triggers, or can be run manually from the Jobs tab.
NEXT STEPS:
• If you do not want to wait until the next scheduled job run, run the job session on demand. See Start, Pause, and Hold a Job Session on page 170.
• Track the progress of the job session on the Jobs tab. See Monitor a Job Session on page 172.
• If notification options are enabled, an email message with information about the status of each task is sent when the job completes.
• Create an InterSystems Caché Restore job definition. See Create a Restore Job Definition - InterSystems Caché on page 260.
RELATED TOPICS:
• Edit a Job Definition on page 351
• Delete a Job Definition on page 352
• Create a Schedule on page 163
• Create a Restore Job Definition - InterSystems Caché on page 260
Create a Backup Job Definition - SAP HANA
CONSIDERATIONS:
• For email notifications, at least one SMTP server must be configured. Before defining a job, add SMTP resources. See Register a Provider on page 79.
• One or more schedules might also be associated with a job. Job sessions run based on the triggers defined in the schedule. See Create a Schedule on page 163.
6. Click the job definition's associated Schedule Time field and select Enable Schedule to set a time to run
the SLA Policy. If a schedule is not enabled, run the job on demand through the Jobs tab. Repeat as
necessary to add additional SLA Policies to the job definition.
If configuring more than one SLA Policy in a job definition, select the Same as workflow option to trigger
multiple SLA Policies to run concurrently.
Note: Only SLA Policies with the same RPO frequencies can be linked through the Same as workflow
option. Define an RPO frequency when creating an SLA Policy.
7. To create the job definition using default options, click Create Job. The job runs as defined by your
triggers, or can be run manually from the Jobs tab.
8. To edit options before creating the job definition, click Advanced. Set the job definition options.
Skip IA Mount points and/or databases
Enable to skip Instant Disk Restore objects. By default, this option is enabled.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
9. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email addresses of the recipients of status email notifications. Click Add to add each address to the list.
10. To edit Log Backup options before creating the job definition, click Log Backup. If Backup Logs is
selected, IBM Spectrum Copy Data Management backs up database logs then protects the underlying
disks. Select resources in the Select resource(s) to add archive log destination field. Database logs
are backed up to the directory entered in the Use Universal Destination Mount Point field, or in the
Mount Point field after resources are selected. The destination must already exist, must reside on storage
from a supported vendor, and the SAP HANA user needs to have full read and write access. For more
information, see the Prerequisites on page 56.
If multiple databases are selected for backup, then each of the servers hosting the databases must have
their Destination Mount Points set individually. For example, if two databases, one from Server A and one
from Server B, are added to the same job definition, and a single mount point named /logbackup is
defined in the job definition, then you must create separate disks for each server and mount them both to
/logbackup on the individual servers. When the mount point is changed, you must manually clean up the
previous log backup directory path.
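For example, preparing the per-server destination might look like the following sketch; the device name and the <sid>adm user are hypothetical and vary by environment.

```shell
# Sketch: prepare the log backup destination on one SAP HANA server.
# Each server needs its own dedicated disk, mounted at the same path.
dest=./logbackup              # on the server this would be /logbackup
mkdir -p "$dest"
# mount /dev/sdb "$dest"      # hypothetical dedicated disk for this server
# chown hdbadm:sapsys "$dest" # the SAP HANA user needs full read/write
```
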
To disable a log backup schedule on the SAP HANA server, edit the associated SAP HANA Backup job
definition and deselect the checkbox next to the database on which you wish to disable the log backup
schedule in the Select resource(s) for log backup destination field, then save and re-run the job.
When the mount point is disabled, you must manually clean up the log backup directory path.
Note: The job definition must be saved and re-run for mount point changes or disablement to take effect.
The default setting for pruning SAP HANA log backups is 7 days. This value may be adjusted in the
property file located in /opt/virgo/repository/ecx-usr/com.syncsort.dp.xsb.serviceprovider.properties.
Modify the application.logpurge.days parameter to the desired value. Finally, restart the virgo service by
issuing the following command:
systemctl restart virgo.service
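For example, the following sketch lowers the pruning window to 14 days. For safety it edits a local stand-in file; on the appliance, target the property file at the path above and then restart virgo.

```shell
# Sketch: set SAP HANA log backup pruning to 14 days. This edits a local
# stand-in file; on the appliance the real target is
# /opt/virgo/repository/ecx-usr/com.syncsort.dp.xsb.serviceprovider.properties
props=./serviceprovider.properties
printf 'application.logpurge.days=7\n' > "$props"   # simulate the default
sed -i 's/^application\.logpurge\.days=.*/application.logpurge.days=14/' "$props"
grep '^application.logpurge.days=' "$props"
# then, on the appliance: systemctl restart virgo.service
```
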
11. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your triggers, or can be run manually from the Jobs tab.
NEXT STEPS:
• If you do not want to wait until the next scheduled job run, run the job session on demand. See Start, Pause, and Hold a Job Session on page 170.
• Track the progress of the job session on the Jobs tab. See Monitor a Job Session on page 172.
• If notification options are enabled, an email message with information about the status of each task is sent when the job completes.
• Create a SAP HANA Restore job definition. See Create a Restore Job Definition - SAP HANA on page 265.
RELATED TOPICS:
• Edit a Job Definition on page 351
• Delete a Job Definition on page 352
• Create a Schedule on page 163
• Create a Restore Job Definition - SAP HANA on page 265
Create a Backup Job Definition - Oracle
• If Oracle data resides on LVM volumes, you must stop and disable the lvm2-lvmetad service before running Backup or Restore jobs. Leaving the service enabled can prevent volume groups from being resignatured correctly during restore and can lead to
data corruption if the original volume group is also present on the same system. To
disable the lvm2-lvmetad service, run the following commands:
systemctl stop lvm2-lvmetad
systemctl disable lvm2-lvmetad
Next, disable lvmetad in the LVM config file. Edit the file /etc/lvm/lvm.conf and set:
use_lvmetad = 0
• Note that Oracle databases must be registered in the recovery catalog before running an Oracle Backup job utilizing the Record copies in RMAN recovery catalog feature.
• In your Linux environment, if Oracle data or logs reside on LVM volumes, ensure the LVM version is 2.02.118 or later.
• For Oracle 12c databases, backups are created without placing the database in hot backup mode through Oracle Storage Snapshot Optimization. All associated snapshot functionality is supported. This feature requires the Advanced Compression feature of Oracle to be licensed. If this feature is not licensed in your environment, perform the following procedure to disable Snapshot Optimization and force the use of hot backup mode:
Create the file /etc/guestapps.conf on the Oracle server and add the following to
it:
[DEFAULT]
skipSnapshotOptimization = true
If the file already exists, edit it and add the parameter under the existing [DEFAULT]
section. This is a per-host setting. The parameter must be set in this file on each Oracle
server where you want to force the use of hot backup mode.
• NOARCHIVELOG databases are not eligible for point-in-time recovery. NOARCHIVELOG databases can only be recovered to specific or latest versions. If upgrading from previous versions of IBM Spectrum Copy Data Management, the associated Oracle Inventory job must be re-run after upgrading to discover NOARCHIVELOG databases.
• When the option to create an additional log destination is selected, IBM Spectrum Copy Data Management automatically purges the logs under this new location after each successful backup. For IBM SVC, IBM Spectrum Copy Data Management purges logs after a FlashCopy operation but not after a Global Mirror operation. If both FlashCopy and Global Mirror are enabled for a database (whether in separate job definitions or the same), IBM Spectrum Copy Data Management purges the logs after the FlashCopy operation only. For databases that are protected only by a Global Mirror workflow, IBM
Spectrum Copy Data Management does not purge the logs at all so they must be
deleted using a retention policy externally managed by a database administrator, for
example, using RMAN. Note that in any case, IBM Spectrum Copy Data Management
does not purge logs from other log destinations so they must also be externally
managed.
• If an Oracle Inventory job runs at the same time as, or shortly after, an Oracle Backup job, copy errors may occur due to temporary mounts that are created during the Backup job. As a best practice, schedule Oracle Inventory jobs so that they do not overlap with Oracle Backup jobs.
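The two per-host configuration edits described above (turning off lvmetad after disabling the service, and disabling Snapshot Optimization) can be applied non-interactively. The following sketch operates on local stand-in files for safety; on an Oracle server the real targets are /etc/lvm/lvm.conf and /etc/guestapps.conf, edited as root.

```shell
# Sketches for the two per-host changes described above. This demonstrates
# against local copies; on an Oracle server the targets are
# /etc/lvm/lvm.conf and /etc/guestapps.conf (back up files first).

# 1. Turn off lvmetad in the LVM configuration:
printf '    use_lvmetad = 1\n' > ./lvm.conf    # stand-in for /etc/lvm/lvm.conf
sed -i 's/use_lvmetad *= *1/use_lvmetad = 0/' ./lvm.conf

# 2. Disable Snapshot Optimization to force hot backup mode:
cat > ./guestapps.conf <<'EOF'
[DEFAULT]
skipSnapshotOptimization = true
EOF
```
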
CONSIDERATIONS:
• Note that point-in-time recovery is not supported when one or more datafiles are added to the database in the period between the chosen point-in-time and the time that the preceding Backup job ran.
• For email notifications, at least one SMTP server must be configured. Before defining a job, add SMTP resources. See Register a Provider on page 79.
• One or more schedules might also be associated with a job. Job sessions run based on the triggers defined in the schedule. See Create a Schedule on page 163.
7. To create the job definition using default options, click Create Job. The job runs as defined by your
triggers, or can be run manually from the Jobs tab.
8. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum Concurrent Tasks
Set the maximum number of concurrent transfers between the source and the destination.
Skip IA Mount points and/or databases
Enable to skip Instant Disk Restore objects. By default, this option is enabled.
Record copies in RMAN local repository
Enable to create a local backup of the Recovery Manager (RMAN) catalog during the running of Oracle
Backup job. RMAN catalogs can be used for backup, recovery, and maintenance of Oracle databases
outside of IBM Spectrum Copy Data Management.
Record copies in RMAN recovery catalog
If Record copies in RMAN local repository is selected, select Record copies in RMAN recovery catalog
to also create a remote RMAN catalog. Select an eligible Remote Catalog Database from the list of
available sites. Select a Recovery Catalog Owner from the list of available Identities, or create a new
Recovery Catalog Owner, then click OK.
Note that Oracle databases must be registered in the recovery catalog before running an Oracle
Backup job utilizing the Record copies in RMAN recovery catalog feature.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Enable Job-level Snapshot Scripts
Snapshot prescripts and postscripts are scripts that can be run before or after a storage-based
snapshot task runs. The snapshot prescript runs before all associated snapshots are run, while the
snapshot postscript runs after all associated snapshots complete. A script can consist of one or many
commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for
Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
_SNAPSHOTS_ is an optional parameter for snapshot postscripts that displays a comma separated
value string containing all of the storage-based snapshots created by the job. The format of each value
is as follows: <registered provider name>:<volume name>:<snapshot name>.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
9. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email addresses of the recipients of status email notifications. Click Add to add each address to the list.
10. To edit Log Backup options before creating the job definition, click Log Backup. If Create additional
archive log destination is selected, IBM Spectrum Copy Data Management backs up database logs
then protects the underlying disks. Select resources in the Select resource(s) to add archive log
destination field. Database logs are backed up to the directory entered in the Universal destination
directory field, or in the Directory field after resources are selected. The destination must already exist
and must reside on storage from a supported vendor.
The default option is Use existing archive log destination(s). Note that IBM Spectrum Copy Data
Management automatically discovers the location where Oracle writes archived logs. If this location
resides on storage from a supported vendor, IBM Spectrum Copy Data Management can protect it. If the
existing location is not on supported storage, or if you wish to create an additional backup of database
logs, enable the Create additional archive log destination option, then specify a path that resides on
supported storage. When enabled, IBM Spectrum Copy Data Management configures the database to
start writing archived logs to this new location in addition to any existing locations where the database is
already writing logs.
Note: NOARCHIVELOG databases are not eligible for log backup as they do not have archive logging
enabled.
If multiple databases are selected for backup, then each of the servers hosting the databases must have
their destination directories set individually. For example, if two databases from Server A and Server
B are added to the same job definition, and a single destination directory named /logbackup is defined in
the job definition, then you must create separate disks for both servers and mount them both to
/logbackup on the individual servers.
If the No archive logs / Use existing archive log destination(s) option is selected, IBM Spectrum
Copy Data Management does not automatically purge any archived logs. The retention of archived logs
must be managed externally, for example using RMAN. To support point-in-time recovery, ensure that the
retention period is large enough to retain all archived logs between successive runs of the Oracle
Backup job.
If the Create additional archive log destination option is selected, IBM Spectrum Copy Data
Management automatically manages the retention of only those archived logs that are under the new
destination specified in the job definition. After a successful backup, logs older than that backup are
automatically deleted from the IBM Spectrum Copy Data Management-managed destination. Even in this
case, IBM Spectrum Copy Data Management does not control the deletion of archived logs in other pre-
existing destinations so they must still be managed externally as described above.
If the Create additional archive log destination option is selected, IBM Spectrum Copy Data
Management makes a one-time configuration change to the database to add the specified location as a
parameter log_archive_dest_<num> in the database's archive log destinations. If you delete the IBM
Spectrum Copy Data Management job definition, the database parameter is not affected, so if you want to
stop using the log destination, you may need to manually disable this parameter.
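For example, assuming the destination was added as slot 3 (a hypothetical number; check your database's log_archive_dest_n parameters to find the one IBM Spectrum Copy Data Management configured), the statement to defer it could be built and then run as SYSDBA.

```shell
# Build the ALTER SYSTEM statement to defer a no-longer-needed archive log
# destination. The slot number 3 is hypothetical; match it to the
# log_archive_dest_<num> parameter configured in your database.
slot=3
stmt="ALTER SYSTEM SET log_archive_dest_state_${slot} = 'DEFER' SCOPE=BOTH;"
echo "$stmt"
# then run it against the database, for example:
#   echo "$stmt" | sqlplus -s / as sysdba
```
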
11. To edit Data Masking options before creating the job definition, click Data Masking. If enabled, IBM
Spectrum Copy Data Management mounts snapshot copies of the protected database onto a user-
specified staging server or source server. Select resources to be masked from the list of available
databases, select a backup to mask, and an Oracle home where masking takes place. Set a trigger, then
in the Enter path to masking command on Oracle Server field, enter the full path to an external script
or tool to perform the data masking. For example, /home/oracle/tools/maskDatabase.sh.
Whether the masking takes place on the source server or a staging server, the masking process spins up
the temporary database with a unique random name (for example, mask1234) that does not conflict with
any other instance on that system. IBM Spectrum Copy Data Management then invokes the masking
script with three arguments: the Oracle Home path, the new instance name, and the original instance
name. For example:
/path/to/masking/script /u01/app/home1 mask1234 proddb
IBM Spectrum Copy Data Management spins up a clone of the database, then executes the user-
specified command to perform masking. When the command completes successfully, IBM Spectrum
Copy Data Management cleans up the clone database, and catalogs and saves the masked copies
which are then available for selection in the DevOps workflow of IBM Spectrum Copy Data Management
Restore jobs.
Note: User-defined masking scripts that were in existence before IBM Spectrum Copy Data
Management 2.2.7.4 must be updated to ensure they read the correct arguments and connect to the
appropriate instance to perform masking.
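A masking command skeleton that reads the three documented arguments might look like the following sketch; the function name and the echoed line are illustrative only.

```shell
# Hypothetical skeleton for a user-specified masking command. IBM Spectrum
# Copy Data Management invokes it with three arguments: the Oracle Home
# path, the temporary clone instance name, and the original instance name.
mask_database() {
  oracle_home=$1
  clone_sid=$2
  source_sid=$3
  export ORACLE_HOME="$oracle_home" ORACLE_SID="$clone_sid"
  # ... connect to the clone instance here and run the masking SQL ...
  echo "masking clone $clone_sid of $source_sid under $oracle_home"
}
mask_database /u01/app/home1 mask1234 proddb
```
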
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
• If you do not want to wait until the next scheduled job run, run the job session on demand. See Start, Pause, and Hold a Job Session on page 170.
• Track the progress of the job session on the Jobs tab. See Monitor a Job Session on page 172.
• If notification options are enabled, an email message with information about the status of each task is sent when the job completes.
• Create an Oracle Restore job definition. See Create a Restore Job Definition - Oracle on page 272.
RELATED TOPICS:
• Edit a Job Definition on page 351
• Delete a Job Definition on page 352
• Create a Schedule on page 163
• Create a Restore Job Definition - Oracle on page 272
Create a Backup Job Definition - SQL
• IBM Spectrum Copy Data Management supports one SQL Backup job per Microsoft SQL database; therefore, you must avoid protecting a SQL database through multiple Backup jobs.
• Note that IBM Spectrum Copy Data Management does not support log backup for databases that use the Simple recovery model.
• AlwaysOn replicas of SQL cluster instances are not supported. Replicas are limited to standalone SQL servers and instances.
CONSIDERATIONS:
• Note that point-in-time recovery is not supported when one or more datafiles are added to the database in the period between the chosen point-in-time and the time that the preceding Backup job ran.
• For email notifications, at least one SMTP server must be configured. Before defining a job, add SMTP resources. See Register a Provider on page 79.
• One or more schedules might also be associated with a job. Job sessions run based on the triggers defined in the schedule. See Create a Schedule on page 163.
8. To create the job definition using default options, click Create Job. The job runs as defined by your
triggers, or can be run manually from the Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum Concurrent Tasks
Set the maximum number of concurrent transfers between the source and the destination.
Skip IA Mount points and/or databases
Enable to skip Instant Disk Restore objects. By default, this option is enabled.
Maximum Concurrent Snapshots on ESX
Set the maximum number of concurrent snapshots on the vCenter.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Enable Job-level Snapshot Scripts
Snapshot prescripts and postscripts are scripts that can be run before or after a storage-based
snapshot task runs. The snapshot prescript runs before all associated snapshots are run, while the
snapshot postscript runs after all associated snapshots complete. A script can consist of one or many
commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for
Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
_SNAPSHOTS_ is an optional parameter for snapshot postscripts that displays a comma separated
value string containing all of the storage-based snapshots created by the job. The format of each value
is as follows: <registered provider name>:<volume name>:<snapshot name>.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email addresses of the recipients of status email notifications. Click Add to add each address to the list.
11. To edit Log Backup options before creating the job definition, click Log Backup. If Backup Logs is
selected, IBM Spectrum Copy Data Management backs up database logs then protects the underlying
disks. Select resources in the Select resource(s) to add archive log destination field. Database logs
are backed up to the directory entered in the Use Universal Destination Mount Point field, or in the
Mount Point field after resources are selected. The destination must already exist and must reside on
storage from a supported vendor.
Note: Always On job definitions must specify the log destination as a path in the following
format: \\server\share\optional_subfolder. The server can be either an IP address or hostname that is
resolvable from the IBM Spectrum Copy Data Management appliance.
If multiple databases are selected for backup, then each of the servers hosting the databases must have
their Destination Mount Points set individually. For example, if two databases, one from Server A and one
from Server B, are added to the same job definition, and a single mount point named /logbackup is
defined in the job definition, then you must create separate disks for each server and mount them both to
/logbackup on the individual servers.
IBM Spectrum Copy Data Management automatically truncates database logs after backing them up. If
database logs are not backed up with IBM Spectrum Copy Data Management, logs are not truncated by
IBM Spectrum Copy Data Management and must be managed separately.
To disable a log backup schedule on the SQL server, edit the associated SQL Backup job definition and
deselect the checkbox next to the database on which you wish to disable the log backup schedule in the
Select resource(s) for log backup destination field, then save and re-run the job. Note that the job
definition must be saved and re-run for the disablement to take effect.
When a SQL Backup job completes with log backups enabled, all transaction logs up to the point of job
completion are purged from the SQL server. Note that log purging only occurs if the SQL Backup job
completes successfully. If log backups are disabled during a re-run of the job, log purging does not occur.
If a source database is overwritten, all old transaction logs up to that point are placed in a “condense”
directory once the restoration of the original database completes. When the next run of the SQL Backup
job completes, the contents of the condense folder are removed.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
l Create a SQL Application Restore job definition. See Create a Restore Job Definition -
SQL on page 278.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
l Create a Restore Job Definition - SQL on page 278
User's Guide Create a Backup Job Definition - File System
CONSIDERATIONS:
l An AlwaysOn replica of a SQL cluster instance is not supported. Replicas are
limited to standalone SQL servers and instances.
l Note that point-in-time recovery is not supported when one or more datafiles are added
to the database in the period between the chosen point-in-time and the time that the
preceding Backup job ran.
l For email notifications, at least one SMTP server must be configured. Before defining a
job, add SMTP resources. See Register a Provider on page 79.
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
If configuring more than one SLA Policy in a job definition, select the Same as workflow option to trigger
multiple SLA Policies to run concurrently.
Note: Only SLA Policies with the same RPO frequencies can be linked through the Same as workflow
option. Define an RPO frequency when creating an SLA Policy.
7. To create the job definition using default options, click Create Job. The job runs as defined by your
triggers, or can be run manually from the Jobs tab.
8. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum Concurrent Tasks
Set the maximum number of concurrent transfers between the source and the destination.
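Conceptually, this option behaves like a semaphore that caps how many transfer tasks run at the same time. The sketch below illustrates the idea only; the value, names, and data are hypothetical and this is not product code:

```python
import threading
import time

MAX_CONCURRENT_TASKS = 2                      # hypothetical value of the option
slots = threading.BoundedSemaphore(MAX_CONCURRENT_TASKS)
done = []

def transfer(volume: str) -> None:
    with slots:                               # blocks while the limit is reached
        time.sleep(0.01)                      # stand-in for the actual copy work
        done.append(volume)

threads = [threading.Thread(target=transfer, args=(f"vol{i}",)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(done)} transfers completed, at most {MAX_CONCURRENT_TASKS} at a time")
```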
Skip IA Mount points and/or databases
Enable to skip Instant Disk Restore objects. By default, this option is enabled.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add; additional parameters can be added one at a time in
the same way. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Enable Job-level Snapshot Scripts
Snapshot prescripts and postscripts are scripts that can be run before or after a storage-based
snapshot task runs. The snapshot prescript runs before all associated snapshots are run, while the
snapshot postscript runs after all associated snapshots complete. A script can consist of one or many
commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for
Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add; additional parameters can be added one at a time in
the same way. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
_SNAPSHOTS_ is an optional parameter for snapshot postscripts that resolves to a comma-separated
string of values containing all of the storage-based snapshots created by the job. The format of each value
is as follows: <registered provider name>:<volume name>:<snapshot name>.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Note: If adding a script to a Windows-based File System job definition, the user running the script must
have the "Log on as a service" right enabled, which is required for running prescripts and postscripts. For
more information about the "Log on as a service" right, see https://technet.microsoft.com/en-
us/library/cc794944.aspx.
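For example, a snapshot postscript could receive the _SNAPSHOTS_ value described above as its first command-line argument and split it into triplets. Only the <registered provider name>:<volume name>:<snapshot name> value format comes from this guide; the invocation convention and all names below are assumptions:

```python
import sys

def parse_snapshots(value: str):
    """Split a _SNAPSHOTS_ string into (provider, volume, snapshot) tuples.

    The comma-separated <provider>:<volume>:<snapshot> format comes from the
    guide; everything else in this sketch is illustrative.
    """
    triplets = []
    for entry in value.split(","):
        provider, volume, snapshot = entry.split(":", 2)
        triplets.append((provider, volume, snapshot))
    return triplets

if __name__ == "__main__":
    # Hypothetical call: postscript.py "svc1:vol_db:snap_01,svc1:vol_log:snap_01"
    for provider, volume, snapshot in parse_snapshots(sys.argv[1]):
        print(f"{provider}: snapshot {snapshot} created on volume {volume}")
```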
9. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the
list.
10. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
l Create an OS Volume Restore job definition. See Create a Restore Job Definition - File
System on page 285.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
l Create a Restore Job Definition - File System on page 285
User's Guide Create a Backup Job Definition - DellEMC Unity
CONSIDERATIONS:
l Note that before running replication jobs, replication connections must be established
between VNX arrays. Create replication connections through the DellEMC Unisphere
wizard found under Hosts > Replication Connections.
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
8. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum Concurrent Tasks
Set the maximum number of concurrent transfers between the source and the destination.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add; additional parameters can be added one at a time in
the same way. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
9. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
10. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your triggers, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
l Use the Inventory Browse feature to review the recovery point. See Browse Inventory
on page 366.
l Create a DellEMC Unity Restore job definition. See Create a Restore Job Definition -
DellEMC Unity on page 289.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
l Create a Restore Job Definition - DellEMC Unity on page 289
User's Guide Create a Backup Job Definition - IBM Spectrum Accelerate
CONSIDERATIONS:
l IBM providers utilize port 22 for communication with IBM Spectrum Copy Data
Management.
l Note that snapshot postscript functionality applies only to FlashCopy subpolicies.
l In IBM storage environments, port grouping and IP partnerships are required to enable
remote copy connections. See IBM's SAN Volume Controller and Storwize Family
Native IP Replication Guide.
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
Note: Only SLA Policies with the same RPO frequencies can be linked through the Same as workflow
option. Define an RPO frequency when creating an SLA Policy.
7. To create the job definition using default options, click Create Job. The job runs as defined by your
triggers, or can be run manually from the Jobs tab.
8. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum Concurrent Tasks
Set the maximum number of concurrent transfers between the source and the destination.
Create Consistency Group
If multiple volumes are selected in the Source tab (for example, volumes that contain data tied to an
application) enable this option to add the volumes to a FlashCopy or Global Mirror Consistency Group
to perform Copy Data functions on the entire group. If the associated SLA Policy contains both
FlashCopy and Global Mirror subpolicies, a separate Consistency Group will be created for each copy
type. Note that if more than one IBM provider is selected in the job definition, a Consistency Group will
be created for each provider. Consistency Groups are named based on the prefix provided during job
creation plus the job name.
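The per-provider grouping and prefix-plus-job-name naming convention described above can be sketched as follows. The convention itself comes from this guide; appending the provider name to keep per-provider group names unique is an assumption in this sketch, as are all names and data:

```python
from collections import defaultdict

def consistency_groups(prefix: str, job_name: str, volumes):
    """Group (provider, volume) pairs into one Consistency Group per provider.

    Group names use the prefix-plus-job-name convention from the guide; the
    trailing provider name is a hypothetical disambiguator for this sketch.
    """
    groups = defaultdict(list)
    for provider, volume in volumes:
        groups[f"{prefix}{job_name}_{provider}"].append(volume)
    return dict(groups)

groups = consistency_groups(
    "cg_", "nightly",
    [("svc1", "vol_db"), ("svc1", "vol_log"), ("svc2", "vol_app")],
)
print(groups)  # one group per provider: cg_nightly_svc1 and cg_nightly_svc2
```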
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add; additional parameters can be added one at a time in
the same way. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Enable Job-level Snapshot Scripts
Snapshot prescripts and postscripts are scripts that can be run before or after a storage-based
snapshot task runs. The snapshot prescript runs before all associated snapshots are run, while the
snapshot postscript runs after all associated snapshots complete. A script can consist of one or many
commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for
Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add; additional parameters can be added one at a time in
the same way. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
_SNAPSHOTS_ is an optional parameter for snapshot postscripts that resolves to a comma-separated
string of values containing all of the storage-based snapshots created by the job. The format of each value
is as follows: <registered provider name>:<volume name>:<snapshot name>.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
9. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
10. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your triggers, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
l Use the Inventory Browse feature to review the recovery point. See Browse Inventory
on page 366.
l Create an IBM Spectrum Accelerate Restore job definition. See Create a Restore Job
Definition - IBM Spectrum Accelerate on page 295.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
l Create a Restore Job Definition - IBM Spectrum Accelerate on page 295
User's Guide Create a Backup Job Definition - IBM Spectrum Virtualize
CONSIDERATIONS:
l IBM providers utilize port 22 for communication with IBM Spectrum Copy Data
Management.
l Note that snapshot postscript functionality applies only to FlashCopy subpolicies.
l In IBM storage environments, port grouping and IP partnerships are required to enable
remote copy connections. See IBM's SAN Volume Controller and Storwize Family
Native IP Replication Guide.
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
If configuring more than one SLA Policy in a job definition, select the Same as workflow option to trigger
multiple SLA Policies to run concurrently.
Note: Only SLA Policies with the same RPO frequencies can be linked through the Same as workflow
option. Define an RPO frequency when creating an SLA Policy.
7. To create the job definition using default options, click Create Job. The job runs as defined by your
triggers, or can be run manually from the Jobs tab.
8. To edit options before creating the job definition, click Advanced. Set the job definition options.
Skip the Flash Copy Target Volumes
Select this option to ensure FlashCopy target volumes are excluded from jobs associated with the
SLA Policy.
Maximum Concurrent Tasks
Set the maximum number of concurrent transfers between the source and the destination.
Create Consistency Group
If multiple volumes are selected in the Source tab (for example, volumes that contain data tied to an
application) enable this option to add the volumes to a FlashCopy or Global Mirror Consistency Group
to perform Copy Data functions on the entire group. If the associated SLA Policy contains both
FlashCopy and Global Mirror subpolicies, a separate Consistency Group will be created for each copy
type. Note that if more than one IBM provider is selected in the job definition, a Consistency Group will
be created for each provider. Consistency Groups are named based on the prefix provided during job
creation plus the job name.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add; additional parameters can be added one at a time in
the same way. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Enable Job-level Snapshot Scripts
Snapshot prescripts and postscripts are scripts that can be run before or after a storage-based
snapshot task runs. The snapshot prescript runs before all associated snapshots are run, while the
snapshot postscript runs after all associated snapshots complete. A script can consist of one or many
commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for
Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add; additional parameters can be added one at a time in
the same way. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
_SNAPSHOTS_ is an optional parameter for snapshot postscripts that resolves to a comma-separated
string of values containing all of the storage-based snapshots created by the job. The format of each value
is as follows: <registered provider name>:<volume name>:<snapshot name>.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
9. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
10. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your triggers, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
l Use the Inventory Browse feature to review the recovery point. See Browse Inventory
on page 366.
l Create an IBM Spectrum Virtualize Restore job definition. See Create a Restore Job
Definition - IBM Spectrum Virtualize on page 301.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
l Create a Restore Job Definition - IBM Spectrum Virtualize on page 301
User's Guide Create a Backup Job Definition - NetApp ONTAP
CONSIDERATIONS:
l Note that NetApp ONTAP Backup jobs can only vault or mirror snapshots created
through IBM Spectrum Copy Data Management jobs.
l Note that cloned volumes will not be replicated through a Backup job.
l Note that snapshot postscript functionality applies only to NetApp ONTAP storage
snapshot subpolicies.
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
Note: Only SLA Policies with the same RPO frequencies can be linked through the Same as workflow
option. Define an RPO frequency when creating an SLA Policy.
7. To create the job definition using default options, click Create Job. The job runs as defined by your
triggers, or can be run manually from the Jobs tab.
8. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum Concurrent Tasks
Set the maximum number of concurrent transfers between the source and the destination.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add; additional parameters can be added one at a time in
the same way. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Enable Job-level Snapshot Scripts
Snapshot prescripts and postscripts are scripts that can be run before or after a storage-based
snapshot task runs. The snapshot prescript runs before all associated snapshots are run, while the
snapshot postscript runs after all associated snapshots complete. A script can consist of one or many
commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for
Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add; additional parameters can be added one at a time in
the same way. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed. For parameter examples, see Using State and
Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
_SNAPSHOTS_ is an optional parameter for snapshot postscripts that resolves to a comma-separated
string of values containing all of the storage-based snapshots created by the job. The format of each value
is as follows: <registered provider name>:<volume name>:<snapshot name>.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
9. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
10. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your triggers, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
l Use the Inventory Browse feature to review the recovery point. See Browse Inventory
on page 366.
l Create a NetApp ONTAP Restore job definition. See Create a Restore Job Definition -
NetApp ONTAP on page 307.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
User's Guide Create a Backup Job Definition - HPE Nimble Storage
CONSIDERATIONS:
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
Note: HPE Nimble Storage requires that the consistency group option be selected. Click Advanced to
enable this option.
2. Click New, then select Backup. The job editor opens.
7. To create the job definition using default options, click Create Job. The job runs as defined by your
triggers, or can be run manually from the Jobs tab.
8. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum Concurrent Tasks
Set the maximum number of concurrent transfers between the source and the destination.
Create Consistency Group
If multiple volumes are selected in the Source tab (for example, volumes that contain data tied to an
application) enable this option to add the volumes to a Consistency Group to perform backup functions
on the entire group. If the associated SLA Policy contains different subpolicy types, a separate
Consistency Group will be created for each backup type. Note that if more than one HPE Nimble
Storage provider is selected in the job definition, a Consistency Group will be created for each provider.
Consistency Groups are named based on the prefix provided during job creation plus the job name.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add; additional parameters can be added by entering them one at a time and clicking Add after each. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script will be injected and executed. For parameter examples, see Using State and Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Enable Job-level Snapshot Scripts
Snapshot prescripts and postscripts are scripts that can be run before or after a storage-based
snapshot task runs. The snapshot prescript runs before all associated snapshots are run, while the
snapshot postscript runs after all associated snapshots complete. A script can consist of one or many
commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for
Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add; additional parameters can be added by entering them one at a time and clicking Add after each. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script will be injected and executed. For parameter examples, see Using State and Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
_SNAPSHOTS_ is an optional parameter for snapshot postscripts that displays a comma separated
value string containing all of the storage-based snapshots created by the job. The format of each value
is as follows: <registered provider name>:<volume name>:<snapshot name>.
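A snapshot postscript can split this value to act on each snapshot individually. The sketch below assumes the _SNAPSHOTS_ value arrives as the script's first argument; the sample value and names are illustrative, not taken from a real job:

```shell
#!/bin/sh
# Sketch of a snapshot postscript that parses the _SNAPSHOTS_ parameter.
# The value is a comma-separated list of
# <registered provider name>:<volume name>:<snapshot name> triples.
parse_snapshots() {
    # Print one "provider=... volume=... snapshot=..." line per entry.
    echo "$1" | tr ',' '\n' | while IFS=':' read -r provider volume snapshot; do
        echo "provider=$provider volume=$volume snapshot=$snapshot"
    done
}

# Illustrative value in the documented format:
parse_snapshots "nimble1:vol_sql:snap_0800,nimble1:vol_logs:snap_0800"
```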
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
9. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
10. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your triggers, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
l Use the Inventory Browse feature to review the recovery point. See Browse Inventory
on page 366.
l Create a HPE Nimble Storage Restore job definition. See Create a Restore Job
Definition - HPE Nimble Storage on page 314.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
l Create a Restore Job Definition - HPE Nimble Storage on page 314
User's Guide Create a Backup Job Definition - Pure Storage FlashArray
CONSIDERATIONS:
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
Note: Pure Storage FlashArray requires that the consistency group option be selected. Click Advanced
to enable this option.
2. Click New, then select Backup. The job editor opens.
6. Click the job definition's associated Schedule Time field and select Enable Schedule to set a time to run
the SLA Policy. If a schedule is not enabled, run the job on demand through the Jobs tab. Repeat as
necessary to add additional SLA Policies to the job definition.
If configuring more than one SLA Policy in a job definition, select the Same as workflow option to trigger
multiple SLA Policies to run concurrently.
Note: Only SLA Policies with the same RPO frequencies can be linked through the Same as workflow
option. Define an RPO frequency when creating an SLA Policy.
7. To create the job definition using default options, click Create Job. The job runs as defined by your
triggers, or can be run manually from the Jobs tab.
8. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum Concurrent Tasks
Set the maximum number of concurrent transfers between the source and the destination.
Create Consistency Group
If multiple volumes are selected in the Source tab (for example, volumes that contain data tied to an
application) enable this option to add the volumes to a Consistency Group to perform backup functions
on the entire group. If the associated SLA Policy contains different subpolicy types, a separate
Consistency Group will be created for each backup type. Note that if more than one Pure Storage
FlashArray provider is selected in the job definition, a Consistency Group will be created for each
provider. Consistency Groups are named based on the prefix provided during job creation plus the job
name.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add; additional parameters can be added by entering them one at a time and clicking Add after each. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script will be injected and executed. For parameter examples, see Using State and Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Enable Job-level Snapshot Scripts
Snapshot prescripts and postscripts are scripts that can be run before or after a storage-based
snapshot task runs. The snapshot prescript runs before all associated snapshots are run, while the
snapshot postscript runs after all associated snapshots complete. A script can consist of one or many
commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for
Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add; additional parameters can be added by entering them one at a time and clicking Add after each. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script will be injected and executed. For parameter examples, see Using State and Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
_SNAPSHOTS_ is an optional parameter for snapshot postscripts that displays a comma separated
value string containing all of the storage-based snapshots created by the job. The format of each value
is as follows: <registered provider name>:<volume name>:<snapshot name>.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
9. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
10. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your triggers, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
l Use the Inventory Browse feature to review the recovery point. See Browse Inventory
on page 366.
l Create a Pure Storage FlashArray Restore job definition. See Create a Restore Job
Definition - Pure Storage FlashArray on page 320.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
l Create a Restore Job Definition - Pure Storage FlashArray on page 320
User's Guide Create a Backup Job Definition - VMware
CONSIDERATIONS:
l Note that VMware Backup and Restore jobs only support vCenters or ESX hosts
running vSphere 6.0 through 7.0.
l When running VADP-based VM Replication workflows, target volumes and datastores
can be automatically expanded in response to space usage requirements if supported
by the underlying storage. Automatic growing prevents a volume from running out of
space or forcing you to delete files manually. For a list of supported storage systems
see System Requirements on page 23.
l Note that VMware DRS cluster datastores are supported in VMware Backup and
Restore jobs.
l In NetApp ONTAP environments running Clustered Data ONTAP, cluster peering must
be enabled. Peer relationships enable communication between SVMs. See NetApp
ONTAP's Cluster and Vserver Peering Express Guide.
l In addition to NFS, IBM Spectrum Copy Data Management supports VMFS datastores
for NetApp storage targets.
l In IBM storage environments, port grouping and IP partnerships are required to enable
remote copy connections. See IBM's SAN Volume Controller and Storwize Family
Native IP Replication Guide.
l All related NetApp ONTAP storage resources associated with a VMware provider must
be added to IBM Spectrum Copy Data Management, which include NetApp ONTAP
storage controllers and clusters. See Register a Provider on page 79.
l Note that VMware Backup jobs do not support virtual machine SCSI controllers where
the SCSI Bus Sharing value is set to virtual or physical.
l Note that Instant Disk Restore recoveries utilizing the VM Replication method are not
supported at the datastore level. Instant Disk Restore datastore level recoveries are
supported through the primary storage snapshot method.
l Note that snapshot protection is not supported at an ESX server level.
l Note that cloned volumes will not be replicated through a backup job.
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
4. From the drop-down menu select VMs and Templates or Storage. From the list of available sites, select
one or more resources to back up, including virtual machines, VM templates, datastores, folders, vApps,
and datacenters.
5. Select an SLA Policy that meets your backup data criteria.
6. Click the job definition's associated Schedule Time field and select Enable Schedule to set a time to run
the SLA Policy. If a schedule is not enabled, run the job on demand through the Jobs tab. Repeat as
necessary to add additional SLA Policies to the job definition.
If configuring more than one SLA Policy in a job definition, select the Same as workflow option to trigger
multiple SLA Policies to run concurrently.
Note: Only SLA Policies with the same RPO frequencies can be linked through the Same as workflow
option. Define an RPO frequency when creating an SLA Policy.
7. To create the job definition using default options, click Create Job. The job runs as defined by your
triggers, or can be run manually from the Jobs tab.
8. To edit options before creating the job definition, click Advanced. Set the job definition options.
Maximum Concurrent Tasks
Set the maximum number of concurrent transfers between the source and the destination.
Create VM snapshots for all VMs
Enable to configure virtual machine snapshot options. Available options include creating snapshots for all virtual machines, making all virtual machines in the job application- or file-system-consistent, or making only specific virtual machines in the job application- or file-system-consistent.
Application consistent backup data captures data in memory and transactions in process. All VSS-
compliant applications such as Microsoft Active Directory, Microsoft Exchange, Microsoft SharePoint,
Microsoft SQL, and system state are quiesced. VMDKs and virtual machines can be instantly mounted
to recover data related to quiesced applications.
Truncate application logs
To truncate application logs for SQL during the Backup job, enable the Truncate application logs
option. Note that credentials must be established for the associated virtual machine and SQL instance
through the Sites & Providers pane on the Configure tab. Select a VMware provider, click the
VMs tab, then click the associated virtual machine. Click the Credentials tab and add credentials for
the virtual machine. Note that System credentials are always required. If the credentials are the same
for the SQL instance, select the Use System Credentials for app option. If the credentials differ, you
must provide credentials for all SQL instances, including the default SQL server. Ensure the Type field
in the New Credential dialog window is set to SQL.
IBM Spectrum Copy Data Management generates logs pertaining to the application log truncation
function and copies them to the following location on the IBM Spectrum Copy Data Management
appliance: /data/log/ecxdeployer/<vm name>/logs.
VM Snapshot Scripts
VM snapshot prescripts and postscripts are scripts that can be run on the virtual machine before or
after a VMware virtual machine snapshot is taken. The snapshot prescript runs before a VMware
virtual machine snapshot is captured, while the snapshot postscript runs after the snapshot completes.
A script can consist of one or many commands, such as a shell script for Linux-based virtual machines
or Batch and PowerShell scripts for Windows-based virtual machines. See Configure Scripts on page
160.
Select a virtual machine, then click the Scripts field in the Pre-Script or Post-Script section to select or upload a script. Once complete, the script displays in the Selected Script(s) section. Click the Parameters field to add a parameter to the script, then click Add; additional parameters can be added by entering them one at a time and clicking Add after each.
Click the Identity field to add or create the credentials required to run the script. See Identities
Overview on page 134.
Repeat the procedure for each virtual machine associated with the job.
Note: When the VM Snapshot script is run, the virtual machine name will be passed as the first
argument to the script. Any additional arguments specified in the job will follow as second, third, and so
forth. If a non-zero exit code is returned by the script, the associated snapshot task fails.
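As a sketch of this argument convention, a Linux prescript might read the virtual machine name from the first argument and fail the snapshot task with a non-zero exit code when a precondition is not met; the VM name and extra parameter below are illustrative:

```shell
#!/bin/sh
# Sketch of a VM snapshot prescript. IBM Spectrum Copy Data Management passes
# the virtual machine name as $1; job-defined parameters follow as $2, $3, ...
# A non-zero return fails the associated snapshot task.
prescript() {
    vm_name="$1"
    shift
    if [ -z "$vm_name" ]; then
        echo "No virtual machine name supplied" >&2
        return 1    # non-zero: the snapshot task fails
    fi
    echo "Preparing snapshot for VM: $vm_name (extra args: $*)"
    return 0        # zero: the snapshot proceeds
}

prescript "sqlvm01" "param1"   # prints: Preparing snapshot for VM: sqlvm01 (extra args: param1)
```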
Skip read only datastores
Enable to skip datastores mounted as read-only in vCenter.
Skip IA Mount points and/or databases
Enable to skip Instant Disk Restore objects. By default, this option is enabled.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add; additional parameters can be added by entering them one at a time and clicking Add after each. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script will be injected and executed. For parameter examples, see Using State and Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Enable Job-level Snapshot Scripts
Snapshot prescripts and postscripts are scripts that can be run before or after a storage-based
snapshot task runs. The snapshot prescript runs before all associated snapshots are run, while the
snapshot postscript runs after all associated snapshots complete. A script can consist of one or many
commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for
Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add; additional parameters can be added by entering them one at a time and clicking Add after each. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script will be injected and executed. For parameter examples, see Using State and Status Arguments in Postscripts on page 343.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
_SNAPSHOTS_ is an optional parameter for snapshot postscripts that displays a comma separated
value string containing all of the storage-based snapshots created by the job. The format of each value
is as follows: <registered provider name>:<volume name>:<snapshot name>.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
9. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
10. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your triggers, or can be run manually from the Jobs tab.
NEXT STEPS:
l If in a Linux environment, consider creating VADP proxies to enable load sharing. See
Create VMware Backup Job Proxies on page 256.
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
l Use the Inventory Browse feature to review the recovery point. See Browse Inventory
on page 366.
l Create a VMware Restore job definition. See Create a Restore Job Definition - VMware
on page 325.
RELATED TOPICS:
l NetApp ONTAP Document: Cluster and Vserver Peering Express Guide
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Configure Scripts on page 160
l Create a Schedule on page 163
l Create VMware Backup Job Proxies on page 256
l Create a Restore Job Definition - VMware on page 325
User's Guide Create VMware Backup Job Proxies
System Requirements:
This feature has been tested only for Ubuntu, SUSE Linux Enterprise Server, and Red Hat environments. It is
supported only in x64 configurations with a minimum kernel of 2.6.32.
A minimum of 8 GB of RAM is required (16 GB recommended), along with 60 GB of disk space.
Each proxy must have a fully qualified domain name.
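The kernel floor can be verified with a small sketch like the following; the comparison via sort -V is an illustration and is not part of the installer:

```shell
#!/bin/sh
# Sketch: verify a kernel version string against the documented 2.6.32 minimum.
kernel_ok() {
    # sort -V orders dotted version strings; if 2.6.32 sorts first (or ties),
    # the supplied version is >= 2.6.32.
    [ "$(printf '%s\n' "2.6.32" "$1" | sort -V | head -n 1)" = "2.6.32" ]
}

# Strip any "-generic"-style suffix from uname -r before comparing.
if kernel_ok "$(uname -r | cut -d- -f1)"; then
    echo "kernel meets the 2.6.32 minimum"
fi
```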
Installer Notes:
The IBM Spectrum Copy Data Management 2.2.6+ version of the VADP Proxy installer includes Virtual Disk
Development Kit (VDDK) version 5.5.5. This version of the VADP proxy installer provides the following
functionality:
l External VADP Proxy support with vSphere 6.5
l External VADP Proxy support for Hot Add operations, which provides higher performance for VADP
Backups.
To create a proxy:
For each proxy:
1. Power up a physical or virtual Linux machine that meets the system requirements defined above, and is
on the same network as the host IBM Spectrum Copy Data Management machine.
2. Copy the VADP Proxy installation program to the local proxy machine. The VADP Proxy file can be
downloaded from the MySupport site.
3. Log in to the proxy machine as root, or as a user capable of running “sudo” commands.
4. On the proxy machine, open a terminal. Enter the following command to install the proxy server software:
./vmdkbackup-1.0-installer.bin
The Setup wizard opens.
Note: Alternatively, you can run the installer in text mode by entering the following command: ./vmdkbackup-1.0-installer.bin --mode text
5. Follow the steps in the Setup wizard to configure your proxy server and connect to the IBM Spectrum
Copy Data Management host.
b. When prompted for the IBM Spectrum Copy Data Management Discovery Server IP, enter the IP
Address of the IBM Spectrum Copy Data Management host.
c. When prompted for the IBM Spectrum Copy Data Management Site String, set it to “default”.
6. Click Finish when the Setup wizard indicates it has completed. After installation, note that your new
installation directory includes a subdirectory called /log, which is the job log location.
After successful installation, the ecxvadp service is started on the proxy machine. A log file, ecxvadp.log, is generated in the /opt/CDM/logs directory.
Repeat the previous steps for each proxy you want to create.
2. In the IBM Spectrum Copy Data Management management console, click the arrow next to the Support
icon, then choose View Edge Services Status. The IBM Spectrum Copy Data Management host
NEXT STEPS:
l Run the VMware Backup job. The use of the proxies is indicated in the job log by a log
message similar to the following:
RELATED TOPICS:
l Create a Backup Job Definition - VMware on page 250
l Start, Pause, and Hold a Job Session on page 170
Restore Jobs
The topics in the following section cover Restore job definitions as well as postscript argument details.
User's Guide Create a Restore Job Definition - InterSystems Caché
CONSIDERATIONS:
l Note that the following users and groups must be created on the target host: instance
owner, effective user for InterSystems Caché superserver and its jobs, effective group
for InterSystems Caché processes, and a group that has permissions to start and stop
InterSystems Caché instances. The user and group IDs should match those on the
source host. The instance will be brought up using the same mount points as those
found on the source machine, so ensure these mounts are not in use on the target.
l Note that it is possible to scan in an InterSystems Caché backup failover member
instance or an async member instance and run snapshots against the mirror copy
instead of the primary failover member.
l When creating an InterSystems Caché restore job definition, select only one instance to
restore. If more than one instance is selected, the InterSystems Caché agent only
restores the last instance it receives in the command request.
l When restoring to a target with running InterSystems Caché instances, the instances
display as valid targets. Note that IBM Spectrum Copy Data Management will not
interact with these instances, but instead bring up a new instance using mapped mount
points. When restoring to a target with no prior InterSystems Caché instances, IBM
Spectrum Copy Data Management creates a placeholder that acts as a restore target
Best Practice: Create a schedule before creating a job definition so that you can easily add the schedule to
the job definition.
To create an InterSystems Caché Restore job definition:
1. Click the Jobs tab. Expand the Database folder, then select InterSystems Caché.
application server to view available database recovery points. Select resources, and change the order in
which the resources are recovered by dragging and dropping the resources in the grid.
Alternatively, select Application Search from the drop-down menu to search for application servers with
available recovery points. Add copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy. Sites containing copies of the selected data display. Select a site. By default the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Destination. Select a source site and an associated destination. If creating an Instant Disk
Restore job definition, review the destination's database name mapping settings. Optionally, click the
New database name field to create an alternate database name.
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Application Options
Rename Mount Points
For more information about the Rename Mount Points options, see Restore Jobs - Rename Mount
Points and Initialization Parameter Options on page 340.
If creating an Instant Database Restore job definition, this option is set to Do Not Rename by default.
IBM Spectrum Copy Data Management will mount the mount points with the same path/name as the
source.
Change the GUID of the Instance
By default the restored instance will have the same GUID as the source instance. If creating an Instant
Database Restore job definition, selecting this option will generate a new GUID to be assigned to the
restored instance.
Policy Options
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If unselected, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the database recovery fails.
Allow to overwrite and force clean up of pending old sessions
Enabling this option allows a scheduled session of a recovery job to force an existing pending session
to clean up associated resources so the new session can run. Disable this option to keep an existing
test environment running without being cleaned up.
Allow to overwrite vDisk
In cases where the Make Permanent option is enabled, and the destination VM has conflicting VMDK
files, enable the Allow to overwrite vDisk option to delete the existing VMDK and overwrite it with the
selected source.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add; additional parameters can be added by entering them one at a time and clicking Add after each. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
User's Guide Create a Restore Job Definition - InterSystems Caché
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
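As a sketch of how a Restore post-script might consume the positional state and status arguments described above (a minimal example; the status value compared below is an illustrative assumption, not a documented constant):

```shell
#!/bin/sh
# Minimal Restore post-script sketch. IBM Spectrum Copy Data Management
# passes the job state and status as positional arguments ($1 and $2);
# the "FAILED" value checked here is an assumption for illustration.
handle_restore_result() {
  state="$1"
  status="$2"
  if [ "$status" = "FAILED" ]; then
    # A nonzero exit lets the Continue operation on script failure
    # option decide whether the job keeps running.
    echo "restore post-script: state=$state status=$status" >&2
    return 1
  fi
  echo "restore post-script: state=$state status=$status"
  return 0
}

handle_restore_result "$@"
```

Upload such a script through the Scripts view and attach it as a Post-Script; the state and status arguments are supplied by the job at run time.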
Storage Options
Make Permanent
Set the default permanent restoration action of the job. All database recovery operations can leverage
Instant or Test modes and then either be deleted or promoted to permanent mode. This behavior is
controlled through the Make Permanent option. If you choose to enable make permanent for database
recovery operations, VM and application inventory jobs must explicitly be re-run to capture the updated
configuration for the application server(s).
Enabled - Always make permanent through full copy FlashCopy
Disabled - Never make permanent
User Selection - Allows the user to select Make Permanent or Cleanup when the job session is
pending
Protocol Priority
If more than one storage network protocol is available, select the protocol to take priority in the job.
Available protocols include iSCSI and Fibre Channel.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email addresses of the status email notification recipients. Click Add to add each address to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at a later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Create a Backup Job Definition - InterSystems Caché on page 205
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create a Restore Job Definition - SAP HANA
Best Practice: Create a schedule before creating a job definition so that you can easily add the schedule to
the job definition.
To create an SAP HANA Restore job definition:
1. Click the Jobs tab. Expand the Database folder, then select SAP HANA.
5. Click Source. From the drop-down menu select Application Browse to select a source site and an
application server to view available database recovery points. Select resources, and change the order in
which the resources are recovered by dragging and dropping the resources in the grid.
Alternatively, select Application Search from the drop-down menu to search for application servers with
available recovery points. Add copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy. Sites containing copies of the selected data display. Select a site. By default, the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Destination. Select a source site and an associated database. Review the destination's
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Application Options
Rename Mount Points
For more information about the Rename Mount Points options, see Restore Jobs - Rename Mount
Points and Initialization Parameter Options on page 340.
Policy Options
Continue with next source on failure
Enable to continue recovering the remaining resources in the series if the recovery of a resource fails. If
unselected, the Restore job stops when the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the database recovery fails.
Allow to overwrite and force clean up of pending old sessions
Enabling this option allows a scheduled session of a recovery job to force an existing pending session
to clean up associated resources so the new session can run. Disable this option to keep an existing
test environment running without being cleaned up.
Allow to overwrite vDisk
In cases where the Make Permanent option is enabled, and the destination VM has conflicting VMDK
files, enable the Allow to overwrite vDisk option to delete the existing VMDK and overwrite it with the
selected source.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added to a script by
entering them one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Storage Options
Make Permanent
Set the default permanent restoration action of the job. All database recovery operations can leverage
Instant or Test modes and then either be deleted or promoted to permanent mode. This behavior is
controlled through the Make Permanent option. If you choose to enable make permanent for database
recovery operations, VM and application inventory jobs must explicitly be re-run to capture the updated
configuration for the application server(s).
Enabled - Always make permanent through full copy FlashCopy
Disabled - Never make permanent
User Selection - Allows the user to select Make Permanent or Cleanup when the job session is
pending
Protocol Priority
If more than one storage network protocol is available, select the protocol to take priority in the job.
Available protocols include iSCSI and Fibre Channel.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email addresses of the status email notification recipients. Click Add to add each address to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at a later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Restoring an SAP HANA instance through the SAP HANA Client (Studio)
After the IBM Spectrum Copy Data Management SAP HANA Instant Disk Restore job completes, you must
restore the data volume, then finalize the restore using SAP HANA Client. Note that this procedure applies
only to Instant Disk Restore jobs utilizing the Make Permanent option.
Note: Users of SAP HANA 2.0 SP1 or later must register their systems with the Multiple Containers mode and
the System database option selected. Further options specific to SAP HANA 2.0 SP1 or later are specified in
the procedure below.
l On the SAP HANA Client, right-click the SAP HANA system and select Backup and Recovery, then
select Recover System. SAP HANA 2.0 SP1 users should select Recover System Database. The SAP
HANA system shuts down in preparation for the recovery.
l From a command line, perform the following steps:
Unmount the original data volume (if present): umount /<path of data installation> (for
example, umount /hana/data).
Unmount the data volume recovered by IBM Spectrum Copy Data Management: umount /<path of
recovered volume> (for example, umount /dev/sde1).
Mount the new data volume again: mount </dev/mapper/newID> /<path of original data
installation> (for example, mount /dev/sde1 /hana/data).
l For SAP HANA 2.0 SP1 or later environments: Back in the SAP HANA Client interface, keep the
default selections and click Next until the last of the backup options display, then click Finish.
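The unmount/remount sequence above can be sketched as a small helper that prints the commands so they can be reviewed before being run (a dry-run sketch; the paths are the examples from the text, not fixed values):

```shell
# Prints the remount sequence described above without executing it, so the
# commands can be verified first. Arguments are illustrative examples.
build_remount_cmds() {
  orig_mount="$1"      # original data installation, e.g. /hana/data
  recovered_dev="$2"   # volume recovered by the Restore job, e.g. /dev/sde1
  echo "umount $orig_mount"
  echo "umount $recovered_dev"
  echo "mount $recovered_dev $orig_mount"
}

build_remount_cmds /hana/data /dev/sde1
```

Review the printed commands, then run them in order as root on the SAP HANA server.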
Restoring an SAP HANA system database from a specific backup or storage snapshot:
1. Right-click the SAP HANA system and select Backup and Recovery.
2. Select Recover System Database to continue with the recovery. The Specify system database dialog
will load.
3. Select the SAP HANA system database.
4. Click Next. The Specify Recovery Type dialog will appear.
5. Select the Recover the database to a specific data backup or storage snapshot option, then click
Next.
6. In the Backup Location screen, keep the default selections and click Next.
7. In the Select a Backup screen, select the backup marked with a green availability indicator, then continue
clicking Next until the last of the backup options displays, then click Finish.
Restoring an SAP HANA tenant database from a specific backup or storage snapshot:
1. Right-click the SAP HANA system and select Backup and Recovery.
2. Select Recover Tenant Database to continue with the recovery. The Specify tenant database dialog will
load.
3. Select the SAP HANA tenant database system.
4. Click Next. The Specify Recovery Type dialog will appear.
5. Select the Recover the database to a specific data backup or storage snapshot option, then click
Next.
6. In the Backup Location screen, keep the default selections and click Next.
7. In the Select a Backup screen, select the backup marked with a green availability indicator, then continue
clicking Next until the last of the backup options displays, then click Finish.
/hana/logbackup/<SID>/SYSTEMDB
where <SID> is the system ID of the database.
12. Click Add. It may be necessary to select this location from the list of locations.
13. Click Next. The Other Settings dialog will appear. Ensure File System and Use Delta Backups
(Recommended) is selected.
14. Click Next. The Review Recovery Settings dialog will appear. Review the selections and click Back if any
revisions are necessary.
15. Click Finish. The Recovery Execution Summary will appear.
16. Click Close.
Tip: After completing an Instant Disk Restore for SAP HANA and using the Make Permanent option, both
data and temp data disks may still appear on the SAP HANA server until the SCSI bus is rescanned or the
SAP HANA server is restarted. Rescan the SCSI bus on the SAP HANA server:
# ls /sys/class/scsi_host/
For each host found, where <x> is the number of the host, issue:
# echo "- - -" > /sys/class/scsi_host/host<x>/scan
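The per-host rescan above can be wrapped in a short loop (a sketch; the sysfs base path is parameterized only so the loop can be exercised outside a live system, where it defaults to the real /sys/class/scsi_host):

```shell
# Issues a SCSI rescan to every host found under the given sysfs base,
# matching the per-host echo command described above.
rescan_scsi_hosts() {
  base="${1:-/sys/class/scsi_host}"
  for h in "$base"/host*; do
    [ -d "$h" ] || continue
    echo "- - -" > "$h/scan"
  done
}
```

On the SAP HANA server, run `rescan_scsi_hosts` as root with no arguments to rescan all hosts.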
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Create a Backup Job Definition - SAP HANA on page 208
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create a Restore Job Definition - Oracle
l If Oracle data resides on LVM volumes, you must stop and disable the lvm2-lvmetad
service before running Backup or Restore jobs. Leaving the service enabled can
prevent volume groups from being resignatured correctly during restore and can lead to
data corruption if the original volume group is also present on the same system. To
disable the lvm2-lvmetad service, run the following commands:
systemctl stop lvm2-lvmetad
systemctl disable lvm2-lvmetad
Next, disable lvmetad in the LVM config file. Edit the file /etc/lvm/lvm.conf and set:
use_lvmetad = 0
l For email notifications, at least one SMTP server must be configured. Before defining a
job, add SMTP resources. See Register a Provider on page 79.
l You must add credentials to the destination virtual machine when recovering with the
subnet option. See Add Credentials to a Virtual Machine on page 106.
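The lvm2-lvmetad steps in the first bullet above can be captured in one helper (a sketch; it assumes use_lvmetad already appears in lvm.conf, as on stock installs, and the service commands are shown as comments because they require a live system and root):

```shell
# Disables the lvmetad setting in the given lvm.conf. On a real host the
# path is /etc/lvm/lvm.conf; it is a parameter here for illustration.
disable_lvmetad() {
  conf="$1"
  # On a live system, first stop and disable the service:
  #   systemctl stop lvm2-lvmetad
  #   systemctl disable lvm2-lvmetad
  # Then force use_lvmetad = 0, preserving the line's indentation:
  sed -i 's/^\([[:space:]]*\)use_lvmetad[[:space:]]*=.*/\1use_lvmetad = 0/' "$conf"
}
```

Run this before defining Oracle Backup or Restore jobs on hosts where Oracle data resides on LVM volumes.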
Best Practice: Create a schedule before creating a job definition so that you can easily add the schedule to
the job definition.
To create an Oracle Restore job definition:
1. Click the Jobs tab. Expand the Database folder, then select Oracle.
4. Select a template. Available options include DevOps, Instant Database Restore, and Instant
Disk Restore.
5. Click Source. From the drop-down menu select Application Browse to select a source site and an
application server to view available database recovery points. Select resources, and change the order in
which the resources are recovered by dragging and dropping the resources in the grid.
Alternatively, select Application Search from the drop-down menu to search for application servers with
available recovery points. Add copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy. Sites containing copies of the selected data display. Select a site. By default, the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
If creating an Instant Database Restore job definition, an additional recovery option is available
through the Select Version feature. Enable Allow Point-in-Time selection when job runs to leverage
archived logs and enable a point-in-time recovery of the databases.
If creating an Instant Disk Restore job definition, the RMAN tag displays next to the time in the
Version field. An Oracle administrator can correlate the RMAN backups to the IBM Spectrum Copy Data
Management versions during job creation.
7. Click Destination. Select a source site and an associated Oracle home. Click the Destination field to
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Application Options
Record mounted copies in RMAN local repository
This option is available for Instant Disk Restore workflows.
Select this option to catalog mounted copies into RMAN at the end of the Instant Disk Restore job. This
option is an alternative to the cataloging of IBM Spectrum Copy Data Management-created copies
during a Backup job. If the RMAN cataloging option is selected in the Backup job definition, every copy
created by IBM Spectrum Copy Data Management is cataloged in the source database immediately
after the copy is created. By contrast, this option allows you to perform cataloging on-demand only for a
specific copy and only when you intend to restore data from that copy through RMAN. Note that for the
cataloging to succeed the target database must be running at the time the Instant Disk Restore job
runs.
Rename Mount Points and Database Initialization Parameters
For more information about the Rename Mount Points and Database Initialization Parameters
(Instant Database Recovery and DevOps workflows only) options, see Restore Jobs - Rename Mount
Points and Initialization Parameter Options on page 340.
ASM Disk Names
This option allows you to specify the disk naming pattern for restored ASM disks, if available. If Use
default pattern is selected, IBM Spectrum Copy Data Management uses the default naming pattern,
which is /dev/ecx-asmdisk/* in Linux environments or /dev/ecx_asm* in AIX environments. Select
Specify a custom pattern to set ASM disks to follow any naming conventions that may be in use for
existing disks. The custom pattern must begin with "/dev" and must end with an asterisk (*). During
restore, IBM Spectrum Copy Data Management creates a device alias, or symlink, matching the
specified pattern and replaces the asterisk with a unique disk name.
Note: This option has no effect if the database being restored does not use any ASM disks.
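The pattern rule above (a custom pattern must begin with "/dev" and end with an asterisk) can be checked with a tiny helper before saving the job definition (a sketch for validation only, not part of the product):

```shell
# Returns success only when the custom ASM disk pattern starts with /dev
# and ends with a literal asterisk, per the rule described above.
valid_asm_pattern() {
  case "$1" in
    /dev*'*') return 0 ;;
    *)        return 1 ;;
  esac
}
```

For example, `valid_asm_pattern '/dev/my-asm/disk*'` succeeds, while a pattern without the trailing asterisk fails.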
Policy Options
Continue with next source on failure
Enable to continue recovering the remaining resources in the series if the recovery of a resource fails. If
unselected, the Restore job stops when the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the database recovery fails.
Allow to overwrite and force clean up of pending old sessions
Enabling this option allows a scheduled session of a recovery job to force an existing pending session
to clean up associated resources so the new session can run. Disable this option to keep an existing
test environment running without being cleaned up.
Replace existing database
This option is available for Instant Database Recovery workflows.
Select this option to replace an existing database with the same name during recovery. When an
Instant Database Recovery is performed for a database and another database with the same name is
already running on the destination host/cluster, IBM Spectrum Copy Data Management shuts down the
existing database before starting up the recovered database. If this option is not selected, the Instant
Database Recovery fails when IBM Spectrum Copy Data Management encounters an existing running
database with the same name.
Leave database shut down after recovery
This option is available for Instant Database Recovery and DevOps workflows.
Select this option to shut down the recovered database once the recovery operation completes. The
database can be started up manually once needed.
Allow to overwrite vDisk
In cases where the Make Permanent option is enabled, and the destination VM has conflicting VMDK
files, enable the Allow to overwrite vDisk option to delete the existing VMDK and overwrite it with the
selected source.
Do not stretch recovery disks (storage features like Enhanced Stretched Cluster)
Enabling this option prevents the creation of stretched recovery disks. This only applies to volumes that have
been created using stretched FlashCopies.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added to a script by
entering them one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Storage Options
Make Permanent
Set the default permanent restoration action of the job. All database recovery operations can leverage
Instant or Test modes and then either be deleted or promoted to permanent mode. This behavior is
controlled through the Make Permanent option. If you choose to enable make permanent for database
recovery operations, VM and application inventory jobs must explicitly be re-run to capture the updated
configuration for the application server(s).
Enabled - Always make permanent through full copy FlashCopy
Disabled - Never make permanent
User Selection - Allows the user to select Make Permanent or Cleanup when the job session is
pending
Protocol Priority
If more than one storage network protocol is available, select the protocol to take priority in the job.
Available protocols include iSCSI and Fibre Channel.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email addresses of the status email notification recipients. Click Add to add each address to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at a later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Create a Backup Job Definition - Oracle on page 212
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create a Restore Job Definition - SQL
l Create and run a SQL Backup job. See Create a Backup Job Definition - SQL on page
219.
l Review SQL requirements. See Microsoft SQL Server Requirements on page 61 and
Microsoft SQL Server Support FAQ on page 592.
l For email notifications, at least one SMTP server must be configured. Before defining a
job, add SMTP resources. See Register a Provider on page 79.
l You must add credentials to the destination virtual machine when recovering with the
subnet option. See Add Credentials to a Virtual Machine on page 106.
l When creating an Instant Seeding restore job definition, the destination must be a non-
system drive. The SQL database and log files must be located on non-system drives.
Best Practice: Create a schedule before creating a job definition so that you can easily add the schedule to
the job definition.
To create a Microsoft SQL Restore job definition:
1. Click the Jobs tab. Expand the Database folder, then select SQL.
5. Select a template. Available options include Instant Database Restore for Microsoft
SQL Standalone/Failover Cluster and Always On Availability Group jobs, Instant Disk Restore, and
Instant Seeding for Microsoft SQL Always On Availability Group jobs.
6. Click Source. From the drop-down menu select Application Browse to select a source site and an
application server to view available database recovery points. Select resources, and change the order in
which the resources are recovered by dragging and dropping the resources in the grid.
Alternatively, select Application Search from the drop-down menu to search for application servers with
available recovery points. Add copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
7. Click Copy. Click Select Backup Date Range to provide a date range for which recovery points
are displayed. Select a range of dates by selecting a date for Start Time and a date for End Time. After a
range is established, click OK. Sites containing copies of the selected data display. Select a site. By
default, the latest copy of your data is used. To choose a specific version, select a site and click Select
Version. Click the Version field to view specific copies and their associated job and completion time. If
recovery from one snapshot fails, another copy from the same site is used.
If creating an Instant Database Restore job definition, an additional recovery option is available
through the Select Version feature. Enable Allow Point-in-Time selection when job runs to leverage
archived logs and enable a point-in-time recovery of the databases.
8. Click Destination. Select a source site and an associated Microsoft SQL database. Click the New
database name field to enter an optional alternate name for the database. If the destination is a
SQL Failover Cluster instance, select a Windows server proxy in the Select a Windows server to
resignature LUNs section. When resignaturing a copy, IBM Spectrum Copy Data Management retains
the data and mounts the volume to the proxy selected in the Select a Windows server to resignature
LUNs section.
Note: Any Windows node with iSCSI or Fibre Channel access to the storage can be selected as a proxy
server, provided that the node is not part of the original cluster. It is recommended to select a standalone
virtual or physical Windows node as a proxy server.
Note: If creating an Instant Seeding job definition, the seeding target must be a secondary replica.
If a primary AlwaysOn node is selected as a seeding target, the job will fail. The New database name
option does not apply to Instant Seeding jobs.
9. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
10. To edit options before creating the job definition, click Advanced. Set the job definition options.
Application Options
Roll back uncommitted transactions and leave the database ready to use
Select this option to restore the database to an online state. If selected, additional transaction logs
cannot be restored. If deselected, uncommitted transactions are not rolled back, leaving the database
non-operational. Additional transaction logs can then be restored.
Note: This option does not apply to Instant Seeding jobs.
Overwrite existing database
Select this option to replace an existing database with the same name during recovery. When an
Instant Database Recovery is performed for a database and another database with the same name is
already running on the destination host/cluster, IBM Spectrum Copy Data Management shuts down the
existing database before starting up the recovered database. If this option is not selected, the Instant
Database Recovery fails when IBM Spectrum Copy Data Management encounters an existing running
database with the same name.
Copy databases
Select this option to restore SQL files to the original file path that is currently in use. The IBM Spectrum
Copy Data Management agent copies the database files from the snapshot or clone volumes to the
database folders on the local drive.
Rename mount points
By default, IBM Spectrum Copy Data Management renames mount points to the SQL data directory of
the target SQL instance. You can override this behavior through the Rename mount points option.
Default SQL data directory - Mount points are not renamed.
Original mount point or drive letter - The original volume mount point of the databases is used. This
will keep the same drive mapping, but the database recovery will occur on an alternate server. If the
original volume mount point cannot be used to mount a volume (for example, if the folder is not empty),
the restore fails. Note that it is recommended to keep the SQL transaction log location on a different
disk than the SQL database data and log files.
Add a custom mount point - If enabled, a custom prefix can be entered in the Prefix string field. The
prefix must specify a valid, preexisting volume drive letter that can contain a volume mount point on the
drive. The prefix substitutes the default root folder of the mount. For example, if the default root folder
of the mount drive is E:\SQLDataFiles\mnt, and F:\RestoreMnt is entered in the Mount Point Prefix
field, E:\SQLDataFiles\mnt is renamed to F:\RestoreMnt. The drive, in this case the F drive, must exist
on the destination server. If the folder, in this case the RestoreMnt folder, does not exist, it will be
created.
Note: This option does not apply to Instant Seeding jobs. Instant Seeding supports restoring to the
original path and volume drive letter/mount points.
Policy Options
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the database recovery fails.
Allow to overwrite and force clean up of pending old sessions
Enabling this option allows a scheduled session of a recovery job to force an existing pending session
to clean up associated resources so the new session can run. Disable this option to keep an existing
test environment running without being cleaned up.
Allow to overwrite vDisk
In cases where the Make Permanent option is enabled, and the destination VM has conflicting VMDK
files, enable the Allow to overwrite vDisk option to delete the existing VMDK and overwrite it with the
selected source.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that run before or after a job. A script can consist of
one or more commands, such as a shell script for Linux-based virtual machines or Batch and
PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Scripts can also be uploaded and edited through the Scripts view on the
Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added by entering them
one at a time in the field and clicking Add after each. Next, click the Identity field to add or create the
credentials required to run the script. Finally, click the Application Server field to define the location
where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
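As a sketch of how a Restore job post-script might consume these positional arguments (the function name and the log message are illustrative assumptions, not names the product defines):

```shell
#!/bin/sh
# Hypothetical Restore job post-script. For Restore jobs only, the
# session state and status can be passed as positional arguments;
# everything else here is illustrative.
report_session() {
  state="$1"
  status="$2"
  printf 'restore session ended: state=%s status=%s\n' "$state" "$status"
  # A non-zero return code marks the script as failed; see the
  # Return Code Reference for how codes are interpreted.
  return 0
}

report_session "$1" "$2"
```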
Storage Options
Make Permanent
Set the default permanent restoration action of the job. All database recovery operations can leverage
Instant, or Test, mode and then either be deleted or promoted to permanent mode. This behavior is
controlled through the Make Permanent option. If you choose to enable make permanent for database
recovery operations, VM and application inventory jobs must explicitly be re-run to capture the updated
configuration for the application server(s).
Enabled - Always make permanent
Disabled - Never make permanent
User Selection - Allows the user to select Make Permanent or Cleanup when the job session is
pending
Protocol Priority
If more than one storage networking protocol is available, select the protocol to take priority in the job.
Available protocols include iSCSI and Fibre Channel.
11. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for each recipient of the status notifications, then click Add to add it to the list.
12. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at a later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
13. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Create a Backup Job Definition - SQL on page 219
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create a Restore Job Definition - File System
5. Click Source. From the drop-down menu, select Application Browse to select a source site and a
file system to view available recovery points. Select resources, and change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
Alternatively, select Application Search from the drop-down menu to search for file systems with
available recovery points. Add copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy. Sites containing copies of the selected data display. Select a site. By default, the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Destination. Select a source site and an associated destination. Review the destination's
mount point mapping settings. Optionally, click the Enter an alternate mount point field to create an
alternate mount point, or select Use original mount points.
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Application Options
Overwrite Existing Mount Points
Select to overwrite the mount points at their original location.
Policy Options
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If unselected, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the database recovery fails.
Allow to overwrite vDisk
In cases where the Make Permanent option is enabled, and the destination VM has conflicting VMDK
files, enable the Allow to overwrite vDisk option to delete the existing VMDK and overwrite it with the
selected source.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that run before or after a job. A script can consist of
one or more commands, such as a shell script for Linux-based virtual machines or Batch and
PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Scripts can also be uploaded and edited through the Scripts view on the
Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added by entering them
one at a time in the field and clicking Add after each. Next, click the Identity field to add or create the
credentials required to run the script. Finally, click the Application Server field to define the location
where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
Note: If adding a script to a Windows-based File System job definition, the user running the script must
have the "Log on as a service" right enabled, which is required for running prescripts and postscripts.
For more information about the "Log on as a service" right, see https://technet.microsoft.com/en-
us/library/cc794944.aspx.
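For example, a Linux pre-script attached to a file-system Restore job might verify that an alternate mount point is usable before recovery begins. The function name and the mount-point path are hypothetical; the path would arrive via the Parameters field:

```shell
#!/bin/sh
# Hypothetical pre-script: fail fast if the alternate mount point is
# missing or not empty, so the Restore job stops before recovery work.
check_mount_point() {
  mp="$1"
  if [ ! -d "$mp" ]; then
    echo "mount point $mp does not exist" >&2
    return 1
  fi
  if [ -n "$(ls -A "$mp" 2>/dev/null)" ]; then
    echo "mount point $mp is not empty" >&2
    return 1
  fi
  return 0
}

# The mount point would be passed via the Parameters field, e.g.:
# check_mount_point /mnt/restore
```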
Storage Options
Make Permanent
Set the default permanent restoration action of the job. All database recovery operations can leverage
Instant or Test modes and then either be deleted or promoted to permanent mode. This behavior is
controlled through the Make Permanent option. If you choose to enable make permanent for database
recovery operations, VM and application inventory jobs must explicitly be re-run to capture the updated
configuration for the application server(s).
Enabled - Always make permanent through full copy FlashCopy
Disabled - Never make permanent
User Selection - Allows the user to select Make Permanent or Cleanup when the job session is
pending
Protocol Priority
If more than one storage network protocol is available, select the protocol to take priority in the job.
Available protocols include iSCSI and Fibre Channel.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for each recipient of the status notifications, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at a later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Create a Backup Job Definition - File System on page 224
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
User's Guide Create a Restore Job Definition - DellEMC Unity
Provides instant writable access to a volume. An IBM Spectrum Copy Data Management snapshot is
mapped to a target server where it can be accessed, copied, or put immediately into production use as
needed.
Restore Volume(s)
Recover a volume from a snapshot or replication created through an IBM Spectrum Copy Data
Management DellEMC Unity Backup job. Volumes can be restored to their original location or a new
volume in the same or different DellEMC Unity storage system.
CONSIDERATIONS:
l Note that before running replication jobs, replication connections must be established
between VNX arrays. Create replication connections through the DellEMC Unisphere
wizard found under Hosts > Replication Connections.
l Note that to restore data to an original volume, you must first offline the target disk on
the host prior to recovery. Once recovery completes, bring the target disk back online.
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
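The consideration above about offlining the target disk can be sketched for a Linux host as follows. The device name (sdb) is an assumption, and SYSFS_ROOT is parameterized only so the helper is testable; on a real host it is /sys:

```shell
#!/bin/sh
# Hedged sketch: take the target disk offline before recovering to the
# original volume, then bring it back online after recovery completes.
SYSFS_ROOT="${SYSFS_ROOT:-/sys}"

set_disk_state() {
  dev="$1"
  state="$2"   # "offline" before recovery, "running" after it completes
  echo "$state" > "$SYSFS_ROOT/block/$dev/device/state"
}

# set_disk_state sdb offline    # before running the Restore job
# set_disk_state sdb running    # after recovery completes
```

On a Windows host, the equivalent is taking the disk offline in Disk Management or with diskpart's `offline disk` command.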
5. Click Source. From the drop-down menu, select LUNs or File Systems. Select a source site and an
associated DellEMC Unity source to view volumes with available recovery points. Select resources, and
change the order in which the resources are recovered by dragging and dropping the resources in the
grid.
Alternatively, select LUN Search or File System Search from the drop-down menu to search for
resources with available recovery points. Add copies to the job definition by clicking Add. Change the
order in which the resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy. Sites containing copies of the selected data display. Select a site. By default, the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
Note: When selecting a specific version, data created through VMware Backup jobs that apply to the
selected DellEMC Unity resource display, as the same data is contained within the snapshot for
VMware and non-VMware related data.
7. Click Destination. Select the DellEMC Unity hosts that contain the iSCSI Qualified Name (IQN) or
Fibre Channel WWPN of the application server that you want to assign the volume to.
Note: The DellEMC Unity hosts that are used during runtime may be different based on the initiator
name.
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the volume recovery fails.
Allow to overwrite and force clean up of pending old sessions
Enable to allow a scheduled session of a recovery job to force an existing pending session to clean up
associated resources so the new session can run. Disable this option to keep an existing test
environment running without being cleaned up.
Session will auto cleanup/end after postscript execution completes
Enable to automatically clean up allocated resources after a post-script defined in the Post-Script field
completes.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that run before or after a job. A script can consist of
one or more commands, such as a shell script for Linux-based virtual machines or Batch and
PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Scripts can also be uploaded and edited through the Scripts view on the
Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added by entering them
one at a time in the field and clicking Add after each. Next, click the Identity field to add or create the
credentials required to run the script. Finally, click the Application Server field to define the location
where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for each recipient of the status notifications, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at a later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
5. Click Source. From the drop-down menu, select LUNs or File Systems. Select a source site and an
associated DellEMC Unity source to view volumes with available recovery points. Select one or more
resources, and change the order in which the resources are recovered by dragging and dropping the
resources in the grid.
Alternatively, select LUN Search or File System Search from the drop-down menu to search for
resources with available recovery points. Add copies to the job definition by clicking Add. Change the
order in which the resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy. Sites containing copies of the selected data display. Select a site. By default, the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Destination. To restore to the original volume, select Restore to original volume, or select
Restore to alternative location and select a volume and associated pool. If no pool is selected, the pool
with the largest amount of space available is chosen by default.
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the volume recovery fails.
Overwrite volume if exists
Enable to overwrite the volume if the volume exists on the destination.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that run before or after a job. A script can consist of
one or more commands, such as a shell script for Linux-based virtual machines or Batch and
PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Scripts can also be uploaded and edited through the Scripts view on the
Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added by entering them
one at a time in the field and clicking Add after each. Next, click the Identity field to add or create the
credentials required to run the script. Finally, click the Application Server field to define the location
where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for each recipient of the status notifications, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at a later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
User's Guide Create a Restore Job Definition - IBM Spectrum Accelerate
Provides instant writable access to a volume. An IBM Spectrum Copy Data Management snapshot is
mapped to a target server where it can be accessed, copied, or put immediately into production use as
needed.
Restore Volume(s)
Recover a volume from a FlashCopy or Global Mirror created through an IBM Spectrum Copy Data
Management IBM Spectrum Accelerate Backup job. Volumes can be restored to their original location or a
new volume in the same or different IBM storage system.
CONSIDERATIONS:
l IBM providers utilize port 22 for communication with IBM Spectrum Copy Data
Management.
l Note that to restore data to an original volume, you must first offline the target disk on
the host prior to recovery. Once recovery completes, bring the target disk back online.
l Note that after restoring data to an alternate location you must map the host to the
restore volume on the IBM storage system. Then rescan the disk on the host, and bring
the disk online.
l In IBM storage environments, port grouping and IP partnerships are required to enable
remote copy connections. See IBM's SAN Volume Controller and Storwize Family
Native IP Replication Guide.
l One or more schedules might also be associated with a job. Job sessions run based on
the triggers defined in the schedule. See Create a Schedule on page 163.
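The consideration above about rescanning the disk after restoring to an alternate location can be sketched for a Linux host as follows. SYSFS_ROOT is parameterized only so the helper is testable; on a real host it is /sys, and the host adapter numbers depend on the environment:

```shell
#!/bin/sh
# Hedged sketch: after mapping the host to the restored volume on the
# IBM storage system, rescan the SCSI bus so the new disk appears,
# then bring the disk online as described above.
SYSFS_ROOT="${SYSFS_ROOT:-/sys}"

rescan_scsi_hosts() {
  for host in "$SYSFS_ROOT"/class/scsi_host/host*; do
    [ -d "$host" ] || continue
    # "- - -" rescans all channels, targets, and LUNs on the adapter.
    echo "- - -" > "$host/scan"
  done
}
```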
To create an Instant Disk Restore IBM Spectrum Accelerate Restore job definition:
1. Click the Jobs tab. Expand the Storage Controller folder, then select IBM Spectrum Accelerate.
5. Click Source. From the drop-down menu, select Volume to select a source site and an associated
IBM source to view volumes with available recovery points. Select one or more resources, and change
the order in which the resources are recovered by dragging and dropping the resources in the grid.
Alternatively, select Volume Search from the drop-down menu to search for volumes with available
recovery points. Add volume copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy. Sites containing copies of the selected data display. Select a site. By default, the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
Note: When selecting a specific version, data created through VMware Backup jobs that apply to the
selected IBM resource display, as the same data is contained within the snapshot for VMware and non-
VMware related data.
7. Click Destination. Select the IBM hosts that contain the iSCSI Qualified Name (IQN) or Fibre
Channel WWPN of the application server that you want to assign the volume to.
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the volume recovery fails.
Allow to overwrite and force clean up of pending old sessions
Enabling this option allows a scheduled session of a recovery job to force an existing pending session
to clean up associated resources so the new session can run. Disable this option to keep an existing
test environment running without being cleaned up.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that run before or after a job. A script can consist of
one or more commands, such as a shell script for Linux-based virtual machines or Batch and
PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Scripts can also be uploaded and edited through the Scripts view on the
Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Additional parameters can be added by entering them
one at a time in the field and clicking Add after each. Next, click the Identity field to add or create the
credentials required to run the script. Finally, click the Application Server field to define the location
where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter an email address for each recipient of the status notifications, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at a later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
5. Click Source . From the drop-down menu select Volume to select a source site and an associated
IBM source to view volumes with available recovery points. Select one or more resources, and change
the order in which the resources are recovered by dragging and dropping the resources in the grid.
Alternatively, select Volume Search from the drop-down menu to search for volumes with available
recovery points. Add volume copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy. Sites containing copies of the selected data display. Select a site. By default, the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Destination. To restore to the original volume, select Restore to original volume, or select
Restore to alternative location and select a volume and associated pool. If no pool is selected, the pool
with the largest amount of space available is chosen by default.
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the volume recovery fails.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Note that additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
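As noted above, Restore job post-scripts receive the job state and status as positional arguments. The following is a minimal sketch of a Linux post-script body; the "COMPLETED" comparison value and the log message are assumptions for illustration, not values defined by the product.

```shell
#!/bin/sh
# Sketch of a Restore job post-script: the job state and status arrive
# as the first two positional arguments. The "COMPLETED" comparison is
# an assumed example value, not a documented product constant.
post_script() {
    state="$1"
    status="$2"
    # Record the outcome somewhere the administrator can review it.
    echo "restore post-script: state=${state} status=${status}"
    # Return non-zero on failure; see the Return Code Reference.
    [ "$status" = "COMPLETED" ] || return 1
    return 0
}
```

If Continue operation on script failure is cleared, a non-zero return code from a script like this stops the job.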
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
- If you do not want to wait until the next scheduled job run, run the job session on demand. See Start, Pause, and Hold a Job Session on page 170.
- Track the progress of the job session on the Jobs tab. See Monitor a Job Session on page 172.
- If notification options are enabled, an email message with information about the status of each task is sent when the job completes.
RELATED TOPICS:
- Create a Backup Job Definition - IBM Spectrum Accelerate on page 231
- Using State and Status Arguments in Postscripts on page 343
User's Guide Create a Restore Job Definition - IBM Spectrum Virtualize
Instant Disk Restore
Provides instant writable access to a volume. An IBM Spectrum Copy Data Management snapshot is mapped to a target server where it can be accessed, copied, or put immediately into production use as needed.
Restore Volume(s)
Recover a volume from a FlashCopy or Global Mirror created through an IBM Spectrum Copy Data
Management IBM Spectrum Virtualize Backup job. Volumes can be restored to their original location or a
new volume in the same or different IBM storage system.
CONSIDERATIONS:
- IBM providers use port 22 for communication with IBM Spectrum Copy Data Management.
- To restore data to an original volume, you must take the target disk offline on the host before recovery. Once recovery completes, bring the target disk back online.
- After restoring data to an alternate location, you must map the host to the restored volume on the IBM storage system, then rescan the disk on the host and bring the disk online.
- In IBM storage environments, port grouping and IP partnerships are required to enable remote copy connections. See IBM's SAN Volume Controller and Storwize Family Native IP Replication Guide.
- One or more schedules might also be associated with a job. Job sessions run based on the triggers defined in the schedule. See Create a Schedule on page 163.
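The host-side offline/rescan/online steps described in the considerations above can be sketched for a Linux host as follows. This is an illustrative sketch, not part of the product: the device name (sdb) is an assumption for your environment, the commands write to sysfs and require root, and the SYSFS variable is parameterized only so the helpers can be exercised against a test tree.

```shell
#!/bin/sh
# Helpers for the host-side disk steps on a Linux host. Device names
# (e.g. sdb) are assumptions for your environment; run as root.
# SYSFS is parameterized so the helpers can be tried against a fake tree.
SYSFS="${SYSFS:-/sys}"

# Take a disk offline before restoring to the original volume, or
# bring it back online (state "running") once recovery completes.
set_disk_state() {   # usage: set_disk_state sdb offline|running
    echo "$2" > "${SYSFS}/block/$1/device/state"
}

# Rescan all SCSI host adapters so a newly mapped restore volume appears.
rescan_scsi_hosts() {
    for h in "${SYSFS}"/class/scsi_host/host*; do
        [ -e "$h" ] && echo "- - -" > "${h}/scan"
    done
}
```

For example, `set_disk_state sdb offline` before the restore, then `rescan_scsi_hosts` and `set_disk_state sdb running` after it.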
To create an Instant Disk Restore IBM Spectrum Virtualize Restore job definition:
1. Click the Jobs tab. Expand the Storage Controller folder, then select IBM Spectrum Virtualize.
5. Click Source. From the drop-down menu, select Volume to select a source site and an associated
IBM source to view volumes with available recovery points. Select one or more resources, and change
the order in which the resources are recovered by dragging and dropping the resources in the grid.
Alternatively, select Volume Search from the drop-down menu to search for volumes with available
recovery points. Add volume copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy. Sites containing copies of the selected data display. Select a site. By default, the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
Note: When selecting a specific version, data created through VMware Backup jobs that apply to the selected IBM resource display, as the same data is contained within the snapshot for VMware and non-VMware related data.
7. Click Destination. Select the IBM hosts that contain the iSCSI Qualified Name (IQN) or Fibre
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Make IA clone resource permanent
Enable to turn the snapshot copy into a proper resource that will not be cleaned up after the Instant
Disk Restore job completes.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the volume recovery fails.
Allow to overwrite and force clean up of pending old sessions
Enabling this option allows a scheduled session of a recovery job to force an existing pending session
to clean up associated resources so the new session can run. Disable this option to keep an existing
test environment running without being cleaned up.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Note that additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
5. Click Source. From the drop-down menu, select Volume to select a source site and an associated
IBM source to view volumes with available recovery points. Select one or more resources, and change
the order in which the resources are recovered by dragging and dropping the resources in the grid.
Alternatively, select Volume Search from the drop-down menu to search for volumes with available
recovery points. Add volume copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy. Sites containing copies of the selected data display. Select a site. By default, the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Destination. To restore to the original volume, select Restore to original volume, or select
Restore to alternative location and select a volume and associated pool. If no pool is selected, the pool
with the largest amount of space available is chosen by default.
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the volume recovery fails.
Skip waiting for FC to finish copying during storage controller restore
Enable to skip waiting for the FlashCopy to finish copying during a storage controller restore job. This
option is disabled by default.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Note that additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
- If you do not want to wait until the next scheduled job run, run the job session on demand. See Start, Pause, and Hold a Job Session on page 170.
- Track the progress of the job session on the Jobs tab. See Monitor a Job Session on page 172.
- If notification options are enabled, an email message with information about the status of each task is sent when the job completes.
RELATED TOPICS:
- Create a Backup Job Definition - IBM Spectrum Virtualize on page 235
- Using State and Status Arguments in Postscripts on page 343
- Edit a Job Definition on page 351
- Delete a Job Definition on page 352
- Create a Schedule on page 163
- Search and Filter Guidelines on page 557
User's Guide Create a Restore Job Definition - NetApp ONTAP
Instant Disk Restore
Provides instant writable access to a volume or LUN. An IBM Spectrum Copy Data Management snapshot is mapped to a target server where it can be accessed, copied, or put immediately into production use as needed.
Restore Volume(s)
Recover a volume from a primary snapshot, vault, or mirror copy created through an IBM Spectrum Copy
Data Management NetApp ONTAP Backup job. Volumes can be restored to their original location or a new
volume in the same or different NetApp ONTAP cluster or server.
Note that Restore Volume jobs are not available for NetApp ONTAP storage systems operating in 7-mode.
7-mode resources will not display in Source or Destination steps.
Restore File(s)
Recover files from a primary snapshot created through an IBM Spectrum Copy Data Management NetApp
ONTAP Backup job. Files are restored to their original location.
CONSIDERATIONS:
- Target volumes and datastores can be automatically expanded through the autogrow feature if supported by the underlying storage.
- A volume restore through a NetApp ONTAP Restore job that includes NetApp ONTAP storage systems operating in 7-mode is not supported. These providers will not display during job creation.
- NetApp ONTAP Backup jobs can only vault or mirror snapshots created through IBM Spectrum Copy Data Management jobs.
- A file restore from a mirror location is not available for NetApp ONTAP storage systems operating in 7-mode.
- Restore Volume jobs are not available for NetApp ONTAP storage systems operating in 7-mode. 7-mode resources will not display in Source or Destination steps.
- A file restore through a NetApp ONTAP Restore job can only use the alternate location feature if both the source and the destination are NetApp ONTAP storage systems running Clustered Data ONTAP 8.3.
- NetApp ONTAP storage systems operating in 7-mode support file recovery from primary snapshots to their original locations. To restore files from a mirror source, create and run an Instant Disk Restore job with the mirror as the source, then mount the restored volume via CIFS or NFS. Files can then be copied to a new location.
- The .snapshot folder must be visible on NFS shares in order to properly view and run Copy Data Management jobs on NetApp ONTAP storage systems running Data ONTAP in 7-Mode or Clustered Data ONTAP up to and including version 8.2. Confirm with your administrator that the .snapshot folder is not hidden in your NetApp ONTAP environment.
- NetApp ONTAP and VMware Restore jobs will fail if the iSCSI Initiator Group (iGroup) is not configured on the NetApp Clustered Data ONTAP 8.3 storage system target. This procedure only needs to be performed once. Previously created iGroups for earlier versions of NetApp Clustered Data ONTAP do not need to be reconfigured for version 8.3. Note that there should be only one iGroup using the software iSCSI initiator. For more information, contact Technical Support.
- One or more schedules might also be associated with a job. Job sessions run based on the triggers defined in the schedule. See Create a Schedule on page 163.
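The .snapshot visibility requirement in the considerations above can be checked with a small helper. This is a sketch; the mount point passed in is an assumption for your environment.

```shell
#!/bin/sh
# Verify that the .snapshot directory is visible on an NFS mount, as
# required for Data ONTAP in 7-Mode and Clustered Data ONTAP up to 8.2.
snapshot_visible() {   # usage: snapshot_visible /mnt/nfs_export
    [ -d "$1/.snapshot" ]
}
```

For example: `snapshot_visible /mnt/vol01 || echo ".snapshot is hidden; confirm with your administrator"`.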
5. Click Source. From the drop-down menu, select Volume to select a source site and an associated
NetApp ONTAP source to view volumes with available recovery points. Select one or more resources,
and change the order in which the resources are recovered by dragging and dropping the resources in
the grid.
Alternatively, select Volume Search from the drop-down menu to search for volumes with available
recovery points. Add volume copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy. Sites containing copies of the selected data display. Select a site. By default, the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
Note: When selecting a specific version, data created through VMware Backup jobs that apply to the selected NetApp ONTAP resource display, as the same data is contained within the snapshot for VMware and non-VMware related data.
7. Click Destination. Select your NFS and CIFS mapping options, including the Volume Name Prefix.
If a prefix is defined, the resulting NFS path or CIFS share displays as follows:
NFS: "/" + "volumeNamePrefix" + "_" + "sourcevolumeName"
CIFS: "volumeNamePrefix" + "_" + "sourcevolumeName"
If a prefix is not defined, a unique naming convention is applied. Unique prefixes should be defined for
jobs that run concurrently. If a job is run with a specified prefix and the resulting Instant Disk Restore
volume is made permanent, you must rename the NFS/CIFS path for the permanent volume prior to
running the same job again.
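The prefix naming convention above can be expressed as two small helpers. This is a sketch of the documented concatenation; the prefix and volume names used below are example inputs only.

```shell
#!/bin/sh
# Resulting mount names when a Volume Name Prefix is defined, per the
# documented convention: NFS is "/" + prefix + "_" + source volume name,
# CIFS is prefix + "_" + source volume name.
nfs_path() {    # usage: nfs_path <volumeNamePrefix> <sourceVolumeName>
    printf '/%s_%s\n' "$1" "$2"
}
cifs_share() {  # usage: cifs_share <volumeNamePrefix> <sourceVolumeName>
    printf '%s_%s\n' "$1" "$2"
}
```

For example, prefix `idr` and source volume `vol01` yield the NFS path `/idr_vol01` and the CIFS share `idr_vol01`.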
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Make IA clone resource permanent
Enable to turn the snapshot copy into a proper resource that will not be cleaned up after the instant
access job completes.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the NetApp ONTAP volume
recovery fails.
Allow to overwrite and force clean up of pending old session
Enabling this option allows a scheduled session of a recovery job to force an existing pending session
to clean up associated resources so the new session can run. Disable this option to keep an existing
test environment running without being cleaned up.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Note that additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view the list of
available schedules. Optionally select one or more schedules for the job. As each schedule is selected,
the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
5. Click Source. From the drop-down menu, select Volume to select a source site and an associated
NetApp ONTAP source to view volumes with available recovery points. Select one or more resources,
and change the order in which the resources are recovered by dragging and dropping the resources in
the grid.
Alternatively, select Volume Search from the drop-down menu to search for volumes with available
recovery points. Add volume copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy. Sites containing copies of the selected data display. Select a site. By default, the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Destination. To restore to the original volume, select Restore to original volume, or select
Restore to new volume in the same or different NetApp ONTAP cluster or server and select a
volume and associated aggregate. If no aggregate is selected, the aggregate with the largest amount of
space available is chosen by default.
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the volume recovery fails.
Auto mount NFS after volume restored
Enable to automatically mount the restored volume after restoration completes.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Note that additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
5. Click Source. Select a source site and an associated NetApp ONTAP source to view volumes with
available recovery points. Select recovery points and files to recover. Selected files are added to the
Selected Files pane.
6. Click Copy. Sites containing copies of the selected files display. Select a site. The latest copy of the
file is used. If recovery from one snapshot fails, another copy from the same site is used.
7. Click Destination. To restore to the original volume, select Restore to original volume, or select
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the file recovery fails.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Note that additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
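As a sketch, a Linux post-script might branch on those positional arguments like this. The argument values shown (such as COMPLETED) are assumptions for illustration; consult Using State and Status Arguments in Postscripts for the values the product actually passes:

```shell
# Hypothetical Restore job post-script body: the job engine passes state
# and status as positional arguments. The COMPLETED literal is an assumption.
post_restore_check() {
  state="$1"
  status="$2"
  if [ "$status" != "COMPLETED" ]; then
    echo "restore ended abnormally: state=$state status=$status" >&2
    return 1   # a non-zero return code is reported per the Return Code Reference
  fi
  echo "restore finished: state=$state"
}
```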
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules .
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
- If you do not want to wait until the next scheduled job run, run the job session on demand. See Start, Pause, and Hold a Job Session on page 170.
- Track the progress of the job session on the Jobs tab. See Monitor a Job Session on page 172.
- If notification options are enabled, an email message with information about the status of each task is sent when the job completes.
RELATED TOPICS:
- Create a Backup Job Definition - NetApp ONTAP on page 239
- Using State and Status Arguments in Postscripts on page 343
- Edit a Job Definition on page 351
- Delete a Job Definition on page 352
- Create a Schedule on page 163
- Search and Filter Guidelines on page 557
User's Guide Create a Restore Job Definition - HPE Nimble Storage
Instant Disk Restore
Provides instant writable access to a volume. An IBM Spectrum Copy Data Management snapshot is
mapped to a target server where it can be accessed, copied, or put immediately into production use as
needed.
Restore Volume(s)
Recover a volume from a snapshot created through an IBM Spectrum Copy Data Management HPE Nimble
Storage Backup job. Volumes can be restored to their original location or a new volume in the same or
different HPE Nimble Storage system.
CONSIDERATIONS:
- One or more schedules might also be associated with a job. Job sessions run based on the triggers defined in the schedule. See Create a Schedule on page 163.
To create an Instant Disk Restore HPE Nimble Storage Restore job definition:
1. Click the Jobs tab. Expand the Storage Controller folder, then select HPE Nimble Storage.
5. Click Source . From the drop-down menu select Volume to select a source site and an associated
HPE Nimble Storage source to view volumes with available recovery points. Select one or more
resources, and change the order in which the resources are recovered by dragging and dropping the
resources in the grid.
Alternatively, select Volume Search from the drop-down menu to search for volumes with available
recovery points. Add volume copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy . Sites containing copies of the selected data display. Select a site. By default the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the volume recovery fails.
Allow to overwrite and force clean up of pending old sessions
Enabling this option allows a scheduled session of a recovery job to force an existing pending session
to clean up associated resources so the new session can run. Disable this option to keep an existing
test environment running without being cleaned up.
Job-Level Scripts
Job-level pre-scripts and post-scripts run before or after a job. A script can consist of one or many commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once uploaded, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add. Additional parameters can be added by entering them one at a time and clicking Add after each. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script is injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules .
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
5. Click Source . From the drop-down menu select Volume to select a source site and an associated
HPE Nimble Storage source to view volumes with available recovery points. Select one or more
resources, and change the order in which the resources are recovered by dragging and dropping the
resources in the grid.
Alternatively, select Volume Search from the drop-down menu to search for volumes with available
recovery points. Add volume copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy . Sites containing copies of the selected data display. Select a site. By default the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Destination . To restore to the original volume, select Restore to original volume, or select
Restore to alternative location and select a volume and associated pool. If no pool is selected, the pool
with the largest amount of space available is chosen by default.
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the volume recovery fails.
Job-Level Scripts
Job-level pre-scripts and post-scripts run before or after a job. A script can consist of one or many commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once uploaded, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add. Additional parameters can be added by entering them one at a time and clicking Add after each. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script is injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules .
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
- If you do not want to wait until the next scheduled job run, run the job session on demand. See Start, Pause, and Hold a Job Session on page 170.
- Track the progress of the job session on the Jobs tab. See Monitor a Job Session on page 172.
- If notification options are enabled, an email message with information about the status of each task is sent when the job completes.
RELATED TOPICS:
- Create a Backup Job Definition - HPE Nimble Storage on page 242
- Using State and Status Arguments in Postscripts on page 343
- Edit a Job Definition on page 351
- Delete a Job Definition on page 352
- Create a Schedule on page 163
- Search and Filter Guidelines on page 557
User's Guide Create a Restore Job Definition - Pure Storage FlashArray
Instant Disk Restore
Provides instant writable access to a volume. An IBM Spectrum Copy Data Management snapshot is
mapped to a target server where it can be accessed, copied, or put immediately into production use as
needed.
Restore Volume(s)
Recover a volume from a snapshot or replication created through an IBM Spectrum Copy Data
Management Pure Storage FlashArray Backup job. Volumes can be restored to their original location or a
new volume in the same or different Pure Storage system.
Pure Storage FlashArray can be set to utilize CloudSnap functionality so that snapshots are offloaded from the local storage array to an S3 cloud storage target or NFS share. When a recovery job runs, the snapshot is recovered from the local storage array. If the snapshot has been condensed out of local storage and the Add Snapshot Offload option was set during SLA creation, the offload copy from cloud storage is restored.
Best Practice: The IBM Spectrum Copy Data Management user interface indicates that a Pure Storage FlashArray CloudSnap offload job has completed even though the transfer may still be occurring in the background, depending on network speeds. Consider setting an age as the retention for offload copies when using the Pure Storage FlashArray CloudSnap functionality. Doing so ensures that sufficient time has passed for data to be transferred to the S3 storage target or NFS share before it is condensed out of a backup. This is particularly important if several offload jobs are run in quick succession.
CONSIDERATIONS:
- One or more schedules might also be associated with a job. Job sessions run based on the triggers defined in the schedule. See Create a Schedule on page 163.
To create an Instant Disk Restore Pure Storage FlashArray Restore job definition:
1. Click the Jobs tab. Expand the Storage Controller folder, then select Pure Storage FlashArray.
5. Click Source . From the drop-down menu select Volume to select a source site and an associated
Pure Storage source to view volumes with available recovery points. Select one or more resources, and
change the order in which the resources are recovered by dragging and dropping the resources in the
grid.
Alternatively, select Volume Search from the drop-down menu to search for volumes with available
recovery points. Add volume copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy . Sites containing copies of the selected data display. Select a site. By default the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Destination . Select a Pure Storage FlashArray destination.
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the volume recovery fails.
Allow to overwrite and force clean up of pending old sessions
Enabling this option allows a scheduled session of a recovery job to force an existing pending session
to clean up associated resources so the new session can run. Disable this option to keep an existing
test environment running without being cleaned up.
Job-Level Scripts
Job-level pre-scripts and post-scripts run before or after a job. A script can consist of one or many commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once uploaded, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add. Additional parameters can be added by entering them one at a time and clicking Add after each. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script is injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules .
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
5. Click Source . From the drop-down menu select Volume to select a source site and an associated
Pure Storage source to view volumes with available recovery points. Select one or more resources, and
change the order in which the resources are recovered by dragging and dropping the resources in the
grid.
Alternatively, select Volume Search from the drop-down menu to search for volumes with available
recovery points. Add volume copies to the job definition by clicking Add. Change the order in which the
resources are recovered by dragging and dropping the resources in the grid.
6. Click Copy . Sites containing copies of the selected data display. Select a site. By default the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Destination . To restore to the original volume, select Restore to original volume, or select
Restore to alternative location and select a volume and associated pool. If no pool is selected, the pool
with the largest amount of space available is chosen by default.
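The default pool choice described above amounts to picking the pool with the most free space. A minimal sketch of that selection (the "name size" input format and the pool names are purely hypothetical — the product makes this choice internally):

```shell
# pick_pool: from "name free_space" lines on stdin, print the name of the
# pool with the largest free space. Illustrative only.
pick_pool() {
  sort -k2 -n | tail -n 1 | cut -d' ' -f1
}
```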
8. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
9. To edit options before creating the job definition, click Advanced. Set the job definition options.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the volume recovery fails.
Job-Level Scripts
Job-level pre-scripts and post-scripts run before or after a job. A script can consist of one or many commands, such as a shell script for Linux-based virtual machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once uploaded, the script displays in the Pre-Script or Post-Script section. Click the Parameters field to add a parameter to the script, then click Add. Additional parameters can be added by entering them one at a time and clicking Add after each. Next, click the Identity field to add or create the credentials required to run the script. Finally, click the Application Server field to define the location where the script is injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
10. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
11. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules .
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
12. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
NEXT STEPS:
- If you do not want to wait until the next scheduled job run, run the job session on demand. See Start, Pause, and Hold a Job Session on page 170.
- Track the progress of the job session on the Jobs tab. See Monitor a Job Session on page 172.
- If notification options are enabled, an email message with information about the status of each task is sent when the job completes.
RELATED TOPICS:
- Create a Backup Job Definition - Pure Storage FlashArray on page 246
- Using State and Status Arguments in Postscripts on page 343
- Edit a Job Definition on page 351
- Delete a Job Definition on page 352
- Create a Schedule on page 163
- Search and Filter Guidelines on page 557
User's Guide Create a Restore Job Definition - VMware
Test Mode
Creates temporary virtual machines for development/testing, snapshot verification, and disaster
recovery verification on a scheduled, repeatable basis without affecting production environments. Test
machines are kept running as long as needed to complete testing and verification and are then cleaned
up after testing and verification completes. Through fenced networking, you can establish a safe
environment to test your jobs without interfering with virtual machines used for production. Virtual
machines created through Test mode are also given unique names and identifiers to avoid conflicts
within your production environment.
Clone Mode
Creates copies of virtual machines for use cases requiring permanent or long-running copies for data
mining or duplication of a test environment in a fenced network. Virtual machines created through Clone
mode are also given unique names and identifiers to avoid conflicts within your production environment.
With Clone mode, be mindful of resource consumption, since it creates permanent or long-term virtual machines.
Production Mode
Enables disaster recovery at the local site from primary storage or a remote disaster recovery site,
replacing original machine images with recover images. All configurations are carried over as part of the
recovery, including names and identifiers, and all copy data jobs associated with the virtual machine
continue to run.
You can also set an IP address or subnet mask for virtual machines to be repurposed for development/testing
or disaster recovery use cases. Supported mapping types include IP to IP, IP to DHCP, and subnet to subnet.
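For subnet-to-subnet mapping, the translation amounts to keeping the host part of the address and swapping the network part. A minimal sketch for a /24 network (a hypothetical helper — the product performs this mapping internally):

```shell
# remap_ip: move an address from its source /24 network onto a target /24
# network, preserving the host octet. Purely illustrative.
remap_ip() {
  src_ip="$1"          # e.g. 10.0.1.25
  target_net="$2"      # target /24 prefix, e.g. 192.168.5
  host="${src_ip##*.}" # final (host) octet
  echo "${target_net}.${host}"
}
```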
Instant Disk Restore
Provides instant writable access to data and application recovery points. An IBM Spectrum Copy Data
Management snapshot is mapped to a target server where it can be accessed, copied, or put immediately
into production use as needed.
- Ensure the latest version of VMware Tools is installed in your environment. IBM Spectrum Copy Data Management was tested against VMware Tools 9.10.0.
- For email notifications, at least one SMTP server must be configured. Before defining a job, add SMTP resources. See Register a Provider on page 79.
- You must add credentials to the destination virtual machine when recovering with the subnet option. See Add Credentials to a Virtual Machine on page 106.
- To run network prescripts and postscripts, select the Power on after recovery option, then the Enable pre and post network VM level scripts option. Note that prior to running the job, the scripts must be defined and copied to specific locations on the virtual machine. In a Windows environment, copy prenetwork.bat and postnetwork.bat to your c:\program files\ibm\IBM Spectrum Copy Data Management\scripts directory. In a Linux environment, copy prenetwork.sh and postnetwork.sh to your /opt/CDM/scripts/ directory.
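A Linux-side network script might look like the following sketch. The contents are entirely an assumption — the product only requires a script to exist at /opt/CDM/scripts/prenetwork.sh — and the interface name eth0 is hypothetical:

```shell
# Sketch of possible /opt/CDM/scripts/prenetwork.sh contents. Rather than
# executing the reconfiguration, this helper composes the iproute2 command
# so the sketch stays side-effect free; a real script would run it.
prenetwork_cmd() {
  addr="$1"; prefix="$2"; dev="$3"
  printf 'ip addr add %s/%s dev %s\n' "$addr" "$prefix" "$dev"
}
```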
CONSIDERATIONS:
- VMware Backup and Restore jobs only support vCenters or ESX hosts running vSphere 6.0 through 7.0.
- If a recovery through an Instant VM Restore RRP job using an AWS-based SLA Policy fails with a VMwareVMotionException error, "A general system error occurred: The source detected that the destination failed to resume.", retry the job at a later time. This error may display if transferring data from the AWS cloud takes too long.
- When running VADP-based VM Replication workflows, target volumes and datastores can be automatically expanded in response to space usage requirements if supported by the underlying storage. Automatic growing prevents a volume from running out of space or forcing you to delete files manually. For a list of supported storage systems see System Requirements on page 23.
- VMware DRS cluster datastores are supported in VMware Backup and Restore jobs.
- After an Instant Disk Restore Restore job completes, your vDisk will be mounted, but you may need to bring it online through the operating system from the Disk Management console.
- In addition to NFS, IBM Spectrum Copy Data Management supports VMFS datastores for NetApp ONTAP storage targets.
- Instant Disk Restore recoveries utilizing the VM Replication method are not supported at the datastore level. Instant Disk Restore datastore-level recoveries are supported through the primary storage snapshot method.
- In Instant VM Restore recoveries utilizing NetApp ONTAP storage systems running Clustered Data ONTAP or Data ONTAP operating in 7-Mode, if a source with a swap directory on a dedicated datastore is recovered to a different destination, the source datastore must have more free space than the amount of memory configured for the virtual machine. This may not be applicable if the virtual machine is configured with memory reservation.
- Instant Disk Restore recoveries of VMDKs through snapshots of IBM Spectrum Protect Snapshot-protected virtual machines are not supported.
- Instant VM Restore recoveries of IBM Spectrum Protect Snapshot-protected virtual machines under vApps are only restored as standalone virtual machines and not under or in association with the vApp.
- A VMware Restore job recovering a virtual machine from an ESX cluster protected with snapshot, vault, or mirror displays a Locate LUN failure if the maximum allowed LUNs for the ESX host recovery target reaches its limit.
- In NetApp ONTAP environments running Clustered Data ONTAP, cluster peering must be enabled. Peer relationships enable communication between SVMs. See NetApp ONTAP's Cluster and Vserver Peering Express Guide.
- In IBM storage environments, port grouping and IP partnerships are required to enable remote copy connections. See IBM's SAN Volume Controller and Storwize Family Native IP Replication Guide.
- NetApp ONTAP and VMware Restore jobs will fail if the iSCSI Initiator Group (iGroup) is not configured on the NetApp Clustered Data ONTAP 8.3 storage system target. The procedure only needs to be performed once. Previously created iGroups for earlier versions of NetApp Clustered Data ONTAP do not need to be reconfigured for version 8.3. For more information, contact Technical Support.
- One or more schedules might also be associated with a job. Job sessions run based on the triggers defined in the schedule. See Create a Schedule on page 163.
l If the VASA provider is lost, a new VASA provider can be brought up to recover the virtual machine
back to a VVOL datastore through Production or Clone mode.
Best Practice: Create a schedule before creating a job definition so that you can easily add the schedule to
the job definition.
To create an Instant Disk Restore VMware Restore job definition:
1. Click the Jobs tab. Expand the Hypervisor folder, then select VMware.
5. Click Source , then select VM Storage or Datastores as the source type. Select a source site and
an associated VMware source to view virtual machines, VM templates, folders, vApps, and datacenters
with available recovery points. Select resources, and change the order in which the resources are
recovered by dragging and dropping the resources in the grid.
6. Click Copy . Sites containing copies of the selected data display. Select a site. By default the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Destination . Expand a VMware source to view virtual machines, folders, vApps, and
datacenters available as destinations. To restore to the original host or cluster, select Use original host
or cluster.
8. Select the datastore and virtual disk mapping options if you selected a destination different from the
original host or cluster.
Virtual Disks
In the VM field, select virtual machine destinations.
In the Disk Mode field, select “persistent”, “independent persistent”, or “independent non-persistent”.
The values are defined as follows:
- Persistent - Changes are permanently written to the virtual disk. The disk is included in snapshots
taken of its virtual machine.
- Independent Persistent - Changes are permanently written to the virtual disk. The disk is excluded
from any snapshots taken of its virtual machine.
- Independent Non-Persistent - Changes to the virtual disk are discarded when the virtual machine
powers off; the VMDK files revert to their original state. The disk is excluded from any snapshots taken
of its virtual machine.
In the optional Controller Type field, select a supported SCSI controller, including LSI SAS,
LSI Parallel, BusLogic, and VMware Paravirtual. Changing the SCSI controller type replaces the
existing controller with a new controller, applies the common settings of the existing controller to the
new controller, and reassigns all SCSI devices to the new controller.
Use the optional Controller Address # and Controller LUN # fields to select specific controllers or
LUNs.
Datastores
Set the destination datastore.
9. To create the job definition using default options, click Create Job. The job runs as defined by your
triggers, or can be run manually from the Jobs tab.
10. To edit options before creating the job definition, click Advanced. Set the job definition options.
Protocol Priority
If more than one storage protocol is available, select the protocol to take priority in the job. Available
protocols include iSCSI and Fibre Channel.
Make IA clone resource permanent
Enable to turn the snapshot copy into a proper resource that will not be cleaned up after the instant
access job completes.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the virtual machine recovery
fails.
Allow to overwrite and force clean up of pending old sessions
Enabling this option allows a scheduled session of a recovery job to force an existing pending session
to clean up associated resources so the new session can run. Disable this option to keep an existing
test environment running without being cleaned up.
Do not stretch recovery disks (storage features like Enhanced Stretched Cluster)
Enabling this option will not create stretched recovery disks. This only applies to volumes that have
been created using stretched FlashCopies.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Note that additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
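As a sketch, a Linux post-script that consumes the state and status positional arguments described above might look like the following. The function names, log message, and exit policy are illustrative assumptions, not product requirements:

```shell
#!/bin/sh
# Hypothetical Restore-job post-script. The job passes two positional
# arguments: state (e.g. FINISHED) and status (e.g. SUCCESS or FAILED).
# Everything beyond the argument handling is illustrative.

report() {
  # $1 = state, $2 = status
  printf 'restore finished: state=%s status=%s\n' "$1" "$2"
}

main() {
  report "$1" "$2"
  # Return non-zero when the restore did not succeed, so the job can
  # react according to its "Continue operation on script failure" setting.
  [ "$2" = "SUCCESS" ]
}

# Run only when invoked with both arguments; this keeps the file
# harmless to source when testing it by hand.
if [ $# -ge 2 ]; then
  main "$@"
fi
```

The script's exit code is reported back to the job; see Return Code Reference on page 562 for how codes are interpreted.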
11. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
12. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules .
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
13. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
5. Click Source . From the drop-down menu, select VMs and Templates to choose a source site and
an associated VMware source to view virtual machines, VM templates, datastores, folders, and vApps
with available recovery points. Select resources, and change the order in which the resources are
recovered by dragging and dropping the resources in the grid.
Alternatively, select VM Search from the drop-down menu to search for virtual machines with available
recovery points across all datacenters. Add virtual machine copies to the job definition by clicking Add.
Change the order in which the resources are recovered by dragging and dropping the resources in the
grid.
6. Click Copy . Sites containing copies of the selected data display. Select a site. By default the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Destination . Select a destination site and an associated VMware destination to view virtual
machines, folders, vApps, and datacenters available as destinations. To restore to the original host or
cluster and allow IBM Spectrum Copy Data Management to define the destination IP address, select Use
original host or cluster with system defined IP configuration. To restore to the original host or
cluster using your predefined IP address configuration, select Use original host or cluster with
original IP configuration. To restore to a destination different from the original host or cluster, select
Use alternative host or cluster.
8. Select virtual network and datastore mapping options if you selected Use alternative host or
cluster in the previous step. The Virtual Networks pane displays all of the virtual networks associated
with your VMware Restore job sources. New virtual networks must be selected for use at the recovery
site, as well as new datastores on the Datastores pane. Select a production and test network in the
Virtual Networks tab, and a destination datastore in the Datastore tab.
Virtual Networks
Set virtual networks for production and test recovery jobs. The destination networks for the production
and test environments should be different.
Note: Network mappings are disabled for IBM Spectrum Protect Snapshot-protected virtual machines
in Instant VM Restore restores to alternate locations. You must re-enable virtual machine networks to
the proper target networks manually through vCenter once restoration completes.
Datastores
Set the destination datastore.
Subnet
Set an IP address or subnet mask for virtual machines to be repurposed for development/testing or
disaster recovery use cases. Supported mapping types include IP to IP, IP to DHCP, and subnet to
subnet. Virtual machines containing multiple NICs are supported.
By default, the Use system defined subnets and IP addresses for VM guest OS on destination
option is enabled. To use your predefined subnets and IP addresses, select Use original subnets
and IP addresses for VM guest OS on destination.
To create a new mapping configuration, select Add mappings for subnets and IP addresses for
VM guest OS on destination, then click Add Mapping. Enter a subnet or IP address in the Source
field. In the destination field, select DHCP to automatically select an IP and related configuration
information if DHCP is available on the selected client. Select Static to enter a specific subnet or
IP address, subnet mask, gateway, and DNS. Note that Subnet or IP Address, Subnet Mask, and
Gateway are required fields. If a subnet is entered as a source, a subnet must also be entered as a
destination.
IP reconfiguration is skipped for virtual machines if a static IP is used but no suitable subnet mapping is
found, or if the source machine is powered off and there is more than one associated NIC. In a
Windows environment, if a virtual machine is DHCP only, then IP reconfiguration is skipped for that
virtual machine. In a Linux environment all addresses are assumed to be static, and only IP mapping
will be available.
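The subnet-to-subnet case can be pictured as a simple string operation: the host part of the address is kept, the network part is swapped, and addresses with no matching mapping are left unchanged. The /24 prefixes below are hypothetical values for illustration, not product output:

```shell
#!/bin/sh
# Illustrative subnet-to-subnet remapping for a /24 network. The prefix
# values are hypothetical; the product performs this mapping internally.

remap_ip() {
  # $1 = source IP, $2 = source /24 prefix, $3 = destination /24 prefix
  host=${1##*.}      # keep the host part of the address
  prefix=${1%.*}     # network part of the source address
  if [ "$prefix" = "$2" ]; then
    printf '%s.%s\n' "$3" "$host"
  else
    # No suitable mapping found: leave the address unchanged.
    printf '%s\n' "$1"
  fi
}

remap_ip 10.0.1.42 10.0.1 192.168.5   # maps into the destination subnet
remap_ip 172.16.0.9 10.0.1 192.168.5  # unmatched, stays as-is
```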
Note: You must add credentials to the destination virtual machine when recovering with the subnet
option. Note that if using a domain user account, the credentials must be added to the destination
virtual machine, then configured through the Test & Configure option. This option verifies
communication with the server, tests DNS settings between the IBM Spectrum Copy Data
Management appliance and the server, and installs an IBM Spectrum Copy Data Management agent
on the server. From the Configure tab in the Provider Browser pane, right-click the virtual machine,
then click Test & Configure. See Add Credentials to a Virtual Machine on page 106 and Register a Provider on page 79.
10. To edit options before creating the job definition, click Advanced. Set the job definition options.
Default Mode
Set the VMware Restore job to run in Test, Production, or Clone mode by default. Once the job is
created, it can be run in Test, Production, or Clone mode through the Jobs tab.
Protocol Priority
If more than one storage protocol is available, select the protocol to take priority in the job. Available
protocols include iSCSI and Fibre Channel.
Power on after recovery
Toggle the power state of a virtual machine after a recovery is performed. Virtual machines are
powered on in the order they are recovered, as set in the Source step. Turning this feature on also
gives access to the Enable pre and post network VM level scripts option. Note that restored
VM templates cannot be powered on after recovery.
Enable pre and post network VM level scripts
If the Power on after recovery option is enabled, network prescripts and postscripts can be run at
the virtual machine level. Note that prior to running a VMware Backup job, the scripts must be defined
and copied to specific locations on the virtual machine. In a Windows environment, copy
prenetwork.bat and postnetwork.bat to your c:\program files\ibm\IBM Spectrum Copy Data
Management\scripts directory. In a Linux environment, copy prenetwork.sh and postnetwork.sh to your
/opt/CDM/scripts/ directory.
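For illustration only, a minimal prenetwork.sh could record the state that a matching postnetwork.sh later compares against. The file name and the /opt/CDM/scripts/ location come from the text above; the body and the state-file path are assumptions:

```shell
#!/bin/sh
# Minimal prenetwork.sh sketch. Only the file name and its location
# (/opt/CDM/scripts/ on Linux) are specified by the product; the body
# and the state-file path below are illustrative.

STATE_FILE=/tmp/prenetwork-interfaces.txt

# Record the interfaces present before the network is reconfigured so
# a matching postnetwork.sh could compare them afterwards.
ip -o link show 2>/dev/null | awk -F': ' '{print $2}' > "$STATE_FILE"

echo "prenetwork: interface list saved to $STATE_FILE"
```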
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the virtual machine recovery
fails.
Allow to overwrite and force clean up of pending old sessions
Enabling this option allows a scheduled session of a recovery job to force an existing pending session
to clean up associated resources so the new session can run. Disable this option to keep an existing
test environment running without being cleaned up.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Note that additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
11. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
12. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules .
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
13. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
14. Once the job completes successfully, select one of the following options from the Actions menu on the
General tab of the job session on the Jobs tab: End IV (Cleanup), RRP (vMotion), or Clone (vMotion).
End IV (Cleanup) destroys the virtual machine and cleans up all associated resources. Since this is a
temporary/testing virtual machine, all data is lost when the virtual machine is destroyed.
RRP (vMotion) is equivalent to using the Production selection in the job Advanced screen. This option
migrates the virtual machine through vMotion to the Datastore and the Virtual Network defined as the
"For Production" Network.
Clone (vMotion) is equivalent to using the Clone selection in the job Advanced screen. This option
migrates the virtual machine through vMotion to the Datastore and Virtual Network defined as the "For
Test" network.
1. Click the Jobs tab. Expand the Hypervisor folder, then select VMware.
5. Click Source . From the drop-down menu, select VMs and Templates to choose a source site and
an associated VMware source to view virtual machines, VM templates, datastores, folders, and vApps
with available recovery points. Select resources, and change the order in which the resources are
recovered by dragging and dropping the resources in the grid.
Alternatively, select VM Search from the drop-down menu to search for virtual machines with available
recovery points across all datacenters. Add virtual machine copies to the job definition by clicking Add.
Change the order in which the resources are recovered by dragging and dropping the resources in the
grid.
6. Click Copy . Sites containing copies of the selected data display. Select a site. By default the latest
copy of your data is used. To choose a specific version, select a site and click Select Version. Click the
Version field to view specific copies and their associated job and completion time. If recovery from one
snapshot fails, another copy from the same site is used.
7. Click Local Destination . Select a local destination site and an associated VMware destination to
view virtual machines, folders, vApps, and datacenters available as destinations. Note that the local
storage destination is temporary, and is cleaned up after the job completes.
8. Click Remote Destination. Select a remote destination site and an associated VMware destination
to view virtual machines, folders, vApps, and datacenters available as destinations.
Subnet
Set an IP address or subnet mask for virtual machines to be repurposed for development/testing or
disaster recovery use cases. Supported mapping types include IP to IP, IP to DHCP, and subnet to
subnet. Virtual machines containing multiple NICs are supported.
By default, the Use system defined subnets and IP addresses for VM guest OS on destination
option is enabled. To use your predefined subnets and IP addresses, select Use original subnets
and IP addresses for VM guest OS on destination.
To create a new mapping configuration, select Add mappings for subnets and IP addresses for
VM guest OS on destination, then click Add Mapping. Enter a subnet or IP address in the Source
field. In the destination field, select DHCP to automatically select an IP and related configuration
information if DHCP is available on the selected client. Select Static to enter a specific subnet or
IP address, subnet mask, gateway, and DNS. Note that Subnet or IP Address, Subnet Mask, and
Gateway are required fields. If a subnet is entered as a source, a subnet must also be entered as a
destination.
IP reconfiguration is skipped for virtual machines if a static IP is used but no suitable subnet mapping is
found, or if the source machine is powered off and there is more than one associated NIC. In a
Windows environment, if a virtual machine is DHCP only, then IP reconfiguration is skipped for that
virtual machine. In a Linux environment all addresses are assumed to be static, and only IP mapping
will be available.
Note: You must add credentials to the destination virtual machine when recovering with the subnet
option. Note that if using a domain user account, the credentials must be added to the destination
virtual machine, then configured through the Test & Configure option. This option verifies
communication with the server, tests DNS settings between the IBM Spectrum Copy Data
Management appliance and the server, and installs an IBM Spectrum Copy Data Management agent
on the server. From the Configure tab in the Provider Browser pane, right-click the virtual machine,
then click Test & Configure. See Add Credentials to a Virtual Machine on page 106 and Register a Provider on page 79.
11. To edit options before creating the job definition, click Advanced. Set the job definition options.
Default Mode
Set the VMware Restore job to run in Production or Clone mode by default. Once the job is created, it
can be run in Production or Clone mode through the Jobs tab.
Protocol Priority
If more than one storage protocol is available, select the protocol to take priority in the job. Available
protocols include iSCSI and Fibre Channel.
Power on after recovery
Toggle the power state of a virtual machine after a recovery is performed. Virtual machines are
powered on in the order they are recovered, as set in the Source step. Turning this feature on also
gives access to the Enable pre and post network VM level scripts option. Note that restored
VM templates cannot be powered on after recovery.
Enable pre and post network VM level scripts
If the Power on after recovery option is enabled, network prescripts and postscripts can be run at
the virtual machine level. Note that prior to running a VMware Backup job, the scripts must be defined
and copied to specific locations on the virtual machine. In a Windows environment, copy
prenetwork.bat and postnetwork.bat to your c:\program files\ibm\IBM Spectrum Copy Data
Management\scripts directory. In a Linux environment, copy prenetwork.sh and postnetwork.sh to your
/opt/CDM/scripts/ directory.
Continue with next source on failure
Toggle the recovery of a resource in a series if the previous resource recovery fails. If disabled, the
Restore job stops if the recovery of a resource fails.
Automatically clean up resources on failure
Enable to automatically clean up allocated resources as part of a restore if the virtual machine recovery
fails.
Allow to overwrite and force clean up of pending old sessions
Enabling this option allows a scheduled session of a recovery job to force an existing pending session
to clean up associated resources so the new session can run. Disable this option to keep an existing
test environment running without being cleaned up.
Job-Level Scripts
Job-level pre-scripts and post-scripts are scripts that can be run before or after a job runs at the job-
level. A script can consist of one or many commands, such as a shell script for Linux-based virtual
machines or Batch and PowerShell scripts for Windows-based virtual machines.
In the Pre-Script and/or Post-Script section, click Select to select a previously uploaded script, or click
Upload to upload a new script. Note that scripts can also be uploaded and edited through the Scripts
view on the Configure tab. See Configure Scripts on page 160.
Once complete, the script displays in the Pre-Script or Post-Script section. Click the Parameters field
to add a parameter to the script, then click Add. Note that additional parameters can be added to a script by
entering parameters one at a time in the field, then clicking Add. Next, click the Identity field to add or
create the credentials required to run the script. Finally, click the Application Server field to define the
location where the script will be injected and executed.
Repeat the above procedure to add additional Pre-Scripts and Post-Scripts. For information about
script return codes, see Return Code Reference on page 562.
For Restore job post-scripts only, the positional arguments state and status can be passed to the
script. For information about this feature, see Using State and Status Arguments in Postscripts on page
343. State and status arguments are not supported for Backup jobs.
Select Continue operation on script failure to continue running the job if a command in any of the
scripts associated with the job fails.
12. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
13. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules .
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
14. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
15. Once the job completes successfully, select one of the following options from the Actions menu on the
General tab of the job session on the Jobs tab: End IV (Cleanup), RRP (vMotion), or Clone (vMotion).
RRP (vMotion) is equivalent to using the Production selection in the job Advanced screen. This option
migrates the virtual machine through vMotion to the Datastore and the Virtual Network defined as the
"For Production" Network.
Clone (vMotion) is equivalent to using the Clone selection in the job Advanced screen. This option
migrates the virtual machine through vMotion to the Datastore and Virtual Network defined as the "For
Test" network.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Create a Backup Job Definition - VMware on page 250
l Using State and Status Arguments in Postscripts on page 343
l NetApp ONTAP Document: Cluster and Vserver Peering Express Guide
l NetApp ONTAP Document: iSCSI Configuration and Provisioning for ESX Express
Guide
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
l Search and Filter Guidelines on page 557
339
User's Guide Restore Jobs - Rename Mount Points and Initialization Parameter Options
Do not rename: Select this option if you do not want to rename mount points or ASM diskgroups during
recovery. IBM Spectrum Copy Data Management mounts them with the same path/name as the source.
Add a custom prefix: Select this option and specify a custom prefix to be prepended to the source
paths/names. The prefix value may contain leading or trailing slashes. In the case of ASM diskgroup names,
the slashes are removed.
Add a custom suffix: Select this option and specify a custom suffix to be appended to the source paths/names.
For example, the diskgroup +MYPRODDATA would be renamed to +MYPRODDATA1479400505.
Replace a substring: Select this option and specify a custom string of characters in the old mount point to be
replaced with another string of characters. The old and new substrings must be simple alphanumeric strings
with no special characters. All occurrences of the specified substring are replaced. The value of the new
substring may be left empty if you want to remove the old substring.
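For illustration, the three rename options behave roughly like the following string operations. The sample mount points and the prefix/suffix values are hypothetical, and ASM slash handling is omitted:

```shell
#!/bin/sh
# Sketch of the three rename options as plain string operations.
# Sample values are hypothetical; this is not product code.

add_prefix()     { printf '%s%s\n' "$2" "$1"; }            # prefix + source
add_suffix()     { printf '%s%s\n' "$1" "$2"; }            # source + suffix
replace_substr() { printf '%s\n' "$1" | sed "s/$2/$3/g"; } # all occurrences

add_prefix     /oradata/prod /restored   # -> /restored/oradata/prod
add_suffix     /oradata/prod _copy       # -> /oradata/prod_copy
replace_substr /oradata/prod prod test   # -> /oradata/test
```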
Use a template pfile: You can customize the initialization parameters by specifying a template file containing
the initialization parameters that IBM Spectrum Copy Data Management should use.
The specified path must be to a plain text file that exists on the destination server and is readable by the IBM
Spectrum Copy Data Management user. The file must be in Oracle pfile format, consisting of lines in the form
name = value. Comments beginning with the # character are ignored.
IBM Spectrum Copy Data Management reads the template pfile and copies the entries to the new pfile that will
be used to start up the recovered database. However, the following parameters in the template are ignored.
Instead, IBM Spectrum Copy Data Management sets their values to reflect appropriate values from the source
database or to reflect new paths based on the renamed mount points of the recovered volumes.
l control_files
l db_block_size
l db_create_file_dest
l db_recovery_file_dest
l log_archive_dest
l spfile
l undo_tablespace
Additionally, cluster-related parameters like instance_number, thread, and cluster_database are set
automatically by IBM Spectrum Copy Data Management depending on the appropriate values for the
destination.
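As a sketch, a template pfile might contain entries such as the following. The parameter values are illustrative; the ignored parameters listed above would be overridden even if present:

```
# Sample template pfile -- plain "name = value" lines; "#" starts a comment.
processes = 300
open_cursors = 500
sga_target = 4G
pga_aggregate_target = 1G
# Entries such as control_files or spfile would be ignored here and
# set by IBM Spectrum Copy Data Management instead.
```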
RELATED TOPICS:
l Create a Restore Job Definition - Oracle on page 272
l Create a Restore Job Definition - InterSystems Caché on page 260
l Oracle Requirements on page 45
342
User's Guide Using State and Status Arguments in Postscripts
RELATED TOPICS:
l Create a Restore Job Definition - DellEMC Unity on page 289
l Create a Restore Job Definition - IBM Spectrum Virtualize on page 301
l Create a Restore Job Definition - NetApp ONTAP on page 307
l Create a Restore Job Definition - VMware on page 325
343
User's Guide System Jobs
System Jobs
The topics in the following section cover System job definitions as well as Maintenance job information.
344
User's Guide Maintenance Job
Maintenance Job
The Maintenance job removes resources and associated objects created by IBM Spectrum Copy Data
Management when a job in a pending state is deleted. The cleanup procedure reclaims space on your storage
devices, cleans up your IBM Spectrum Copy Data Management catalog, and removes related snapshots. By
default, the Maintenance job runs once a day, but the job's associated schedule can be altered to run more or
less frequently depending on your needs, or the job can be run manually. The job cannot be deleted.
The Maintenance job only performs cleanup operations once a job in a pending state is deleted. All logs
associated with the deleted job are removed from IBM Spectrum Copy Data Management, so it is advised to
download job logs before the Maintenance job's next run. The job can be stopped and resumed; all pending
operations set to occur before the job was stopped will resume upon the next job run.
After a pending Application, DellEMC Unity, IBM, NetApp ONTAP, Pure Storage FlashArray, or VMware
Backup job is deleted, all associated copy data, including recovery points, is deleted. The Maintenance job
removes all VM Copies and Primary copies associated with deleted VMware Backup and Restore jobs.
Similarly, after a pending DellEMC Unity, IBM, NetApp ONTAP, or Pure Storage FlashArray Backup or
Restore job is deleted, all associated DellEMC Unity, IBM, NetApp ONTAP, and Pure Storage FlashArray
locations are removed by the Maintenance job. Once the Maintenance job completes, any application,
DellEMC Unity, IBM, NetApp ONTAP, Pure Storage FlashArray, or VMware data that was copied as part of
the deleted backup job cannot be recovered.
The Maintenance job also removes cataloged data associated with deleted Application, DellEMC Unity, IBM,
NetApp ONTAP, Pure Storage FlashArray and VMware Inventory jobs, and removes jobs and job sessions
related to Script and Report jobs from the IBM Spectrum Copy Data Management interface.
RELATED TOPICS:
l Delete a Job Definition on page 352
Create a Report Job Definition
6. To edit options before creating the job definition, click Advanced. Set the job definition options.
7. Optionally, select render options for your emailed report attachments. Reports can be rendered as Adobe
PDFs, Microsoft Word files, and Microsoft Excel spreadsheets. Select Export Data to save reports to a
defined location.
8. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
9. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
10. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Report Overview on page 369
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
Create a Script Job Definition
To create a Script job definition:
1. Click the Jobs tab. Expand the System folder, then select Scripts.
5. Continue to add commands. Use the reorder, edit, and delete options to assist you. Your script can
consist of one or many commands.
6. To create the job definition using default options, click Create Job. The job can be run manually from the
Jobs tab.
7. To edit options before creating the job definition, click Advanced. Set the job definition options.
Run scripts in order
Select Run scripts in order for the commands in the job to run sequentially. The job runs the first
command and when it finishes it runs the next one. Clear this option for the commands to run
concurrently.
Stop execution on failure
Select Stop execution on failure for the job to stop running as soon as one of the commands fails.
Clear this option for the job to continue to run after one of the commands fails. This option is only
applicable if Run scripts in order is selected.
8. Optionally, expand the Notification section to select the job notification options.
SMTP Server
From the list of available SMTP resources, select the SMTP Server to use for job status email
notifications. If an SMTP server is not selected, an email is not sent.
Email Address
Enter the email address of each status notification recipient, then click Add to add it to the list.
9. Optionally, expand the Schedule section to select the job scheduling options. Select Start job now to
create a job definition that starts the job immediately. Select Schedule job to start at later time to view
the list of available schedules. Optionally select one or more schedules for the job. As each schedule is
selected, the schedule's name and description display.
Note: To create and select a new schedule, click the Configure tab, then select Schedules.
Create a schedule, return to the job editor, refresh the Available Schedules pane, and select the new
schedule.
10. When you are satisfied that the job-specific information is correct, click Create Job. The job runs as
defined by your schedule, or can be run manually from the Jobs tab.
Note: If you selected the Start job now option, the job runs.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job session on the Jobs tab. See Monitor a Job Session on
page 172.
l If notification options are enabled, an email message with information about the status
of each task is sent when the job completes.
RELATED TOPICS:
l Return Code Reference on page 562
l Edit a Job Definition on page 351
l Delete a Job Definition on page 352
l Create a Schedule on page 163
Edit a Job Definition
2. Select the job definition to edit by clicking in the row containing the job definition name.
3. Click Edit. The Job Definition Editor opens.
NEXT STEPS:
l If you do not want to wait until the next scheduled job run, run the job session on
demand. See Start, Pause, and Hold a Job Session on page 170.
l Track the progress of the job on the Jobs tab. See Monitor a Job Session on page 172.
l If SMTP options are enabled, an email message with information about the status of
each task is sent when the job completes.
RELATED TOPICS:
l Delete a Job Definition on page 352
Delete a Job Definition
2. Select the job definition to delete by clicking in the row containing the job definition name.
3. Click Delete. A confirmation dialog box opens.
RELATED TOPICS:
l Edit a Job Definition on page 351
l Maintenance Job on page 345
Search
The topics in the following section cover searching for objects, downloading search results, and browsing the
Inventory.
Search Overview
Use IBM Spectrum Copy Data Management to explore objects on cataloged providers. With the Search
feature, you can easily search for and rapidly find all objects that match certain criteria.

WHY IT MATTERS: IBM Spectrum Copy Data Management lets you quickly and easily locate every version
of a file across your entire Enterprise. IBM Spectrum Copy Data Management searches its databases and
returns results in moments; every instance and version of a file displays across all devices and snapshots.

The DellEMC Unity Inventory may include CIFS shares, file systems, hosts, host containers, LUNs, NAS
servers, NFS shares, pools, snapshots, storage resources, and storage resource replications.

The IBM Inventory may include FlashCopies, Hosts, IOGroups, MDisks, mirrors, Node Canisters, PortIPs,
and volumes.

The NetApp ONTAP Inventory may include aggregates, CIFS shares, files, LUNs, networks, NFS exports,
nodes, policies, protocols, qtrees, quotas, SnapMirrors, Snapshots, SnapVaults, SVMs, vFilers, and
volumes.
The Recovery Inventory may contain datacenters, data stores, ESX hosts, LUNs, folders, recovery points,
vApps, vDisks, vSnapshots, and vSpheres.
The VMware Inventory may include data stores, ESX hosts, LUNs, virtual disks, virtual machines, VMware
hosts, and virtual snapshots.
The Pure Storage FlashArray Inventory may include volumes, snapshots, LUNs, hosts, and host groups.
You can match a character pattern and apply other filters such as category, object type, and location through
the Search feature. The results are presented in the user interface and are also exportable. Furthermore, click
an object that appears in the Search results to open a tab with additional details about that object. You can
also review previous versions of your files, along with their Snapshot, SnapVault and SnapMirror replication
status.
Alternatively, use the Inventory Browser to browse through the list of providers. Drill into the Inventory
Browser to logically view the details of the objects underlying a storage system, virtual host, or application.
Use the Time Machine control to view the Inventory as it appeared on a past date.
RELATED TOPICS:
l Search for Objects on page 355
l View Object Details on page 360
l View NetApp ONTAP File Details on page 361
l Find and Restore a File on page 363
l Download Search Results on page 365
l Browse Inventory on page 366
Search for Objects
2. Open a new Search pane. If this is the first search in your IBM Spectrum Copy Data Management
session, click the Search tab. If you have already done a Search, go into an existing Search pane and
click New Search. From a new Search pane, you can perform a basic search or an advanced search.
3. Click Search Now. The list of objects that meet all the criteria displays.
4. Click an object name. The properties of the object display in a new tab. The specific properties vary by
type of object.
To perform a basic search using inline search parameters:
Using the following inline search strings, you can perform complex searches based on a file's location, size,
and access, creation, or modified time from the basic search field.
The following time strings are supported:
years, yearsago, year, yearago
months, monthsago, month, monthago
weeks, weeksago, week, weekago
days, daysago, day, dayago
hours, hoursago, hour, hourago
minutes, minutesago, minute, minuteago
Name
Object name or character pattern.
Enter a character string to find objects with a name that matches or contains the character string.
You can also enter partial character strings. Character strings are case insensitive.
Location
The place where the object resides. This is usually the host name or the host/volume. Wildcards can be
used.
Hide Duplicates
Toggles the behavior of duplicate search results. The default option, No, displays duplicate search
results in the search results pane. Select Yes to hide duplicate search results. View the object's
properties to view duplicate versions of an object.
In some cases, the name of a returned object in the search results pane may be the same as that of another
object, but the resources where the objects reside are different. Review the file properties of the objects by
selecting their names in the search results pane to view the differences between the returned entries.
The following filters apply to advanced NetApp ONTAP File searches only. Select NetApp ONTAP > File
in the Search For dialog to view the following filters:
Last Modified Time, Creation Time, Last Accessed Time
Filter a search by modification, creation, and access dates with the calendar tool. Select On or after
and On or before to set a date range.
File Size
Filter a search by a file size range. Enter a file size and select bytes, kilobytes, megabytes, or
gigabytes.
3. Click Search. The list of objects that meet all the criteria displays.
4. Click an object name. The properties of the object display in a new tab. The specific properties vary by
type of object.
Tip: Periodically closing tabs helps simplify navigation and browsing. To close multiple tabs, right-click a tab
then select Close Tab, Close Other Tabs, or Close All Tabs.
Wildcard considerations:
A wildcard is a character that you can substitute for zero or more unspecified characters when searching text.
Position wildcards at the beginning, middle, or end of a string, and combine them within a string.
l Match a character string with an asterisk, which represents a variable string of zero or more characters:
string* searches for terms like string, strings, or stringency
str*ing searches for terms like string, straying, or straightening
*string searches for terms like string or shoestring
You can use multiple asterisk wildcards in a single text string, though this might considerably slow down a
large search.
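The asterisk semantics described above resemble standard glob-style pattern matching. The following sketch, which is illustrative only and not part of the product, shows how such case-insensitive matching can be reproduced in Python:

```python
import fnmatch
import re

def matches(pattern: str, text: str) -> bool:
    # Translate a glob-style pattern ('*' = zero or more characters)
    # into a regular expression, then match case-insensitively, as the
    # Search feature's character strings are case insensitive.
    regex = fnmatch.translate(pattern)
    return re.match(regex, text, re.IGNORECASE) is not None

print(matches("string*", "stringency"))  # True
print(matches("str*ing", "straying"))    # True
print(matches("*string", "shoestring"))  # True
```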
NEXT STEPS:
l You can download the search results as a CSV file format. See Download Search
Results on page 365.
l You can reorder and resize columns in the search results table.
l You can learn more about an object or its attributes in the search results. See View
Object Details on page 360.
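Because downloaded search results are plain CSV, they can be post-processed with any standard tooling. The sketch below parses an in-memory sample; the column names and values are hypothetical and depend on the search performed:

```python
import csv
import io

# Hypothetical sample of a downloaded search-results CSV file.
sample = (
    "Name,Location,Size\n"
    "report.doc,host1/vol1,2048\n"
    "report.doc,host2/vol1,2048\n"
)

# Parse the rows and keep the names of objects larger than 1 KB.
rows = list(csv.DictReader(io.StringIO(sample)))
large = [r["Name"] for r in rows if int(r["Size"]) > 1024]
print(large)  # ['report.doc', 'report.doc']
```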
RELATED TOPICS:
l Download Search Results on page 365
l Browse Inventory on page 366
l View Object Details on page 360
l View NetApp ONTAP File Details on page 361
l Search and Filter Guidelines on page 557
l Select, Sort, and Reorder Columns on page 560
View Object Details
RELATED TOPICS:
l View NetApp ONTAP File Details on page 361
l Search Overview on page 354
l Search for Objects on page 355
View NetApp ONTAP File Details
NEXT STEPS:
l Restore a file to a previous version. See Find and Restore a File on page 363.
RELATED TOPICS:
l View Object Details on page 360
l Search Overview on page 354
l Search for Objects on page 355
l Find and Restore a File on page 363
Find and Restore a File
Select a previous version of the file and click Open to open the previous version without altering the
original version.
Select a previous version of the file and click Copy to copy the previous version to an alternate
location.
Select a previous version of the file and click Restore to replace or roll back the file to the selected
previous version.
RELATED TOPICS:
l View NetApp ONTAP File Details on page 361
l Search Overview on page 354
l Search for Objects on page 355
Download Search Results
RELATED TOPICS:
l Search for Objects on page 355
l View Object Details on page 360
l View NetApp ONTAP File Details on page 361
Browse Inventory
Use IBM Spectrum Copy Data Management to explore resources that are cataloged and find the properties of
underlying objects. View the details for a storage system, virtual host, or application server.
The DellEMC Unity Inventory may include CIFS shares, file systems, hosts, host containers, LUNs, NAS
servers, NFS shares, pools, snapshots, storage resources, and storage resource replications.
The IBM Inventory may include FlashCopies, Hosts, IOGroups, MDisks, mirrors, Node Canisters, PortIPs,
and volumes.
The NetApp ONTAP Inventory may include aggregates, CIFS shares, files, LUNs, networks, NFS exports,
nodes, policies, protocols, qtrees, quotas, SnapMirrors, Snapshots, SnapVaults, SVMs, vFilers, and volumes.
The Recovery Inventory may contain datacenters, data stores, ESX hosts, LUNs, folders, recovery points,
vApps, vDisks, vSnapshots, and vSpheres.
The VMware Inventory may include data stores, ESX hosts, LUNs, virtual disks, virtual machines, VMware
hosts, and virtual snapshots.
The Pure Storage FlashArray Inventory may include volumes, snapshots, LUNs, hosts, and host groups.
2. Drill down through providers in the Inventory Browser. A tab that displays the names of the underlying
objects opens.
3. In the tab, click an object name. The properties of the object display in a new tab. The specific properties
vary by type of object.
2. Click the All Resources tab. The Time Machine control appears.
3. Use the calendar tool to set the Time Machine to a previous date. The Inventory Browser reflects the
state of the Inventory on that date.
4. Drill down through providers in the Inventory Browser. A tab that displays the names of the underlying
objects opens.
Tip: Periodically closing tabs helps simplify navigation and browsing. To close multiple tabs, right-click a tab
then select Close Tab, Close Other Tabs, or Close All Tabs.
RELATED TOPICS:
l Search Overview on page 354
l Search for Objects on page 355
l View Object Details on page 360
l View NetApp ONTAP File Details on page 361
Report
The topics in the following section cover running and customizing reports, as well as individual report details.
Report Overview
IBM Spectrum Copy Data Management provides a number of predefined reports, which you can tailor to meet
your specific reporting requirements. Reports are based on the data collected by the most recently run
Inventory job, and you can generate reports after all cataloging jobs and subsequent database condense jobs
complete. Click the Reports tab to display the Report Browser. You can run reports with predefined default
parameters or run and save customized reports driven by custom parameters.
The information in these reports is presented in a chart-based Quick View section, or in tabular Summary
View and Detail View sections.
Reports include interactive elements, such as searching for individual values within a report, vertical scrolling,
and column sorting. Information groups, such as the Primary Source Volume groups in the NetApp ONTAP
Protection Usage report, can also be sorted by clicking the group name.
You can add a Report job to summarize information about cataloged providers and the data and other
resources that reside on them, then schedule the Report job to run as defined by the parameters of the
schedule.
To further analyze the data or print a hard copy, use the export functionality to save the data from the
generated report to an Adobe PDF, Microsoft Word file, Microsoft Excel file, or HTML file.
RELATED TOPICS:
l Run a Report on page 370
l Create a Customized Report on page 372
l Edit a Customized Report on page 373
l Download a Report on page 374
l Create a Report Job Definition on page 346
l Application Reports on page 377
l System Management Reports on page 385
l File Analytics Reports on page 397
l Protection Compliance Reports on page 407
l Storage Protection Reports on page 444
l Storage Utilization Reports on page 458
Run a Report
Perform the following steps to run any report from the Report tab. You can run reports with predefined default
parameters or run customized reports driven by custom parameters.
NEXT STEPS:
l Create a Report job definition to schedule the report to run automatically. See Create a
Report Job Definition on page 346.
l Save a report with customized parameters. See Create a Customized Report on page
372.
Create a Customized Report
2. Select a predefined report to save as a customized report from the Report Browser pane.
3. Select report parameter values in the Parameters pane. Parameters are unique to each report. The
parameters that you select drive the report output.
4. Click Save As. The Save As window opens.
5. Enter a Title and a Description for the customized report. Report names can include alphanumeric
characters and the following symbols: $ - _ . + ! * ' ().
6. Click Submit. The customized report is saved.
7. Return to the Report Browser. Expand the original predefined report to view associated customized
reports.
NEXT STEPS:
l Run the customized report from the Report tab. See Run a Report on page 370.
RELATED TOPICS:
l Edit a Customized Report on page 373
l Run a Report on page 370
l Download a Report on page 374
l Report Overview on page 369
Edit a Customized Report
5. Return to the Report Browser. Expand the original predefined report to view associated customized
reports.
NEXT STEPS:
l Run the customized report from the Report tab. See Run a Report on page 370.
RELATED TOPICS:
l Create a Customized Report on page 372
l Run a Report on page 370
l Download a Report on page 374
l Report Overview on page 369
Download a Report
Download reports from the report output or from the Jobs History pane. Reports can be downloaded as
HTML files, Adobe PDFs, Microsoft Excel spreadsheets, and Microsoft Word files.
Best Practice: Generated reports are automatically removed from the Jobs History pane seven days
after their initial run. To save a report indefinitely, download it as an HTML file, Adobe PDF, Microsoft Excel
spreadsheet, or Microsoft Word file. Click Download while viewing an open report or download a
previously generated report from the Jobs History pane.
Note: Some report images appear truncated when viewed in Microsoft Word. For best results, view
downloaded reports through the Web Layout.
To download a report:
1. Run a report from the Report Browser pane.
2. Click Download.
3. Select HTML, PDF, Excel, or Word. You can view the report now or save it to a file.
Reports are saved on the IBM Spectrum Copy Data Management appliance in the following directory:
l Report Download location: /data/reports/output
RELATED TOPICS:
l Run a Report on page 370
l Create a Customized Report on page 372
Delete a Generated Report
RELATED TOPICS:
l Download a Report on page 374
l Run a Report on page 370
Application Reports
The Application Reports help you review your application server configuration. Use the Application Reports to
view your application server's database, log disks, and eligibility for protection. Reports are based on the data
collected by the most recently run job.
Choose the Application reports that fit your needs:
l Application Configuration Report - Review the configuration of your application servers, including the
disks on which the database and logs reside.
l Application RPO Compliance Report - The Application RPO Compliance report displays your
application servers in relation to your recovery point objective parameters.
RELATED TOPICS:
l Application Configuration Report on page 378
l Application RPO Compliance Report on page 381
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
l Download a Report on page 374
Application Configuration Report
Parameters
Use the following parameters to customize your report:
l Application Type: File System, InterSystems Caché, Microsoft SQL, Oracle, SAP HANA. Multiple
selections are supported.
The default report parameter, All, reports on all available application types in your configuration.
The following fields and corresponding data display in the SAP HANA section of the Application Configuration
report:
Instance
The name and location of the SAP HANA instance.
Database
The name of the database associated with the SAP HANA instance and application server.
Data Disk(s)
The name of the associated database disk mount points, along with the associated volume and storage
array in parentheses.
Log Disk(s)
The name of the associated log disk mount points, along with the associated volume and storage array in
parentheses.
Eligible for Protection
The status of the database's eligibility for protection.
RELATED TOPICS:
l Application Reports on page 377
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
Application RPO Compliance Report
Parameters
Use the following parameters to customize your report:
l Application Type
InterSystems Caché, Microsoft SQL, Oracle, SAP HANA. Multiple selections are supported.
l Application Server
Multiple selections are supported.
l Protection Type
Set the protection types to return in the report. Values include Primary and Replication. Multiple selections
are supported.
l Storage Vendor
Set the storage vendor types to display in the report. Multiple selections are supported.
l Display Databases That Are
Set the compliance status of your application server to return in the report. Values include Compliant
and Not Compliant. Multiple selections are supported. By default, this parameter is set to Not
Compliant.
l RPO Older Than
Set the age of the recovery point objective in days.
The default report parameters report on all non-compliant application servers based on an RPO older than
one day.
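The compliance test behind these parameters can be read as a simple age check on the most recent successful backup. The following sketch is illustrative only, not the product's actual logic, and the function name is hypothetical:

```python
from datetime import datetime, timedelta, timezone

def is_rpo_compliant(last_protection_time, rpo_days, now=None):
    """Illustrative sketch: a database is treated as compliant when its
    most recent successful backup (Protection Time) is no older than the
    'RPO Older Than' threshold, expressed in days."""
    now = now or datetime.now(timezone.utc)
    return now - last_protection_time <= timedelta(days=rpo_days)

now = datetime.now(timezone.utc)
print(is_rpo_compliant(now - timedelta(hours=12), 1, now))  # True
print(is_rpo_compliant(now - timedelta(days=2), 1, now))    # False
```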
Quick View
The Quick View section displays a bar graph of compliant and non-compliant application servers based on
your RPO parameters.
Protection Time
The most recent instance of a successful run of the Backup job.
Compliance Time Remaining
The time remaining before your application server will be non-compliant.
Job
The Backup job associated with the application server.
Storage Vendor
The storage vendor associated with the application server.
Protection Time
The most recent instance of a successful run of the Backup job.
Compliance Time Remaining
The time remaining before your application server will be non-compliant.
RELATED TOPICS:
l Application Reports on page 377
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
System Management Reports
RELATED TOPICS:
l Catalog Summary Report on page 386
l Configuration Report on page 388
l Job Report on page 392
l Job Sessions Report on page 395
l System Sizing Report
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
l Download a Report on page 374
Catalog Summary Report
Parameters
Use the following parameters to customize your report:
l Catalog Type: All, Application, DellEMC Unity, IBM Spectrum Accelerate, IBM Spectrum Virtualize,
NetApp ONTAP, NetApp ONTAP Node Summary, Pure Storage FlashArray, and VMware. Multiple
selections are supported.
The default report parameter, All, reports on all available Catalog information.
Last Updated
The date and time of the most recent Catalog update.
RELATED TOPICS:
l System Management Reports on page 385
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
Configuration Report
Review the IBM Spectrum Copy Data Management providers added to IBM Spectrum Copy Data
Management and their associated configurations.
Use the Configuration report to answer questions such as:
l What is the operating system, memory, and processor information of my nodes?
Parameters
Use the following parameters to customize your report:
l Configuration Type: All, AWS Nodes, Application Nodes, DellEMC Unity Nodes, IBM Spectrum
Accelerate Nodes, IBM Spectrum Protect Snapshot Nodes, IBM Spectrum Virtualize Nodes, NetApp
ONTAP Nodes, Pure Storage FlashArray Nodes, and VMware Nodes. Multiple selections are supported.
The default report parameter, All, reports on all available nodes in your configuration.
VMware Nodes
The following fields and corresponding data display in the VMware Nodes section of the Configuration report:
Node (Site)
The name of the vCenter node and the associated site.
Type
The node type.
Product Version
The installed product version on the vCenter node.
OS Type
The installed operating system type on the vCenter node.
The IBM Spectrum Accelerate version number.
System Name
The system name of the IBM Spectrum Accelerate node.
Application Nodes
The following fields and corresponding data display in the Application Nodes section of the Configuration
report:
Node (Site)
The name of the Oracle node and the associated site.
Application Type
The type of application server, including SQL or Oracle.
OS Type
The installed operating system type on the application node.
Server Type
The application server type, including physical or virtual.
AWS Nodes
The following fields and corresponding data display in the AWS Nodes section of the Configuration report:
Node (Site)
The name of the AWS node and the associated site.
Region
The AWS node's associated region.
Node (Site)
The name of the Pure Storage FlashArray node and the associated site.
Version
The Pure Storage FlashArray version number.
RELATED TOPICS:
l System Management Reports on page 385
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
Job Report
Review the available jobs in your IBM Spectrum Copy Data Management configuration. Run the Job report to
view jobs by type, their average runtime, and their successful run percentage.
Use the Job report to answer questions such as:
l What types of jobs are available?
l What is the average runtime of a job, and when was it last successfully run?
l How many times has a specific job run successfully or failed?
Parameters
Use the following parameters to edit your report:
l Job Type
Multiple selections are supported.
l Days Since Successful Run
l Show Job Details
Enable to display the Job ID field in the Detail View section.
Quick View
The Quick View section displays a pie chart of the number of times a job type successfully completed, failed,
or was marked with any of the following statuses: unknown, waiting, running, stopped, partial, skipped,
aborted, or stopping. Use the Job Type parameter to display File, NetApp ONTAP, Report, Script, VMware, or
all job types.
Note: The Quick View section is only modified through the Job Type parameter.
Summary View
The following fields and corresponding data display in the Summary View section of the Jobs report:
Job Type
The type of job. For example, a catalog or report job.
Jobs
The number of jobs associated with the job type.
Runs
The number of times a job of this type ran.
Completed
The number of times a job of this type successfully completed.
Failed
The number of times a job of this type failed.
Other
The number of times a job of this type was marked with any of the following statuses: unknown, waiting,
running, stopped, partial, skipped, aborted, or stopping.
Detail View
The following fields and corresponding data display in the Detail View section of the Jobs report:
Job
The job name.
Job ID
The job ID of the displayed job. This field is controlled by the Show Job Details parameter.
Type
The type of job. For example, a catalog or report job.
Runs
The number of times the job ran.
Completed
The number of times the job successfully completed.
Failed
The number of times the job failed.
Other
The number of times the job was marked with any of the following statuses: unknown, waiting, running,
stopped, partial, skipped, aborted, or stopping.
Last Successful Run
The date and time the job last ran successfully.
Average Runtime (Days hh:mm:ss)
The average time it takes to run the job, listed in days, hours, minutes, and seconds.
Success %
The percentage of times the job successfully completed.
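The Success % and Average Runtime columns can be derived directly from run history. The following is a minimal illustrative sketch of that arithmetic in Python, not product code; the run records and their layout are hypothetical:

```python
from datetime import timedelta

# Hypothetical run records: (completed_ok, runtime_seconds)
runs = [(True, 4000), (True, 5200), (False, 300), (True, 90000)]

# Success %: completed runs as a percentage of all runs
completed = sum(1 for ok, _ in runs if ok)
success_pct = round(100 * completed / len(runs))

# Average Runtime, formatted as "Days hh:mm:ss"
avg = timedelta(seconds=sum(s for _, s in runs) / len(runs))
days, rem = divmod(int(avg.total_seconds()), 86400)
h, rem = divmod(rem, 3600)
m, s = divmod(rem, 60)
print(f"{success_pct}% success, average runtime {days} {h:02}:{m:02}:{s:02}")
```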
Job Sessions Report
Parameters
Use the following parameters to edit your report:
l Job Type
Multiple selections are supported.
l Job
Multiple selections are supported.
l Status
Multiple selections are supported.
l Protection Type
Multiple selections are supported.
l Job Sessions for Past Number of Days
Enter a number for the number of days.
The default report parameters report on all available jobs, with Job Sessions for Past Number of Days set to 1.
Detail View
The following fields and corresponding data display in the Detail View section of the Job Sessions report:
Job
The job name.
Type
The type of job. For example, a catalog or report job.
Status
The status of the job session.
Start Time
The time, in UTC, that the job session started.
Finish Time
The time, in UTC, that the job session finished.
Duration (Days hh:mm:ss)
The total time that it takes for the job session to run, listed in days, hours, minutes, and seconds.
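Because both timestamps are recorded in UTC, Duration is simply the difference between finish and start. A minimal sketch of that computation, using hypothetical session times:

```python
from datetime import datetime, timezone

start = datetime(2021, 7, 30, 22, 15, 0, tzinfo=timezone.utc)   # hypothetical Start Time
finish = datetime(2021, 7, 31, 1, 20, 30, tzinfo=timezone.utc)  # hypothetical Finish Time

# Duration (Days hh:mm:ss): break the difference into day/hour/minute/second parts
total = int((finish - start).total_seconds())
days, rem = divmod(total, 86400)
h, rem = divmod(rem, 3600)
m, s = divmod(rem, 60)
duration = f"{days} {h:02}:{m:02}:{s:02}"
print(duration)  # 0 03:05:30
```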
RELATED TOPICS:
l System Management Reports on page 385
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
File Analytics Reports
Quick View
This area of the report is a graphical illustration of the report using pie charts. For example, the quick view of
the Files by Age report shows the age of all files on a volume.
Detail View
This area of the report is a table where each row details a node, its corresponding volume, and details
returned by the report. For example, the Files By Size report shows the largest files on your node, their size,
and the last time they were accessed.
RELATED TOPICS:
l File Usage by Owner Report on page 399
l Files By Age Report on page 401
File Usage by Owner Report
Parameters
Use the following parameters to customize your report:
l NetApp ONTAP Storage
l Volume
l Limit No. of Owners to View
Select Yes to limit the number of owners to view through the Enter No. of Owners to View parameter.
Select No to display all owners in the report.
l Number of Owners to View
If the Limit No. of Owners to View parameter is set to Yes, set the number of owners to display in the
Details section of the report.
l Export Date Format
Set the date format to use when exporting data.
The default report parameters report on the owners consuming the most space on all storage systems and volumes.
Quick View
The Quick View section displays a graph of the top ten owners consuming the most space, as well as the top ten owners with the largest number of files on your storage systems.
Note: The Quick View section is modified through the NetApp ONTAP Storage, Volume, and Number of
Owners to View parameters.
Detail View
The following fields and corresponding data display in the Detail View section of the File Usage by Owner
report:
Owner
The assigned volume owner that is consuming the largest amount of space on your selected storage
systems.
Total Used
The amount of space on a storage system used by the owner.
File Count
The number of files on the storage system per owner.
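Conceptually, the report aggregates usage per owner and truncates to the requested number of owners. A minimal sketch of that aggregation, with illustrative data and names rather than the product's data model:

```python
# Hypothetical (owner, bytes_used) pairs collected from volumes
usage = [("alice", 500), ("bob", 1200), ("alice", 800), ("carol", 300)]

# Total Used per owner
totals = {}
for owner, used in usage:
    totals[owner] = totals.get(owner, 0) + used

# Number of Owners to View: keep only the top-N consumers
n = 2
top = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]
print(top)  # [('alice', 1300), ('bob', 1200)]
```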
RELATED TOPICS:
l File Analytics Reports on page 397
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
Files By Age Report
Parameters
Use the following parameters to customize your report:
l NetApp ONTAP Storage
l Volume
Multiple selections are supported.
l Date accessed, created, or modified.
l File Size Equal To or Greater Than (GB)
Set the minimum file size to display. By default, only files greater than 1 GB display.
l Limit to File Extensions
Set the file types to include in the report by their extension. Separate multiple extensions by commas.
l Export Data
Set the age of data to export, from greater than 180 days to greater than 10 years. A list of files based
on age is exported. Each row in the file contains directory path, file name, type, and other metadata
related to the file. By default, this parameter is set to No. Set the location of the export file on the Run
Report dialog, which displays before the report is run. By default, the file is exported to the local
/data/reports directory.
l Export Date Format
Set the date format to use when exporting data.
The default report parameters report on the age of files greater than 1 GB on all storage systems and
volumes.
Quick View
The Quick View section displays a pie chart of the storage used by files based on creation date and the last
time modified. Use the NetApp ONTAP Storage parameter to display volumes on all storage systems or a
specific storage system.
Note: The Quick View section is modified through the Date, Include Deleted Files, NetApp ONTAP Storage,
and Volume parameters.
Detail View
The following fields and corresponding data display in the Detail View section of the Files by Age report:
File Age (days)
Age groups including older than ten years, five to ten years, four to five years, three to four years, two to
three years, one to two years, one year to 180 days, and less than 180 days.
Total Data
The amount of space used by files in the associated age group.
File Count
The number of files in the associated age group.
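The age groups above can be thought of as half-open ranges measured in days. A minimal sketch of the bucketing logic; the exact boundary handling is an assumption, not documented product behavior:

```python
# Age groups from the Files by Age report, as (label, min_days, max_days)
BUCKETS = [
    ("< 180 days", 0, 180),
    ("180 days - 1 year", 180, 365),
    ("1-2 years", 365, 730),
    ("2-3 years", 730, 1095),
    ("3-4 years", 1095, 1460),
    ("4-5 years", 1460, 1825),
    ("5-10 years", 1825, 3650),
    ("> 10 years", 3650, float("inf")),
]

def bucket(age_days: int) -> str:
    """Return the age-group label for a file of the given age."""
    for label, lo, hi in BUCKETS:
        if lo <= age_days < hi:
            return label
    return "unknown"

print(bucket(400))  # a file ~400 days old falls in "1-2 years"
```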
RELATED TOPICS:
l File Analytics Reports on page 397
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
Files By Category Report
Parameters
Use the following parameters to customize your report:
l NetApp ONTAP Storage
l Volume
l Number of Categories to View
l Exclude File Extensions
You can exclude file extensions from the report using the Exclude File Extensions parameter. Enter
extensions without a leading period, and separate multiple extensions to exclude with a comma. For
example, enter raw, exe, bmp to exclude files with .RAW, .EXE, and .BMP extensions from the report.
Enter a space to exclude files without extensions. Note that filters are not case sensitive.
Note: Files without extensions are grouped in a category labeled {Empty} in the Quick View and Detail
View sections.
l Export Date Format
Set the date format to use when exporting data.
The default report parameters report on ten file categories on all storage systems and volumes. Select up to
100 categories.
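The exclusion semantics described above (case-insensitive matching, comma-separated extensions, a lone space for files without extensions) can be sketched as follows. The helper is illustrative only, not a product API:

```python
def excluded(filename: str, excludes: str) -> bool:
    """Return True if the file's extension appears in the comma-separated
    exclude list. Matching is case-insensitive; a lone space in the list
    excludes files without an extension (the {Empty} category)."""
    terms = [t.strip().lower() for t in excludes.split(",")]
    name = filename.rsplit("/", 1)[-1]
    ext = name.rsplit(".", 1)[-1].lower() if "." in name else ""
    return ext in terms

print(excluded("photo.RAW", "raw, exe, bmp"))  # True
print(excluded("README", " "))                 # True: no extension
```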
Quick View
The Quick View section displays graphs of the storage used by a file category and the number of files in a file
category. Use the NetApp ONTAP Storage parameter to display volumes on all storage systems or a specific
storage system.
Note: The Quick View section is modified through the Number of Categories to View, Exclude File
Extensions, NetApp ONTAP Storage, and Volume parameters.
Detail View
The following fields and corresponding data display in the Detail View section of the Files By Category report:
File Extension
The file type included in the report (for example .txt, .exe, or .bin).
Total Used
The amount of space on a storage system used by the file type.
File Count
The number of files associated with the file type.
RELATED TOPICS:
l File Analytics Reports on page 397
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
Files By Size Report
Parameters
Use the following parameters to customize your report:
l NetApp ONTAP Storage
l Volume
l Number of Largest Files to View
l Export Date Format
Set the date format to use when exporting data.
The default report parameters report on the ten largest files on all storage systems and volumes. Select up to
100 files.
Quick View
The Quick View section displays a pie chart of the largest files on your storage systems compared to the total
storage. Use the NetApp ONTAP Storage parameter to display volumes on all storage systems or a specific
storage system.
Note: The Quick View section is modified through the Number of Largest Files to View, NetApp ONTAP
Storage, and Volume parameters.
Detail View
The following fields and corresponding data display in the Detail View section of the Files by Size report:
Node
The physical server where your files are stored.
Volume
The name of the volume on the node.
File
The name of the file returned by the report, including the path.
Owner
The assigned volume owner or node where the volume resides. To view file ownership information, create
and run a NetApp ONTAP File Inventory job with the Traversal Mode set to Filewalk in conjunction with the
IBM Spectrum Copy Data Management Filewalker tool before running this report.
No. of Copies
The number of copies of the file that exist in the Inventory.
SnapMirror
The SnapMirror associated with the file.
SnapVault
The SnapVault associated with the file.
Size
The amount of space on the node used by the file.
Last Time Modified
The date and time at which the file was last modified.
RELATED TOPICS:
l File Analytics Reports on page 397
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
Protection Compliance Reports
DellEMC Unity RPO Compliance Report
Use the DellEMC Unity RPO Compliance report to answer questions such as:
l Which of my DellEMC Unity storage systems are not RPO compliant for replication or snapshot
protection?
l Which of my DellEMC Unity Backup jobs have never run successfully?
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Resource Type
Set the resource type to return in the report. Resource types include file system and LUN.
l Protection Type
Set the RPO compliance protection type. Protection types include replication and snapshot. Multiple
selections are supported. By default, this parameter is set to All.
l Display Resources That Are
Set the compliance status of your DellEMC Unity storage systems to return in the report. Values include
Compliant and Not Compliant. Multiple selections are supported. By default, this parameter is set to Not
Compliant.
l RPO Older Than
Set the age of the recovery point objective in days.
The default report parameters report on all non-compliant DellEMC Unity storage systems based on an
RPO older than one day.
Quick View
The Quick View section displays a bar graph of compliant and non-compliant DellEMC Unity storage systems
based on your RPO parameters.
Last Successful Protection Time
The most recent instance of a successful run of the Backup job.
Compliance Time Remaining
The time remaining before your DellEMC Unity storage system will be non-compliant.
Destination
The destination of the replication.
Job Name
The Replication Backup job associated with the DellEMC Unity storage system.
Protection Time
The most recent instance of a successful run of the Backup job.
Compliance Time Remaining
The time remaining before your DellEMC Unity storage system will be non-compliant.
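RPO compliance and the remaining compliance time reduce to date arithmetic against the RPO Older Than window. A minimal sketch under that assumption; the function and names are illustrative, not product code:

```python
from datetime import datetime, timedelta, timezone

def compliance(last_success: datetime, rpo_days: int, now: datetime):
    """Return (is_compliant, time_remaining). A resource is compliant
    while its last successful protection run is newer than the RPO
    window; time_remaining corresponds to Compliance Time Remaining."""
    deadline = last_success + timedelta(days=rpo_days)
    return now <= deadline, max(deadline - now, timedelta(0))

now = datetime(2021, 7, 30, 12, 0, tzinfo=timezone.utc)
ok, left = compliance(now - timedelta(hours=20), rpo_days=1, now=now)
print(ok, left)  # True 4:00:00 — compliant, four hours of the RPO window left
```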
RELATED TOPICS:
l Protection Compliance Reports on page 407
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
File System RPO Compliance Report
Parameters
Use the following parameters to customize your report:
l Host
Multiple selections are supported.
l Protection Type
Set the protection types to return in the report. Values include Primary and Replication. Multiple selections
are supported.
l Storage Vendor
Set the storage vendor types to display in the report. Multiple selections are supported.
l Display File Systems That Are
Set the compliance status of your file system to return in the report. Values include Compliant and Not
Compliant. Multiple selections are supported. By default, this parameter is set to Not Compliant.
l RPO Older Than
Set the age of the recovery point objective in days.
The default report parameters report on all non-compliant file systems based on an RPO older than one day.
Quick View
The Quick View section displays a bar graph of compliant and non-compliant file systems based on your
RPO parameters.
The following fields and corresponding data display in the Not Compliant for Primary Protection section of the
File System RPO Compliance report:
File System Mount Point(s)
The file system's associated mount points.
Host (OS Type)
The hostname and operating system type. For example, Windows, AIX, or Linux.
Job Name
The Backup job associated with the file system.
Storage Vendor
The storage vendor associated with the file system.
Last Successful Protection Time
The most recent instance of a successful run of the Backup job.
Reason
The reason the file system does not meet your RPO compliance parameters. Examples include no
successful runs of a protection job, or backing up to an unsupported disk.
The following fields and corresponding data display in the Not Compliant for Replication section of the File
System RPO Compliance report:
File System Mount Point(s)
The file system's associated mount points.
Source
The source of the secondary protection.
Destination
The destination of the secondary protection.
Job Name
The Backup job associated with the file system.
Protection Time
The most recent instance of a successful run of the Backup job.
Storage Vendor
The storage vendor associated with the file system.
Reason
The reason the file system does not meet your RPO compliance parameters. Examples include no
successful runs of a protection job, or backing up to an unsupported disk.
HPE Nimble Storage RPO Compliance Report
RELATED TOPICS:
l Protection Compliance Reports on page 407
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
Parameters
Use the following parameters to customize your report:
l HPE Nimble Storage
Multiple selections are supported.
l Protection Type
Set the HPE Nimble Storage protection type to return in the report. Values include Primary and
Replication. Multiple selections are supported.
l Display Resources That Are
Set the compliance status of your HPE Nimble Storage systems to return in the report. Values include
Compliant and Not Compliant. Multiple selections are supported. By default, this parameter is set to Not
Compliant.
l RPO Older Than
Set the age of the recovery point objective in days.
Quick View
The Quick View section displays a bar graph of compliant and non-compliant HPE Nimble Storage systems
based on your RPO parameters.
Volume
The name of the HPE Nimble Storage system volume.
Source Storage Array
The source of the secondary protection.
Destination Storage Array
The destination of the secondary protection.
Job Name
The Backup job associated with the HPE Nimble Storage system.
Last Successful Protection Time
The most recent instance of a successful run of the protection job.
Reason
The reason the HPE Nimble Storage system does not meet your RPO compliance parameters.
Examples include no successful runs of a protection job, or backing up to an unsupported disk.
IBM Spectrum Accelerate RPO Compliance Report
Use the IBM Spectrum Accelerate RPO Compliance report to answer questions such as:
l Which of my IBM storage systems are not RPO compliant for Flash Copy protection?
l Which of my IBM Backup jobs have never run successfully?
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Display Resources That Are
Set the compliance status of your IBM storage systems to return in the report. Values include Compliant
and Not Compliant. Multiple selections are supported. By default, this parameter is set to Not
Compliant.
l RPO Older Than
Set the age of the recovery point objective in days.
Quick View
The Quick View section displays a bar graph of compliant and non-compliant IBM storage systems based on
your RPO parameters.
Storage Array
The name of the IBM storage system.
Location
The location of the IBM storage system.
Job Name
The Backup job associated with the IBM storage system.
Last Successful Protection Time
The most recent instance of a successful run of the Backup job.
Reason
The reason the IBM storage system does not meet your RPO compliance parameters. Examples include no
successful runs of a protection job, or backing up to an unsupported disk.
RELATED TOPICS:
l Protection Compliance Reports on page 407
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
IBM Spectrum Virtualize RPO Compliance Report
Use the IBM Spectrum Virtualize RPO Compliance report to answer questions such as:
l Which of my IBM storage systems are not RPO compliant for Flash Copy protection?
l Which of my IBM Backup jobs have never run successfully?
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Protection Type
Set the IBM protection type to return in the report. Values include FlashCopy and Global Mirror with
Change Volumes. Multiple selections are supported.
l Display Resources That Are
Set the compliance status of your IBM storage systems to return in the report. Values include Compliant
and Not Compliant. Multiple selections are supported. By default, this parameter is set to Not
Compliant.
l RPO Older Than
Set the age of the recovery point objective in days.
The default report parameters report on all non-compliant IBM storage systems based on an RPO older than
one day.
Quick View
The Quick View section displays a bar graph of compliant and non-compliant IBM storage systems based on
your RPO parameters.
Last Successful Protection Time
The most recent instance of a successful run of the Backup job.
Compliance Time Remaining
The time remaining before your IBM storage system will be non-compliant. Hover over the bar to see the
remaining compliance time.
RELATED TOPICS:
l Protection Compliance Reports on page 407
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
NetApp ONTAP Protection Usage Report
Use the NetApp ONTAP Protection Usage report to answer questions such as:
l What is the secondary protection storage usage across my volumes?
l What is the combined size of all of my volumes on a NetApp ONTAP storage system?
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Protection Type
Set the NetApp ONTAP protection type. Protection types include SnapMirror, SnapVault, and
Snapshot. Multiple selections are supported.
l Job Type
Set the IBM Spectrum Copy Data Management protection jobs to display in the report. Multiple
selections are supported.
Quick View
The Quick View section displays a bar graph of the storage protection usage across all of the volumes on the
selected NetApp ONTAP storage system.
Location
The location of the primary source volume.
Job Names
The names of the associated IBM Spectrum Copy Data Management protection jobs.
Snapshot Count
The number of snapshots available on the volume.
Oldest Snapshot Creation Time
The creation date and time of the oldest snapshot on the volume.
Volume Size
The total size of the volume.
Volume Used Size
The size of the volume occupied by data.
Snapshot Usage
The amount of space on the volume dedicated to snapshot protection.
RELATED TOPICS:
l Protection Compliance Reports on page 407
l Report Overview on page 369
NetApp ONTAP RPO Compliance Report
Use the NetApp ONTAP RPO Compliance report to answer questions such as:
l Which of my NetApp ONTAP storage systems are not RPO compliant for SnapVault or SnapMirror
protection?
l Which of my NetApp ONTAP Backup jobs have never run successfully?
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Protection Type
Set the RPO compliance protection type. Protection types include SnapMirror, SnapVault, and
Snapshot. Multiple selections are supported. By default, this parameter is set to All.
l Display Resources That Are
Set the compliance status of your NetApp ONTAP storage systems to return in the report. Values
include Compliant and Not Compliant. Multiple selections are supported. By default, this parameter is
set to Not Compliant.
l RPO Older Than
Set the age of the recovery point objective in days.
The default report parameters report on all non-compliant NetApp ONTAP storage systems based on an
RPO older than one day.
Quick View
The Quick View section displays a bar graph of compliant and non-compliant NetApp ONTAP storage
systems based on your RPO parameters.
The following fields and corresponding data display in the Not Compliant for Replication section of the NetApp
ONTAP RPO Compliance report:
Source
The source of the secondary protection.
Destination
The destination of the secondary protection.
Job Name
The SnapVault or SnapMirror Backup job associated with the NetApp ONTAP storage system.
Protection Time
The most recent instance of a successful run of the protection job.
Reason
The reason the NetApp ONTAP storage system does not meet your RPO compliance parameters.
Examples include no successful runs of a protection job, or backing up to an unsupported disk.
RELATED TOPICS:
l Protection Compliance Reports on page 407
l Report Overview on page 369
Pure Storage FlashArray RPO Compliance Report
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Protection Type
Set the Pure Storage FlashArray protection type to return in the report. Values include Primary and
Replication. Multiple selections are supported.
l Display Resources That Are
Set the compliance status of your Pure Storage systems to return in the report. Values include
Compliant and Not Compliant. Multiple selections are supported. By default, this parameter is set to Not
Compliant.
l RPO Older Than
Set the age of the recovery point objective in days.
Quick View
The Quick View section displays a bar graph of compliant and non-compliant Pure Storage FlashArray
systems based on your RPO parameters.
Location
The location of the Pure Storage FlashArray system.
Job Name
The Backup job associated with the Pure Storage FlashArray system.
Last Successful Protection Time
The most recent instance of a successful run of the Backup job.
Reason
The reason the Pure Storage FlashArray system does not meet your RPO compliance parameters.
Examples include no successful runs of a protection job, or backing up to an unsupported disk.
Job Name
The Backup job associated with the Pure Storage FlashArray storage system.
Last Successful Protection Time
The most recent instance of a successful run of the protection job.
Reason
The reason the Pure Storage FlashArray storage system does not meet your RPO compliance parameters.
Examples include no successful runs of a protection job, or backing up to an unsupported disk.
RELATED TOPICS:
l Protection Compliance Reports on page 407
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
Recovery Points Report
The Recovery Points report displays recovery points of protected resources. View the protection
type, job name, and location of your available recovery points.
Parameters
Use the following parameters to customize your report:
l Site
Multiple selections are supported.
l Resource Type
Set the resource type to return in the report. Resource types include Oracle Database,
SAP HANA Database, SQL Database, VMware Datastores, and VMware VMs. By default,
this parameter is set to All.
l Protection Type
Set the protection type to display in the report. Protection types include primary and
secondary. Multiple selections are supported. By default, this parameter is set to All.
l Show Recovery Points
Set the age of the recovery point objective in days.
Recovery Points
The following fields and corresponding data display in the Recovery Points section of the
Recovery Points report:
Resource Name
The name and location of the protected resource.
Site
The site associated with the protected resource.
Protection Time
The most recent instance of a successful run of the Backup job.
Protection Type
The recovery point's protection type.
Job Name
The Backup job associated with the recovery point.
Volumes (Location)
The volumes associated with the displayed recovery point.
RELATED TOPICS:
l Protection Compliance Reports on page 407
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
Unprotected Virtual Machines Report
Use the Unprotected Virtual Machines report to answer questions such as:
l How many of my virtual machines are eligible for primary Storage Snapshot protection?
l How much storage space is used within a given time period?
Parameters
Use the following parameters to customize your report:
l vCenter
Multiple selections are supported.
l Power State
Set the power state of virtual machines returned by the report. Power states include Powered On,
Powered Off, Suspended, or All. Multiple selections are supported. By default, this parameter is set to
All.
The default report parameters report on unprotected virtual machines on all vCenters, in any power state.
Location
The location of the virtual machine.
Hostname
The host node where the virtual machine resides.
Operating System
The operating system associated with the virtual machine.
Provisioned Space
The amount of space on the datastore allocated for virtual disk files.
Datastores
The name of the associated datastore.
RELATED TOPICS:
l Protection Compliance Reports on page 407
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
VMware RPO Compliance Report
Use the VMware RPO Compliance report to answer questions such as:
l Which of my VMware Backup jobs have never run successfully?
l What is the remaining compliance time of a specific VMware object?
Parameters
Use the following parameters to customize your report:
l vCenter
Multiple selections are supported.
l Primary Protection Resource
Set the primary protection resource type to display in the report. Resource types include Datastore and
Virtual Machine. Multiple selections are supported. By default, this parameter is set to All.
l Replication Resource
Set the replication resource type to display in the report. Resource types include Datastore and Virtual
Machine. Multiple selections are supported. By default, this parameter is set to All.
l Protection Type
Set the RPO compliance protection type. Protection types include Primary and Replication. Multiple
selections are supported. By default, this parameter is set to All.
l Storage Vendor
Set the storage vendor types to display in the report. Multiple selections are supported.
The default report parameters report on all non-compliant VMware objects based on an RPO older than one
day.
Quick View
The Quick View section displays a bar graph of compliant and non-compliant VMware objects based on your
RPO parameters.
Reason
The reason the VMware object is not compliant for primary protection. Examples include no successful runs
of a protection job, or backing up to an unsupported disk.
Last Successful Protection Time
The most recent instance of a successful run of the Backup job.
Storage Vendor
The storage vendor associated with the VMware object.
Reason
The reason the VMware object is not compliant for secondary protection. Examples include no successful
runs of a protection job, or backing up to an unsupported disk.
RELATED TOPICS:
l Protection Compliance Reports on page 407
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
Storage Protection Reports
RELATED TOPICS:
l NetApp ONTAP OSSV Relationship Status Report on page 446
l NetApp ONTAP Overprotected Volumes Report on page 448
NetApp ONTAP OSSV Relationship Status Report
Parameters
Use the following parameters to customize your report:
l OSSV Primary Node
Multiple selections are supported.
l Days Since Last Backup
l Record Limit
The default report parameters display the relationship status of all OSSV nodes.
RELATED TOPICS:
l Storage Protection Reports on page 444
l Report Overview on page 369
NetApp ONTAP Overprotected Volumes Report
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Acceptable Snapshots
l Is Volume SnapMirrored?
l Is Volume SnapVaulted?
The default report parameters display all overprotected volumes with ten or more snapshots.
Overprotection Reason
The reason the volume was returned by the report. For example, the number of snapshots is larger than
the defined Acceptable Snapshots parameter, or the volume is SnapVaulted while the Is Volume
SnapVaulted parameter is set to No.
Overprotection Storage Cost
The amount of space on the volume dedicated to overprotection.
RELATED TOPICS:
l Storage Protection Reports on page 444
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
NetApp ONTAP Qtree Protection Status Report
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Protection Type
l Lag time in days
l Show Protected Qtrees
l Record Limit
The default report parameters display the unprotected qtrees for all storage systems and all volumes.
Detail View
The following fields and corresponding data display in the following sections: NetApp ONTAP SnapMirrored
Qtrees in an Unprotected State, Qtrees Protected by SnapMirror, NetApp ONTAP SnapVaulted Qtrees in an
Unprotected State, and Qtrees Protected by SnapVault.
Storage Array
The physical server where your files are stored.
Source
The name of the protected or unprotected qtree.
Destination
The node where the replication destination is located.
State
The state of the destination (for example, SnapMirrored, SnapVaulted, broken-off, uninitialized, or
unknown).
Lag Time
The lag time in days, hours, minutes, and seconds between the source and the destination.
Xfer Throughput
The transfer throughput in KBs per second between the source and the destination.
RELATED TOPICS:
l Storage Protection Reports on page 444
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide NetApp ONTAP Underprotected Volumes Report
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Acceptable Snapshots
l Lag time in days
l Is Volume SnapMirrored?
l Is Volume SnapVaulted?
The default report parameters display underprotected volumes with fewer than ten snapshots that are older
than seven days.
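One plausible reading of these defaults is sketched below: count the snapshots taken within the lag window and flag the volume when too few exist. This is an assumption about the check, not the product's logic:

```python
from datetime import datetime, timedelta

def is_underprotected(snapshot_times, acceptable_snapshots=10,
                      max_age_days=7, now=None):
    # Count snapshots taken within the last `max_age_days` days and flag the
    # volume when fewer than `acceptable_snapshots` of them exist.
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    recent = sum(1 for t in snapshot_times if t >= cutoff)
    return recent < acceptable_snapshots

base = datetime(2021, 7, 30)
stale = [base - timedelta(days=30)] * 3   # only month-old snapshots
print(is_underprotected(stale, now=base))  # True: nothing recent enough
```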
Underprotected Volumes
The following fields and corresponding data display in the Underprotected Volumes section of the NetApp
ONTAP Underprotected Volumes report:
Volume
The name of the underprotected volume.
Storage Array
The node where the underprotected volume is located.
Location
The node where the volume resides.
Underprotection Reason
The reason the volume was returned by the report. For example, if the number of snapshots is lower than
the defined No. of Acceptable Snapshots parameter, or if the snapshots have excessive lag times.
RELATED TOPICS:
l Storage Protection Reports on page 444
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide NetApp ONTAP Volume Protection Status Report
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Protection Type
l Lag time in days
l Show Protected Volumes
The SnapMirror type for clustered volumes. For example, Vault or Mirror. Blank entries indicate non-
clustered 7-Mode volumes.
Lag Time
The lag time in days, hours, minutes, and seconds between the source and the destination.
Xfer Throughput (NetApp ONTAP Volumes Protected by SnapMirror/SnapVault section only)
The transfer throughput in KBs per second between the source and the destination.
RELATED TOPICS:
l Storage Protection Reports on page 444
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide NetApp ONTAP Transition Dependency Report
Parameters
Use the following parameters to customize your report:
l Storage Array
Set the NetApp ONTAP storage systems to display in the report. Multiple selections are supported.
l Show Detailed View
Enable to show detailed volume, SnapMirror, and SnapVault data in the report.
Summary View
The following fields and corresponding data display in the Summary View section of the NetApp ONTAP
Transition Dependency report:
Storage Array
The name of the NetApp ONTAP storage system source, along with the operating system version running
on the source node.
SnapMirror Destination Nodes (Count)
The name of the SnapMirror destination node. The relationship count displays in parentheses.
SnapVault Destination Nodes (Count)
The name of the SnapVault destination node. The relationship count displays in parentheses.
Destination Node OS Version
The operating system version running on the destination node.
Detail View
If the Show Detailed View parameter is set to Yes, the following fields and corresponding data display in the
Detail View section of the NetApp ONTAP Transition Dependency report:
Storage Array: Volume
The name of the NetApp ONTAP storage system and associated source volume, along with the size of the
source volume.
SnapMirror Destination
The name of the SnapMirror destination and associated volume.
SnapVault Destination
The name of the SnapVault destination and associated volume.
RELATED TOPICS:
l Storage Protection Reports on page 444
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide Storage Utilization Reports
The information in these reports is presented in a chart-based Quick View section or in tabular Summary
View and Detail View sections.
Choose the Storage Utilization report that fits your needs.
l DellEMC Unity File Systems Report - Review the storage utilization of your DellEMC Unity file systems.
l DellEMC Unity LUNs Report - Review the total capacity of your LUNs, the total free space, and the
percentage available to ascertain your DellEMC Unity LUN storage utilization.
l DellEMC Unity Pools Report - Review the storage utilization of your DellEMC Unity pools. View the total
and free space available and the number of volumes and disks that make up your storage pools.
l HPE Nimble Storage Volumes Report - Review the total capacity of your volumes, the total free space,
and the percentage available to ascertain your HPE Nimble Storage volume storage utilization.
l IBM Spectrum Accelerate Pools Report - Review the storage utilization of your IBM Spectrum
Accelerate Pools. View the total and free space available and the number of volumes and disks that make
up your pools.
l IBM Spectrum Accelerate Volumes Report - View the total space used by and free space available to
your IBM Spectrum Accelerate volumes.
l IBM Spectrum Virtualize Consistency Groups Report - Display information about your
IBM Consistency Groups. View the associated storage providers, source and target volumes, and
protection status of your IBM volumes through Consistency Groups.
l IBM Spectrum Virtualize Pools Report - Review the storage utilization of your IBM storage pools. View
the total and free space available and the number of volumes and disks that make up your storage pools.
l IBM Spectrum Virtualize Volumes Report - Review the storage utilization of your IBM volumes. View
the total space consumed as well as the free space available on your IBM volumes.
l Instant Disk Restore Volumes Report - Display a list of mapped volumes created through Instant Disk
Restore jobs.
l NetApp ONTAP Aggregates Report - Review the storage utilization and configuration of your NetApp
aggregates. View the total and free space available and the number of volumes and disks that make up
your aggregates.
l NetApp ONTAP LUNs Report - Review the total capacity of your LUNs, the total free space, and the
percentage available to ascertain your NetApp ONTAP LUN storage utilization.
l NetApp ONTAP Orphaned LUNs Report - Review NetApp ONTAP storage orphaned LUNs. These are
the LUNs that have no initiator group mapping or belong to volumes that are offline.
l NetApp ONTAP Quotas Report - Review quota status to determine which users or groups are
approaching or exceeding quota limits.
l NetApp ONTAP Snapshots Report - Review the total capacity of your snapshots, the total free space,
and the percentage available to ascertain your NetApp ONTAP Snapshot storage utilization.
l NetApp ONTAP Volumes Report - Review the total capacity of your volumes, the total free space, and
the percentage available to ascertain your NetApp ONTAP volume storage utilization.
l Pure Storage FlashArray Volumes Report - Review the total capacity of your volumes, the total free
space, and the percentage available to ascertain your Pure Storage FlashArray volume storage
utilization.
l Storage Capacity Report - Report the storage capacity of your IBM Spectrum Virtualize pools, DellEMC
Unity pools, and NetApp ONTAP aggregates.
l VM and Storage Mapping Report - Report that maps VMs all the way to the physical storage from which
the datastore is created.
l VMware Datastores Report - Review the total capacity of your datastores, the total free space, and the
percentage available to ascertain your VMware datastore storage utilization.
l VMware LUNs Report - Displays information about VMware LUNs, such as the ESX server each LUN
belongs to, its datastore, vendor, and total and allocated capacity.
l VMware Orphaned Datastores Report - Review the datastores that do not have any virtual machines
assigned to them, or if virtual machines are assigned to the datastores, view the virtual machines that are
in an inaccessible state.
l VMware Orphaned LUNs Report - Review VMware orphaned LUNs. These are the LUNs not used as
datastores or RDMs.
l VMware VM Snapshot Sprawl Report - Displays information about virtual machines with aged and
memory snapshots.
l VMware VM Sprawl Report - Displays storage utilization across virtual machines based on their power
state and storage utilization across virtual machine templates.
l VMware VM Storage Report - Review your virtual machines and associated datastores.
Quick View
This area of the report is a graphical illustration of the report using pie charts. For example, the quick view of
the NetApp ONTAP Storage Volumes report shows the total capacity of your volumes, the free space, and the
used space.
Summary View
This area of the report displays a summary of the data returned in the report. For example, the summary view
of the NetApp ONTAP Storage Aggregates report shows the total used, free, and reserved space on your
aggregate.
Detail View
This area of the report is a table where each row details a storage system, its corresponding volume or
aggregate, and details returned by the report. For example, the NetApp ONTAP Storage Aggregates report
shows the used and free space, volume count, disk count, and status of your aggregates.
RELATED TOPICS:
l DellEMC Unity File Systems Report on page 462
l DellEMC Unity LUNs Report on page 464
l DellEMC Unity Pools Report on page 467
l HPE Nimble Storage Volumes Report on page 469
l IBM Spectrum Accelerate Pools Report on page 472
l IBM Spectrum Accelerate Volumes Report on page 474
l IBM Spectrum Virtualize Consistency Groups Report on page 476
l IBM Spectrum Virtualize Pools Report on page 478
l IBM Spectrum Virtualize Volumes Report on page 481
l Instant Disk Restore Volumes Report on page 484
l NetApp ONTAP Aggregates Report on page 485
l NetApp ONTAP LUNs Report on page 488
l NetApp ONTAP Orphaned LUNs Report on page 491
l NetApp ONTAP Quotas Report on page 493
l NetApp ONTAP Snapshots Report on page 495
l NetApp ONTAP Volumes Report on page 497
l Pure Storage FlashArray Volumes Report on page 500
l Storage Capacity Report on page 503
User's Guide DellEMC Unity File Systems Report
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Detail View Filter
Select the alert threshold percentage to display in the report. Select Any to view the details of every
DellEMC Unity file system.
Summary View
The following fields and corresponding data display in the Summary View section of the DellEMC Unity File
Systems report:
DellEMC Unity Storage
The physical server where your files are stored.
File Systems Count
The number of file systems on the storage system.
Total Size
The quantity of storage reserved for primary data.
Used
The quantity of primary storage allocated.
Free
The quantity of primary storage that is unallocated.
Detail View
The following fields and corresponding data display in the Detail View section of the DellEMC Unity File
Systems report:
File System
The name of the file system.
DellEMC Unity Storage
The physical server where your files are stored.
Type
The file system type.
Storage Pool
The name of the associated storage pool.
Total Size
The quantity of storage reserved for primary data.
Used
The quantity of primary storage allocated.
Free
The quantity of primary storage that is unallocated.
No. of Snapshots
The number of snapshots for the file system.
Status
The current status of the file system.
Thin
The disk format of the file system, either thick or thin provisioned.
% Used/Free
A status bar that displays the used space on the file system.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide DellEMC Unity LUNs Report
Use the DellEMC Unity LUNs report to answer questions such as:
l How many LUNs are associated with a storage system?
l What is the total and allocated capacity of a LUN?
l What LUNs are not being used, so I can reclaim this space?
Parameters
Use the following parameters to customize your report:
l Storage Array
l Detail View Filter
The default report parameters report on all LUNs with more than 80% space used.
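The default 80% filter amounts to computing the % Used figure for each LUN and keeping those above the threshold. A minimal sketch, assuming hypothetical `total`/`unallocated` fields:

```python
def percent_used(total, unallocated):
    # % Used as rendered in the status bar: allocated space over total size.
    return 100.0 * (total - unallocated) / total

def detail_view_filter(luns, threshold=80.0):
    # Default Detail View Filter: report only LUNs more than `threshold`% used.
    return [l["name"] for l in luns
            if percent_used(l["total"], l["unallocated"]) > threshold]

luns = [{"name": "lun_hot", "total": 100, "unallocated": 10},   # 90% used
        {"name": "lun_cold", "total": 100, "unallocated": 40}]  # 60% used
print(detail_view_filter(luns))  # ['lun_hot']
```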
Quick View
The Quick View section displays a pie chart of used and free space on your LUNs. Use the DellEMC Unity
Storage parameter to display LUNs on all storage systems or a specific storage system.
Note: The Quick View section is only modified through the DellEMC Unity Storage parameter.
Summary View
The following fields and corresponding data display in the Summary View section of the DellEMC Unity LUNs
report:
DellEMC Unity Storage
The physical server where your files are stored.
LUNs Count
The number of LUNs on the storage system.
Total Size
The total size of the LUNs on the storage system.
Allocated
The quantity of primary storage allocated.
Unallocated
The quantity of primary storage that is unallocated.
Detail View
The following fields and corresponding data display in the Detail View section of the DellEMC Unity LUNs
report:
LUN
The name of the LUN.
DellEMC Unity Storage
The physical server where your files are stored.
Type
The LUN type. Types include Standalone, VmWareISCSI, and Generic Storage.
Storage Pool
The name of the associated storage pool.
Total Size
The quantity of storage reserved for primary data.
Allocated
The quantity of primary storage allocated.
Unallocated
The quantity of primary storage that is unallocated.
No. of Snapshots
The number of snapshots for the LUN.
Status
The current status of the LUN.
Thin
The disk format of the LUN, either thick or thin provisioned.
% Used/Free
A status bar that displays the used space on the LUN.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide DellEMC Unity Pools Report
Use the DellEMC Unity Pools report to answer questions such as:
l What is the volume count of a specific storage pool?
l How many child pools are associated with a specific storage pool?
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Only View Pools Exceeding Alert Threshold
Enable to view storage pools in which the usage exceeds the warning threshold.
Quick View
The Quick View section displays a pie chart of used and free space on your storage pools. Use the DellEMC
Unity Storage parameter to display storage pools on all storage systems or a specific storage system.
Note: The Quick View section is only modified through the DellEMC Unity Storage parameter.
Summary View
The following fields and corresponding data display in the Summary View section of the DellEMC Unity Pools
report:
DellEMC Unity Storage
The physical server where your files are stored.
Storage Pools
The number of storage pools on the DellEMC Unity storage system.
Total Space
The total amount of storage that is assigned to the storage pool.
Used Space
The total storage allocated to volumes within the storage pool.
Available Space
The amount of storage that is assigned to the storage pool that is unused.
Detail View
The following fields and corresponding data display in the Detail View section of the DellEMC Unity Pools
report:
Storage Pool
The name of the storage pool.
DellEMC Unity Storage
The physical server where your files are stored.
Total Space
The total amount of storage that is assigned to the storage pool.
Used Space
The total storage allocated to volumes within the storage pool.
Available Space
The amount of storage that is assigned to the storage pool that is unused.
Disks
The number of disks in the storage pool.
Status
The status of the disk with the highest priority status in the group.
Alert Threshold
A warning is generated when the assigned amount of space in the storage pool exceeds this level.
% Used/Free
A status bar that displays the used space on the storage pool.
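The Alert Threshold check described above, which also drives the Only View Pools Exceeding Alert Threshold parameter, can be sketched as follows (illustrative only; field names are assumptions):

```python
def pools_exceeding_threshold(pools):
    # "Only View Pools Exceeding Alert Threshold": keep pools whose used
    # percentage is above their configured alert threshold.
    flagged = []
    for p in pools:
        pct_used = 100.0 * p["used"] / p["total"]
        if pct_used > p["alert_threshold"]:
            flagged.append(p["name"])
    return flagged

pools = [{"name": "pool_a", "total": 100, "used": 85, "alert_threshold": 80},
         {"name": "pool_b", "total": 100, "used": 50, "alert_threshold": 80}]
print(pools_exceeding_threshold(pools))  # ['pool_a']
```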
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide HPE Nimble Storage Volumes Report
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
Quick View
The Quick View section displays the overall storage utilization of your HPE Nimble Storage volumes.
Summary View
The following fields and corresponding data display in the Summary View section of the HPE Nimble Storage
Volumes report:
Storage Array
The physical storage system where your files are stored.
Total
The storage capacity that is available to the host.
Used
The used space on the storage array.
Empty Space
The available space on the storage array.
Snapshot Usage
The total capacity of the HPE Nimble Storage snapshots.
Volume Usage
The space allocated to HPE Nimble Storage volumes.
Shared Space
The total amount of shared space on the HPE Nimble Storage.
Total Reduction
The ratio of the total data reduced on the HPE Nimble Storage host. It includes data reduction, thin
provisioning, zero detection, and unmap.
Data Reduction
The data reduction ratio of the HPE Nimble Storage. It includes deduplication, compression, and copy
reduction.
% Used/Free
A status bar that displays the used space on the volume.
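Reduction ratios of the kind shown above are conventionally logical data written divided by physical space consumed; a minimal sketch of that convention (an assumption about how the array derives its figure):

```python
def reduction_ratio(logical_bytes, physical_bytes):
    # Conventional data reduction figure: logical data written divided by
    # physical space consumed, usually displayed as "N.N:1".
    return logical_bytes / physical_bytes

# 10 TiB of host-written data stored in 2.5 TiB of physical space
ratio = reduction_ratio(10 * 2**40, 2.5 * 2**40)
print(f"{ratio:.1f}:1")  # 4.0:1
```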
Detail View
The following fields and corresponding data display in the Detail View section of the HPE Nimble Storage
Volumes report:
Volume
The name of the volume on the HPE Nimble Storage.
Storage Array
The physical storage system where your files are stored.
Provisioned
The total provisioned storage space on the HPE Nimble Storage volume.
Used
The used space on the volume.
Snapshot Usage
The total space on the HPE Nimble Storage occupied by snapshots.
Volume Usage
The total space on the HPE Nimble Storage used by volumes.
Total Reduction
The ratio of the total data reduced on the HPE Nimble Storage volume. It includes data reduction, thin
provisioning, zero detection, and unmap.
Data Reduction
The data reduction ratio of the HPE Nimble Storage volume. It includes deduplication, compression, and
copy reduction.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide IBM Spectrum Accelerate Pools Report
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Only View Pools Exceeding Alert Threshold
Enable to view pools in which the usage exceeds the warning threshold.
Detail View
The following fields and corresponding data display in the Detail View section of the IBM Spectrum Accelerate
Pools report:
Storage Pool
The name of the storage pool as known to IBM Spectrum Copy Data Management.
Storage Array
The name of the storage array as known to IBM Spectrum Copy Data Management.
Thin
The disk format of the storage system, either thick or thin provisioned.
Hard Usage
A status bar that displays the hard size, or physical capacity, of the storage pool.
Soft Usage
A status bar that displays the soft size, or maximum size seen by the hosts, of the storage pool.
Snapshot Usage
A status bar that displays the space used by snapshots on the volumes.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide IBM Spectrum Accelerate Volumes Report
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Detail View Filter
The default report parameters report on all storage arrays with more than 80% space used.
Quick View
The Quick View section displays the overall volume utilization of your IBM storage volumes.
Summary View
The following fields and corresponding data display in the Summary View section of the IBM Spectrum
Accelerate Volumes report:
Storage Array
The name of the storage array as known to IBM Spectrum Copy Data Management.
# of Pools
The number of storage pools on the storage array.
# of Volumes
The number of volumes on the storage array.
Size
The storage capacity that is available to a host.
Used Capacity
The total used storage on the storage array.
Snapshots Used Capacity
The total used storage on the storage array dedicated to snapshots.
Detail View
The following fields and corresponding data display in the Detail View section of the IBM Spectrum Accelerate
Storage Volumes report:
Volume
The name of the volume on the IBM storage system.
Storage Array
The physical storage system where your files are stored.
Pool
The volume's associated storage pool.
Size
The volume storage capacity that is available to a host.
Used Capacity
The total used storage on the volume.
Snapshots Used Capacity
The used storage on the volume dedicated to snapshots.
Consistency Group
The volume's associated consistency group.
Locked Status
The volume's lock status. If a volume is locked, no write commands are allowed.
% Used/Free
A status bar that displays the used space on the volume.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide IBM Spectrum Virtualize Consistency Groups Report
Use the IBM Consistency Group report to answer questions such as:
l What is the mapping name of a specific Consistency Group?
l What are the source and target volumes associated with a Consistency Group relationship?
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Protection Type
Select FlashCopy or Global Mirror with Change Volumes protection type. Multiple selections are
supported.
Source Volume
The name of the source volume in the FlashCopy relationship.
Target Volume
The name of the target volume in the FlashCopy relationship.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide IBM Spectrum Virtualize Pools Report
Use the IBM Spectrum Virtualize Pools report to answer questions such as:
l What is the volume count of a specific storage pool?
l How many child pools are associated with a specific storage pool?
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Detail View Filter
Available options include Warning Exceeded or All. Select Warning Exceeded to view storage pools in
which the usage exceeds the warning threshold.
Quick View
The Quick View section displays a pie chart of used and free space on your storage pools. Use the IBM Host
parameter to display storage pools on all storage systems or a specific storage system.
Note: The Quick View section is only modified through the IBM Host parameter.
Summary View
The following fields and corresponding data display in the Summary View section of the IBM Spectrum
Virtualize Pools report:
Storage Array
The name of the storage virtualizer as known to IBM Spectrum Copy Data Management.
Storage Pools
The number of storage pools on the IBM storage system.
Volume Count
The number of volumes that make up the storage pool.
Capacity
The total amount of MDisk storage that is assigned to the storage pool.
Allocated
The total storage allocated to volumes within the storage pool.
Virtual Capacity
The total virtual size of all the volume copies that are associated with the storage pool.
Child Pool Capacity
The capacity of the associated child pool, if available.
Child Pools
The number of associated child pools, if available.
Detail View
The following fields and corresponding data display in the Detail View section of the IBM Spectrum Virtualize
Pools report:
Storage Pool
The name of the storage pool and associated child pools. Note that child pool capacities are not included in
column totals.
Storage Array
The name of the storage virtualizer as known to IBM Spectrum Copy Data Management.
Capacity
The total amount of MDisk storage that is assigned to the storage pool.
Allocated
The total storage allocated to volumes within the storage pool.
Virtual Capacity
The total virtual size of all the volume copies that are associated with the storage pool.
External Virtual Capacity
The aggregate capacity of the managed and image mode MDisks from storage controllers virtualized using
the “External Virtualization” feature of the chosen IBM storage systems.
Volumes
The number of volume copies that are in the storage pool.
Disk Count
The number of MDisks in the storage pool.
Status
The status of the MDisk with the highest priority status in the group, excluding image mode MDisks.
Warning
A warning is generated when the assigned amount of space in the storage pool exceeds this level.
%
The percentage of space used on the storage pool.
% Used/Free
A status bar that displays the used space on the storage pool.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide IBM Spectrum Virtualize Volumes Report
Use the IBM Spectrum Virtualize Storage Volumes report to answer questions such as:
l What is the total available storage space in the entire system?
l What is the amount of free and used space?
l How many volumes are available on a specific storage system?
l What is the size of the volume and the storage system that it resides on?
Parameters
Use the following parameters to customize your report:
l Storage Array
l Detail View Filter
l Show Only Thin Provisioned Volumes
Quick View
The Quick View section displays the overall volume utilization of your IBM storage volumes.
Summary View
The following fields and corresponding data display in the Summary View section of the IBM Spectrum
Virtualize Storage Volumes report:
Storage Array
The name of the IBM storage system.
Volume Count
The number of volumes available on the IBM storage system.
Capacity
The volume storage capacity that is available to a host.
Real Capacity
The amount of physical storage that is allocated from a storage pool to volume copies.
Used Capacity
The portion of real capacity that is being used to store data.
Free Capacity
The difference between the real capacity and used capacity values for volume copies.
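The Summary View totals above can be rolled up from per-volume figures, with Free Capacity computed as real capacity minus used capacity. An illustrative sketch; the field names are assumptions:

```python
def summarize_volumes(volumes):
    # Roll per-volume figures up into the Summary View totals defined above:
    # Free Capacity is Real Capacity minus Used Capacity.
    real = sum(v["real_capacity"] for v in volumes)
    used = sum(v["used_capacity"] for v in volumes)
    return {"volume_count": len(volumes),
            "real_capacity": real,
            "used_capacity": used,
            "free_capacity": real - used}

vols = [{"real_capacity": 100, "used_capacity": 60},
        {"real_capacity": 200, "used_capacity": 90}]
print(summarize_volumes(vols))
```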
Detail View
The following fields and corresponding data display in the Detail View section of the IBM Spectrum Virtualize
Storage Volumes report:
Volume
The name of the volume on the IBM storage system.
Storage Array
The physical storage system where your files are stored.
Storage Pool
The volume's associated storage pool.
Capacity
The volume storage capacity that is available to a host.
Real Capacity
The amount of physical storage that is allocated from a storage pool to volume copies.
Used Capacity
The portion of real capacity that is being used to store data.
Warning Threshold
For thin provisioned or compressed volume copies, a warning is generated at this percentage of the volume
capacity.
Status
The status of the volume. A volume can be online, offline, or degraded.
Thin
Displays the thin provisioned status of the volume.
% Used/Free
A status bar that displays the used space on the volume.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide Instant Disk Restore Volumes Report
Parameters
Use the following parameters to customize your report:
l Storage Vendor
Set the storage vendor types to display in the report. Multiple selections are supported.
Detail View
The following fields and corresponding data display in the Detail View section of the Instant Disk Restore
Volumes report:
Volume
The name of the mapped Instant Disk Restore volume on the node.
Location (Site)
The node and associated site where the volume resides.
Storage Vendor
The storage vendor of the node.
Job Name (Type)
The job and job type associated with the Instant Disk Restore.
Recovery Time
The time at which the restore job completed.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide NetApp ONTAP Aggregates Report
Parameters
Use the following parameters to customize your report:
l Storage Array
l Detail View Filter
The default report parameters report on all aggregates with more than 80% space used.
Quick View
The Quick View section displays a pie chart of used and free space on your aggregates. Use the NetApp
ONTAP Storage parameter to display aggregates on all storage systems or a specific storage system.
Note: The Quick View section is only modified through the NetApp ONTAP Storage parameter.
Summary View
The following fields and corresponding data display in the Summary View section of the Aggregates report:
Storage Array
The physical server where your files are stored.
Aggregate Count
The number of aggregates on the node.
Volume Count
The number of volumes that make up the aggregate.
Disk Count
The number of disks that make up the aggregate.
Total
The total size of the aggregate.
Available
The amount of free space available in the aggregate.
% Used
The percentage of used storage space on the aggregate.
Detail View
The following fields and corresponding data display in the Detail View section of the Aggregates report:
Aggregate
The name of the aggregate.
Storage Array
The physical server where your files are stored.
Location
The node where the volume resides.
Total
The total size of the aggregate.
Available
The amount of free space available in the aggregate.
Volume Count
The number of volumes that make up the aggregate.
Disk Count
The number of disks that make up the aggregate.
Status
The status of the aggregate. An aggregate can be online, offline for maintenance, or reserved for snapshot
storage.
% Used/Free
A status bar that displays the used space on the aggregate.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
User's Guide NetApp ONTAP LUNs Report
Use the NetApp ONTAP LUNs report to answer questions such as:
l How many LUNs are associated with a storage system?
l What is the total and allocated capacity of a LUN?
l What LUNs are not being used, so I can reclaim this space?
Parameters
Use the following parameters to customize your report:
l Storage Array
l Volume
Multiple selections are supported.
l Detail View Filter
The default report parameters report on all LUNs with more than 80% space used.
Quick View
The Quick View section displays a pie chart of used and free space on your LUNs. Use the NetApp ONTAP
Storage parameter to display LUNs on all storage systems or a specific storage system.
Note: The Quick View section is only modified through the NetApp ONTAP Storage parameter.
Summary View
The following fields and corresponding data display in the Summary View section of the NetApp ONTAP
LUNs report:
Storage Array
The physical server where your files are stored.
LUNs Count
The number of LUNs on the node.
Total Size
The total size of the LUNs on the node.
Available Size
The amount of free space available on the LUNs.
% Used
The percentage of used storage space on the LUNs.
Detail View
The following fields and corresponding data display in the Detail View section of the NetApp ONTAP LUNs
report:
LUN
The name of the LUN.
Storage Array
The physical server where your files are stored.
Volume
The volume associated with the LUN.
Location
The node where the volume resides.
Total Size
The total size of the LUN.
Available Size
The amount of free space available on the LUN.
Status
The status of the LUN. A LUN can be online or offline for maintenance.
Thin Provisioned
The disk format of the LUN, either thick or thin provisioned.
% Used/Free
A status bar that displays the used space on the LUN.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
NetApp ONTAP Orphaned LUNs Report
Use the NetApp ONTAP Orphaned LUNs report to answer questions such as:
l How much space on an object is consumed by orphaned LUNs?
l Is thin provisioning enabled on a specific LUN?
Parameters
Use the following parameters to customize your report:
l Storage Array
l Volume
Multiple selections are supported.
The default report parameters report on all NetApp ONTAP storage volumes.
Quick View
The Quick View section displays a pie chart of space consumed by orphaned LUNs. Use the NetApp ONTAP
Storage parameter to display orphaned LUNs on all storage systems or a specific storage system.
Note: The Quick View section is only modified through the NetApp ONTAP Storage parameter.
Summary View
The following fields and corresponding data display in the Summary View section of the NetApp ONTAP
Orphaned LUNs report:
Storage Array
The physical server where your files are stored.
Orphaned LUNs
The number of orphaned LUNs on the node.
Total Size (Volumes)
The total size of the volume on the node.
Total Size (Orphaned LUNs)
The total space on the volume occupied by orphaned LUNs.
% Used (Orphaned LUNs)
The percentage of storage space on the node used by orphaned LUNs.
Detail View
The following fields and corresponding data display in the Detail View section of the NetApp ONTAP
Orphaned LUNs report:
LUN
The name of the LUN.
Storage Array
The physical server where your files are stored.
Volume
The volume associated with the LUN.
Location
The node where the volume resides.
Thin Provisioned
The disk format of the LUN, either thick or thin provisioned.
Total Size
The total size of the LUN.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
NetApp ONTAP Quotas Report
Parameters
Use the following parameters to customize your report:
l Storage Array
l Volume
Multiple selections are supported.
l Quota Criteria
l Top Quota Users
Detail View
The following fields and corresponding data display in the Detail View section of the NetApp ONTAP Quotas
report:
Storage Array
The physical server where your files are stored.
Volume
The name of the volume on the node.
Location
The node where the volume resides.
Qtree
The name of the associated qtree.
Users
The users affected by the quota.
Quota Target
The name and location of the quota file on the volume.
Type
The type of entity to apply the quota against. For example, users, groups, or qtrees.
Space Usage
The space used on the volume.
Space Hard Limit
The hard limit on disk space defined by the quota.
Space Soft Limit
The soft limit on disk space defined by the quota.
% Used
The percentage of the quota used on the volume.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
NetApp ONTAP Snapshots Report
Use the NetApp ONTAP Snapshots report to answer questions such as:
l How many snapshots are on a storage system?
l What is the percentage of space on a volume that is used for snapshot storage?
Parameters
Use the following parameters to customize your report:
l Storage Array
l Volume
Multiple selections are supported.
l Number of Largest Snapshots to View
The default report parameters report on the hundred largest snapshots on all volumes.
Quick View
The Quick View section displays a pie chart of the size on your volumes consumed by snapshots. Use the
NetApp ONTAP Storage parameter to display snapshots on all storage systems or a specific storage system.
Note: The Quick View section is only modified through the NetApp ONTAP Storage parameter.
Summary View
The following fields and corresponding data display in the Summary View section of the NetApp ONTAP
Snapshots report:
Storage Array
The physical server where your files are stored.
Snapshot Count
The number of snapshots on the node.
Total Volume Size
The total size of the volume on which the snapshots are stored.
Total Snapshot Size
The total combined size of all snapshots on the node.
% Used By Snapshot
The percentage of space on the volume used for snapshot storage.
Detail View
The following fields and corresponding data display in the Detail View section of the NetApp ONTAP
Snapshots report:
Snapshot
The name of the snapshot.
Storage Array
The physical server where your files are stored.
Volume
The volume on which the snapshot is stored.
Location
The node where the volume resides.
Snapshot Creation Time
The snapshot creation date and time.
Volume Size
The total size of the volume on which the snapshot is stored.
Snapshot Size
The total size of the snapshot.
Total %
The percentage of space on the volume used by the snapshot.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
NetApp ONTAP Volumes Report
Use the NetApp ONTAP Volumes report to answer questions such as:
l What is the total available storage space in the entire system?
l What is the amount of free and used space?
l How many volumes are available on a specific storage system?
l What is the size of the volume and the storage system that it resides on?
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
l Detail View Filter
The default report parameters report on all NetApp ONTAP storage systems with more than 80% space used.
Quick View
The Quick View section displays a pie chart of used and free space on your volumes. Use the NetApp ONTAP
Storage parameter to display volumes on all storage systems or a specific storage system.
Note: The Quick View section is only modified through the NetApp ONTAP Storage parameter.
Summary View
The following fields and corresponding data display in the Summary View section of the NetApp ONTAP
Volumes report:
Storage Array
The number of nodes included in the report, based on your parameters.
Volume Count
The number of cataloged volumes included in the report, based on your parameters.
Total
The total space of the volumes included in the report.
Available
The space available on the volumes included in the report.
Reserved
The space reserved for snapshot storage on the volume.
% Used
The percentage of used storage space on the volumes included in the report.
Detail View
The following fields and corresponding data display in the Detail View section of the NetApp ONTAP Volumes
report:
Volume
The name of the volume on the node.
Storage Array
The physical server where your files are stored.
Aggregate
The name of the associated aggregate.
Location
The node where the volume resides.
Total
The total space on the volume.
Available
The space available on the volume.
Reserved
The space reserved for snapshot storage on the volume.
Status
The online status of the volume.
% Used / Free
The percentage of space used and a status bar that displays the used and free space on the volume. Note
that space reserved for snapshot storage is included in this percentage.
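The interaction between reserved space and the % Used figure can be illustrated with a short calculation. This is a sketch with example numbers, assuming % Used is simply used space (including the snapshot reserve) divided by total space, per the note above:

```shell
# Hypothetical sketch of the % Used calculation for a volume, assuming
# space reserved for snapshot storage counts as used. Values are examples.
total_gb=1000
available_gb=200                       # free space on the volume
used_gb=$((total_gb - available_gb))   # includes the snapshot reserve
pct_used=$((100 * used_gb / total_gb))
echo "${pct_used}% used"               # prints: 80% used
```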
Pure Storage FlashArray Volumes Report
Parameters
Use the following parameters to customize your report:
l Storage Array
Multiple selections are supported.
Quick View
The Quick View section displays the overall storage utilization of your Pure Storage FlashArray volumes.
Summary View
The following fields and corresponding data display in the Summary View section of the Pure Storage
FlashArray Volumes report:
Storage Array
The physical storage system where your files are stored.
Total
The storage capacity that is available to the host.
Used
The used space on the storage array.
Empty Space
The available space on the storage array.
Snapshot Usage
The total capacity of the Pure Storage FlashArray snapshots.
Volume Usage
The space allocated to Pure Storage FlashArray volumes.
Shared Space
The total amount of shared space on the Pure Storage FlashArray.
Total Reduction
The ratio of the total data reduced on the Pure Storage FlashArray host. It includes data reduction, thin
provisioning, zero detection, and unmap.
Data Reduction
The data reduction ratio of the Pure Storage FlashArray. It includes deduplication, compression, and copy
reduction.
% Used/Free
A status bar that displays the used space on the volume.
Detail View
The following fields and corresponding data display in the Detail View section of the Pure Storage
FlashArray Volumes report:
Volume
The name of the volume on the Pure Storage FlashArray.
Storage Array
The physical storage system where your files are stored.
Provisioned
The total provisioned storage space on the Pure Storage FlashArray volume.
Used
The used space on the volume.
Snapshot Usage
The total space on the Pure Storage FlashArray occupied by snapshots.
Volume Usage
The total space on the Pure Storage FlashArray used by volumes.
Total Reduction
The ratio of the total data reduced on the Pure Storage FlashArray volume. It includes data reduction, thin
provisioning, zero detection, and unmap.
Data Reduction
The data reduction ratio of the Pure Storage FlashArray volume. It includes deduplication, compression,
and copy reduction.
Storage Capacity Report
l Storage controllers are only displayed if they have at least one resource successfully
backed up to them.
Parameters
Use the following parameters to customize your report:
l Storage Vendor
l Show Managed Capacity Details
Enable to display a detailed view of the managed capacity of storage volumes within a storage array.
Summary View
The following fields and corresponding data display in the Summary View section of the Storage Capacity
report:
Storage Array
The name of the storage array.
Storage Vendor
The name of the storage vendor of the associated storage provider.
Usable Capacity
The total storage capacity that is available to a storage provider.
Managed Capacity
The space used by IBM Spectrum Copy Data Management backup jobs on the volumes of an array.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
VM and Storage Mapping Report
Parameters
Use the following parameters to customize your report:
l vCenter
l ESX Host
l NetApp ONTAP Storage
l Site
Detail View
The following fields and corresponding data display in the Detail View section of the VM and Storage
Mapping report:
Datastore
The name of the datastore that is used, along with the associated virtual machine, host, and site.
Disk
The disk on which the virtual machine is stored.
Path
The path to the virtual machine disk image file.
LUN
The corresponding logical unit number.
Source NetApp Node : Volume Used/Total
The corresponding NetApp ONTAP Node, volume, and the amount of used and total space.
Replication Destination
If a replication destination exists, it is listed here.
VMware Datastores Report
Parameters
Use the following parameters to customize your report:
l vCenter
l ESX Host
Multiple selections are supported.
l Detail View Filter
The default report parameters report on all datastores with 80% space used, all vCenters, and all ESX Hosts.
Quick View
The Quick View section displays a pie chart of used and free space on your datastores. Use the ESX Host
parameter to display datastores on all hosts or a specific host.
Note: The Quick View section is only modified through the ESX Host parameter.
Summary View
The following fields and corresponding data display in the Summary View section of the VMware Datastores
report:
Datastore Type
The file system types used by your datastores. For example, NFS or VMFS.
Datastore Count
The number of datastores associated with the datastore type.
Capacity
The total capacity of the datastore by file system type.
Provisioned Space
The amount of space on the datastore allocated for virtual disk files by file system type.
Free Space
The space available on the datastore by file system type.
Detail View
The following fields and corresponding data display in the Detail View section of the VMware Datastores
report:
Datastore
The name of the datastore.
ESX Host (vCenter)
The host node where the datastore resides. More than one datastore can reside on an ESX host.
Type
The file system type of the datastore. For example, NFS or VMFS.
VM Count
The number of virtual machines on the datastore.
Capacity
The capacity of the datastore.
Provisioned Space
The amount of space on the datastore allocated for virtual disk files.
Free Space
The space available on the datastore.
% Used/Free
The percentage of space used on the datastore and a visual indicator of the amount of space used and
available.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
VMware LUNs Report
Parameters
Use the following parameters to customize your report:
l vCenter
l ESX Host
Multiple selections are supported.
The default report parameters report on all vCenters and ESX hosts.
Quick View
The Quick View section displays a pie chart of LUN storage utilization by storage vendor. Use the ESX Host
parameter to display LUNs on all hosts or a specific host. Any storage vendor that takes less than 5% of the
total size of all datastores displays as Others.
Note: The Quick View section is only modified through the ESX Host parameter.
Summary View
The following fields and corresponding data display in the Summary View section of the VMware LUNs report:
ESX Host (vCenter)
The host node where the LUN resides. More than one LUN can reside on an ESX host.
Fibre Channel
The capacity of the storage attached through Fibre Channel.
iSCSI
The capacity of the storage attached through iSCSI.
Block Adapter
The capacity of the storage attached through a block adapter.
Parallel SCSI
The capacity of the storage attached through parallel SCSI.
Detail View
The following fields and corresponding data display in the Detail View section of the VMware LUNs report:
LUN Name
The name of the LUN.
LUN ID
The unique identification number of the LUN.
Storage Vendor
The name of the storage vendor of the LUN.
ESX Host (vCenter)
The host node where the LUN resides.
Datastore(s)
The name of the associated datastore.
Capacity
The total capacity of the LUN.
Transport Type
The method through which data is transferred. For example, Fibre Channel or iSCSI.
RDM
The raw device mapping type. For example, physical or virtual.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
VMware Orphaned Datastores Report
Parameters
Use the following parameters to customize your report:
l vCenter
l ESX Host
Multiple selections are supported.
The default report parameters report on all ESX hosts and vCenters.
Orphaned Datastores
The following fields and corresponding data display in the Orphaned Datastores section of the VMware
Orphaned Datastores report.
Datastore
The name of the datastore.
ESX Host (vCenter)
The host node where the datastore resides. More than one datastore can reside on an ESX host.
Type
The file system type of the datastore. For example, NFS or VMFS.
Capacity
The capacity of the datastore.
Provisioned Space
The amount of space on the datastore allocated for virtual disk files.
Free Space
The amount of free space on the datastore.
% Used/Free
The percentage of space used on the datastore and a visual indicator of the amount of space used and
available.
Reason
The reason the datastore was returned by the report. For example, if no virtual machines are registered on
the datastore.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
VMware Orphaned LUNs Report
Use the VMware Orphaned LUNs report to answer questions such as:
l What is the transport type of an orphaned LUN?
l What is the storage vendor of an orphaned LUN?
Parameters
Use the following parameters to customize your report:
l vCenter
l ESX Host
Multiple selections are supported.
The default report parameters report on all ESX hosts and vCenters.
Summary View
The following fields and corresponding data display in the Summary View section of the Orphaned LUNs report:
vCenter
The name of the vCenter node.
ESX Host
The host node where LUNs reside. More than one LUN can reside on an ESX host.
Total LUNs
The total number of LUNs on the host.
Capacity
The overall capacity of the LUNs on the host.
Detail View
The following fields and corresponding data display in the Detail View section of the Orphaned LUNs report.
LUN Name
The name of the LUN.
LUN ID
The unique identification number of the LUN.
Storage Vendor
The name of the storage vendor of the LUN.
ESX Host (vCenter)
The host node where the LUN resides. More than one LUN can reside on an ESX host.
Transport Type
The method through which data is transferred. For example, Fibre Channel or iSCSI.
Capacity
The capacity of the LUN.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
VMware VM Snapshot Sprawl Report
Use the VMware VM Snapshot Sprawl report to answer questions such as:
l What is the age of the oldest snapshot on a virtual machine?
l Which virtual machines have a large number of snapshots?
Parameters
Use the following parameters to customize your report:
l vCenter
l ESX Host
Multiple selections are supported.
l Snapshot Sprawl Criteria
l Snapshot Creation Time
The default report parameters report on all the criteria with a Snapshot Creation time of more than a year.
VMware VM Sprawl Report
Parameters
Use the following parameters to customize your report:
l vCenter
l ESX Host
Multiple selections are supported.
l Days Since Last Power Off
l Days Since Last Suspended
l Days Since Last Power On
The default report parameters report on all virtual machines that were last powered on more than 180 days ago.
Quick View
The Quick View section displays a pie chart of used and free space on your virtual machines. Use the
ESX Host parameter to display virtual machines on all hosts or a specific host.
Note: The Quick View section is only modified through the ESX Host parameter.
Powered Off Since
The date and time the virtual machine was last powered off.
ESX Host (vCenter)
The host node where the virtual machine resides.
Resource Pool
The name of the associated resource pool.
Provisioned Space
The amount of space on the datastore allocated for virtual disk files.
Datastore(s)
The name of the associated datastores.
ESX Host (vCenter)
The host node where the virtual machine template resides.
Provisioned Space
The amount of space on the datastore allocated for virtual disk files.
Datastore(s)
The name of the associated datastores.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
VMware VM Storage Report
Review your virtual machines and associated datastores through the VMware VM Storage report.
View datastores, resource pools, and the provisioned space of the datastore.
Parameters
Use the following parameters to customize your report:
l vCenter
l ESX Host
Detail View
The following fields and corresponding data display in the Detail View section of the VMware
VM Storage report:
VM
The name of the virtual machine, along with the associated datastore.
ESX Host (vCenter)
The host node where the virtual machine resides.
Resource Pool
The name of the associated resource pool.
Provisioned Space
The amount of space on the datastore allocated for virtual disk files.
RELATED TOPICS:
l Storage Utilization Reports on page 458
l Report Overview on page 369
l Run a Report on page 370
l Create a Customized Report on page 372
Maintenance
The topics in the following section cover maintenance information including updating the IBM Spectrum Copy
Data Management appliance, logging on to the virtual appliance, and collecting logs for troubleshooting.
Maintenance Overview
In most cases, IBM Spectrum Copy Data Management is installed on a virtual appliance. The virtual appliance
contains the application and the Inventory.
System Administrators can perform maintenance tasks on the IBM Spectrum Copy Data Management
application. Note that a System Administrator is usually a senior-level user who designed or implemented the
vSphere and ESX infrastructure, or a user with an understanding of IBM Spectrum Copy Data Management,
VMware, and Linux command-line usage. Maintenance tasks are performed in vSphere Client, through the
IBM Spectrum Copy Data Management command line, or through a web-based management console.
Maintenance tasks include collecting logs, updating the application, and reviewing the configuration of the
virtual appliance.
RELATED TOPICS:
l Log On to the Virtual Appliance on page 524
l Set Time Zone on page 525
l Collect Logs For Troubleshooting on page 527
l Modifying Job Log Options on page 530
l Manage the Administrative Console on page 535
l Update IBM Spectrum Copy Data Management on page 536
l Install the Marketplace RPM on page 539
l Modifying Network Settings on page 542
l Upload an SSL Certificate on page 544
l Horizontal Scale Out on page 546
l Restoring A Snapshot From A FlexGroup Volume To Another FlexGroup Volume on
page 551
l LDAP User Name Syntax on page 561
Log On to the Virtual Appliance
RELATED TOPICS:
l Collect Logs For Troubleshooting on page 527
l Manage the Administrative Console on page 535
Set Time Zone
If instances of IBM Spectrum Copy Data Management reside in time zones that differ from the IBM Spectrum
Copy Data Management appliance, the instances can be synced to the time zone of the appliance instead of
their local time zones. This is useful when jobs should be scheduled against the appliance's time zone rather
than a local one.
To sync the time zones of IBM Spectrum Copy Data Management instances with the IBM Spectrum Copy
Data Management appliance:
1. Follow the procedure above to set the time zone of the IBM Spectrum Copy Data Management
appliance.
2. Log in to the IBM Spectrum Copy Data Management appliance as a root user, and edit the following
file: /opt/virgo/repository/ecx-usr/com.syncsort.dp.xsb.api.endeavour.session.properties.
3. In the property file, set useServerTime to true, then save the file.
4. Restart the IBM Spectrum Copy Data Management appliance through the following commands:
service virgo stop
service virgo start
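Steps 2 through 4 can be sketched as a short shell sequence. This is a non-authoritative sketch: it edits a scratch copy of the properties file so it can be tried off the appliance; on the appliance you would edit the path given in step 2 and run the service commands directly.

```shell
#!/bin/sh
# Sketch of steps 2-4. On the appliance, PROPS would be
# /opt/virgo/repository/ecx-usr/com.syncsort.dp.xsb.api.endeavour.session.properties;
# a scratch copy is used here so the sketch is safe to run anywhere.
PROPS="$(mktemp)"
printf 'useServerTime=false\n' > "$PROPS"

# Step 3: set useServerTime to true, appending the key if it is absent.
if grep -q '^useServerTime=' "$PROPS"; then
    sed -i 's/^useServerTime=.*/useServerTime=true/' "$PROPS"
else
    echo 'useServerTime=true' >> "$PROPS"
fi
grep '^useServerTime=' "$PROPS"   # prints: useServerTime=true

# Step 4, on the appliance only:
#   service virgo stop && service virgo start
```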
RELATED TOPICS:
l Log On to the Virtual Appliance on page 524
l Manage the Administrative Console on page 535
Collect Logs For Troubleshooting
2. The Audit Log window displays a log of actions performed in IBM Spectrum Copy Data Management,
along with the user performing the action and a description of the action.
3. To search for the actions of a specific IBM Spectrum Copy Data Management user, search for the user
name in the Search for users field.
4. To download the current view of the Audit Log as a .csv file, click Download, then select a location to save
the file.
To collect IBM Spectrum Copy Data Management logs from the virtual appliance:
Note: This procedure assumes that IBM Spectrum Copy Data Management was deployed to a VMware
appliance host.
1. Log on to the virtual appliance console as administrator.
To log on to the virtual appliance:
1. In vSphere Client, select the virtual machine where IBM Spectrum Copy Data Management is
deployed.
2. In the Summary tab, select Open Console and click in the console.
3. Select Login, and enter your user name and password. The default user name is administrator and
the default password is ecxadLG235.
2. Navigate to /opt/CDM/tools/scripts.
Run logcollect using the sudo command. For example, at the command prompt enter:
$ sudo ./logcollect
or
$ sudo /opt/CDM/tools/scripts/logcollect
The logcollect script might take a few minutes to run, depending on application usage.
3. Optionally, add a specific job log to the archive using the -job option and the job ID, which can be
obtained through the job's instance on the Jobs tab. For example, at the command prompt enter:
$ sudo ./logcollect -job <job ID>
Note: The following logs are added to the zip file: logcollect, mongo, postgres, rabbitmq, system, virgo,
and job, if the -job option was used.
NEXT STEPS:
l Contact Technical Support to inform them that you have created a log collection file for
troubleshooting.
l If you collected logs from the virtual appliance, copy the zip file to your local computer. If
your local computer is Windows based, you can use WinSCP. See
http://winscp.net/eng/index.php. If your local computer is Unix based, you can use scp.
See http://www.hypexr.org/linux_scp_help.php.
l Send the zipped log collection file to Technical Support.
l Manually clean up the archive directory.
RELATED TOPICS:
l Log On to the Virtual Appliance on page 524
l Monitor a Job Session on page 172
Modifying Job Log Options
2. Stop the virgo service on the IBM Spectrum Copy Data Management appliance.
$ service virgo stop
3. Using a text editor such as vi, open the com.syncsort.dp.xsb.serviceprovider.properties file,
which is located in /opt/virgo/repository/ecx-usr/.
4. Modify the file as necessary. The settings are listed with their defaults.
maintenance.joblog.enable=true
maintenance.joblog.offload.enable=false
maintenance.joblog.maxrunningminutes=60
maintenance.joblog.retentiondays=30
maintenance.joblog.export.target=/data2/joblogs
5. Save the properties file and exit the text editor.
6. Start the virgo service on the IBM Spectrum Copy Data Management appliance.
$ service virgo start
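The edit in steps 2 through 6 can be sketched end to end. This is an assumption-laden sketch: it works on a scratch copy of the properties file seeded with the documented defaults so it can run off the appliance, and the virgo service commands are shown as comments.

```shell
#!/bin/sh
# Sketch of steps 2-6: change maintenance.joblog.retentiondays from its
# default of 30 to 60. A scratch file stands in for
# /opt/virgo/repository/ecx-usr/com.syncsort.dp.xsb.serviceprovider.properties.
PROPS="$(mktemp)"
cat > "$PROPS" <<'EOF'
maintenance.joblog.enable=true
maintenance.joblog.offload.enable=false
maintenance.joblog.maxrunningminutes=60
maintenance.joblog.retentiondays=30
maintenance.joblog.export.target=/data2/joblogs
EOF

# Step 2, on the appliance only: service virgo stop

# Steps 3-5: edit the setting and save the file.
sed -i 's/^maintenance\.joblog\.retentiondays=.*/maintenance.joblog.retentiondays=60/' "$PROPS"
grep '^maintenance.joblog.retentiondays=' "$PROPS"   # prints: maintenance.joblog.retentiondays=60

# Step 6, on the appliance only: service virgo start
```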
Updating Global Settings
2. The Update Global Settings window displays a predefined list of options that can be adjusted in your IBM
Spectrum Copy Data Management environment. The settings are divided into three
categories: Protection, Recovery, and Job log purge.
Protection
n Protection Incremental FC Copy rate
Enter an integer value to set the copy rate for incremental FlashCopy protection jobs. The default
setting is 100 for new deployments.
n Protection Incremental FC Clean rate
Enter an integer value to set the clean rate for incremental FlashCopy protection jobs. The default
setting is 100 for new deployments.
Recovery
n Recovery FC Copy rate
Enter an integer value to set the copy rate for FlashCopy recovery jobs. The default setting is 100 for
new deployments.
n Recovery FC Clean rate
Enter an integer value to set the clean rate for FlashCopy recovery jobs. The default setting is 0 for
new deployments.
Job log purge
n Job log purge
Select this to enable the purging of job logs after a set number of days. When enabled, the value
defined in Job log purge retention is used to determine how long to retain job logs before they are
purged. This is not enabled by default for new deployments.
n Job log purge retention
Enter an integer value to set the number of days to retain job logs. The default setting is 30 days for
new deployments. This setting requires that Job log purge be enabled.
n Job log purge offload
Select this to enable the offload of job logs to a specific location as defined by the path in the Job log
purge offload target field. This is not enabled by default for new deployments. This setting requires
that Job log purge be enabled.
n Job log purge offload target
Enter a path to which job logs will be offloaded. The default setting is /data2/joblogs for new
deployments. This setting is required when Job log purge offload is enabled, and requires that both
Job log purge and Job log purge offload be enabled.
Manage the Administrative Console
RELATED TOPICS:
l Install IBM Spectrum Copy Data Management as a Virtual Appliance on page 67
l Update IBM Spectrum Copy Data Management on page 536
l Install the Marketplace RPM on page 539
Update IBM Spectrum Copy Data Management
Note: If updating from IBM Spectrum Copy Data Management 2.2.5, or updating to a newer version of CDM
that was previously updated from CDM 2.2.5, additional steps are required. Updates in this scenario will fail
with the following error: <open file '<fdopen>', mode 'rb' at (exception)>. To resolve this issue, log into the
CDM appliance with root access, then navigate to /etc/yum.repos.d/. If the erlang_solutions.repo file is in
this location, move the file to /tmp on the appliance before updating. Once the file has been moved, run the
update procedure again.
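The workaround above can be sketched as follows. This sketch substitutes scratch directories for /etc/yum.repos.d and /tmp so it can be run anywhere; on the appliance, use the real paths.

```shell
#!/bin/sh
# Sketch of the update workaround: move erlang_solutions.repo out of the
# yum repository directory before updating. Scratch directories stand in
# for /etc/yum.repos.d and /tmp here.
REPO_DIR="$(mktemp -d)"   # on the appliance: /etc/yum.repos.d
DEST_DIR="$(mktemp -d)"   # on the appliance: /tmp
touch "$REPO_DIR/erlang_solutions.repo"

# Move the file aside only if it exists, then re-run the update procedure.
if [ -f "$REPO_DIR/erlang_solutions.repo" ]; then
    mv "$REPO_DIR/erlang_solutions.repo" "$DEST_DIR/"
fi
ls "$DEST_DIR"   # prints: erlang_solutions.repo
```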
When updating IBM Spectrum Copy Data Management through the Administrative Console, a Clean Process
Failure message displays during the update process. The message indicates that the update has been
applied successfully, but the update cleanup process failed. This message appears consistently when
performing an update to IBM Spectrum Copy Data Management, regardless of whether the Clean Process
failed or was successful. This issue is resolved in the latest update.
1. From a machine with internet access, download the update file from IBM Fix Central.
2. From a supported web browser, access the Administrative Console at the following address:
https://<HOSTNAME>:8090/
where <HOSTNAME> is the IP address of the virtual machine where the application is deployed.
3. In the login window, select System from the Authentication Type drop-down menu. Enter your
password to access the Administrative Console. The default password is ecxadLG235.
4. Click Manage updates. In the Don't want to update IBM Spectrum Copy Data Management to the
latest available release? section, select Click here.
5. Click Choose File, browse for the update file to upload to the appliance, then click Upload Update
Image. The update process begins once the update image has been uploaded to the appliance.
6. After the update completes, navigate to the Perform System Actions page on the Administrative
Console to restart the appliance.
HTML content from previous versions of IBM Spectrum Copy Data Management may be stored in your
browser's cache. Clear your browser's cache before logging in to an updated version of IBM Spectrum
Copy Data Management to ensure you are viewing the latest content changes.
Note: Updating the IBM Spectrum Copy Data Management appliance will require that the virtual machine
be restarted for plugins to function properly. For example, if the VM is not restarted, the HPE Nimble
plugin will not function properly and cannot be configured in an SLA.
Manually Update Administrative Console RPMs When Upgrading from IBM Spectrum Copy
Data Management 2.2.6 to 2.2.7
When upgrading IBM Spectrum Copy Data Management from version 2.2.6 to 2.2.7 in an environment without
internet access, additional steps must be taken to upgrade the Administrative Console. In summary, first the
IBM Spectrum Copy Data Management appliance is updated to version 2.2.7 through an ISO, then three RPM
packages are manually installed on the appliance.
1. From a machine with internet access, download the IBM Spectrum Copy Data Management 2.2.7 ISO
from IBM Fix Central.
2. Contact Technical Support for information about accessing the Administrative Console RPM packages:
scdm-adminconsole-2.0.*, scdm-emi-2.0.*, and scdm-catalogmanager-1.0.*.
3. Log in to the 2.2.6 Administrative Console. Click Manage updates. In the Don't want to update to the latest
available release? section, select Click here.
4. Click Choose File, browse for the 2.2.7 ISO to upload to the appliance, then click Upload Update
Image. The update process begins once the update image has been uploaded to the appliance.
Once complete, the IBM Spectrum Copy Data Management appliance is updated to version 2.2.7.
5. Log out of the Administrative Console.
6. Log in to the IBM Spectrum Copy Data Management command line interface via the vSphere console or an
SSH utility as the root user.
7. Copy the Administrative Console RPM packages (scdm-adminconsole-2.0.*, scdm-emi-2.0.*, and scdm-
catalogmanager-1.0.*) to the appliance’s “/” directory.
8. Run the following commands as root:
rpm -Uvh scdm-adminconsole-2.0.0-*.rpm
rpm -Uvh scdm-emi-2.0.0-*.rpm
rpm -Uvh scdm-catalogmanager-1.0.0-*.rpm
9. Once complete, clear your browser cache, then log into the updated IBM Spectrum Copy Data
Management 2.2.7 Administrative Console.
10. Click Product Information, then verify that the RPM packages have been updated.
To update your IBM Spectrum Copy Data Management appliance through RPM update files:
The following procedure requires root access to the IBM Spectrum Copy Data Management virtual appliance.
1. From a machine with internet access, download the necessary RPM update files from IBM Fix Central.
2. Log into the IBM Spectrum Copy Data Management appliance as a root user.
3. Copy the RPM update files to the appliance's "/" directory.
4. Navigate to "/" using the cd command, then execute the following command for each RPM:
rpm -Uvh <rpm>
5. Once complete, restart the appliance.
NEXT STEPS:
l If the update process fails, encounters an error, or there is any interruption in the update
process prior to it completing, review the update logs at the following location on the
virtual machine: /opt/vmware/var/log/vami/updatecli.log.
l To reapply the update, revert the virtual machine snapshot, which was created before
the update procedure, and then attempt another update.
RELATED TOPICS:
l Install IBM Spectrum Copy Data Management as a Virtual Appliance on page 67
l Manage the Administrative Console on page 535
User's Guide Install the Marketplace RPM
To install or update the Marketplace for IBM Spectrum Copy Data Management:
1. From a machine with internet access, download the marketplaceadmin-<version>.noarch.rpm file
from IBM Fix Central at https://www-945.ibm.com/support/fixcentral
2. Copy marketplaceadmin-<version>.noarch.rpm to the IBM Spectrum Copy Data Management
appliance "/" directory via SCP.
3. Log in as the root user, and execute "cd /" to move to the "/" directory.
4. Execute one of the following commands:
n To install the RPM package if it is not currently installed, execute "rpm -ivh marketplaceadmin-
<version>.noarch.rpm"
n To update an installed RPM package, execute "rpm -Uvh marketplaceadmin-<version>.noarch.rpm"
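The install-or-update decision above can be automated; a hedged sketch (the helper function is an assumption, not part of the product — it simply picks the -ivh or -Uvh flag based on whether the package is already installed):

```shell
# Pick the rpm flag for the two cases above: -Uvh if the package is already
# installed, -ivh otherwise. The helper name is illustrative.
choose_rpm_flag() {
  if rpm -q "$1" >/dev/null 2>&1; then
    echo "-Uvh"   # package present: upgrade in place
  else
    echo "-ivh"   # package absent (or rpm unavailable): fresh install
  fi
}
# usage: rpm "$(choose_rpm_flag marketplaceadmin)" marketplaceadmin-<version>.noarch.rpm
```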
RELATED TOPICS:
l Install IBM Spectrum Copy Data Management as a Virtual Appliance on page 67
l Manage the Administrative Console on page 535
User's Guide Backup and Restore the Catalog
To manage your IBM Spectrum Copy Data Management catalog through the Catalog Manager:
1. From a supported browser, enter the following URL:
https://<HOSTNAME>:8090/
where <HOSTNAME> is the IP address of the virtual machine where the application is deployed.
2. In the login window, select System from the Authentication Type drop-down menu. Enter your
password to access the Administrative Console. The default password is ecxadLG235.
3. Click Menu, then select Catalog Manager.
4. Select Backup Catalog or Restore Catalog.
Backup Catalog: In the Directory field, enter the backup destination on the IBM Spectrum Copy Data
Management host. Ensure the destination volume exists and that there is enough room on the
destination volume for the Catalog backup. Click Backup to begin the Catalog backup.
Backup Catalog considerations: IBM Spectrum Copy Data Management will be stopped while the
Catalog is being backed up. The IBM Spectrum Copy Data Management user interface will not be
accessible, and all running jobs will be aborted.
Restore Catalog: In the Directory field, enter the source of the Catalog restore on the IBM Spectrum
Copy Data Management host. Ensure the source volume exists. Click Restore to begin the Catalog
restore.
Restore Catalog considerations: IBM Spectrum Copy Data Management will be stopped while the
Catalog is being restored. The IBM Spectrum Copy Data Management user interface will not be
accessible, and all running jobs will be aborted. All IBM Spectrum Copy Data Management snapshots
created after the Catalog backup was run will be lost.
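The "ensure the destination volume exists and that there is enough room" check above can be pre-flighted from the shell; a minimal sketch (the helper name and the minimum-space threshold are assumptions, not from the guide):

```shell
# check_dest <dir> <min_kb>: succeed only if <dir> exists and its filesystem
# has at least <min_kb> kilobytes free. Helper name and threshold are examples.
check_dest() {
  [ -d "$1" ] || return 1                        # destination must exist
  avail=$(df -Pk "$1" | awk 'NR==2 {print $4}')  # available KB on that filesystem
  [ "$avail" -ge "$2" ]
}
# usage: check_dest /catalog-backups 10485760 || echo "not enough room"
```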
User's Guide Modifying Network Settings
Once changes have been made to the IP or hostname using the system-config-network tool, a reboot of
the appliance may be required, and any associated VADP proxy servers may need to be updated.
To update VADP proxy server settings after an IBM Spectrum Copy Data Management appliance IP
or hostname change:
Changes made to the IP or hostname of the IBM Spectrum Copy Data Management appliance will result in a
loss of communication with associated VADP proxy servers. Follow the steps below on each associated
VADP proxy server to re-establish communication with the IBM Spectrum Copy Data Management
appliance.
SSH to each VADP proxy and enter the following commands:
1. Set the ECX_HOST variable in /opt/ECX/bin/ecxvadp to the new IBM Spectrum Copy Data Management
appliance IP or hostname, and save:
vi /opt/ECX/bin/ecxvadp
ECX_HOST=<new ECX IP or Hostname>
2. Set the ECX_HOST variable in /etc/rc.d/init.d/ecxvadp to the new IBM Spectrum Copy Data Management
appliance IP or hostname, and save:
vi /etc/rc.d/init.d/ecxvadp
ECX_HOST=<new ECX IP or Hostname>
Any associated VADP proxy servers should now be able to communicate with the IBM Spectrum Copy Data
Management appliance using the updated IP address or hostname.
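The two vi edits above can also be made non-interactively; a sketch assuming GNU sed (the helper name is illustrative — on a proxy you would run it against /opt/ECX/bin/ecxvadp and /etc/rc.d/init.d/ecxvadp):

```shell
# set_ecx_host <file> <host>: rewrite the ECX_HOST= line in place.
# Assumes GNU sed (-i with no backup suffix); helper name is illustrative.
set_ecx_host() {
  sed -i "s/^ECX_HOST=.*/ECX_HOST=$2/" "$1"
}
# usage on a VADP proxy (example IP):
#   set_ecx_host /opt/ECX/bin/ecxvadp 10.0.0.5
#   set_ecx_host /etc/rc.d/init.d/ecxvadp 10.0.0.5
```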
RELATED TOPICS:
l Log On to the Virtual Appliance on page 524
l Manage the Administrative Console on page 535
User's Guide Upload an SSL Certificate
To upload a certificate:
1. Contact your network administrator for the name of the certificate to export.
2. From a supported browser, export the certificate to your computer. Make note of the location of the
certificate on your computer. The process of exporting certificates varies based on your browser. See
Related Topics.
3. From a supported browser, enter the following URL:
https://<HOSTNAME>:8090/
where <HOSTNAME> is the IP address of the virtual machine where the application is deployed.
4. In the login window, select System from the Authentication Type drop-down menu. Enter your
password to access the Administrative Console. The default password is ecxadLG235.
5. Click Manage your certificates.
6. Select the Certificate Type, browse for the certificate file on your computer, then click Upload.
7. Reboot the virtual machine where the application is deployed.
RELATED TOPICS:
l Microsoft Knowledge Base Article 179380: How to Remove, Import, and Export Digital
Certificates
l Firefox Knowledge Base Article: Advanced settings for accessibility, browsing, system
defaults, network, updates, and encryption
l Google Chrome Knowledge Base Article: Advanced security settings
l Register a Provider on page 79
User's Guide Horizontal Scale Out
Step 1: Deploy a MongoDB virtual machine (VM) from the .ova file:
The resulting VM will have 4 processors and 16GB of RAM with a single 50GB VMDK. The guest operating
system is CentOS 7.6.
Step 2: Copy the appropriate VMDKs from IBM Spectrum Copy Data Management to the MongoDB
VM.
Note: The next step requires that you know the difference between the two databases. From this point
forward, the two databases (and VMs) will be denoted as 'mongo1' and 'mongo2'. In a standard deployment
case where neither database has been ‘extended’ (i.e., where neither LVM volume group has had additional
logical volumes added to it), mongo1 is the 100GB hard disk, and mongo2 is the 250GB hard disk. In cases
where there is not a standard deployment and disks have been extended, knowing which VMware disk
belongs to each volume group is critical. It is best to contact support to determine the appropriate VMs.
Each of the steps below will need to be performed for mongo1 and mongo2. Only proceed with mongo2 after
completing all of the steps for mongo1.
Prerequisites
l Ensure that both IBM Spectrum Copy Data Management and the MongoDB VM are powered off.
l Have available the IP address of the vCenter server that manages the IBM Spectrum Copy Data
Management appliance.
l Determine the names of the IBM Spectrum Copy Data Management and MongoDB VMs. These are the
VM names displayed in the vCenter UI.
3. Enter the command, substituting the vCenter IP for <vCenter_ip>, the vCenter user name for <vCenter_
userid>, the IBM Spectrum Copy Data Management VM name for <IBM Spectrum Copy Data
Management_VM_name>, and the MongoDB VM name for <MongoDB_VM_name>:
./copy_mongodb_vmdk.ps1 mongo1 <vCenter_ip> <vCenter_userid> <IBM Spectrum
Copy Data Management_VM_name> <MongoDB_VM_name> -whatif:$false
4. The script will prompt for the hard disk number that corresponds to the database (for example, mongo1).
Type the number at the prompt and press Enter.
5. When all hard disk numbers have been entered, type the letter 'q' at the prompt.
Note: Depending on the size of the VMDKs and the speed of the storage network, the script may run for as
long as one hour. To observe progress, consult the vSphere UI.
Prerequisites
l A deployed, powered off MongoDB VM to which the IBM Spectrum Copy Data Management VMDK(s)
have been copied.
l Have available the IP address of the IBM Spectrum Copy Data Management virtual appliance.
Step 4: Configure IBM Spectrum Copy Data Management to use the MongoDB VMs rather than the
collocated MongoDB processes.
Prerequisites
l Steps 1 - 3 have been completed for both mongo1 and mongo2.
l A powered off IBM Spectrum Copy Data Management appliance that is in its default configuration with
collocated MongoDBs.
l The password for the IBM Spectrum Copy Data Management appliance's root account.
l Have available the IP addresses of the mongo1 and mongo2 VMs.
3. After the IBM Spectrum Copy Data Management VM is up, use the vSphere UI console facility to open
the VM's console.
4. Log in to the IBM Spectrum Copy Data Management shell as root using the password set when you first
logged into the IBM Spectrum Copy Data Management shell.
5. Edit (using vi) the managedb.config file located at
/opt/ECX/tools/scripts/managedb.config and set the properties:
MONGO_HOST=<mongo1_ip>
MONGO2_HOST=<mongo2_ip>
MONGO2_PORT=27017
where <mongo1_ip> is the IP of mongo1 and <mongo2_ip> is the IP of mongo2.
6. Restart the IBM Spectrum Copy Data Management appliance. Please give IBM Spectrum Copy Data
Management several minutes to fully restart.
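After the edit in step 5, the relevant lines in managedb.config would read as follows (the IP addresses are example placeholders, not from the guide):

```shell
# Example managedb.config entries after step 5 (addresses are placeholders)
MONGO_HOST=10.0.0.21
MONGO2_HOST=10.0.0.22
MONGO2_PORT=27017
```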
Results
It may take several minutes for IBM Spectrum Copy Data Management to fully restart. Once IBM Spectrum
Copy Data Management has restarted, log in to the IBM Spectrum Copy Data Management appliance UI to
ensure that the procedure completed successfully.
User's Guide Restoring A Snapshot From A FlexGroup Volume To Another FlexGroup Volume
options cifs.show_snapshot
options cifs.show_snapshot on
2. Obtain the name of the source and destination volumes, and the name of the snapshot to be restored.
3. Deactivate any active quota rules on the destination FlexGroup volume. Active quota rules can be
deactivated by issuing the following command from the OnTap CLI:
volume quota modify -vserver <vserver> -volume <volume> -state off
Note: Information contained between the less-than and greater-than symbols should contain the source
and destination servers and flexgroups. Omit the less-than and greater-than symbols when running the
command. The colon and dashes should remain.
4. Enter the following command to initiate the restore:
snapmirror restore -source-path <source vserver>:<source flexgroup> -
destination-path <destination vserver>:<destination flexgroup> -snapshot
<snapshot name>
5. If there were active quota rules applied to the destination FlexGroup volumes that were deactivated in
Step 3, reactivate them. Use the following command to reactivate quota rules:
volume quota modify -vserver <vserver> -volume <volume> -state on
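The placeholder substitution described in the note above can be sketched as follows (all values are example placeholders, not from the guide; the colon between vserver and FlexGroup remains, per the note):

```shell
# Assemble the snapmirror restore command from example values. The colon
# separating vserver and FlexGroup names stays, as the note above explains.
SRC_VSERVER="svm_src"; SRC_FLEXGROUP="fg_data"
DST_VSERVER="svm_dst"; DST_FLEXGROUP="fg_data_restore"
SNAPSHOT="daily.2021-07-30_0010"
echo "snapmirror restore -source-path ${SRC_VSERVER}:${SRC_FLEXGROUP} -destination-path ${DST_VSERVER}:${DST_FLEXGROUP} -snapshot ${SNAPSHOT}"
```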
User's Guide Documentation and Support
User's Guide Documentation Roadmap
Documentation Roadmap
Help System
In IBM Spectrum Copy Data Management, when needed:
l Click the help icon to invoke help specific to the active function.
l Use the Help system's Search and Index features to locate pertinent information, as these features search
the entire documentation suite.
User's Guide
This PDF is intended for IBM Spectrum Copy Data Management users, system administrators, and the Super
User. It contains information, procedures, and tips for the most commonly used functions.
System administrators can use this guide to help install, maintain, and start the application, manage users,
and catalog resource information. Users can find procedures on how to search and browse for objects,
generate and interpret reports, schedule jobs, and orchestrate Backup and Restore jobs.
RELATED TOPICS:
l About the Help System on page 555
User's Guide About the Help System
Pop-up windows must be enabled in your browser to access the Help system and some IBM Spectrum Copy
Data Management operations.
Search for words on a help page by using the Find feature in your browser.
RELATED TOPICS:
l Documentation Roadmap on page 554
User's Guide Reference Topics
Reference Topics
The topics in the following section cover reference information including virtual machine privileges, search
and filter guidelines, and return code references.
User's Guide Search and Filter Guidelines
Using the following inline search strings, you can perform complex searches based on a file's location, size,
and access, creation, or modified time from the basic search field.
Search by object access, creation, and modified time:
Search for cataloged objects that were last accessed, modified, or created at a specific time or time range
using the following examples:
atime:2yearsago searches for all objects with an access time of two years ago from the time of
the search.
atime:2yearsago-lastyear searches for all objects with an access time between two years ago
and last year.
atime:past2weeks searches for all objects with an access time from the past two weeks.
In each example, substituting ctime searches against the object's creation time, and substituting
mtime searches against the object's modification time.
Wildcard Considerations:
A wildcard is a character that you can substitute for zero or more unspecified characters when searching text.
Position wildcards at the beginning, middle, or end of a string, and combine them within a string.
l Match a character string with an asterisk, which represents a variable string of zero or more characters:
string* searches for terms like string, strings, or stringency
str*ing searches for terms like string, straying, or straightening
You can use multiple asterisk wildcards in a single text string, though this might considerably slow down a
large search.
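In regular-expression terms, the asterisk wildcard described above corresponds to `.*`; a small illustrative sketch (the helper is not part of the product):

```shell
# wildcard_to_regex: translate the search wildcard above into an anchored
# regular expression, mapping each * to .* (illustrative helper only).
wildcard_to_regex() {
  printf '^%s$\n' "$(printf '%s' "$1" | sed 's/\*/.*/g')"
}
# wildcard_to_regex 'str*ing' produces an expression matching string,
# straying, or straightening, as in the example above.
```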
RELATED TOPICS:
l Search for Objects on page 355
l View Object Details on page 360
l View NetApp ONTAP File Details on page 361
l Create an Inventory Job Definition - NetApp ONTAP File on page 194
User's Guide Select, Sort, and Reorder Columns
RELATED TOPICS:
l View a Provider on page 100
l Edit a Schedule on page 165
l Edit a Job Definition on page 351
l Monitor a Job Session on page 172
l Search Overview on page 354
l Run a Report on page 370
User's Guide LDAP User Name Syntax
RELATED TOPICS:
l Upload an SSL Certificate on page 544
l User Administration and Security Management on page 14
User's Guide Return Code Reference
2 SIGINT
3 SIGQUIT
4 SIGILL
5 SIGTRAP
6 SIGABRT
7 SIGBUS
8 SIGFPE
9 SIGKILL
10 SIGUSR1
11 SIGSEGV
12 SIGUSR2
13 SIGPIPE
14 SIGALRM
15 SIGTERM
16 SIGSTKFLT
17 SIGCHLD
18 SIGCONT
19 SIGSTOP
20 SIGTSTP
21 SIGTTIN
22 SIGTTOU
23 SIGURG
24 SIGXCPU
25 SIGXFSZ
26 SIGVTALRM
27 SIGPROF
28 SIGWINCH
29 SIGIO
30 SIGPWR
31 SIGSYS
34 SIGRTMIN
35 SIGRTMIN+1
36 SIGRTMIN+2
37 SIGRTMIN+3
38 SIGRTMIN+4
39 SIGRTMIN+5
40 SIGRTMIN+6
41 SIGRTMIN+7
42 SIGRTMIN+8
43 SIGRTMIN+9
44 SIGRTMIN+10
45 SIGRTMIN+11
46 SIGRTMIN+12
47 SIGRTMIN+13
48 SIGRTMIN+14
49 SIGRTMIN+15
50 SIGRTMAX-14
51 SIGRTMAX-13
52 SIGRTMAX-12
53 SIGRTMAX-11
54 SIGRTMAX-10
55 SIGRTMAX-9
56 SIGRTMAX-8
57 SIGRTMAX-7
58 SIGRTMAX-6
59 SIGRTMAX-5
60 SIGRTMAX-4
61 SIGRTMAX-3
62 SIGRTMAX-2
63 SIGRTMAX-1
64 SIGRTMAX
RELATED TOPICS:
l Create a Script Job Definition on page 348
User's Guide Frequently Asked Questions
Deployment
How is IBM Spectrum Copy Data Management distributed?
In most cases, IBM Spectrum Copy Data Management is distributed as a virtual appliance through an OVF
template.
What are the requirements of the datastores used for the hard disks? What types of VMware
datastores are supported?
The type of datastore on which IBM Spectrum Copy Data Management is deployed is transparent to IBM
Spectrum Copy Data Management.
Yes. This is a function of the virtual appliance, and can be set during IBM Spectrum Copy Data Management
installation. Better performance can be achieved with thick provisioning of the appliance.
The more heavily loaded the ESX server is, the longer the boot process might take.
What are the default IBM Spectrum Copy Data Management user names and passwords?
When logging on to IBM Spectrum Copy Data Management for the first time, the default user name is admin
and the default password is password. You will be prompted to reset the default password.
When logging on to the management console of the virtual machine, the default user name is administrator
and the default password is ecxadLG235.
Resources
Can the disks be increased dynamically?
IBM Spectrum Copy Data Management data volumes can be expanded if necessary with the approval of
Technical Support.
What resource can I add to improve IBM Spectrum Copy Data Management performance?
Increasing memory should help improve performance.
Is it possible to install proprietary software, such as antivirus software, on the virtual appliance?
It is not recommended to install third party applications on the virtual appliance without approval from
Technical Support.
Can I access the IBM Spectrum Copy Data Management user interface remotely?
Yes. The IBM Spectrum Copy Data Management user interface is browser based. Supported browsers and
the URL are described in the topics System Requirements on page 23 and Start IBM Spectrum Copy Data
Management on page 69.
What ports are needed to access the IBM Spectrum Copy Data Management user interface?
To access IBM Spectrum Copy Data Management, appropriate ports need to be opened through the firewall.
For details, see the topic User Administration and Security Management on page 14.
What operating system is IBM Spectrum Copy Data Management built on?
CentOS is the operating system on the IBM Spectrum Copy Data Management virtual appliance.
Is Java used as part of the IBM Spectrum Copy Data Management appliance?
Yes. However, OpenJDK is used rather than the Oracle JRE.
Do the IBM Spectrum Copy Data Management cataloging and reporting functions impact the
performance of the registered storage systems?
The cataloging function is built on technology that is designed to run as low priority on the storage system and
automatically adjust itself to give top priority to primary workload operations. The reporting functions do not
impact registered storage systems as they run on the IBM Spectrum Copy Data Management virtual
appliance.
Connectivity
How does IBM Spectrum Copy Data Management connect to NetApp ONTAP storage systems?
IBM Spectrum Copy Data Management connects to NetApp ONTAP storage systems through HTTPS or
HTTP.
Does IBM Spectrum Copy Data Management work with storage vendors other than DellEMC, IBM,
and NetApp?
IBM Spectrum Copy Data Management software works with DellEMC, IBM, and NetApp storage and VMware
infrastructure.
IBM Spectrum Copy Data Management provides Backup and Restore support for customers with VMware
leveraging heterogeneous storage, extending Backup use cases to VMware on mixed storage.
For the search and reporting features of IBM Spectrum Copy Data Management, the VMware environment
can use any storage; it does not have to be DellEMC Unity, IBM, or NetApp. Therefore, IBM Spectrum Copy
Data Management provides visibility and insight into VM information across any storage device.
Does IBM Spectrum Copy Data Management work with volumes that have non-Windows file
systems?
Yes. IBM Spectrum Copy Data Management catalogs NFS and CIFS files residing on NetApp ONTAP volume
snapshots. Linux/UNIX files are stored using the NFS protocol, and Windows files are stored using the CIFS protocol.
Does IBM Spectrum Copy Data Management work with SnapManager data?
IBM Spectrum Copy Data Management catalogs the metadata on LUNs created by SnapManager for SQL
Server and Exchange. File-level granularity of content hosted inside these LUNs is not available.
Cataloging
If you add a vCenter into an Inventory job definition, does that automatically discover all the ESX
servers within that vCenter?
Yes. Once cataloged, view available VMware resources through the Inventory browse function on the Search
tab.
Why is it that sometimes many jobs and tasks are marked with Waiting indicators on the Jobs tab?
The number of operations in progress on the Jobs tab varies depending on the number of jobs currently
running. IBM Spectrum Copy Data Management controls the number of jobs allowed to run. When the number
of jobs exceeds the value defined by IBM Spectrum Copy Data Management, jobs marked with Waiting
indicators display on the Jobs tab. IBM Spectrum Copy Data Management also controls the number of job
tasks to run simultaneously for a given job when multiple jobs are running.
Operation
How do I protect and recover the IBM Spectrum Copy Data Management appliance itself?
Backing up the IBM Spectrum Copy Data Management appliance regularly is a critical operation. For more
information, contact Technical Support.
Why is it that when I select Hide Duplicates when performing an advanced search, some duplicate
objects still display in the search results pane?
In some cases, the name of a returned object on the search results pane may be the same as another
object's; however, the resources where the objects reside are different. Review the file properties of the
objects by selecting their names on the search results pane to view the differences between the returned entries.
How are IBM Spectrum Copy Data Management logs collected for troubleshooting?
There are two approaches for downloading logs. Download logs from the Support menu or access the IBM
Spectrum Copy Data Management appliance through a command prompt. The first approach is simpler and
generally sufficient. The second approach produces a more comprehensive set of logs. See the topic Collect
Logs For Troubleshooting on page 527.
Backup/Restore Jobs
For a Backup job, how do I update the retention after a job has run?
Open the existing job definition, click Snapshot in the workflow pane, and update the Keep Snapshots
parameter. The retention policy changes to the supplied value when the job is next run.
To what extent do the IBM Spectrum Copy Data Management Backup and Restore functions impact
the performance of NetApp storage systems?
The Backup and Restore functions employ technologies such as Snapshot and FlexClone that are designed
to be low-impact on the NetApp storage systems. Generally, users should observe little unexpected
performance impact on the storage systems.
RELATED TOPICS:
l Oracle Database Support FAQ on page 572
l Microsoft SQL Server Support FAQ on page 592
l System Requirements on page 23
User's Guide Oracle Database Support FAQ
IBM Spectrum Copy Data Management simplifies Oracle database copy management by enabling
administrators to orchestrate application-consistent copy creation, cloning and recovery in minutes, instead of
hours or days. IBM Spectrum Copy Data Management copy management leverages the advanced snapshot
and replication features of the underlying storage platform to rapidly create, replicate, clone, and restore
copies of Oracle databases in the most efficient way possible, in both time and space. IBM Spectrum Copy
Data Management enables you to focus on the backup and recovery requirements of your business rather
than the technical details of the underlying storage platforms.
IBM Spectrum Copy Data Management is an intelligent copy data management solution that delivers end-to-
end automation, orchestration, and self-service functionality for your Oracle environment through a
comprehensive and scalable Inventory. With the self-service features of IBM Spectrum Copy Data
Management, your users are empowered to create clones on demand, freeing DBAs, while at the same time
offering the advanced recovery features needed for Oracle environments.
IBM Spectrum Copy Data Management Oracle Copy Data Management solution supports the following
Oracle deployment modes:
l Single instance – a single instance running on a single server accessing a database
l RAC (Real Application Clusters) leveraging ASM – multiple instances running on multiple servers
access a database simultaneously
l ASM (Automatic Storage Management) – Oracle’s own volume manager and cluster filesystem that is
optimized for Oracle Database features.
Do I need to deploy any additional agents to protect Oracle standalone or Oracle RAC servers?
IBM Spectrum Copy Data Management for Oracle is delivered as a VMware OVA that is easily deployed on
demand in a matter of minutes. Once deployed, you simply register your Oracle servers with appropriate
credentials and then let IBM Spectrum Copy Data Management discover the rest. IBM Spectrum Copy Data
Management eliminates the complexity of manually deploying and maintaining application agents on Oracle
servers. A lightweight application-aware component is automatically injected to the required Oracle servers
on demand and automatically updated to the latest version if required.
Application-consistent Oracle database backup creation (local and remote), step by step
IBM Spectrum Copy Data Management auto-discovers databases and enables copies only of eligible
databases. To be eligible for IBM Spectrum Copy Data Management backup, the Oracle database needs to
be residing on a supported storage platform. With IBM Spectrum Copy Data Management, application owners
do not need to be concerned about storage infrastructure.
IBM Spectrum Copy Data Management creates application-consistent Oracle database copies without the
need to build and maintain complex RMAN scripts.
A typical IBM Spectrum Copy Data Management Oracle database backup creation workflow consists of the
following steps:
l Auto-inject a lightweight component into the standalone Oracle Server(s) or one of the Oracle RAC
server node(s) running a database instance to be copied
l Discover storage volume mapping to selected Oracle database(s) and logs
l Place the Oracle database in hot backup mode
l Automatically create a consistency group for related storage volumes
l Create an application-consistent backup of the consistency group
l Take the Oracle database out of hot backup mode (typically within a few seconds of entering the hot
backup mode)
l Optionally create log copies into the specified mount points
l Optionally create selected masked copies using masking tools for secure DevOps use
l Catalog Oracle copies in Inventory and optionally record details in RMAN recovery catalog
l Optionally replicate the application-consistent backup to a remote location leveraging storage replication
l Clean up auto-injected components from the Oracle server node
IBM Spectrum Copy Data Management creates and uses in-place copies, so no data is physically moved.
Replication to off-host storage is performed by leveraging storage replication, which reduces the amount of
impact on Oracle Servers and Databases. IBM Spectrum Copy Data Management generated application-
consistent copies are both space and time efficient. With the same ease, a DBA can automate the creation of
remote copies for DR use cases.
Does IBM Spectrum Copy Data Management Oracle solution leverage the Storage Consistency
Group feature?
The storage consistency group feature allows storage administrators to take a snapshot of database
applications where the data is spread across multiple volumes to maintain consistency across all volumes.
In a typical Oracle Database, the data is spread across different volumes for better IO performance and
availability. IBM Spectrum Copy Data Management Oracle application-consistent backup creation ensures
that appropriate consistency groups are automatically created to maintain consistency across all related
volumes.
Why can I not select some of the databases for protection in an Oracle backup workflow?
You cannot select a database if it is not eligible for protection. Hover your cursor over the database name to
view the reasons the database is ineligible, such as that the database files, control files, or redo log files are
stored on unsupported storage.
Will IBM Spectrum Copy Data Management auto discover newly added databases and automatically
protect them?
If you select the parent Oracle Home in a Backup job definition, all databases under it are protected. If a new
database is added under the home, it will be automatically protected once it is cataloged. Discovery and
cataloging of new Oracle databases occurs as part of a regularly scheduled Oracle Inventory job.
Does the Oracle database need to be on supported storage for IBM Spectrum Copy Data
Management?
Yes, the IBM Spectrum Copy Data Management Oracle solution leverages storage snapshots for database
protection. All databases, database files, control files and redo logs must be on supported storage systems for
it to be eligible for protection.
Does IBM Spectrum Copy Data Management leverage the Oracle 12c Storage Snapshot feature?
This Oracle 12c feature enables you to take a storage snapshot of your database without putting the
database into BACKUP mode. When you need to recover, you can recover to the point in time of the
snapshot, or roll forward by using the database archive logs to recover part or all of the database. IBM
Spectrum Copy Data Management fully supports this feature starting in IBM Spectrum Copy Data
Management 2.5.1.
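As an illustrative sketch of what a snapshot-based recovery looks like on the Oracle side (the timestamp is a placeholder; refer to Oracle documentation for the full syntax):

```sql
-- Recover using a storage snapshot taken without BACKUP mode (Oracle 12c+).
-- The snapshot time must match when the storage snapshot was taken.
RECOVER DATABASE UNTIL CANCEL SNAPSHOT TIME '2021-07-30 12:00:00';
```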
Does IBM Spectrum Copy Data Management support protection of Offline Databases?
Databases in offline mode are not automatically included during backup workflows, unless they share storage
volumes with a selected database that is active. IBM Spectrum Copy Data Management marks the offline
databases and does not present them for Oracle Restore Workflows. Users may be able to retrieve these
database files as flat files and perform application mount outside of IBM Spectrum Copy Data Management.
Does IBM Spectrum Copy Data Management support protection of Oracle databases not running in
Archive Log mode?
Yes, protection of Oracle databases running in NOARCHIVELOG mode is now supported for both Inventory
and Backup use cases. You can recover a database to the point of the most recent snapshot. PIT recoveries
are not supported for databases running in NOARCHIVELOG mode.
Does IBM Spectrum Copy Data Management support protection of Oracle databases using pFile
(text initialization parameter files)?
Yes, IBM Spectrum Copy Data Management now supports backup of databases started through a pFile in
addition to an spFile.
Does IBM Spectrum Copy Data Management support archive log backup and log management?
Oracle DBMS creates database transaction logs as part of its operation. Oracle databases can run in the
following logging modes:
l NOARCHIVELOG mode – In NOARCHIVELOG mode, no transaction logs are created, and there is
no capacity to run point-in-time recovery or online backups. This is the default.
l ARCHIVELOG mode – In ARCHIVELOG mode, the database makes copies of all online redo logs
after they are filled. These copies are called archived redo logs. The archived redo logs are created
via the ARCH process. The ARCH process copies the archived redo log files to one or more archive
log destination directories. These saved archived logs are used for point-in-time recovery.
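As a quick reference, a DBA can check and change the logging mode from SQL*Plus (a sketch; switching to ARCHIVELOG mode requires SYSDBA privileges and a brief outage):

```sql
-- Check the current logging mode
SELECT log_mode FROM v$database;

-- Switch to ARCHIVELOG mode (brief restart required)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```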
IBM Spectrum Copy Data Management provides the following options for archive log file processing:
l Enable archive log backup (Recommended)
l Use existing archive log (Default)
IBM Spectrum Copy Data Management greatly simplifies archive log protection. If a user chooses to protect
archive logs, IBM Spectrum Copy Data Management enables continuous backup of archive logs to a specified
destination providing the lowest RPO (transaction level recoveries).
IBM Spectrum Copy Data Management automatically discovers the location where Oracle writes archived
logs. If this location resides on storage from a supported vendor, IBM Spectrum Copy Data Management can
protect it. If the existing location is not on supported storage, or if you wish to create an additional backup of
database logs, enable the Create Additional Archive Log Destination option in the Oracle Backup job
definition, then specify a path that resides on supported storage. When enabled, IBM Spectrum Copy Data
Management configures the database to start writing archived logs to this new location in addition to any
existing locations where the database is already writing logs. If multiple databases are selected for backup,
then each server hosting a database must have its destination directory set individually.
Can I specify a retention period for backed up archive logs within IBM Spectrum Copy Data
Management?
If the Create Additional Archive Log Destination option is selected, IBM Spectrum Copy Data Management
automatically manages the retention of only those archived logs that are under the new destination specified
in the job definition. After a successful backup, logs older than the backup are automatically deleted from the
IBM Spectrum Copy Data Management-managed destination.
If the Use Existing Archive Log Destination(s) option is selected in Oracle Backup job definition, IBM
Spectrum Copy Data Management does not automatically purge any archived logs. The retention of archived
logs must be managed externally, for example using RMAN. In order to support point-in-time recovery, ensure
that the retention period is at least large enough to retain all archived logs between successive runs of the
Oracle Backup job.
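When retention is managed externally, archived logs can be purged from RMAN on a schedule; for example (the seven-day window is illustrative and should be at least as long as the interval between Oracle Backup job runs):

```sql
CROSSCHECK ARCHIVELOG ALL;
DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';
```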
Does the IBM Spectrum Copy Data Management Oracle solution integrate with RMAN?
Oracle Recovery Manager (RMAN), a command-line and Enterprise Manager-based tool, is the method
preferred by Oracle DBAs for backup and recovery of Oracle databases, including maintaining an RMAN
repository.
IBM Spectrum Copy Data Management creates application-consistent Oracle database copies simply – with
no need to build and maintain complex RMAN protection scripts. At the same time, IBM Spectrum Copy Data
Management automates cataloging of Oracle database copies in the RMAN recovery catalog. This enables
DBAs to leverage RMAN for:
l Verification - IBM Spectrum Copy Data Management-created Oracle database copies can be
instantly mounted so they can be easily verified through the RMAN verify command
l Advanced Recovery - IBM Spectrum Copy Data Management-created Oracle database copies can
be instantly mounted to perform RMAN-driven PIT recoveries of database and tablespace.
IBM Spectrum Copy Data Management offers the following choice to the user for RMAN catalog registration:
l Register all copies in the RMAN catalog to enable RMAN recoveries against all copies
l Register on-demand only selected copies in the RMAN catalog to enable RMAN recoveries only
when you need to recover
IBM Spectrum Copy Data Management simplifies the full application-aware, Oracle-consistent copy lifecycle
while maintaining the flexibility and benefits of RMAN-driven advanced recovery capabilities.
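For example, once a copy has been mounted and registered in the catalog, a DBA could verify it from RMAN (the tag shown is a hypothetical example of the ECX_<timestamp> form):

```sql
-- List cataloged copies and validate one by its tag
LIST COPY OF DATABASE;
RESTORE DATABASE VALIDATE FROM TAG 'ECX_20210730120000';
```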
Does IBM Spectrum Copy Data Management support pre/post scripts for the Application Backup
workflow?
Yes, IBM Spectrum Copy Data Management supports job-level Pre/Post scripts and job-level pre/post
Snapshot scripts to enable further customization.
l Job-level prescripts and postscripts are scripts that can be run before or after a job runs.
l Snapshot prescripts and postscripts are scripts that can be run before or after a storage-based
snapshot subpolicy runs. (Please refer to pre/post script topic in the IBM Spectrum Copy Data
Management User’s Guide for details.)
Data Masking
Does the IBM Spectrum Copy Data Management Oracle solution offer Data Masking integration with
third party masking tools?
A concern for security officers in any organization is to keep confidential information locked down, even
internally. Data masking is used to hide confidential data, by replacing it with fictitious data, when making data
copies for DevTest or other use cases. It prevents leakage of sensitive data in non-production databases via
static data masking [SDM], and production data in transit via dynamic data masking [DDM].
IBM Spectrum Copy Data Management includes integrated data masking workflows with the ability to
leverage third party masking tools. Traditionally, data masking is difficult, slow, and storage-consuming, but
with IBM Spectrum Copy Data Management it is easily integrated into the Oracle Backup workflow, allowing
creation of masked copies at a specified frequency. Masked copies are automatically marked in the Inventory.
Access to secure copies is managed by the administrator by leveraging the application-level RBAC.
Is a sample masking script provided with IBM Spectrum Copy Data Management?
A sample data masking script can be provided upon request. The sample script demonstrates data masking
integration with the built-in data masking functionality of Oracle 11g and 12c database systems.
o Using RMAN or custom scripts creates full backups, requiring large amounts of additional storage
o Creating full copies is slow
IBM Spectrum Copy Data Management solves these challenges with a simple, automated, end-to-end clone
lifecycle management:
l Self-service access to secure clones by QA team eliminates administrative and process bottlenecks
l IBM Spectrum Copy Data Management enables rapid database clones that are both time- and
space-efficient
l IBM Spectrum Copy Data Management promotes standardization and governance through
centralized Inventory, granular RBAC, and automated jobs
Your Oracle clones can be utilized and consumed instantly, for whatever your use case, through IBM
Spectrum Copy Data Management "Instant Disk Restore" jobs. IBM Spectrum Copy Data Management
catalogs and tracks all cloned instances. Instant Disk Restore can leverage the iSCSI or FC protocol to
provide an immediate mount of LUNs without transferring data.
Can I create an Instant Clone of an Oracle database for DevOps and Business Analytics?
IBM Spectrum Copy Data Management provides automated workflows to create instant clones of Oracle
databases regardless of size.
l Instantly create database clones from any of the copies in the IBM Spectrum Copy Data
Management Inventory, at local or remote locations, to accelerate Business Analytics.
l Enable and accelerate DevOps by providing Instant Disk Restore to secure clones of databases to
appropriate users via application-level RBAC.
Then, when your TestDev, DevOps, or research/analytics work is completed, you can save the clone to more
permanent storage or simply tear it down.
What is the granularity of Database recovery supported by IBM Spectrum Copy Data Management?
Supported recoveries for standalone or RAC configurations:
l Instant recovery of Oracle databases regardless of database size
l Recover databases to the original or to a new server (physical or virtual) with a few clicks
l Recover databases with new names with a few clicks
l Recover at a DR site from replicated copies with a few clicks
l Databases can be recovered to the point of a snapshot at the original or a new location
l Perform PIT recoveries to the original or a new location with a few clicks
l Recover RAC databases to any node of the RAC with a few clicks
l Databases running on older versions can be recovered to an instance running the same or a newer version
(IBM Spectrum Copy Data Management inherits any limitations defined by Oracle)
l Each selected database in a Restore job can have a separate destination specification. Databases can be
recovered using a new name to an original or new instance
l You can select one or more databases in a single Restore job definition
l Recoveries are supported across the same storage type (for example, ASM to ASM and standalone to
standalone)
l Databases are always recovered in online mode
Users can additionally perform advanced fine-grained recoveries via RMAN integration. You can use IBM
Spectrum Copy Data Management to instantly mount required snapshot copies to a specified Oracle server
and use RMAN to recover. All RMAN-supported recoveries are available to users.
During PIT (point-in-time) Oracle database recoveries, if no log file snapshot exists that is newer than the
chosen recovery PIT, the job creates a fresh log snapshot and uses it for recovery.
How is the Oracle Initialization parameter file (init.ora) processed during Database recovery?
The Oracle initialization parameter file (init.ora) is created by the DBA and defines the overall instance
configuration, such as how much memory should be allocated to the instance, the file locations, and internal
optimization parameters.
IBM Spectrum Copy Data Management catalogs the initialization parameters during a job and uses them
during recovery. IBM Spectrum Copy Data Management provides advanced options to control the behavior of
processing initialization parameters used to start up the recovered database in Instant Database Recovery
and DevOps workflows.
Customizable options allow you to use the same parameters as the source or specify a template pfile to use.
Additionally, cluster-related parameters like instance_number, thread, and cluster_database are set
automatically by IBM Spectrum Copy Data Management depending on the appropriate values for the
destination.
For more information, see Restore Jobs - Rename Mount Points and Initialization Parameter Options on page
340.
How does IBM Spectrum Copy Data Management set up mount points during Database recovery?
IBM Spectrum Copy Data Management provides advanced options for Instant Database Recovery and
DevOps workflows to override the behavior of mount point handling during recovery. The Mount Point
Rename option provides these selections:
l Append a timestamp: IBM Spectrum Copy Data Management appends a timestamp to the original
mount point.
l Do not rename: IBM Spectrum Copy Data Management uses the same path/name for mount points
or ASM diskgroups as the source.
l Add a custom prefix: Select this option to specify a custom prefix to be prepended to the source
paths/names. The prefix value may contain leading or trailing slashes. In the case of ASM diskgroup
names, the slashes are removed.
l Add a custom suffix: Select this option and specify a custom suffix to be appended to the source
paths/names.
l Replace a substring: Select this option to specify a custom string of characters in the old mount point
to be replaced with another string of characters.
For examples of these options, see Restore Jobs - Rename Mount Points and Initialization Parameter Options
on page 340.
Can I recover an Oracle Database running on a Linux physical server to Oracle running on a Linux
VM?
Yes, the database will be recovered onto disks configured as Physical RDM (pRDM).
Can I recover an Oracle Database running in Linux VM to Oracle running on a Linux Physical
Server?
Yes, the pRDM-configured database will be recovered as LUNs on the physical Linux Oracle server. Oracle
configured with VMDK can only be recovered to a VM.
Where are Oracle specific and IBM Spectrum Copy Data Management specific logs if errors occur?
All required logs (IBM Spectrum Copy Data Management and Oracle application) are collected as part of the
current log collection functionality. There should be no need to manually obtain Oracle application logs from
within Oracle Servers.
Does Oracle application-level encryption, such as Transparent Data Encryption (TDE), impact IBM
Spectrum Copy Data Management?
Transparent Data Encryption (TDE) stops would-be attackers from bypassing the database and reading
sensitive information from storage by enforcing data-at-rest encryption in the database layer. This is
application-layer encryption that wouldn’t typically impact IBM Spectrum Copy Data Management. The
encryption keys live outside the database in a "wallet" that is managed separately by the administrator. IBM
Spectrum Copy Data Management will create copies of the data on server A and mount them on server B, for
example. The database software on server B should be able to read the encrypted data as long as the
necessary keys are installed in the wallet there.
Does IBM Spectrum Copy Data Management provide Oracle specific reporting?
Reports are offered to assure that your Oracle databases are sound and that your IBM Spectrum Copy Data
Management jobs are verified. Reports provided by IBM Spectrum Copy Data Management specifically for
application support include:
l Application Configuration Report, which describes valuable system information about your Oracle
Database Servers, and affirms that Oracle is configured correctly to be eligible for backup creation.
l Application RPO Compliance Report, which determines which of your Oracle database servers are
not in compliance with your RPO parameters, and displays the reasons for their non-compliance.
System Requirements
Operating Systems:
l Red Hat Enterprise Linux / CentOS / Oracle Linux 6.5+ [9]
l Red Hat Enterprise Linux / CentOS / Oracle Linux 7.0+ [9]
l SUSE Linux Enterprise Server 11.0 SP4+ [9]
l SUSE Linux Enterprise Server 12.0+ [9]
Protocols:
l Fibre Channel [18]
l iSCSI [18]
l NFS
l Physical RDM backed by Fibre Channel or iSCSI disks attached to ESX [18]
Storage:
l NetApp Data ONTAP 9.x
l NetApp Clustered Data ONTAP 8.1, 8.2, 8.3, 9.4, 9.5, 9.6 and later
l Pure Storage running Purity APIs 1.5 and later: FlashArray//c, FlashArray//m
[1] For Oracle 12c multitenant databases, IBM Spectrum Copy Data Management supports protection and
recovery of the container database, including all pluggable databases under it. Granular recovery of specific
PDBs can be performed via Instant Disk Restore recovery combined with RMAN.
[2] Standalone databases protected by IBM Spectrum Copy Data Management can be recovered to the same
or another standalone server as well as to a RAC installation. When recovering from standalone to RAC, if the
source database uses Automatic Storage Management, then it will be successfully recovered to all nodes in
the destination cluster. If the source database uses non-ASM storage, the database will be mounted only on
the first node in the destination RAC. Source disks for standalone databases restoring to a virtual RAC
environment must be thick provisioned.
RAC databases protected by IBM Spectrum Copy Data Management can be recovered to the same or
another RAC installation as well as to a standalone server. In order to recover a RAC database to a
standalone server, the destination server must have Grid Infrastructure installed and an ASM instance must
be running.
[3] Oracle Data Guard primary and secondary databases support inventory and backup operations. Oracle
Data Guard databases will always be restored as primary databases without any Data Guard configurations
enabled.
[4] Oracle Flex ASM is not supported.
[5] RAC database recoveries are not server pool-aware. IBM Spectrum Copy Data Management can recover
databases to a RAC, but not to specific server pools.
[6] IBM Spectrum Copy Data Management supports recovering databases from a source physical server to a
destination virtual server by provisioning disks as physical RDMs. Similarly, IBM Spectrum Copy Data
Management can recover databases from a source virtual server that uses physical RDM to a destination
physical server. However, source databases on VMDK virtual disks can only be recovered to another virtual
server and not to a physical server.
[7] IBM Spectrum Copy Data Management does not support VADP-based protection of virtual Oracle servers.
Oracle data must reside directly on one of the supported storage systems listed above.
[8] On AIX LPAR/VIO servers, Oracle data must reside on disks attached to the server using NPIV. Virtual
SCSI disks are not supported.
[9] Linux LVM volumes containing Oracle data must use LVM version 2.02.118 or above. On SLES 11 SP4,
this version of LVM may not be available through the official repositories, in which case, databases running on
or recovered to SLES 11 SP4 systems must use ASM or non-LVM filesystems only.
[10] Masking and DevOps recoveries are not supported on virtual servers.
[11] See System Requirements on page 23 for supported VMware vSphere versions.
[12] Oracle servers registered as virtual must have VMware Tools installed and running.
[13] Oracle 12c multithreaded configurations are not supported.
[14] Data masking is not supported for Oracle in NetApp storage environments. Masking is not supported on
a source database on NFS (a copy of which will be cloned and masked) or on source databases on replica
copies. Instead of the default mirror copy, you must select a snapshot copy as a replication source.
[15] NetApp systems running in 7-Mode are not supported.
[16] The ORACLE_HOME directory may reside on ACFS. The data and log directories are required to be on
ACFS. IBM Spectrum Copy Data Management does not protect the Oracle installation that is contained in the
ORACLE_HOME directory.
[17] Oracle 19c standalone is supported for AIX 7.2. Oracle 19c RAC is supported for both IBM and Pure
storage systems.
Additional Requirements:
l Oracle database data and the flash recovery area (FRA) must reside on supported storage systems. IBM
Spectrum Copy Data Management can back up archived logs to a supported storage system if they are
not already on one.
l During an Instant Database Restore, there may be failures if the new name specified for the restored
database differs from an existing database name only by a numerical suffix. For clustered instances of
Oracle databases, the appliance always uses the global database name in the UI. During the inventory and
restore processes, individual instances using the numerical suffixes of the cluster must be correlated to
the global database name. The issue arises when, for example, "Production12" is discovered: is this
instance 12 of the "Production" database, instance 2 of the "Production1" database, or a database named
"Production12"?
[18] HPE Nimble Storage must be version 5.2 or later to support iSCSI and Fibre Channel.
[19] Oracle PIT (point-in-time) recovery is not supported with HPE Nimble Storage.
[20] The make permanent option is not available for HPE Nimble Storage using physical disks.
[21] Oracle ACFS on AIX 7.2 is supported. Oracle ACFS on AIX is not supported for flashcopy incremental
backups and iSCSI protocol.
[22] On IBM Systems Storage, condense is run during maintenance jobs.
For Oracle RAC clustered nodes running on vSphere versions prior to 6.0, virtual machines cannot use a
virtual SCSI controller whose SCSI Bus Sharing option is set to None. This is required to ensure that IBM
Spectrum Copy Data Management can hot-add shared virtual disks to the cluster nodes. For vSphere 6.0 and
above, this requirement does not apply. Instead, if an existing shared SCSI controller is not found, IBM
Spectrum Copy Data Management automatically enables the "multi-writer" sharing option for each shared
virtual disk.
Does IBM Spectrum Copy Data Management support Oracle running on a VMware VM?
Oracle support for VMware virtual machines requires Oracle data/logs to be stored on VMDK virtual disks or
Physical RDMs (pRDM). Virtual RDM disks are not supported. The VMDKs must reside on a datastore
created on LUNs from supported storage systems. Similarly, the Physical RDMs must be backed by LUNs
from supported storage systems.
Does IBM Spectrum Copy Data Management support Oracle Database 12c Multitenant features?
IBM Spectrum Copy Data Management supports the Oracle Database 12c R1 Multitenant option for backup
or clone of a container database (CDB). Recoveries of pluggable databases (PDB) are supported through
RMAN.
1. Perform an Instant Disk Restore of the Container Database (CDB) by using an Oracle Restore job in
IBM Spectrum Copy Data Management. The Oracle CDB backup may already have been cataloged
into RMAN when the backup was created, or you can opt to do this during the Instant Disk Restore.
2. Log in to RMAN. List the copies in the catalog and identify the tag from which you want to recover.
Tags for IBM Spectrum Copy Data Management-created entries are generally of the form
"ECX_<timestamp>".
3. Close the existing PDB by running the following command:
alter pluggable database <PDB_name> close;
4. Recover the PDB by running the following command:
run {
restore pluggable database <name> from tag '<tag_name>';
recover pluggable database <name>;
}
5. Open the recovered PDB by running the following command:
alter pluggable database <PDB_name> open;
Can I back up Oracle running on any storage to a supported storage system via VADP (VM
Replication job)?
No, not through the IBM Spectrum Copy Data Management Oracle Backup workflow. You can leverage a
VMware Backup job with pre/post scripts to protect an Oracle database in such a configuration.
Oracle Requirements
Review the following requirements and pre-requisites for registering an Oracle provider in IBM Spectrum
Copy Data Management.
Software
l The bash and sudo packages must be installed. Sudo must be version 1.7.6p2 or above. Run sudo -V to
check the version.
l Python version 2.6.x or 2.7.x must be installed.
l AIX only: If Oracle data resides on IBM Spectrum Accelerate storage, the IBM Storage Host Attachment
Kit (also known as IBM XIV Host Attachment Kit) must be installed on the Oracle server.
l RHEL/OEL/CentOS 6.x only: Ensure the util-linux-ng package is up-to-date by running yum update
util-linux-ng. Depending on your version or distribution, the package may be named util-linux.
l RHEL/OEL/CentOS 7.3 and above: A required Perl module, Digest::MD5, is not installed by default.
Install the module by running yum install perl-Digest-MD5.
l Linux only: If Oracle data resides on LVM volumes, ensure the LVM version is 2.02.118 or later. Run
lvm version to check the version and run yum update lvm2 to update the package if necessary.
l Linux only: If Oracle data resides on LVM volumes, the lvm2-lvmetad service must be disabled as it can
interfere with IBM Spectrum Copy Data Management's ability to mount and resignature volume group
snapshots/clones.
Run the following commands to stop and disable the service:
$ systemctl stop lvm2-lvmetad
$ systemctl disable lvm2-lvmetad
Additionally, disable lvmetad in the LVM config file. Edit the file /etc/lvm/lvm.conf and set:
use_lvmetad = 0
Connectivity
l The SSH service must be running on port 22 on the server and any firewalls must be configured to allow
IBM Spectrum Copy Data Management to connect to the server using SSH. The SFTP subsystem for
SSH must also be enabled.
l The server can be registered using a DNS name or IP address. DNS names must be resolvable by IBM
Spectrum Copy Data Management.
l When registering Oracle RAC nodes, register each node using its physical IP or name. Do not use a
virtual name or Single Client Access Name (SCAN).
l In order to mount clones/copies of Oracle data, IBM Spectrum Copy Data Management automatically
maps and unmaps LUNs to the Oracle servers. Each server must be preconfigured to connect to the
relevant storage systems at that site.
o For Fibre Channel, the appropriate zoning must be configured beforehand.
o For iSCSI, the Oracle servers must be configured beforehand to discover and log in to the targets on
the storage servers.
Authentication
l The Oracle server must be registered in IBM Spectrum Copy Data Management using an operating
system user that exists on the Oracle server (referred to as "IBM Spectrum Copy Data Management agent
user" for the rest of this topic).
l During registration you must provide either a password or a private SSH key that IBM Spectrum Copy
Data Management will use to log in to the server.
l For password-based authentication ensure the password is correctly configured and that the user can log
in without facing any other prompts, such as prompts to reset the password.
l For key-based authentication ensure the public SSH key is placed in the appropriate authorized_keys file
for the IBM Spectrum Copy Data Management agent user.
o Typically, the file is located at /home/<username>/.ssh/authorized_keys
o Typically, the .ssh directory must have its permissions set to 700 and the files under it to 600.
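The expected layout can be sketched as follows (this runs against a scratch directory; on a real server, substitute the agent user's home directory; the key string is a placeholder, and the 700 directory mode is a common convention):

```shell
home=$(mktemp -d)                        # stand-in for /home/<username>
mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"                   # owner-only access to the directory
echo "ssh-rsa AAAA... cdm-appliance" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"   # owner read/write only
stat -c '%a' "$home/.ssh/authorized_keys"   # prints 600
```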
Privileges
The IBM Spectrum Copy Data Management agent user must have the following privileges:
l Privileges to run commands as root and other users using sudo. IBM Spectrum Copy Data Management
requires this for various tasks such as discovering storage layouts and mounting and unmounting disks.
o The sudoers configuration must allow the IBM Spectrum Copy Data Management agent user to run
commands without a password.
o The requiretty setting must be disabled (!requiretty).
o The env_keep setting must allow the ORACLE_HOME and ORACLE_SID environment variables to
be retained.
l Privileges to read the Oracle inventory. IBM Spectrum Copy Data Management requires this to discover
and collect information about Oracle homes and databases.
o To achieve this, the IBM Spectrum Copy Data Management agent user must belong to the Oracle
inventory group, typically named oinstall.
o The agent user must have read permissions for all paths under the ORACLE_HOME directory. This
includes privileges to any Oracle home directory pointed to by HOMELOC or any other environment
variables.
l SYSDBA privileges for database instances. IBM Spectrum Copy Data Management needs to perform
database tasks like querying instance details, hot backup, RMAN cataloging, as well as starting/stopping
instances during recovery.
o To achieve this, the IBM Spectrum Copy Data Management agent user must belong to the OSDBA
operating system group, typically named dba.
o In the case of multiple Oracle homes each with a different OSDBA group, the IBM Spectrum Copy
Data Management agent user must belong to each group.
l SYSASM privileges, if Automatic Storage Management (ASM) is installed. IBM Spectrum Copy Data
Management needs to perform storage tasks like querying ASM disk information, as well as renaming,
mounting, and unmounting diskgroups.
o To achieve this, the IBM Spectrum Copy Data Management agent user must belong to the OSASM
operating system group, typically named asmadmin.
l Shell user limits for the IBM Spectrum Copy Data Management agent user must be the same as those for
the user that owns the Oracle home, typically named oracle. Refer to Oracle documentation for
requirements and instructions on setting shell limits. Run ulimit -a as both the oracle user and the IBM
Spectrum Copy Data Management agent user and ensure their settings are identical.
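The sudo-related requirements above can be sketched as a sudoers fragment (the user name cdmagent and the drop-in file path are hypothetical; NOPASSWD: ALL is the simplest form and can be tightened to specific commands per your security policy):

```
# /etc/sudoers.d/cdmagent  (hypothetical path and user name)
Defaults:cdmagent !requiretty
Defaults:cdmagent env_keep += "ORACLE_HOME ORACLE_SID"
cdmagent ALL=(ALL) NOPASSWD: ALL
```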
For examples on creating a new user with the necessary privileges, see Sample Configuration of an IBM
Spectrum Copy Data Management Agent User on page 590.
Database Discovery
IBM Spectrum Copy Data Management discovers Oracle installations and databases by looking through the
files /etc/oraInst.loc and /etc/oratab, as well as the list of running Oracle processes. If the files are
not present in their default location, the "locate" utility must be installed on the system so that IBM Spectrum
Copy Data Management can search for alternate locations of these files.
IBM Spectrum Copy Data Management discovers databases and their storage layouts by connecting to
running instances and querying the locations of their datafiles, log files, etc. In order for IBM Spectrum Copy
Data Management to correctly discover databases during cataloging and copy operations, databases must be
in "MOUNTED," "READ ONLY," or "READ WRITE" mode. IBM Spectrum Copy Data Management cannot
discover or protect database instances that are shut down.
Databases must be started using a server parameter file (spfile). IBM Spectrum Copy Data Management does
not support copy operations for databases that are started using a text-based parameter file (pfile).
Additionally, IBM Spectrum Copy Data Management creates aliases/symbolic links with names that follow a
consistent pattern. To ensure that ASM is able to discover the disks mapped by IBM Spectrum Copy Data
Management, you must update the ASM_DISKSTRING parameter to add this pattern.
Linux:
IBM Spectrum Copy Data Management creates udev rules for each disk to set the appropriate ownership and
permissions. The udev rules also create symbolic links of the form /dev/ecx-asmdisk/<diskId> that point to the
appropriate device under /dev.
To ensure the disks are discoverable by ASM, add the following pattern to your existing ASM_DISKSTRING:
/dev/ecx-asmdisk/*
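Once disks have been mapped, the generated links can be inspected from the shell. A sketch (the link names shown depend on the disks actually mapped in your environment):

```shell
# List the symbolic links IBM Spectrum Copy Data Management created
# for ASM disks; each should resolve to a block device under /dev.
if ls -l /dev/ecx-asmdisk/ 2>/dev/null; then
    # readlink -f resolves each link to its target device
    for link in /dev/ecx-asmdisk/*; do
        echo "$link -> $(readlink -f "$link")"
    done
else
    echo "no /dev/ecx-asmdisk links present (no disks mapped yet)"
fi
```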
AIX:
IBM Spectrum Copy Data Management creates a device node (using mknod) of the form /dev/ecx_asm<diskId> that points to the appropriate hdisk under /dev. IBM Spectrum Copy Data Management also sets the appropriate ownership and permissions for this new device.
To ensure that the disks are discoverable by ASM, add the following pattern to your existing ASM_DISKSTRING: /dev/ecx_asm*
Notes:
- If the existing value of the ASM_DISKSTRING is empty, you may have to first set it to an appropriate value that matches all existing disks, then append the value above.
- If the existing value of the ASM_DISKSTRING is broad enough to discover all disks (for example, /dev/*), you may not need to update it.
- Refer to Oracle documentation for details about retrieving and modifying the ASM_DISKSTRING parameter.
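As a non-authoritative sketch of the steps above (it assumes SYSASM access from the Grid Infrastructure owner, an ASM instance SID of +ASM, and an example existing pattern of /dev/sd* that you must replace with your own), the parameter can be inspected and extended from SQL*Plus:

```shell
# Sketch: inspect and extend ASM_DISKSTRING on the ASM instance.
# "+ASM" and '/dev/sd*' are example values; substitute your own.
export ORACLE_SID=+ASM
sqlplus -S / as sysasm <<'EOF'
SHOW PARAMETER asm_diskstring
ALTER SYSTEM SET asm_diskstring = '/dev/sd*', '/dev/ecx-asmdisk/*' SCOPE=BOTH;
EOF
```

Listing the existing pattern first matters: ASM_DISKSTRING replaces rather than appends, so omitting current patterns could make existing disks undiscoverable.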
Sample Configuration of an IBM Spectrum Copy Data Management Agent User
The commands below are examples for creating and configuring an operating system user that IBM Spectrum
Copy Data Management will use to log in to the Oracle server. The command syntax may vary depending on
your operating system type and version.
- Create the user that will be designated as the IBM Spectrum Copy Data Management agent user: useradd -m cdmagent
- Set a password if using password-based authentication: passwd cdmagent
- If using key-based authentication, place the public key in /home/cdmagent/.ssh/authorized_keys (or the appropriate file depending on your sshd configuration) and ensure the correct ownership and permissions are set, such as:
chown -R cdmagent:cdmagent /home/cdmagent/.ssh
chmod 700 /home/cdmagent/.ssh
chmod 600 /home/cdmagent/.ssh/authorized_keys
- Add the user to the Oracle installation and OSDBA groups: usermod -a -G oinstall,dba cdmagent
- If ASM is in use, also add the user to the OSASM group: usermod -a -G asmadmin cdmagent
Note: On AIX, omit the append argument (-a) when using the usermod command.
- Place the following lines at the end of your sudoers configuration file, typically /etc/sudoers. If your existing sudoers file is configured to import configuration from another directory (for example, /etc/sudoers.d), you can also place the lines in a new file in that directory:
Defaults:cdmagent !requiretty
Defaults:cdmagent env_keep+="ORACLE_HOME"
Defaults:cdmagent env_keep+="ORACLE_SID"
cdmagent ALL=(ALL) NOPASSWD:ALL
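Rather than editing /etc/sudoers directly, the lines can be staged as a drop-in file and syntax-checked before activation. A sketch (run as root; the file name cdmagent.sudoers is arbitrary):

```shell
# Write the entries to a temporary file, validate the syntax with
# visudo, and only then install it under /etc/sudoers.d.
cat > /tmp/cdmagent.sudoers <<'EOF'
Defaults:cdmagent !requiretty
Defaults:cdmagent env_keep+="ORACLE_HOME"
Defaults:cdmagent env_keep+="ORACLE_SID"
cdmagent ALL=(ALL) NOPASSWD:ALL
EOF

# visudo -cf checks the file without touching the live configuration;
# a syntax error in an active sudoers file can lock out sudo entirely.
if visudo -cf /tmp/cdmagent.sudoers; then
    install -m 0440 /tmp/cdmagent.sudoers /etc/sudoers.d/cdmagent \
        || echo "install failed (run as root)"
else
    echo "syntax check failed; not installing"
fi
```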
Microsoft SQL Server Support FAQ
IBM Spectrum Copy Data Management simplifies SQL Server copy management by enabling administrators
to orchestrate application-consistent copy creation, cloning and recovery in minutes, instead of hours or days.
IBM Spectrum Copy Data Management leverages the advanced snapshot and replication features of the underlying storage platform to rapidly create, replicate, clone, and restore copies of SQL Server databases in the most efficient way possible, in both time and space. It enables you to focus on the backup and restore requirements of your business rather than the technical details of the underlying storage platforms.
IBM Spectrum Copy Data Management is an intelligent copy data management solution that delivers end-to-
end automation, orchestration, and self-service functionality for your SQL Server environment through a
comprehensive and scalable catalog. With the self-service features of IBM Spectrum Copy Data
Management, your users are empowered to create clones on demand, freeing DBAs, while at the same time
offering the advanced recovery features needed for SQL Server environments.
The IBM Spectrum Copy Data Management SQL Server solution supports the following SQL Server deployment modes running on VMware virtual machines or physical servers:
- Standalone SQL Server – databases running on a single server
- SQL Server Failover Cluster – SQL Server instances running on Windows Server Failover Clusters using shared storage
- SQL Server Always On – primary and secondary databases in an Availability Group configured across clusters of servers
Do I need to deploy any additional agents to protect SQL Server standalone, Failover Cluster, or AlwaysOn configurations?
IBM Spectrum Copy Data Management for SQL Server is delivered as a VMware OVA that is easily deployed
on demand in a matter of minutes. Once deployed, you simply register your SQL Servers with appropriate
credentials and then let IBM Spectrum Copy Data Management discover the rest. IBM Spectrum Copy Data
Management eliminates the complexity of manually deploying and maintaining application agents on SQL
Servers. A lightweight application-aware agent is automatically injected and updated to the required SQL
Servers on demand.
IBM Spectrum Copy Data Management creates and uses in-place copies, so no data is physically moved.
IBM Spectrum Copy Data Management generated application-consistent copies are both space and time
efficient. With the same ease, a DBA can automate the creation of remote copies for disaster recovery use
cases.
Does the SQL Server solution leverage the storage consistency group feature?
The storage consistency group feature allows storage administrators to take a snapshot of database
applications where the data is spread across multiple volumes to maintain consistency across all volumes.
In a typical SQL Server database, the data is spread across different volumes for better IO performance and availability. On physical servers, IBM Spectrum Copy Data Management SQL Server application-consistent copy creation ensures that appropriate consistency groups are automatically created to maintain consistency across all related volumes. IBM Spectrum Copy Data Management SQL Server backup on a VM relies on VMware snapshots and doesn't need to leverage the storage consistency group feature.
What level of application selection granularity is supported for SQL Server Backup jobs?
IBM Spectrum Copy Data Management SQL Server backup job definitions support copy selection at the following levels:
- One or more SQL Server instances for Standalone SQL Server/Failover Cluster
- One or more Availability Groups for AlwaysOn
- One or more databases for Standalone/Failover Cluster and SQL Server AlwaysOn
Can I restore a database to the original instance and overwrite the existing database in a single step?
Yes. Use the Overwrite existing database option in the Application restore job definition.
Will IBM Spectrum Copy Data Management auto discover newly added SQL Server instances in a Standalone SQL Server and automatically protect them?
No. IBM Spectrum Copy Data Management will auto discover and present newly added SQL Server instances in the Backup job, but you must explicitly select newly added SQL Server instances for protection. Discovery of new SQL Server instances occurs as part of a regularly scheduled Application inventory job.
Will IBM Spectrum Copy Data Management auto discover newly added databases and automatically protect them?
Yes. If you select protection at the Availability Group level, IBM Spectrum Copy Data Management will auto discover newly added databases in the selected Availability Group and protect them automatically during the next job run. Discovery of new SQL Server instances and databases occurs as part of a regularly scheduled Application Inventory job.
Does IBM Spectrum Copy Data Management backup primary databases or secondary databases in
SQL AlwaysOn?
IBM Spectrum Copy Data Management backs up only primary databases across the SQL AlwaysOn cluster.
Do SQL Server databases and logs need to be on supported storage for IBM Spectrum Copy Data Management?
SQL Server running on physical servers requires the database and logs to be on supported storage. IBM Spectrum Copy Data Management also supports protection of SQL Server running on a VMware VM configured on any storage that can be protected to supported storage systems via VM Replication.
Does IBM Spectrum Copy Data Management perform full backups of databases?
IBM Spectrum Copy Data Management backups of SQL Server databases are always VSS COPY type
backups.
Does IBM Spectrum Copy Data Management support Transaction log backup and log management?
Every SQL Server database has a transaction log that records all transactions and the database modifications made by each transaction. The transaction log must be truncated on a regular basis to keep it from filling up. IBM Spectrum Copy Data Management provides an option to back up transaction log files, and supports log backup at a specified frequency. You can select one or more databases for log backup in a single backup job definition. The log destination can be specified as a single universal mount point or as a separate destination mount point for each database. The specified log backup destination path must already exist and must reside on a supported storage system. If multiple databases are selected for backup, each of the servers hosting the databases must have its destination directory set individually.
Does IBM Spectrum Copy Data Management support truncation of database logs?
Yes. IBM Spectrum Copy Data Management automatically truncates logs after log backups of the databases that it backs up. If database logs are not backed up with IBM Spectrum Copy Data Management, the logs are not truncated by IBM Spectrum Copy Data Management and must be managed separately.
I am backing up transaction logs with IBM Spectrum Copy Data Management, but I don’t want it to
truncate logs. Can I control this behavior?
No. This will be enhanced in a future release of IBM Spectrum Copy Data Management.
Does IBM Spectrum Copy Data Management support pre/post scripts for Application Database
Backup jobs?
Yes, IBM Spectrum Copy Data Management supports job-level pre/post scripts and job-level pre/post
Snapshot scripts to enable further customization.
Job-level prescripts and postscripts are scripts that can be run before or after a job runs.
Snapshot prescripts and postscripts are scripts that can be run before or after a storage-based snapshot subpolicy runs. (Refer to the pre/post script topic in this guide for details.)
Data masking
Does IBM Spectrum Copy Data Management SQL Server solution offer Data Masking integration
with third party masking tools?
A concern for security officers in any organization is keeping confidential information locked down, even internally. Data masking is used to hide confidential data, by replacing it with fictitious data, when making data copies for DevTest or other use cases. It prevents leakage of sensitive data in non-production databases via static data masking (SDM), and of production data in transit via dynamic data masking (DDM).
The following Data Masking integration features will be available in a future release.
IBM Spectrum Copy Data Management will include integrated data masking workflows with the ability to
leverage third party masking tools. Traditionally, data masking is difficult, slow, and storage-consuming, but
with IBM Spectrum Copy Data Management it will be easily integrated into the SQL Server backup workflow,
allowing creation of masked copies at a specified frequency. Masked copies are automatically marked in the
catalog. Access to secure copies is managed by the administrator by leveraging the application-level RBAC.
In addition, the SQL Server solution will enable you to leverage the Dynamic Data Masking feature of SQL Server 2016.
Is a sample masking script provided with the IBM Spectrum Copy Data Management SQL Server solution?
A sample data masking script can be provided upon request; it demonstrates data masking integration with the built-in Dynamic Data Masking feature of SQL Server 2016. This feature will be available in a future release.
Traditional cloning approaches present challenges:
- Use of common cloning tools or custom scripts creates full copies, requiring large amounts of additional storage
- Creating full copies is slow
IBM Spectrum Copy Data Management solves these challenges with simple, automated end-to-end clone lifecycle management:
- Self-service access to secure clones by the QA team eliminates administrative and process bottlenecks
- IBM Spectrum Copy Data Management enables rapid database clones that are both time- and space-efficient
- IBM Spectrum Copy Data Management promotes standardization and governance through a centralized catalog, granular RBAC, and automated policies
Your SQL Server clones can be utilized and consumed instantly, for whatever your use case, through IBM Spectrum Copy Data Management Instant Disk Restore jobs. IBM Spectrum Copy Data Management catalogs and tracks all cloned instances. Instant Disk Restore can leverage the iSCSI or FC protocol to provide immediate mount of LUNs without transferring data.
Can I create an instant clone of a SQL Server database for DevOps and Business Analytics?
IBM Spectrum Copy Data Management provides automated workflows to create instant clones of SQL Server databases regardless of size.
- Instantly create database clones from any of the copies in the IBM Spectrum Copy Data Management inventory, at local or remote locations, to accelerate Business Analytics.
- Enable and accelerate DevOps by providing Instant Disk Restore to secure clones of databases to appropriate users via application-level RBAC.
Then, when your TestDev, DevOps, or research/analytics work is completed, you can save the clone to more
permanent storage or simply tear it down.
What is the granularity of database recovery supported by IBM Spectrum Copy Data Management?
Supported recoveries for standalone or AlwaysOn:
- A database can be recovered to the point of a snapshot, to the original or a new instance (Instant Disk Restore)
- A database can be recovered to a point in time, leveraging backed-up transaction logs (Instant Recovery), to the original or a new instance
- A database can be recovered under a new name to the original or a new instance
- You can select one or more databases in a single restore job definition
- Each selected database in a restore job definition can have a separate destination specification
- Databases are always recovered in online mode
- A database can be recovered from a standalone instance to an AlwaysOn Availability Group
- A database from an AlwaysOn Availability Group can be recovered to a standalone instance
- A database running on an older version can be recovered to an instance running the same or a newer version
Does IBM Spectrum Copy Data Management support recovering a database in online mode?
Yes, Instant Disk Restore or Instant Database Restore recovers databases in online mode.
Does IBM Spectrum Copy Data Management support recovering databases in an offline state
(norecovery)?
No. This will be enhanced in a future release of the product.
Does IBM Spectrum Copy Data Management support recovering databases in a standby/read-only
state (standby)?
Yes, IBM Spectrum Copy Data Management provides an application option to control this behavior.
Roll back uncommitted transactions and leave the database ready to use
- Select this option to restore the database to an online state. If selected, additional transaction logs cannot be restored. If deselected, uncommitted transactions are not rolled back, leaving the database non-operational. Additional transaction logs can then be restored.
Does IBM Spectrum Copy Data Management support recovering a database with Restricted Access?
No. This will be enhanced in a future release of the product.
Does IBM Spectrum Copy Data Management support restoring only logs so that they can be applied
to a standby database?
No, not from the IBM Spectrum Copy Data Management application restore workflow. A user can easily
access the transaction log backup location from the SQL server and perform this outside of the product.
Where are SQL-specific and IBM Spectrum Copy Data Management-specific logs located if errors occur?
All required logs (IBM Spectrum Copy Data Management and application) are collected as part of the current
log collection functionality. There should be no need to manually obtain SQL application logs from within SQL
Server VMs.
Does IBM Spectrum Copy Data Management use existing hardware providers for physical SQL
backups?
No. IBM Spectrum Copy Data Management automatically deploys its own VSS HW provider service for SQL
Server running on physical servers. It is automatically started on demand during SQL Server Backup jobs. At
the completion of the backup job, the VSS HW provider service is automatically stopped.
When IBM Spectrum Copy Data Management protects a SQL VM with pRDM, can it restore a
database back to the original node as a pRDM?
Currently, a SQL VM with pRDM must be registered as Physical in IBM Spectrum Copy Data Management. Hence, restoration of that data obeys the Physical restore restrictions, which means it can only be restored back to the original host via iSCSI. If the target host being restored to was registered as Virtual, the database would be restored as a pRDM.
This functionality will be improved in a future release.
Why must I choose a proxy node when performing a restore to a SQL Failover cluster?
Windows requires disk signatures to be unique, so when you attach a disk whose signature matches one that is already attached, Windows keeps the disk in "offline" mode and doesn't read its partition table or mount its volumes. To prevent disk signature collisions during Instant Database Restore, IBM Spectrum Copy Data Management leverages Windows proxy servers to temporarily mount disks from snapshots, generate a new signature, and then mount them to the original server.
Any Windows node with iSCSI or Fibre Channel access to the storage can be selected as a proxy server,
provided that the node is not part of the original cluster. It is recommended to select a standalone virtual or
physical Windows node as a proxy server.
Self Service
Does IBM Spectrum Copy Data Management support RBAC? What is the level of granularity
supported for SQL Servers?
Role-based access control allows you to set the resources and permissions available to IBM Spectrum Copy
Data Management accounts. Through role-based access control you can tailor IBM Spectrum Copy Data
Management for individual users, giving them access to the features and providers they need.
Using IBM Spectrum Copy Data Management RBAC functionality, you can delegate roles to enable and accelerate DevOps, providing Instant Access to secure clones of databases to appropriate users via application-level RBAC. Then, when your TestDev, DevOps, or research/analytics work is completed, you can save the clone to more permanent storage or simply tear it down.
Can developers access IBM Spectrum Copy Data Management operations using command line or
APIs?
A rich set of REST APIs is provided to enable full access to IBM Spectrum Copy Data Management functionality for further customization.
System Requirements
What SQL Server versions are supported, and on what Windows OS? What are the supported storage systems for Microsoft SQL Server?
The Microsoft SQL Server support matrix is summarized below.
Supported SQL Server versions:
- SQL 2016 on Windows Server 2016
- SQL 2017 on Windows Server 2019
- SQL 2019 on Windows Server 2019
Standalone, SQL Server Failover Clustering, and AlwaysOn configurations are supported for SQL 2012, SQL 2014, SQL 2016,
Supported storage systems:
- Pure Storage running Pure APIs 1.5 and above:
  - FlashArray//c
  - FlashArray//m
  - FlashArray//x
  - FlashArray 4xx series
- DellEMC Unity:
  - EMC Unity 300, 400, 500, 600 (All-Flash and Hybrid Flash)
  - EMC UnityVSA
  - EMC VNXe 1600 running version 3.1.3+
  - EMC VNXe 3200 running version 3.1.1+
[2] Note that Clustered Shared Volumes (CSV) are not supported.
[3] See System Requirements for supported VMware vSphere versions.
[4] Select the Physical provider type when registering the provider in IBM Spectrum Copy Data Management.
Recoveries require direct access to storage. Note that NetApp ONTAP and DellEMC storage systems are not
supported.
[5] vRDMs are supported through VM Replication jobs.
[6] Independent disks are supported only if the underlying storage utilizes supported storage systems.
Register the SQL resource as Physical when configuring the provider in IBM Spectrum Copy Data
Management. Note that independent disks do not allow snapshots to be taken in VMware virtual scenarios.
The above listed IBM Spectrum Accelerate, IBM Spectrum Virtualize, and Pure Storage FlashArrays are
supported for physical registration.
[7] When registering physical SQL servers, it is recommended to register via the DNS server. The IBM Spectrum Copy Data Management appliance must be resolvable and routable by the DNS server; the physical SQL server will communicate back to IBM Spectrum Copy Data Management through DNS.
[8] Recovery for target servers registered as Physical provider types requires direct access to storage.
[9] Any Windows node with iSCSI or Fibre Channel access to the storage can be selected as a proxy server,
provided that the node is not part of the original cluster. It is recommended to select a standalone virtual or
physical Windows node as a proxy server.
[10] For physical SQL servers you must allow outgoing connections to port 8443 on the IBM Spectrum Copy
Data Management appliance from the SQL server.
[11] Dynamic disks are not supported.
[12] HPE Nimble Storage must be version 5.2 or later to support iSCSI and Fibre Channel.
[13] The make permanent option is not available for HPE Nimble Storage using physical disks.
[14] SQL PIT (point-in-time) recovery is not supported with HPE Nimble Storage.
SQL servers residing on any storage can also be protected to supported storage systems through
VM Replication jobs.
For both physical and virtual SQL environments, point-in-time recoveries beyond the last snapshot taken are
incompatible with workflows utilizing more than one Site. In a virtual environment, the SQL server, associated
vCenter, and storage must be registered to the same site. In a physical environment, the SQL server and
storage must be registered to the same site.
[15] On IBM Systems Storage, condense is run during maintenance jobs.
What are the environment and permission requirements for the SQL Server solution?
Note the following Microsoft environmental requirements:
- Windows Remote Shell (WinRM) must be enabled
- The SQL user must have the public and sysadmin SQL permissions enabled
- The user identity must have sufficient rights to install and start the IBM Spectrum Copy Data Management Tools Service on the virtual machine node. This includes "Log on as a service" rights. For more information about the "Log on as a service" right, see https://technet.microsoft.com/en-us/library/cc794944.aspx.
- The fully qualified domain name must be resolvable and routable from the IBM Spectrum Copy Data Management appliance
- The virtual machine node DNS name must be resolvable and routable from the IBM Spectrum Copy Data Management appliance
- The VMGuest version must be current
- VMware Tools must be installed on the virtual machine node
Does IBM Spectrum Copy Data Management support SQL Server 2016 running on Windows 2016?
Yes. See the matrix above.
Does IBM Spectrum Copy Data Management support SQL Server configured as Physical RDMs, or
Independent disks?
Yes. See footnotes 4 and 6 in the matrix above.
Does IBM Spectrum Copy Data Management support SQL Server configured as Virtual RDMs?
Yes. For limitations see footnote 5 in the matrix above.
Does IBM Spectrum Copy Data Management support SQL Server running on physical machine(s)?
Yes. See the matrix above.
Are there additional requirements for SQL support in IBM Spectrum Copy Data Management?
- The maximum restore file path must be less than 256 characters, which is a SQL requirement. If the original path exceeds this length, consider using a customized restore file path to reduce the length.
- The metadata that can be restored is subject to VSS and SQL restore capabilities.
Privileges
On the SQL server, the system login credential must have public and sysadmin permissions enabled, plus
permission to access cluster resources in a SQL AlwaysOn environment. If one user account is used for all
SQL functions, a Windows login must be enabled for the SQL server, with public and sysadmin permissions
enabled.
Every SQL instance can use a specific user account to access the resources of that particular SQL instance.
Acronyms
A
AD
Active Directory
B
B
Bytes
C
CBT
Changed Block Tracking
G
GB
Gigabytes
H
HTTP
Hypertext Transfer Protocol
K
KB
Kilobytes
O
OSSV
Open Systems SnapVault
OVF
Open Virtualization Format
P
PDF
Portable Document Format
R
RBAC
Role-based access control
RDM
Raw Device Mapping
RDN
Relative Distinguished Name
RRP
Rapid Return to Production
S
SMTP
Simple Mail Transfer Protocol
SNMP
Simple Network Management Protocol
SSL
Secure Sockets Layer
SVC
SAN Volume Controller
SVM
Storage Virtual Machine
T
TB
Terabytes
U
UUID
Universally Unique Identifier
V
VADP
VMware vStorage API for Data Protection
VASA
vSphere API for Storage Awareness
VM
Virtual Machine
VMDK
Virtual Machine Disk
VMFS
Virtual Machine File System
VVOL
Virtual Volume
Terminology
The following terminology changes are implemented across the IBM Spectrum Copy Data Management user interface as of version 2.2.6.

Pre 2.2.6                  2.2.6 and later
Catalog                    Inventory
Copy                       Backup
Hold (Job action)          Hold Schedule
Instant Access             Instant Disk Restore
Instant Virtualization     Instant VM Restore, Instant DB Restore
Pending (Job status)       Resource Active
Policy                     Job, job definition
Release (Job action)       Release Schedule
SLA Policy                 Storage Workflow
Use                        Restore
VM Copy                    VM Replication

A
account
The definition associating a resource pool with a role for a user. A user account has access to the resources and features defined in the resource pool, as well as the permissions to interact with those resources and features as defined in the role.
appliance
The virtual machine containing the IBM Spectrum Copy Data Management application and Inventory, accessible through the VMware vSphere client. The appliance is also referred to as the virtual appliance or virtual machine.
B
Backup job
A job that leverages Copy Data Management technology for replicating and intelligently reusing snapshots, vaults, and mirrors.
C
clone mode
A Restore job mode where virtual machines are created in a fenced network for use cases requiring permanent or long-running copies.
Copy Data Management
The ability to understand where data copies or backups are located in your IT environment, and leverage the most appropriate backup for any given use case.
I
Inventory job
A job for gathering and recording objects and object metadata about specified resources.
J
Job definition
A defined set of tasks and rules that govern the execution of a job. A job definition can be applied to one or more jobs. IBM Spectrum Copy Data Management includes support for the following job types: Inventory jobs, Backup jobs, Restore jobs, Report jobs, and Script jobs.
L
LDAP provider
A server that accesses centralized data including user and authentication information.
M
management interface
The browser-based portal through which the IBM Spectrum Copy Data Management application is viewed and managed.
mirror
An efficient block-level data replication technique that produces exact replicas for disaster recovery.
N
NetApp provider
A NetApp storage system such as a FAS.
O
object
A resource that has metadata about it stored in the Inventory. Examples of objects are files, directories, qtrees, and volumes.
P
production mode
A Restore job mode which restores a resource to the production environment from secondary storage or a remote disaster recovery site.
Protection Compliance Reports
A category of reports that help ensure your data is protected through user-defined recovery point objective parameters.
provider
An object repository, physical or virtual, from which object metadata is retrieved.
Provider Browser
A feature enabling you to view a list of registered resources and their underlying resources. The Provider Browser scans the actual resource and returns native properties.
R
RBAC
A feature that allows an administrator to set the resources and permissions available to IBM Spectrum Copy Data Management user accounts.
register
The process that allows IBM Spectrum Copy Data Management to recognize a resource.
609
User's Guide Terminology
610
User's Guide Terminology
T
tenant
A tenant is a grouping of resources and users that are administered by a tenant administrator. An IBM Spectrum Copy Data Management administrator creates tenants, assigns resources to be made available to the tenants, and creates the tenant administrator. The tenant administrator can then further control and restrict resources for users within the tenant.
W
workflow
A grouping of subpolicies that are sequentially joined together for creating Backup and Restore jobs. A workflow is defined visually by the storage administrator in the management interface.
Terminology Changes
The following terminology changes are implemented across the IBM Spectrum Copy Data Management user
interface as of version 2.2.6.
RELATED TOPICS:
- Documentation Roadmap on page 554
Trademarks
Commonly Used Company and Product Names
Companies and products listed here may be used in the documentation:
Adobe®
PDF
DellEMC®
EMC®, EMC2®, VNX®, VNXe®
Google®
Android™, Chrome®
IBM®
FlashCopy®, FlashSystem™, IBM Spectrum™, IBM Spectrum Accelerate™, IBM Spectrum Protect
Snapshot™, IBM Spectrum Virtualize™, Storwize®, XIV®
Microsoft®
Active Directory®, Excel®, Exchange, Internet Explorer®, Internet Information Services (IIS), Hyper-V®,
iSCSI Initiator, SharePoint®, SourceSafe®, SQL Server®, Vista®, Visual (VSS), Windows®, Windows
PowerShell®, Windows Server®, Word®
Mozilla®
Firefox®
NetApp ONTAP®
Data ONTAP®, FilerView®, FlexVol®, FlexClone®, Infinite Volume, MultiStore®, NearStore®, NOW®,
OnCommand™, OnCommand Unified Manager, OSSV, RAID-DP®, Snap Creator™, SnapManager®,
SnapMirror®, Snapshot™, SnapProtect®, SnapVault®, SVM, vFiler®, WAFL®
OpenLDAP™
Oracle®
Java®, JavaScript™
Pure Storage™
VMware®
ESX server, ESXi server, vCenter, vSphere, VMware Consolidated Backup, VMDK, vMotion
l © 2012 Google Inc. All rights reserved. Google and Chrome are registered trademarks of Google Inc.
Android is a trademark of Google Inc.
l NetApp, the NetApp logo, Go further, faster, Data ONTAP, FilerView, FlexClone, FlexVol, NearStore,
RAID-DP, Snapshot, and SnapVault are trademarks or registered trademarks of NetApp, Inc. in the
United States and/or other countries. NetApp provides no representations or warranties regarding the
accuracy, reliability, or serviceability of any information or recommendations provided in this publication,
or with respect to any results that may be obtained by the use of the information or observance of any
recommendations provided herein. The information in this document is distributed AS IS, and the use of
this information or the implementation of any recommendations or techniques herein is a customer's
responsibility and depends on the customer's ability to evaluate and integrate them into the customer's
operational environment. This document and the information contained herein may be used solely in
connection with the NetApp products discussed in this document.
l Windows and SharePoint are registered trademarks of Microsoft Corporation.
l VMware is a registered trademark of VMware, Inc. Copyright © 2011 VMware, Inc. All rights reserved.
This product is protected by U.S. and international copyright and intellectual property laws. VMware
products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a
registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.
l All other company and product names used herein may be the trademarks of their respective owners.
Index
B
Best Practice 163, 165, 174, 194, 261, 265, 273, 280, 328, 374, 376

D
Dashboard Overview 71

E
Edit
    Customized Report 373
    Schedule 165
Edit a Provider 103
Edit a Resource Pool 112
Edit a Role 115
Edit an Account 118
EMC Unity 74, 79
EMC VNX Catalog Data Policy Requirements 24
Encryption 15
End IV 334, 338
Export Search Results 365

F
Fibre Channel 263, 267, 277, 283, 287, 329, 333, 337
File Analytics Reports 397
File Search 355

G
Generate a Report 370
Generated Report 374
Glossary 607

H
Help 554-555
Horizontal Scale Out 546
HPE Nimble Storage 24

I
IBM Storage and IBM FCM Catalog Data Policy Requirements 24
Identification and Authentication 14
iGroup 308, 327
Internationalization 26
iSCSI 308, 327

J
Job History 172
Job Log Options 530
Job Logs 530
Job Monitor 172
Job Session 170, 172

K
Knowledge Base articles
    1010992 27
    1013246 27

L
LDAP
    Authentication 14, 19, 561
    Resources 100
    Server 74, 79, 91, 100, 568
    User Name Syntax 561
Legal 613
Load Balancing 256, 542-543
Load Sharing 256, 542-543

M
Marketplace 539
Modify Network Settings 542
Modifying Job Log Options 530
Monitor a Job Session 172

N
Native User 14
NetApp 7-Mode 74, 79
NetApp Cluster Mode 74, 79
NetApp Knowledge Base articles
    1010992 27
    1013246 27
NetApp Protection Usage Report 426
NetApp RPO Compliance Report 429
NetApp Storage Catalog Data Policy Requirements 25
NetApp Storage System 74, 79
NetApp Volumes 100
Network Settings 542

O
Object Search 355