Release 11.0
NetBackup™ for Cloud Object Store Administrator's
Guide
Last updated: 2025-03-06
Legal Notice
Copyright © 2025 Cohesity, Inc. All rights reserved.
Cohesity, Veritas, the Cohesity Logo, Veritas Logo, Veritas Alta, Cohesity Alta, and NetBackup
are trademarks or registered trademarks of Cohesity, Inc. or its affiliates in the U.S. and other
countries. Other names may be trademarks of their respective owners.
This product may contain third-party software for which Cohesity is required to provide
attribution to the third party (“Third-party Programs”). Some of the Third-party Programs are
available under open source or free software licenses. The License Agreement accompanying
the Software does not alter any rights or obligations you may have under those open source
or free software licenses. Refer to the Third-party Legal Notices document accompanying this
Cohesity product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Cohesity, Inc. and
its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Cohesity as on premises or
hosted services. Any use, modification, reproduction release, performance, display or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
Cohesity, Inc.
2625 Augustine Drive
Santa Clara, CA 95054
http://www.veritas.com
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Cohesity account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Japan [email protected]
Documentation
Make sure that you have the current version of the documentation. Each document displays
the date of the last update on page 2. The latest documentation is available on the Cohesity
website:
https://sort.veritas.com/documents
Documentation feedback
Your feedback is important to us. Suggest improvements or report errors or omissions to the
documentation. Include the document title, document version, chapter title, and section title
of the text on which you are reporting. Send feedback to:
You can also see documentation information or ask a question on the Cohesity community
site:
http://www.veritas.com/community/
https://sort.veritas.com/data/support/SORT_Data_Sheet.pdf
Note: Cloud vendors may levy substantial charges for data egress, that is, for
moving data out of their network. Check your cloud provider's data-out pricing
before configuring a backup policy that transfers data out of one cloud to another
cloud region or to an on-premises data center.
NetBackup can protect Azure Blob Storage, and a wide variety of S3 API-compatible
object stores like AWS S3, Google Cloud Storage (GCS), Hitachi Cloud Platform
object stores, and so on. For a complete list of compatible object stores, refer to
the NetBackup Hardware Compatibility List (HCL).
The protected objects in Azure Data Lake are referred to as files and directories,
even though the underlying object is of type blob.
Features of NetBackup Cloud object store workload support
■ Integration with NetBackup's role-based access control (RBAC): The
NetBackup Web UI provides the Default cloud object store Administrator
RBAC role to control which NetBackup users can manage Cloud object
store operations in NetBackup. You do not need to be a NetBackup
administrator to manage Cloud object stores.
■ Management of Cloud object store accounts: You can configure a single
NetBackup primary server for multiple Cloud object store accounts, across
different cloud vendors as required.
■ Intelligent selection of cloud objects: Within a single policy, NetBackup
provides flexibility to configure different queries for different buckets or
containers. Some buckets or containers can be configured to back up all
the objects in them. You can also configure some buckets and containers
with intelligent queries to identify objects based on criteria such as prefixes,
object names, and tags.
■ Fast and optimized backups: In addition to full backups, NetBackup also
supports different types of incremental schedules for faster backups. The
Accelerator feature is also supported for Cloud object store policies.
■ Restore options: NetBackup restores the object store data along with its
metadata, properties, tags, ACLs, and object lock properties.
■ Support for malware scan before recovery: You can run a malware scan
of the selected files or folders for recovery as part of the recovery flow from
the Web UI, and decide the recovery actions based on the malware scan
results.
■ Scalability support for the backup host: NetBackup Cloud object store
protection supports configuring the NetBackup Snapshot Manager as a
scalable backup host for cloud deployments, along with the media server.
If you have an existing NetBackup Snapshot Manager deployment in your
environment, you can use it as a backup host for Cloud object store policies.
■ Object lock: This feature lets you retain the original object lock properties
and also provides an option to customize the object lock properties. If you
use object lock properties on the restored objects, you cannot delete those
objects until the retention period is over, or the legal holds are removed.
You can use the object lock and retention properties without any
configuration during policy creation and backup.
■ Quick object change scan: This feature significantly speeds up NetBackup
Accelerator, resulting in faster backups. For more information, see:
https://www.veritas.com/content/support/en_US/article.100073853.html
https://www.veritas.com/content/support/en_US/article.100073853.html
Chapter 2
Managing Cloud object
store assets
This chapter includes the following topics:
Step 1: Verify the operating system and platform compatibility. See the
NetBackup Compatibility Lists.
Step 3: Configure the required permissions and credentials. See “Prerequisites
for adding Cloud object store accounts” on page 15.
Step 4: Identify the buckets and containers that you want to protect. Make a
list of the buckets and containers that you want to protect with NetBackup,
and include them in the Cloud object store accounts that you create in Step 5.
Step 5: Create Cloud object store accounts. See “Adding Cloud object store
accounts” on page 26.
increases the backup speed. You can configure this setting while creating policies
for Cloud object store. See “Policy attributes” on page 54.
You must configure a temporary staging location to use this feature. See “Configure
a temporary staging location” on page 17.
■ If you plan to use a proxy for communication with cloud endpoints, gather the
required details of the proxy server.
■ Get the Cloud account credentials, and any additional required parameters, as
per the authentication type. These credential details should have the required
permissions recommended in NetBackup documentation.
See “Permissions required for Amazon S3 cloud provider user” on page 21.
See “Permissions required for Azure blob storage” on page 22.
See “Permissions required for GCP” on page 23.
■ Make sure that the required outbound ports are open, and that communication
is configured from the backup host or scale-out server to the cloud provider
endpoint using REST API calls.
■ On the backup host, S3 or Azure storage URL endpoints use the HTTPS
default port 443. For a private cloud provider, this port can be any custom
port that is configured in the private cloud storage.
■ If you use a proxy server to connect to the cloud storage, you need to allow
that port. You can provide the proxy server-related details in NetBackup,
while creating a Cloud object store account.
■ The certificate revocation status check option uses the OCSP protocol, which
typically uses HTTP port 80. Ensure that the OCSP URL is reachable from
the backup host.
Configuring buffer size for backups
Note: The configurations described in this section are not required if your backup
host is NetBackup version 11.0 or higher, and your policy does not have Dynamic
multi-streaming enabled. For policies with Dynamic multi-streaming, see the section
Configuring a temporary staging location.
By default, NetBackup creates buffers of 4 MB. You can use this default buffer size
if most of the objects in your buckets/containers are smaller than 4 MB. If your
buckets/containers have a large number of objects greater than 4 MB, you can
increase the buffer size up to 64 MB.
To configure buffer size:
1 Open the /usr/openv/netbackup/bp.conf file on the backup host.
You can identify the backup host from the corresponding policy under the Cloud
objects tab.
2 Enter a value for this parameter in MB: COS_SHM_BUFFER_SIZE =
For example: COS_SHM_BUFFER_SIZE = 16
You can configure the number of buffers, from 4 to 16 buffers for each stream.
Increasing the number of buffers results in faster backups, but the memory usage
on the backup host may increase.
To configure the number of buffers:
1 Open the /usr/openv/netbackup/bp.conf file on the backup host.
2 Enter a value for this parameter: COS_NO_SHM_BUFFER =
For example: COS_NO_SHM_BUFFER = 12
Note that these settings are applied to all backup jobs that use that backup host.
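For example, a backup host tuned for buckets that contain many objects larger
than 4 MB might carry both entries in /usr/openv/netbackup/bp.conf. The values
below are illustrative only, not recommendations:
COS_SHM_BUFFER_SIZE = 16
COS_NO_SHM_BUFFER = 12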
Configure a temporary staging location
■ For Flex:
■ The default temporary staging location for a media server,
/usr/openv/netbackup/db/cos_tmp_staging_path, is symlinked to the
mounted location
/mnt/nbstage/usr/openv/netbackup/db/cos_tmp_staging_path or to the
mounted location
/mnt/nbdata/usr/openv/netbackup/db/cos_tmp_staging_path.
For backup hosts other than Flex, it is recommended to use a different path than
the default path, as the default may not comply with the storage requirements. It
is also recommended to create a different mount point with enough space. Backup
performance is affected if the minimum required space is not available.
Note: Ensure that the automount configuration entry is updated on the backup host,
if you are using a different path other than the default.
■ The device used for the temporary staging location must have high read and
write throughput. It is recommended to use an SSD. If you still experience slow
disk I/O throughput, mount RAM as a disk, and configure as the temporary
staging location.
Here is an example command to mount RAM as disk:
sudo mount -t tmpfs -o size=64G tmpfs /mnt/tmp
For 64 GB and 128-GB RAM setups, it is recommend mounting half of RAM as
a disk. For a 32-GB RAM setup, it is not recommended to mount RAM as a disk,
as low backup performance is expected on large size objects.
■ Ensure that there is enough free space at the location.
■ File path should have only ASCII Characters.
■ Ensure that the NetBackup service user is the owner of the storage location and
has 0700 permissions. Writing fails if the permissions are set to anything other
than 0700. (See the example commands after this list.)
■ The temporary staging location is cleaned up once a backup completes.
■ Each backup host must have a different mount point for the temporary staging
location.
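The following is a minimal sketch of preparing a staging location on a non-Flex
backup host. The device name, mount point, filesystem type, and the service user
name nbsvcusr are placeholders; substitute the values for your environment:
# Create a dedicated mount point and mount the staging filesystem
mkdir -p /mnt/cos_staging
mount /dev/sdb1 /mnt/cos_staging
# Persist the mount across reboots (illustrative /etc/fstab entry)
echo '/dev/sdb1 /mnt/cos_staging xfs defaults 0 2' >> /etc/fstab
# The NetBackup service user must own the location, with 0700 permissions
chown nbsvcusr:nbsvcusr /mnt/cos_staging
chmod 0700 /mnt/cos_staging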
Optionally, you can set the following parameters in the bp.conf file in each backup
host, to configure the temporary staging location.
COSP_STAGING_LOC_WATER_MARK_IN_PERCENTAGE = 70
COSP_SPACE_MANAGEMENT_TIMEOUT_IN_MIN = 20
Permissions required for Azure blob storage
{
"properties": {
"roleName": "cosp_minimal",
"description": "minimal permission required for cos
protection.",
"assignableScopes": [
"/subscriptions/<Subsfription_ID>"
],
"permissions": [
{
"actions": [
"Microsoft.Storage/storageAccounts/blobServices/read",
"Microsoft.Storage/storageAccounts/blobServices/containers/read",
"Microsoft.Storage/storageAccounts/blobServices/containers/write",
"Microsoft.ApiManagement/service/*",
"Microsoft.Authorization/*/read",
"Microsoft.Resources/subscriptions/resourceGroups/read",
"Microsoft.Storage/storageAccounts/read"
],
"notActions": [],
"dataActions": [
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/filter/action",
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read",
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write",
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write",
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/immutableStorage/runAsSuperUser/action",
],
"notDataActions": []
}
]
}
}
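As a sketch, if you save the JSON above to a file (the file name cosp_minimal.json
here is an assumption), you can create the custom role with the Azure CLI:
az role definition create --role-definition @cosp_minimal.json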
Permissions required for GCP
storage.bucketOperations.cancel
storage.bucketOperations.get
storage.bucketOperations.list
storage.buckets.create
storage.buckets.createTagBinding
storage.buckets.delete
storage.buckets.deleteTagBinding
storage.buckets.enableObjectRetention
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.getObjectInsights
storage.buckets.list
storage.buckets.listEffectiveTags
storage.buckets.listTagBindings
storage.buckets.restore
storage.buckets.setIamPolicy
storage.buckets.update
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.getIamPolicy
storage.objects.list
storage.objects.restore
storage.objects.setIamPolicy
storage.objects.update
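One way to grant this permission set is through a custom IAM role created with
the gcloud CLI. This is a sketch; the role ID, project ID, and title are placeholders,
and the permission list is abbreviated here. Pass the complete list above in practice:
gcloud iam roles create cosMinimal --project=<PROJECT_ID> \
    --title="COS minimal" \
    --permissions=storage.buckets.get,storage.buckets.list,storage.objects.create,storage.objects.get,storage.objects.list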
Limitations and considerations
■ The following prefix and object name formats are not supported:
prefix = /
prefix = /folder1
prefix = /object1
prefix = folder1//
object = /obj1
■ NetBackup does not back up an object with its name in the format: <name>/
■ Upgrade only the primary and media servers: Backup jobs with Dynamic
multi-streaming fail. You must update the backup host used in the policy
to the current version.
■ Upgrade only the primary server, while the backup host and media server
remain at an older version: NetBackup protection works.
■ If you upgrade to the latest NetBackup version from version 10.1 or 10.2, the
following limitations occur:
■ You can create Cloud object store accounts with backup hosts or scale-out
servers of version 10.3 or later only. You cannot update an existing Cloud
object store account that was created on NetBackup 10.3 or later with backup
hosts or scale-out servers older than version 10.3.
■ You can create policies only with backup hosts or scale-out servers of version
10.3 or later. You cannot update an existing policy that is created on
NetBackup 10.3 or later with backup hosts or scale-out servers older than
version 10.3.
■ The following credential types are not supported with backup hosts or
scale-out servers older than version 10.3: For Azure: Service principal and
Managed identity. For AWS: Assume role (EC2).
■ Restores with object lock properties are supported for backup hosts or
scale-out servers of version 10.3 or later only.
■ Backup and restore of buckets with default retention enabled are supported
with backup hosts or scale-out servers of version 10.3 or later only.
■ For Azure, if you update a policy created with NetBackup version prior to 10.3,
with a backup host or scale-out server of version 10.3 or later, the backups fail.
As a workaround, update all the buckets to use the new format of the provided
generated ID with the existing queries. Note that you must create the associated
Cloud object store account in the policy, using NetBackup 10.3 or later, for this
workaround to be successful.
■ Discovery is supported for NetBackup version 10.3 or later, deployed on RHEL.
If no supported host is available, then discovery does not start for any of the
configured Cloud storage accounts. In this case, discovery status is not available,
and you cannot see a bucket list during policy creation. Even if you add the
buckets manually after discovery fails, your backups may fail. Upgrade at least
one supported backup host or scale-out server and create a new policy.
■ If you update a policy that is created on a NetBackup version prior to 10.3,
consider the following after a backup:
■ After backup, you may see two versions of the same buckets, for the old and
new formats. If you want to restore old data, select the bucket in the old
format. For newer backups, select the ones in the newer format.
■ The subsequent backup after the update is a full backup, irrespective of what
is configured in the policy.
■ When you upgrade to 10.3, the first Azure blob accelerated backup takes a
backup of all objects in the selection, even if the configured backup is
incremental. This full backup is required for the change in metadata properties
for the Azure blobs between NetBackup versions 10.2 and 10.3. The subsequent
incremental backups back up only the changed objects.
■ If you use a Cloud object store account created in a version older than 10.3,
NetBackup discovers the buckets with the old format, where:
uniqueName=bucketName.
Note: The Cloud object store account shares the namespace with the Cloud storage
server and MSDP-C LSU name.
For Cloud object store accounts, NetBackup supports a variety of cloud providers
using AWS S3-compatible APIs (for example, Amazon, Google, Hitachi, and so
on), other than Microsoft Azure. For such providers, you need to provide AWS
S3-compatible account access details to add the credentials (that is, the Access
key ID and Secret access key) of the provider.
You need to select a validation host while creating a Cloud object store account. A
validation host is a specific backup host that validates the credentials. The validation
host is used during manual, periodic discovery, and when manual validation is
required for an existing Cloud object store account. The validation host can be
different from the actual backup host specified in the policy.
To add a Cloud object store account:
1 On the left, click Cloud object store under Workloads.
2 In the Cloud object store account tab, click Add. Enter a name for the account
in the Cloud object store name field, and select a provider from the list Select
Cloud object store provider.
3 To select a backup host or scale-out server, click Select host for validation.
The host should be NetBackup 10.1 or later, on an RHEL media server that
supports credential validation, backup, and recovery of the Cloud object stores.
■ To select a backup host, select the Backup host option, and select a host
from the list.
■ To use a scale-out server, select the Scale out server option, select a
server from the list. NetBackup Snapshot Manager servers 10.3 or later,
serve as scale-out servers.
If you have a very large number of buckets, you can also use NetBackup
Snapshot Manager as a backup host with NetBackup 10.3 or later releases.
Select the Scale out server option, and select a NetBackup Snapshot
Manager from the list.
Note: In a Cloud Scale environment, you cannot use the primary server as
a backup host. For more information regarding the Cloud Scale environment,
see NetBackup Kubernetes Administrator's guide.
4 Select a region from the available list of regions. Click Add above the Region
table to add a new region.
See “Adding a new region” on page 35. Region is not available for some Cloud
object store providers.
For GCP, which supports dual-region buckets, select the base region during
account creation. For example, if a dual-region bucket is in the regions
US-CENTRAL1, US-WEST1, select US, as the region during account creation
to list the bucket.
5 In the Access settings page, select a type of access method for the account:
■ Access credentials-In this method, NetBackup uses the Access key ID,
and the secret access key to access and secure the Cloud object store
account. If you select this method, perform the subsequent steps 6 to 10
as required to create the account.
■ IAM role (EC2)-NetBackup retrieves the IAM role name and the credentials
that are associated with the EC2 instance. The selected backup host or
scale-out server must be hosted on the EC2 instance. Make sure the IAM
role associated with the EC2 instance has required permissions to access
the required cloud resources for Cloud object store protection. Make sure
that you select the correct region as per permissions associated with the
EC2 instance while configuring the Cloud object store account with this
option. If you select this option, perform the optional steps 7 and 8 as
required, and then perform steps 9 and 10.
■ Assume role-NetBackup uses the provided key, the secret access key,
and the role ARN to retrieve temporary credentials for the same account
and cross-account. Perform the steps 6 to 10 as required to create the
account.
See “Creating cross-account access in AWS ” on page 31.
■ Assume role (EC2)- NetBackup retrieves the AWS IAM role credentials
that are associated with the selected backup host or scale-out server, hosted
on an EC2 instance. Henceforward, NetBackup assumes the role mentioned
in the Role ARN to access the cloud resources required for Cloud object
store protection.
■ Credentials broker- NetBackup retrieves the credentials to access the
required cloud resources for Cloud object store protection.
■ Service principal- NetBackup uses the tenant ID, client ID, and client
secret associated with the service principal to access the cloud resources
required for Cloud object store protection. Supported by Azure.
■ Managed identity- NetBackup retrieves the Azure AD tokens, using the
managed identity that is associated with the selected backup host or
scale-out server.
6 You can add existing credentials or create new credentials for the account:
■ To select an existing credential for the account, select the Select existing
credentials option, select the required credential from the table, and click
Next.
■ To use Managed identity for Azure, select System assigned or User
assigned. For the user-assigned method, enter the Client ID associated
with the user to access the cloud resources.
■ To add a new credential for the account, select Add new credentials. Enter
a Credential name, Tag, and Description for the new credential.
For cloud providers supported through AWS S3-compatible APIs, use AWS
S3-compatible credentials. Specify the Access key ID and Secret access
key.
For Microsoft Azure cloud provider:
■ For the Access key method, provide the Storage account credentials:
specify the Storage account and the Access key.
■ For the Service principal method, provide Client ID, Tenant ID, and
Secret key.
■ If you use Assume role as the access method, specify the Amazon
Resource Name (ARN) of the role to use for the account, in the Role ARN
field.
7 (Optional) Select Use SSL if you want to use the SSL (Secure Sockets Layer)
protocol for user authentication or data transfer between NetBackup and the
cloud storage provider.
■ Authentication only: Select this option if you want to use SSL only at the
time of authenticating users while they access the cloud storage.
■ Authentication and data transfer: Select this option if you want to use
SSL to authenticate users and transfer the data from NetBackup to the
cloud storage, along with user authentication.
■ Check certificate revocation (IPv6 not supported for this option): For
all cloud providers, NetBackup provides the capability to verify the SSL
certificate's revocation status using the OCSP protocol. The OCSP protocol
sends a validation request to the certificate issuer to get the certificate's
current revocation status. If SSL is enabled and the check certificate
revocation option is enabled, each non-self-signed SSL certificate is verified
with an OCSP request.
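As an illustration, the check resembles the following manual OCSP query with the
openssl utility; the certificate file names and the responder URL are placeholders:
openssl ocsp -issuer issuing_ca.pem -cert endpoint_cert.pem \
    -url http://ocsp.example-ca.com -resp_text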
Note: The FIPS region of the Amazon GovCloud cloud provider (that is
s3-fips-us-gov-west-1.amazonaws.com) supports only secured mode of
communication. Therefore, if you disable the Use SSL option while you
configure Amazon GovCloud cloud storage with the FIPS region, the
configuration fails.
8 (Optional) Select the Use proxy server option to use a proxy server and provide
proxy server settings. Once you select the Use proxy server option, you can
specify the following details:
■ Proxy host–Specify IP address or name of the proxy server.
■ Proxy Port–Specify port number of the proxy server.
■ Proxy type– You can select one of the following proxy types:
■ HTTP
Note: You need to provide the proxy credentials for the HTTP proxy
type.
■ SOCKS
■ SOCKS4
■ SOCKS5
■ SOCKS4A
through the proxy server, without reading the headers or data from the
connection.
Select one of the following authentication types if you use the HTTP proxy
type.
■ None– Authentication is not enabled. A username and password are not
required.
■ Basic–Username and password needed.
■ NTLM–Username and password needed.
Username–The username for the proxy server.
Password–Can be empty. You can use a maximum of 256 characters.
9 Click Next.
10 In the Review page, review the entire configuration of the account, and click
Finish to save the account.
NetBackup creates the Cloud object store account only after validating the
associated credentials with the connection information provided. If you face an
error, update the settings as per the error details. Also, check that the provided
connection information and credentials are correct, and that the backup host or
scale-out server that you assign for validation can connect to the cloud provider
endpoints using the provided information.
5 In the source AWS account, create a policy that allows the IAM role in the
source AWS account to assume the IAM role in the target AWS account.
6 Attach the policy to the source account user, whose access key and secret
access key you use for the assume role.
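A minimal sketch of such a policy follows; the target account ID and role name
are placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<Target_Account_ID>:role/<Target_Role_Name>"
    }
  ]
}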
■ Windows:
<installation-path>\NetBackup\var\global\cloud
■ UNIX:
/usr/openv/var/global/cloud/
Note: In a cluster deployment, the NetBackup database path points to the shared
disk, which is accessible from the active node.
Note: Ensure that you do not change the file permission and ownership of the
cacert.pem file.
To add a CA
You must get a CA certificate from the required cloud provider and update it in the
cacert.pem file. The certificate must be in .PEM format.
1 Open the cacert.pem file.
2 Append the self-signed CA certificate on a new line and at the beginning or
end of the cacert.pem file.
Add the following information block:
Certificate Authority Name
==========================
-----BEGIN CERTIFICATE-----
<Certificate content>
-----END CERTIFICATE-----
==========================
-----BEGIN CERTIFICATE-----
<Certificate content>
-----END CERTIFICATE-----
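For example, appending a provider CA in this block format can look like the
following sketch. The file name provider_ca.pem and the CA name are
placeholders, and the cacert.pem path shown is the UNIX one listed above:
{
  echo "Example Provider CA"
  echo "=========================="
  cat provider_ca.pem
} >> /usr/openv/var/global/cloud/cacert.pem
# Do not change the permissions or ownership of cacert.pem afterward.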
Backup images
This section describes the procedure for scanning the backup images of policy
clients for malware.
To scan the backup images of policy clients for malware
1 On the left, click Detection and reporting > Malware detection.
2 On the Malware detection page, click Scan for malware.
3 In the Search by option, select Backup images.
4 In the search criteria, review and edit the following:
■ Policy name: Only supported policy types are listed.
■ Client name: Displays the clients that have backup images for a supported
policy type.
■ Policy type: Select the policy type as Cloud-Object-Store.
■ Type of backup
■ Copies: If the selected copy does not support instant access, then the
backup image is skipped for the malware scan.
■ Disk pool: MSDP (PureDisk), OST (DataDomain) and AdvancedDisk
storage type disk pools are listed.
■ Disk type: MSDP (PureDisk), OST (DataDomain) and AdvancedDisk disk
types are listed.
■ Malware scan status
Managing Cloud object store assets 38
Scan for malware
■ For the Select the timeframe of backups, verify the date and the time
range or update it.
5 Click Search after you select the search criteria, and ensure that the selected
scan host is active and available.
6 From the Select the backups to scan table select one or more images for
scan.
7 In the Select a malware scanner host pool list, select the appropriate host
pool name.
Note: The scan host from the selected scan host pool must be able to access
the instant access mount that is created on the storage server, which is
configured with the NFS/SMB share type.
Note: Any backup images that fail validation are ignored. Malware scanning
is supported for the backup images that are stored on storage with instant
access capability and for the supported policy types only.
■ In progress
■ Pending
Note: You can cancel the malware scan for one or more in progress and
pending jobs.
Warning: Scan is limited to only 100 images. Adjust the date range and try
again.
10 After the scan is initiated, the Scan status is displayed. The following are the
status fields:
■ Not scanned
■ Not infected
■ Infected
■ Failed
■ Pending
■ In progress
Note: Hover over the status to view the reason for the failed scan.
Any backup images that fail validation are ignored. Malware scanning is
supported for the backup images that are stored on storage with instant
access capability and for the supported policy types only.
For more information on the malware scan status, refer to the NetBackup Security
and Encryption Guide.
Chapter 3
Protecting Cloud object
store assets
This chapter includes the following topics:
■ Policy attributes
■ Adding conditions
■ The media server directs the storage server to write the changed blocks, and
combine these blocks with the locally stored, previously unchanged blocks to
make a new full image.
Note: When you first enable a policy to use accelerator, the next backup
(whether full or incremental) is in effect a full backup. It backs up all objects
corresponding to Cloud objects queries. If that backup was scheduled as an
incremental, it may not be completed within the backup window.
■ NetBackup retains track logs for future accelerator backups. Whenever you add
a query, NetBackup does a full, non-accelerated backup for the queries that are
added to the list. The unchanged queries are processed as normal accelerator
backups.
■ If the storage unit that is associated with the policy cannot be validated when
you create the policy, it is validated later, when the backup job begins. If
accelerator does not support the storage unit, the backup fails. In the bpbrm
log, a message appears that is similar to one of the following:
Storage server %s, type %s, does not support image include.
Storage server type %s, does not support accelerator backup.
■ Accelerator requires that the storage have the OptimizedImage attribute enabled.
■ The Expire after copy retention can cause images to expire while the backup
runs. To synthesize a new full backup, the SLP-based accelerator backup needs
the previous backup.
■ To detect changes in metadata, NetBackup uses one or more cloud APIs per
object/blob. Hence, change detection time increases with the number of
objects/blobs to be processed. You may observe backups running longer than
expected in cases with little or no data change but a large number of objects.
■ If, in your environment, the metadata or tag of a given object always changes
(is added, removed, or updated) along with its data, evaluate using incremental
backups without accelerator over incremental with accelerator, from a
performance and cost viewpoint.
■ While creating a Cloud object store policy with multiple tag-based queries, you
can use a few simple rules to get the best effect with accelerator. Use the query
builder in the policy creation page, and create separate queries, one query per
tag. The accelerator-based policies perform best in this configuration.
NetBackup has two ways to determine whether an object is considered for
incremental backup or not.
■ The object's modification time.
■ Any changes in the tags and user attributes.
These metadata checks are not required in environments where the
metadata of the objects does not change after the objects are created.
You can use the Quick object change scan option in the policy to avoid
these metadata checks, which leads to faster change detection and a
faster accelerated backup.
changed or unchanged. A compact backup stream that uses less network bandwidth
is used between the backup host or scale-out server and the server.
For example, a 1-TB file system with one million objects needs a track log of
approximately 701 MB.
Note that if you modify the backup selection or stream count in an
accelerator-enabled policy, NetBackup creates a new track log. The older track
logs remain on the backup host.
For Cloud object store workload, some metadata properties do not alter the
modification time for an object or blob. For example, the Tags in Azure blobs. Even
if you change these metadata properties, the corresponding objects are not
considered for the next incremental backup. This may appear as a loss of data
during an incremental backup.
For Azure Data Lake and Azure Data Lake Government providers, when you update
the ACLs of files or directories, the last modified time of the file or directory does
not change. So changing only the ACLs does not qualify the files and directories
for incremental backups.
For a detailed list of metadata properties that do not alter modification time for an
object or blob, refer to the respective cloud provider's documentation.
For incremental backups, if an object name has a path-style naming scheme, then
for each path, an entry is added in NetBackup. If the object, which is represented
by the end node of this path style naming, has not changed since the last backup
(either full or last incremental, based on the incremental schedule used), then that
object is not included in the next incremental backup. Because of this behavior,
empty paths show up in the catalog and are rendered in the browse view of restore.
For example, if you configure 10 streams for each bucket in a policy and select
five buckets for backup, you get 50 concurrent streams. Some streams may go
to a queue, if the maximum number of concurrent jobs allowed in the storage
unit selected for the policy is less than the total number of streams that are
running across different policies. For optimal performance, keep the Maximum
concurrent jobs allowed property of the selected storage greater than the total
number of streams that you expect to run across the policies.
■ You cannot use a scale-out server as a backup host, when you use dynamic
multi-streaming.
■ Dynamic multi-streaming requires a staging location path.
■ The job retry feature does not work for backup jobs.
■ Dynamic multi-streaming is not available for Azure Data Lake storage and Azure
Data Lake storage Government providers.
■ Checkpoint restart is not supported.
■ You must configure a temporary staging location as per guidelines to use this
feature. See “Configure a temporary staging location” on page 17.
■ Dynamic multi-streaming starts all the backup streams for a bucket or container
at the same time and writes to a storage unit. Therefore, using tape storage
units as the target for primary backup copies is not recommended. You can use
an MSDP storage as the target for the first backup copy, and configure a tape
storage as the target for secondary or duplication copies.
included in the SLP. This process allows the NetBackup administrator to use the
advantages of different backups in the near term or long term.
This section briefly describes SLPs. For more details, see the NetBackup™
Administrator's Guide, Volume I.
For SLP best practices, see the Knowledge Article:
https://www.veritas.com/content/support/en_US/article.100009913.
Adding an SLP
The operations in an SLP are the backup instructions for the data. Use the following
procedure to create an SLP that contains multiple storage operations.
This section briefly describes SLP creation. For more details, see the NetBackup™
Administrator's Guide, Volume I.
To create an SLP
1 Open the NetBackup web UI.
2 On the left, click Storage > Storage lifecycle policies.
3 Click Add to create a new SLP.
4 On the Storage lifecycle policy pane, provide the following details:
■ Storage lifecycle policy name: The name cannot be modified after the
SLP is created.
■ Data classification: Defines the level or classification of data that the SLP
is allowed to process. The dropdown menu contains all of the defined
classifications as well as the Any classification, which is unique to SLPs.
The Any selection indicates to the SLP that it should preserve all images
that are submitted, regardless of their data classification.
■ Priority for secondary operations: The priority that jobs from secondary
operations have in relationship to all other jobs. The priority applies to the
jobs that result from all operations except for Backup and Snapshot
operations. Range: 0 (default) to 99999 (highest priority).
For example, you may want to set the Priority for secondary operations
for a policy with a gold data classification higher than for a policy with a
silver data classification.
5 Add one or more operations to the SLP. The operations are the instructions
for the SLP to follow and apply to the data that is specified in the backup policy.
Click Add to add operations to the SLP. Provide the following on the New
operation pane. Select an Operation type.
Retention > Retention type
■ The Fixed retention indicates that the data on the storage is retained for
the specified length of time, after which the backups or snapshots are
expired. You can select from periods such as Expires immediately, 1 week,
2 weeks, 3 weeks, and many more.
An image copy with a fixed retention is eligible for expiration when all of
the following criteria are met:
■ The Fixed retention period for the copy has expired.
■ All child copies have been created.
■ All child copies that are mirror copies are eligible for expiration.
■ The Expire after copy retention indicates that after all direct (child) copies
of an image are successfully duplicated to other storage, the data on this
storage is expired. The last operation in the SLP cannot use the Expire
after copy retention type because no subsequent copy is configured.
Therefore, an operation with this retention type must have a child.
■ The Capacity managed operation means that NetBackup automatically
manages the space on the storage, based on the High water mark setting
for each volume. The High water mark and Low water mark settings on
the disk storage unit or disk pool determine how the space is managed.
To add a child operation, select an operation and then click Add child. Select
an Operation type. For a child operation, the SLP displays only those
operations that are valid based on the parent operation that you selected.
6 The Window tab is displayed for the available operation types. Use it to specify
when the secondary operation runs, by creating a window for the operation.
Protecting Cloud object store assets 50
About policies for Cloud object store assets
7 Optionally, select Postpone creation of this copy until the source copy is
about to expire.
8 Under Advanced, specify if NetBackup should process active images after
the window closes.
9 Under Duplication, you can allow an alternate read server to read a backup
image originally written by a different media server.
Step 1: Gather information about the Cloud object store account. Gather the
following information about each bucket/container:
■ The account name: Credential and connection details mentioned in the
account are used to access cloud resources using REST APIs during
backup. An account is associated with a single region; hence, a policy can
contain buckets or containers associated with that region only.
■ The bucket/container names.
■ The approximate number of objects in each bucket/container to be backed
up.
■ The typical size of the objects.
Step 2: Group the objects based on backup requirements. Divide the different
objects in the accounts into groups according to the different backup and
archive requirements.
Step 3: Consider the storage requirements. The NetBackup environment may
have some special storage requirements that the backup policies must
accommodate.
Step 7: Select exactly what to back up. You do not need to back up all objects,
unless required. Create queries to select and back up only the required objects.
You can use a scale-out server if you have a large number of buckets in your
Cloud object store. NetBackup Snapshot Manager can scale out as many data
mover containers as needed at run time, and then scale them down when the
data protection jobs are completed. You do not need to worry about configuring
multiple backup hosts, and creating multiple policies to distribute the load across
these backup hosts.
■ Evaluate the requirement for NetBackup multistreaming in your environment.
For a given bucket, NetBackup creates one stream per query defined for the
bucket in the policy. If you want to use multistreaming, you can specify this while
creating the policy. To use multistreaming, you also need to configure the number
of jobs for the buckets as clients in the Client attributes section, under primary
server Host properties. Add the client name and set the Maximum data
streams as required.
Define policy attributes like name, storage type, job priority, and so on. See
“Policy attributes” on page 54.
Select the account and objects to back up. See “Configuring the Cloud objects
tab” on page 64.
Policy attributes
The following procedure describes how to select the attributes for the backup policy.
6 The Limit jobs per policy attribute limits the number of jobs that NetBackup
performs concurrently when the policy is run. By default the box is cleared and
NetBackup performs an unlimited number of backup jobs concurrently. Other
resource settings can limit the number of jobs.
A configuration can contain enough devices so that the number of concurrent
backups affects performance. To specify a lower limit, select Limit jobs per
policy and specify a value from 1 to 999.
7 In the Job priority field, enter a value from 0 to 99999. This number specifies
the priority that a policy has as it competes with other policies for resources.
The higher the number, the greater the priority of the job. NetBackup assigns
the first available resource to the policy with the highest priority.
8 The Media owner field is available when the Policy storage attribute is set
to Any Available. The Media owner attribute specifies which media server or
server group should own the media that backup images for this policy are
written to.
■ Any (default)-Allows NetBackup to select the media owner. NetBackup
selects a media server or a server group (if one is configured).
■ None-Specifies that the media server that writes the image to the media
owns the media. No media server is specified explicitly, but you want a
media server to own the media.
9 To activate the policy, select the option Go into effect at, and set the date and
time of activation. The policy must be active for NetBackup to use it. Make sure
that the date and time are set to the time that you want to resume backups.
To deactivate a policy, clear the option. Inactive policies are available in the
Policies list.
10 The Allow multiple data streams option is selected by default and is read-only.
This option allows NetBackup to divide automatic backups for each query into
multiple jobs. Because the jobs are in separate data streams, they can occur
concurrently.
Multi-stream jobs consist of a parent job to perform stream discovery and child
jobs for each stream. Each child job displays its job ID in the Job ID column in
the Activity monitor. The job ID of the parent job appears in the Parent Job
ID column, which is not displayed by default. Parent jobs display a dash (-) in
the Schedule column.
11 Select the Use Accelerator option to enable accelerator for the policy.
NetBackup Accelerator optimizes backups to perform better both in terms of
data movement and backup time. Accelerator identifies changed content and
moves only the unique data to the backup target. Subsequently, it creates a
synthetic backup image by combining the current changed data with the
unchanged data from the previous backup image. The backup host sends the
changed data to the media server in a more efficient backup stream. The media
server combines the changed data with the rest of the backup data that is
stored in previous backups.
Typically, any data change for an object results in an update of its modification
time (mtime). Certain Cloud object store vendors may not update the mtime
when an object's tag or user attribute changes. NetBackup's Cloud object store
component considers an object's modification time (mtime) to identify the
objects that have changed since the last backup. Additionally, it must check
whether tags or user attributes have changed as well, to account for the cloud
vendor's inability to update mtime for tag changes.
Optionally, select the Quick object change scan option to skip the checks for
object tags. If you have an application environment, where the object tags are
not modified after the initial create time, you can use this option to skip these
checks and significantly increase the backup speed.
Enabling this option speeds up object change identification for Accelerator, as
it skips the comparison of object tags since the last backup. However, with this
option you may not back up objects that have only tag changes but no data
change since the last backup.
This feature works only with Dynamic multi-streaming. You need to configure
a temporary storage space for backups before using this option. See “Configure
a temporary staging location” on page 17.
12 Select the Disable for all clients option from the Client-side deduplication
options. NetBackup Cloud object store protection uses the backup host as the
client.
13 The Keyword phrase attribute is a phrase that NetBackup associates with all
backups or archives based on the policy. Only the Windows and UNIX client
interfaces support keyword phrases.
Clients can use the same keyword phrase for more than one policy. The same
phrase for multiple policies makes it possible to link backups from related
policies. For example, use the keyword phrase “legal department documents”
for backups of multiple clients that require separate policies, but contain similar
types of data.
The phrase can be a maximum of 128 characters in length. All printable
characters are permitted, including spaces and periods. By default, the keyword
phrase is blank.
For example, assume that a schedule is set up for a full backup with a
frequency of one week. If NetBackup successfully completes a full backup
for all clients on Monday, it does not attempt another backup for this
schedule until the following Monday.
To set the frequency, select a frequency value from the list. The frequency
can be seconds, minutes, hours, days, or weeks.
9 Specify a Retention period for the backups. This attribute specifies how long
NetBackup retains the backups. To set the retention period, select a period (or
level) from the list. When the retention period expires, NetBackup deletes
information about the expired backup. After the backup expires, the objects in
the backup are unavailable for restores. For example, if the retention is 2 weeks,
data can be restored from a backup that this schedule performs for only 2
weeks after the backup.
10 The Media multiplexing attribute specifies the maximum number of jobs from
the schedule that NetBackup can multiplex to any drive. Multiplexing sends
concurrent backup jobs from one or several clients to a single drive and
multiplexes the backups onto the media.
Specify a number from 1 through 32, where 1 specifies no multiplexing. Any
changes take effect the next time a schedule runs.
11 Click Add to add the attributes, or click Add and add another to add a different
set of attributes for another schedule.
To create a time window, use one of the following methods:
■ Drag your cursor in the time table: Click the day and time when you'd like
the window to start, and drag it to the day and time when you'd like the
window to close.
■ Use the settings in the dialog box:
■ In the Start day field, select the first day that the window opens.
■ In the Start time field, select the time that the window opens.
■ Enter the duration of the time window: Enter a length of time in the
Duration (days, hours, minutes) field.
■ Indicate the end of the time window: Select a day in the End day list,
and select a time in the End time field.
■ Click Add and add another to save the time window and add another.
Configuring the exclude dates
The Exclude dates tab displays a calendar of three consecutive months. Use the
lists at the top of the calendar to change the first month or year displayed.
To exclude a day from a schedule:
1 On the left, click Policies, under Protection. Click the Schedules tab. Under
Backup schedules, click Add. Click the Exclude dates tab.
2 Use one or more methods to indicate the days to exclude:
■ Select the day(s) on the 3-month calendar that you want to exclude. Use
the drop-down lists at the top of the calendar to change the months or years.
■ To indicate Recurring week days:
■ Click Set all to select all of the days in every month for every year.
■ Click Clear all to remove all existing selections.
■ Check a box in the matrix to select a specific day to exclude for every
month.
■ Click the column head of a day of the week to exclude that day every
month.
■ Click the 1st, 2nd, 3rd, 4th, or Last row label to exclude that week every
month.
The tab displays a calendar of three consecutive months. Use the lists at the top
of the calendar to change the first month or year displayed.
Adding conditions
NetBackup gives you the convenience of selectively backing up the objects/blobs
inside the buckets/containers using intelligent queries. You can add conditions or
tag conditions to select the objects/blobs inside a bucket/container that you want
to back up.
If you enable dynamic multi-streaming, all selected buckets and containers are
completely backed up. You cannot define any queries for the buckets or containers
that you have selected.
To add a condition:
1 While creating a policy, in the Cloud objects tab, click Add query, under
Queries.
2 In the Add a query dialog, enter a name for the query, and select the bucket(s)
to which you want to apply the query. In the list of buckets, you can see only
those buckets that are not selected to include all objects.
Note: While editing a query, you can see the buckets that are selected to
include all objects, but the edit option is disabled.
The Queries table shows the queries that you have added. You can search
through the queries using values in the Query name and Queries columns.
The values of the Queries column do not include the queries with Include all
objects/blobs in the selected buckets/containers option selected.
3 Select Include all objects in the selected buckets option to back up all the
objects in the selected bucket(s).
4 To add a condition, click Add condition.
You can make conditions by using either prefix or object. You cannot use
both prefix and object in the same query. Do not leave any empty fields in a
condition.
5 Select prefix or object from the drop-down, and enter a value in the text field.
Click Condition to add another condition. You can join the conditions by the
boolean operator OR.
6 Click Add to save the condition.
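For example, a query that backs up everything under two prefixes could combine
conditions like this; the prefix values are placeholders:
prefix = reports/2024/ OR prefix = logs/app1/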
Adding tag conditions
3 Select Include all objects in the selected buckets option to back up all the
objects in the selected bucket(s).
4 To add a tag condition, click Add Tag Condition.
5 Enter values for Tag Key and Tag Value to create the condition. The boolean
operator AND joins the values. NetBackup backs up objects with matching
key-value pairs.
6 Click Tag condition to add more conditions. You can use the boolean AND
or OR parameters to connect the tag conditions.
7 Click Add to save the condition.
■ The following blobs are tagged with the "Project": "Finance" key-value pair:
■ OrganizationData/Fin/accounts/account1/records1.txt
■ OrganizationData/Fin/accounts/account2/records2.txt
■ OrganizationData/Fin/accounts/account3/records3.txt
■ OrganizationData/Fin/accounts/monthly_expenses/Jul2022.rec
■ OrganizationData/Fin/accounts/monthly_expenses/Aug2022.rec
Copy a policy
Copying a policy lets you reuse similar policy attributes, schedules, and cloud
objects among your policies. You can also reuse complex queries by copying
policies, to save time.
To copy a policy:
1 On the left, click Policies. All the policies that you have the privilege to view
are displayed in the Policies tab.
2 Click the ellipsis menu (three dots) in the row of the policy that you want to
copy. Click Copy policy.
Alternatively, select the option in the row of the policy, click Copy policy at
the top of the table.
3 In the Copy policy dialog, optionally, change the name of the policy in the
Policy to copy field.
4 Enter the name of the new policy, in the New policy field.
5 Click Copy to initiate copying.
■ Copying the deactivated policy creates a new policy in the deactivated state.
When you delete a policy, the scheduled backups that were configured in that
policy no longer run.
To deactivate or delete a policy:
1 On the left, click Policies. All the policies that you have the privilege to view
are displayed in the Policies tab.
2 Click the ellipsis menu (three dots) in the row of the policy that you want to
deactivate or delete. Click Deactivate or Delete as required.
Alternatively, select the option in the row of the policy, click Deactivate or
Delete as required, at the top of the table.
The policies get deactivated immediately. To reactivate a policy, click the
ellipsis menu (three dots) in the row of the deactivated policy, and click
Activate.
3 If you delete a policy, click Delete in the confirmation box.
individual objects, select all objects under a set of folder(s), or all objects
matching a set of prefixes.
■ A valid Cloud object store account is required to access the buckets, containers,
and objects/blobs. You can add the Cloud object store account-related
information to NetBackup while creating the account. The permission required
for restoring differs from the ones required for backup. If it helps, you can create
a separate Cloud object store account for recovery.
■ Ensure that you have permission to view and select the Cloud object store
account and the access host, so that you can select a recovery host for a
policy in the Cloud objects tab.
■ If required, you can use a different recovery host than the one used for Cloud
object store account validation. Ensure that the new recovery host has the
required ports opened and configured for communication from the backup host
or scale-out server to the cloud provider endpoint, using REST API calls.
■ You can plan to start multiple restore jobs in parallel for better throughput. You
can select objects for recovery as individual objects, or using a folder or prefix.
Note: This option can incur additional cloud storage costs to hold data for
a longer time. Avoid these options if you only need a temporary copy of the
data that you intend to delete after browsing the objects or copying them
to another location.
To apply object retention locks or legal holds on the restored objects, you can select
multiple options during restoration to meet your organization's compliance and
retention requirements. You can select the options in the Recovery options page,
under Advanced restore options. See “Recovering Cloud object store assets”
on page 72.
To recover assets:
1 On the left, click Recovery. Under Regular recovery, click Start recovery.
2 In the Basic properties page, select Policy type as Cloud-Object-Store.
3 Click the Buckets/Containers field to select assets to restore.
■ In the Add bucket/container dialog, the default option displays all
available buckets/containers with completed backups. You can search the
table using the search box.
■ To add a specific bucket or container, select Add the bucket/container
details option. If you have selected an Azure Data Lake workload, select
Add files/directories.
Select the cloud provider, then enter the bucket/container name and the
Cloud object store account name. For Azure workloads, specify the storage
account name, if the field is available in the UI.
Note: In a rare scenario, you may not find the required bucket listed in the
table for selection, but you can see the same bucket listed in the catalog
view as part of a backup ID. In that case, you can select the bucket by
manually entering the bucket name, provider ID, and Cloud object store
account name as per the backup ID. The backup ID is formed as
<providerId>_<cloudAccountname>_<uniquename>_<timestamp>
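For illustration, a backup ID such as amazon_myaccount_mybucket_1690282728
(a hypothetical value) splits into these four fields. A minimal sketch in
Python, assuming none of the name components themselves contain underscores:

# Split a backup ID of the form
# <providerId>_<cloudAccountname>_<uniquename>_<timestamp>.
backup_id = "amazon_myaccount_mybucket_1690282728"  # hypothetical
provider_id, account_name, unique_name, timestamp = backup_id.split("_")
print(provider_id, account_name, unique_name, timestamp)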
The following warning message is displayed when images that are not
scanned are selected for recovery:
Note: Clean file recovery (Skip infected files) as part of recovery is not
supported for Cloud-Object-Store.
8 In the Recovery options page, select whether you want to restore to the
source bucket or container, or use a different one. These are the Object
restore options:
■ Restore to the original bucket or container: Select to recover to the same
bucket or container from where the backup was taken.
Optionally:
■ Add a prefix for the recovered assets in the Add a prefix field.
■ If you have selected an Azure Data Lake workload, enter the Directory
to restore.
Note: If you have selected Include all objects/blobs and folders, in step
7, the Restore objects/blobs or prefixes to different destinations option
is disabled.
9 Select a Recovery host. The recovery host that is associated with the Cloud
object store account is displayed by default. If required, change the
recovery host. If the Cloud object store account uses a scale-out server,
this field is disabled.
10 Optionally, to overwrite any existing objects or blobs with the recovered
assets, select Overwrite existing objects/blobs.
11 (Optional) To override the default priority of the restore job, select Override
default priority, and assign the required value.
12 In the Advanced restore options:
■ To apply the original object lock attributes from the backed-up objects,
select Retain original object lock properties.
■ To change the values of different properties, select Customize object lock
properties. From the Object lock mode list:
■ Select Compliance or Governance for Amazon or other S3 workloads.
■ Select Locked or Unlocked for Azure workloads.
■ Select a future date and time until which the object lock is valid. The
recovered object remains locked until this specified date and time.
■ Select Object lock legal hold status to implement it on the restored objects.
See “Configuring Cloud object retention properties” on page 72.
The Advanced restore options are not applicable to the Azure Data Lake
workload.
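For reference, the conceptually equivalent native S3 operations behind these
options are object retention and legal hold calls. A minimal sketch using
boto3 (the AWS SDK for Python); the bucket, key, and date are illustrative,
and NetBackup performs the provider calls itself during recovery:

import datetime
import boto3

s3 = boto3.client("s3")

# Lock the restored object until a future date. GOVERNANCE and
# COMPLIANCE correspond to the Object lock mode choices for S3.
s3.put_object_retention(
    Bucket="restored-bucket",      # hypothetical
    Key="accounts/records1.txt",   # hypothetical
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime.datetime(2026, 1, 1),
    },
)

# Place a legal hold, matching the Object lock legal hold status option.
s3.put_object_legal_hold(
    Bucket="restored-bucket",
    Key="accounts/records1.txt",
    LegalHold={"Status": "ON"},
)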
13 In the Review page, view the summary of all the selections that you made,
and click Start recovery.
You can see the progress of the restore job in the Activity monitor.
Chapter 5
Troubleshooting
This chapter includes the following topics:
■ Error 5541: Cannot take backup, the specified staging location does not have
enough space
■ Error 5537: Backup failed: Incorrect read/write permissions are specified for the
download staging path.
■ Error 5538: Cannot perform backup. Incorrect ownership is specified for the
download staging path.
■ Reduced acceleration during the first full backup, after upgrade to versions 10.5
and 11.
■ After backup, some files in the shm folder and shared memory are not cleaned
up.
■ Backup fails with default number of streams with the error: Failed to start
NetBackup COSP process.
■ Backup fails, after you select a scale out server or Snapshot Manager as a
backup host
■ Backup fails or becomes partially successful on GCP storage for objects with
content encoded as GZIP.
■ Recovery for the original bucket recovery option starts, but the job fails with
error 3601
■ Restore fails: "Error bpbrm (PID=3899) client restore EXIT STATUS 40: network
connection broken"
■ Access tier property not restored after overwriting the existing object in the
original location
■ Backup failed and shows a certificate error with Amazon S3 bucket names
containing dots (.)
■ Azure backup jobs fail when space is provided in a tag query for either tag key
name or value.
■ Bucket listing of a cloud provider fails when adding a bucket in the Cloud objects
tab
■ AIR import image restore fails on the target domain if the Cloud store account
is not added to the target domain
■ Backup for Azure Data Lake fails when a back-level media server is used with
backup host or storage server version 10.3
■ Backup fails partially in Azure Data Lake: "Error nbpem (pid=16018) backup of
client
■ Recovery for Azure Data Lake fails: "This operation is not permitted as the path
is too deep"
■ Recovery error: "Invalid alternate directory location. You must specify a string
with length less than 1025 valid characters"
■ Restore fails: "Cannot perform the COSP operation, skipping the object:
[/testdata/FxtZMidEdTK]"
■ Wait for all the running backups and restores to finish, and then restart the
backup host. This clears the shared memory.
After an upgrade to NetBackup version 10.5, copying, activating, and deactivating policies may fail for older
policies
The error occurs when a policy has Dynamic multi-streaming, and you update the
Cloud object store account used in the policy to use a scale-out server or NetBackup
Snapshot Manager as a backup host.
Workaround:
Do any of the following:
■ Disable Dynamic multi-streaming in the policy.
■ Do not use a scale-out server or NetBackup Snapshot Manager as a backup
host in the Cloud object store account.
Workaround
Create the Cloud object store account with the same name and provider as the
original Cloud object store account, and retry the recovery.
Note: This results in the first backup running without any acceleration for
these new queries. The problem is resolved for subsequent backups.
The Cloud object store account has encountered an error, see user
documentation, and re-create the account.
You cannot edit the Cloud object store account in this state. All jobs corresponding
to the Cloud object store account keep failing.
Cause
The Cloud object store account goes to an error state when:
■ The alias corresponding to Cloud object store account is accidentally deleted
using the csconfig CLI.
■ The alias corresponding to the Cloud object store account is accidentally updated
using the csconfig CLI.
Note: It is recommended not to use the csconfig CLI to update the alias
corresponding to the Cloud object store account. The correct way to update
it is through the Edit workflow or the create-or-update API. Aliases with
the same name as the Cloud object store account are the aliases
corresponding to that account.
Workaround
Within a NetBackup domain, the name must be unique across Cloud object store
accounts, Cloud storage servers, and MSDP-C LSUs, because they share a
single namespace. Hence, the following usage scenarios are possible:
Case 1: When there is no valid Cloud storage server or MSDP-C LSU with the
same name as the Cloud object store account in the environment.
■ Gather the Cloud object store account details as per your environment and
cross-check the details obtained.
■ Optionally, if the Alias corresponding to the Cloud object store account exists,
use the csconfig CLI and note down the details of the alias.
■ Use the following command to list all instances for the type and locate
the Cloud object store account and its instance:
<install-path>/csconfig cldinstance -i -pt <provider_type>
■ Use the following command to get the details of the instance and the
Cloud object store account:
<install-path>/csconfig cldinstance -i -in <instance name>
Do the following:
1 Call the getBucketLocation API on the bucket to retrieve the correct
location constraint for your account configuration. A sketch using the
AWS SDK follows these steps.
If the API returns a blank location constraint, use 'us-east-1' as the
region location constraint.
2 Correct the region details by editing the account configuration. See “Adding
Cloud object store accounts” on page 26.
3 To edit the cloud configuration, do the following:
■ On the left, click Host Properties.
■ Select the required primary server and connect to it. Click Edit primary server.
■ Click Cloud storage.
■ Optionally, enter your cloud provider name in the search field, to filter the
list.
■ In the row corresponding to your cloud provider service host, enter the
correct region details and save.
Alternatively, delete the account and recreate it with the correct region location
constraint.
"s3RegionDetails": [
{ "regionId": "us-east-1",
Troubleshooting 89
Restore failed with 2825 incomplete restore operation
}
]
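The getBucketLocation check in step 1 can be reproduced with the AWS SDK.
A minimal sketch using boto3 (the AWS SDK for Python); the bucket name is
illustrative:

import boto3

s3 = boto3.client("s3")
# GetBucketLocation returns a blank (None) location constraint for
# buckets that reside in us-east-1.
resp = s3.get_bucket_location(Bucket="mybucket")  # hypothetical bucket
region = resp.get("LocationConstraint") or "us-east-1"
print("Region location constraint to configure:", region)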
Restore failed with 2825 incomplete restore operation
Workaround:
When the error is not fatal, the restore job is partially successful. Check
the Activity Monitor to see the list of objects that could not be restored.
Try restoring to a different location (a different bucket/container or
account) to check whether the problem is with the destination cloud account
or bucket settings.
When the error is fatal, the restore job fails. Check the nbcosp logs to
determine the object for which the restore failed. Use granular object
selection for the next restore, and skip the previously failed object while
selecting objects.
Refer to your cloud provider documentation to check whether you use a
feature or any metadata that the cloud vendor does not fully support, or
that requires additional configuration. Fix the object with the right
attributes in the Cloud object store and start a new backup job. After this
backup completes, the objects can be restored without this workaround.
Bucket listing of a cloud provider fails when adding a bucket in the Cloud objects tab
Workaround
Although the bucket list is not available, you can always manually add buckets in
the Cloud objects tab for backup.
When the failure is due to a DNS issue, you can temporarily work around it
and list buckets by adding an IP-to-hostname mapping entry in the /etc/hosts
file. When the endpoint supports only virtual-hosted-style requests, first
prefix the endpoint with a random bucket name when you use commands like
ping, dig, or nslookup to determine the IP of the cloud endpoint. For
example,
ping randombucketname.s3-fips.us-east-1.amazonaws.com
You can then add the resulting IP along with the actual endpoint name
(without the random bucket name prefix) to the /etc/hosts file.
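For example, if the lookup above resolved to 52.216.0.0 (an illustrative
address, not a real endpoint IP), the temporary /etc/hosts entry would be:
52.216.0.0 s3-fips.us-east-1.amazonaws.com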
Note that editing DNS entries on the computer is a temporary workaround for
bucket listing. Remove the entries after the policy configuration is done,
unless the cloud endpoint is a private cloud setup that can use static IP
addresses permanently.
bpbkar Exit: INF - EXIT STATUS 3600: Cannot perform the COSP operation.
Error nbpem (pid=13052) backup of client
azuredatalake_COSv17_adlsgen2xxxxx.xxxxxxx exited with status 3600 (cannot
perform the COSP operation).
Explanation
Support for Azure Data Lake was introduced in NetBackup version 10.3, so this
workload does not work on media or storage server versions before 10.3.
Workaround
Ensure that media server, backup host or scale-out server, and storage server
versions are 10.3 or later.
This error occurs when you add an empty directory, or a leaf-level directory
without any contents, to a policy's query filter. During backup, that empty
directory is not backed up.
Workaround
Select the Include all files/directories in the selected buckets/containers
option to back up the empty directories.
Note: This error does not appear on the Activity Monitor. For more details, refer
to NetBackup API documentation.
Note: This issue does not apply to files that were uploaded to the Azure
portal using the Upload option.
Workaround
Do any of the following:
■ Try restore operation on the same container to a different directory.
■ Try restore operation to a different container.
■ Try restore operation to a different destination.
■ Delete the original directory, and then try to restore to the same location.
{"level":"error","Error Code":"SignatureDoesNotMatch","Message":"The
request signature we calculated does not match the signature you
provided. Check your key and signing method.
Troubleshooting 95
Discovery failures due to improper permissions
","time":"2023-07-25T10:58:48.130601182Z","caller":"main.validateNBCosCreds:s3_ops.go:1634",
"message":"Error in getBucketLocation for credential validation"}
{"level":"error","errmsg":"Unable to validate creds.","storage
server":"aws-acc","time":"2023-07-
2,51216,309,366,474,1690282728130,1673,140536982484736,0:,0:,0:,2,(28|S113:ERR
- OCSD reply with error,error_code=1003 error_msg:
updateStorageConfig Failed as credential validation failed|)
2,51216,309,366,475,1690282728131,1673,140536982484736,0:,0:,0:,2,(28|S60:ERR
- operation_to_ocsd failed, storageid=aws-acc, retval=23|)
0,51216,526,366,6,1690282728131,1673,140536982484736,0:,132:Credential
validation failed for given account,
Workaround:
Update the credentials and try to create the account again.
25T11:14:14.761525555Z","caller":"main.(*OCSS3).
listBucketsDetailsCOSP:s3_ops.go:5261","message":"Unable
to listBucketsDetailsCOSP"} {"level":"debug","status code"
:403,"errmsg":"AccessDenied: Access Denied\n\tstatus code: 403,
request id: K7JVVPWAGW4KYSQ6, host id:
Workaround:
Add the required permissions. See “Permissions required for Amazon S3 cloud
provider user” on page 21.
Restore failures due to object lock
</Error>\n","time":"2023-07-25T05:56:00.708117368Z","caller":
"internal/logging.ExtendedLog.Log:zerolog_wrapper.go:18","message":"SDK
log entry"}
{"level":"debug","status code":403,"errmsg":"AccessDenied:
Access Denied\n\tstatus code: 403, request id: ZNT4GXHP70HX573A,
host id:
3scBmke9LmOwtuK5lnYv0ozyKgbne+ey04qXtSt6s/OQbpSCyfxiwvdi2CPG3cHU+H/ztz7C3mHeoX5Cnvb2xg==",
"time":"2023-07-25T05:56:00.708145345Z","caller":"main.s3StatusCode:s3_ops.go:8447",
"message":"s3StatusCode(): get http status code"}
{"level":"error","error":"AccessDenied: Access Denied\n\tstatus code:
403,
request id: ZNT4GXHP70HX573A,
host id:
Troubleshooting 97
Restore failures due to object lock
3scBmke9LmOwtuK5lnYv0ozyKgbne+ey04qXtSt6s/OQbpSCyfxiwvdi2CPG3cHU+H/ztz7C3mHeoX5Cnvb2xg==",
"object
key":"cudtomer35jul/squash.txt","time":"2023-07-25T05:56:00.708160142Z",
"caller":"main.(*OCSS3).commitBlockList:s3_ops.go:2655",
"message":"s3Storage.svc.PutObjectRetention Failed to Put
ObjectRetention"}
Workaround:
You must have the required permissions for object retention. The following
IAM policy statement shows the permissions that your role must have:
"Version": "2012-10-17",
"Statement": [
"Sid": "ObjectLock",
"Effect": "Allow",
"Action": [
"s3:PutObjectRetention",
"s3:BypassGovernanceRetention"
],
"Resource": [
"*"
}
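After a successful restore, you can verify that the retention settings were
applied to the recovered object. A minimal sketch using boto3; the bucket
name is illustrative, and the object key is taken from the log excerpt
above:

import boto3

s3 = boto3.client("s3")
# Read back the object lock settings applied during recovery.
retention = s3.get_object_retention(
    Bucket="restored-bucket",  # hypothetical destination bucket
    Key="cudtomer35jul/squash.txt",
)["Retention"]
print(retention["Mode"], retention["RetainUntilDate"])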