SG 248513
Henry Vo
Gayathri Gopalakrishnan
Diego Kesselman
Rohit Chauhan
Ash Giddings
Luis Eduardo Silva Viera
Tim Simon
IBM Power
Redbooks
IBM Redbooks
January 2025
SG24-8513-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page xi.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Chapter 2. Migrating your IBM i LPAR to IBM Power Systems Virtual Server . . . . . . 21
2.1 IBM i data migration to Power Virtual Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.1 Other challenges in cloud migrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.2 Migration options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2 Migration with partial saves and restores . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2.1 Example scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3 Backup and Restore migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3.1 Considerations for backup and restore migrations . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3.2 Full system backup and restore by using native commands. . . . . . . . . . . . . . . . . 28
2.3.3 Full-system backup from IBM i Source by using BRMS and IBM Cloud Storage and
Restore on IBM Power Systems Virtual Server . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.4 Migrating IBM i to Power Virtual Server using PowerVC. . . . . . . . . . . . . . . . . . . . . . . . 61
2.5 Other migration options in IBM Cloud catalog. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.6 Migrating IBM i by using IBM i Migrate While Active . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.6.1 Migration use cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.6.2 Anatomy of a migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
2.6.3 Data synchronization jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Chapter 4. Back up for IBM i on IBM Power Systems Virtual Server . . . . . . . . . . . . . 113
4.1 Backup and restore consideration for IBM Power Virtual Server instances . . . . . . . . 114
4.2 IBM Backup, Recovery, and Media Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.2.1 IBM BRMS overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.2.2 IBM Cloud Storage Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.2.3 Cloud solutions for i characteristic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
4.2.4 IBM BRMS turn-key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.2.5 Backing up the IBM i system to the cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.2.6 Full-system backups from the cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.2.7 Recovering the IBM i system from the cloud. . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.2.8 Full-system recovery from the cloud using IBM i as an NFS server . . . . . . . . . . 128
4.3 Creating object-level backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.3.2 Copying files to the cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4.3.3 Copying files from the cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.3.4 Backup and restore by using save files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.3.5 Backup by using image catalog and IBM Cloud Object Storage. . . . . . . . . . . . . 151
4.3.6 Sample save and restore IBM i objects to IBM Cloud Object Storage . . . . . . . . 153
4.3.7 Transferring a save library to the cloud by using IBM BRMS . . . . . . . . . . . . . . . 157
4.3.8 Automatically transferring media to IBM Cloud Object Storage . . . . . . . . . . . . . 159
4.4 FalconStor overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
4.4.1 Installation and Configuration Prerequisites for FalconStor VTL . . . . . . . . . . . . 161
4.4.2 Deployment Scenario with IBM Cloud. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.4.3 IBM i Migration Example using FalconStor: Details for 7 Key Steps. . . . . . . . . . 169
4.4.4 Sizing and ordering FalconStor VTL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
4.4.5 Configure FalconStor VTL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
4.5 Power Virtual Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
Chapter 5. Hints and tips for IBM i deployments on IBM Power Systems Virtual Server 191
5.1 Connecting to an IBM i virtual machine. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
5.1.1 Remote access to IBM i using tunneling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.1.2 IBM i 5250 console through LAN adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
5.2 Using snapshots on IBM i instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.2.1 Taking snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.3 Power Virtual Server with VPC Landing Zone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.3.1 Standard variation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.3.2 Extend Power Virtual Server with VPC landing zone - Standard variation . . . . . 206
5.3.3 Quickstart variation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
5.3.4 Import variation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Chapter 6. Disaster Recovery with IBM Power Systems Virtual Server. . . . . . . . . . . 211
6.1 Disaster recovery using system level replication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
6.1.1 Solution components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
6.1.2 Solution requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
6.2 Operating system level replication with PowerHA SystemMirror for i geographic mirroring 240
6.2.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
6.2.2 Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
6.2.3 Implementation of PowerHA SystemMirror for i geographic mirroring in the Power
Systems Virtual Server environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
6.2.4 Switching a geographic mirroring environment . . . . . . . . . . . . . . . . . . . . . . . . . . 257
6.2.5 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
6.3 Logical replication use case with Bus4i. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
6.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
6.3.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
6.3.3 Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
6.3.4 Implementation of Bus4i for IBM i Logical Replication . . . . . . . . . . . . . . . . . . . . 269
6.3.5 Switching a Bus4i environment between two IBM i virtual machines in the IBM Power
Systems Virtual Server Cloud. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
6.3.6 BUS4i ISB Disaster Recovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
6.4 Disaster Recovery with storage replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
6.4.1 Benefits of Global Replication Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, Aspera®, DB2®, Db2®, FlashCopy®, IBM®, IBM Cloud®, IBM Cloud for Financial Services®, IBM Cloud Pak®, IBM Cloud Satellite®, IBM FlashSystem®, IBM Garage™, IBM Security®, IBM Watson®, Interconnect®, Passport Advantage®, POWER®, Power9®, PowerHA®, PowerVM®, Rational®, Redbooks®, Redbooks (logo)®, Satellite™, System z®, SystemMirror®, Think®, Tivoli®, WebSphere®
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Ansible, CloudForms, Fedora, OpenShift, Red Hat, are trademarks or registered trademarks of Red Hat, Inc.
or its subsidiaries in the United States and other countries.
VMware, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in
the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
This IBM Redbooks publication provides how-to guidance that describes deployment, networking, and data management tasks on IBM Power Systems Virtual Server by using sample scenarios.
During content development, the team used the available documentation, an IBM Power Systems Virtual Server environment, and other software and hardware resources to document the following information:
IBM Power Systems Virtual Server networking
Data management deployment scenarios
Migration use case scenarios
Backup case scenarios
Disaster recovery case scenarios
This book addresses topics for IT architects, IT specialists, developers, sellers, and anyone
who wants to implement and manage workloads in the IBM Power Systems Virtual Server.
This publication also helps transfer how-to skills to technical teams and provides solution guidance to sales teams.
This book complements the documentation that is available at the IBM Documentation web page and aligns with the educational materials that are provided by IBM Garage™ for Systems Technical Education.
Authors
This book was produced by a team of specialists from around the world working with IBM Redbooks.
Henry Vo is an IBM® Redbooks® Project Leader. He joined IBM after graduating from the University of Texas at Dallas in 2012 with a major in Management Information Systems (MIS). He has shared his technical expertise in business problem solving, risk and root-cause analysis, and writing technical plans for business. Within IBM, he has held multiple roles, including project management, project lead, ST/FT/ETE test, back-end developer, and DOL agent for NY.
Rohit Chauhan is a Senior Technical Specialist with expertise in IBM i architecture, working at Tietoevry Tech Services, Stavanger, Norway, an IBM Business Partner and one of the largest IT service providers in the Nordics. He has over 12 years of experience on the IBM Power platform with the design, planning, and implementation of IBM i infrastructure, including high availability and disaster recovery solutions for many customers. Before his current role, Rohit worked for clients in Singapore and the U.A.E. in technical leadership and security roles in the IBM Power domain. He has rich corporate experience in architecting solution designs, implementations, and system administration. He is a member of Common Europe Norway with a strong focus on the IBM i platform and security, and was recognized as an IBM Advocate, Influencer, and Contributor for 2024 through the IBM Rising Champions Advocacy Badge program. He holds a bachelor's degree in Information Technology, is an IBM certified technical expert, and holds the ITIL CDS certificate. His areas of expertise include IBM i, IBM Hardware Management Console (HMC), security enhancements, IBM PowerHA®, systems performance analysis and tuning, BRMS, external storage, PowerVM, and providing solutions to customers on the IBM i platform.
Luis Eduardo Silva Viera is an IBM i Expert IT specialist and consultant who worked for IBM in Latin America for 20 years starting in 1999, developing expertise as an IBM i consultant in the design, planning, and implementation of external storage, high availability, disaster recovery (DR), backup and recovery, and performance review solutions for small and large customers of the IBM i platform in countries in South, Central, and North America. Since 2019, he works for the globally operating n-Komm group of companies, based in Karlsruhe, Germany, and focuses on the company's IBM i services for its customers in Germany, Switzerland, and Austria. Starting in 2021, he also supports the group's worldwide activities with the T.S.P. Bus4i logical replication product for IBM i, focusing on high availability, DR, and migration projects for IBM i customers. Luis also has experience teaching technical courses on IBM i in different countries.
Tim Simon is a Redbooks Project Leader in Tulsa, Oklahoma, USA. He has over 40 years of
experience with IBM primarily in a technical sales role working with customers to help them
create IBM solutions to solve their business problems. He holds a BS degree in Math from
Towson University in Maryland. He has worked with many IBM products and has extensive
experience creating customer solutions using IBM Power, IBM Storage, and IBM System z®
throughout his career.
Dan Sundt
IBM i Power as-a-Service Product Manager
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Summary of changes
This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.
New information
Migration while active
IBM Cloud Network
FalconStor Virtual Tape Library
Logical replication use case with Bus4i
Changed information
Deleted some outdated information
Updated to the new workspace interface for IBM Cloud®
Added support for IBM Power10 systems
This chapter provides a look at the benefits of running your AIX, IBM i, or Linux on Power workloads in the cloud and discusses some of the use cases that can be run effectively on Power Virtual Server.
IBM Power Virtual Server is targeted at organizations looking for robust, enterprise-grade cloud solutions that can support demanding applications. IBM Power Virtual Server resources reside in IBM data centers with dedicated networking and storage area network attached Fibre Channel storage. There are currently 21 data centers around the world hosting Power Virtual Server, and you can choose the data center that is nearest to your own data center. The list of data center regions is provided in section 1.1.2, “IBM Power Virtual Server Regions” on page 3. IBM Power clients who rely on private cloud infrastructure can now quickly and economically extend their Power IT resources into the cloud.
In the data centers, the Power Virtual Server systems are separated from the rest of the IBM Cloud servers by separate networks and direct-attached storage. The internal networks are fenced for security, but offer connectivity options to IBM Cloud infrastructure or to private cloud environments on your premises. This infrastructure design enables Power Virtual Server to maintain key enterprise software certification and support because the Power Virtual Server architecture is identical to certified private cloud infrastructure.
Power Virtual Server is an infrastructure as a service (IaaS) offering where there are no up-front costs for deploying resources, and resources are paid for based on usage. Power Virtual Server uses a monthly billing rate that includes the licenses for the AIX, IBM i, or Linux operating systems. The monthly billing rate is prorated by the hour based on the resources that are deployed to the Power Virtual Server instance for the month. When you create the Power Virtual Server instance, you can see the total cost for your configuration based on the options that you specify, so you can quickly identify which configuration options provide the best value for your business needs. There is also a bring-your-own-image option for Linux. In this case, the customer is responsible for acquiring the subscription and paying the Linux distributor for the licenses used.
You can configure and customize the following options when you create a Power Virtual
Server:
– Number of virtual server instances
– Number of cores
– Amount of memory
– Data volume size and type
– Network interfaces
For customers looking to move to a cloud-like environment but with a preference for on-premises hardware, IBM offers IBM Power Virtual Server Private Cloud. This on-premises solution enables customers with sensitive data concerns to run their Power-based applications in an on-premises cloud that provides the same cloud capabilities and management as the public cloud version of IBM Power Virtual Server, without placing their data in public cloud infrastructure.
IT administrators
The people who manage infrastructure technology are interested in migrating to the cloud to speed up time to value for their Power workloads, shift capital expense to operating expense, and improve business resilience and scalability. It allows clients to manage a truly hybrid
environment with flexible burst environments for spikes in usage, development and test
environments, or production workloads. The flexibility of quickly implementing new hardware
for a project speeds time to results and lowers costs, providing significant benefit to the
enterprise.
Application developers
Companies with Power servers that have subject matter experts in AIX, IBM i, and Linux development can take advantage of this offering by developing mission-critical applications in the cloud and then deploying them either in the cloud or on premises. Reducing the time to
implement new development and test environments increases the productivity of the
developers.
Data center locations are provided in both North and South America. Figure 1-1 displays
regions and zones in the Americas.
There are additional data centers available in Europe. Figure 1-2 displays regions and zones in
Europe.
If you need service in the Asia Pacific regions, data centers are available as shown in
Figure 1-3.
Figure 1-3 Power Virtual Server regions and zones in Asia Pacific
There are additional benefits when workloads run in an IBM Power Systems Virtual Server environment, providing significant advantages compared to building the environment yourself.
1.2.1 Availability
IBM Power Virtual Server is designed to support a range of environments, from
development/test environments to critical business applications requiring high availability and
disaster recovery. The following availability features are built into the offering:
Live Partition Mobility support
The IBM i instance is migrated seamlessly to a new server with Live Partition Mobility at no cost when the original server is undergoing maintenance or is failing. This provides no-charge high availability, which is the default behavior but can be tuned.
Data protection and replication
– Instant snapshots can be taken from your IBM i instances.
– Clones can be taken from your disks with just a couple of clicks.
– Instances can be exported to an OVA and saved for later use or migrated to a new
region.
Disaster recovery solutions available
– Data replication using storage replication is available using Global Replication Services
(GRS).
– PowerHA is available for automated failover.
– Other third party IBM i replication services are also supported.
1.2.2 Performance
Power Virtual Server is designed to meet the performance requirements of any environment.
Based on IBM Power9® and Power10 servers, your workloads benefit from the latest
generations of IBM Power technology and the environment is designed to provide excellent
performance. Consider the following:
Power9 and Power10 processors provide significant improvements in performance over older generations of IBM Power processor-based servers.
Power Virtual Server uses SAN attached high performance storage provided by IBM
FlashSystem® servers with SSD drives.
– Choose from a variety of IOPs based storage pools to meet the performance needs of
your application.
Network speed is fast:
– Internet connection can scale 1 Gbps - 10 Gbps.
– Local networking is redundant, supports Jumbo Frames and link aggregation, and the
link speed is over 10 Gb.
1.3 Storage
Power Virtual Server utilizes direct attached SAN storage to provide a high performance
storage environment. The storage infrastructure is designed with high availability in mind,
providing multiple connections between the storage and servers with no single point of failure.
In addition, Power Virtual Server provides multiple methods of replicating storage in your
environment to support high availability between data centers and disaster recovery options.
When you define your Power Virtual Server instance (LPAR), you define the boot image and the storage tier where you want the boot image volume to be created. You can create additional data volumes when you define your LPAR, or you can attach them to the instance later. For each storage volume, you define the storage tier for that volume.
Figure 1-4 shows the supported storage tiers with corresponding IOPS.
Flexible IOPS offers multiple IOPS levels (tiers) of storage to choose from. Each IOPS level
has its own pricing, performance, and capability.
Note: The best practice for an image import is to use the default IOPS level (Tier 3). You can choose the IOPS level for your boot volume during virtual server instance deployment; the boot volume's IOPS level does not need to match the IOPS level of the image.
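As an illustration of how flexible IOPS scales with volume size, the following hypothetical sketch uses assumed per-GB IOPS values and an assumed flat allocation for the Fixed IOPS tier; these numbers are illustrative only, so confirm the current tier definitions in the IBM Power Virtual Server documentation.

```python
# Assumed tier characteristics for illustration only; verify current
# values in the IBM Power Virtual Server documentation.
IOPS_PER_GB = {"tier0": 25, "tier1": 10, "tier3": 3}
FIXED_IOPS = 5000  # assumed flat per-volume allocation for the Fixed tier

def max_volume_iops(tier: str, size_gb: int) -> int:
    """Return the IOPS ceiling for a volume of the given size and tier."""
    if tier == "fixed":
        return FIXED_IOPS
    return size_gb * IOPS_PER_GB[tier]

print(max_volume_iops("tier3", 100))  # -> 300
```

The point of the sketch is the scaling model: with the per-GB tiers, a larger volume earns a proportionally higher IOPS ceiling, while the Fixed tier decouples IOPS from capacity.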
Flexible IOPS allows you to select a tier that you need for your:
Boot volume
Data volume
Boot volume
When you are creating a virtual server instance, you can define the boot volume by
performing the following steps:
Select your Operating system.
Select or clear the Configure for Epic workloads indicator.
Select your Image.
Select from Tier 0, Tier 1, Tier 3, or Fixed IOPS.
For Storage pool, as shown in Figure 1-5 on page 8, select from Auto-select pool, Affinity, or Anti-affinity.
Note: All volumes that are created during VM provisioning are created on the same
storage pool as the boot volume irrespective of their tier selection.
Data volume
During VM provisioning, if you create additional data volumes to attach to the new virtual server instance, these data volumes can be created on any of the supported storage tiers. All these additional data volumes reside in the same storage pool where the boot volume of the virtual server instance resides.
When you create a Power Virtual Server, you can select a private or public network interface
to connect your Power Virtual Server instance to your on-premises networks.
Public network
The public network option provides a quick and easy method to connect to a Power Virtual Server instance.
IBM configures the network environment to enable a secure public network connection
from the internet to the Power Virtual Server instance.
Connectivity is implemented by using an IBM Cloud Virtual Router Appliance (VRA) and a
Direct Link Connect connection.
The public network is protected by a firewall and supports the following secure network
protocols:
– SSH
– HTTPS
– Ping
– IBM i 5250 terminal emulation with SSL (port 992)
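As a quick sanity check that the secured 5250 port is reachable from the internet, a TLS handshake can be attempted against port 992. This is a hypothetical diagnostic sketch, not part of the offering; the host name is a placeholder for your instance's public address.

```python
import socket
import ssl

def check_tls_5250(host: str, port: int = 992, timeout: float = 5.0) -> str:
    """Open a TLS connection to the secured Telnet 5250 port and return
    the negotiated TLS version, as a quick reachability check."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# Example (placeholder address): check_tls_5250("your-instance-public-ip")
```

A successful handshake confirms that the firewall permits the SSL-secured 5250 protocol; a timeout or TLS error points at the network path or certificate configuration rather than the IBM i instance itself.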
Private network
The private network option allows your Power Virtual Server instance to access existing IBM
Cloud resources, such as IBM Cloud Bare Metal Servers, Kubernetes containers, and Cloud
Object Storage.
Uses a Direct Link Connect connection to connect to your IBM Cloud account network and
resources.
Required for communication between different Power Virtual Server instances.
For more detail on the types of networks, see Chapter 3, “Networking Solutions for Power Virtual Server” on page 81.
The Power Systems Virtual Server service is available from the Resource list in the Power
Systems Virtual Server user interface. The service can contain multiple Power Systems
Virtual Server instances. For example, you can have two Power Systems Virtual Server
services: one in Dallas, Texas, US, and another in Washington, DC, US. Each service can
contain multiple Power Systems Virtual Server instances.
Before you create your first Power Virtual Server instance, log in to IBM Cloud. See
Figure 1-6.
If you do not have an account, see Create an IBM Cloud account to register one.
Dashboards are customizable. You can create a dashboard that displays information that is
relevant to you.
Note: The dashboards you create can be scoped to specific resources. Also, you can
share the dashboards with users in your account, so you can group resources for particular
projects or teams.
You can securely authenticate users, control access to Power Virtual Server resources with
resource groups, and use access groups to allow access to specific resources for a set of
users. This service is based on an Identity and Access Management (IAM) mechanism, which
provides all user and resource management in the IBM Cloud.
For more information about IAM, see Identity and access management (IAM) services.
After you select Workspace for Power Systems Virtual Server, a new window opens.
After you click Create, the Resource List page is displayed, which contains a list of your
account resources. Use this page to view and manage your platform and infrastructure
resources in your IBM Cloud.
Another way to access the Resource List page is to click the Navigation menu in the
upper-left, and then click Resource List in the Dashboard menu. See Figure 1-13.
You can search for resources from anywhere in the IBM Cloud by entering the resource or tag
in the search field from the menu bar.
Each resource is displayed in its own row, with an Actions icon at the end of the row.
Click the Actions icon to start, stop, rename, or delete a resource.
Click the Create instance button to create a Power Virtual Server instance (VSI), for example,
an LPAR or VM, and then complete the required fields in the Virtual server instances section.
The total due per month is dynamically updated in the Summary panel based on your
selections, as shown in Figure 1-16. You can create a cost-effective Power Virtual Server
instance that satisfies your business needs.
Number of instances
Specify the number of instances that you want to create for the Power Virtual Server. If you
specify more than one instance, additional options are available, such as whether to host all
instances on the same server, and VM pinning. You can choose to soft pin or hard pin a VM to
the host where it is running. When you soft pin a VM for high availability, PowerVC
automatically migrates the VM back to the original host after the host returns to its operating
state. The hard pin option restricts the movement of the VM during remote restart, automated
remote restart, Dynamic Resource Optimizer, and live partition migration.
SSH key
Choose an existing SSH key or create one to connect to your Power Virtual Server securely.
Machine type
Specify the machine type. The machine type that you select determines the number of
maximum cores and maximum memory that is available.
Cores
There is a core-to-vCPU ratio of 1:1. For shared processors, fractions of cores round up to the
nearest whole number. For example, 1.25 cores equal 2 vCPUs.
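The rounding rule above can be sketched as a small helper. This is an illustration only; `cores_to_vcpus` is a hypothetical name, not a Power Virtual Server tool.

```shell
# Hypothetical helper illustrating the shared-processor rounding rule:
# fractional cores round up to the next whole vCPU; whole cores map 1:1.
cores_to_vcpus() {
  awk -v c="$1" 'BEGIN { v = int(c); if (c > v) v++; print v }'
}

cores_to_vcpus 1.25   # 1.25 cores -> 2 vCPUs
cores_to_vcpus 2      # 2 cores -> 2 vCPUs
```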
Memory
Boot image
When you click Boot image, you select boot images from a group of stock images or the list
of images in your catalog.
Attached volumes
You can either create a new data volume or attach an existing one that you defined in your
account.
Network interfaces
At least one private or public network is required. Network interfaces are created by adding a
public network, private network, or both. When you add an existing private network, you can
choose a specific IP address or have one auto-assigned.
Now that you have your VSI running, an Actions icon appears at the end of its row.
When you click the icon, a menu is displayed from which you can select
Immediately shutdown, Restart, Open console, or Delete for your instance, as shown in
Figure 1-18. If you click Open console, a new window opens with the login prompt for
your system.
The following chapters in this publication expand and show more details about managing the
Virtual Server instances and the associated resources.
IBM i clients seeking to leverage the benefits of IBM Power Virtual Server often ask: how do I
move my existing workloads to Power Virtual Server? The process typically involves saving
the system on premises, transferring the save to Power Virtual Server, and performing a
restore.
A well-planned migration strategy is crucial to the project's success. Since every customer
has unique business needs, the right strategy drives the migration project forward. This
chapter discusses the various tools and techniques for transferring IBM i workloads, including
the requirements, challenges, and advantages of each method.
When moving workloads to a new system, you must consider many issues based on business
rules, technical resources, and the environment. In addition, IBM Cloud Power Virtual Servers
have specific requirements and challenges that you must address because you are using a
self-service capability within a multi-tenant cloud environment.
You must consider the following differences between the Power Virtual Server instance and
an IBM i LPAR running on on-premises hardware:
We have no access to the hardware layer, so no HMC or direct storage access is
provided, and hardware replication cannot be used.
We have no access to the data center, so a physical tape cannot be sent to perform a
restore.
No physical tape is available for backup or restore operations.
Your system is in a remote facility, and communications can have some latency, depending
on your location and link type.
Most IBM i backup devices use SAS or Fibre Channel connections with proven reliability
and fast transfer rates, but network backups are slow, and even backups to disk can seem
slow compared to physical or virtual tape.
Fewer third-party backup software solutions are available for IBM i than in other
environments, and only a small portion of them use the network to transfer data.
Most IBM i shops rely on physical media for backup and restore, and their data recovery
strategies include this media as a main option, even when a replication mechanism is in
place. In the Power Virtual Server instance, the use of physical media is not an option and the
backup and restore process must be modified to use a supported option.
When data must be moved without disrupting the business operation, replication methods or
a migration software setup can be used to reduce the backup window. PowerHA and
third-party products provide solutions to transfer information gradually and synchronize
systems while they keep running. Figure 2-1 on page 23 illustrates how data mirroring can be
used in a migration.
Many resources are available that can be used to migrate data from on-premises systems to
IBM Power Systems Virtual Servers. Therefore, planning is crucial to the migration process.
Any migration from an on-premises LPAR to Power Virtual Server requires some amount of
downtime for your users, so choosing the appropriate migration scenario is important to
match the length of the downtime to your business requirements. One of the major
considerations in planning your migration is the amount of data that must be migrated and
the amount of time you have to do the migration: larger LPARs with a large amount of data
require more planning than smaller LPARs. For non-critical LPARs that can tolerate a longer
outage for the migration, many options are available. For critical LPARs that can tolerate only
a short outage, you need to choose a migration option that uses data replication and
synchronization; consider an IBM i migration tool such as IBM PowerHA, MIMIX by Assure,
or IBM i Migrate While Active to meet your business requirements.
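To make the data-volume consideration concrete, a back-of-the-envelope estimate of raw network transfer time can guide the choice between a save-and-restore migration and a replication-based one. The helper below is hypothetical and illustrative only; real throughput varies with latency, protocol overhead, and link contention.

```shell
# Rough transfer-time estimate: hours = TB * 8,000,000 Mbit / (Mbps * 3600 s).
# Illustrative only; real throughput is lower due to latency and overhead.
transfer_hours() {
  awk -v tb="$1" -v mbps="$2" 'BEGIN { printf "%.1f\n", tb * 8e6 / (mbps * 3600) }'
}

transfer_hours 1 100    # 1 TB over a 100 Mbps link: about 22.2 hours
transfer_hours 10 1000  # 10 TB over a 1 Gbps link: about 22.2 hours
```

A raw transfer measured in days is usually a sign that a replication tool or a Mass Data Migration device fits better than a network copy.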
If you would like assistance with planning and implementing a migration of your IBM i
workloads to Power Virtual Server, IBM Expert Labs has a standard offering, IBM Expert
Labs Migrate Services for IBM Power, to assist you. Contact IBM Expert Labs or your
IBM sales representative for more information.
This chapter describes some common tools and scenarios that can be used to migrate your
existing IBM i environment to Power Virtual Server. We discuss the following:
“Migration with partial saves and restores”
“Backup and Restore migration”
“Migrating IBM i to Power Virtual Server using PowerVC”
“Other migration options in IBM Cloud catalog”
“Migrating IBM i by using IBM i Migrate While Active”
Chapter 2. Migrating your IBM i LPAR to IBM Power Systems Virtual Server 23
2.2 Migration with partial saves and restores
Migration with partial saves and restores consists of using multiple save files, ISO images
with virtual optical media, or virtual tape images with which a system can be backed up in
parts. These files are then transferred to the new Power Virtual Server instance by using FTP,
SFTP, SCP, or any other method.
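The transfer step can be sketched as follows. This is a hedged illustration: the host name, user, and paths are placeholders, and the save file here is a stand-in; compressing before transfer is optional but reduces time on slow links.

```shell
# Hypothetical sketch: compress a save-file image before transfer, verify the
# archive, then send it to the target instance (host name is a placeholder).
SAVF=/tmp/MIGLIB1.savf
printf 'placeholder save-file data' > "$SAVF"   # stands in for a real save file

gzip -c "$SAVF" > "$SAVF.gz"   # compress, keeping the original
gzip -t "$SAVF.gz"             # integrity check before the transfer

# scp "$SAVF.gz" qsecofr@target-vsi:/tmp/   # or use SFTP/FTP, as noted above
```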
The lack of HMC access forces users to use restart mechanisms and requires that an empty
VM instance is created with a working IBM i operating system. Partial restores can be a good
option to migrate LPARs when the operating system images that are provided on IBM Cloud
fit a user’s needs. This scenario is easy to deploy because the target system is already
running.
3. Communications must be set up between the local network and IBM Cloud so that the
new system can be reached from our source data center.
4. Some changes should be made on the target server by using the console to allow object
restore, as shown in Example 2-1.
Note: Some stock images require that you enable System Value changes from DST/SST.
5. Start *FTP or *SSHD servers to allow file transfer as shown in Example 2-2.
6. On the source system, enough disk space is needed so that the system can be backed up
to disk by using Save Files or an IMAGE CATALOG (optical or tape). In this example, Save
Files are used. A control language (CL) program with all the commands is recommended,
as shown in Example 2-3.
In this step, all libraries to be migrated are saved, as shown in Example 2-4.
Example 2-4 Libraries to be migrated
CRTSAVF BKPIBMI/SAVDLO1
SAVDLO DLO(*ALL) DEV(*SAVF) SAVF(BKPIBMI/SAVDLO1)
CRTSAVF BKPIBMI/SAVIFS1
SAV DEV('/QSYS.LIB/BKPIBMI.LIB/SAVIFS1.FILE') OBJ(('/home/'))
SAVACT(*YES) SAVACTOPT(*ALL)
Check and validate the new system. If the system passes validation, the new server can be
published.
Unfortunately, physical tapes cannot be used with cloud environments; therefore, alternative
methods must be used, such as saving data to disk and using the network for transfer. An
additional option is FalconStor StorSafe VTL, a virtual tape library solution available in the
IBM Cloud Catalog for Power Virtual Server environments.
Using FalconStor StorSafe VTL as a migration option involves installing a FalconStor
StorSafe VTL in the customer location and connecting it to the FalconStor StorSafe VTL
instance in Power Virtual Server. Data is backed up to the FalconStor StorSafe VTL in the
customer location and replicated to Power Virtual Server, where it can be restored to the
Power Virtual Server instance. This process is described in section 4.4.3, “IBM i
Migration Example using FalconStor: Details for 7 Key Steps” on page 169.
Important: You can save your data to physical tape and convert the media to Image
Catalog virtual tape cartridges on a different system that has enough space and
adequate communications to reach Cloud Object Storage or your target system.
IBM Aspera® software improves data transfers to IBM Cloud Object Storage by uploading
backups with multiple streams. You can run the IBM Aspera client from a PC or a separate
server to make file transfers to IBM Cloud faster.
When the amount of data to be transferred makes it impractical to use the network, IBM
Cloud can send you a Mass Data Migration device. This portable storage device can hold
up to 120 TB, and its data can then be uploaded directly inside the IBM Cloud network,
which avoids direct file transfers from our data centers.
Once your system is up and running, you will need to plan a data backup solution. See
Chapter 4, “Back up for IBM i on IBM Power Systems Virtual Server” on page 113 for more
info on solutions to back up data in Power Virtual Server instances.
This process is the most traditional migration method. Most customers are confident when
performing a full system backup by using physical tape devices or Virtual Tape Libraries (VTL)
because this process is part of the Disaster Recovery procedures.
IBM i can be installed from this special backup, starting with the System Licensed Internal
Code (SLIC) and then the operating system. However, because cloud environments cannot
use the physical media that is required for a “D” IPL, a different procedure must be used.
On IBM Power Virtual Servers, the VM cannot be started by using network installation
parameters, and license keys and IP addresses are needed to identify the correct Ethernet
adapter to use when restoring the system. Therefore, a starting point must be created by
using a new empty instance that uses the target operating system.
At the time of this writing, the following operating systems are supported in IBM Cloud Power
Virtual Server service:
V7R1 with extended support on IBM Power9 servers
V7R2 with extended support on IBM Power9 servers
V7R3 with extended support
V7R4 with regular support
V7R5 with regular support
When a restore must be done from this virtual media, special requirements that are driven by
cloud characteristics apply. The new iSCSI VTL for IBM i, or an IBM i NFS server combined
with the TFTP service, can be used so that the system can use BOOTP and load the
SLIC that is necessary to IPL.
In this section, a temporary IBM i VM with NFS and TFTP is used to install the system from
the full system backup media. This special restore process works with virtual optical media,
and the backups must be done on this media.
Some specific requirements must be met to perform this migration and some of them need
time to be provisioned. Therefore, good planning is essential for success.
Tip: When a disk is not available on the source system, use an NFS server or a Mass Data
Migration device for backup purposes.
Setting up IBM i network installation server with NFS Server and NFS
Client
Complete the following steps to set up IBM i network installation server with NFS server and
NFS client:
1. Provision an IBM i VSI in the target Power Systems Virtual Server location to be an NFS
Server.
The IBM i NFS server must be at a minimum Version 7.2 with current PTFs.
2. To use virtual optical images through an NFS server, the IBM i NFS client must meet the
following requirements:
– The IBM i includes a Version 4 Internet Protocol (IP) address.
– During set-up, the shared NFS server directory is mounted over a directory on the IBM
i client.
– An IBM i service tools server or a LAN console connection is configured by using a
Version 4 IP address.
– A 632B-003 virtual optical device is created that uses the IP address of the NFS server.
Note: The IBM i IP address and the IBM i service tools server (LAN console connection) IP
address can be the same.
Setup process
Figure 2-3 shows how data flows in this type of migration process.
As shown in Figure 2-3, different mechanisms are based on resources and the migration
strategy that is used.
Backup process
It is assumed that not enough space is available on the IBM i source server to be migrated.
When the free disk space is less than 50%, a full system backup can be performed to an NFS
disk.
2. Set up an NFS server on the Linux system. Our source system is IBMi04 with an IP
address of 192.168.50.244, and the Linux server is 192.168.50.22, as shown in Example 2-6.
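As a hedged sketch of the NFS server setup on the Linux side (the directory and network are taken from the addresses above; the export options are assumptions to validate against your distribution's documentation):

```shell
# Hypothetical /etc/exports entry on the Linux server (192.168.50.22) that
# shares /NFS read-write with the 192.168.50.0/24 source network.
# Append a line like this to /etc/exports:
#
#   /NFS 192.168.50.0/24(rw,sync,no_root_squash)
#
# Then re-export and verify (run as root):
#   exportfs -ra
#   showmount -e localhost
```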
4. At IBMi04, set the Service Tools Server from SST or DST. For more information, see the
following resources:
– Configuring the service tools server using SST
– Configure a service tools server for DST for the virtual optical device to use
Consider the following points:
– The Line Description that is associated with the IP address must be in the same network
as the Linux NFS server. To find it, run CFGTCP, select Option 1, and press F11.
– The display shows our network interface and the Line Description name (ETH01 in this
example). Display its details with the following command:
DSPLIND ETH01
5. Start Service Tools (SST):
– STRSST
6. Select Option 8, Work with Service Tools Server Security and Devices (see
Figure 2-4).
Figure 2-5 SST Work with Service Tools Server Security and Devices
9. As shown in Figure 2-7 on page 33, set the host name, address (which is different from
the main interface), gateway address, and subnet mask.
10.Store the LAN adapter by pressing F7, deactivate the LAN adapter by pressing F13, and
then activate the LAN adapter by pressing F14.
11.Press F3 to exit Service Tools.
Note: It is important to know the amount of data to be backed up so that enough virtual
media can be created. It is not possible to create the images during the backup process.
cd /NFS/IBMi04
dd if=/dev/zero of=IMAGE01.ISO bs=1M count=10240
dd if=/dev/zero of=IMAGE02.ISO bs=1M count=10240
dd if=/dev/zero of=IMAGE03.ISO bs=1M count=10240
dd if=/dev/zero of=IMAGE04.ISO bs=1M count=10240
# Now we need to create the VOLUME_LIST file entries to tell our device how to change media when the mounted ISO gets full.
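The comment in the listing above can be made concrete. As a hedged sketch (a scratch directory stands in for /NFS/IBMi04, and the image names match the dd commands), the VOLUME_LIST file simply lists the image files, one per line, in mount order:

```shell
# The 632B-003 virtual optical device reads a VOLUME_LIST file from the NFS
# directory to decide which image to mount next when the current ISO fills up.
# A scratch directory stands in here for /NFS/IBMi04 on the NFS server.
DIR=/tmp/NFS/IBMi04
mkdir -p "$DIR"
cd "$DIR"
printf '%s\n' IMAGE01.ISO IMAGE02.ISO IMAGE03.ISO IMAGE04.ISO > VOLUME_LIST
cat VOLUME_LIST
```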
15.Create a device description that is linked to the Service Tools Server LAN Adapter, as
shown in Example 2-9.
Figure 2-8 Select the resource type: Lite or Standard and create the resource
Figure 2-10 Select and Create the bucket
22.Take note of the endpoint URI. When connecting by using VPN, select Private. When
connecting over the internet, use Public (see Figure 2-14, Figure 2-15, Figure 2-16 on
page 38, and Figure 2-17 on page 39).
25.After the process completes, verify whether all files were uploaded to the Cloud Object
Storage portal, as shown in Figure 2-18.
Figure 2-18 View objects in bucket
26.In the command line, use awscli and the ls command, as shown in Example 2-12.
2. Install some tools by using the ACS or command line, as shown in Example 2-14:
pigz gzip gunzip python3 python3-pip
PATH=$PATH:/QOpenSys/pkgs/bin
export PATH
# Paste our Secret Key, Access Key and select the region.
aws --endpoint-url=https://s3.us-south.cloud-object-storage.appdomain.cloud s3 cp
s3://ibmi-bkp01/IMAGE01.ISO.gz /NFS/RESTORE21
aws --endpoint-url=https://s3.us-south.cloud-object-storage.appdomain.cloud s3 cp
s3://ibmi-bkp01/IMAGE02.ISO.gz /NFS/RESTORE21
aws --endpoint-url=https://s3.us-south.cloud-object-storage.appdomain.cloud s3 cp
s3://ibmi-bkp01/IMAGE03.ISO.gz /NFS/RESTORE21
aws --endpoint-url=https://s3.us-south.cloud-object-storage.appdomain.cloud s3 cp
s3://ibmi-bkp01/IMAGE04.ISO.gz /NFS/RESTORE21
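After the downloads complete, the images must be decompressed before they can be added to the image catalog. A minimal sketch follows; pigz (installed in step 2) decompresses with multiple cores, and gzip -d behaves the same way if pigz is absent. A scratch directory stands in for /NFS/RESTORE21, and the .gz file created here is a placeholder for a real download.

```shell
# Decompress the downloaded images in place (scratch dir stands in for
# /NFS/RESTORE21; the placeholder .gz stands in for a downloaded image).
DIR=/tmp/RESTORE21
mkdir -p "$DIR"
printf 'iso payload' > "$DIR/IMAGE01.ISO"
gzip -f "$DIR/IMAGE01.ISO"        # leaves only IMAGE01.ISO.gz, like a download

for f in "$DIR"/IMAGE0*.ISO.gz; do
  gzip -d "$f"                    # or: pigz -d "$f" for multi-core speed
done
ls "$DIR"
```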
5. Press F3. Complete the following steps in QCMD:
a. Create an Image Catalog and add our virtual media:
CRTIMGCLG IMGCLG(RESTORE21) DIR('/NFS/RESTORE21') CRTDIR(*NO)
ADDVRTVOL(*DIR) IMGTYPE(*ISO)
b. Create and make available a Virtual Optical Device:
CRTDEVOPT DEVD(NFSOPT) RSRCNAME(*VRT) LCLINTNETA(*N)
VRYCFG CFGOBJ(NFSOPT) CFGTYPE(*DEV) STATUS(*ON)
c. Load the Image Catalog with the device:
LODIMGCLG IMGCLG(RESTORE21) DEV(NFSOPT)
d. Verify the IMAGE Catalog (some files are extracted for the remote IPL process):
VFYIMGCLG IMGCLG(RESTORE21) TYPE(*LIC) SORT(*YES) NFSSHR(*YES)
e. Change authorities:
CHGAUT OBJ('/NFS/RESTORE21') USER(*PUBLIC) DTAAUT(*RWX) SUBTREE(*ALL)
CHGAUT OBJ('/NFS/RESTORE21') USER(QTFTP) DTAAUT(*RX) SUBTREE(*ALL)
f. Export NFS Share as read-only:
CHGNFSEXP OPTIONS('-i -o ro') DIR('/NFS/RESTORE21')
g. Start NFS servers:
STRNFSSVR *ALL
h. Change the TFTP server attributes:
CHGTFTPA AUTOSTART(*YES) ALTSRCDIR('/NFS/RESTORE21/BOOTP')
i. Restart the TFTP server:
ENDTCPSVR SERVER(*TFTP)
STRTCPSVR SERVER(*TFTP)
6. Complete the following steps to set up the environment in the TARGET instance:
a. Configure Service Tools Server LAN Adapter by using a different IP address on the
same private network segment and adapter (IP address is 192.168.80.21). Use the
guidelines that were described in the previous backup process.
b. Add the NFS instance to the host table:
ADDTCPHTE INTNETADR('192.168.80.12') HOSTNAME((IBMiNFS))
c. Mount NFS SHARE on the target instance to confirm whether the files are visible:
MOUNT TYPE(*NFS) MFS('192.168.80.12:/NFS') MNTOVRDIR('/NFS')
WRKLNK '/NFS/RESTORE21/*'
d. Create the virtual optical device:
CRTDEVOPT DEVD(NFSRESTORE) RSRCNAME(*VRT) LCLINTNETA(*SRVLAN)
RMTINTNETA('192.168.80.12') NETIMGDIR('/NFS/RESTORE21')
VRYCFG CFGOBJ(NFSRESTORE) CFGTYPE(*DEV) STATUS(*ON)
e. IPL the instance so that the changes can take effect.
Figure 2-20 Display line description details and get the Resource Name
Figure 2-21 Display Resource Detail by using WRKHDWRSC *CMN option 7
Figure 2-23 Confirm your NFS instance values and continue with F10
iv. Confirm the installation of the operating system, and then press Enter.
v. Select a language group from the Select a Language Group menu (in this example,
2924 is selected).
vi. If disks must be added, use the Add All Disk Units to the System menu. In our
example, option 1 is selected: Keep the current disk configuration.
vii. The sign-on window opens. Log in by using the QSECOFR user profile. The IPL
options are displayed. Enter your choice (see Figure 2-24).
Now, the IBM i TARGET system is running and the validation process can begin.
Alert: If a failure occurs during the LIC or operating system installation process, delete the
instance and start again by installing the LIC and operating system.
2.3.3 Full-system backup from IBM i Source by using BRMS and IBM Cloud
Storage and Restore on IBM Power Systems Virtual Server
This section describes how to perform a save and transfer of an IBM i workload on-premises
to Power Systems Virtual Server by using Backup, Recovery, and Media Services (BRMS)
with the IBM Cloud Storage licensed program product (LPP) for IBM i.
Figure 2-25 Migrating IBM i to the Cloud using IBM Cloud Object Storage
Figure 2-26 shows the steps to retrieve your image by using IBM Cloud Storage Solutions
for i.
Figure 2-26 Get your BRMS Image Catalog on IBM Cloud Object Storage
IBM i software requirements
For cloud-init support on IBM i, the cloud-init requirement is installed post deployment. Any
images that are imported to the cloud must have cloud-init installed.
If you are bringing your own IBM i custom SAVSYS, you must install the SAVSYS PTFs
and the software that is required for cloud-init. For more information, see this IBM Support
web page.
Note: For IBM i 7.2 and 7.3 on POWER8 and later versions, cloud-init is enabled
automatically.
For IBM i 7.2 and 7.3 or other IBM i versions that are running on POWER7 or later Power
Systems servers, the process that is described in “Installing cloud-init on IBM i” on page 50
must be completed to run cloud-init on IBM i.
After you enable cloud-init on the IBM i server, power off the system; then, it is ready to
save and migrate.
2. Click My Entitled Software -> IBM i evaluation (see Figure 2-28). For more information, see
this web page.
3. Click IBM i product 5770-SS1. Then, click Continue, as shown in Figure 2-29.
5. Agree to the evaluation terms. Then, click Continue (see Figure 2-31 on page 53).
6. Select new download (see Figure 2-32).
7. If you chose HTTP, click the link to download (see Figure 2-33).
Figure 2-33 Entitled software downloads - Select language and press Continue
In our system, we extracted the compressed file content and renamed it to 5733ICC.udf.
3. Create an image catalog for the licensed programs that you want to install. The Create
Image Catalog (CRTIMGCLG) command associates an image catalog with a target directory
where the optical image files are loaded, as shown in Example 2-20.
4. If you downloaded your images into an image catalog directory, two quick methods are
available to add all of the images at once into your image catalog:
– Use the CRTIMGCLG command, as shown in Example 2-21.
– Use the QVOIFIMG API when the image catalog already exists as shown in
Example 2-22 on page 55.
5. Add an image catalog entry for each physical media or optical image file. You must repeat
this step for each volume of media. Add the physical media or optical image files in the
same order as though you were going to install from them. Start with the first media in the
list and continue until all of the media are loaded.
You can add the entries from an optical device or from an optical image file. Select one of
the following methods:
– From an image file
This method is the fastest way. To add an image entry to an image catalog from an
Integrated File System file that is in the image catalog directory, enter the command as
shown in Example 2-23.
Note: If you need to add multiple images, see the CRTIMGCLG command and the QVOIFIMG
API to add all of the images at the same time.
– To add an image catalog entry to an image catalog from an integrated file system
optical image file that is from a directory other than the image catalog directory, enter
the commands as shown in Example 2-24.
Note: To generate a name for the TOFILE parameter and a text description from the
media, specify *GEN.
Example 2-26 Load the Virtual Optical Device to the Image Catalog
LODIMGCLG IMGCLG(catalog-name) DEV(virtual-device-name) OPTION(*LOAD)
Note: If you are preselecting the licensed programs to install, do not perform this step now.
You are directed to perform this step later.
If you are preparing for an upgrade, you must verify that the required media for an upgrade
exists and is sorted in the correct sequence. You also must verify that your software
agreements were accepted, and that reserved storage is available for the Licensed
Internal Code.
Enter the command as shown in Example 2-27.
To verify that the images were added, another method can be used, as shown in Example 2-28.
Then, press F7 to prompt for the VFYIMGCLG command. Enter *UPGRADE for the type and
*YES for the sort field.
To see the order of the images, use the Work with Image Catalog Entries (WRKIMGCLGE)
command:
WRKIMGCLGE IMGCLG(catalog-name)
After completing these steps, your image catalog is ready for use.
Full-system backups from IBM i source by using BRMS and IBM Cloud
Storage
The following save data must be restored from physical media before BRMS can begin
restoring save data directly from the cloud:
SAVSYS to install the operating system.
IBM Backup, Recovery, and Media Services for i and BRMS save information.
IBM TCP/IP Connectivity for i and configuration information to allow communications with
cloud storage providers.
IBM Cloud Storage Solutions for i and configuration information to establish connections
with cloud storage providers.
BRMS provides specific control groups that can be used to automatically save this data to
media in the cloud and the cloud media can be used to create physical media. The control
groups create cloud media that is formatted so it can be downloaded and burned directly
to physical optical media. All remaining data on the system can be backed up to media in
the cloud and restored directly from the cloud without a need to create physical media.
Control group QCLDBIPLnn can be used to perform full backups of all data that must be
recovered from physical media. Likewise, QCLDBGRPnn can be used to perform
cumulative incremental saves of the data that must be recovered from physical media.
Note: The Journaled objects control group field must be changed to *YES for a
QCLDBGRPnn control group before the control group is used to perform an incremental
backup.
Run the WRKCTLGBRM command and change the Journaled objects field by specifying option
8=Change attributes for QCLDBGRPnn.
Control group QCLDBSYSnn can be used to perform a full backup of the data that can be
restored directly from the cloud. Likewise, control group QCLDBUSRnn can be used to
perform cumulative incremental backups of the data that can be restored directly from the
cloud.
It is critical to run the cloud control groups in the correct order; otherwise, all necessary media
information is not available to perform a recovery.
Complete the following steps to perform a full save of your entire system:
Note: Processing time for each backup depends on the size of your system processor,
device capability, and the amount of data that you want to save.
You cannot perform other activities during these backups because your system is in a
restricted state.
3. Display QSYSOPR MSGQ, run the DSPMSG QSECOFR command, and look for the following
messages:
– System ended to restricted condition.
– A request to end TCP/IP has completed.
4. Change subsystems to process for Control Group QCLDBSYS01:
a. Run the WRKCTLGBRM command.
b. Find QCLDBSYS01 (see Figure 2-34).
c. Select Option 9=Subsystems to process.
d. Change Restart to *NO for Seq 10 Subsystem *ALL.
Note: If you see a break message during the backup, press Enter to return to the
window in which you entered the STRBKUBRM command so that you can review the backup
progress.
5. Run the first backup from the console, as shown in Figure 2-33 on page 54.
6. Check the backup for errors. It is normal to see some errors, such as the following
examples:
– Objects not saved (some objects are not required for the recovery).
– Media not transferred (complete this step manually after the second backup).
7. Check the subsystems after the backup completes. You see only subsystem QCTL in a
status of RSTD. If not, end all subsystems again, as shown in Example 2-31.
Example 2-31 End all Subsystems again when QCTL is not the only one running
ENDSBS SBS(*ALL) DELAY(120)
Note: Control groups that have a QCLD prefix enable BRMS to automatically create and
transfer media to the cloud. Control groups QCLDBIPLxx or QCLDBGRPxx must be used to
burn DVDs for recovery. If the backup uses media class QCLDVRTOPT, the BRMS default is to
create 10 virtual volumes to back up to optical.
– Because optical devices do not have an exit program interface to handle media
switching while a backup is running, the backup command must provide enough
volumes to successfully hold the backup data. A symptom that indicates the backup
does not fit on the initial number of volumes that are provided is that the backup fails
with message BRM4301 Volume list exhausted.
– When control groups QCLDBIPLxx or QCLDBGRPxx are used, the required volume size is
4.7 GB to burn DVDs for manual recovery. The control group data size cannot exceed
350 GB because of the 75-volume restriction.
9. Set the number of optical volumes for automatic transfers:
– The number of virtual optical volumes to create can be specified by running the
command that is shown in Example 2-32.
– The number of volumes to create can be reset to the default value of 10 by running the
command that is shown in Example 2-34.
– Run the second backup from the console, as shown in Example 2-35.
– Check the backup for errors. It is normal to see some errors, such as the following
examples:
• Objects not saved (some objects are not required for the recovery).
• Media not transferred (complete manually after the second backup).
– Identify the volumes that are used for backups QCLDBSYS01 and QCLDBIPL001 and
transfer to IBM Cloud Object Storage.
– Check the status of the transfer by running the WRKSTSICC STATUS(*ALL) command.
– If the status is Failed, this status is normal. The volumes are transferred in the next
step.
– Identify which volumes were used for the backups, as shown in Example 2-36.
Note: It is important to review the recovery report to ensure that it is complete. Any
media that is produced during the backup process but not successfully transferred to the
cloud is not included in the recovery report.
The CTLGRP and PERIOD parameters that are specified with the STRRCYBRM command help
identify objects that are saved to volumes that were not transferred to the cloud.
If objects are on volumes that were not included in the recovery report, they are listed in a
Missing Objects Attention section near the top of the report.
After the recovery report is verified, store the report in a safe location so that it can be
referred to during a recovery.
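The CTLGRP and PERIOD usage that is described above can be sketched with a command like the following one. This is an illustrative example only: the control group sequence numbers and the date values depend on your BRMS configuration and job date format.

```
STRRCYBRM OPTION(*CTLGRP) ACTION(*REPORT)
          CTLGRP((QCLDBSYS01 1) (QCLDBIPL001 2))
          PERIOD((*AVAIL *BEGIN) (*AVAIL *END))
```

Review the generated recovery report for a Missing Objects Attention section before relying on it for a recovery.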
13. Daily incremental backups can be run Monday - Saturday by using the control groups, as
shown in Example 2-40.
It also provides numerous operational benefits, such as one-click system evacuation for
simplified server maintenance, dynamic resource optimization (DRO) to balance server
usage during peak times, and automated VM restart to recover from failures.
Users can easily import and export VM images (in the standard Open Virtual Appliance
(.OVA) format) from IBM PowerVC and upload them into IBM Cloud for easy back-and-forth
image mobility.
For more information about migrating an IBM i VM from IBM Power Virtualization Center to
IBM Power Systems Virtual Server, see Red Hat OpenShift V4.X and IBM Cloud Pak on IBM
Power Systems Volume 2, SG24-8486.
Infrastructure management delivers the insight, control, and automation that enterprises
need to address the challenges of managing virtual environments, which are far more
complex than physical environments.
The use of IBM Cloud Pak for Multicloud Management can be an end-to-end solution to
manage IBM i workloads on- and off-premises. With it, you can migrate and deploy IBM i
images, as shown in Figure 2-35 on page 62.
Figure 2-35 Export IBM i PowerVC images to IBM Power Virtual Server
IBM i images can be deployed on- or off-premises by using Infrastructure as Code (IaC) with
HashiCorp Terraform. You also can use Ansible for i for deployment and configuration
management. This approach can be used to deploy an IBM i image with Terraform and to
manage its configuration with an Ansible controller on IBM i as an in-house setup.
For an enterprise solution, the use of IBM Cloud Pak for Multicloud Management that is
integrated with Terraform, Red Hat Ansible Automation Platform, and Ansible Tower
(compatible with IBM i systems) can be a solution for IBM i workload management.
This product allows you to migrate an IBM i workload without the need for tapes or additional
disk or storage space. A TCP/IP network connection is all that is required, and the backup and
restore processes run simultaneously. The entire procedure is designed for the user's
convenience and to ensure a smooth and efficient migration. The migration can be run
unattended; when it finishes, the user receives an email notification.
This product is useful for off-line migrations, that is, migrations in which the system that is
being migrated can be placed in a restricted state for the duration of the migration.
The basic offering includes only the license; no services are included. It is a one-time-charge
license for the migration tool in self-service mode (that is, without TSP service hours). The
license is valid for 30 days and for migrating one (1) IBM i partition.
More information about this product offering can be found in this IBM Cloud Catalog link.
Additionally, for cases where the customer wants the off-line migration with Bus4i System
Copy-Migrate 23 to be run by the experts of T.S.P., an offering is available that
complements the previous one by adding T.S.P. Professional Services to provide the
assistance that is required for the migration.
This is a good option for companies that seek expert assistance with the migration of their
IBM i systems because the professional services of T.S.P. offer a comprehensive and practical
solution from experts with many years of experience, which ensures a smooth and efficient
migration. More information about this service offering can be found at this IBM Cloud
Catalog link.
This professional services offering uses a proven methodology that combines advanced
tools and expertise to migrate your IBM i system to an IBM i system in IBM Cloud (or
anywhere else) in a non-disruptive manner.
The team of experts from T.S.P. works in close collaboration with the customer to understand
their specific requirements and develop a personalized migration plan. This plan takes into
account the configuration, applications, and workload of your system to ensure a smooth
transition.
Using specialized tools and techniques, data and applications are migrated from the source
IBM i system to the new target system. This process is designed to minimize interruption
and guarantee the integrity of the data. Finally, after rigorous synchronization validation
tests, a planned switchover is run to enable the production system in its new
environment, minimizing any potential impact.
This offering applies to online migrations, that is, migrations in which users continue working
on the system for the duration of the migration and the only interruption is the planned
switchover to the system in IBM Cloud after both copies are synchronized.
This offering includes the license and the required services in a single package. The license is
valid for 30 days and for migrating one (1) IBM i partition.
More information about this service offering can be found at this IBM Cloud Catalog link.
Rocket iCluster
Rocket Software's iCluster product is also available in the IBM Cloud catalog and can be used
to migrate IBM i workloads to the IBM Cloud. Rocket iCluster high availability/disaster
recovery (HA/DR) solutions ensure uninterrupted operation of IBM i applications, providing
continuous access by monitoring, identifying and self-correcting replication issues.
Using Rocket iCluster, a full replication from the on-premises system to a target system in IBM
Cloud can be established so that both systems are synchronized without affecting the
availability of the source system. After both systems are synchronized, the migration to IBM
Cloud can be run in minutes through a planned switchover in a simple and efficient
way.
More information about this service offering can be found at this IBM Cloud Catalog link.
Wanclouds can migrate workloads to:
IBM Cloud Power Virtual Server (AIX, IBM i)
IBM Cloud - IBM Kubernetes Service (IKS)
IBM Cloud Satellite®
VMware on IBM Cloud Class
Wanclouds proudly presents IBM Power Virtual Server Migration as a Service, designed to
facilitate effortless migration of your workloads to IBM Power Virtual Server cloud
infrastructure.
With Power Virtual Server Migration as a Service, the Wanclouds team can craft a customized
migration strategy that is based on each customer's unique requirements, ensuring minimal
disruption and providing end-to-end migration support, from initial assessment and planning
to execution and post-migration support.
Priority is placed on mitigating risk by conducting thorough assessments of
your existing infrastructure and planning to ensure a smooth transition. Wanclouds also
provides continuous monitoring and proactive support to address any issues promptly,
ensuring the smooth operation of your workloads.
More information about this service offering can be found at this IBM Cloud Catalog link.
For more detail about VTL, see 4.4, “FalconStor overview” on page 161.
TEL-Cloud provides professional services for all aspects of IBM Cloud adoption, specializing
in cloud-native development, Hyper Protect Crypto Services (HPCS), IBM Kubernetes Service
(IKS), Red Hat OpenShift Kubernetes Service (ROKS), Satellite™, Security and Compliance
Center (SCC), VMware, and associated core capabilities in IBM Cloud for networking,
firewalls, security, backup, and disaster recovery.
TEL-Cloud provides professional services for IBM Cloud in a simple offering framework of
Architect and Build services, each available in fixed increments via this Tile. In addition,
TEL-Cloud offers a monthly subscription called Expertise Connect for clients who want
ongoing IBM Cloud subject matter expertise to plan and guide their IBM Cloud adoption.
FNTS Cloud Migration Services by First National Technology Solutions
Partnering with IBM, First National Technology Solutions (FNTS) offers custom solutions that
are based on business needs for migrating IBM i, AIX, Linux, and Windows workloads to IBM
Cloud. FNTS works with you to build a migration plan and estimate the service credits that are
required to deliver the project and ongoing services.
IBM i Migrate While Active orchestrates the migration of an IBM i logical partition (LPAR) to a
different location. That location can be within the same server, data center, or remote location
such as IBM Power Virtual Server on IBM Cloud.
The process starts with taking a source LPAR at IBM i 7.4, IBM i 7.5, or higher, making a copy
of that LPAR, and migrating data from the source to the copy. While the source node is still
active, changed data will be continuously migrated to the copy node. When the two nodes are
in sync, a cutover to the new node can be performed, which completes the migration.
Migrate While Active is optimized for migrations where a physical tape cannot be used for a
traditional D-mode IPL and restore. It handles the save to optical media so that a D-mode
network install can be performed. Figure 2-37 provides a high-level view of the process.
Traditional Db2 Mirror provides continuous availability with a two-node cluster, using
RDMA-capable hardware adapters to synchronize Db2 for i data, replication-eligible object
types, and security attributes in real-time. The two IBM i nodes can operate at different
maintenance levels, with varying PTF levels and operating system levels. Migrate While
Active creates a copy node that matches the source node's operating system level, PTF
levels, and capabilities, as captured during the save process.
Instead of relying on RDMA-capable hardware adapters, Migrate While Active uses TCP/IP as
the communication protocol between the source and copy nodes. Another important
difference is that traditional Db2 Mirror uses synchronous replication, whereas Migrate While
Active uses asynchronous replication. At any point during the migration process, the Migrate
While Active user can examine how much migration work remains before the copy node is in
sync with the source node.
2. Migrate an on-premises IBM i from a local data center to any IBM i cloud provider. The
production node is moving to a cloud provider.
3. Migrate an on-premises IBM i to a newer and more advanced Power system. The
production node is moving to new hardware within the same data center.
4. Migrate an IBM i for the purpose of creating a clone. The production node remains intact
and is merely used as the master source of the copy node.
These and many other use cases are made possible with Migrate While Active.
A Migrate While Active migration consists of three distinct stages. First, we set up and migrate
the system. Next, we synchronize the data when the migration is nearly complete. Finally, we
initiate a system cutover once the data is synchronized and validated, as illustrated in
Figure 2-39.
For Db2 Mirror replication-eligible objects, Migrate While Active tracks every action that
resulted in changes to replication-eligible objects.
The resynchronization processing is focused on replicating the tracked user actions to the
copy node.
2. QMRDBNSYNC - Handles tracked non-Db2 Mirror replication-eligible objects within
libraries. Multiple changes to the same object typically yield a single entry on the
LIBRARY_MIGRATION_LIST view. The LIBRARY_MIGRATION_LIST view returns a list
of all eligible library-based objects that require migration to the copy node by Migrate
While Active.
For non-Db2 Mirror replication-eligible objects, tracking is focused on the library name, object
name, and object type. Actions that cause a change to an object result in a single entry on the
LIBRARY_MIGRATION_LIST.
The resynchronization processing saves and restores the object from the source node to the
copy node.
3. QMRDBISYNC - Handles tracked integrated file system objects. Multiple changes to the
same object typically yield a single entry on the IFS_MIGRATION_LIST view. The
IFS_MIGRATION_LIST view returns a list of all the changed integrated file system objects
that need to be migrated to the copy node.
For integrated file system objects, tracking is focused on the path name and object type.
Actions that cause a change to a specific integrated file system object result in a single entry
on the IFS_MIGRATION_LIST.
The resynchronization processing saves and restores the object from the source node to the
copy node.
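The migration-list views that are described above can be queried to gauge the remaining resynchronization work. The following sketch is illustrative only: the text names the LIBRARY_MIGRATION_LIST and IFS_MIGRATION_LIST views but not the schema that contains them, so the QSYS2 qualifier is an assumption to verify on your release.

```
-- Sketch: count the objects that still must be migrated to the copy node.
-- The QSYS2 schema qualifier is an assumption; check your IBM i release.
SELECT 'LIBRARY OBJECTS' AS PENDING_CLASS, COUNT(*) AS PENDING_COUNT
  FROM QSYS2.LIBRARY_MIGRATION_LIST
UNION ALL
SELECT 'IFS OBJECTS', COUNT(*)
  FROM QSYS2.IFS_MIGRATION_LIST;
```

When both counts reach zero, the copy node has caught up with the tracked changes and a cutover can be considered.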
Per the agreement signed with the customer, all IBM i VMs are to be moved “as is”, with
operating system release 7.1. No transformation is included in the project, except the
operating system upgrade to 7.3.
This environment includes storage, backups in IBM Cloud Object Storage, and an internal
network, which features firewalls and a jump server for the IBM i VMs access management.
Also, this use case includes a third-party vendor logical replication solution between the
source and target, which replicates across different zones in IBM Cloud (see Figure 2-40).
Figure 2-40 Migrating IBM i VMs from La Paz (on-premises) to Sao Paulo (off-premises)
2.7.2 Scope
In this use case, all 10 IBM i VMs, which are managed by a customer in La Paz, Bolivia, in two
data centers (LPZ01, the source, and LPZ02, the target), must be migrated to the IBM Cloud
data centers SAO01 (the source) and SAO04 (the target).
Development and archive VMs that are in the La Paz-Bolivia data center are to be migrated to
IBM Cloud by using IBM Cloud Object Storage backup. Then, they are to be restored to the
target VM in IBM Power Virtual Server on IBM Cloud. The IBM i production VMs that are on
LPZ01 are to be migrated by using the IBM Cloud Object Storage backup and logical
replication method.
Note: A maintenance contract, an activation key for the logical replication, and installation
and configuration by the third-party vendor are often needed for this process.
After the IBM i VMs are active, the data is restored to the target location and the connectivity
from La Paz data center is established by using IBM i Access Client Solution (ACS). After
connectivity is validated, the customer’s applications and their functions are configured in the
environment.
Also, each VM and the overall function for each environment are tested.
IBM provides the customer with temporary technical information, such as IP addresses, DNS
names, and temporary VM names. This information is required so that the tests can be
completed without errors on the IBM i production environments that are established in La
Paz, Bolivia.
The customer provides the firewall rules that are to be set up by IBM. User profiles are
transferred to the IBM Power Systems Virtual Server on IBM Cloud by restoring the user
profiles (RSTUSRPRF). It is the customer’s responsibility to administer their applications and
user ID management.
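As a sketch, after the save media is available on the target VM, the user profile restore might look like the following command; the virtual optical device name OPTVRT01 is illustrative:

```
RSTUSRPRF DEV(OPTVRT01) USRPRF(*ALL)   /* Restore all user profiles */
```

Private authorities are typically restored later by running RSTAUT after the user data itself is restored.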
After the migration, IBM can run a suitable security scan for each IBM i environment based
on the customer’s security policy. IBM also provides the evaluation results to the
customer.
Often, the following teams (among others) are involved in the transition and steady state
support of the environment:
IBM Cloud connect for the network
IBM Cloud Object Storage
IBM Power Virtual Server
IBM i support team
Customer’s application team
Third-party vendor software team support for logical replication
Image catalogs are created from objects that are backed up by using optical devices. These
catalogs must be restored on the IBM Power Systems Virtual Server instance by using one
of the migration strategies that use IBM Cloud Object Storage and an NFS server.
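As a sketch, after the optical image files are copied to the target VM's integrated file system, the catalog can be mounted on a virtual optical device for the restore. The device name, catalog name, and directory here are illustrative:

```
CRTDEVOPT  DEVD(OPTVRT01) RSRCNAME(*VRT)               /* Create a virtual optical device */
VRYCFG     CFGOBJ(OPTVRT01) CFGTYPE(*DEV) STATUS(*ON)  /* Vary the device on              */
CRTIMGCLG  IMGCLG(MIGCAT) DIR('/migration')            /* Catalog over the IFS directory  */
ADDIMGCLGE IMGCLG(MIGCAT) FROMFILE('/migration/vol001.iso') TOFILE(*FROMFILE)
LODIMGCLG  IMGCLG(MIGCAT) DEV(OPTVRT01)                /* Attach the catalog to the device */
```

The restore commands can then reference DEV(OPTVRT01) as if it were a physical optical drive.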
IBM Backup, Recovery, and Media Services (BRMS) is an IBM i product that can be used to
automate tasks that help define and process your backup, recovery, and media management
operations. IBM Cloud Storage Solutions for i (ICC) can be integrated with IBM BRMS to move
and retrieve objects from remote locations, including IBM Cloud Object Storage.
The following procedure shows how the customer’s IBM i operating system and data are
migrated from an on-premises system to the IBM Cloud environment. (Most of these steps
can be automated by using IBM BRMS and ICC):
1. Install IBM BRMS and IBM Cloud Storage Solutions for i.
2. Install the minimum required IBM i system PTF levels.
3. Install the BRMS and IBM Cloud Storage Solutions for i PTFs.
4. Install Cloud-init for IBM i and its PTFs.
5. Save the system to IBM Cloud Object Storage by using IBM BRMS and IBM Cloud
Storage Solutions for i.
6. Create the IBM BRMS recovery reports.
7. Copy the data to IBM Cloud Object Storage.
8. Copy the data from IBM Cloud Object Storage.
9. Build a VM in Power Virtual Server by using the NFS VM.
10. Restore the user data by using the image catalog.
Infrastructure considerations
The technical infrastructure architecture and design consist of deploying several zones in IBM
Cloud in Sao Paulo. In addition to the IBM Power Systems Virtual Server zone that is named
Power Colo (Power Colocation), a front-end zone is created in which jump servers are
deployed for user access and other functions, such as a proxy for accessing the IBM Cloud
Object Storage services. These services are hosted on a bare metal server.
A cluster of firewalls is deployed in the front-end zone within the SAO01 site. A stand-alone
firewall is deployed in SAO04.
Functional requirements
The solution must satisfy functional and nonfunctional requirements in a way that best
balances competing stakeholders’ concerns and that considers any relevant constraints (see
Table 2-1).
Dual-site high availability: Use of a third-party vendor logical replication solution to establish
DR between SAO01 and SAO04.
Provide fault-tolerant LAN infrastructure in IBM Cloud: Provide network connectivity for
applications and servers.
Data replication between IBM Cloud and the customer data center: The customer uses a
logical replication solution to replicate the data for the IBM i application.
Provide traffic isolation and segmentation: Use of jump servers and traffic filtering on IBM
Cloud.
Provide WAN connectivity: The customer provides the WAN circuit and the POP network
infrastructure; IBM provides the termination endpoint in Sao Paulo.
Nonfunctional requirements
The following nonfunctional requirements must be met:
IBM Cloud portal access for IBM i VMs provisioning.
Worldwide Tools Solutions for alert monitoring and reporting (IBM i).
Traffic bandwidth in IBM Cloud infrastructure does not exceed 1 Gbps.
Traffic bandwidth for replication is limited to 500 Mbps. Internet is used for preserving
production traffic.
Local network redundancy to be provided in primary IBM Cloud site (SAO01); firewall
cluster in High Availability, dual ports connectivity.
Manageability access for customer’s users to be provided by using jump servers.
Note: For more information about Worldwide Tools Solutions (WWTS), see this IBM
Support web page.
These choices incorporate components and their connections, technology options, allocating
functionality to different components, making placement choices for components that are
hosted within different infrastructure nodes, and so on.
The choices can have different costs that are related to them and differ in the degree to which
they satisfy various requirements, and they can show distinct ways of balancing competing
stakeholders’ concerns.
Architects can perform the following tasks:
Formally document the key decisions that they make in creating the solution.
Agree as to why the solution looks the way it does.
Table 2-2: Infrastructure
Table 2-3 on page 75: Migration
Table 2-4 on page 76: Servers
Table 2-5 on page 76: Networking
Problem statement Provide a way to access the target Power Virtual Server environment,
which is moved from the customer’s data centers to IBM Cloud in Sao
Paulo in a dual-site configuration.
Alternatives None.
Implications Deploy WAN access and replication method for moving the data in
the target environment.
Derived requirements Provide firewall services for VPN access and filtering of traffic.
Provide IBM Cloud Object Storage services for Backup.
Provide WAN network connectivity for customer’s users and
application connectivity.
Provide bare metal servers to host relay applications and a
proxy.
a. For more information about Equinix, see America Data Centers.
Problem statement If a major outage occurs, the customer’s users can connect to the
backup site (DNS is used for server name translation; the secondary
site has a different TCP/IP address).
Assumptions Two sites are used for the solution: one in SAO01 and the other in
SAO04 (in a different zone).
Alternatives None.
Decision Deploy dual site solution in an IBM Cloud Multi-Zone Region (Sao
Paulo).
Justifications If a major outage occurs at the primary site, the main goal is to restart
part of the application and services in the secondary site.
Table 2-4 Migration strategy and backup
Architectural decision IBM Cloud Object Storage backup is used for migrating IBM i VMs to
SAO01 and SAO04.
Problem statement Backup and data replication between the client data center and the
IBM Cloud target infrastructure. No Automatic Tape Library (ATL) or
VTS is available to perform a save and restore, which is the traditional
migration method for the IBM i operating system.
Assumptions The use of IBM Cloud Object Storage for the migration is one of the
available methods for moving workload to IBM Power Systems Virtual
Server in IBM Cloud.
Motivation The use of IBM Cloud Object Storage to move IBM i workloads to
SAO01 and SAO04.
Derived requirements Deploy a proxy in the front-end zones and VPN access from the
client on IBM Cloud.
Buckets need to be created on IBM Cloud Object Storage for the
data move. More storage is needed for the IBM Cloud Object
Storage backup on the source IBM i VM.
Problem statement The customer has IBM i VMs with 10 TB - 70 TB of storage. Some of
the IBM i VMs generate approximately 1 TB of journal data daily. The
use of a logical replication tool is the best solution to resolve the delta
data after the IBM i VM restoration on the target is complete.
Assumptions The logical replication tool syncs up the data between source system
and target system.
Justifications In this case, this option is recommended because the customer
already has a supported third-party logical replication tool.
Implications Third-party vendor provides license key for the logical replication tool
to be deployed on IBM Power Virtual Server. The third-party vendor
installs the tool in the cloud.
Derived requirements The third-party vendor provides a temporary license key to migrate
data to IBM Cloud.
Assumptions The WAN portion is the customer’s responsibility; IBM Cloud provides
dual-circuit connectivity on diverse physical devices.
Decision Provide redundant connectivity in SAO01 and use the IBM Cloud
backbone for Inter-site communications.
Justifications The provided service level is consistent and the option is available to
connect the IBM Cloud site by using VPN.
Derived requirements Deploy GRE and Direct link connectivity for Front-End zones
communications.
A fiber cross-connection is created through a network service provider (NSP) in an IBM Cloud
network Point of Presence (PoP). IBM engineers facilitate end-to-end connectivity with your
selected NSP, and you can access your cloud infrastructure in the local IBM Cloud data
center.
The NSP runs last-mile links directly between a router on your network and an IBM Cloud
router. As with all of the Direct Link products, you can add global routing that enables private
network traffic to all IBM Cloud locations.
IBM Direct Link 2.0:
– Direct Link Connect
– Direct Link Dedicated
For more information about which Direct Link solution to order, see the following IBM Cloud
Docs web pages:
Getting started with IBM Cloud Direct Link on Classic
Getting started with IBM Cloud Direct Link (2.0)
Important: Table 2-2 on page 74 - Table 2-5 on page 76 list decisions as examples only.
Real-world decisions can vary according to the customer, scenario, third-party
vendor applications, in-house applications, region, networking, and so on.
A certified IBM i architect can help make decisions about your scenario.
80 Power Virtual Server for IBM i
3
A VPN creates an encrypted tunnel between the on-premises environment and the cloud
infrastructure, ensuring that data transferred across this connection is protected from
unauthorized access and potential breaches. This tunnel enables enterprises to extend their
private network into the cloud securely, maintaining data integrity and confidentiality.
Key benefits of VPN connectivity within IBM Power Virtual Server include:
Data Security and Encryption: VPNs provide end-to-end encryption, which is crucial for
protecting sensitive information as it travels between on-premises systems and the Power
Virtual Server cloud.
Private Connectivity: Unlike public internet connections, VPNs allow for secure and private
access, reducing exposure to potential attacks.
Seamless Integration: VPNs in IBM Power Virtual Server are designed to integrate
smoothly with existing on-premises networks and support hybrid cloud deployments.
Cost-Effectiveness: Establishing a VPN connection is often more economical than other
private networking solutions, such as dedicated lines, while still providing a reliable,
secure link.
Scalability: VPN connections can be adjusted to meet the evolving needs of businesses,
making them suitable for both small-scale projects and large, complex workloads.
IBM Power Virtual Server supports various VPN configurations to cater to different business
requirements, such as site-to-site VPNs, which connect entire networks, and point-to-site
VPNs, which allow individual devices to connect securely. These options ensure that
enterprises can tailor their connectivity solutions to their specific use cases, whether they
involve development, testing, or full-scale production.
The next sections will delve deeper into the network overview and specific scenarios for
deploying and managing VPN connectivity within the IBM Power Virtual Server ecosystem.
This comprehensive understanding will help enterprises optimize their cloud strategies while
maintaining security and operational efficiency.
Thus, VPNs are an integral part of cloud security architecture, enabling organizations to
confidently leverage cloud technologies while safeguarding their data. The protection, privacy,
compliance, and seamless integration that VPNs provide make them a valuable tool in today's
cloud-first business landscape. This ensures that organizations can focus on their core
operations without worrying about the safety and privacy of their data and applications in the
cloud.
Monitoring and logging tools help maintain the performance, reliability, and security of VPN
connections. IBM Power Virtual Server provides tools and dashboards for:
– Real-time monitoring of connection status, latency, and data transfer rates.
– Logging and auditing VPN activity to detect anomalies, troubleshoot issues, and
ensure compliance with security policies.
These tools assist administrators in maintaining optimal network performance and securing
data traffic over the VPN.
3.2.4 Scenarios demonstrating the use of VPN for initial setups and scaling
environments
VPN connectivity plays a crucial role in the various stages of cloud adoption, from initial setup
to scaling and optimizing the environment. Organizations leverage VPNs to ensure secure,
encrypted communication between their on-premises infrastructure and the cloud, facilitating
seamless integration and expansion as business needs grow. Here, we explore different
scenarios that illustrate the use of VPNs in both initial setups and scaling of IBM Power
Systems Virtual Server environments.
1. Initial Setup of Hybrid Cloud Environments
When an organization first deploys its IBM Power Virtual Server infrastructure, a VPN is
essential for securely connecting its on-premises data center to the cloud. This secure
connection allows for the following:
– Data Migration: VPNs enable safe and encrypted data transfer during the migration
phase, ensuring that sensitive information is protected against potential cyber threats.
– Testing and Validation: Before launching production workloads, enterprises can use
VPNs to test and validate applications in the cloud environment. This helps identify and
resolve connectivity, security, or performance issues before going live.
– Network Configuration and Synchronization: VPNs facilitate the configuration of cloud
resources to mirror on-premises setups, making it easier to integrate existing systems,
services, and applications into the new hybrid environment.
Example 3-1
A financial services company expanding its operations to the cloud uses VPN to securely
migrate sensitive customer data from its on-premises servers to IBM Power Virtual Server.
During the initial setup, VPN encryption ensures data confidentiality, while secure tunnels
maintain consistent communication between local and cloud networks.
Example 3-2
A software development company launches a VPN connection between its local office and
IBM Power Virtual Server to enable developers to test new applications against real-world
Example 3-3
An e-commerce company experiencing seasonal spikes in traffic can scale its IBM Power
Virtual Server resources to accommodate the increase. A VPN maintains secure, consistent
communication between the cloud environment and the main data center, ensuring smooth
operation during high-demand periods.
Example 3-4
A global consulting firm uses VPNs to enable its remote workforce to securely access IBM
Power Virtual Server-hosted applications and data. This ensures that consultants can
collaborate on client projects without exposing sensitive information to potential security risks.
Example 3-5
A healthcare organization planning to move its patient management system to the
cloud sets up a PoC on IBM Power Virtual Server. A VPN is used to connect the PoC
Example 3-6
A manufacturing company sets up a VPN to link its on-premises data center with IBM Power
Virtual Server for a disaster recovery plan. In case of a disruption, employees can securely
access backup resources and continue operations without significant impact.
VPNs are indispensable in enabling secure communication between on-premises and cloud
environments for initial setups and scaling operations. From data migration and testing to
scaling workloads and remote access, VPNs ensure the security, reliability, and efficiency of
cloud adoption processes. For organizations using IBM Power Systems Virtual Servers,
leveraging VPNs helps maintain data protection and optimize performance while providing the
flexibility to expand and adapt to business needs.
The Power Virtual Server network framework integrates smoothly with IBM Cloud, offering
both virtual and physical connectivity options to support complex enterprise deployments.
This ensures that organizations can connect their on-premises infrastructure to the cloud with
minimal disruption while maintaining high security and performance standards. The
architecture is built to enable seamless communication between on-premises systems, cloud
resources, and third-party services, facilitating hybrid cloud and multi-cloud strategies.
The IBM Power Virtual Server network overview provides a comprehensive understanding of
the underlying architecture and capabilities that make it suitable for handling diverse and
demanding workloads. Whether an organization is seeking to migrate legacy applications to
the cloud, build new cloud-native applications, or implement hybrid cloud solutions, Power
Virtual Server offers the tools and network capabilities needed for success. In the following
sections, we will delve into specific network scenarios, from proof-of-concept to production
setups, and explore the technologies and configurations that support them.
3.3.2 Detailed description of the IBM Power Virtual Server network structure
The IBM Power Systems Virtual Server network structure is carefully designed to support
enterprise workloads, providing a secure, scalable, and high-performance environment within
the IBM Cloud. The network structure comprises several key components, including private
and public networks, virtual private clouds (VPCs), and connectivity options that allow
seamless integration with on-premises infrastructure and other cloud services. This structure
enables organizations to deploy and manage their applications in a cloud environment without
compromising on security or performance.
Here's a breakdown of the major elements that constitute the IBM Power Virtual Server
network structure:
1. Virtual Private Cloud (VPC)
The Virtual Private Cloud (VPC) is a foundational element in the IBM Power Virtual Server
network structure, providing a logically isolated section of the IBM Cloud where organizations
can launch and manage resources in a virtualized environment. VPCs offer:
– Network Isolation: Each VPC is isolated from other VPCs, ensuring that resources
within one VPC are not accessible by others unless explicitly configured. This isolation
enhances security, making it ideal for sensitive workloads.
– Customization and Control: Organizations have full control over their VPC's network
configurations, including IP address ranges, subnets, and routing tables, allowing them
to tailor the network to their specific requirements.
– Scalability: VPCs are highly scalable, enabling businesses to expand their network
resources as their workloads grow without disruption.
VPCs in IBM Power Virtual Server provide the flexibility of private networking with the
scalability and convenience of a public cloud, making them suitable for hybrid cloud and
multi-cloud deployments.
2. Subnets and IP Addressing
Within a VPC, the address space is divided into subnets, which can be public
(internet-facing) or private (internal only). This division allows organizations to control
traffic flow between public and private resources, enhancing security and optimizing
network performance. IP addressing within these subnets can be configured as needed, with
support for both IPv4 and IPv6 addresses, providing flexibility for enterprises to design
their network layout.
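As a minimal sketch of this kind of subnet planning, the standard Python ipaddress module can carve a VPC address range into public and private segments. The CIDR values below are illustrative examples, not Power Virtual Server defaults.

```python
import ipaddress

# Illustrative VPC address space; the CIDR is an example, not a platform default.
vpc = ipaddress.ip_network("10.40.0.0/16")

# Carve the /16 into /24 subnets and earmark the first two:
# one public (internet-facing) and one private (internal workloads).
subnets = list(vpc.subnets(new_prefix=24))
public_subnet, private_subnet = subnets[0], subnets[1]

print(public_subnet)    # 10.40.0.0/24
print(private_subnet)   # 10.40.1.0/24
print(len(subnets))     # 256 possible /24 subnets in the /16

# A quick membership check, for example when validating an instance address.
addr = ipaddress.ip_address("10.40.1.17")
print(addr in private_subnet)   # True
```

The same module can validate that planned subnets do not overlap before they are configured in the cloud console.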
3. Network Security and Access Control
Security is a core component of the IBM Power Virtual Server network structure. Multiple
layers of security features are built into the network to safeguard resources and data:
Security Groups: Security groups act as virtual firewalls for resources within a VPC,
controlling inbound and outbound traffic. They are configured with specific rules to allow or
deny traffic based on criteria such as IP address, port, and protocol, ensuring only
authorized traffic can access the network resources.
Network Access Control Lists (ACLs): In addition to security groups, network ACLs
provide another layer of security by controlling traffic at the subnet level. ACLs can be
used to set up stateless filtering rules that allow or deny specific types of traffic, enhancing
protection against unwanted network traffic.
Encryption: IBM Power Virtual Server supports data encryption at various layers, including
encrypted VPN connections, to ensure that data remains confidential and secure during
transmission. Encryption protocols such as IPsec and SSL/TLS are commonly used to
protect sensitive data.
These security features enable organizations to implement a zero-trust model, where access
is restricted based on specific rules and continuously monitored, ensuring a secure network
environment for their workloads.
4. VPN and Direct Link Connections
IBM Power Virtual Server offers multiple connectivity options to link on-premises
infrastructure with the IBM Cloud:
– VPN (Virtual Private Network): VPNs provide a secure, encrypted tunnel between
on-premises networks and the Power Virtual Server environment, enabling private
access to cloud resources over the internet. VPNs are ideal for organizations that need
secure but cost-effective connectivity for development, testing, and production
workloads.
– Direct Link: For organizations requiring dedicated, high-performance connections, IBM
offers Direct Link, a private, high-speed connection between the on-premises data
center and the IBM Cloud. Direct Link bypasses the public internet, reducing latency
and improving data transfer speeds, making it suitable for latency-sensitive
applications and large data transfers.
These connectivity options allow organizations to build hybrid cloud environments, extending
their existing networks into the cloud while maintaining control over security, performance,
and data flow.
5. Transit Gateway
IBM Cloud Transit Gateway acts as a central hub that interconnects multiple VPCs and
networks, adding flexibility to the IBM Power Virtual Server network structure and allowing
businesses to build large-scale architectures that span regions and integrate multiple cloud
resources.
6. Power Edge Router (PER)
The Power Edge Router (PER) is an IBM Power Virtual Server-specific networking
component designed to handle routing and connectivity tasks for Power Virtual Server
instances. It provides essential functions such as:
– Efficient Traffic Routing: PER routes traffic between subnets, VPCs, and external
networks, ensuring efficient communication within and outside the Power Virtual
Server environment.
– Enhanced Security: PER integrates with security features such as firewalls and
security groups, adding an extra layer of protection against unauthorized access and
cyber threats.
– Connectivity to External Resources: PER can be used to connect Power Virtual Server
instances to external resources, such as on-premises infrastructure or other cloud
services, facilitating hybrid and multi-cloud setups.
PER is optimized for the unique requirements of Power Virtual Server, enabling smooth and
secure communication between different network components.
7. Load Balancing
IBM Power Virtual Server supports load balancing to distribute incoming network traffic
across multiple resources, optimizing resource utilization, enhancing reliability, and ensuring
high availability. Load balancers can be configured to:
– Balance Traffic Across Instances: Distribute traffic across virtual server instances in
different subnets, ensuring no single instance is overwhelmed.
– Improve Availability: Load balancers can detect instance health and redirect traffic
away from unresponsive or overloaded instances, ensuring uninterrupted service.
– Support for Different Protocols: IBM Power Virtual Server load balancers support
multiple protocols, such as HTTP, HTTPS, and TCP, making them suitable for various
types of applications and services.
Load balancing is essential for scaling applications in the cloud, ensuring a smooth user
experience even during traffic spikes.
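The health-aware traffic distribution described above can be sketched in a few lines of Python. This is a simplified model of the behavior, not the actual load balancer implementation; the instance names are placeholders.

```python
import itertools

class RoundRobinBalancer:
    """Round-robin distribution that skips instances failing health checks."""

    def __init__(self, instances):
        self.health = {name: True for name in instances}
        self._cycle = itertools.cycle(instances)

    def mark(self, name, healthy):
        # A real balancer would drive this from periodic health probes.
        self.health[name] = healthy

    def next_instance(self):
        for _ in range(len(self.health)):
            candidate = next(self._cycle)
            if self.health[candidate]:
                return candidate
        raise RuntimeError("no healthy instances available")

lb = RoundRobinBalancer(["vm-a", "vm-b", "vm-c"])
lb.mark("vm-b", False)   # simulate a failed health check
print([lb.next_instance() for _ in range(4)])   # ['vm-a', 'vm-c', 'vm-a', 'vm-c']
```

The key point the sketch illustrates is that traffic is redirected away from unhealthy instances automatically, without client-side changes.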
8. Bandwidth and Performance Optimization
Performance optimization options in IBM Power Virtual Server make it suitable for workloads
with high throughput and low-latency requirements, providing a seamless experience for
end-users.
The IBM Power Virtual Server network structure is built with flexibility, security, and
performance in mind, making it suitable for a range of enterprise applications. It includes
VPCs for isolated environments, subnets for structured IP addressing, robust security
features, flexible connectivity options (VPN, Direct Link, and Transit Gateway), Power Edge
Router (PER) for optimized routing, load balancing for high availability, and performance
optimizations for bandwidth and latency control. This comprehensive network structure
empowers organizations to build, deploy, and scale their applications in the IBM Power Virtual
Server cloud with confidence, meeting the needs of both hybrid cloud and multi-cloud
strategies.
IBM Power Virtual Server integrates natively with IBM Cloud, making it easy for organizations
to extend their on-premises infrastructure to the cloud without the need for complex
reconfigurations. This integration allows companies to create hybrid cloud environments that
combine the benefits of on-premises Power Systems with the scalability and flexibility of IBM
Cloud. Key benefits of this integration include:
Consistent Network Policies: IBM Cloud provides a unified framework that enables
consistent security and network policies across on-premises and cloud environments.
Unified Management: Power Virtual Server can be managed from the IBM Cloud Console,
allowing administrators to monitor, provision, and manage both cloud and on-premises
resources from a single interface.
Secure Data Flow: Through services like IBM Cloud Direct Link and VPN, secure and
encrypted connections can be established between on-premises infrastructure and the
IBM Cloud, ensuring data remains private and protected.
IBM Power Virtual Server integrates with IBM Cloud Direct Link, a service that provides
dedicated, high-speed, low-latency connections between on-premises data centers and the
IBM Cloud. Direct Link offers a private and secure pathway for data transfer, bypassing the
public internet and reducing the risk of data breaches or latency issues. This service is
particularly beneficial for:
Data-Intensive Applications: Direct Link supports large-scale data transfers between
Power Virtual Server and on-premises systems, making it ideal for applications requiring
high throughput, such as big data analytics and database replication.
Latency-Sensitive Workloads: By reducing latency, Direct Link ensures that
latency-sensitive applications, such as financial transactions and real-time
communications, run smoothly and efficiently.
Direct Link enhances the hybrid cloud experience, allowing enterprises to combine the
reliability and performance of their on-premises infrastructure with the scalability of IBM
Cloud.
3. Integration with IBM Cloud Services
One of the key advantages of Power Virtual Server is its integration with IBM's wide range of
cloud services, which allows organizations to enhance their applications with advanced
cloud-native functionalities. Some examples of IBM Cloud services that can be integrated
with Power Virtual Server include:
IBM Cloud Object Storage: Power Virtual Server workloads can store and access large
datasets in IBM Cloud Object Storage, which provides scalable and cost-effective data
storage solutions. This service is particularly useful for applications requiring data
archiving, backup, or storage for big data workloads.
IBM Watson® AI Services: By integrating Power Virtual Server with IBM Watson,
organizations can enrich their applications with AI capabilities such as natural language
processing, machine learning, and data analysis. This is beneficial for applications in
industries such as healthcare, finance, and retail that can leverage AI for predictive
analytics and customer insights.
IBM Cloud Databases: Power Virtual Server can connect to IBM Cloud-managed
databases, such as Db2, PostgreSQL, and MongoDB, to facilitate
data storage and management without the need to maintain databases on-premises.
IBM Security® Services: Power Virtual Server workloads benefit from IBM Cloud's
security tools, such as IBM Key Protect for encryption key management, IBM Cloud
Security Advisor for monitoring, and IBM Cloud Identity and Access Management (IAM)
for user authentication and access control.
Integrating these IBM Cloud services with Power Virtual Server enhances the functionality
and security of applications, enabling enterprises to build advanced, cloud-native applications
within a secure and managed environment.
4. Multi-Cloud Connectivity Using IBM Cloud Transit Gateway
IBM Cloud Transit Gateway is a service that facilitates the connection between multiple VPCs
(Virtual Private Clouds) within IBM Cloud and enables organizations to extend connectivity to
other cloud providers. Through Transit Gateway, Power Virtual Server users can integrate
workloads and resources that span multiple VPCs and external cloud platforms.
Multi-cloud connectivity enables organizations to choose the best platform for each workload
and avoid vendor lock-in, providing greater flexibility and resilience.
5. Interoperability with Open-Source and Containerized Environments
IBM Power Virtual Server supports open-source technologies and containerized workloads,
enabling integration with popular container management platforms like Kubernetes and Red
Hat OpenShift. This interoperability is particularly beneficial for organizations adopting
cloud-native architectures and DevOps practices. Key features include:
Kubernetes and OpenShift Integration: Power Virtual Server can be integrated with IBM
Cloud Kubernetes Service or Red Hat OpenShift on IBM Cloud, enabling seamless
deployment and management of containerized applications. This allows developers to run
workloads in containers, improve portability, and streamline the development lifecycle.
Hybrid Cloud with OpenShift: IBM Power Virtual Server supports Red Hat OpenShift for
creating and managing hybrid cloud environments, allowing organizations to deploy
consistent Kubernetes environments across on-premises, Power Virtual Server, and other
cloud platforms. This consistency simplifies application deployment, management, and
scaling across different environments.
Support for Open-Source Software: Power Virtual Server is compatible with a wide array
of open-source software, such as databases, development tools, and middleware, making
it easier to integrate existing applications with IBM Cloud services.
6. API-Driven Automation and Integration
IBM Power Virtual Server offers a robust set of APIs that enable automated management,
provisioning, and integration of cloud resources. These APIs allow organizations to
programmatically interact with Power Virtual Server, making it easier to integrate with other
cloud services and automate cloud operations. Benefits of API-driven integration include:
Automation of Routine Tasks: APIs allow teams to automate routine tasks, such as scaling
resources, setting up VPN connections, or deploying virtual machines, reducing the need
for manual intervention and increasing operational efficiency.
Integration with Third-Party Tools: Organizations can integrate Power Virtual Server with
third-party tools for monitoring, logging, and CI/CD (continuous integration and continuous
deployment) pipelines. This is particularly useful for DevOps teams that rely on tools like
Jenkins, Terraform, or Ansible for managing cloud infrastructure.
APIs enhance the flexibility of Power Virtual Server, allowing businesses to build highly
customized and automated cloud environments that can scale and adapt to changing
requirements.
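To make the API-driven approach concrete, the sketch below builds an authenticated REST request with Python's standard library. The URL path, payload fields, and placeholder identifiers are illustrative assumptions, not the documented Power Virtual Server API schema; a real call would use a valid IAM token and the published endpoint.

```python
import json
import urllib.request

# Illustrative only: the base URL, path, and payload fields are placeholders.
API_BASE = "https://example.cloud.ibm.com/pcloud/v1"
token = "IAM_ACCESS_TOKEN"   # obtained separately from IBM Cloud IAM

payload = {"serverName": "dev-lpar-01", "processors": 0.5, "memory": 8}
req = urllib.request.Request(
    url=f"{API_BASE}/cloud-instances/INSTANCE_ID/pvm-instances",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# The request is only constructed here; sending it would be
# urllib.request.urlopen(req) against a live endpoint.
print(req.get_method())                 # POST
print(req.get_header("Content-type"))   # application/json
```

In practice, teams typically wrap such calls in Terraform modules or Ansible playbooks rather than issuing them by hand, which is what makes the automation repeatable.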
7. Data and Application Mobility Across Environments
IBM Power Virtual Server enables data and application mobility between on-premises
infrastructure, IBM Cloud, and other cloud environments. This flexibility is essential for
enterprises that need to move workloads between environments for reasons such as
compliance, cost optimization, or performance. Key aspects include:
Backup and Disaster Recovery: Power Virtual Server enables backup and recovery
solutions that ensure data can be replicated and recovered across multiple environments,
supporting business continuity and disaster recovery strategies.
Flexible Workload Placement: Organizations can easily move workloads from on-premises
systems to IBM Power Virtual Server or other cloud platforms, optimizing resource
allocation based on cost or performance needs.
Data Synchronization and Integration: Power Virtual Server supports data synchronization
tools that facilitate the movement and integration of data across different environments,
ensuring data consistency and availability.
Data and application mobility help organizations avoid cloud lock-in, allowing them to
maintain flexibility in choosing the most cost-effective and suitable environment for each
workload.
The integration capabilities of IBM Power Virtual Server with IBM Cloud and other cloud
services provide enterprises with a powerful, versatile platform for building hybrid and
multi-cloud environments. From high-performance Direct Link connections and multi-cloud
routing with Transit Gateway to container support with Kubernetes and OpenShift, Power
Virtual Server offers a wide range of options for connecting, securing, and scaling
applications across different environments. API-driven automation and data mobility further
enhance Power Virtual Server's adaptability, making it a valuable solution for organizations
seeking a flexible and secure cloud platform that integrates seamlessly with other cloud
providers and services.
Here's an in-depth look at the core elements of IBM Power Virtual Server network topology:
1. Subnets
Private and Public Subnets: IBM Power Virtual Server supports both private and public
subnets:
Private Subnets: These are used for resources that do not require direct internet access.
Private subnets are ideal for sensitive workloads, such as databases, application
backends, or internal services, as they are not exposed to the public internet. Traffic to and
from private subnets can be controlled through routing tables and security groups.
Public Subnets: Resources in public subnets have direct internet access, making them
suitable for applications that need to interact with the internet, such as web servers or API
gateways. Public subnets are often used for front-end services or other workloads that
require external connectivity.
Network Segmentation and Security: By dividing networks into subnets, organizations can
create distinct zones within their environment, each with its own security policies and access
controls. This segmentation helps contain and limit the potential impact of security incidents
and allows for a granular approach to traffic control.
Scalability: Subnets in IBM Power Virtual Server are highly scalable, meaning that they can
be expanded as needed to accommodate growing workloads. Organizations can add more
subnets within a Virtual Private Cloud (VPC) to support different types of applications,
environments (development, testing, production), or geographic locations.
2. Routing
Routing is a critical component of the IBM Power Virtual Server network topology, as it
determines how data packets travel between different subnets, VPCs, and external networks.
Routing tables in Power Virtual Server define the rules for directing traffic, ensuring that data
reaches its intended destination efficiently and securely.
Routing Tables: In Power Virtual Server, routing tables are used to control the flow of traffic
within a VPC. Each routing table consists of a set of rules that specify the paths data packets
should take based on their destination IP addresses. Routing tables are associated with
specific subnets, allowing administrators to customize traffic paths for different parts of the
network.
Static and Dynamic Routing: IBM Power Virtual Server supports both static and dynamic
routing options:
Static Routing: With static routing, administrators manually configure routes within the
routing table. This method offers control over traffic paths but requires manual updates if
the network changes. Static routing is suitable for smaller or simpler networks where traffic
paths remain relatively consistent.
Dynamic Routing: Dynamic routing uses protocols like BGP (Border Gateway Protocol) to
automatically adjust routes based on network conditions. This is useful for larger, more
complex networks, where dynamic routing can optimize traffic flow, improve resilience,
and reduce the administrative burden.
Route Propagation with Transit Gateway: For organizations with multiple VPCs or hybrid
cloud environments, IBM Cloud Transit Gateway can facilitate route propagation, allowing
VPCs to share routing information and connect seamlessly. This simplifies network
management in multi-VPC or multi-region setups by centralizing and automating route
management, making it easier to build complex network topologies.
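Routing-table lookups of the kind described above follow longest-prefix matching: the most specific route that contains the destination wins. The sketch below models that rule with the standard ipaddress module; the prefixes and next-hop names are illustrative.

```python
import ipaddress

# Illustrative routing table: destination prefix -> next hop.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "internet-gateway",
    ipaddress.ip_network("10.0.0.0/8"): "transit-gateway",
    ipaddress.ip_network("10.40.1.0/24"): "local-subnet",
}

def next_hop(dest):
    """Pick the most specific (longest-prefix) route matching the destination."""
    addr = ipaddress.ip_address(dest)
    matching = [net for net in routes if addr in net]
    best = max(matching, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.40.1.9"))   # local-subnet (matches /24, /8, and /0)
print(next_hop("10.9.9.9"))    # transit-gateway
print(next_hop("8.8.8.8"))     # internet-gateway
```

Static routing amounts to maintaining this table by hand; dynamic routing protocols such as BGP update it automatically as the network changes.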
3. IP Addressing
IBM Power Virtual Server network topology is built within the context of a Virtual Private Cloud
(VPC), which is a logically isolated section of the IBM Cloud where organizations can deploy
and manage resources securely. The VPC structure provides a private network environment
with control over IP ranges, routing tables, subnets, and security settings.
VPC Peering and Transit Gateway: IBM Power Virtual Server allows VPC peering,
enabling direct network connectivity between VPCs within the same IBM Cloud region. For
cross-region or multi-cloud connectivity, IBM Cloud Transit Gateway serves as a central
hub to route traffic between multiple VPCs, making it easier to build and manage complex
network topologies.
Flexible Network Design: VPCs enable organizations to design flexible and scalable
network architectures tailored to their applications and workloads.
Network topology in IBM Power Virtual Server is further strengthened by security measures
and access controls designed to protect resources and manage traffic within and outside the
VPC.
Security Groups: Security groups act as virtual firewalls at the instance level, allowing
administrators to specify inbound and outbound rules for each instance. Security groups
enable fine-grained control over traffic, ensuring that only authorized traffic can reach the
resources.
Access Control Lists (ACLs): ACLs operate at the subnet level, providing an additional
layer of security by filtering traffic entering and exiting the subnet. ACLs are useful for
controlling traffic between different segments of the network, such as restricting access
from one subnet to another.
Firewall Rules: Power Virtual Server allows administrators to define firewall rules to block
or allow specific types of traffic. Firewall configurations can be applied to both public and
private subnets to protect against unauthorized access and reduce the risk of attacks.
The network topology in IBM Power Virtual Server, comprising subnets, routing, and IP
addressing, provides a flexible and secure foundation for deploying cloud resources. Subnets
allow for effective network segmentation, while routing tables and protocols manage traffic
flow across different network segments. IP addressing ensures unique and organized
resource identification, and NAT enhances security for private subnets. Together, these
elements enable organizations to create isolated, highly controlled, and scalable network
environments within IBM Cloud. The result is a robust network infrastructure that supports a
wide variety of applications and workloads, ranging from secure, private deployments to
public-facing applications.
1. Network Isolation with Virtual Private Clouds (VPCs)
IBM Power Virtual Server uses Virtual Private Cloud (VPC) environments to provide network
isolation for each tenant, ensuring that resources in one VPC are securely separated from
those in another. VPCs are logically isolated environments within IBM Cloud, enabling
organizations to manage and control their network resources independently.
"Dedicated IP Ranges: Each VPC is assigned a unique range of IP addresses, preventing
IP conflicts and ensuring that network resources are isolated.
"Segmentation with Subnets: Within a VPC, organizations can further segment their
network into private and public subnets, isolating sensitive resources from those exposed
Network isolation in Power Virtual Server offers robust protection against threats by
preventing cross-tenant access and enforcing secure boundaries between different
environments within IBM Cloud.
2. Security Groups
Security groups in Power Virtual Server act as virtual firewalls at the instance level, allowing
organizations to control inbound and outbound traffic to each virtual machine or resource
within the VPC.
Granular Access Control: Administrators can set rules in security groups that define which
types of traffic (based on IP address, port, and protocol) are allowed or denied. This
enables fine-grained control over access, limiting exposure to only authorized users and
applications.
Customizable Rules: Security groups can be customized based on the needs of each
application or workload. For example, an application server might allow HTTP and HTTPS
traffic from the internet, while a database server might only permit internal traffic from
specific subnets or IP addresses.
Stateful Filtering: Security groups in Power Virtual Server are stateful, meaning that if a
request is allowed outbound, the corresponding response is automatically allowed
inbound. This simplifies configuration and enhances security by tracking connections.
Security groups help limit the network attack surface by ensuring that only specified traffic can
reach Power Virtual Server resources, reducing the risk of unauthorized access.
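The stateful behavior described above, where an allowed outbound connection implicitly permits the matching inbound response, can be sketched with a small connection-tracking model. The rule shapes are simplified assumptions; real security groups also match on CIDR ranges and protocols.

```python
class SecurityGroup:
    """Toy model of stateful filtering with a connection-tracking table."""

    def __init__(self, outbound_allowed_ports):
        self.outbound_allowed = set(outbound_allowed_ports)
        self.tracked = set()   # connection table: (remote_ip, remote_port)

    def allow_outbound(self, remote_ip, remote_port):
        if remote_port in self.outbound_allowed:
            self.tracked.add((remote_ip, remote_port))   # remember the flow
            return True
        return False

    def allow_inbound(self, remote_ip, remote_port):
        # No explicit inbound rule needed: replies to tracked flows pass.
        return (remote_ip, remote_port) in self.tracked

sg = SecurityGroup(outbound_allowed_ports={443})
print(sg.allow_outbound("203.0.113.7", 443))   # True - HTTPS out is allowed
print(sg.allow_inbound("203.0.113.7", 443))    # True - reply to a tracked flow
print(sg.allow_inbound("198.51.100.5", 443))   # False - no matching connection
```

This is why stateful rules are simpler to administer: only the initiating direction has to be authorized.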
3. Network Access Control Lists (ACLs)
Network ACLs in Power Virtual Server provide an additional layer of security at the subnet
level, complementing security groups by filtering traffic entering and exiting specific subnets.
Stateless Filtering: Unlike security groups, ACLs are stateless, meaning that both inbound
and outbound rules must be explicitly configured. This allows administrators to have more
precise control over traffic for specific subnet zones.
Layered Security: ACLs work alongside security groups, creating a layered security model
that enhances overall protection. For example, ACLs can be used to allow or deny traffic at
the subnet level, while security groups further control access to individual resources within
the subnet.
Custom Rule Management: ACLs can be configured with rules that permit or deny traffic
based on criteria such as IP range, port, and protocol, allowing for highly specific control of
network traffic.
By providing traffic control at the subnet level, ACLs help protect resources from malicious
traffic, restrict access, and add redundancy to security configurations.
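By contrast with security groups, stateless ACL rules must be written for each direction and are evaluated against every packet independently. The sketch below models ordered rule evaluation with an implicit deny; the rule format is a simplification for illustration.

```python
# Rule format: (action, protocol, port) - simplified for illustration.
acl_inbound = [
    ("allow", "tcp", 443),   # HTTPS into the web tier
    ("deny",  "tcp", 22),    # block SSH at this subnet boundary
]

def evaluate(rules, protocol, port, default="deny"):
    """Return the action of the first matching rule, else the default."""
    for action, rule_proto, rule_port in rules:
        if rule_proto == protocol and rule_port == port:
            return action
    return default   # implicit deny when nothing matches

print(evaluate(acl_inbound, "tcp", 443))   # allow
print(evaluate(acl_inbound, "tcp", 22))    # deny
print(evaluate(acl_inbound, "udp", 53))    # deny (falls through to default)
```

Because no connection state is kept, a corresponding outbound rule set would be needed for the return traffic of the allowed flows.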
4. Data Encryption
Data encryption in Power Virtual Server ensures that sensitive data remains protected both in
transit and at rest, preventing unauthorized access to information even if it is intercepted.
Encryption in Transit: IBM Power Virtual Server supports encrypted VPN connections
(using protocols like IPsec and SSL/TLS) to secure data traveling between on-premises
environments and the Power Virtual Server cloud. This ensures that data remains private
and protected from interception as it moves through public networks.
Encryption at Rest: Power Virtual Server can use encryption mechanisms such as
AES-256 to secure stored data, preventing unauthorized access to data stored on virtual
server storage volumes.
Encryption in Power Virtual Server provides comprehensive data protection, ensuring that
sensitive information is secure at all stages, whether in transit across networks or stored in
the cloud.
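On the client side, encryption in transit is typically enforced by the TLS policy of the connecting application. The sketch below shows one way to do that with Python's standard ssl module; the TLS 1.2 floor is a choice made here for illustration, not a Power Virtual Server mandate.

```python
import ssl

# Client-side TLS policy for encryption in transit.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse older protocol versions

print(ctx.verify_mode == ssl.CERT_REQUIRED)    # True - peer certificate is checked
print(ctx.check_hostname)                      # True - hostname must match the cert
```

A context configured this way would then be passed to the HTTPS or socket layer that connects to the cloud endpoint, ensuring no connection falls back to an unencrypted or weakly encrypted protocol.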
5. Network Address Translation (NAT)
NAT enhances security for private resources in Power Virtual Server by limiting their exposure
to the internet and preventing unauthorized inbound access.
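The mechanics of source NAT can be sketched as a translation table: private addresses share one public IP, each outbound flow gets its own public port, and unsolicited inbound traffic finds no mapping and is dropped. The addresses and port pool below are illustrative.

```python
import itertools

PUBLIC_IP = "198.51.100.10"   # illustrative shared public address

class SourceNat:
    """Toy source-NAT model: flows map to distinct public ports."""

    def __init__(self):
        self._ports = itertools.count(40000)   # illustrative port pool
        self.table = {}   # (private_ip, private_port) -> public_port

    def translate_out(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = next(self._ports)
        return PUBLIC_IP, self.table[key]

    def translate_in(self, public_port):
        # Only traffic matching an existing mapping is let back in.
        for key, port in self.table.items():
            if port == public_port:
                return key
        return None   # unsolicited inbound traffic is dropped

nat = SourceNat()
print(nat.translate_out("10.40.1.17", 55001))   # ('198.51.100.10', 40000)
print(nat.translate_in(40000))                  # ('10.40.1.17', 55001)
print(nat.translate_in(41234))                  # None - no mapping, dropped
```

The security benefit falls out of the last case: without an entry created by an outbound flow, inbound packets simply have nowhere to go.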
6. Firewalls
Power Virtual Server supports firewalls to provide an additional layer of protection for
applications and resources. Firewalls help regulate and monitor traffic, allowing or blocking
data packets based on predefined security rules.
Customizable Firewall Rules: Firewalls in Power Virtual Server allow organizations to
define rules for inbound and outbound traffic at various levels, such as IP address, port,
and protocol, based on application requirements.
Preventing Unauthorized Access: By setting strict rules, firewalls prevent unauthorized
access to applications and resources, providing a defense against common attacks such
as DDoS (Distributed Denial of Service) and SQL injection.
Enhanced Monitoring: Firewalls offer logging and monitoring features that provide visibility
into network traffic and potential security events, enabling administrators to quickly
respond to suspicious activity.
Firewalls add another security layer by controlling data flow at the network perimeter, helping
protect applications and data from external threats.
7. Identity and Access Management (IAM)
IBM Power Virtual Server leverages IBM Cloud Identity and Access Management (IAM) to
enforce access control and manage permissions, ensuring that only authorized users can
access specific resources.
Role-Based Access Control (RBAC): IAM allows organizations to assign roles to users,
granting only the necessary permissions for their job functions. This principle of least
privilege reduces the risk of unauthorized access and accidental data exposure.
Granular Permissions: IAM provides fine-grained control over access to specific
resources, enabling organizations to restrict access to sensitive resources like VMs,
networks, or storage based on user roles.
IAM enables comprehensive access control for Power Virtual Server, allowing administrators
to manage and secure user permissions effectively.
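The least-privilege principle behind RBAC reduces to a simple check: a user may perform an action only if one of their assigned roles carries that permission. The role names and permission strings below are illustrative, not IBM Cloud IAM's actual role catalog.

```python
# Illustrative role catalog mapping roles to permission sets.
ROLE_PERMISSIONS = {
    "viewer":   {"instance.view"},
    "operator": {"instance.view", "instance.start", "instance.stop"},
    "admin":    {"instance.view", "instance.start", "instance.stop",
                 "instance.delete", "network.configure"},
}

def is_allowed(user_roles, permission):
    """Grant access only if some assigned role carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

print(is_allowed(["viewer"], "instance.view"))              # True
print(is_allowed(["viewer"], "instance.delete"))            # False
print(is_allowed(["operator", "viewer"], "instance.stop"))  # True
```

Keeping the role catalog small and the per-role permission sets minimal is what limits the blast radius of a compromised or misused account.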
8. Monitoring and Logging
Power Virtual Server includes monitoring and logging tools that enable continuous tracking of
network activity, providing insights into potential security incidents and performance issues.
Activity Logs: Power Virtual Server logs network and access activity, allowing
administrators to track actions within the network, including configuration changes, login
attempts, and traffic patterns. Logs can be used to audit activity, identify unusual behavior,
and ensure compliance with regulatory standards.
Real-Time Monitoring: IBM Cloud offers monitoring tools that provide real-time insights
into network performance and security events, helping administrators detect and respond
to security threats quickly.
Alerts and Notifications: Power Virtual Server monitoring tools allow administrators to set
up alerts for suspicious activity, such as unexpected traffic spikes or failed login attempts.
These notifications enable rapid response to potential security breaches.
By providing monitoring and logging capabilities, Power Virtual Server enhances visibility into
the cloud environment, allowing organizations to maintain security oversight and respond
promptly to incidents.
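An alert of the "unexpected traffic spike" kind described above can be modeled as a threshold against a recent moving average. The window size and multiplier below are arbitrary illustrative choices, not defaults of any IBM monitoring tool.

```python
from collections import deque

class SpikeDetector:
    """Flag a sample that exceeds a multiple of the recent moving average."""

    def __init__(self, window=5, factor=3.0):
        self.samples = deque(maxlen=window)
        self.factor = factor

    def observe(self, requests_per_minute):
        baseline = (sum(self.samples) / len(self.samples)) if self.samples else None
        self.samples.append(requests_per_minute)
        if baseline is not None and requests_per_minute > self.factor * baseline:
            return f"ALERT: {requests_per_minute} req/min vs baseline {baseline:.0f}"
        return None   # within normal range, or not enough history yet

det = SpikeDetector()
for rate in [100, 110, 95, 105, 400]:
    alert = det.observe(rate)
    if alert:
        print(alert)   # fires only on the 400 req/min sample
```

Production monitoring adds seasonality awareness and alert routing on top, but the core pattern of comparing current activity against a learned baseline is the same.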
9. Compliance and Regulatory Security
IBM Power Virtual Server is designed to help organizations meet regulatory and compliance
requirements across various industries, such as healthcare, finance, and government.
Compliance Certifications: IBM Cloud, including Power Virtual Server, holds certifications
for various industry standards, such as HIPAA, GDPR, and SOC 2. This ensures that
Power Virtual Server meets stringent regulatory requirements for data protection, security,
and privacy.
Auditing and Reporting Tools: Power Virtual Server provides auditing tools to track
compliance with industry regulations, offering detailed reports and logs for regulatory
inspections.
Data Sovereignty Controls: IBM Power Virtual Server enables organizations to specify
data storage locations, ensuring that data is stored and processed in compliance with
local data sovereignty laws.
Power Virtual Server's compliance capabilities make it easier for organizations to operate
securely within regulated environments and meet specific legal obligations for data protection.
IBM Power Virtual Server incorporates a comprehensive set of security measures designed
to protect cloud environments, data, and applications. These include network isolation with
VPCs, security groups, ACLs, encryption, NAT, firewalls, IAM, and monitoring tools that
provide layered security across the network. By combining these features, Power Virtual
Server ensures that organizations have the tools to maintain a secure, compliant, and
resilient network environment. This multi-layered security approach enables businesses to
focus on their core operations while trusting that their data and resources are safeguarded
against unauthorized access and cyber threats.
Below, we explore the key network features of IBM Power Virtual Server relative to other
major cloud providers (AWS, Azure, and GCP), with a focus on VPC design, connectivity
options, network security, and hybrid and multi-cloud support.
1. Virtual Private Cloud (VPC) Structure and Customization
IBM Power Virtual Server: IBM Power Virtual Server offers VPCs that are logically isolated
environments within the IBM Cloud, designed to provide flexible control over network
configurations, including IP address ranges, subnets, and routing. Power Virtual Server
allows organizations to segment their workloads effectively with private and public
subnets, which is especially valuable for isolating mission-critical applications from public
internet access.
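The segmentation described above can be sketched with a short Python example that splits a hypothetical VPC address range into equal subnets, for example to separate a private tier from a public one. The CIDR values are illustrative assumptions, not Power Virtual Server defaults:

```python
import ipaddress

# Hypothetical VPC range; real address plans are chosen per deployment.
vpc = ipaddress.ip_network("10.10.0.0/24")

# Split the /24 into four /26 subnets (e.g., two private, two public tiers).
subnets = list(vpc.subnets(new_prefix=26))

for net in subnets:
    print(net)  # prints 10.10.0.0/26 through 10.10.0.192/26
```

The same `ipaddress` arithmetic is useful when planning non-overlapping ranges for multiple VPCs that will later be connected through a transit hub.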
AWS: AWS VPCs offer similarly robust controls, allowing users to create isolated
networks, assign CIDR blocks, and configure subnets and routing tables. AWS VPCs are
highly customizable and integrate seamlessly with a wide range of AWS services, but are
generally optimized for x86 workloads.
Azure: Azure VNet provides comparable features to IBM Power Virtual Server VPCs,
including network segmentation, IP address management, and routing. Azure VNets are
also highly integrated with Azure services but are predominantly geared toward Windows
and Linux workloads.
Google Cloud Platform (GCP): GCP VPCs offer flexible, global networks that allow for
multi-region deployments within a single VPC. GCP also supports custom routing and IP
ranges similar to Power Virtual Server, but is generally optimized for Google's cloud-native
services rather than Power workloads.
IBM Power Virtual Server offers VPC capabilities tailored to IBM Power workloads,
providing essential controls for legacy and high-compute applications. While AWS, Azure,
and GCP also offer customizable VPCs, Power Virtual Server stands out by natively
supporting AIX and IBM i workloads, which is not available in other cloud providers.
2. Connectivity Options (Direct Link, VPN, and Hybrid Cloud Support)
IBM Power Virtual Server: Power Virtual Server offers IBM Cloud Direct Link, which
provides high-speed, dedicated connectivity between on-premises data centers and IBM
Cloud, bypassing the public internet for secure, low-latency connections. Power Virtual
Server also supports VPN connections for secure data transmission, making it well-suited
for hybrid cloud deployments. Additionally, Power Virtual Server integrates with IBM Cloud
Transit Gateway for connecting multiple VPCs or regions.
AWS: AWS provides Direct Connect, a similar service to IBM Direct Link, which enables
private connectivity to AWS cloud resources. AWS also offers VPN solutions for secure
connections and AWS Transit Gateway for inter-VPC routing. AWS hybrid cloud solutions
integrate well with on-premises systems through AWS Outposts, but AWS lacks native
support for IBM Power workloads.
IBM Power Virtual Server provides comparable connectivity options with a strong emphasis
on integrating IBM Power workloads into hybrid cloud environments, particularly through IBM
Direct Link and VPN solutions. While AWS, Azure, and GCP offer robust private connectivity
options, Power Virtual Server is uniquely suited to companies with existing IBM Power
infrastructure.
3. Network Security Features
IBM Power Virtual Server: Power Virtual Server incorporates comprehensive security
features, including security groups, network ACLs, encryption for data at rest and in
transit, and IAM for role-based access control. Additionally, IBM Power Virtual Server
supports compliance with industry standards, making it suitable for highly regulated
industries. Integration with IBM Cloud Security Advisor provides centralized monitoring
and security insights, enhancing security management.
AWS: AWS offers a broad range of security features, including security groups, network
ACLs, and extensive encryption options. AWS Identity and Access Management (IAM)
provides fine-grained access controls, and services like AWS Shield and GuardDuty offer
advanced security and threat detection. AWS is well-established in meeting regulatory
requirements but lacks native IBM Power support.
Azure: Azure provides security features like Network Security Groups (NSGs), encryption,
and Azure Active Directory for access control. Azure Security Center offers monitoring and
threat detection similar to IBM Cloud Security Advisor. Azure is highly compliant with
regulatory standards and is widely used in regulated sectors, but it is not specifically
optimized for IBM Power workloads.
GCP: GCP's security offerings include firewall rules, encryption, and Identity and Access
Management (IAM). Google Cloud Armor provides protection against DDoS attacks, and
Cloud Security Command Center offers monitoring and security insights. GCP focuses on
x86 workloads and cloud-native services, with no specific optimizations for IBM Power.
IBM Power Virtual Server's security features are comparable to those of AWS, Azure, and
GCP, with the added advantage of IBM's experience in supporting mission-critical IBM Power
workloads. Power Virtual Server's security and compliance capabilities make it especially
suitable for industries with stringent regulatory requirements, while AWS, Azure, and GCP
provide more generalized security options for x86 and cloud-native workloads.
4. Hybrid and Multi-Cloud Integration
IBM Power Virtual Server: IBM Power Virtual Server supports hybrid cloud deployments
by integrating seamlessly with IBM Cloud as well as on-premises IBM Power systems. IBM
Cloud Satellite extends IBM Cloud services to any location, making it easier to deploy and
manage Power Virtual Server in a multi-cloud environment. IBM's partnership with cloud
providers like AWS enables multi-cloud strategies, though Power Virtual Server itself is
hosted within IBM Cloud.
AWS: AWS has extensive multi-cloud capabilities through its support for third-party tools
and integrations. AWS Outposts provides on-premises cloud solutions similar to IBM
Cloud Satellite.
IBM Power Virtual Server is uniquely positioned for IBM Power hybrid and multi-cloud
scenarios, thanks to its native support for IBM Power workloads and integration with IBM
Cloud Satellite. While AWS, Azure, and GCP provide strong multi-cloud capabilities, Power
Virtual Server stands out as the only option specifically optimized for IBM Power applications.
5. Performance and Scalability
IBM Power Virtual Server: IBM Power Virtual Server is optimized for high-performance,
compute-intensive workloads typical of IBM Power systems, with specific optimizations for
AIX, IBM i, and Linux. Power Virtual Server offers scalable options within IBM Cloud, along
with Direct Link and high-throughput networking capabilities to support demanding
enterprise applications.
AWS: AWS offers scalable networking solutions and performance features across multiple
instance types optimized for high-performance computing. However, AWS's compute
options are generally optimized for x86 architectures, limiting direct compatibility with IBM
Power workloads.
Azure: Azure provides scalable solutions across a range of instance types for Windows
and Linux workloads. Azure's high-performance networking and ExpressRoute integration
support demanding applications but lack the specific optimizations for IBM Power
environments.
GCP: GCP is well-suited for scalable, high-performance cloud-native applications, with
support for global load balancing and dedicated instances for compute-intensive
workloads. However, GCP's offerings focus primarily on x86 and containerized workloads,
not IBM Power.
IBM Power Virtual Server is the preferred choice for organizations seeking high performance
for IBM Power-specific workloads. While AWS, Azure, and GCP offer scalable and
high-performance solutions, Power Virtual Server's optimizations for AIX, IBM i, and Linux on
Power distinguish it as the best option for companies heavily invested in IBM Power
architecture.
IBM Power Virtual Server provides a robust cloud platform with networking features tailored to
IBM Power workloads, delivering secure, high-performance, and flexible options for hybrid
and multi-cloud deployments. Compared to AWS, Azure, and GCP, Power Virtual Server
excels in its specialized support for IBM Power systems, making it the top choice for
organizations running mission-critical applications on AIX, IBM i, or Linux on Power. While
other providers offer competitive features, Power Virtual Server remains unmatched in its
ability to integrate IBM Power workloads seamlessly into a modern cloud environment,
ensuring enterprise-grade security, scalability, and compliance tailored to the needs of IBM
Power customers.
Power Virtual Server network scenarios can be broadly categorized into non-production,
proof-of-concept environments, and production deployments, each with specific requirements
and considerations. In non-production setups, the focus is often on rapid deployment and
cost-effective solutions to validate applications. Production scenarios, on the other hand,
demand high availability, security, and optimized performance for stable, uninterrupted
operations. Additionally, specialized components such as Power Edge Router (PER), Transit
Gateway, and VPN on VPC provide advanced capabilities to meet the needs of enterprise
environments, including local and global routing, hybrid connectivity, and options for seamless
migration from legacy systems.
In this section, we will explore different Power Virtual Server network scenarios in detail,
examining configurations for proof-of-concept (PoC) environments, production scenarios, and
specific elements like PER, Transit Gateway, and enhanced connectivity options. By
understanding these scenarios, organizations can make informed decisions about how best
to utilize Power Virtual Server's network features to optimize performance, maintain security,
and scale with confidence in a cloud-first world.
This PoC deployment enabled the financial services company to confidently migrate its
application to IBM Power Virtual Server, having verified performance, optimized
configurations, and tested the migration process, all while maintaining production uptime.
Multi-Region Architecture: An e-commerce business with a global user base deploys Power
Virtual Server instances in North America, Europe, and Asia-Pacific regions. Each region has
its own redundant network setup, including VPN and Direct Link connections. This
configuration reduces latency for regional users and maintains service even if one region
experiences an outage.
Advantages:
Optimized Traffic Routing: PER routes traffic efficiently across subnets and VPCs,
supporting high-performance networking.
Enhanced Security: PER integrates with security policies, adding a layer of protection for
sensitive data.
Configuration Steps and Routing Capabilities
Configuring PER involves defining routing rules for different subnets and connecting it to
external networks like on-premises systems or third-party cloud services. Routing tables can
be customized to manage the flow of data and prioritize traffic based on business needs.
Transit Gateway
Local and Global Routing Capabilities
IBM Cloud Transit Gateway enables centralized routing across VPCs within IBM Cloud. This
service simplifies network management by acting as a central hub for routing, allowing
organizations to connect multiple VPCs efficiently.
Integration Benefits with Other Cloud Services
Transit Gateway integrates with IBM Direct Link and VPN, enabling hybrid connectivity to
on-premises environments and multi-cloud setups. This integration allows businesses to build
complex, flexible network topologies that support applications spanning multiple clouds.
Impact: The migration to VPN on VPC provides enhanced security, better performance, and
easier integration with other IBM Cloud services, making it ideal for production workloads that
require robust security.
Conclusion
Production scenarios in IBM Power Virtual Server benefit from thoughtful network
architecture, robust security, and high availability. By leveraging features like Power Edge
Router, Transit Gateway, VPN on VPC, and flexible bandwidth options, organizations can
build scalable, secure, and resilient cloud environments tailored to meet the demands of their
mission-critical applications.
IBM Power Systems Virtual Server provides different tools to fit operational needs and
includes some constraints that are related to its cloud nature.
This chapter describes different backup options that can be used for normal operations,
makes suggestions for backup strategies, and discusses the differences between methods.
Third-party software solutions for IBM i backup are also available, but not all of them can back
up using network facilities.
Backing up over a network connection is typically not as fast as backing up to physical tape,
so consider that backup windows might be longer in the cloud than in your on-premises
environment. Faster connections increase the transfer rate, but backup windows still must be
sized appropriately for this environment.
IBM Cloud Object Storage can support storing large amounts of data from multiple backups.
This parallel storage system provides concurrent access from anywhere, reducing
bottlenecks and offering almost endless scalability. IBM Cloud Object Storage stores data in
primary or auxiliary storage layers, allowing different service levels for backups and archives
and reducing costs when needed.
IBM Power Systems Virtual Server provides snapshots and clones, which are used to take a
complete or partial copy of disks to improve recovery. Disks also can be copied by using a
gold image and stored on IBM Cloud Object Storage for DR activities, which simplifies and
improves the full system backup procedure.
When sizing buckets (Cloud Object Storage containers) for your backup environment, the
following issues must be considered:
Size and frequency of ingress and egress requests
Amount of data that is transferred
Type of communications between the server instance and IBM Cloud Object Storage
Needed redundancy
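As a rough illustration of the sizing considerations above, the required bucket capacity can be estimated from the full-backup size, the daily change rate, and the retention period. The helper function and all figures below are hypothetical, not IBM sizing guidance:

```python
def estimate_bucket_gb(full_gb, daily_change_gb, fulls_retained, diffs_retained):
    """Estimate Cloud Object Storage capacity for a simple retention scheme.

    Assumes one full backup per retention cycle plus daily differential
    backups, where each differential contains all changes since the last
    full backup. Illustrative model only.
    """
    fulls = full_gb * fulls_retained
    # Differential k holds roughly k days of changes: 1 + 2 + ... + n days.
    diffs = daily_change_gb * sum(range(1, diffs_retained + 1))
    return fulls + diffs

# Example: 500 GB full image, ~20 GB changed per day,
# keep 4 weekly fulls and 6 daily differentials.
print(estimate_bucket_gb(500, 20, 4, 6))  # 2420
```

A real estimate would also account for compression, request charges, and egress traffic for restores.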
Using server-side data compression, which can use IBM Power Systems Virtual Server
hardware compression and uncapped capacity when available, can reduce the amount of
data that is transferred and stored on IBM Cloud Object Storage. This reduction in data also
reduces transfer times and storage costs.
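The effect of compressing data before transfer can be demonstrated with Python's standard zlib module, which stands in here for whatever compression the backup tooling actually applies; the sample payload is artificial:

```python
import zlib

# Artificial, highly repetitive payload standing in for backup data.
payload = b"IBM i journal receiver data " * 2000

# Compress before "transfer"; fewer bytes means a shorter backup window
# and lower object storage costs.
compressed = zlib.compress(payload, level=6)

ratio = len(compressed) / len(payload)
print(f"{len(payload)} -> {len(compressed)} bytes ({ratio:.1%})")
assert len(compressed) < len(payload)
```

Real-world ratios depend heavily on the data; already-compressed or encrypted data gains little.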
The use of incremental and differential backups, which is supported by IBM i and other
backup solutions, also reduces the amount of data that must be saved and stored. This is
one of the most commonly used mechanisms when data is backed up to the cloud. By using
differential backups, the backup window and size of backup media can be reduced
dramatically. However, the time it takes to restore increases because the process starts with
the full backup and continues with partial increments. It is recommended that differential
backups are combined with full backups once a week or once a month, and with full disk
clones or Disaster Recovery (DR) software.
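The restore-time trade-off described above can be modeled in a few lines. This is an illustrative model only; a real IBM BRMS recovery is driven by its recovery report, and the function and save-set names below are hypothetical:

```python
def restore_chain(kind, full, changes):
    """Return the save sets that must be restored, oldest first.

    kind: 'differential' (each set holds all changes since the full backup)
          or 'incremental' (each set holds only that day's changes).
    """
    if kind == "differential":
        # Only the full backup plus the newest differential are needed.
        return [full, changes[-1]] if changes else [full]
    # Every increment since the full backup must be replayed in order.
    return [full] + list(changes)

days = ["mon", "tue", "wed"]
print(restore_chain("differential", "sun_full", days))  # ['sun_full', 'wed']
print(restore_chain("incremental", "sun_full", days))   # ['sun_full', 'mon', 'tue', 'wed']
```

Differentials keep the restore chain short at the cost of each daily save growing through the week; incrementals stay small daily but lengthen the restore.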
The limited capability to start an instance from physical or virtual media, along with the
pre-processing and post-processing requirements, makes it difficult to use a backup and
restore process for your disaster recovery strategy.
You can also recover your system fully during a disaster or failure, or restore single objects or
libraries from your save media. IBM BRMS can also perform some daily maintenance
activities that are related to your backup routine.
In addition to these backup and recovery functions, IBM BRMS can support and manage an
unlimited number of media, shared tape devices, automated tape libraries, virtual tape
devices, optical devices, and IBM Storage Protect (previously IBM Tivoli® Storage Manager)
servers. IBM BRMS enables you to track all of your media from creation to expiration. You no
longer must track which items are on which volumes, nor be concerned that you might
accidentally write over active data.
As your business needs change and grow, you can add functions to the IBM BRMS base
product by purchasing and installing other options.
Note: For more information about IBM BRMS, see Systems management Backup,
Recovery, and Media Services for i.
Chapter 4. Back up for IBM i on IBM Power Systems Virtual Server 115
You can work with resources and files directly by using Cloud Storage Solutions commands,
or your applications can work with resources and files by using the Cloud Storage Solutions
API.
Cloud Storage Solutions passes information about file transfers to and from the cloud to a
registered IBM i exit point. To have your applications receive that information, you can register
the applications as exit programs and associate them with the Cloud Storage Solutions exit
point.
Important: You can add the IBM Cloud Storage Solutions license to your IBM i instance by
selecting the checkbox in the instance creation wizard.
IBM BRMS can be used to transfer virtual save media, from tape or optical image catalogs, to
and from the cloud by using the IBM Cloud Storage Solutions for i (5733ICC) product. IBM
Cloud Storage Solutions for i allows cloud connector resources to be defined for cloud storage
providers, such as IBM Cloud Object Storage with the S3 protocol, and for private interfaces,
such as File Transfer Protocol (FTP).
IBM BRMS creates storage locations for each cloud resource that is defined on a system.
When virtual media is moved to a cloud storage location, the media is transferred to the cloud
by using the cloud resource. When that media is moved from a cloud location, the media is
transferred back to the IBM i system. Media is also automatically transferred back to the
system during a restore when no local save media is available for the restore.
If FalconStor StorSafe VTL is used, a VM instance that uses Red Hat Enterprise Linux is
provisioned by using the “FalconStor StorSafe VTL for Power Virtual Server Cloud” option
from the catalog.
– The required resources should be ordered based on the FalconStor Sizing Calculator.
For more information about IBM Cloud Storage Solutions for i, see IBM Cloud Storage
Solutions for i User’s Guide.
Terminology
The following terminology is important to understand:
Cloud location: An IBM BRMS storage location that is associated with a Cloud Storage
Solutions cloud resource.
Cloud resource: A Cloud Storage Solutions cloud resource.
Location: An IBM BRMS storage location.
Media: Used in Cloud Storage Solutions topics to refer to virtual media from an image
catalog.
Move and movement: Refer to media that changes from one IBM BRMS storage location to
another. This logical movement is reflected in IBM BRMS databases only; it does not imply
that the media is physically transferred to other storage.
System: An IBM i system that uses Cloud Storage Solutions.
Transfer: Refers to media that physically moves from IBM i storage to storage that is
associated with a cloud connector. IBM BRMS media movement to or from a cloud
location causes media to be transferred to or from a cloud resource.
Creates local or remote directories when needed.
Tracks transfer progress.
Identifies server errors.
Provides the WRKSTSICC tool to identify active, failed, and successful transfers, and show
progress of active transfers:
– Operations are run in jobs
– View the status of those jobs
– Work with those jobs (for example, end)
Uses the CRTS3RICC command to create a Cloud Storage Solutions AWS S3 or IBM Cloud
Object Storage resource. A resource defines a cloud server location and the credentials
that are needed to access that location. After a resource is created, the files can be copied
between IFS directories and the cloud server location.
Uses the CHGS3RICC command to change an AWS S3 or IBM Cloud Object Storage
resource. A resource defines an AWS S3 or IBM Cloud Object Storage cloud server
location and the credentials that are needed to access that location. It also changes a
resource to use different credentials to access the same bucket, or to specify a different
bucket.
Uses the DSPS3RICC command to display an AWS S3 or IBM Cloud Object Storage
resource. A resource defines an AWS S3 or IBM Cloud Object Storage cloud server
location and the credentials that are needed to access that location.
Overrides database files to the scope of the activation group so that the user does not
need to be concerned about library lists.
Implements APIs with Qlg_Path_Name_T path name formats so that native IBM i programs
can work seamlessly with the APIs.
Provides synchronous or asynchronous copy and delete operations.
Seamlessly handles CCSID string conversions (IBM i interfaces work in EBCDIC; IBM
Cloud Object Storage and FTP Linux interfaces work in UTF-8).
Works with the native IBM i messaging system to return server errors as diagnostic or
escape messages.
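The EBCDIC/UTF-8 conversion mentioned above can be illustrated with Python's built-in cp037 codec (CCSID 37, one common single-byte EBCDIC code page); the actual CCSID in a real job depends on the system and interface configuration:

```python
text = "PAYROLL.LIB"

# Encode the same string in an EBCDIC code page and in UTF-8.
ebcdic = text.encode("cp037")   # CCSID 37, a common EBCDIC code page
utf8 = text.encode("utf-8")

assert ebcdic != utf8                  # the byte representations differ
assert ebcdic.decode("cp037") == text  # round-trip back from EBCDIC
print(ebcdic.hex())
```

Without such conversion, text written by an EBCDIC program would appear garbled to UTF-8 consumers such as object storage tooling, which is why the product handles it transparently.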
For more information about the IBM i Cloud storage solution, see the following resources:
Cloud Storage Solutions for i 5733ICC Support Documentation summary
IBM BRMS Cloud Education videos:
– Automatic Transfers of Media to Cloud Storage
– User Initiated Transfers of Media to Cloud Storage
IBM Cloud Storage Solutions for i must be present with at least one Cloud Resource.
All required PTFs must be present. See link references below for additional information.
When you provide the cloud resource name (that is, the cloud storage resource name), IBM
BRMS performs the following tasks:
Creates the virtual tape
Shows the image catalog
When the policy is set, the IBM BRMS turnkey automation runs Cloud Storage Solutions
commands automatically.
IBM BRMS creates the following objects when it detects the IBM Cloud Object Storage or
FTP Resource Name:
Media Class: One class for Virtual Tape (if MSE is on the system) and one class for Virtual
Optical.
Storage Location: Based on the Resource Name.
Move Policy: Based on Resource Name.
Media Policy: Based on Resource Name.
Four Backup Control Groups:
– QCLDBIPLnn: Backs up what is minimally needed for a system D-IPL (must be burned to
a DVD).
– QCLDBSYSnn: Backs up all system data, except *SAVSYS. Paired with QCLDBIPLnn.
– QCLDBUSRnn: Backs up all user data.
– QCLDBGRPnn: Backs up what is minimally needed to and from the cloud, except *SAVSYS.
Paired with QCLDBUSRnn (must be burned to a DVD).
Important: You can use any of these control groups as templates to create your own, but
you must preserve the prefix.
4.2.5 Backing up the IBM i system to the cloud
IBM BRMS stores system backup media in cloud locations the same way it stores media in
physical media storage devices.
This backup solution is an example of how to use IBM BRMS with IBM Cloud Storage
Solutions for i to save your entire system to virtual media in the cloud. Consider the following
points:
If IBM Cloud Storage Solutions for i V1.2.0 is used with compression or encryption, it is
not possible to recover the system from the cloud if a disaster occurs.
The nn in the control group name is a number that IBM BRMS assigns to the connector for
which the control group was created.
Backup strategies often require full system backups at specific intervals and daily
incremental backups to capture changes. It is assumed in this example that a full backup
of the system was obtained by running the QCLDBSYSnn and QCLDBIPLnn control groups.
When IBM BRMS is used to perform a system backup to media at a conventional location,
control groups run in succession to save specific objects and data. The control groups obtain
the data that is required to restore the entire system.
The use of IBM BRMS to back up a system and store the data to media in the cloud requires
a similar process. Special control groups that are created by IBM BRMS store the media in
the cloud.
System backups to the cloud follow the same procedure as transferring data to the cloud. The
only difference is that the entire system is backed up at a specific point by using the cloud
resource control groups. As with backups to a conventional location, the cloud backup must
specifically save the system and user group objects, and other production data can be saved
on other media.
Beginning with the full backup control groups, QCLDBSYSnn and QCLDBIPLnn, complete the
following steps:
1. Sign onto the console.
2. Verify that the cloud location is available by using the WRKLOCBRM command.
3. Begin with the QCLDBSYSnn control group by running the STRBKUBRM CTLGRP(QCLDBSYS01)
SBMJOB(*NO) command to start the system backup.
4. Run the QCLDBIPLnn control group by using the STRBKUBRM CTLGRP(QCLDBIPL01)
SBMJOB(*NO) command.
5. Begin the backup by running the STRBKUBRM CTLGRP(QCLDBUSR01) SBMJOB(*NO) command.
6. Run the STRBKUBRM CTLGRP(QCLDBGRP01) SBMJOB(*NO) command.
7. After these backups complete, review the job logs to ensure that the full backup was
successful.
After obtaining a full backup of a system, move to a backup plan that is similar to the following
example:
On Sunday, run the following commands to obtain a full backup of the system:
– STRBKUBRM CTLGRP(QCLDBSYS01) SBMJOB(*NO)
– STRBKUBRM CTLGRP(QCLDBIPL01) SBMJOB(*NO)
Important: It is critical to run these cloud control groups in the order that is indicated here;
otherwise, all necessary media information is not available to perform a recovery.
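Because running the cloud control groups out of order leaves the recovery media information incomplete, a small wrapper could enforce the sequence before building the commands. This is a hypothetical sketch; in practice, the commands are entered at the IBM i console, not generated by a script:

```python
# Required run order for the turnkey cloud control groups (from the example).
REQUIRED_ORDER = ["QCLDBSYS01", "QCLDBIPL01", "QCLDBUSR01", "QCLDBGRP01"]

def build_commands(groups):
    """Build STRBKUBRM commands, refusing any out-of-order run sequence.

    Hypothetical helper for illustration only.
    """
    if groups != REQUIRED_ORDER[: len(groups)]:
        raise ValueError(f"control groups must run in order {REQUIRED_ORDER}")
    return [f"STRBKUBRM CTLGRP({g}) SBMJOB(*NO)" for g in groups]

print(build_commands(["QCLDBSYS01", "QCLDBIPL01"]))
```

The same ordering discipline applies whether the groups are run interactively or through a scheduler.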
As a part of your backup process, it can be advantageous to have the backup media quickly
accessible on the system for a period after the transfer. If cloud resources are the only
location where your system backup is stored, consider copying your QCLDBGRPnn and
QCLDBIPLnn control group data to optical media and storing the discs with other physical
media. Keeping this backup data accessible allows restores to be performed from the media
without the time and expense of transferring the media that is in the cloud back to the system.
IBM BRMS permits media that is associated with a move policy to be retained on the system
for a period after it was transferred to the cloud. This retention is done by changing the Retain
media field of the move policy to keep the media for a specified number of days after the
move. Use the Work with IBM BRMS policies WRKPCYBRM *MOV command.
The same restrictions and precautions that must be observed when backing up your entire
system to the cloud also apply when customized control groups are used to perform the
system backup to the cloud.
In addition, when creating custom control groups for a system backup to the cloud, some
restrictions exist on the naming conventions. Consider the following points when control
groups are copied and modified:
The new control group name must begin with a QCLD prefix to enable automatic transfers to
the cloud.
The new control group names cannot begin with QCLDB, QCLDUIPL, or QCLDUGRP.
When IBM BRMS is used to perform a system backup to media in a cloud location, the control
group or multiple control groups run in succession and save specific objects and data.
In a system backup that uses the turnkey settings, IBM BRMS stores the data in the cloud in
volumes of a default size that is enforced by the media class.
The IBM BRMS implementation enforces the use of the default media class QCLDVRTOPT with a
value of IMGSIZ(*DVD4700) for virtual optical media volumes. Copying a control group for use
as a modified user control group named, for example, QCLDUGRPnn, also results in the same
implementation that enforces the default media class value.
If this default size is too small for the data that users must store, modified custom control
groups must be created with larger media sizes. To use a user-defined media size that is
controlled by a user-defined media class, a control group must be created and used that does
not follow the IBM BRMS turnkey automated control group naming conventions.
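To see why the enforced *DVD4700 image size (about 4.7 GB per virtual optical volume) matters for large backups, a quick estimate of the volume count can be computed; the backup sizes used below are illustrative:

```python
import math

def volumes_needed(backup_gb, volume_gb=4.7):
    """Number of virtual optical volumes for a backup at a given image size.

    4.7 GB reflects the enforced *DVD4700 default; a custom media class
    with a larger image size reduces the volume count. Figures illustrative.
    """
    return math.ceil(backup_gb / volume_gb)

print(volumes_needed(500))        # 107 volumes at the DVD-sized default
print(volumes_needed(500, 25.0))  # 20 volumes with a larger custom size
```

Fewer, larger volumes generally mean fewer transfer operations to track, at the cost of larger individual object transfers.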
Note: Custom control groups cannot use the QCLDU prefix; instead, adopt a new name with
the QCLDC prefix.
To customize the system backup and use modified control groups that can use nondefault
media sizes, complete the following steps:
1. Sign onto the console.
2. Verify that the cloud location is available by using the WRKLOCBRM command.
3. Copy control group QCLDBSYSnn to a new custom control group and update any entries as
needed. Also, rename the control group; for example, QCLDCSYS01.
4. Copy the default cloud-named media policy to a new custom name and set the desired
cloud virtual media class that is to be used.
5. Run the QCLDCSYS01 control group by using the STRBKUBRM CTLGRP(QCLDCSYS01)
SBMJOB(*NO) command to start the system backup.
6. Copy the control group QCLDBIPL01 to a new custom control group and update any entries
as needed. Also, rename the control group; for example, QCLDCIPL01.
7. Update the custom control group attributes to use the new, modified media policy.
8. Run the QCLDCIPL01 control group with STRBKUBRM CTLGRP(QCLDCIPL01) SBMJOB(*NO).
9. Copy the control group QCLDBUSR01 to a new custom control group and update any entries
as needed. Also, rename the control group; for example, QCLDCUSR01.
10.Update the custom control group attributes to use the new, modified media policy. Begin a
backup that uses this control group by running the STRBKUBRM CTLGRP(QCLDCUSR01)
SBMJOB(*NO) command.
11.Copy the control group QCLDBGRP01 to a new custom control group and update any entries
as needed. Also, rename control group; for example, QCLDCGRP01.
12.Update the custom control group attributes to use the new, modified media policy.
13.Run a backup that uses this control group by running the STRBKUBRM CTLGRP(QCLDCGRP01)
SBMJOB(*NO) command.
14.After these backups complete, review the job logs to ensure that the backup was
successful.
15.Run the cloud control groups as shown in the previous example.
As a part of your backup process, it can be advantageous to have the backup media quickly
accessible on the system for a period after the transfer.
If cloud resources are the only location where your system backup is stored, consider copying
your QCLDCGRP01 and QCLDCIPL01 control group data to physical media. Keeping this backup
data accessible allows restores to be performed from the media without the time and expense
of transferring the media that is in the cloud back to the system.
3. Sign on by using your Dedicated Service Tools (DST) username and password, or select
PF18 to bypass the use of the DST tool. Select Next and then, press PF18.
Note: The console times out after 5 minutes of inactivity. You must close your console
browser and start a new console connection.
If you see a break message during the backup process, press Enter to return to the
window in which you entered the STRBKUBRM command so that you can see the progress
of the backup.
4. Prepare to run the IBM BRMS control group QCLDBSYS01: put the system in a restricted
state by running the ENDSBS SBS(*ALL) DELAY(120) command.
5. Display the QSYSOPR message queue by running the DSPMSG QSYSOPR command and look
for the following messages:
– System ended to restricted condition.
– A request to end TCP/IP has completed.
6. Change the subsystems to process for control group QCLDBSYS01:
a. Use the WRKCTLGBRM command, and find QCLDBSYS01.
b. Select Option 9=Subsystems to process.
c. Change the restart to *NO for Seq 10 Subsystem *ALL, as shown in Figure 4-2 on
page 124.
Figure 4-2 Change restart to *No on subsystem to process
7. Run the First backup from the console: STRBKUBRM CTLGRP(QCLDBSYS01) SBMJOB(*NO).
8. Check the backup for errors. It is normal to have some errors (see Figure 4-3), including
the following examples:
– Objects not saved (some objects are not required for the recovery).
– Media not transferred (you complete this step manually after the second backup).
9. Check the subsystems after the backup completes. You should see only subsystem QCTL in
RSTD status. If it is not in this status, end all subsystems again by running the ENDSBS
SBS(*ALL) DELAY(120) command.
10.Change IBM BRMS control group QCLDBIPL01:
a. Run the WRKCTLGBRM command.
b. Select Option 8=Change attributes.
c. Page down, change the Automatically backup media information to *LIB and the
Append to media to *NO.
d. Select Option 9=Subsystems to process.
11.Issue the second backup from the console STRBKUBRM CTLGRP(QCLDBIPL01) SBMJOB(*NO).
12.Check the backup for errors. It is normal to have some errors, including the following
examples:
– Objects not saved (some objects are not required for the recovery).
– Media not transferred (you complete this step manually after the second backup).
13.Identify the volumes that were used by both backups, QCLDBSYS01 and QCLDBIPL01, and
transfer them to IBM Cloud Object Storage.
14.Check the status of the transfer by running the WRKSTSICC STATUS(*ALL) command (a
status of Failed is normal). The volumes are transferred in the next step.
15.Identify which volumes were used for the backups: WRKMEDBRM TYPE(*TRF), as shown in
Figure 4-5.
16.Transfer the volumes to IBM Cloud Object Storage. Run the STRMNTBRM command and the
WRKSTSICC STATUS(*ALL) command. You see the volume name, status, and complete
percentage for each file transfer. Wait until all volumes are successfully completed before
proceeding to the next step.
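The wait-and-recheck pattern in these steps can be sketched as a generic polling loop. This is only an analogy to rerunning WRKSTSICC STATUS(*ALL) until every transfer shows Success; the status file and job here are illustrative stand-ins, not the real transfer mechanism.

```shell
# Sketch: poll a status source until it reports success, then continue.
statusfile=$(mktemp)
echo "transferring" > "$statusfile"
( sleep 1; echo "success" > "$statusfile" ) &   # simulated transfer job

until grep -q "success" "$statusfile"; do
  sleep 1                                       # wait before checking again
done
echo "all volumes transferred"
```

The same idea applies on IBM i: repeat the status check on an interval and proceed only after every volume reports Success.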
17.Verify that all of the volumes that were used for the full-system backup no longer have a
status of *TRF by running the WRKMEDBRM TYPE(*TRF) command. No volumes should be listed.
18.As with other recoveries that are performed by using IBM BRMS, a recovery report is used
to assist with successful recoveries from save media that was transferred to the cloud. To
generate a report for recovery from the cloud, run the following command:
STRRCYBRM OPTION(*CTLGRP) ACTION(*REPORT) CTLGRP((QCLDBSYS01 1) (QCLDBIPL01 2))
Important: It is important to review the recovery report to ensure that it is complete. Any
media that was produced during the backup process but was not successfully transferred to
the cloud is not included in the recovery report.
The CTLGRP and PERIOD parameters that were specified in the STRRCYBRM command help
identify objects that were saved to volumes that were not transferred to the cloud. If objects
are on volumes that were not included in the recovery report, they are listed in a missing
objects Attention section that is near the top of the report.
After the recovery report is verified, store the report in a safe location so that it can be
referred to during a recovery.
Daily incremental backups can be run Monday - Saturday by using the following control
groups:
STRBKUBRM CTLGRP(QCLDBUSR01) SBMJOB(*NO)
STRBKUBRM CTLGRP(QCLDBGRP01) SBMJOB(*NO)
Figure 4-6 shows how to perform full system saves by using IBM BRMS, IBM Cloud Storage
(the IBM Cloud Storage GUI), and IBM Cloud Object Storage.
Figure 4-6 IBM i source that uses IBM BRMS and IBM Cloud Storage
System recoveries cannot be performed directly from IBM BRMS media that was transferred
to the cloud. To perform a system recovery from cloud media, special procedures must be
followed to create physical optical installation media.
The physical optical media contains SAVSYS data and objects from other libraries, such as
QUSRSYS, QBRM, QUSRBRM. After the optical media is used to restore Licensed Internal Code
(LIC), the operating system, and other required objects, subsequent restores can be
performed directly from the cloud media by using IBM BRMS.
Some media that is listed in your recovery report must be recovered from physical media.
Complete the following steps:
1. Locate the media that requires conversion to a physical copy. Use the volume identifiers to
locate:
– LIC
– Operating system objects
– IBM BRMS product and associated libraries
– User profiles and configuration data
– Required system libraries
Normally, these media are stored in the QIBM BRMS_XXXXXXXX directory. The files in this
directory use the same name as the media volume identifiers that are listed in your
recovery report.
2. By using your connection to the cloud, transfer this media to your system to write the files
to optical discs. A .iso extension must be added to the file name if it is required by the
image burning software that is used.
Note: Before restoring from media in the cloud, the IBM BRMS media database must
be updated. Cloud volumes must be registered with IBM BRMS.
3. Register the cloud exit programs by running ADDLIBLE LIB(QICC) and then the program
call QICC/REGEXTPTS ACTION(R).
4. Ensure that the media library name is correct for the Device prompt and enter *YES in the
Create parent directories prompt.
After this step, the physical media is not required. The system is now in a restricted state.
TCP/IP must be started to allow IBM BRMS to download the media that is required for
cloud recovery.
5. To start TCP/IP, enter the following commands:
STRTCP STRSVR(*NO) STRIFC(*NO) STRPTPPRF(*NO) STRIP6(*YES)
STRTCPIFC INTNETADR('nnn.nnn.nnn.nnn')
where ‘nnn.nnn.nnn.nnn’ is the internet address of the recovery system.
6. Press Enter.
IBM BRMS downloads the media from the cloud and begins recovering all remaining system
data. The restored system performs verification as a final stage of the recovery. To allow the
system to verify the system information, end TCP/IP by using the ENDTCP command.
After system verification is complete, restart TCP/IP by using the STRTCP command and then,
IPL the system.
4.2.8 Full-system recovery from the cloud using IBM i as an NFS server
In this section, we describe how to perform a full-system recovery from the cloud.
Setting up IBM i Network Install Server with NFS Server and NFS Client
To set up the IBM i network installation server with NFS server and NFS client, provision an
IBM i VSI in the target Power Systems Virtual Server location to be the NFS server.
The IBM i IP address and the IBM i service tools server (LAN console connection) IP
address cannot be the same.
The NFS server in IBM i 7.5 and later uses TCP instead of UDP ports and needs additional
settings to work as an installation server.
Note: IBM BRMS stores media in the cloud as files in the QIBM BRMS_XXXXXXXX
directory, where XXXXXXXX is the name of the system that performed the backups (see
Figure 4-7).
Figure 4-7 IBM BRMS storing media on IBM Cloud Object Storage
7. Check the status of the transfer jobs by using the WRKSTSICC STATUS(*ALL) command.
Then, after all volumes complete the transfer and show a status of Success, see the next
section, Creating virtual optical device on IBM i NFS Server.
5. Add the next two volumes by using the same name for the TOFILE; that is, the To image
file.
6. Load the image catalog, as shown in Figure 4-10.
13.Run the CHGTFTPA AUTOSTART(*YES) ALTSRCDIR('/install/sysipl') command to change
the TFTP Attributes.
14.Specify the alternative source directory where the volumes are stored.
15.Run the CHGAUT OBJ('/install/sysipl') USER(QTFTP) DTAAUT(*RX) SUBTREE(*ALL)
command to change Object Authority.
16.End the TCP Server TFTP by running the ENDTCPSVR SERVER(*TFTP) command.
17.Start the TCP Server TFTP by running the STRTCPSVR SERVER(*TFTP) command.
Note: The CMNxx resource must be on the same VLAN as your IBM i NFS server.
You also must end TCP/IP and vary off the line description that uses that CMNxx resource.
7. Configure Service Tools LAN Adapter (see Figure 4-12 on page 133):
– IP version allowed: IPV4.
– Internet address: Use a free IP address on the same subnet of the client (TARGET).
– Gateway router address: Use the same gateway on the client (TARGET).
– Subnet mask: Use the same.
Complete the following steps:
a. Select F7=Store.
b. Select F13=Deactivate.
c. Select F14=Activate.
Note: After you select F14=Activate, the adapter restarts and might not be ready
immediately.
8. Work with IP IPv4 Connection Status and verify the status of port 3000 status:
a. Run the NETSTAT command.
b. Select Option 3=Work with IPv4 connection status.
c. Select F14=Display port numbers.
d. Verify that port number 3000 (as-sts) is running, which is Service Tools Server.
9. On the client server, create the optical device by running the CRTDEVOPT command:
a. Select F4 to prompt.
b. Set the local internet address as *SRVLAN.
c. Set the remote internet address as the IP address of the IBM i VSI NFS Server.
d. Set the network image directory as ‘/install/sysipl’.
10.Run the WRKCFGSTS *DEV INSTALL command. Select Option 1= Vary On for device
INSTALL.
11.Run the WRKIMGCLGE IMGCLG(*DEV) DEV(INSTALL) command to verify that you can access
the remote image catalog.
12.Verify the Catalog, Type, and Directory (/install/sysipl).
Verify the status of the volumes (Mounted or Loaded).
Installing LIC and operating system on IBM i client server from NFS
server
Complete the following steps to install the LIC and operating system on IBM i Power Virtual
Server on client server by using NFS server:
Warning: Before you begin the scratch installation on the (TARGET) IBM i, document all
your network information. You must re-create the network information after the installation
completes. For example, document the following information:
CFGTCP:
– Work with TCP/IP interfaces
– Work with TCP/IP routes
DSPLIND:
– CLOUDINIT0
– CLOUDINIT1
– CLOUDINIT2
WRKHDWRSC *CMN:
Display all of the available CMNxx resource information and document the location and
resource name:
– Resource name . . . . . . . : CMN03
– Location: U9009.22A.788D380-V5-C4-T1
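The documentation step in the warning above can be sketched as capturing current network settings to a file before the installation. This is a loose POSIX analogy to documenting CFGTCP, DSPLIND, and WRKHDWRSC *CMN output; the commands and file name are illustrative stand-ins, not IBM i commands.

```shell
# Sketch: record network configuration before a scratch installation so it
# can be re-created afterward.
netdoc=$(mktemp)
{
  echo "== interfaces =="
  ifconfig -a 2>/dev/null || ip addr 2>/dev/null || echo "(no interface tool)"
  echo "== routes =="
  netstat -rn 2>/dev/null || echo "(no route tool)"
} > "$netdoc"
echo "network information documented in $netdoc"
```

On IBM i, the equivalent is printing or copying the CFGTCP interface and route lists, the CLOUDINITx line descriptions, and the CMNxx resource names and locations.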
2. Install the licensed internal code by completing steps 4 - 6 that are described at this IBM
Documentation web page.
3. In the menu, a prompt to IPL or install the System is displayed. Select Option 2=Install
the operating system.
4. Select Option 5=Network device.
5. Configure the network device (see Figure 4-14 on page 135) by using the following
settings:
– Server IP: IP address of the SOURCE NFS IBM i Server.
– Path Name: Name of the Directory where the image volumes are located.
6. Check whether the language group is correct. Then, press Enter twice, and confirm.
7. Complete steps 19 - 28 at this IBM Documentation web page.
After the first login at the IBM i main menu, the System name is changed.
8. At the command line, enter GO LICPGM.
9. Select Option 10=Display Installed Licensed Programs.
The installed base IBM i and the libraries QGPL and QUSRSYS are at *BACKLEVEL.
10.Recover the IBM BRMS product and associated libraries on the IBM i Power Virtual Server
instance Client server by using NFS Server. On the Client Server Create Optical Device,
run the CRTDEVOPT command. Then, select F4 to prompt and use the following settings:
– Local internet address: *SRVLAN
– Remote internet address: IP address of the IBM i VSI NFS Server
– Network image directory: ‘/install/sysipl’
11.Run the WRKCFGSTS *DEV INSTALL command. Then, select Option 1= Vary On for device
INSTALL.
12.Verify that you can access the remote image catalog by running the following command:
WRKIMGCLGE IMGCLG(*DEV) DEV(INSTALL)
13.Verify the Catalog, Type, and Directory (/install/sysipl).
Verify the status of the volumes (Mounted or Loaded).
Tip: Your IBM BRMS Recovery Report is available in the QP1ARCY file.
14.Starting with STEP004: Recover the IBM BRMS Product and Associated Libraries in your
IBM BRMS Recovery Report, complete the following steps:
a. Run the following command to change the QSYSOPR message queue to prevent
messages that are not related to the recovery from interrupting the recovery process:
CHGMSGQ MSGQ(QSYSOPR) DLVRY(*NOTIFY) SEV(99)
b. Press Enter.
c. Recover the libraries by specifying the saved item, the name of the stand-alone device,
or media library that is used, and the volume identifiers and sequence numbers that
are listed. For type *FULL, run the following command:
RSTLIB SAVLIB(saved-item) DEV(device-name) VOL(volume-identifier)
OPTFILE('')
Example 4-1 shows the QBRM, QMSE, and QUSRBRM saved item.
In our example, run the commands that are shown in Example 4-2 to recover the IBM
BRMS libraries.
Example 4-5 Sample of report that contains item from a cloud backup
----- Attention --------------------------------------------------
THIS REPORT CONTAINS ITEMS FROM A CLOUD BACKUP.
PLEASE RUN THE FOLLOWING PROGRAM CALL TO SET UP THE CLOUD VOLUMES:
CALL QBRM/Q1AOLD PARM('CLOUD ' 'FIXDRVOL ' 'Q06990' 'Q07898' 'Q08807'
'Q13692' 'Q32656')
Note: You can find your volumes in the IBM BRMS report QP1A2RCY file.
20.Run the following command to restore a current version of your user profiles:
STRRCYBRM OPTION(*SYSTEM) ACTION(*RESTORE)
Then, press Enter.
Attention: Press F9 in the Select Recovery Items display to return to the Restore.
Ensure that the tape device name or media library name is correct for the Device prompt
and that the following prompts are specified:
– *SAVLIB for Restore to library
– *SAVASP for Auxiliary storage pool
– *YES for Create Parent Directories
If you are recovering to a different system or logical partition, specify the following prompts:
– *ALL for the Data base member option
– *COMPATIBLE for the Allow object differences
– *NONE for the System resource management
Figure 4-15 shows a sample of STRRCYBRM.
21.Select F9= Restore Command Defaults and make the changes that are shown in
Figure 4-16.
Note: If you receive a message that reads: Waiting for reply to message on message
queue QSYSOPR, select SysReq and then, type 6 to display QSYSOPR system messages.
If you are prompted to load volume Qxxxxx on INSTALL (C G), enter C to cancel and
continue with the restore.
24.Follow Step 011 in your IBM BRMS Recovery Report and run the following command:
CHGUSRPRF USRPRF(QSECOFR) PASSWORD(new-password)
26.Follow Step 013 in your IBM BRMS Recovery Report. Select Option 1=Select for all of
the “Saved Item” and then, press Enter.
27.Display the remaining objects during the restore (see Figure 4-18).
28.Work with TCP/IP interfaces and add internet addresses from target IBM i.
Important: Before you began the scratch installation on the (target) IBM i VSI,
document all your network information.
Note: Use the IP Address information that you documented from the target IBM i VM that
was created in the previous step. The following information is needed:
Internet address: IP address x.x.x.x
Subnet mask: 255.255.255.x
Line description: Use one of the three CLOUDINITx line descriptions that were restored
or use the same description that was documented.
29.Work with TCP/IP interfaces to start the interface by running the CFGTCP command:
a. Select Option 1=Work with TCP/IP interfaces.
b. Select Option 9=Start.
30.Find the Resource URI that was used for IBM Cloud Object Storage (it is where the
volumes are stored), as shown in Figure 4-19:
a. Work with IBM Cloud Storage Resources.
b. Run the WRKCFGICC command and then, press Enter.
c. Select Option 5=Display.
34.Move volumes from IBM Cloud Object Storage to the TARGET IBM i VSI Client:
a. Run the WRKMEDBRM command and then, press Enter.
Find your volumes that are listed in your IBM BRMS Recovery Report.
If your volume has a plus sign (+) to the right, it is part of a serial set.
b. Select Option 6=Work with serial set, as shown in Figure 4-20.
Notice that the volumes are in the IBM Cloud Object Storage location.
35.Select Option 8=Move, as shown in Figure 4-21 (all the volumes at the same time) and
then, press Enter.
36.Change the Storage location to *HOME and then press Enter, as shown in Figure 4-22.
37.Check the IBM Cloud Storage transfer status by running the WRKSTSICC STATUS(*ALL)
command:
Check that the status is Active.
The Oper column shows the operation that is being run; for example, Oper = FRMCLD
indicates a copy from the cloud.
After all of the jobs show a status of Success, you can continue with the IBM BRMS
restore.
38.Follow Step 014 in your IBM BRMS Recovery Report and run the following commands:
INZBRM OPTION(*DEVICE)
WRKDEVBRM
You see the Virtual Tape Device “TOR1CLDTAP *VRTTAP”.
39.Recover IBM Product Libraries on the IBM i Power Systems Virtual Server Instance Client
Server (target) by using IBM Cloud Object Storage:
Follow Step 017 in your IBM BRMS Recovery Report and run the following command:
STRRCYBRM OPTION(*IBM) ACTION(*RESTORE)
Note: Use the Restore Command Defaults settings to specify the correct Device
parameter and to change the Create parent directories prompt back to *NO.
40.Review the list of Recovery Items, as shown in Figure 4-23, and remove any that were
restored. Select Option 4=Remove and then, press Enter.
You can also see that the Volume Serial is from the optical media.
After you remove the items, they are removed from the list.
43.Recover User Libraries on the IBM i Power Systems Virtual Server Instance Client Server
(target) by using IBM Cloud Object Storage:
Follow Step 018 in your IBM BRMS Recovery Report by running the STRRCYBRM
OPTION(*ALLUSR) ACTION(*RESTORE) command.
44.Review the list of Recovery Items and remove any that were restored.
Select Option 4=Remove and then, press Enter.
You can also see that the Volume Serial is from the optical media.
Press F11=Object View (shows you which Control Group created the saved item).
Remove any items that were created by QCLDBIPL01 (see Figure 4-24).
After you remove the items, they are removed from the list.
48.Recover Directories and Files on the IBM i Power Systems Virtual Server Instance Client
Server (TARGET) by using IBM Cloud Object Storage.
Follow Step 020 in your IBM BRMS Recovery Report:
STRRCYBRM OPTION(*LNKLIST) ACTION(*RESTORE)
49.Review the list of Recovery Items and remove any that were restored:
a. Select Option 4=Remove and press Enter.
You can also see that the Volume Serial is from the optical media.
b. Press F11=Object View (shows you which Control Group created the saved item).
c. Remove any items that were created by QCLDBIPL01.
After you remove the items, they are removed from the list.
50.Select *LINK, the saved items:
a. Select Option 1=Select.
b. Press Enter to recover the saved items.
51.Follow Step 025 in your IBM BRMS Recovery Report:
UPDPTFINF
52.Follow Step 026 in your IBM BRMS Recovery Report:
RSTAUT USRPRF(*ALL)
53.Follow Step 027 in your IBM BRMS Recovery Report (restores your System Values):
UPDSYSINF LIB(QUSRSYS) TYPE(*SYSVAL)
54.Follow Step 030 in your IBM BRMS Recovery Report:
DSPJOBLOG JOB(*) OUTPUT(*PRINT)
55.Change IPL Attributes:
CHGIPLA STRRSTD(*YES)
After the IPL, you can verify the system.
56.Change System Value for QIPLTYPE:
WRKSYSVAL QIPLTYPE
Select Option 2=Change.
Select 0=Unattended IPL.
57.Follow Step 031 in your IBM BRMS Recovery Report:
PWRDWNSYS OPTION(*IMMED) RESTART(*YES)
Press F16=Confirm.
58.Complete the following steps at the IPL or Install the System menu:
a. Select Option 3=Use Dedicated Service Tools (DST).
b. Sign on to Dedicated Service Tools (DST).
c. Select Option 7=Start a service tool.
d. Select Option 7=Operator panel functions.
In IPL mode:
a. Select Option 2=Normal.
b. Press F8 to set the IPL attributes and restart the system.
c. Press Enter to confirm.
Note: Check that IPL source is set to 2 and the IPL mode is set to 2.
Figure 4-26 shows a summary of recovering an IBM i NFS client through an IBM i NFS server
by using a backup that was created with IBM BRMS and stored on IBM Cloud Object Storage.
Figure 4-26 Full system recovery from the cloud by using IBM i
4.3.1 Introduction
This method is the most popular backup and recovery method that is used by IBM i
customers at the time of this writing.
As a target device, use the image catalog facility with virtual optical or virtual tape media,
save files, or the new offering from IBM (see 4.4, “FalconStor overview” on page 161).
The virtual media that is produced must be stored and archived. IBM Cloud Object Storage is
the native solution for saving backups with almost endless capacity.
You can communicate with IBM Cloud Object Storage by using the IBM Cloud Storage
Solutions for i software offering, or by using the open source capabilities in the IBM i
operating system.
In this section, we discuss how to back up data to a save file or an image catalog by using
native commands. Also discussed is how to save and restore data from IBM Cloud Object
Storage. Moreover, a library can be saved and transferred to the cloud by way of IBM
BRMS, and the movement to the cloud can be automated.
You cannot use the Cloud Storage Solutions copy commands to work with files in the
/QSYS.LIB file system. The size of the files that you can copy to a resource is determined by
the cloud service provider.
When you copy files to or from the cloud, the operation runs asynchronously; that is, in its
own batch job instead of in the same job as the command.
When you copy files asynchronously, you do not have to wait for large files to finish copying
before other commands can be run. Also, you can use the IBM i facilities to work with
asynchronous jobs; for example, by scheduling when the jobs run.
In the Submit to batch field, you can specify instead that the copy operation runs in the
same job as the command. This command is not thread safe.
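As a rough analogy, the synchronous versus asynchronous choice resembles running a copy in the foreground versus the background of a shell. The file names below are illustrative, and this sketch is not the Cloud Storage Solutions implementation; it only mirrors the SBMJOB(*NO) / SBMJOB(*YES) behavior described above.

```shell
# Sketch: foreground (synchronous) vs. background (asynchronous) copies.
set -e
workdir=$(mktemp -d)
dd if=/dev/zero of="$workdir/large.bin" bs=1024 count=512 2>/dev/null

# Synchronous (*NO): the caller waits until the copy finishes.
cp "$workdir/large.bin" "$workdir/sync-copy.bin"

# Asynchronous (*YES): the copy runs as its own job; the caller continues.
cp "$workdir/large.bin" "$workdir/async-copy.bin" &
copy_pid=$!
echo "copy submitted as job $copy_pid; other commands can run now"
wait "$copy_pid"   # later, confirm that the background job completed
```

The asynchronous form is why other commands can run while a large file is still transferring, at the cost of having to check the job's status afterward.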
1. From the IBM i command line, enter CPYTOCLD and then, press F4.
2. Complete the required fields as listed in Table 4-1 and then, press Enter:
– Submit to batch: Enter *NO to run the copy operation in the same job as the
command. Leave the *YES value to run the copy operation in its own job.
– Local file name: Enter the IFS path and the name of the file to copy; for example,
/home/user/jdoe/file.txt. The path must begin with a forward slash (/) and is not
case-sensitive.
– Cloud file name: Enter a path and name for the cloud copy of the file; for example,
dir1/dir2/file.txt. The path is created if it does not exist. When you specify this
path, do not include the container, bucket, or root directory that is defined in the
resource. Cloud Storage Solutions combines that directory with this path to create the
full path in the cloud.
The container, bucket, or root directory that is defined in the resource must exist
before you copy files to the cloud.
When overwriting a file, in most cases the directory and file name are case-sensitive. If
you overwrite a file on an IBM i FTP computer, the directory and file name are not
case-sensitive unless they are on the /QOpenSys file system.
You must have Execute (*X) authority on all directories in the path to which you copy the file,
and Write (*W) authority on the last directory in the path.
If the file was copied before and exists in the path, you must have Write authority to it. For
example, to copy file.txt to /home/user/jdoe, you must have Execute authority on the home,
user, and jdoe directories, and Write authority on jdoe. If file.txt exists, you must
have Write authority on it.
You cannot use Cloud Storage Solutions to work with files in the /QSYS.LIB file system. If you
copy a file from an FTP cloud server to the IBM i computer, and that file was not originally
copied to the FTP server by using Cloud Storage Solutions, Cloud Storage Solutions assigns
the file a coded character set identifier (CCSID) of 65535. A CCSID of 65535 means that the
operating system treats the file as binary data and it is unreadable in an editor.
If you copy a file from an Amazon S3 or IBM Cloud Object Storage cloud server to the IBM i
computer and that file was not originally copied to the cloud server by using Cloud Storage
Solutions, Cloud Storage Solutions reads the data and from it assigns the file a coded
character set identifier (CCSID) of 1208 (UTF-8) if it is text, or 65535 if it is binary.
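The CCSID assignment described above can be illustrated with a simple text-versus-binary check. The NUL-byte heuristic below is our own illustration of the idea (1208 for text, 65535 for binary), not the product's documented algorithm, and the file names are hypothetical.

```shell
# Sketch: classify a file as text (CCSID 1208) or binary (CCSID 65535).
workdir=$(mktemp -d)
printf 'hello world\n' > "$workdir/note.txt"
printf '\000\001\002\003' > "$workdir/blob.bin"

classify() {
  # Files that contain NUL bytes are treated as binary (CCSID 65535);
  # everything else is treated as UTF-8 text (CCSID 1208).
  total=$(wc -c < "$1")
  nonnul=$(tr -d '\000' < "$1" | wc -c)
  if [ "$total" -ne "$nonnul" ]; then
    echo "65535"
  else
    echo "1208"
  fi
}
classify "$workdir/note.txt"   # a text file
classify "$workdir/blob.bin"   # a binary file
```

A file tagged 65535 is treated as raw binary data by the operating system and is unreadable in an editor, which matches the behavior described above.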
When files are copied to or from the cloud, the operation runs asynchronously; that is, in its
own batch job instead of in the same job as the command. When files are copied
asynchronously, you do not have to wait a long time for large files to finish copying before
running other commands.
– Submit to batch: Enter *NO to run the copy operation in the same job as the command.
Leave the *YES value to run the copy operation in its own job.
– Cloud file name: Enter the cloud path and the name of the file to be copied; for example,
dir1/dir2/file.txt.
Do not include the container or bucket that is defined in the resource. The cloud file
name path is appended to those directories to construct the full path for the cloud copy
of the file.
In most cases, the directory and file name are case-sensitive. The directory and file
name are not case-sensitive with IBM i FTP resources unless they are on the
/QOpenSys file system.
– Local file name: Enter the IFS path and file name of the file being copied; for example,
/home/user/jdoe/file.txt.
The path must begin with a forward slash (/). Path and file names are not
case-sensitive. Path directories are created if they do not exist locally. You can enter a
local file name that is different from the cloud file name.
When a large library must be saved to a single file, some PASE or QSHELL commands
can be used to split and compress the data into several components.
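One possible sketch of this split-and-compress approach uses standard PASE/QSHELL utilities. The save file name MYLIB.savf is hypothetical, and the part size is an arbitrary example.

```shell
# Sketch: compress a large save file, split it into fixed-size parts, then
# reassemble and verify that no data was lost.
set -e
workdir=$(mktemp -d)
dd if=/dev/urandom of="$workdir/MYLIB.savf" bs=1024 count=300 2>/dev/null

# Compress and split into 100 KB parts, which can be uploaded individually.
gzip -c "$workdir/MYLIB.savf" | split -b 102400 - "$workdir/MYLIB.savf.gz.part-"

# Reassemble the parts in lexical order and decompress.
cat "$workdir"/MYLIB.savf.gz.part-* | gunzip -c > "$workdir/MYLIB.restored.savf"
cmp -s "$workdir/MYLIB.savf" "$workdir/MYLIB.restored.savf" && echo "parts verified"
```

Because split names its output with lexically ordered suffixes, a simple glob reassembles the parts in the correct order.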
Sample scenario
In this example, one library and security information are saved in two different save files, the
data is compressed, and then uploaded to IBM Cloud Object Storage, assuming that
connectivity to IBM Cloud Object Storage is already configured.
Note: For more information about how to connect from IBM Power Systems Virtual Server
and IBM Cloud Object Storage, see IBM i Migration to Cloud with IBM Power Systems
Virtual Server.
4. Update the YUM environment and install some tools, as shown in Example 4-9.
5. Configure awscli to communicate with IBM Cloud Object Storage by using aws configure.
6. Use your credentials from your IBM Cloud Object Storage bucket to complete the
configuration.
For more information about how to create your IBM Cloud Object Storage resource,
bucket, and credentials and configure the AWS CLI, see the IBM Cloud Docs web page.
7. Compress the files and upload to IBM Cloud Object Storage, as shown in Example 4-10.
Example 4-10 Saving and compressing data to Cloud Object Storage bucket
8. List the backup on IBM Cloud Object Storage, as shown in Example 4-11.
Example 4-11 List the backup from IBM Cloud Object Storage
aws --endpoint-url=https://s3.eu-gb.cloud-object-storage.appdomain.cloud s3 ls s3://ibmi-backup/
4.3.5 Backup by using image catalog and IBM Cloud Object Storage
The image catalog has been a feature of IBM i since V5R1. From V5R3, you can
save to your disk by using optical images; from V5R4, you can create virtual tape images.
This useful feature can be used to install PTFs, perform system upgrades, and save your data
to disk, which avoids the save file limitation of one library per save file.
Media can be saved to a Virtual Tape Image Catalog and duplicated to a physical tape, or
from a Virtual Optical Image Catalog to a CD or DVD (images must be created with the
correct size and format).
For more information about maximum capacities, see this IBM Support web page.
When backing up data on IBM Cloud Power Virtual Server instances, you can use both of
these Image catalog types; however, consider the following environment characteristics when
restoring:
A full-system restore or recovery cannot be performed from SAVSYS data by using tape images.
Because media is transferred to IBM Cloud Object Storage or a gateway appliance,
consider the number of image files and their size when transferring and restoring data.
Large image files can slow the restore process when only one file must be restored.
Small image files can be a bad idea when many terabytes must be backed up because of
image catalog limits and the number of files that are created.
Managing multiple backup sets can be complicated. Consider the use of IBM BRMS to
create backups and manage your image catalogs.
Whenever possible, use the IBM Cloud internal network to avoid extra charges.
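The trade-off between large and small image files can be illustrated with quick arithmetic. All sizes below are examples only; real volume sizing also depends on image catalog limits and transfer bandwidth.

```shell
# Sketch: how many image files result from a given virtual volume size.
backup_gb=2048   # a 2 TB backup (example)
for volume_gb in 10 64 500; do
  # Ceiling division: round up so the last partial volume is counted.
  volumes=$(( (backup_gb + volume_gb - 1) / volume_gb ))
  echo "${volume_gb} GB volumes -> ${volumes} files to transfer"
done
```

Smaller volumes mean more files to track and transfer; larger volumes mean fewer files but a slower restore when only one object is needed from a volume.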
The following commands can be used to manage the image catalog:
– WRKIMGCLG: Allows the user to work with a list of image catalogs.
– CRTIMGCLG: Creates an image catalog object (*IMGCLG) in library QUSRSYS and
associates the image catalog with a target directory.
– CHGIMGCLG: Changes the attributes of an image catalog.
– DLTIMGCLG: Deletes an image catalog.
– ADDIMGCLGE: Creates a virtual image in the image catalog directory.
– CHGIMGCLGE: Changes the attributes of a virtual tape volume.
– RMVIMGCLGE: Removes a virtual volume from an image catalog and optionally deletes
the virtual volume.
– RTVIMGCLG: Used in a CL procedure to retrieve the name of the image catalog that is
loaded in a virtual device, or to retrieve the name of the virtual device where an image
catalog is loaded.
– STRNETINS: Starts a network installation from the NFS server.
– LODIMGCLGE: Changes the status of a virtual tape or optical volume within an image
catalog.
– LODIMGCLG: Loads an image catalog on a virtual tape or optical device to make the
virtual volumes accessible by the device.
– WRKIMGCLGE: Works with the images in the specified image catalog.
– VFYIMGCLG: Verifies that the images in an image catalog are valid and ready to use.
2. Create a virtual optical device, make it available, load it with the image catalog and
initialize, as shown in Example 4-13.
3. Start the use of the virtual device and backup to the new virtual media, as shown in
Example 4-14.
When the virtual optical media must be transferred to IBM Cloud Object Storage, use IBM
Cloud Storage Solutions for i or the same procedure, as shown in Example 4-15.
Example 4-15 Upload file with IBM Cloud Storage Solutions for i
CPYTOCLD RESOURCE(ICOS01) LOCALFILE('/IMGCLG/OBACKUP/OBKP01') CLOUDFILE(OBACKUP01)
Example 4-16 Transferring file to IBM Cloud Object Storage using PASE and AWSCLI
CALL QP2TERM
Tip: Use QSHELL and include the AWS CLI in your CL programs to upload data to IBM Cloud
Object Storage. Run the following command from QCMD:
SBMJOB CMD(QSH CMD('/QOpenSys/usr/bin/sh -c "cd /IMGCLG/OBACKUP ;
PATH=$PATH:/QOpenSys/pkgs/bin;export PATH;cat OBKP01 | pigz -9 -p40 -c | aws
--endpoint-url=https://s3.eu-gb.cloud-object-storage.appdomain.cloud s3 cp -
s3://ibmi-backup/OBKP01.gz" ')) JOB(S3CP)
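The same compress-and-stream pipeline can be sketched locally, under the assumption that gzip stands in for pigz and a local file stands in for the aws s3 cp upload target, so no cloud credentials are needed. The backup file name OBKP01 follows the example above.

```shell
# Sketch: stream an image file through a compressor, as the SBMJOB example
# streams it through pigz into "aws s3 cp -".
set -e
workdir=$(mktemp -d)
dd if=/dev/zero of="$workdir/OBKP01" bs=1024 count=64 2>/dev/null

# Compress the stream; in the real pipeline the output goes to the cloud.
cat "$workdir/OBKP01" | gzip -9 -c > "$workdir/OBKP01.gz"

# Verify that the round trip preserves the data.
gunzip -c "$workdir/OBKP01.gz" | cmp -s - "$workdir/OBKP01" && echo "pipeline verified"
```

Streaming through the compressor avoids writing an intermediate compressed copy to disk, which matters when the image files are large.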
4.3.6 Sample save and restore IBM i objects to IBM Cloud Object Storage
In this section, we introduce IBM Cloud Object Storage. You also learn how to copy a save file
directly from your IBM i partition to your own bucket in IBM Cloud Object Storage.
This process can be useful not just as a backup solution, but also as a simple migration
strategy; especially for moving a few applications and small databases to IBM Power Virtual
Server for your own PoC or testing.
It is easy to move a save file directly from IBM i by using the IBM Cloud
Storage Solutions LPP. Your bucket includes public and private endpoints that can be used
whether your source partition is within the IBM Cloud or is external.
Note: The use of the private endpoints requires Direct Link, which is why we use the public
interface to the IBM Cloud Object Storage Bucket in this example.
On IBM i in this sample, we use the IBM Cloud (S3/COS) Connector (5733-ICC). The Copy
from Cloud (CPYFRMCLD) and Copy to Cloud (CPYTOCLD) commands are 5733-ICC CL commands
that copy files between IBM Cloud Object Storage and the IFS.
These IFS files can be used with image catalogs to automate the process (as does IBM
BRMS).
Chapter 4. Back up for IBM i on IBM Power Systems Virtual Server 153
Although Backup, Recovery, and Media Services (and the IBM BRMS Network in an HA/DR
scenario) is not used in this example and you can use IBM Cloud Storage without IBM BRMS,
IBM BRMS is fully integrated with IBM Cloud Storage and can be of great help in managing
terabytes of data and thousands of objects in an HA/DR plan.
Figure 4-27 Create an IBM Cloud Object Storage resource by way of CRTS3RICC
4. Enter your Bucket Name and your Resource URI, as shown in Figure 4-28. Press Enter.
You save your applications and data into a save file, move it to IBM Cloud Object Storage,
and then restore it from IBM Cloud Object Storage. As a prerequisite, complete the steps
that are described in “Backing up IBM i object to IBM Cloud Object Storage” on page 154.
In this example, assume that a SAVLIB of a business database and objects (the ACMEAIR
library) was performed to a save file ACMEAIR.FILE, which was then uploaded by using the
CPYTOCLD command. Because this process is out of the scope of this example, assume that
the file is in the IBM Cloud Object Storage bucket.
The objective is to use an IBM Cloud Object Storage service with Reader access and
credentials to restore this data to your IBM i VM that is running in IBM Power Systems Virtual
Server.
Note: Although the figures in this section show how the service works, you are
encouraged to create your own IBM Cloud Object Storage service from the IBM Cloud
Catalog, with your first bucket and credentials, and get started with your first
backup and restore.
Complete the following steps to transfer and restore a library from IBM Cloud Object Storage
to the Integrated File System:
1. Run the CPYFRMCLD command (see Figure 4-30) and complete the following information:
– Resource Name: The resource name that was created.
– Submit to Batch: *NO (interactive in this example; for large files, use *YES).
– Cloud File Name: The file name that is in the IBM Cloud Object Storage Bucket.
In our example, acmeair.file is the name of the save file that is in the IBM Cloud
Object Storage service. It was sent to IBM Cloud Object Storage by using a CPYTOCLD
command or another upload process (HTTP, S3 client, Aspera, and so on).
– Local file name: An IFS directory and file name as a destination; for example, your
home directory /home/QSECOFR/acmeair.file. If the file exists, it is overwritten.
Figure 4-31 Display the objects restored from IBM Cloud Object Storage
Note: The following commands can be used to verify that the cloud resource is working
correctly:
CPYTOCLD
CPYFRMCLD
2. Configure IBM BRMS objects that are required to use the cloud resource:
INZBRM OPTION(*DATA)
3. Create a virtual tape device:
CRTDEVTAP DEVD(BRMSCLDTAP) RSRCNAME(*VRT)
4. Vary on the virtual tape device:
VRYCFG CFGOBJ(BRMSCLDTAP) CFGTYPE(*DEV) STATUS(*ON)
5. Create an image catalog:
CRTIMGCLG IMGCLG(BRMSCLDTAP) TYPE(*TAP) DIR('/tmp/BRMSCLDTAP')
CRTDIR(*YES)
6. Add a volume to the image catalog:
ADDIMGCLGE IMGCLG(BRMSCLDTAP) FROMFILE(*NEW) TOFILE(*GEN) VOLNAM(BRMCLD)
7. Load the image catalog on the device:
LODIMGCLG IMGCLG(BRMSCLDTAP) DEV(BRMSCLDTAP)
8. Configure the virtual tape device in IBM BRMS:
WRKDEVBRM
Specify the following information:
– Opt field: 1= Create
– Device field: BRMSCLDTAP
– Category field: *VRTTAP
9. Add a media class for the virtual tape media:
WRKCLSBRM TYPE(*MED)
Specify the following information:
– Opt field: 1=Create
– Class field: BRMSCLDTAP
– Density field: *VRT256K
– Shared media field: *NO
10.Add volume BRMCLD to the IBM BRMS media inventory:
ADDMEDBRM VOL(BRMCLD) MEDCLS(BRMSCLDTAP) IMGCLG(BRMSCLDTAP)
11.Add a media policy for the virtual tape media:
WRKPCYBRM TYPE(*MED)
Specify the following information:
– Opt field: 1=Create
– Policy field: BRMSCLDTAP
– Media class field: BRMSCLDTAP
12.Create a library to save to the cloud:
CRTLIB LIB(CLDLIB)
13.Save the library to the virtual tape volume:
SAVLIBBRM LIB(CLDLIB) DEV(BRMSCLDTAP) MEDPCY(BRMSCLDTAP)
14.Move volume BRMCLD to the cloud location:
WRKMEDBRM VOL(BRMCLD)
Verify the following information:
– Volume is in *ACT status
– Location is *HOME
Specify the following information:
– Opt field: 8=Move
– Storage location field: CLOUD
After the move completes, verify the following information:
– Volume is in *ACT status
– Location is CLOUD
The control groups create cloud media that is formatted so it can be downloaded and burned
directly to physical optical media. All remaining data on the system can be backed up to
media in the cloud and restored directly from the cloud without a need to create physical
media.
Automatic transfers of media to IBM Cloud Object Storage are shown in Figure 4-32.
Control groups that feature a QCLD prefix cause IBM BRMS to automatically create and
transfer media to the cloud during a backup.
For example, you can back up a library that is named PAYROLL to automatically transfer
the media to the cloud, and then transfer the media back to the system by restoring the
library.
6. Add an entry for the CLDLIB library:
– Seq: 20
– Backup Items: CLDLIB
7. Use the default values in the remaining fields, as shown in Figure 4-33.
As shown in Figure 4-35, the volume status is *ACT and the location points to the cloud resource.
FalconStor StorSafe is an optimized backup and deduplication solution that provides Virtual
Tape Library (VTL) emulation, high-speed backup/restore, data archival to supported S3
clouds for long-term storage, global data deduplication, and enterprise-wide replication,
without requiring changes to the existing environment.
FalconStor StorSight is a single integrated platform that simplifies the management of data
across legacy, modern, and virtual storage environments. StorSight gathers and consolidates
information coming from different StorSafe servers into a scalable repository of services,
tenants, users, predictive analytics, alert rules, reporting, and historical data. StorSight
provides a web-based portal for centralized management and monitoring of multiple backup
and deduplication servers.
FalconStor StorSafe VTL is offered through the IBM Cloud Catalog for both off-premises
and on-premises deployment under the following names:
FalconStor StorSafe VTL for Power On-Premises provides flexible deployment options for
the user:
– It can be installed on any x86 server that is on the FalconStor certification matrix.
– It can be installed as a VM in hypervisors such as VMware, Hyper-V, Xen, and KVM.
– It can be installed inside Power Virtual Server as an LPAR.
FalconStor StorSafe VTL for Power Virtual Server Cloud comes bundled with OS and can
be deployed from the IBM Cloud catalog with one click.
The following PTFs are required:
Technical refresh 9 must be installed (MF99309).
MF69817
Backup/Recovery group PTF SF99954 level 12 (April 2, 2024 update) contains the PTFs
that are needed to run iSCSI; the latest cumulative PTFs enable IPsec and VPN (which
requires Navigator).
Connectivity
The following connections are required:
Network connections between VTL and the management console
iSCSI connections between VTL and IBM i host clients
IBM Cloud Direct Link connection between VTL and the IBM Cloud Object Storage (COS)
in the IBM Cloud Classic environment
Internet connection with the FalconStor license server for online registration (optional)
Figure 4-37 Connectivity for Native backup deployment
Scenario
Network default interface eth0 has a public IP address and is used for all external
connections.
Network interface eth1 has a private IP address and is used to add routes to other
networks.
# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.150.162 netmask 255.255.255.248 broadcast
192.168.150.167
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 172.24.2.70 netmask 255.255.255.224 broadcast 172.24.2.95
Create routes from the private IP to connect to the other networks. For example,
ADDRESS1=10.0.0.0
is for routing to the Classic infrastructure and ADDRESS2=172.22.0.0 is added for routing to
other private networks.
# cat /etc/sysconfig/network-scripts/route-eth1
# Created by cloud-init on instance boot automatically, do not edit.
#
ADDRESS0=172.24.2.64
GATEWAY0=172.24.2.65
NETMASK0=255.255.255.224
ADDRESS1=10.0.0.0
GATEWAY1=172.24.2.65
NETMASK1=255.0.0.0
ADDRESS2=172.22.0.0
GATEWAY2=172.24.2.65
NETMASK2=255.255.0.0
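The netmasks in this route file correspond to CIDR prefix lengths /27, /8, and /16. The following helper is an illustrative sketch (not part of any product) for double-checking such conversions:

```shell
#!/bin/bash
# Convert a dotted-decimal netmask (as used in route-eth1) to a CIDR prefix length
# by counting the set bits in each octet.
mask2prefix() {
  local n=0 o
  local IFS=.
  for o in $1; do
    case $o in
      255) n=$((n+8));; 254) n=$((n+7));; 252) n=$((n+6));;
      248) n=$((n+5));; 240) n=$((n+4));; 224) n=$((n+3));;
      192) n=$((n+2));; 128) n=$((n+1));; 0) ;;
    esac
  done
  echo "$n"
}

mask2prefix 255.255.255.224   # the local subnet
mask2prefix 255.0.0.0         # Classic infrastructure route
mask2prefix 255.255.0.0       # other private networks
```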
Figure 4-40 Connectivity for Cloud-to-Cloud Replication Deployment
The following connections are required for connectivity for hybrid cloud deployment:
Network connection between the on-premises VTL and the Power Virtual Server VTL in
IBM Cloud
Network connection between the VTL console in the classic infrastructure and the
on-premises VTL
Network connections between the VTL console in the classic environment and the Power
Virtual Server VTL in IBM Cloud
The following describes how IBM Cloud can be used for workload migration:
1. IBM BRMS running at the primary site on an IBM i host client performs I/O to a FalconStor
VTL server via iSCSI to back up the OS boot image and application data on virtual tapes.
The IBM i host client needs to be prepared for migration. Two virtual tapes are needed:
the first tape contains the base operating system for the initial remote boot, and the
second tape contains the remaining data of the host machine image.
2. The primary VTL server deduplicates data and replicates both tapes to another VTL
server in IBM Cloud.
3. On the replica site, the replica virtual tapes are promoted; the first tape is assigned to an
IBM i remote boot server via iSCSI.
4. The IBM i remote server in the cloud mounts the OS image on the first promoted tape on
an NFS share, then it boots another IBM i virtual server from the NFS boot directory
running the same OS as the IBM i of the primary site.
5. The second tape is assigned to the new IBM i via iSCSI. This new server will read data
from this second tape to complete the workload migration and become the clone of the
IBM i on the primary site.
The following connections are required for workload migration deployment connectivity:
Network connection between the on-premises VTL and the Power Virtual Server VTL in
IBM Cloud
Network connection between the VTL console in the classic infrastructure and the
on-premises VTL
Network connections between the VTL console in the classic infrastructure and the Power
Virtual Server VTL in IBM Cloud
iSCSI or Fibre Channel connection between IBM i host clients and the on-premises VTL
iSCSI connections between IBM i host clients and the Power Virtual Server VTL in IBM
Cloud
IBM Cloud Direct Link connection between the Power Virtual Server VTL and the IBM
Cloud Classic environment, if the IBM Cloud Object Storage (COS) is used for the
deduplication repository
Internet connection with the FalconStor license server for online registration (optional)
When an IBM i host client is used for a workload migration scenario, it needs to be an NFS
server to use the boot image for the cloned machine. Refer to IBM documentation for more
information.
4.4.3 IBM i Migration Example using FalconStor: Details for 7 Key Steps
The following figure shows an overview of these steps.
Step 1. Prepare for migration by using the BLDMIGDSLO script on Machine C to make the
first tape, which contains SAVSYS and the base IBM i operating system.
Figure 4-46 IBM i Main menu
Issue STRTCPIFC ‘10.7.2.54’. The message ‘User QSECOFR started IP interface
10.7.2.54 on ETH8.’ is displayed.
Step 2. Perform a SAVE 21 on Machine C on-premises at the customer site. This makes a
second tape.
Issue ‘BLDMIGDSLO’
Go to Configure TCP/IP
Figure 4-50 TCP/IP main menu
Issue ‘STRTCP’
Step 3. Replicate both virtual tapes from the customer's site to StorSafe VTL in Power Virtual
Server.
The operation completes successfully, as shown in Figure 4-54.
Step 4. Restore the boot manager from StorSafe VTL virtual tape to Machine A in Power
Virtual Server.
Step 5. Run BLDBOOTENV on Machine A to read the data from the DSLO tape, which puts
Machine B's boot image onto an NFS share.
From the IBM i main menu, issue the BLDBOOTENV command. The status changes to ‘Current
library changed to FALCONSTOR.’ Continue pressing Enter until the screen shows the status
‘Restore and setup complete.’, as shown in Figure 4-55.
Step 7. Perform a RESTORE 21 on Machine B and reboot to complete the migration of
Machine C to Power Virtual Server.
Select option 21 (System and user data), as shown below.
Refer to the FalconStor Solution Sizing page to get the sizing information about the
deduplication repository, backup cache, memory, CPU, and the machine type.
Refer to the IBM Cost Estimation page: click Estimate costs in the upper right panel,
select Virtual Tape Library as the OS, enter values for the usage parameters based on the
Solution Sizing tool results, and click Calculate cost and then Save to see the cost.
Click Review estimate to go to the Cost estimator page. Additional costs might apply for
extra Cloud Object Storage (COS) capacity or additional network and infrastructure
components. For the COS capacity, go to the Catalog, type Object Storage in the Search
box, and select the Standard plan. Click Estimate costs, enter a value for Monthly
average capacity, and click Calculate cost again.
The FalconStor VTL will be activated with a temporary license after deployment. Once
deployment is complete, look for an email from [email protected] to receive a
permanent license and then replace the temporary license with the permanent license via the
VTL management GUI.
Add VTL servers. Right-click the Servers object and click Add.
Connect to your server with the root user account and the related password. The default
password IPStor101 expires after installation, and you need to set a new password the
first time that you log in to the server. After you log in, the configuration wizard
launches.
Check items below in the configuration wizard that appears after connecting to a server.
Click Configure for each item that applies and click Skip for items that do not apply:
– Set up network parameters and hostname.
– Skip Fibre Channel for virtual servers. Enable it only for an on-premises VTL server
that is using the Fibre Channel protocol to communicate with backup clients.
– Prepare devices by selecting all devices and setting an appropriate reservation type for
each block storage: Configuration repository for the 20 GB device, Deduplication
repository for index and folder devices, Tapes for backup cache devices.
– Enable Configuration Repository using 10 GB of the reserved device.
– Create Virtual Tape Library database using another 10 GB of the reserved device,
select the Express method, enable virtual tape library software compression, accept
default thresholds, and do not create a mirror for the database.
– Skip virtual tape encryption.
– Skip physical tape libraries/drives assignment.
– Create virtual tape libraries with IBM drives.
For IBM i host clients, select the virtual tape library type as FalconStor FALCON
TS3500L32 (03584L32) and the media type as ULTRIUM3 (LTO3) or newer. By using Falcon
library types, you get the 3584-403 device type to configure on IBM i host clients.
Set the library name, the drive name prefix, and the number of drives.
Leave default settings and do not enable any service options.
Set the barcode range for tapes and adjust the current settings if desired. Set the
number of Import/Export slots to 1. The number of slots in a virtual tape library can be
larger than the supported number of slots in its equivalent physical library of the same
model; skip the warning that might be displayed based on the library model.
Enable Tape Capacity On Demand (COD) to create small resources for your tapes and then
automatically allocate additional space when needed. The minimum value for Incremental
size is (Maximum Capacity - Initial Tape Size) / 63.
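As an illustration of that formula with assumed values (a 1000 GB maximum capacity and a 5 GB initial tape size, neither taken from this example):

```shell
#!/bin/bash
# Minimum Incremental size = (Maximum Capacity - Initial Tape Size) / 63.
# The capacities below are illustrative assumptions, not values from this book.
max_gb=1000       # maximum tape capacity, GB
initial_gb=5      # initial tape size, GB
echo $(( (max_gb - initial_gb) / 63 ))   # minimum incremental size, GB (rounded down)
```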
You can create virtual tapes at this time or later. Skip assigning the virtual tape
library to clients, and then skip the item to enable deduplication. Exit the wizard after
selecting the option Don't show this next time.
If you use a reverse proxy server to make remote connections to IBM COS private endpoints,
use the CLI command object-storage-add to enter the proxy IP address and user/password.
Command:
object-storage-add -Oi <HMAC access key ID> -Os <HMAC secret key> -Ob <bucket name>
Enable iSCSI target mode from the Options menu for connection to backup application
servers that access virtual resources.
Configure iSCSI from the client side according to the Configure an iSCSI client section below.
Create iSCSI host clients to represent your backup application servers. Right-click the Clients
object and click Add. Enter a name, enter client iSCSI initiator name as the one configured on
the client side, for example, iqn.1994-05.com.ibm:apacibmi74falcon, and allow
unauthenticated access.
Create an iSCSI target for the client: Right-click the newly created client under iSCSI Clients
object and click Create Target. Enter the iSCSI target name as the one configured on the
client side, for example, iqn.2000-03.com.falconstor:h21-47.ibmi94. Select the VTL server
IP address. One iSCSI target should be created for each iSCSI client initiator.
Assign virtual tape libraries to the new iSCSI target: Select the iSCSI client under iSCSI
Clients, click the Resources tab. Right-click the newly created iSCSI target and click Assign.
Select available libraries to assign.
On the client side, to discover assigned devices, run the IPL I/O processor option to
send a login request. You can run the IPL by using an SQL command or the Start System
Service Tools (STRSST) command. The IPL I/O processor option sends a login request from
the client iSCSI initiator to the VTL server with a nonexistent target. Use the IPL I/O
processor option, as shown in the picture below.
If everything is successful, the VTL server receives the iSCSI login request with the following
sample messages in the system log:
Confirm the iSCSI device is now available to the client. For example, the 3584-403 is
displayed as a FalconStor vendor device Type-Model configured for the client:
Expand the VTL Library: System object, right-click Deduplication Policies, and select New.
Enter a name for the policy. Select the deduplication cluster that was previously created.
Select Inline Deduplication trigger with the option to switch to post-processing. Leave default
priority and retry settings.
If applicable, on the primary VTL server, designate the replica server as the target server and
create deduplication/replication policies.
Check Type-Model.
Confirm the IBM PTF is installed on the client by checking the Type-Model, which should
display as 298A-001, as shown in Figure 4-63. This indicates that the iSCSI bus IOP
resource is operational on the client. If it does not exist, contact IBM to install the
required PTF.
SQL commands
1. Confirm the SQL service is operational by running the STRSQL command. This service is
used for configuring communication between the client iSCSI initiator and the VTL
iSCSI target.
2. Run the SQL command to add the iSCSI target and the initiator information on the client;
you can set the target to any value matching IQN patterns.
CALL QSYS2.ADD_ISCSI_TARGET(
TARGET_NAME=>'iqn.2000-03.com.falconstor:h21-47.ibmi94',
TARGET_HOST_NAME=>'172.22.21.47',
INITIATOR_NAME=>'iqn.1994-05.com.ibm:apacibmi74falcon');
The iSCSI target name does not exist on VTL yet. After the SQL command completes, you
will use the same target name on VTL when configuring the iSCSI client later.
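Target and initiator names must follow the basic IQN layout, iqn.yyyy-mm.reversed-domain[:identifier]. The following sketch is a rough offline sanity check (a simplified approximation of the full RFC 3720 grammar, not a FalconStor or IBM tool), using the names from the SQL command above:

```shell
#!/bin/bash
# Simplified IQN format check: iqn.YYYY-MM.reversed.domain[:identifier].
# The regex is an approximation; the real grammar allows more characters.
is_iqn() { printf '%s' "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:[^ ]+)?$'; }

is_iqn 'iqn.2000-03.com.falconstor:h21-47.ibmi94' && echo target-ok
is_iqn 'iqn.1994-05.com.ibm:apacibmi74falcon'     && echo initiator-ok
```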
3. Run the SQL command with the IPL I/O processor option that will send a login request
from the client iSCSI initiator to the VTL server with a nonexistent target:
CALL QSYS2.CHANGE_IOP(IOP=>'ISCSI', OPTION=>'IPL');
4. Confirm the IPL I/O processor successfully completes by checking the VTL system log,
/var/log/messages. Take note of the target name in the log, which you need to use when
configuring the iSCSI target in the VTL console. You see a message similar to the
following one:
May 30 14:31:36 h21-47 fsiscsid[9033]:
IPSTOR||1653892296||E||0x0000c352||Login request to nonexistent target %1
from initiator %2||iqn.2000-03.com.falconstor:h21-47.ibmi94 (ip
172.22.21.47)||iqn.1994-05.com.ibm:apacibmi74falcon
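Because the log fields are delimited by ||, the initiator name can be extracted from such a line with awk. The sample line below reproduces the message above so that the parsing can be checked offline (a sketch, not a FalconStor utility):

```shell
#!/bin/bash
# Pull the initiator name out of an IPSTOR log line; fields are ||-delimited
# and the initiator is the last field.
line='IPSTOR||1653892296||E||0x0000c352||Login request to nonexistent target %1 from initiator %2||iqn.2000-03.com.falconstor:h21-47.ibmi94 (ip 172.22.21.47)||iqn.1994-05.com.ibm:apacibmi74falcon'
printf '%s\n' "$line" | awk -F'\\|\\|' '{print $NF}'
```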
4.5.1 Snapshot
By using the snapshot interface, a relationship can be created between source disks and
target disks (target disks are created as part of the snapshot API) at time T1. The
snapshot API tracks the delta changes that are made to the source disks beyond time T1.
This feature enables the user to restore the source disks to their T1 state later.
Several use cases are available for the snapshot feature. For example, an administrator
plans to upgrade the middleware on their system, but wants to be able to revert the
middleware to its original state if the upgrade does not proceed as planned.
If the middleware fails, the source disk can be restored to its previous state by completing the
following steps:
1. Start the snapshot API with the source disks where the middleware information is stored.
2. Upgrade the middleware.
3. If the upgrade fails, restore the source disks by using the snapshot that was created in the
previous step.
4. If the upgrade succeeds, delete the snapshot that was created in the first step.
Best practices
Consider the following best practices:
Before you take a snapshot, ensure that all of the data is flushed to the disk. If you take a
snapshot on a running VM and did not flush the file system, you might lose some content
that is in memory.
It is recommended that all of the applications that are on the snapshot volume be
quiesced.
The clone operation continues to copy data from the source disks to target disks in the
background. Depending on the size of the source disks and the amount of data to be copied,
the clone operation can take a significant amount of time.
You cannot modify the source or target disk attributes, such as disk size, while the clone
operation is in progress.
Best practice
Quiesce all of the applications on the volume that you want to clone.
If the restore operation succeeds, the backup snapshots are deleted. If the restore operation
fails, pass in the restore_fail_action query parameter with a value of retry to attempt the
restore operation again.
To roll back a previous disk state, pass in the restore_fail_action query parameter with a
value of rollback. When the restore operation fails, the VM enters an Error state.
Best practice
During the restore operation, it is critical that your source disks be quiesced. Your source
disks cannot be in use.
Considerations
Consider the following points:
If the restore operation fails, contact your storage support administrator. A failed restore
operation can leave behind incomplete states, which might require a cleanup initiative
from an IBM operations team.
If you choose to restore a shared volume on one VM, you cannot perform the snapshot,
restore, clone, or capture operations on the other VMs that use the shared volume (while
the restore operation is running).
Example 4-17 shows how to use the IBM Cloud CLI and the power-iaas plugin to take a
snapshot. This example assumes that you installed the IBM Cloud CLI and the power-iaas
plugin on a Linux server. For detailed information about IBM Cloud CLI setup, see the
IBM Cloud documentation.
Example 4-17 Sample shows how to use IBM Cloud CLI to take a Snapshot
#!/bin/bash
# Constants #
error=0;
errmsg="";
# Cloud Account data
apikey="<APIKEY>";
Resource="<CRN Resource ID>";
# SSH Server
SvrAddr="<SSH Server Address>";
UserID="<SSH UserID>";
# Instance ID
InstanceName="<InstanceName>";
# Object Storage
Region="<REGION>";
AccessKey="<ICOS AccessKey>";
SecretKey="<ICOS SecretKey>";
###############################################
SnapshotName="Snap_${InstanceName}_$(date +"%Y-%m-%d_%I%M%S")";
###############################################
echo '=================';
echo 'Suspend Db2 for i';
echo '=================';
# Quiesce the database - Suspend Database disk activity
ssh $UserID@$SvrAddr "system 'CHGASPACT ASPDEV(*SYSBAS) OPTION(*FRCWRT)';system
'CHGASPACT ASPDEV(*SYSBAS) OPTION(*SUSPEND) SSPTIMO(120)'"
Example 4-18
ibmcloud login --apikey $apikey -r $Region
ibmcloud pi snap list
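Because Example 4-17 suspends database activity before the snapshot, the matching *RESUME must always run afterward, even if a snapshot command fails. The following sketch (variable names assumed to match the script above) wraps the remote calls in a trap; DRY_RUN, a convention introduced here, only prints the commands instead of running ssh:

```shell
#!/bin/bash
# Sketch: guarantee that CHGASPACT *RESUME runs on every exit path.
# UserID/SvrAddr defaults are placeholders; DRY_RUN=1 (default) only echoes.
DRY_RUN=${DRY_RUN:-1}
UserID=${UserID:-qsecofr}
SvrAddr=${SvrAddr:-ibmi.example.com}

run_remote() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run on $SvrAddr: $1"
  else
    ssh "$UserID@$SvrAddr" "system \"$1\""
  fi
}

run_remote "CHGASPACT ASPDEV(*SYSBAS) OPTION(*SUSPEND) SSPTIMO(120)"
trap 'run_remote "CHGASPACT ASPDEV(*SYSBAS) OPTION(*RESUME)"' EXIT
# ... the snapshot commands (see Example 4-18) would run here ...
```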
Restoring to snapshot
Example 4-19 shows an example of restoring to a snapshot.
Important: The instance must be in the Power Off state before you run a snapshot restore
operation.
Both can be used to create a new instance, or as a Full System Backup alternative.
This example shows how to create an export to your IBM Cloud Object Storage. Size
restrictions apply.
Example 4-20 shows how to export your instance by using the IBM Cloud CLI.
if (($error == 0))
then
echo '==============================='
4.5.5 Use cases
This section provides some use cases.
190 Power Virtual Server for IBM i
Note: When using IBM i stock images, remember that the default values enforce
case-sensitive passwords of up to 15 characters.
2. After a new window opens to access the default credentials for the initial login by way of
the web console, enter the user profile QSECOFR and password QSECOFR (upper case), as
shown in Figure 5-2.
Type user
Type password
4. Accept the software agreements by selecting Option 5 for each. Then, press Enter (see
Figure 5-4).
Type option 5
5. To use the keys that are available from VNC, click Next. Then, select PF15= Accept All,
as shown in Figure 5-5 on page 194.
Chapter 5. Hints and tips for IBM i deployments on IBM Power Systems Virtual Server 193
Figure 5-5 Accept the licenses agreements for Machine code
6. Repeat Step 5 until the display shows all of the software agreements are in the Accept
status (see Figure 5-6). Then, press PF3.
Attention: This QSECOFR profile is for demonstration purposes. The QSECOFR profile is
used for the SSH connection. However, the preferred practice is to not use the QSECOFR
profile with SSH on IBM i. Use a different user profile with similar authority.
The new user profile also has a home directory; this directory often is
/home/<userprofile>.
Note: You can add as many devices as you require in QAUTOVRT.
10.Issue the WRKSYSVAL QLMTSECOFR command. Then, press Enter and select option 2 =
Change. Then, change the value from 1 to 0. Press Enter.
11.Verify that cloud-init is configured as shown in Figure 5-8. Run the CFGTCP command,
select Option 1 and then, press Enter.
12.Issue the following commands to start the Telnet and SSH services:
STRTCPSVR *TELNET
STRTCPSVR *SSH
13.Verify that the ports for SSH and Telnet are listening, as shown in Figure 5-9. Issue the
NETSTAT *CNN command. Then, press Enter and then, press Shift + PF2.
5.1.1 Remote access to IBM i using tunneling
The public IP address blocks most ports. Therefore, you must use SSH tunneling or configure
your certificates and use SSL to allow IBM Access Client Solution to connect over public IP.
Before you use an SSH tunnel, you must create a user profile with USRCLS(*SECOFR) specified.
Complete the following steps to access the solution by using SSH tunneling.
1. Open a PuTTY terminal and create a session that uses the public IP, as shown in
Figure 5-10.
The ports that are required to configure on PuTTY are listed in Table 5-1.
Source port    Destination
449            localhost:449
50000          localhost:23
8470           localhost:8470
8471           localhost:8471
8472           localhost:8472
8473           localhost:8473
8474           localhost:8474
8475           localhost:8475
8476           localhost:8476
9470           localhost:9470
9471           localhost:9471
9472           localhost:9472
9473           localhost:9473
9474           localhost:9474
9475           localhost:9475
9476           localhost:9476
Note: Steps 5, 6, and 7 that are shown in Figure 5-11 are repeated for each port
that is in Table 5-1 on page 196.
b. Return to Session on Category that is on the left side of PuTTY and click Save.
c. Click Open at the session in PuTTY and click Yes to trust this host and connection.
d. A new window terminal is opened. Enter QSECOFR and the password that was changed,
and then, press Enter. The tunneling is now ready. Hold the terminal open, as shown in
Figure 5-12.
Important: The tunneling that is done by using PuTTY is for a Windows system. If you
use another operating system, such as Linux or Mac, the SSH tunneling to allow ACS to
connect over the External IP is different. For more information, see this IBM Cloud Docs
web page.
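On Linux or macOS, the PuTTY tunnels in Table 5-1 map to OpenSSH -L local forwards. The following sketch builds the equivalent command; the host address and user are placeholders:

```shell
#!/bin/bash
# Build an ssh port-forwarding command equivalent to the PuTTY tunnels in
# Table 5-1. Port 50000 forwards to the remote Telnet port 23; every other
# port forwards to the same remote port. HOST is a placeholder.
HOST=${HOST:-qsecofr@192.0.2.10}

ports="449 8470 8471 8472 8473 8474 8475 8476 \
9470 9471 9472 9473 9474 9475 9476"
args="-L 50000:localhost:23"
for p in $ports; do
  args="$args -L $p:localhost:$p"
done

# Print the command; remove the leading 'echo' to open the tunnel (-N: no shell).
echo ssh -N $args "$HOST"
```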
2. Select Access client solutions → Management → 5250 Session Manager → New
Display Session. A new window opens, and the 5250 display setup is shown (see
Figure 5-13).
Click OK
Important: For this example, 50000 was chosen as the source port number. This value
also was configured in PuTTY. Do not change the source port numbers. When
telnetting, avoid making the source port the same as the destination.
3. A window opens in which the connection with the PuTTY tunnel is established. Enter the
credentials of the QSECOFR profile and their password. Finally, a window capture shows the
Main Menu. Select File → Save as to save the 5250 display.
Note: A secure Telnet session (port 992) is commonly used on premises. However, because
SSH tunnel encryption is used in this example, port 992 is not necessary.
IBM Power Systems Virtual Server instances provide a web console from the portal, which is
based on noVNC and HTML5. Any HTML5 browser can be used to open a console
session.
Figure 5-3 on page 193 shows the web-based console that presents the following limitations:
Session expires after approximately 5 minutes of inactivity.
Options that are available at the bottom of the window can be confusing because they
include function keys and other special keys.
Cut and paste cannot be used in the window.
Configuration procedure
IBM i Access Client Solutions (ACS) must be installed so that the IBM i ACS console can be
configured.
Download the installation package from one of the following web pages:
IBM Support: IBM i Access - Client Solutions
IBM Entitled Systems Support
Note: The client software is available free of charge. The license is installed on the IBM i
instance.
Configure a Private Network and add it to the instance. In this example, the 192.168.80.0/24
network was added.
Tip: A ticket in IBM Cloud is required to activate the VLAN and subnet. For more
information, see IBM Cloud Support.
4. Select the Resource Name that is connected to your Private Network. If nothing is shown,
press F21, and the adapters that are in use are shown (see Figure 5-15).
Type 1 to select it.
5. Assign its own IP address to the LAN Adapter, as shown in Figure 5-16. Press F7 to
save, F13 to deactivate, and then F14 to reactivate the adapter.
Note: When an adapter is in use, IPL your IBM i system before using the LAN Console.
6. Add a description and click OK.
7. Go to the Console tab and select the Service hostname, as shown in Figure 5-18. This
name is the IP address for the Service Tools Server LAN Adapter.
8. Verify the connection. If all of the information is correct, click OK and start the console
connection.
5.2 Using snapshots on IBM i instances
A snapshot is a resource that is used to create a checkpoint of an IBM i VM for a possible
future rollback. It uses copy-on-write procedures to minimize snapshot time to near zero,
which allows the VM to be restored quickly.
To ensure data integrity, a disk stage must be completed on IBM i to save data that is cached
in memory.
Snapshots are useful before the following change management tasks are conducted:
Performing an OS upgrade
Installing PTFs
Making changes to system values
Updating application programs
Note: On Windows, some functions require the latest operating system release.
This process can take several minutes. Wait for the process to finish and then restart
the system.
2. In a command window, run the command that is shown in Example 5-2 to check whether
the setup processes succeeded.
The system prompts you for your account’s email and password.
If you use more than one account, select the account you use and the region by using the
item number that is shown in the window.
4. To install the required plug-in to work with Power Systems Virtual Server, run the
command that is shown in Example 5-4.
5. Click Y when you are prompted to continue with the setup process.
The available workspaces in this account are then listed (see Example 5-5).
7. To list the available instances, run the command that is shown in Example 5-7.
8. From this list, copy the ID of the instance that must be frozen by the snapshot.
9. Return to the green-screen console or terminal session and perform a disk stage. Then,
quiesce the database, as shown in Example 5-8. All data that is cached in memory is
written to disk, and the transactions are held in memory until the snapshot completes.
This action must be performed on any available ASP before the snapshot command is run.
10.After the data is written to disk and transactions are held in memory, return to the
ibmcloud CLI and take the snapshot. Run the ibmcloud pi instance snapshot create command
and target your VM instance. Then, choose a name to identify the snapshot (see
Example 5-9).
Example 5-9 Taking the snapshot by using ibmcloud cli
ibmcloud pi instance snapshot create EW01-IBMi01 --name SNP01_EW01-EWIBMi01
11.Important: If you have more volumes in addition to the boot disk, you must create a
comma-separated list of the volume IDs by using the --volumes parameter.
12.Run the ibmcloud pi instance snapshot list <InstanceName> command to list the
snapshot. The snapshot state is shown next to the Instance ID and Snapshot ID. Wait
for the available status.
13.When the status is available, resume database activity, as shown in Example 5-10.
16.Run the ibmcloud pi instance snapshot list command to see the restore operation
status.
17.Start the VM instance once the snapshot is restored.
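The snapshot workflow in the preceding steps can be condensed into the following sketch.
It assumes the ibmcloud CLI with the power-iaas plug-in; exact subcommand names and flags
can vary between plug-in versions. The instance and snapshot names are the example values
from this section, and the CHGASPACT CL command in the comments is the standard IBM i
quiesce command that stands in for the command shown in Example 5-8.

```shell
# Log in and install the Power Virtual Server plug-in.
ibmcloud login
ibmcloud plugin install power-iaas

# On the IBM i side (green-screen or 5250 session), quiesce each ASP first,
# for example: CHGASPACT ASPDEV(*SYSBAS) OPTION(*SUSPEND) SSPTIMO(300)

# Take the snapshot. Add --volumes with a comma-separated list of volume IDs
# if the instance has volumes in addition to the boot disk.
ibmcloud pi instance snapshot create EW01-IBMi01 --name SNP01_EW01-EWIBMi01

# Wait until the snapshot state reaches "available".
ibmcloud pi instance snapshot list EW01-IBMi01

# Then resume database activity on IBM i:
# CHGASPACT ASPDEV(*SYSBAS) OPTION(*RESUME)
```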
Provisioning Power Virtual Server with VPC landing zone by using deployable architectures
provides an automated deployment method to create an isolated Power Virtual Server
workspace and connect it with IBM Cloud services and public internet. Network management
components like DNS, NTP, proxy servers and NFS as a Service might be installed.
Compared with provisioning through the projects UI, user interaction is minimized and the
ready-to-go deployment time of a Power Virtual Server workspace is reduced from days to
less than 1 hour.
Automated Power Virtual Server with VPC landing zone provisioning that is described in this
guide is based on IBM Cloud catalog deployable architectures. In this documentation, we
describe only specifics that are related to Power Virtual Server with VPC landing zone
deployable architecture.
Figure 5-20 on page 206 maps out the ‘Standard’ variation.
Figure 5-20 Power Virtual Server with VPC landing zone 'Standard' variation
5.3.2 Extend Power Virtual Server with VPC landing zone - Standard variation
Important: This variation has a prerequisite. You must deploy the 'Create a new
architecture Standard' variant first.
The 'Extend Power Virtual Server with VPC landing zone' variation creates an additional
Power Virtual Server workspace and connects it to the existing Power Virtual Server with VPC
landing zone. It builds on an existing Power Virtual Server with VPC landing zone that was
deployed as the 'Create a new architecture' variation. This variation is typically used for
high availability scenarios in the same region.
Figure 5-21 on page 208 shows the structure of the ‘Quickstart’ variation.
Figure 5-21 Power Virtual Server with VPC landing zone 'Quickstart' variation
You can run AIX, IBM i, and Linux images on your virtual server instances. Select the required
T-shirt size, and a virtual server instance with the chosen T-shirt size or a custom
configuration is deployed. The mapping between the T-shirt sizes and the configuration
parameters is shown in Figure 5-22 on page 209.
This variation helps to install the deployable architecture 'Power Virtual Server for SAP HANA'
on top of a pre-existing Power Virtual Server landscape. The 'Power Virtual Server for SAP
HANA' automation requires a Schematics workspace ID for installation. The 'Import' solution
creates a Schematics workspace by taking pre-existing VPC and Power Virtual Server
infrastructure resource details as inputs. The ID of this Schematics workspace is the
prerequisite workspace ID that 'Power Virtual Server for SAP HANA' requires to create and
configure the Power Virtual Server instances for SAP on top of the existing infrastructure.
IBM Power Virtual Server meets this requirement by enabling clients to take advantage of DR
solutions between two IBM i Virtual Server Instances (VSIs) in separate IBM Cloud data
centers. DR solutions for IBM Power Virtual Server can be based on operating system level
replication or logical replication.
Important: With the availability of Global Replication Service (GRS) within Power Virtual
Server, IBM i can now use storage-level replication between two Power Virtual Server data
centers. This capability is discussed in 6.4, “Disaster Recovery with storage replication” on
page 293.
Replication solutions between two data centers always require that a network connection be
established between the data centers to allow the necessary data flow to occur securely. This
requirement also applies to DR with IBM Power Virtual Server, which requires specific
networking steps in IBM Cloud before implementing the replication solution.
This chapter addresses the two phases of DR configuration for IBM i workloads on IBM Power
Systems Virtual Server:
Performing the required network configuration
Implementing the DR solution
– Operating system level replication use case with IBM PowerHA SystemMirror for i
geographic mirroring.
– Logical Replication use case with TSP Bus4i.
4. Search the results for Power and select Power Systems Virtual Servers, as shown in
Figure 6-2 on page 214.
Chapter 6. Disaster Recovery with IBM Power Systems Virtual Server 213
Figure 6-2 Results for Power
5. Under Select Region, choose your region, as shown in Figure 6-3. You are limited to only
one service per region.
6. Select a Service Name or choose the default name. Then, click Create, as shown in
Figure 6-4 on page 215.
Your Power Systems Virtual Server location service now appears under the Services tab,
as shown in Figure 6-5.
Repeat this process to create a second Power Systems Virtual Server location service.
7. Click the Power Systems Virtual Server location Service that you created and provision a
subnet to be used by your Power Systems Virtual Server Instances (see Figure 6-6 on
page 216).
Figure 6-6 Resource list
8. Choose Subnets from the menu on the left, as shown in Figure 6-7.
The rest of the fields are automatically populated based on the CIDR you provided.
9. Click Create Subnet. A VLAN ID is then associated with the subnet, as shown in
Figure 6-9.
10.Open a Support Ticket with Power Systems to request that the subnet be configured to
allow local communication between any Power Systems Virtual Server Instance you
create in this Power Systems Virtual Server location service. Provide your Power Systems
Virtual Server location service and your subnet in the ticket.
Without completing this step, the Power Systems Virtual Server Instances that you create
cannot ping each other, even if they are on the same subnet in the same Power Systems
Virtual Server location.
It is here that you provision the IBM i VSIs. Consider the following points:
– Enter a name for your VSI (for example, IBMi-74-Lon06) and select how many VSIs you
must configure. The names of the VSIs are appended with -1, -2, and so on, if you
select more than one VSI. IBM i uses the first eight characters of the name as the LPAR
system name, so it is best to use only alphanumeric characters for IBM i naming.
– Specify whether this VSI is pinned to the host where it is running by selecting Soft or
Hard pin.
– A public SSH key must be added to securely connect to this VSI, as shown in
Figure 6-13.
5. Select the IBM i operating system and the desired image from the image list, or any other
image that you imported by way of the Boot Image menu.
Regarding the IBM i Software Licenses, if you implement an operating system Level
Replication solution with geographic mirroring, be sure to include IBM i PowerHA. Also,
include other IBM i licenses as necessary.
6. For the Tier type, you can choose between the available disk types: Tier 1 or Tier 3.
Tier 3 is the less expensive option, which we selected.
7. For the Machine type, choose between S922 or E980. S922 is the cheaper of the two
options, which we selected.
9. Choose the number of cores and the amount of RAM that you need (the minimum is 0.25
cores). Because IBM i partitions often require at least 4 GB of memory to start, use at
least that much memory (see Figure 6-15).
10.You can also attach more volumes to the VSI. In our example, we did not need to do so
and used only the root volume that is included. It is highly recommended to add the disks
individually after the initial Load Source (LS) volume is created. Doing so makes it easier
to keep track of which disk unit ID matches the volume name in the IBM Cloud UI.
11.No matter what operating system version or release that is used, it is always
recommended to use the latest PTFs, which can be found at this IBM Support web page.
12.Because PowerHA for IBM i is owned by HelpSystems, see this web page to ensure that
the latest HA group PTFs are installed.
13.Scroll down to choose the subnet on which these VSIs are to be provisioned. It is
assumed that one or more subnets were created before this step.
14.Click Attach Existing (see Figure 6-16).
15.Choose the subnet that you want to attach and then, click Attach, as shown in
Figure 6-17.
Complete the following steps to start the DL order process:
1. Log in to the IBM Cloud UI.
2. Choose Catalog and then, search for direct, as shown in Figure 6-19.
3. Select Direct Link Connect on Classic and then, click Create. No options are available
to select, as shown in Figure 6-20.
Global routing incurs extra charges and allows for easier Power Systems Virtual Server
location-to-Power Systems Virtual Server location communication. You also must order a
Vyatta Gateway Router to complete your Global routing option by way of a GRE tunnel.
IBM Support can help you with this part of the order.
In our example, we use Local Routing, ordered a Vyatta Gateway in each Power Systems
Virtual Server location, and provisioned a GRE tunnel end-to-end.
6. Select the box to accept the offer and then, click Create.
A support case is opened that includes the required information, as shown in Figure 6-23.
After this process is complete, you are contacted by IBM Support and requested to
complete and answer some questions in an attached document. You must return the
completed document as an attachment to the same Support ticket.
After this step is complete, IBM Support requests that you open a new Support ticket and
address it to Power Systems support. Include the information from the original DL ticket. This new
ticket is then sent to the Power Systems Virtual Server location support to configure their
side of the DL connection.
This step is the last step before DL communication works. You can test your connection by
pinging IBM Cloud Linux or Windows VSI from your Power Systems Virtual Server
Instances and vice versa.
In our example, we used two Vyatta Gateways (one in each Power Systems Virtual Server
location) to provide end-to-end Power Systems Virtual Server location-to-Power Systems
Virtual Server location communication by using GRE tunnels.
7. Log in to IBM Cloud and click the catalog and then, search for Vyatta.
Our example involves ordering one Vyatta in LON06 and the other in TOR01 data centers where
the Power Systems Virtual Server locations exist.
2. Select Gateway Appliance and then create it, as shown in Figure 6-25.
3. Select AT&T vRouter, which is the Vyatta Gateway. Other choices of gateways are
available, but we use Vyatta for our example.
4. Enter a name for the gateway and include the Power Systems Virtual Server location
name in it so that you can distinguish them later.
5. Select a location to match your Power Systems Virtual Server location.
Choose the following options:
– Clear the High Availability option unless you want to order one, which means that you
order two Vyatta Gateways in each Power Systems Virtual Server location. We cleared
this option for our example.
– Select the location by pressing the arrow key in each location to find the specific data
center where your Power Systems Virtual Server location is available.
– Choose a POD if several PODs are in the selected data center location.
– Select the CPU single or dual processor. We chose Single Processor.
– Select the amount of RAM that you want and add SSH keys if you want to log in without
the use of a password. (This step can be completed later.)
7. Accept the service agreement and click Create. The Vyatta gateway is now being
provisioned, which can take several hours.
This process must be done in each of the Power Systems Virtual Server locations.
After the Vyatta Gateway is provisioned, it is listed under Devices, where you can find your
Vyatta and root user passwords (see Figure 6-27).
8. Log in to the Vyatta gateway by using a browser and accessing it by using the following
link:
https://<ip_address_of_the_vyatta_gateway>
Use the following credentials, which are shown under Devices in the IBM Cloud UI on the
Passwords tab on the left (see Figure 6-28):
– User: Vyatta
– Password: As shown in the GUI
Typically, you use a command line to SSH to the Vyatta for more configuration tasks. You use
the Vyatta user identification to complete the configurations.
You also must provide the subnets information in each Power Systems Virtual Server location
in the ticket.
The items that are shown in red in Figure 6-29 are what must be configured at your end of
the GRE tunnel in each Vyatta Gateway.
Consider the following points about your tunnel:
IP address is 172.20.2.1/30, where 255.255.255.252 translates to /30
Destination IP is their tunnel source IP
Source IP is the IP address of the Vyatta gateway
The Vyatta Gateway address can be found in the IBM Cloud UI under Devices.
Log in to IBM Cloud UI and click IBM Cloud on upper left side. Then, click Devices and
choose the Vyatta system that you want to configure:
vyatta-labservices-lon.ibm.cloud
vyatta-labservices-tor.ibm.cloud
Open a browser and log in to the Vyatta Gateway by using the following credentials (see
Figure 6-32 on page 234):
User ID: Vyatta
Password: As shown in the GUI
https://10.72.74.203
ssh [email protected]
Note: Before logging in to 10.x.x.x private IPs in IBM Cloud, you must start your
MotionPro Plus VPN access, which gives you access to IBM Cloud private IPs.
Figure 6-32 Vyatta login window
Now that you verified your access to the Vyatta Gateways, you can access them by way of
SSH to continue your GRE tunnel provisioning.
Set up Power Systems Virtual Server location GRE tunnels in the Vyatta Gateways.
For more information about configuring GRE tunnels, see the following resources:
IBM Cloud Docs: VPN into a secure private network
Vyatta System’s Tunnels Reference Guide
IBM Cloud Docs: Configuring connectivity to Power Systems Virtual Server
Note: Before logging in to a 10.x.x.x private IPs in IBM Cloud, you must start your
MotionPro Plus VPN access.
After the support team finishes configuring the GRE tunnel, you must configure your end of
the GRE tunnel on the two Vyatta Gateways.
Setting up GRE Power Systems Virtual Server location tunnel in LON06
Log in to the Gateway by using the following information (see Figure 6-34):
User ID: Vyatta
Password: As shown in the GUI
ssh [email protected]
ssh to LON06 Vyatta Gateway
For this example, we use the information that is provided by IBM Support for LON06 GRE, as
shown in Figure 6-35.
You can verify that your GRE tunnel is set up by running the following commands:
configure
show interfaces tunnel
For this example, we use the information that was provided by IBM Support for TOR01 GRE,
as shown in Figure 6-36.
Run the following commands (for this example, we named our tunnel tun0 in the Vyatta
Gateway, which is the same name as the other Vyatta Gateway):
configure
set interfaces tunnel tun0 address 172.20.8.1/30
set interfaces tunnel tun0 local-ip 10.114.118.34
set interfaces tunnel tun0 remote-ip 10.254.0.30
set interfaces tunnel tun0 encapsulation gre
set interfaces tunnel tun0 mtu 1300
commit
exit
You can verify that your GRE tunnel is set up by running the following commands:
configure
show interfaces tunnel
In this example, we chose the tunnel address and tunnel source and destination IPs. The
tunnel address can be any IP subnet that you choose (we named our tunnel tun1 in both
Vyatta gateways).
We also selected IP addresses that are similar to those used in the Power Systems Virtual
Server location GRE tunnels. We chose a CIDR of /30 because we need only two IP
addresses: one in TOR01 and one in LON06. Consider the following points:
In Lon06 Vyatta, the GRE Vyatta-to-Vyatta tunnel address is 172.20.4.1/30.
In Tor01 Vyatta the GRE Vyatta-to-Vyatta tunnel address is 172.20.4.2/30.
Your tunnel destination IP is the IP address of the Vyatta gateway in the remote location.
Your tunnel source IP is the IP address of the Vyatta gateway in the local location.
We call the tunnels tun1 in both locations.
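Following the pattern of the tun0 commands shown earlier, the Vyatta-to-Vyatta tun1
configuration can be sketched as follows. The tunnel addresses 172.20.4.1/30 and
172.20.4.2/30 are the ones chosen in this example; the local and remote Vyatta gateway IP
addresses are placeholders that you must replace with the addresses shown under Devices.

```shell
# On the LON06 Vyatta (tunnel address 172.20.4.1/30):
configure
set interfaces tunnel tun1 address 172.20.4.1/30
set interfaces tunnel tun1 local-ip <LON06_vyatta_gateway_ip>
set interfaces tunnel tun1 remote-ip <TOR01_vyatta_gateway_ip>
set interfaces tunnel tun1 encapsulation gre
set interfaces tunnel tun1 mtu 1300
commit
exit

# On the TOR01 Vyatta, mirror the configuration with address 172.20.4.2/30
# and swap the local-ip and remote-ip values.
```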
The final step that is needed is to set up static routes in each Vyatta to point the subnets for
our Power Systems Virtual Server location to the correct tunnels.
Find the subnets that you created in each Power Systems Virtual Server location in TOR01 and
LON06 by accessing the services in the IBM Cloud UI for each Power Systems Virtual Server
location.
The static routes in LON06 must point to the subnets in TOR01 and vice versa.
We configure both GREs to the Power Systems Virtual Server location and between Vyattas.
Run the following commands in each Vyatta Gateway after logging in by way of SSH with the
Vyatta user ID:
In TOR01 Vyatta:
– configure
– set protocols static route 192.168.6.0/24 next-hop 172.20.8.2
– set protocols static route 192.168.50.0/24 next-hop 172.20.4.2
– commit
– exit
In LON06 Vyatta:
– configure
– set protocols static route 192.168.50.0/24 next-hop 172.20.2.2
– set protocols static route 192.168.6.0/24 next-hop 172.20.4.1
– commit
– exit
End-to-end connectivity now exists, and you can ping between your Power Systems Virtual
Server Instances in each Power Systems Virtual Server location. You also can ping from a
Power Systems Virtual Server Instance to IBM Cloud services, such as a Linux or Windows VSI.
If you cannot ping the IBM Cloud VSIs from the Power Systems Virtual Server location VSIs,
open a ticket to address this issue. IBM Support must address this issue from their Cisco
Router side.
All access to Cloud Object Storage from Power Systems Virtual Server Instance is by way of
this reverse proxy. It is accessed by way of https://<reverse_proxy_ip>.
A CentOS or Red Hat VSI must be provisioned in IBM Cloud to configure a reverse proxy. This
VSI must have public access. After the configuration process is complete, the public access
can be disabled.
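As an illustration only, a minimal Nginx reverse proxy on the CentOS or Red Hat VSI might
look like the following sketch. The Cloud Object Storage endpoint, certificate paths, and
region are assumptions (regional endpoints have the form
s3.<region>.cloud-object-storage.appdomain.cloud); adjust them for your environment.

```shell
# Install Nginx on the CentOS/Red Hat VSI.
sudo yum install -y nginx

# Minimal reverse proxy definition (placeholder endpoint and certificate paths).
sudo tee /etc/nginx/conf.d/cos-proxy.conf >/dev/null <<'EOF'
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/proxy.crt;   # your certificate
    ssl_certificate_key /etc/nginx/ssl/proxy.key;   # your key

    location / {
        # Forward requests to the Cloud Object Storage regional endpoint.
        proxy_pass https://s3.eu-gb.cloud-object-storage.appdomain.cloud;
        proxy_set_header Host s3.eu-gb.cloud-object-storage.appdomain.cloud;
    }
}
EOF
sudo systemctl enable --now nginx
```

The Power Systems Virtual Server Instances then reach Cloud Object Storage through
https://<reverse_proxy_ip>, as described above.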
Architecture
The overall architecture of our deployment is shown in Figure 6-37.
Any unplanned outage can include severe implications, especially if the duration of the
outage or recovery time exceeds business expectations. Some of these implications can
include loss of data, revenue, worker productivity, company reputation, and client loyalty.
Geographic mirroring refers to the IBM i host-based replication solution that is provided as a
function of IBM PowerHA SystemMirror for i.
Geographic mirroring requires a two-node clustered environment and uses data port
services. These data port services are provided by the System Licensed Internal Code
(SLIC) to support the transfer of large volumes of data between a source node and a target
node.
The processes that keep the Independent Auxiliary Storage Pool (IASP) synchronized run on
the nodes that own the IASPs. This transport mechanism communicates over TCP/IP.
6.2.1 Prerequisites
To implement PowerHA SystemMirror for i geographic mirroring, the following products must
be installed on both nodes in the cluster:
HA Switchable Resources, 5770-SS1 option 41.
IBM PowerHA SystemMirror for i, 5770-HAS *BASE.
PowerHA for i Standard Edition, 5770-HAS option 2.
6.2.2 Planning
Geographic mirroring is a host-based replication solution. Because IBM i manages data
transmission between the current production and current backup systems, the type of storage
that is used can be internal or external.
Geographic mirroring can be configured to use synchronous or asynchronous transmission
delivery mode. Depending on the transmission delivery mode, the mirroring mode also can be
configured to be synchronous or asynchronous.
These writes must complete to the disk (often, disk cache) on the preferred target system
(acknowledgment operation [3]) and the production copy system (acknowledgment operation
[4]) before sending the acknowledgment to the storage management function of the operating
system of the production copy.
In this configuration, the mirror copy independent auxiliary storage pool (IASP) is always
eligible to become the production copy IASP because the order of writes is preserved on the
mirror copy system. For this reason, this configuration is preferred, if possible.
However, in many cases, the network infrastructure does not allow this configuration as a
practical solution. For example, on lower bandwidth or high latency communications links, it is
not feasible to wait for the acknowledgment of the write being sent to the disk write cache, or
even the memory, on the preferred target system.
However, when you use asynchronous mirroring mode, the acknowledgment is sent back
from the mirror copy system when that data is in memory on the mirror copy system.
This approach provides a faster acknowledgment because no waiting is necessary for the
write to complete on the mirror copy system.
The physical write operation (5) is performed later (asynchronously) to the disk on the mirror
copy system. This approach is sometimes referred to as sync/async mode.
In this mode, the pending updates must be completed before the mirror copy can become the
production copy.
Performance might improve slightly on the production copy system during normal operation.
However, switch-over or fail-over times are slightly longer because changes to the backup
IASP are still in the main memory of the backup system. These changes must be written to
disk before the IASP can be varied on.
Asynchronous transmission mode with asynchronous mirroring mode specifies that
transmission delivery and mirroring mode are asynchronous. This approach is sometimes
referred to as async/async mode (see Figure 6-41).
The write on disk operation does not wait until the operation is delivered to the mirror copy
system. Asynchronous transmission delivery requires asynchronous mirroring mode.
It works by duplicating any changed IASP disk pages in the *BASE memory pool on the
production copy system, which likely results in page faulting in the *BASE pool. These
duplicated disk pages are sent asynchronously while the write order to the target is preserved.
Therefore, at any time, the data on the target system (although not updated) still represents a
crash-consistent copy of the source system.
Communication lines with longer latency times might tie up more memory resources for
maintaining changed data. Therefore, the environment must be sized correctly in terms of
bandwidth/latency and system resources.
Tracking space
Tracking space enables geographic mirroring to track changed pages while it is in suspended
status. With tracked changes, geographic mirroring can avoid full resynchronization after it
resumes in many cases, which minimizes the exposure of timeframes where no valid mirror
copy is available.
Tracking space is configured when geographic mirroring is configured or later by using the
Change Auxiliary Storage Pool Session (CHGASPSSN) command.
Suspend timeout
The suspend timeout in the ASP session specifies how long the application can wait when
geographic mirroring cannot be performed. When an error, such as a failure of the
communication link, prevents geographic mirroring from occurring, the production copy
system waits and retries during the specified suspend timeout before it suspends geographic
mirroring, which allows the application to continue.
The timeout value can be tuned by using the Change Auxiliary Storage Pool Session
(CHGASPSSN) command. The default value of the Suspend timeout (SSPTIMO) parameter is 120
seconds.
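For example, the suspend timeout might be raised from an IBM i PASE shell (for example,
over SSH) as sketched below. The ASP session name GEOMIR is a placeholder, and the PASE
system utility is used to submit the CL command; you can equally run CHGASPSSN directly
from a 5250 command line.

```shell
# Raise the geographic mirroring suspend timeout from the default 120 seconds
# to 300 seconds. GEOMIR is a placeholder ASP session name.
system "CHGASPSSN SSN(GEOMIR) OPTION(*CHGATTR) SSPTIMO(300)"
```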
Network topology
Geographic mirroring can be used in environments over any distance; however, only business
needs determine the latency that is acceptable for a specific application. Many factors affect
communications latency. As a result, these factors might affect geographic mirroring
performance.
From an HA perspective, geographic mirroring interfaces that are associated with different
Ethernet adapters are considered a preferred practice. The use of redundant switches and
routers further improves the overall HA value of the environment (see Figure 6-42).
Synchronization priority
One of the attributes that must be considered for geographic mirroring is synchronization
priority.
The synchronization priority setting refers to a full or partial synchronization that is performed
initially after you set up geographic mirroring, and after the geographic mirror session is
suspended or detached.
It is always best to synchronize the data as quickly as possible. However, potential drawbacks
exist if you simply set the synchronization priority to *HIGH in all configurations.
To determine an approximation of the time that is needed for initial synchronization, multiply
the total space that is used in the IASP by 8 to convert bytes to bits (because throughput is
normally measured in bits per second), and then divide the result by the effective throughput
of the chosen communications links.
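As a worked example of this approximation (with placeholder numbers: 1,024 GB used in the
IASP and an effective link throughput of 500 Mbps):

```shell
# Estimate the initial geographic mirroring synchronization time.
used_gb=1024            # placeholder: space used in the IASP, in GB
effective_mbps=500      # placeholder: effective link throughput, in Mbps

# Convert GB to bytes, then bytes to bits (x8).
bits=$((used_gb * 1024 * 1024 * 1024 * 8))
# Divide by the link throughput in bits per second.
seconds=$((bits / (effective_mbps * 1000 * 1000)))
echo "Estimated initial sync: ${seconds} seconds (~$((seconds / 3600)) hours)"
```

With these placeholder inputs, the estimate is 17,592 seconds, or just under five hours
(the script prints the truncated hour count).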
Although asynchronous geographic mirroring allows more flexibility for the distance between
systems, implications still result from undersizing the source, target, or the network
infrastructure between the sites.
Minimizing the latency (that is, the time that the production system waits for the
acknowledgment that the information was received on the target system) is key to good
application performance.
If the current strategy is to end production applications and perform backups on the
production system and your business has no requirement to change this strategy, you can
continue to run backups as usual. This strategy ensures that the mirror copy is in sync for the
longest period.
If you want backups from the target/mirror copy, consider that geographic mirroring does not
allow concurrent access to the mirror copy of the IASP. This rule has implications that must
be considered when you perform backups.
Creating the Production IBM i instance
When creating the instance, the default size for the Load Source (LS) volume is 80 GB. Start
with only the Load Source volume, and then add each new volume (SYSBAS or IASP)
individually later in the process so that you can track each disk unit ID as it is added;
otherwise, the disks can be difficult to map later.
The names of the disks (from IBM Cloud Services) across all instances within that server
must be unique. These names do not present themselves in the IBM i interface; instead, they
are visible from the Cloud interface only. Therefore, it is useful to individually keep track of the
disk unit ID (from IBM i interface) and disk name (from Cloud interface) so you can assign
your disks to the suitable ASP. These disks are best created at a later step in the process.
When choosing a name for the instance, consider that IBM i uses the first eight characters of
the instance name as the system name. Therefore, it is best to choose an instance name that
is unique within its first eight characters.
Note: This window is the operating system log-in. The DST/SST password is changed
later.
3. In the Work with software agreements window, use Option 5 (Display) for all, and press
Enter. Then, press F15 (Accept ALL) on each license. After all licenses are accepted,
press F3 (Exit) to return to the main menu.
During this time, the IP interfaces and line descriptions are still being configured. This
process can take up to 5 minutes to complete. If any external IPs were requested, they are
not displayed in CFGTCP, Option 1 (Work with TCP/IP interfaces), as shown in
Figure 6-43.
Be sure that all other post-IPL jobs and servers are active as expected.
Creating the HA/DR IBM i instance
Follow the same recommendations as described for the Production IBM i instance, and
ensure that the unconfigured disks for the IASP remain unchanged.
Establishing network connectivity between production and HA/DR
instances
After both instances are created, it is imperative that both instances can communicate for
intended PowerHA Cluster communication and PowerHA Geographic Mirroring replication.
For better performance, it is recommended to use one set of IP interfaces for inter-LPAR
communication (such as FTP, remote journaling, and BRMS), including PowerHA Cluster
communication.
A secondary set of interfaces can be reserved for PowerHA Geographic Mirroring so that any
spikes in replication traffic have less effect on packet delay for other inter-LPAR
communication.
It is also useful to use a separate IP interface for user log-in and administration. This interface
can be an internal IP interface, where the user connects to the corporate network, and then
accesses the systems by way of the internal IP. Optionally, an external IP interface can be
assigned to the instance by way of the IBM Cloud Services.
Creating IASP
The IASP is created on the Production IBM i instance in preparation for IASP-enablement.
These disks are virtual volumes that are created from external storage, with RAID protection
pre-assigned. Therefore, protection does not need to be added to the IASP disks upon
creation. To create the IASP, run the following command (see Figure 6-45):
CFGDEVASP ASPDEV(<IASP Name>) ACTION(*CREATE) TYPE(*PRIMARY) PROTECT(*NO)
ENCRYPT(*NO) UNITS(*SELECT)
After the process is completed, the ASP device description can be varied on, and the
resource name shows with the same name as the IASP.
At any time, a matching device description can be created on the HA/DR node with the
following command:
CRTDEVASP DEVD(<IASP Name>) RSRCNAME(<IASP Name>)
This command creates only the device description. No IASP exists on the HA/DR node now,
nor should one exist. The unconfigured disks for PowerHA Geographic Mirroring are
configured later.
Note: It is recommended that customers use IBM Lab Services for assistance in migrating
a non-IASP environment to an IASP-enabled environment before moving forward with any
PowerHA-managed solution. Only data that is included in the IASP is replicated, and
switchover or failover activities fail if the partitions are not correctly configured to support
IASP.
3. Select Option 8 (Start) to start clustering on the production node. The status changes to
Active.
4. From the same window, select Option 1 (Add) to add the DR node to the cluster. Enter
the IP address, and specify the Start Indicator as *NO. That node now is listed and shows
a status of New.
5. Select Option 8 (Start) to start clustering on the DR node, as shown in Figure 6-47. The
status changes to Active.
3. Select Option 6 (Work with nodes) and then, select Option 1 (Add) to add the DR node
to the device domain. The window now lists both nodes as Active in that device domain.
2. Select Option 1 (Cluster resource group) from the pop-up menu (at V7R4) and press
Enter.
3. Specify Type=*DEV, Exit Program=*NONE, and User Profile=*NONE.
4. Enter a “+” next to Recovery domain node list and then, press Enter.
5. For the first entry in the recovery domain (see Figure 6-51), use the name of the
production node with a node role of *PRIMARY and a site name that is unique from the DR
node. Specify at least one IP address to be used by the production node for geographic
mirroring (replication). Then, page down.
6. For the next entry in the recovery domain, use the name of the DR node with a node role
of *BACKUP, a sequence number of 1, and a site name that is unique from the Production
node. Specify at least one IP address to be used by the DR node for geographic mirroring
(replication). Press Enter and then, page down.
7. (Optional) Enter a text description for the CRG.
8. Press Enter to create the CRG. The status shows as Inactive.
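The cluster and CRG setup that is described above can also be performed with CL commands. The following sequence is a sketch only, with hypothetical cluster, node, site, and IP names; the exact recovery domain element order can vary by release, so prompt each command with F4 and verify before use:

```cl
/* Create the cluster with the production node, then start clustering on it */
CRTCLU CLUSTER(CLOUDCLU) NODE((PRODNODE ('192.168.10.11'))) START(*NO)
STRCLUNOD CLUSTER(CLOUDCLU) NODE(PRODNODE)

/* Add the DR node and start clustering on it */
ADDCLUNODE CLUSTER(CLOUDCLU) NODE((DRNODE ('192.168.20.11'))) START(*NO)
STRCLUNOD CLUSTER(CLOUDCLU) NODE(DRNODE)

/* Add both nodes to a device domain */
ADDDEVDMNE CLUSTER(CLOUDCLU) DEVDMN(CLOUDDMN) NODE(PRODNODE)
ADDDEVDMNE CLUSTER(CLOUDCLU) DEVDMN(CLOUDDMN) NODE(DRNODE)

/* Create the device CRG; the recovery domain lists node role, sequence,
   site name, and the data port IP that is used for geographic mirroring */
CRTCRG CLUSTER(CLOUDCLU) CRG(CLDGEOMIR) CRGTYPE(*DEV)
       EXITPGM(*NONE) USRPRF(*NONE)
       RCYDMN((PRODNODE *PRIMARY *LAST SITEA ('192.168.10.12'))
              (DRNODE *BACKUP 1 SITEB ('192.168.20.12')))
       CFGOBJ((IASP1 *DEVD *ONLINE *NONE))
```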
3. If automatic vary-on of the IASP is wanted after switchover or failover (which is more
common), change the Configuration object online to *ONLINE.
4. Specify *NONE for the Server takeover IP address because managing an automated
switch of a primary IP interface is done more efficiently with IASP exit programs. Press
Enter to complete the process.
– ASP Session Name is the label for the geographic mirroring replication, against which
actions can be performed; for example, Suspend, Resume, Detach, and Reattach.
3. The next window shows a list of unconfigured disk units from the DR node. Select the
units that are to be included in geographic mirroring. Press F9 (Calculate selection) to
view the capacity that results after the configuration process is complete.
4. Press Enter to begin configuration of geographic mirroring. This process can take up to
several hours, depending on the number and size of disks used (see Figure 6-55).
5. After the configuration process is complete, DSPASPSSN SSN(<ASP Session Name>) shows
the status as *RESUMPND.
6. Vary on the IASP device, and recheck the ASP session status. It shows as *SUSPENDED.
7. Select WRKCLU, Option 9 (Work with cluster resource groups) and use Option 8
(Start) to start the CRG. The status changes to Active (see Figure 6-56).
The production copy can function normally during synchronization, but performance might be
affected negatively.
During synchronization, the contents of the mirror copy are unusable, and it cannot become
the production copy. If the independent disk pool is made unavailable during the
synchronization process, synchronization resumes where it left off when the IASP is made
available again.
The message CPI095D Cross-site Mirroring (XSM) synchronization for IASP is sent to
the QSYSOPR message queue every 15 minutes to indicate the progress and type of the
synchronization.
Important: Any changes that are made on the mirror copy while it is detached are
undone, and any tracked changes from the production copy are applied.
Planned switchover
Run WRKASPJOB to check which jobs use the IASP. You must end all applications and
jobs that are using the IASP.
Complete the following steps to perform a planned switch by using IBM i control language
(CL) commands:
1. From the Work with Cluster (WRKCLU) menu that is shown, select Option 9 to work with
cluster resource groups.
2. On the Work with Cluster Resource Groups display, specify Option 3 (Change primary)
to change the CRG primary node.
3. On the Change CRG Primary (CHGCRGPRI) display, press Enter.
The display shows a status of inhibited while the switch occurs.
4. After the switch is finished, you can run the Display ASP Session (DSPASPSSN
SSN(CLDGEOMIR)) command to confirm that the nodes and ASP copies were reversed.
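The planned switch can be summarized as the following CL sequence (using the same hypothetical cluster, CRG, and IASP names as elsewhere in this chapter):

```cl
/* Check for and end all jobs that use the IASP before switching */
WRKASPJOB ASPDEV(IASP1)

/* Change the CRG primary node; PowerHA varies the IASP off on the old
   primary and brings it online on the new primary */
CHGCRGPRI CLUSTER(CLOUDCLU) CRG(CLDGEOMIR)

/* Confirm that the node roles and ASP copies were reversed */
DSPASPSSN SSN(CLDGEOMIR)
```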
Unplanned switchover
A failover occurs when the source node fails and the backup node takes over. The default
failover procedures depend on the cluster failover wait time and failover default action
settings.
Important: As with a planned switch, the success of a failover operation also depends on
your previous testing and verification to show that the business applications can run on the
backup node and users can access those applications.
For an automatic failover event, a CPABB02 Cluster resource groups are failing over to
node backup-node. (C G) inquiry message is sent to the cluster or CRG message queue on
the backup node if a failover message queue is defined for the cluster or CRG.
If no failover message queue is defined, the failover starts immediately without posting any
message.
The cluster parameters Failover Wait Time (FLVWAITTIM) and Failover Default Action
(FLVDFTACT) determine the next actions.
Setting the Failover Wait Time parameter to a duration in minutes or *NOMAX allows a user
on the backup node to respond to the CPABB02 inquiry message to proceed with the failover
or cancel the failover.
Note: Regardless of the cluster parameter settings, the primary IASP is taken offline by
PowerHA for a failover event.
The cluster partition condition means that the backup node cannot determine the status of the
production node reliably. In this situation, an automatic failover does not occur, regardless of
the cluster failover settings. In fact, a failover is not possible unless you perform the following
steps:
1. Determine whether the primary node or source node is still in operation.
If the production workload can continue, you do not need to fail over to the backup node.
For a cluster communications failure only, message ID CPDB715 is issued on the primary
node.
Mirroring continues and after communications are restored, the cluster software
reconnects the cluster nodes automatically.
Otherwise, if the primary node is no longer responsive or available, the status of that node
must be changed from Partition to Failed. A status of Partition or Failed allows a failover to
proceed to the backup node by running the following steps:
2. Log on to the backup system and verify the node status. If the primary node is in a
Partition status, a manual switch or failover cannot be performed. Use Work with Cluster
Nodes (WRKCLU) menu Option 6.
3. The status of the primary node must be changed to Failed by using the Change Cluster
Node Entry (CHGCLUNODE) command. This command also varies off the IASP on the target
node.
4. The node status changes to Failed. The CHGCLUNODE command also triggers a cluster
failover without a failover inquiry message, but it still requires the user to vary on the IASP
on the new primary or source node and start the takeover IP interface.
5. Vary on the IASP on the backup (now source) node. The status changes to AVAILABLE.
Also, start the takeover address, if required.
6. Run the following Display ASP Session (DSPASPSSN) command to verify that the node ASP
copy shows as AVAILABLE. The backup node now has the production copy:
DSPASPSSN SSN(CLDGEOMIR)
Important: The terms can be confusing here. After a switch or failover, the “backup”
node (which refers to a physical system or partition) is the “source” node and contains
the “production” copy of the IASP when it is shown in the DSPASPSSN display.
7. After the original primary system is repaired and started, ensure that the IASP on that
node is in a varied-off status and that the cluster node is Inactive. Start the repaired node
from the current production node and verify that all nodes show Active, as shown in
WRKCLU, Option 6.
8. Start the CRG if it is not started by running the following command: STRCRG
CLUSTER(CLOUDCLU) CRG(CLDGEOMIR).
9. Use the Work with Cluster (WRKCLU) menu and select Option 10 to work with ASP copy
descriptions. Select Option 22 to change session on either ASP’s Opt line and then, press
F4.
10.On the Change ASP Session display, specify *RESUME.
11.Refresh the display to show that the status changed to RESUMING.
Pressing F11 displays the progress and the amount of data that is out of sync.
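Steps 2 through 11 above map to a short CL sequence on the backup node. This is a sketch with hypothetical names; confirm that the primary node truly failed before you change its status, because CHGCLUNODE triggers the failover:

```cl
/* Mark the unresponsive primary node as failed (triggers the failover) */
CHGCLUNODE CLUSTER(CLOUDCLU) NODE(PRODNODE) OPTION(*CHGSTS)

/* Vary on the IASP on the backup (now production) node */
VRYCFG CFGOBJ(IASP1) CFGTYPE(*DEV) STATUS(*ON)

/* Start the takeover IP interface, if one is used (address is hypothetical) */
STRTCPIFC INTNETADR('192.168.10.50')

/* After the failed node is repaired and clustering is restarted on it,
   resume mirroring and monitor the resynchronization */
CHGASPSSN SSN(CLDGEOMIR) OPTION(*RESUME)
DSPASPSSN SSN(CLDGEOMIR)
```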
After an unplanned failover, a full resynchronization of the IASP often is required. The full
resynchronization can take an extended time.
Note that the backup copy is unusable until the resume process is finished.
Note: It is considered a preferred practice to run the Reclaim Storage (RCLSTG) command
on SYSBAS on a failed production node after you perform a failover and before you
resume mirroring. Also, you must schedule an RCLSTG of the IASP at the earliest
convenience, preferably on the backup node before you switch back to “preferred
production”.
6.2.5 Troubleshooting
With any new setup, errors can occur. Although some errors can be related to infrastructure
issues, others arise because of unfamiliarity with new technology and the differences that
the IBM Cloud presents compared to traditional solutions.
The following practices address differences from what is expected in non-cloud
environments:
When selecting disk units to assign to the new cloud instances, start by assigning only the
initial Load Source (LS) volume, and identifying that volume in the IBM Cloud GUI for easy
reference.
After the instance is created, make note of the disk unit ID in SST/DST. Add subsequent
disks individually, and use a naming scheme in the IBM Cloud GUI that easily identifies
SYSBAS disks or IASP disks.
At the same time, keep track of the associated disk unit ID from the IBM i instance. This
information is crucial in determining which disk units to add with CFGDEVASP and
CFGGEOMIR.
When selecting a name for the IBM i instances, choose a name that is 8 characters or
less, because the IBM i truncates the instance name to 8 characters when assigning a
system name.
As with any geographic mirroring solution, ensuring sufficient throughput on the replication
interfaces is key. Separate other traffic onto other subnets (and ideally other adapters) so
that the replication traffic is not hindered and subject to auto-suspend issues or lengthy
switchover or failover processes.
It is a best practice to keep the PowerHA for IBM i clustering communication on a separate
subnet from replication because the “heartbeat” traffic must respond in a timely manner to
prevent a cluster suspend (that is, “Partition”) status. A Partition status prevents any wanted
switchover or failover from proceeding.
For issues that are specific to PowerHA for IBM i geographic mirroring configuration, errors,
or performance, start by contacting IBM Support and requesting the High Availability
Solutions (HAS) team. They can then engage development teams or Cloud Support, if
necessary.
6.3.1 Introduction
For many years, logical replication solutions were the only choice in the high availability (HA)
and Disaster Recovery (DR) product market for IBM i and its predecessors.
IBM’s introduction of PowerHA and its hardware-based replication solutions (at the operating
system level or External Storage-based) provided a different approach for HA and DR in the
IBM i platform.
However, even today most HA/DR solution implementations in the IBM i market are still
logical replication solutions.
Logical replication solutions work by replicating changes that are made to database files and
other IBM i objects in real time, which allows the customer to start a role swap within their
HA/DR product if an outage occurs. The role swap promotes the secondary system into the
production system and enables the customer to continue operations.
The logical replication products use the IBM i built-in database journaling function to detect
changes that are made to database files and the audit journal to detect changes that are
made to other types of objects. Later, they use remote journaling or a proprietary journal
scrape method to get the data out of the journals before sending it across the network to the
secondary server, where a journal apply process is used to update the secondary database.
The options that are available for implementing logical replication solutions for IBM i include
the following products:
Bus4i from T.S.P. Company for Information Systems
RobotHA from HelpSystems
From Precisely:
– Assure QuickEDD
– Assure MIMIX
– Assure iTera
Maxava HA from Maxava
Ha4i from Shield Advanced Solutions
iSB-HA from iSam Blue
hiCluster from Rocket Software
For the purposes of the logical replication use case that is included in this chapter, we use the
Bus4i product.
6.3.2 Prerequisites
Bus4i features the following minimum prerequisites:
1 GB of main memory
20 GB of available disk space
The system is installed with IBM i 6.1 or later, and the TCP/IP services for DDM and
REXEC are active
6.3.3 Planning
Unscheduled computer downtimes are any IT user’s ultimate nightmare. This issue is critical
if permanent system availability is essential. Situations exist in which downtimes for repairs
and maintenance must be avoided at all costs.
The T.S.P. IBM i high availability system with BUS4i enables permanent mirroring between
two or more complex systems on all IBM Power System models that run IBM i.
In this process, production databases, objects, user profiles, IFS, authorization lists, spooled
files, and control information about batch jobs are permanently mirrored. This mirroring
assures that operations can be carried out immediately on the secondary system when
scheduled or unscheduled downtimes occur (see Figure 6-57).
When the nonfunctional server unit is restored, the systems are resynchronized.
To keep users consistently updated about the mirroring process status, they are notified of
malfunctions by SMS or email.
To replicate a system by using BUS4i, some product characteristics must be considered, as
described in the following sections.
In various cases, guaranteed transmission of each transaction is required. The use of remote
journals might be the best solution here because it provides a synchronous transmission and
receiving process.
Intelligent sequence control restricts continuous verification of data integrity and consistency
to occasional sampling for internal auditing. Administrative costs are reduced to a minimum
because all changes, amendments, or deletions are automatically mirrored to the secondary
system in real time.
Transmission is restricted to changes, new files, or deletions in critical directories. This
restriction leaves the bandwidth of data links available for other user data.
Regular monitoring of systems is reduced to a minimum. At the same time, this safety feature
gives administrators and management a sense of security.
This support assures that contemporary applications with trigger functions can be entirely
mirrored and are fully operational if an emergency occurs.
Mirroring in a test system enables you to verify alterations to the test system, which allows
precise definition and minimization of downtimes in the real system.
If a new system is installed, the unit that is in place can be used as a backup system. A
costly identical second system does not need to be installed.
Some applications require guaranteed copies, and it is here that the synchronous process is
suited. The transmitting system waits with further processing until the receiving function
acknowledges receipt and processing.
Processing routines that are independent from transmitting and receiving processes
Owing to autonomous processes, users have more flexibility in designing their systems and
routines (see Figure 6-60).
If the applying routine is stopped, a consistent data backup to tape can be made on the
secondary system. Modifications are still accepted by the receiving process. Therefore, the
backup window during data protection is minimized.
Spooled files
Some spooled files (print-outs) cannot be repeated; for example, printed invoices. The
mirrored version in the secondary system is halted to assure that only one print-out is made.
If an emergency occurs, print-out can be carried on by the secondary system without any
complex recalculation.
When a switchover occurs, starting new batch jobs on the secondary system is restricted to
those jobs that were not yet started on the primary system. It is no longer necessary to look
for jobs that must be run.
Your daily inspection of the process list is no longer necessary, and routine activities are
reduced to a minimum.
The T.S.P. system BUS4i is based on the operational principle of journaling. Journals record
all changes to databases and objects. The licensed program BUS4i selects and sends the
relevant journal entries to the secondary system.
The second system features a copy of the production application, the database, and all
affiliated objects. The BUS4i system on the secondary system processes the incoming
journal entries and implements the changes to database and the objects. Any defined
commitment boundaries are considered. Therefore, two identical applications are available in
autonomous systems.
After it is configured and started, the sending and receiving processes and the applying
processes run automatically. These processes include menu-driven maintenance and
monitoring features.
The receiving process accepts data sets that are sent by the sending process and stores
these data sets in a database file.
The applying process in the secondary system is autonomous from the sending/receiving
process, as shown in Figure 6-63.
From the Primary System, the BUS4i main menu can be accessed by running the bus4i
command (see Figure 6-65).
Normally, BUS4i manages a main menu with the title in white for the primary system
environment and another main menu with the title in red for the secondary system
environment so that it is easy to identify which environment you are working in (see
Figure 6-66).
BUS4i handles replication between systems by establishing different mirror groups, including
the following standard groups:
P2S_#IFS for the replication of IFS objects
P2S_#OBJ for non-Database objects
P2S_#SPL for Spools
P2S_#SYS for objects in the QGPL and QUSRSYS libraries
P2S_#SYS2 for Database catalog objects, such as triggers and constraints
In addition to these standard mirror groups, user mirror groups can be created. For example,
the libraries of a specific application can be included, such as the specific mirror groups
P2S_ERP and P2S_PAY to replicate the libraries of the ERP and Payroll applications.
A mirror group P2S_#NEU also is available, which can be used to include all libraries that
are newly created in the system and were not added to another mirror group.
The P2S_ prefix indicates that the mirror group handles the replication from Primary to
Secondary for the objects that it contains.
In the same way, the S2P_ prefix is used for the corresponding mirror groups in the case of
replication from Secondary to Primary, as shown in Figure 6-67.
For each configured mirror group, you can customize the different control parameters, such
as the Journal and Journal Receiver, that are used to journalize the objects of that group, as
shown in Figure 6-68 on page 271.
For mirror groups #IFS, #SYS, and #SYS2, the QAUDJRN system audit journal from the
QSYS library, the QAOSDIAJRN journal from QUSRSYS, and the QSQJRN journal from
QSYS2 are used (see Figure 6-70, Figure 6-71, and Figure 6-72 on page 273).
The objects that are included in each of the mirror groups are defined by managing the
corresponding filters, as shown in Figure 6-73.
Example 6-1
In the case of the mirror group P2S_#IFS, which controls the replication of IFS
objects from the primary system to the secondary system, you can include a green
entry that indicates inclusion for the /* directory. Then, you can include several
red entries that indicate exclusion for all of those directories that do not need
to be replicated to the secondary system.
Similarly, by using the mirror groups P2S_#OBJ and P2S_#SPL, you can control which objects
and spool files are included in or excluded from replication, as shown in Figure 6-74 and
Figure 6-75.
By default, all objects that are in new libraries that were not added to any user mirror group
are included in the mirror group P2S_#NEU for replication.
If in a specific situation it is not desirable to include some of these libraries or objects in the
replication (for example, new libraries with large volumes of redundant data for testing),
these libraries or objects can be excluded by using an entry in red in this same mirror group
(see Figure 6-78).
Libraries that belong to a specific application can be assigned to a specific user mirror group
by individual name or generic name, as shown in Figure 6-79.
Both IBM i VMs are physically hosted on a Power System S922 and have 0.25 processor
units assigned in shared uncapped mode, 8 GB of memory, and a standard disk
configuration in their system ASP.
The IBM i VMs are in the same private subnet. Communication between them is done
through the DDM, REXEC, and 140XX ports (for the different mirror groups).
Complete the following steps to start the planned switchover:
1. At the command line of the primary system, enter bus4i and press Enter to reach the
BUS4i main menu for the primary system (see Figure 6-81).
2. Select Option 1. Work with transmission processes to check that all of the mirror
groups are defined, in ACTIVE status, and no latency exists on them, as shown in
Figure 6-82.
In the main menu of the primary system, Option 2. Administer Mirror Groups also can be
used.
4. In the main menu of the primary system, select Option 3. Service Functions to check the
status of the compared data for each mirror group.
All of the mirror groups must stay in ACTIVE status without any errors or warnings reported,
as shown in Figure 6-84.
If errors or warnings exist for some mirror group, they must be investigated and fixed before
the switchover operation is performed. Errors are displayed in red and warnings are displayed
in yellow, as shown in Figure 6-85.
Figure 6-85 BUS4i administering service functions with examples of errors and warnings
5. All of the tasks that are involved in the switchover of the primary system environment are
specified in the switchover command. This information can be checked from the main
menu of the primary system by selecting Option 9. Switch System (Primary to
Secondary System) and then, selecting Option 3. Administer Commands. If some
adjustments to the switchover process must be done, they can be performed by using this
interface (see Figure 6-86).
A switchover message appears that indicates you are switching the PRIMARY system to the
SECONDARY system. Answer with Y=Yes and then, press Enter to continue, as shown in
Figure 6-88.
8. A warning message appears that indicates a potential problem with duplicated IP
addresses. This message is normal and is not an issue. Therefore, answer Y=Yes to
continue switching, as shown in Figure 6-89.
9. The Work with All Job Queues window is displayed, as shown in Figure 6-90. Check that
all of the application jobs are completed and no active user jobs exist in the system. Press
F3=Exit to continue with the switchover process.
Figure 6-90 BUS4i switchover Work with All Job Queues window
After the switchover process is completed in the primary system, complete the following steps
to perform the same procedure on the secondary system:
1. From the command line of the secondary system, enter bus4i and press Enter to access
the main menu of the secondary system (the system with title in red).
2. Select Option 9. Switch System (Secondary to Primary System).
3. In the switchover menu, select Option 20, Start Secondary to Primary System and then,
press Enter and proceed in the same way as you did for the primary system (see
Figure 6-92 and Figure 6-93 on page 284).
Figure 6-93 BUS4i switchover confirmation message in secondary system
4. After the switchover process is complete on the secondary system, verify the new roles of
each system by entering bus4i in the command line of each system.
As shown in Figure 6-94, the preferred target system EWIBMI04 shows the main menu for
primary system (with title in white).
From the switchover menu, you can use Option 1. Administer Parameters to verify
whether the roles for the primary and secondary systems were switched, as shown in
Figure 6-96 and Figure 6-97 on page 286.
Figure 6-96 BUS4i switchover system parameters on new primary system after switchover
Figure 6-97 BUS4i switchover system parameters on new secondary system after switchover
5. After verifying that the switchover process was run correctly in the primary and secondary
system environments, from the command line of the new primary system (EWIBMI04, which
is the former secondary system), start the replication process from the new primary
system to the new secondary system by running the bus4i *yes command (see
Figure 6-98).
6. In the main menu of the primary system, use Option 2. Administer mirror groups to
check the status of the different mirror groups, as shown in Figure 6-101.
If you wait approximately 15 minutes, you can use Option S. Transfer Rate to see the
transfer rates for each mirror group, as shown in Figure 6-102.
In normal operation, BUS4i ISB replicates the IBM i system to the ISB NAS device in real
time; that device can be located on campus or elsewhere, as shown in Figure 6-103.
Figure 6-103 Disaster Recovery, Resiliency, and Security with BUS4i ISB
Recovery procedures with daily backups to tape can result in data loss of up to 24 hours or
more, depending on the case.
Example 6-2
Last backup on a Friday. Crash or security incident on a Monday at noon.
Figure 6-104 Protective Shielding without tool
The restore procedure with daily tape backup and BUS4i ISB protects against loss of data,
object changes, and printouts (spooled files).
Figure 6-107 Backup process
IBM Power Virtual Server now supports storage-level replication between different Power
Virtual Server data centers. Global Replication Service on IBM Power Systems Virtual Server
provides asynchronous data replication between two sites over long distances. Asynchronous
replication updates the backup instance at regular intervals, rather than continuously. This
approach requires less bandwidth and connectivity, is less costly, and uses fewer resources.
The benefits of Global Replication on Power Virtual Server include the following:
Maintain a consistent and recoverable copy of the data at the remote site, created with
minimal impact to applications at your local site.
Efficiently synchronize the local and remote sites with support for failover and fail-back
modes, helping to reduce the time that is required to switch back to the local site after a
planned or unplanned outage.
Replicate more data in less time to remote locations.
Maintain redundant data centers in distant geographies for rapid recovery from disasters.
Eliminate costly dedicated networks for replication and avoid bandwidth upgrades.
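The bandwidth saving of interval-based asynchronous replication can be estimated from the daily change rate. The sketch below uses a hypothetical 200 GB of changed data per day and a 15-minute replication interval; both figures are illustrative assumptions, not Power Virtual Server defaults:

```python
# Rough sizing sketch: asynchronous replication only needs to move the
# *changed* data each interval, not the full volumes. All figures are
# illustrative assumptions, not Power Virtual Server defaults.
daily_change_gb = 200          # assumed changed data per day
interval_minutes = 15          # assumed replication interval

intervals_per_day = 24 * 60 / interval_minutes
change_per_interval_gb = daily_change_gb / intervals_per_day

# Average link throughput needed to drain one interval's changes before
# the next interval starts (GB -> gigabits -> megabits per second).
mbits_per_sec = change_per_interval_gb * 8 * 1000 / (interval_minutes * 60)
print(f"~{change_per_interval_gb:.2f} GB per interval, "
      f"~{mbits_per_sec:.1f} Mbit/s average link throughput")
```

Under these assumptions, a modest long-distance link suffices, which is why asynchronous replication avoids the dedicated high-bandwidth networks that continuous synchronous replication typically requires.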
IBM has a multi-cloud strategy for IBM i, so it is important to understand IBM i licensing
as part of the journey to the cloud when moving IBM i VMs from on-premises to off-premises,
considering both the base operating system and the Licensed Program Products (LPPs).
As announced in AD24-0492, dated 12 March 2024, starting 7 May 2024, the IBM i Processor
and User Transfer offering will change for clients who acquire new specific IBM Power10
processor-based servers. This affects the IBM i P05 and P10 software tiers on the following
machine types:
– IBM Power S1014 (9105-41B) 4-core processor (P05 software tier)
– IBM Power S1014 (9105-41B) 8-core processor (P10 software tier)
– IBM Power S1022 (9105-22A) (P10 software tier)
– IBM Power S1022s (9105-22B) (P10 software tier)
When purchasing these machine types, the following IBM i licensing options are available on
the machine:
– Acquire or renew IBM i Subscription Term licenses.
– Convert IBM i non-expiring licenses to IBM i Subscription Term licenses at a lower-priced
Subscription option. Transfer of IBM i non-expiring licenses is not supported. IBM now
provides conversion pricing features for this lower-priced Subscription option.
Software tiers are determined by the server model. Power Virtual Server uses S922/S1022
systems, which are in the P10 tier, and E980/E1080 systems, which are in the P30 tier. A
customer on P05 hardware on-premises would have to license at a higher tier in Power
Virtual Server.
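The tier mapping described above can be captured in a small lookup. This is a sketch limited to the models named in this section; the authoritative source is the IBM i software tier table, not this dictionary:

```python
# Software tier by server model, limited to the models mentioned in this
# section. The authoritative mapping is IBM's software tier table.
SOFTWARE_TIER = {
    "S1014-4core": "P05",
    "S1014-8core": "P10",
    "S1022": "P10",
    "S1022s": "P10",
    "S922": "P10",    # Power Virtual Server scale-out systems
    "E980": "P30",    # Power Virtual Server enterprise systems
    "E1080": "P30",
}

def tier_change(on_prem_model: str, powervs_model: str) -> str:
    """Describe the tier move when migrating to Power Virtual Server."""
    src, dst = SOFTWARE_TIER[on_prem_model], SOFTWARE_TIER[powervs_model]
    note = " (higher tier, relicense)" if dst > src else ""
    return f"{src} -> {dst}{note}"

print(tier_change("S1014-4core", "S922"))  # prints: P05 -> P10 (higher tier, relicense)
```

The P05 case is the one flagged in the text: a customer on a 4-core S1014 on-premises moves up to at least the P10 tier on Power Virtual Server scale-out systems.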
Customers moving to Power Virtual Server receive a partition on a system with a different
serial number than their on-premises system; there is no way to preserve an on-premises
serial number. Power Virtual Server has a pinning option that can be used to try to keep a
partition on the system against which the software is licensed. However, this means that
Power Virtual Server cannot dynamically move the partition when planned or unplanned
maintenance occurs, so an outage must be scheduled.
On the future road map Virtual Serial Number (VSN) support is planned, which would allow
partitions to move around in Power Virtual Server and maintain the same serial number.
In general, IBM software acquired through Passport Advantage (PPA) can be brought to
Power Virtual Server. Examples include IBM WebSphere Application Server (WAS),
WebSphere® MQ, Db2 Connect, and Lotus Notes.
IBM i 7.1 went out of service on 1 May 2024 and is no longer supported in Power Virtual
Server.
Each LPP in the package includes the base product and all of its optional features. For
example, the 5770-BR1 solution includes the base product, the Network feature, and the
Advanced feature.
Other LPPs are available for IBM i, which can be included in your VM instance. To include one
or more LPPs, complete the following steps:
1. Go to Virtual server instances in the Power Systems Virtual Server user interface and click
your instance.
2. Click the Edit details option in the Server details window. A menu appears.
3. Select the required licenses that you want to include in your VM instance. As of this
writing, you can purchase the following licenses through Power Systems Virtual Server:
– IBM i Cloud Storage Solutions (5773-ICC).
– IBM i Power HA (5770-HAS).
– IBM DB2® Web Query for i Standard Edition (5733-WQS).
– IBM Rational Development Studio for i (5770-WDS).
Any size of IBM Cloud Storage Solutions for i on IBM Power Systems Virtual Server has
the following list prices:
– Monthly: $71 US per core, per month
– Perpetual: $3,400 US per partition
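Given these list prices, a simple break-even check shows roughly when the perpetual license overtakes the cumulative monthly charge. The sketch assumes a single-core partition so that the per-core monthly price and the per-partition perpetual price are directly comparable; that assumption is made only for illustration:

```python
import math

# List prices from this section. Assumes a one-core partition so the
# per-core monthly price and per-partition perpetual price line up.
monthly_per_core_usd = 71
perpetual_per_partition_usd = 3400

breakeven_months = perpetual_per_partition_usd / monthly_per_core_usd
print(f"Perpetual pays off after ~{math.ceil(breakeven_months)} months "
      f"({breakeven_months:.1f} exactly)")
```

On these assumptions the crossover lands around four years, so shorter-lived or multi-core workloads may favor the monthly option; check current IBM pricing before deciding.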
IBM i PowerHA
IBM i PowerHA geographic mirroring offers a straightforward, cost-effective, high availability
(HA) solution for the small to mid-sized client. Typically, PowerHA geographic mirroring is
used with internal disk storage, and it provides an alternative to solutions that require the
extra configuration and management that are associated with an external storage device.
If the systems are in different locations, it also can provide protection in the event of a
site outage; that is, disaster recovery (DR). Geographic mirroring refers to the IBM i
host-based replication solution that is provided as a function of IBM PowerHA SystemMirror
for i.
For more information, see IBM PowerHA SystemMirror for i: Using Geographic Mirroring
(Volume 4 of 4), SG24-8401.
Use scenarios
In Scenario 1 (see Figure A-2 on page 299), a customer can use IBM i operating system
capabilities to set up a DR backup site in Power Virtual Server. The geographic mirroring
capability of IBM i PowerHA allows for IBM i-to-IBM i connectivity across distances. This
option applies only to customers that use independent ASPs (IASPs).
In Scenario 2 (see Figure A-3), a customer can deploy across IBM Cloud regions. For
example, PowerHA Enterprise Edition can be used to enable DR between Washington and
Dallas.
Important: IBM PowerHA for i (5770-HAS) Enterprise Edition is available only in IBM
Power Systems Virtual Server on IBM Cloud.
By using Db2 for i advanced query optimization technologies, Db2 Web Query provides an
on-premises or IBM Cloud solution for modernizing traditional reporting.
For example, if in one virtual server the client has two developers during shift 1 and two
different developers during shift 2, the user count is 4.
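Because an entitlement is unique to a user, the required entitlement count is the number of distinct users across shifts, not the sum of seats per shift. A sketch with hypothetical user names:

```python
# Per-user licensing: an entitlement is unique to a user and cannot be
# shared, so the same person working both shifts still counts once.
# User names are hypothetical.
shift1 = {"dev_anna", "dev_bob"}
shift2 = {"dev_carla", "dev_dinh"}

# Distinct users across all shifts = entitlements required.
required_entitlements = len(shift1 | shift2)
print(required_entitlements)  # prints: 4

# If dev_bob also worked shift 2, the count would drop to 3.
overlap_case = len(shift1 | {"dev_bob", "dev_carla"})
print(overlap_case)  # prints: 3
```

The set union mirrors the license terms quoted below: entitlements follow people, not devices or shifts.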
Per terms
In each virtual server, a user can start the program from any number of devices (such as
terminals, PCs with terminal emulators, or PCs with IDEs). The licensee must obtain
separate, dedicated entitlements for each user who accesses the program directly or
indirectly (for example, by way of a multiplexing program, device, or application server)
through any means. An entitlement for a user is unique to that user and cannot be shared or
reassigned other than for the permanent transfer of the authorized entitlement to another
user.
For IBM Rational Development Studio for i (5770-WDS), the per-user price includes all of
the following features:
ILE Compilers (feature 5101 of 5770-WDS):
– ILE RPG
– ILE RPG *PRV
– ILE COBOL
– ILE COBOL *PRV
– ILE C
– ILE C++
– ILE IXLC for C/C++
Heritage Compilers (feature 5102 of 5770-WDS):
– S/36 Compatible RPG II
– S/38 Compatible RPG II
– RPG/400
– S/36 Compatible COBOL
– S/38 Compatible COBOL
– COBOL/400
Application Development ToolSet (ADTS) (feature 5103 of 5770-WDS):
– Source Entry Utility (SEU)
– Screen Design Aid (SDA)
– Report Layout Utility (RLU)
– Programming Development Manager (PDM)
User licensing information and compiler features are listed in Table A-1 on page 301.
Includes three features (all compilers) | Per-user pricing | $120 US per user | $3,885 US
per user ($11,655 US for three users)
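The three-user figure in Table A-1 follows directly from the per-user price, which is worth verifying when budgeting for a development team:

```python
# Per-user price for 5770-WDS from Table A-1; multiplying by the user
# count reproduces the three-user total shown in the table.
per_user_usd = 3885
users = 3
total = per_user_usd * users
print(f"${total:,} US for {users} users")  # prints: $11,655 US for 3 users
```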
Note: If you lift and shift an IBM i VM from on-premises to off-premises and you licensed
one of the extra LPPs, these licenses do not work on IBM Power Systems Virtual Server.
Therefore, you must order a license, approve the service agreement, and then click
Save edits and order to complete the instance modification process and accept the price.
One example of an IBM i program that is acquired by way of Passport Advantage is Rational®
Developer for i (RDi). For RDi, you can bring your current RDi license to the Power Systems
Virtual Server offering whether RDi was obtained through the Power Systems hardware
channel or through Passport Advantage.
If you do not have an RDi license and need the product, obtain a license by using Passport
Advantage. Then, bring that license to the Power Systems Virtual Server offering.
Note: Other examples of IBM software that is acquired by way of Passport Advantage are:
IBM WebSphere MQ
Db2 Connect
Lotus Notes
For more information, see this IBM Passport Advantage web page.
Important: Consider these averages as guidelines only; they were taken from tests and can
vary in your scenario or situation.
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topics in
this document. Note that some publications referenced in this list might be available in
softcopy only.
IBM Power Systems High Availability and Disaster Recovery Updates: Planning for a
Multicloud Environment, REDP-5663
IBM PowerHA SystemMirror for i: Preparation (Volume 1 of 4), SG24-8400
IBM PowerHA SystemMirror for i: Using DS8000 (Volume 2 of 4), SG24-8403
IBM PowerHA SystemMirror for i: Using IBM Storwize (Volume 3 of 4), SG24-8402
IBM Power Security Catalog, SG24-8568
You can search for, view, download or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Online resources
The following websites also are relevant as further information sources:
IBM Support portal:
https://www.ibm.com/support/home/
Power Systems Virtual Server FAQ:
https://cloud.ibm.com/docs/power-iaas?topic=power-iaas-power-iaas-faqs
Power Systems Virtual Server getting started:
https://cloud.ibm.com/docs/power-iaas?topic=power-iaas-getting-started
Power Systems Virtual Server network architecture diagrams:
https://cloud.ibm.com/docs/power-iaas?topic=power-iaas-network-architecture-diagrams
Megaport ordering considerations:
https://cloud.ibm.com/docs/dl?topic=dl-megaport
Megaport portal:
https://portal.megaport.com/login
PowerHA SystemMirror for i Product Information:
https://helpsystemswiki.atlassian.net/wiki/spaces/IWT/pages/163577866/Welcome+to+PowerHA+SystemMirror+for+i
SG24-8513-01
Printed in U.S.A.
ibm.com/redbooks