On-Premise to Cloud Migration through HSR (Lift & Shift Approach)
Sudhansu Swain / SAP BASIS HANA Lead Consultant / 28.07.2024
STEPS TO MIGRATE SAP APPS AND HANA DB TO CLOUD
INTRODUCTION
Using the Lift and Shift approach, the SAP application and database are migrated from on-premise to the cloud. The scope and impact of this migration method extend beyond its apparent simplicity, encompassing both IT and business domains. 'Lift and Shift' is a topic woven between cloud adoption strategy, digital transformation, and innovation.
Several tools and methods are available for SAP migration to the cloud: CloudEndure (AWS), Azure Site Recovery (ASR), backup & restore, export and import, and DB replication methodologies. The tools we've used are the open-source tool rsync for application migration and HANA System Replication (HSR) for the database.
Key notes: This is an AS-IS or Lift & Shift migration. No changes are made on the source end, and we assume the DB was already installed on AWS/Azure/GCP.
What moves, what changes, what not
Migration to the cloud is an opportunity for on-prem teams, especially business teams, to understand that business design and processes are not changing; the environment, however, is. The environment is made up of physical servers, operating system, network, storage, firewall, central IT components and tools, etc. SAP's database and application are tightly integrated with the environment.
The database moves, the application is mostly recreated at the target, and the same goes for the environment (target components differ as per the chosen cloud provider - that's a different topic).
The picture below depicts what changes and what does not.
Lift and Shift Migration in SAP
In SAP environments, one typical cloud migration strategy adopted is 'Lift and Shift' (also called rehosting). Generally, a large part of the focus remains on bringing the database and applications to the target. Database-native and SAP-native methods are evaluated for the database and application. This matches the lift-and-shift/rehosting definition and scope. However, in the migration process, the environment gets changed, as it must be recreated at the target. This does not change business processes directly, but indirect impacts are common.
In other words, attention to the environment should be part of replatforming, where the As-Is is first discovered and then adapted to the To-Be (based on the cloud provider's components). This cycle is when a natural transformation happens, and some cloud advantages come by default.
The environment on the cloud side is different in terms of both design and building blocks. The design goes through an architect's lens. Most cloud providers have reference architectures based on best practices and robust design principles, e.g., different landing zones for the common platform and applications. These design patterns are mostly future-ready, and the mapping from As-Is to To-Be brings digital transformation and cloud-based innovation to the forefront. This is the first sweet spot of cloud adoption and innovation, and where business and IT teams can take an interest in the design and building blocks at the target. This all happens quietly under the 'Lift and Shift' umbrella. This is where the term 'Lift and Shift' is insufficient to convey its true scope and impact.
Example
Let's look at an example environment below and understand why it gets changed and how it impacts the application.
Consider an on-prem S/4 system on a HANA database with SUSE that has to be moved to Azure/AWS. The following are a few example components which may go through transformation:
Component: Source Database/Application Version
Description: The database/application may not be compatible with the target cloud.
Technical Impact: An upgrade would be needed, impacting the scope and timelines of the project.
Business Impact: The upgrade may change how business processes/UI behave.

Component: High Availability
Description: Cloud HA implementation depends on many cloud-provider-specific technical components.
Technical Impact: The design would generally be changed, and so would the maintenance/options/SLAs.
Business Impact: Application availability may be impacted - should be positive ideally. Maintenance methods would change positively too.

Component: Disaster Recovery
Description: Cloud providers have many innovative options/offerings available for DR.
Technical Impact: These should be explored in time to align with business/technical requirements.
Business Impact: Application availability may be impacted - should be positive ideally. Maintenance methods would change positively too.

Component: Network Ports
Description: How ports are opened on-prem and how they are opened at the target would be different. The method would differ based on the cloud provider.
Technical Impact: The change may impact many jobs and interfaces, causing them not to run.
Business Impact: Related business processes may not work at all until ports are rightly opened.

Component: Internet-facing scenarios
Description: On-prem may have a Web Dispatcher/hardware load balancer; the target may have a software load balancer, and its placement would mostly be in the landing zone.
Technical Impact: The placement change, LB type, and change in the number of hops would need proper testing.
Business Impact: Scenarios using WD/LB should be thoroughly tested for impact and resolution.

Component: Storage Types
Description: On-prem would be a stable environment tuned over years; the target cloud storage may need to be tuned to reach right-sized storage.
Technical Impact: Various options are available at the target, and a right-sized solution should be chosen from the beginning.
Business Impact: An appropriate solution and configuration are necessary to avoid wider performance bottlenecks.

Component: Logging and Monitoring
Description: Traditional methods of log analysis and monitoring can have alternate and innovative solutions from the cloud provider.
Technical Impact: The extractors should work seamlessly.
Business Impact: The extractors should work seamlessly.

Component: Security
Description: Irrespective of on-prem status, the target may ideally enable SSL, encryption, and other security hardening at the database, application, OS, and network layers.
Technical Impact: This needs thorough analysis, planning, and testing.
Business Impact: New or changed security measures may impact functionality which was working before.

Component: Move to SaaS products/tools
Description: The DNS server can be replaced with DNS services; Active Directory can be replaced with AD services.
Technical Impact: The setup/config may or may not have an impact on name resolution, SSO, etc.
Business Impact: It would have an indirect impact on application/availability.
STEPS FOR MIGRATION:
Pre-steps:
1. Once the application and database VMs are built, please make sure the respective RPMs are installed
2. Reference SAP Notes: 2455582 - Linux: Running SAP applications compiled with GCC 6.x, and 2777782 - SAP HANA DB recommended settings for RHEL 8
3. Hostnames (the host name used during DB installation) should be different between the source and target DBs
4. The SID and instance number for both DB and application should be the same
5. Obtain <sid>adm / sudo root user and password information for the source
6. Obtain SAP* and DDIC access to the source system
7. For the HANA database, obtain the passwords of the users SYSTEM (System DB and Tenant DB), SCHEMA, DBACOCKPIT and any other database admin user from the Tenant DB
8. List the installed plug-ins in the source DBs
9. Take the necessary screenshots of the source system, e.g., STMS, SMLG, RZ12, certificates, license, etc.
10. Take an export of printers, users, and RFCs to be on the safe side.
TARGET System Preparation:
1. Prepare the build sheet for all SAP component systems
2. Build the VM systems with OS version 8.x on AWS/Azure
3. Keep the UID & GID the same between on-prem and AWS/Azure
4. Verify the SAP-recommended packages, kernel parameters, FS, memory & CPU on both Apps and DB nodes
5. Verify that the required installation packages are available (Database, Application, DAA, SWPM and Kernel)
6. Database server checks/readiness (storage, OS, networks, file shares) on the target server
7. Install HANA 2.0 SP05 revision 56 (same as source in our case) with the required plugins and run the hardware check tool; ensure that the target SID matches the source SID and instance number
8. Please note that HANA replication supports replicating from a lower version (source) to a higher version (target AWS/Azure)
9. Establish connectivity between the database server and the application server on AWS/Azure
10. Check the list of IP addresses at the source and allow the necessary ports in the AWS/Azure firewall for inbound requests
11. Ports 4nn00 - 4nn99 need to be allowed for HSR between the source and target DBs
12. Port 3298 can be allowed to check with niping
13. Run the niping client to test inbound requests. From Source: ./niping -c -H <target IP> -B 2000000 -L
14. In Target, start the niping server: ./niping -s -I 0 -T trace_niping &
15. Configure HSR replication from the on-prem node to the AWS primary node
16. Share the .dat and .key files from the source DB and the list of parameters that need to be set in the AWS/Azure DB
17. Stop the DB in AWS/Azure
18. Take a backup of the existing .dat and .key files, replace them with the source DB .dat and .key files, and maintain the above-mentioned parameters
19. Maintain the source DB IP/hostname in the /etc/hosts file in the AWS/Azure DB
20. Start the DB in AWS/Azure
21. Enable HSR replication from the on-prem node to the AWS primary node
22. Monitor the replication (full and delta completion)
23. Take over the AWS/Azure primary DB to normal mode and do the health check
24. Update the DB license
25. Create all OS-level application users (<sid>adm, daaadm, sapadm) with the proper group IDs (sapsys, sapinst), same as the source systems, on the target AWS/Azure nodes
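For step 25, a minimal sketch of the user and group creation is shown below. The numeric UIDs/GIDs and the shells are illustrative assumptions only; always reuse the exact values from the source system (e.g., from the output of id <sid>adm and getent group sapsys there):

# example values only - copy the real UIDs/GIDs from the source system
groupadd -g 79 sapsys
groupadd -g 1001 sapinst
useradd -u 1002 -g sapsys -G sapinst -d /home/<sid>adm -s /bin/bash <sid>adm
useradd -u 1003 -g sapsys -d /home/daaadm -s /bin/bash daaadm
useradd -u 1004 -g sapsys -d /home/sapadm -s /bin/false sapadm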
Application sync from Source to Target using Rsync
Sync the following file systems from on-prem to the AWS/Azure application server:
/usr/sap/
/sapmnt/<SID>
/usr/sap/DAA
/home/<SID>adm
/home/daaadm
/home/sapadm
NFS mount points:
/usr/sap/trans
/sap/Interface
Post checks
1. Sync the file system permissions to be the same as the source system
2. Update the hdbuserstore in all application servers and verify application-to-database connectivity on the application host using R3trans -d
3. Bring up the application server and do a sanity check
4. Adapt profile parameters for the application and DB if required and restart the SAP application
5. Perform Post migration steps for <SID> Application and Database
6. Check system consistency using SICK
7. Perform Post Installation Action for Transport Organizer in SE06
8. Import profiles of active servers in RZ10
9. Generate system loads in SGEN
10. Modify the GUI initial screen in SE61 if required
11. Install DAA agents in all the servers
12. Application monitoring to be configured in Solution Manager and confirmed
13. Shutdown the application servers
DATABASE READINESS CHECKS:
1. Take an FS screenshot of all source systems
2. Check that no other backup was triggered after the full online backup for the HANA DB & application (if applicable) for <SID>, and stop all scheduled backups and BTC jobs with BTCTRNS1
3. Check the replication sync status
4. Stop batch jobs
5. Lock users and stop the application servers
6. Wait for delta replication completion (point-in-time data)
7. Take over the AWS/Azure HANA DB from replication mode to normal mode: run the command 'hdbnsutil -sr_takeover' on the AWS/Azure DB
8. Once the takeover is completed, disable the replication on the AWS/Azure DB and disable replication on the source DB (hdbnsutil -sr_disable)
9. Rename the DB hostname to the same alias hostname as the source DB
10. Include the DB/App host in the domain along with the source system alias name
11. Stop the DB node in AWS/Azure
12. Revert all the parameters in global.ini which were added during HSR configuration
13. Revert both SSFS files from current to original (from installation time) in the AWS/Azure DB
14. Start the DB node in AWS/Azure
15. Start the SAP application
16. Enable the replication on the AWS/Azure DB between Node A and Node B
17. Verify the license on the DB and application and perform sanity checks
18. Configure and run the backup and verify its completion.
IMPORTANT NOTE:
If there is still some data to be synced on the applications, please re-run rsync (delta sync), then:
1. Perform a DNS switch so that the respective hostnames point to the AWS/Azure IPs
2. Start the database and applications on the target.
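A quick way to verify the DNS switch from an application host or client (hostnames are placeholders):

nslookup <alias-hostname>    # should now resolve to the AWS/Azure IP
ping -c 1 <alias-hostname>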
Let's understand a few things about rsync and csync in Linux
Rsync offers a reliable method of transmitting only the changes within files.
Example:
Suppose two jobs are scheduled in crontab, at 10:00 AM and 11:00 AM (a one-hour interval), and 'newfolder' is the folder that needs to move from the source system to the target system. At 10:00 AM, newfolder consists of 4 files; at 10:35 AM a new file is created, so the total number of files in newfolder is now 5. When the crontab job runs at 11:00 AM, only the 5th file is moved from the source system to the target system by rsync.
This applies not only to text files but also to binary files, since rsync uses checksums to detect the differences between files.
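As an illustration of the schedule described above, the two crontab entries could look like this (paths, user, and host are hypothetical):

0 10 * * * rsync -avz /data/newfolder/ tux@target-host:/data/newfolder/
0 11 * * * rsync -avz /data/newfolder/ tux@target-host:/data/newfolder/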
A few important facts about rsync:
1. Rsync can be particularly useful when large amounts of data containing only minor changes need to be transmitted regularly.
2. Rsync is a tool that copies data in only one direction at a time (from the source to the target system only).
3. If you need a bidirectional tool which is able to synchronize both source and destination, use csync.
4. The SOURCE and DEST placeholders can be paths, URLs, or both.
5. When working with rsync, you should pay particular attention to trailing slashes. A trailing slash after the directory denotes the content of the directory; no trailing slash denotes the directory itself, as the example below shows.
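For example, with a hypothetical directory and target host:

rsync -avz /data/newfolder tux@target-host:/backup/     # copies the directory itself, creating /backup/newfolder/
rsync -avz /data/newfolder/ tux@target-host:/backup/    # copies only the contents of the directory into /backup/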
Prerequisites:
• The current user has write permissions to the directory on the target system.
• Ensure that the rsync package is already installed on both the source and target systems:
yum install rsync
Different rsync commands:
1. Copying files within the system to a different directory:
source-dir> rsync -avz backup.tar.gz /var/backup/
Here:
• source-dir is the source directory where backup.tar.gz (the file that needs to move to the target) exists.
• -avz:
-a Archive mode; copies files recursively and preserves timestamps, user/group ownership, file permissions, and symbolic links.
-v Outputs more verbose text.
-z Compresses the transmitted data.
• backup.tar.gz is the file which needs to move to the target.
• /var/backup/ is the target directory.
2. To sync directories:
tux > rsync -avz tux /var/backup/
Here, tux is the directory to sync.
3. Copying files and directories remotely
Prerequisites for the remote system:
• The rsync tool is required on both machines.
• Copying files from or to remote directories requires an IP address or a domain name.
• A user name is optional if your current user names on the local and remote machine are the same.
• Make sure the target server user used in rsync has write access.
Command:
tux > rsync -avz file.tar.xz <user>@<remote-host>:/path/in/the/destination/server
Configuring and Using an Rsync Server
Rsync can run as a daemon (rsyncd) listening on the default port 873 for incoming connections. This daemon can receive "copying targets".
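Below is a minimal /etc/rsyncd.conf sketch; the module name 'backup' and its path are illustrative assumptions, not values from this migration:

# /etc/rsyncd.conf - minimal example (assumed values)
[backup]
    path = /var/backup
    comment = rsync copying target
    read only = false

After starting the daemon (systemctl start rsyncd), a client can push a file into the module, for example:
rsync -avz backup.tar.gz rsync://<target-host>/backup/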
Let's understand High Availability at the physical layer
The physical layer is basically beyond regular BASIS tasks, but we need to consider this topic at least briefly.
Some examples of High Availability on the physical layer:
SAP HANA DB physical server with two converged network cards, each network card having two physical ports (for example, Ethernet and Fiber)
(POF - Network card failure, Cable failure)
Preparing for a potential Point of Failure (POF): Fault-tolerant network infrastructure
Switch 0 and Switch 1
(POF - Switch failure)
Preparing for a potential Point of Failure (POF): Fault-tolerant network infrastructure
Data Storage
(POF - Disk(s) failure)
Preparing for a potential Point of Failure (POF): Fault-tolerant disk storage
High Availability on SAP Application layer
Consider a typical SAP configuration for High Availability:
PAS - AAS - ASCS - ERS - SAP HANA DB (Primary) - SAP HANA DB (Secondary)
What Are SAP PAS, AAS, and ASCS?
PAS stands for primary application server.
AAS stands for additional application server.
ASCS stands for ABAP Central Services. It is the core SAP application service and consists of the following servers:
• Message Server: works as a load balancer. All user requests are first processed by the message server and
then distributed to each SAP application server.
• Enqueue Server: manages a lock table. To prevent different operations from modifying a record at the same
time, the table is locked to ensure data consistency.
The differences between PAS and AAS: The PAS contains the ASCS, but an AAS does not. In a system, there is only
one PAS, but there can be multiple AASs. The number depends on the service requirements.
If any problem occurs in the ASCS, the entire SAP system breaks down. Therefore, adopt the HA architecture for the
ASCS.
Step 1:
Preparing for a potential Point of Failure (POF): AAS/PAS failure
Preparation:
1. Create SAP Logon group
2. Preparation from frontend side
Step 1.1. Create SAP Logon group
Transaction: SMLG
In this example, instances 33 and 34 (PAS/AAS) are bound in the logon group "Common_group":
Instance 33 - available (status green)
Instance 34 - unavailable, but the SAP system is still available for connecting users
Entry point: Message Server
Transaction: SMMS
To avoid a 'message service id unknown' issue, please add a new entry to the file
C:\Windows\System32\drivers\etc\services
on the front-end side (SAP Logon / MS Windows OS).
Attention: the last line in the file must be empty.
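The entry follows the pattern sapms<SID> 36<nn>/tcp, where <nn> is the instance number of the message server. For example, assuming SID PRD and instance number 00:

sapmsPRD    3600/tcp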
Configuring Logon Groups: https://help.sap.com/doc/saphelp_nw75/7.5.5/en-US/20/330e506567031de10000000a44538d/frameset.htm
Step 1.2. Preparation from frontend side
SAPUILandscape.xml (configuration file for SAP Logon 7.5 and higher):
<Service type="SAPGUI" uuid="Any guid" name="Your name here" systemid="Your SID here" server="Common_group" sncop="-1" sapcpg="1100" dcpg="2" msid="Any guid"/>
<Messageservers><Messageserver uuid="Any guid" name="Your SID" host="ASCS FQDN"/></Messageservers></Landscape>
Step 2:
Preparing for a potential POF: ASCS/ERS failure
The scenario for RHEL: Configure SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2
https://access.redhat.com/articles/3974941
The scenario for SLES:
https://www.suse.com/media/white-paper/sap_netweaver_availability_cluster_740_setup_guide.pdf
High Availability on DB layer
Step 3:
Preparing for a potential POF: SAP HANA DB failure
SAP HANA system replication provides the possibility to copy and continuously synchronize a SAP HANA database to a secondary location in the same or another data center.
Prerequisites:
1. SAP HANA DB (Secondary) must have the same system number and SID as SAP HANA DB (Primary)
2. SAP HANA DB (Primary) and SAP HANA DB (Secondary) must have the same version number and the same set of add-ons
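A quick way to verify the version prerequisite is to run the following as <sid>adm on both primary and secondary and compare the output (a simple check only; the installed plugin/add-on sets still need to be compared separately):

su - <sid>adm
HDB version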
Steps to perform HSR between source and Target:
Assuming that the HANA database is already installed on AWS/Azure with the same version as the source.
(HSR replication from a lower source version to a higher target version works, but not the other way around; higher-to-lower replication does not work.)
Steps:
1) Enable replication from Source DB
hdbnsutil -sr_enable --name=SourceA
2) Verify the state of the primary server
hdbnsutil -sr_state
3) Secondary (DB on AWS/Azure/GCP)
Check HANA is stopped
4) Secondary side: register (DB on AWS/Azure/GCP)
hdbnsutil -sr_register --name=TargetB --remoteHost=sourceDBHost --remoteInstance=$$ --replicationMode=async --operationMode=logreplay
5) Start HANA on the secondary
sapcontrol -nr $$ -function StartSystem HDB
6) Verify cluster
hdbnsutil -sr_state
7) Check the replication status via HANA Studio or the Python script
8) Unregister replication on the target database on AWS/Azure
hdbnsutil -sr_unregister
9) Stop the database and applications on-premise
stopsap all
HDB stop
or
sapcontrol -nr 00 -function StopSystem
10) Start the HANA database on AWS/Azure/GCP
sapcontrol -nr $$ -function StartService <SID>
sapcontrol -nr $$ -function StartSystem
Steps to perform RSYNC between source and target:
Sudo to root and execute the commands below.
rsync -avhz --partial /home/<SID>/ <sid>adm@<Target IP>:/home/<sid>adm/
rsync -avhz --partial /usr/sap/<SID>/ <sid>adm@<Target IP>:/usr/sap/<SID>/
rsync -avhz --partial /sapmnt/<SID>/ <sid>adm@<Target IP>:/sapmnt/<SID>/
rsync -avhz --partial /usr/sap/DAA/ <sid>adm@<Target IP>:/usr/sap/DAA/
rsync -avhz --partial /home/daaadm/ <sid>adm@<Target IP>:/home/daaadm/
-a, --archive: archive mode, equivalent to -rlptgoD. This option tells rsync to sync directories recursively, transfer special and block devices, and preserve symbolic links, modification times, groups, ownership, and permissions.
-z, --compress: forces rsync to compress the data as it is sent to the destination machine. Use this option only if the connection to the remote machine is slow.
-v, --verbose: increase verbosity
-h, --human-readable: output numbers in a human-readable format
--progress: show progress during transfer
1. ASCS & ERS host ==> requires the following file systems:
/home/<sid>adm
/usr/sap
/usr/sap/DAA
/usr/sap/<SID>/ASCS60
/usr/sap/<SID>/ERS70
/sapmnt/<SID> ==> common mount exists across all SAP Servers
2. Validate the FS on the target server and verify the permissions (<sid>adm:sapsys) after the sync in /usr/sap.
3. Verify the hosts file, maintain /etc/hosts on the DB, ASCS, ERS, and App hosts, and hard-code the VIP hostname with the physical host IP.
4. Append the entries to the file /etc/services
5. Change the environment variable files in all app servers if the <SID> is different:
Go to /home/<sid>adm
ls -alt
egrep <SID> .sap*   (change to the target <SID> if the SID is different)
chown -R <sid>adm:sapsys .sap*
chown -R <sid>adm:sapsys *
Check the home path for <sid>adm and daaadm
Edit /usr/sap/sapservices accordingly (see the sample entry below)
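For reference, a typical /usr/sap/sapservices entry has the following shape; the instance name ASCS00 and all bracketed values are placeholders to be verified against your own file:

LD_LIBRARY_PATH=/usr/sap/<SID>/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/<SID>/ASCS00/exe/sapstartsrv pf=/usr/sap/<SID>/SYS/profile/<SID>_ASCS00_<hostname> -D -u <sid>adm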
Make sure the symlinks below are working. If they do not exist, please create them (a sketch follows the listing):
lrwxrwxrwx 1 sidadm sapsys 19 Nov 9 18:49 profile -> /sapmnt/<SID>/profile
lrwxrwxrwx 1 sidadm sapsys 18 Nov 9 18:49 global -> /sapmnt/<SID>/global
/usr/sap/<SID>/SYS/exe
lrwxrwxrwx 1 sidadm sapsys 18 Nov 9 18:50 uc -> /sapmnt/<SID>/exe/uc
lrwxrwxrwx 1 sidadm sapsys 19 Nov 9 18:55 nuc -> /sapmnt/<SID>/exe/nuc
lrwxrwxrwx 1 sidadm sapsys 30 Nov 9 18:56 dbg -> /sapmnt/<SID>/exe/uc/linuxx86_64
lrwxrwxrwx 1 sidadm sapsys 24 Nov 9 18:56 run -> /usr/sap/<SID>/SYS/exe/dbg
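If any of the links above are missing, a minimal sketch to recreate them (run as <sid>adm, with the paths taken from the listing above):

cd /usr/sap/<SID>/SYS
ln -s /sapmnt/<SID>/profile profile
ln -s /sapmnt/<SID>/global global
cd /usr/sap/<SID>/SYS/exe
ln -s /sapmnt/<SID>/exe/uc uc
ln -s /sapmnt/<SID>/exe/nuc nuc
ln -s /sapmnt/<SID>/exe/uc/linuxx86_64 dbg
ln -s /usr/sap/<SID>/SYS/exe/dbg run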
Rsync sync-completion sample screenshot for /sapmnt/<SID>
Check the HDB user store and update it with the required details, then start the database:
hdbuserstore SET DEFAULT <DB hostname>:35015 <schema> <password>
sapcontrol -nr 00 -function StartService <SID>
sapcontrol -nr 00 -function Start
R3trans -d should return (0000)
Start the app server(s):
sapcontrol -nr $$ -function StartService <SID>
sapcontrol -nr $$ -function StartSystem
Please note that Start will start the local instance whereas StartSystem will start all instances.
See the link below for how to install and configure rsync in case it is not present at the OS level:
https://linuxize.com/post/how-to-use-rsync-for-local-and-remote-data-transfer-and-synchronization/
SSFS keys Preparation for replication:
Copy the system PKI SSFS key and dat files from the primary site to the secondary site.
The files can be found at the following locations:
/usr/sap/<SID>/SYS/global/security/rsecssfs/data/SSFS_<SID>.DAT
/usr/sap/<SID>/SYS/global/security/rsecssfs/key/SSFS_<SID>.KEY
We can use the rsync utility for this purpose:
rsync -arv /usr/sap/<SID>/SYS/global/security/rsecssfs/* root@secondary_hostname:/usr/sap/<SID>/SYS/global/security/rsecssfs/
Link: 2369981 - Required configuration steps for authentication with HANA System Replication - https://launchpad.support.sap.com/#/notes/2369981
SYSTEM REPLICATION IN HANA STUDIO
From SystemDB side
Go to Configuration and Monitoring > Configure System Replication > Enable system replication
SETTING FOR ABAP AND JAVA SYSTEMS
All ABAP/JAVA systems in the landscape must know about SAP HANA DB (Primary) and SAP HANA DB (Secondary).
For ABAP systems:
For this purpose, log on as the <sid>adm OS user and add a new key to the hdbuserstore:
hdbuserstore SET DEFAULT "hostname_primary:3<instance>13;hostname_secondary:3<instance>13@<SID>" SAPABAP1 <password for SAPABAP1, usually the master password>
Log on again as <sid>adm and check the hdbuserstore:
hdbuserstore LIST DEFAULT
For JAVA systems:
Launch configtool
/usr/sap/<SID>/JC##/j2ee/configtool/configtool.sh
## - Instance ID
Secure store
Connection Pools
URL: jdbc:sap://hostname_primary:3<instance>13;hostname_secondary:3<instance>13?databaseName=<SID of the Java system>
User: SAPJAVA1
Takeover in Hana Studio
SAP HANA DB (Secondary) - SystemDB
Configuration and Monitoring > Configure System Replication > Perform takeover
Shutdown SAP HANA DB (Primary)
N.B. A scenario with two primary instances is unacceptable. After takeover, the ex-primary instance must be shut down.
Takeover in Hana Cockpit
SAP HANA DB (Secondary)
System Replication Overview
Take Over
Shutdown SAP HANA DB (Primary)
N.B. A scenario with two primary instances is unacceptable. After takeover, the ex-primary instance must be shut down.
Replication monitoring from Hana Cockpit and CLI
CLI:
su - <sid>adm
hdbnsutil -sr_state
python $DIR_INSTANCE/exe/python_support/systemReplicationStatus.py
Replication overview check in Hana Cockpit:
From SystemDB side
System Replication Overview
Notable information about the secondary system:
Password for SYSTEM/SAPABAP1 for SYSTEMDB - the master password which was provided during installation.
Password for SYSTEM/SAPABAP1 for the tenant - the password from the tenant on the primary database.
After replication, the secondary system is accessible only with the <sid>adm user.
We can register the secondary system in HANA Cockpit with the <sid>adm OS user credentials.
Cluster commands used frequently:
1. How to check the HA configuration?
crm configure show
2. How to check HA status?
crm status
3. How to put an HA node into maintenance mode?
crm node maintenance <hostname>
4. How to manually start a specific HA node resource?
crm resource start <resource name> <hostname>
5. How to clean up old configuration files?
crm resource cleanup <resource name> <hostname>
6. How to disable maintenance mode in HA?
crm configure property maintenance-mode=false
7. How to disable the STONITH service in the cluster?
crm configure property stonith-enabled=false
8. How to Start Cluster in HA?
crm cluster start
9. How to Stop Cluster in HA?
crm cluster stop
10. How to Restart Cluster?
crm cluster restart
11. How to see cluster Status?
crm cluster status
12. Starting the cluster stack on one node:
systemctl start pacemaker
13. Stopping the cluster stack on one node:
systemctl stop corosync
14. Restarting the cluster stack on one node:
systemctl restart corosync
15. How to check the HA/DR status?
hdbnsutil -sr_state
16. How to break replication between the primary and secondary node, i.e., in an HA environment (with the SIDADM user)?
hdbnsutil -sr_unregister --id=abc
17. How to register and establish the HA nodes (with the SIDADM user)?
hdbnsutil -sr_register --name=msap --remoteHost=msap00081 --remoteInstance=00 --replicationMode=async --operationMode=logreplay
18. How to check the HA configuration entries in log files?
cat /etc/default/grub | grep GRUB_CMDLINE_LINUX_DEFAULT
19. How to check kdump status?
service kdump status
20. How to check the HA/DR status with the SIDADM user in detail?
hdbcons -e hdbindexserver "replication info"
21. How to check the HA/DR status with the Python script using the SIDADM user?
Run the command cdpy - it will change directly to the Python script directory. Then run:
python systemReplicationStatus.py
22. How to check the HA/DR status with the root user?
cmviewcl
23. Which location stores the HA/DR configuration?
/opt/cmcluster/run
24. How to Start Multi Node HANA DB?
sapcontrol -nr 00 -function StartSystem HDB 1000 500
25. How to check Multi Node HANA DB services status?
sapcontrol -nr 00 -function GetSystemInstanceList
26. How to check the HANA services status of a single node?
sapcontrol -nr 00 -function GetProcessList
27. How to stop Multi Node HANA DB instances?
sapcontrol -nr 20 -function StopSystem HDB
28. How to start Multi Node HANA DB instances?
sapcontrol -nr 20 -function StartSystem HDB
29. How to Check HANA Process on OS Level related to Indexserver?
ps -ef | grep indexserver
30. How to check HANA Process on OS Level related to HANA DB?
HDB proc or HDB info
Steps for putting a cluster node into maintenance mode during a planned maintenance activity:
1. Place node(s) into maintenance.
2. Disable STONITH
3. Stop pacemaker service on maintenance nodes.
4. Perform maintenance tasks
5. Start HANA and replication
6. Stop and restart pacemaker
7. Enable STONITH
8. Take node(s) out of maintenance mode
Let's discuss the above tasks in detail:
Step 1. Set cluster to maintenance mode:
crm configure property maintenance-mode=true
Step 2. Run the "crm status" command to make sure the cluster is fully in maintenance mode before continuing.
Step 3. Disable stonith in the cluster:
crm configure property stonith-enabled=false
Step 4. Make all the necessary changes.
Step 5. Enable stonith in the cluster:
crm configure property stonith-enabled=true
Step 6. Remove maintenance mode from cluster:
crm configure property maintenance-mode=false
I hope you like the document. Keep learning. All the best! 😊