SAPHanaSR-angi is the SAPHanaSR Advanced Next Generation Interface. It is quite similar to the classical SAPHanaSR, but not fully backward compatible. Fortunately, you can upgrade existing clusters without HANA downtime.
In this blog article you will learn how to upgrade the SUSE HA cluster and the HANA HA/DR provider scripts from classical SAPHanaSR to the SAPHanaSR-angi setup. The upgrade should lead to the same configuration as a fresh installation from scratch.
Our blog article “What is SAPHanaSR-angi” provides information about the new SAPHanaSR-angi setup. So now let us learn about the upgrade procedure by discussing six questions:
- When do I need to upgrade my cluster configuration?
- Which cluster attributes and configuration items will change?
- What does the overall upgrade procedure look like?
- Which prerequisites does an upgrade to SAPHanaSR-angi need?
- What exactly are the commands I need to run?
- Where can I find further information?
When do I need to upgrade my cluster configuration?
There are three situations in which you might find it advisable to upgrade the SUSE HA cluster from SAPHanaSR to SAPHanaSR-angi:
- You want to benefit from one of the new features, e.g. reduced HANA downtime in case of filesystem outage.
- You want to install new clusters in addition to existing ones. In that case you might install the new clusters with SAPHanaSR-angi and upgrade the existing ones.
- You are running SUSE HA for SAP HANA scale-up as well as scale-out.
Which cluster attributes and configuration items will change?
SAPHanaSR-angi unifies HA for HANA scale-up and scale-out. Therefore it handles scale-up as a subset of scale-out, which changes the structure of attributes.
The most significant changes for scale-up are:
- The SAPHana RA and its multi-state config will be replaced by the new SAPHanaController and its promotable clone config.
- The SAPHanaSR.py HA/DR provider hook script will be replaced by the new susHanaSR.py.
- Tools are placed in /usr/bin/ instead of /usr/sbin/.
- Node attributes will be removed – hana_<sid>_vhost, hana_<sid>_site, hana_<sid>_remoteHost, lpa_<sid>_lpt, hana_<sid>_op_mode, hana_<sid>_srmode, hana_<sid>_sync_state, first and second field of hana_<sid>_roles.
- Site and global attributes will be added to property SAPHanaSR – hana_<sid>_glob_topology, hana_<sid>_glob_prim, hana_<sid>_glob_sec, hana_<sid>_site_lpt_<site>, hana_<sid>_site_lss_<site>, hana_<sid>_site_mns_<site>, hana_<sid>_site_srr_<site>, hana_<sid>_site_opMode_<site>, hana_<sid>_site_srMode_<site>, hana_<sid>_site_srPoll_<site>.
Below you see two examples of significant changes in the cluster information base (CIB): the HANA controller resource agent configuration and the SAPHanaSR property attributes.
# crm configure show rsc_SAPHanaCon_HA1_HDB00 mst_SAPHanaCon_HA1_HDB00
primitive rsc_SAPHanaCon_HA1_HDB00 ocf:suse:SAPHanaController \
    op start interval=0 timeout=3600 \
    op stop interval=0 timeout=3600 \
    op promote interval=0 timeout=3600 \
    op monitor interval=60 role=Promoted timeout=700 \
    op monitor interval=61 role=Unpromoted timeout=700 \
    params SID=HA1 InstanceNumber=00 PREFER_SITE_TAKEOVER=yes DUPLICATE_PRIMARY_TIMEOUT=600 AUTOMATED_REGISTER=yes \
    meta priority=100
clone mst_SAPHanaCon_HA1_HDB00 rsc_SAPHanaCon_HA1_HDB00 \
    meta clone-max=2 clone-node-max=1 interleave=true promotable=true
Example: New SAPHanaController resource configuration in CIB
# crm configure show SAPHanaSR
property SAPHanaSR: \
    hana_ha1_site_srHook_JWD=PRIM \
    hana_ha1_glob_topology=ScaleUp \
    hana_ha1_site_lss_WDF=4 \
    hana_ha1_site_srr_WDF=S \
    hana_ha1_site_lss_JWD=4 \
    hana_ha1_site_srr_JWD=P \
    hana_ha1_site_srMode_JWD=sync \
    hana_ha1_site_srMode_WDF=sync \
    hana_ha1_site_mns_JWD=sle12b \
    hana_ha1_site_mns_WDF=sle12a \
    hana_ha1_site_lpt_JWD=1689930278 \
    hana_ha1_site_opMode_JWD=logreplay \
    hana_ha1_site_srHook_WDF=SOK \
    hana_ha1_site_lpt_WDF=30 \
    hana_ha1_site_opMode_WDF=logreplay \
    hana_ha1_glob_prim=JWD \
    hana_ha1_site_srPoll_WDF=SOK \
    hana_ha1_site_srPoll_JWD=PRIM \
    hana_ha1_glob_sec=WDF
Example: New SAPHanaSR attributes in CIB
The HANA HA/DR provider hook scripts will change as well. As an example, here you see susHanaSR.py replacing SAPHanaSR.py. The other hook scripts shipped by SUSE are also affected.
# su - ha1adm -c "SAPHanaSR-manageProvider --show --provider=sushanasr"
[ha_dr_provider_sushanasr]
provider = susHanaSR
path = /usr/share/SAPHanaSR-angi
execution_order = 1
Example: New susHanaSR.py in HANA global.ini
The sudoers configuration has changed as well.
# cat /etc/sudoers.d/SAPHanaSR
ha1adm ALL=(ALL) NOPASSWD: /usr/bin/SAPHanaSR-hookHelper --sid=HA1 *
ha1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_ha1_site_srHook_*
Example: New path in sudoers configuration
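If you edit the sudoers file manually during the upgrade, it might be worth validating it before relying on it. Below is a minimal sketch, assuming the file name from the example above and using the standard visudo check mode:
# visudo -c -f /etc/sudoers.d/SAPHanaSR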
What does the overall upgrade procedure look like?
The upgrade procedure consists of four phases. During the whole time, the Linux cluster and HANA are kept running. However, cluster resource management is disabled and the system goes through fragile states during the upgrade.
- Phase 1 – preparing
1.1 Check for sane state of cluster, HANA and system replication
1.2 Collect information needed for the upgrade
1.3 Make a backup of CIB, sudoers and global.ini (see the sketch after this list)
- Phase 2 – removing
2.1 Set SAPHana or SAPHanaController resource to maintenance
2.2 Remove SAPHanaSR.py or SAPHanaSrMultiTarget.py from global.ini, HANA and sudoers
2.3 Remove SAPHana or SAPHanaController resource config from CIB
2.4 Remove SAPHanaSR property attributes from CIB
2.5 Remove SAPHanaSR node attributes from CIB
2.6 Remove SAPHanaSR or SAPHanaSR-ScaleOut RPM
- Phase 3 – adding
3.1 Install SAPHanaSR-angi RPM
3.2 Add susHanaSR.py to sudoers, global.ini, HANA
3.3 Add angi SAPHanaController resource config to CIB
3.4 Refresh SAPHanaController resource and set it out of maintenance
3.5 Add SAPHanaFilesystem resource (optional)
- Phase 4 – finalizing
4.1 Check for sane state of cluster, HANA and system replication
4.2 Test RA on secondary and trigger susHanaSR.py (optional)
4.3 Remove ad-hoc backup from local directories
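To illustrate phase 1.3, a minimal backup sketch could look like the lines below. The backup directory and the global.ini path are assumptions for a scale-up system with SID HA1; adapt them to your environment:
# mkdir -p /root/SAPHanaSR-upgrade-backup
# crm configure show > /root/SAPHanaSR-upgrade-backup/crm-config.txt
# cibadmin --query > /root/SAPHanaSR-upgrade-backup/cib.xml
# cp /etc/sudoers.d/SAPHanaSR /root/SAPHanaSR-upgrade-backup/
# cp /hana/shared/HA1/global/hdb/custom/config/global.ini /root/SAPHanaSR-upgrade-backup/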
Which prerequisites does an upgrade to SAPHanaSR-angi need?
The upgrade procedure depends on an initial setup as described in the setup guides and manual pages. Please see the requirements below and in the manual pages SAPHanaSR(7) or SAPHanaSR-ScaleOut(7). The procedure does not necessarily need HANA downtime if planned and executed carefully. Nevertheless, it should be done under friendly conditions.
- OS, Linux cluster and HANA are matching requirements for SAPHanaSR, or SAPHanaSR-ScaleOut respectively, and SAPHanaSR-angi.
- The resource configuration matches a documented setup. Even though the general upgrade procedure is expected to work for customized configurations, details might need special treatment.
- The whole upgrade procedure is tested carefully and documented in detail before being applied on production.
- Linux cluster, HANA and system replication are in sane state before the upgrade. All cluster nodes are online.
- The HANA database is idle during the upgrade. No other changes on OS, cluster, database or infrastructure are done in parallel to the upgrade.
- Linux cluster, HANA and system replication are checked and in sane state before set back into production.
The script SAPHanaSR-upgrade-to-angi-demo might help you check whether your cluster matches these prerequisites. Please read the manual page SAPHanaSR-upgrade-to-angi-demo(8) for details.
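As a rough sketch, checking the sane state on one cluster node might combine the commands also used by the demo script (see the examples further below) with a system replication status check as <sid>adm; the exact set of checks depends on your setup:
# cs_wait_for_idle -s 3 >/dev/null
# crm_mon -1r --include=failcounts,fencing-pending
# SAPHanaSR-showAttr
# cs_clusterstate -i | grep -v "#"
# su - ha1adm -c "HDBSettings.sh systemReplicationStatus.py"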
What exactly are the commands I need to run?
Due to the complexity of the procedure and the variability of customer setups, this blog article cannot describe each and every step in detail. But the manual page SAPHanaSR_upgrade_to_angi(7) provides you with details on the steps necessary for upgrading a SUSE cluster to SAPHanaSR-angi.
Further, the package SAPHanaSR 0.162.5 and later ships an example script SAPHanaSR-upgrade-to-angi-demo. This script helps you create a step-by-step list of detailed commands needed for the upgrade. Please read the manual page SAPHanaSR-upgrade-to-angi-demo(8) for details.
Note: The script SAPHanaSR-upgrade-to-angi-demo is shipped without any warranty or support.
To give you a first impression of the details, some examples are given in the following sections.
- Collecting information
Please carefully collect and document the following information before upgrading a cluster.
- Path to config backup directory at both sites
- Name of both cluster nodes, respectively both HANA master nameservers, see SAPHanaSR-showAttr(8)
- HANA SID and instance number, name of <sid>adm
- HANA virtual hostname, in case it is used
- Name and config of existing SAPHana, or SAPHanaController, resources and related constraints in CIB, see ocf_suse_SAPHana(7) or ocf_suse_SAPHanaController(7)
- Path to sudoers permission config file and its content, e.g. /etc/sudoers.d/SAPHanaSR
- Name of existing SAPHanaSR.py, or SAPHanaSrMultiTarget.py, section in global.ini and its content, see SAPHanaSR.py(7), SAPHanaSrMultiTarget.py(7) and SAPHanaSR-manageProvider(8)
- Name and config for new SAPHanaController resources and related constraints, path to config template, see ocf_suse_SAPHanaController(7)
- Path to config template for new sudoers permission and its content, see susHanaSR.py(7)
- Path to config template for new susHanaSR.py section, e.g. /usr/share/SAPHanaSR-angi/global.ini_susHanaSR, see susHanaSR.py(7)
- Name and config for new SAPHanaFilesystem resources, path to config template, see ocf_suse_SAPHanaFilesystem(7) (optional)
The manual page SAPHanaSR_upgrade_to_angi(7) gives you examples of how to collect that information. The script SAPHanaSR-upgrade-to-angi-demo might help you collect this information.
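As an illustration, the following commands might help with collecting parts of this information on a classical scale-up cluster. The resource names, SID and file paths are taken from the examples in this article and will differ in your environment:
# crm configure show | grep SAPHana
# crm configure show msl_SAPHana_HA1_HDB00 rsc_SAPHana_HA1_HDB00
# cat /etc/sudoers.d/SAPHanaSR
# grep -A4 ha_dr_provider /hana/shared/HA1/global/hdb/custom/config/global.ini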
- Removing SAPHana RA configuration from CIB while keeping HANA running
The example below shows which commands the script SAPHanaSR-upgrade-to-angi-demo recommends for removing the SAPHana resource configuration from the CIB, without stopping the resource. Of course, other items need to be removed as well, see above.
# /root/bin/SAPHanaSR-upgrade-to-angi-demo --run f_maintenance-on-classic f_remove-saphanacon-classic
######## run f_maintenance-on-classic #########
cs_wait_for_idle -s 3 >/dev/null
crm resource maintenance msl_SAPHana_HA1_HDB00 on
cs_wait_for_idle -s 3 >/dev/null
crm resource maintenance cln_SAPHanaTop_HA1_HDB00 on
cs_wait_for_idle -s 3 >/dev/null
echo "property cib-bootstrap-options: stop-orphan-resources=false" | crm configure load update -
######## run f_maintenance-on-classic #########
######## run f_remove-saphanacon-classic #########
cs_wait_for_idle -s 3 >/dev/null
cibadmin --delete --xpath "//rsc_colocation[@id='ip_with_SAPHana']"
cibadmin --delete --xpath "//rsc_order[@id='SAPHanaTop_first']"
cibadmin --delete --xpath "//master[@id='msl_SAPHana_HA1_HDB00']"
cs_wait_for_idle -s 3 >/dev/null
crm resource refresh rsc_SAPHana_HA1_HDB00
######## end f_remove-saphanacon-classic #########
Example: SAPHanaSR-upgrade-to-angi-demo suggests how to remove SAPHana
Note: Do not forget to change back to stop-orphan-resources=true after the upgrade.
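Setting the property back could look like this, mirroring the command from the example above:
# echo "property cib-bootstrap-options: stop-orphan-resources=true" | crm configure load update -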
- Setting up SAPHanaSR-angi once the old package and configuration have been removed
Once you have removed the old SAPHanaSR configuration and package, you can install and configure the new SAPHanaSR-angi package. Please follow our setup guide “SAP HANA System Replication Scale-Up – Performance Optimized Scenario with SAPHanaSR-angi” ( https://documentation.suse.com/sbp/sap-15/html/SLES4SAP-hana-angi-perfopt-15/index.html ). This document guides you through phase 3 of the upgrade procedure. You might integrate the tasks of phase 3 into your deployment automation.
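As a rough sketch of phases 3.1 and 3.2, installing the new package and registering the susHanaSR.py hook might look like the lines below. The SAPHanaSR-manageProvider options and the online reload of HA/DR providers via hdbnsutil are assumptions here; please check SAPHanaSR-manageProvider(8), susHanaSR.py(7) and the setup guide before using them. Do not forget to also adapt the sudoers permissions as shown earlier in this article.
# zypper install SAPHanaSR-angi
# su - ha1adm -c "SAPHanaSR-manageProvider --reconfigure --add /usr/share/SAPHanaSR-angi/global.ini_susHanaSR"
# su - ha1adm -c "hdbnsutil -reloadHADRProviders"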
Another option would be to use SAPHanaSR-upgrade-to-angi-demo for creating all commands, including phase 3 and phase 4. See section below.
- Creating a complete command sequence for the upgrade procedure
The script SAPHanaSR-upgrade-to-angi-demo can help create a sequence of commands aimed at covering all steps of the four phases. The script collects information from the running cluster with SAPHanaSR. Based on that data, it suggests step by step the commands to upgrade the cluster. The script does not change the running configuration. The script is shipped with the RPM as a sample. Please copy it to /root/bin/ on both cluster nodes. See the manual page SAPHanaSR-upgrade-to-angi-demo(8) for details on usage and requirements of the script.
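Assuming the sample is shipped below /usr/share/SAPHanaSR/samples/ (the package's file list will tell you the exact path), copying it in place might look like this:
# rpm -ql SAPHanaSR | grep upgrade-to-angi-demo
# cp /usr/share/SAPHanaSR/samples/SAPHanaSR-upgrade-to-angi-demo /root/bin/
# chmod 755 /root/bin/SAPHanaSR-upgrade-to-angi-demo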
The example below shows how to create a draft for an upgrade command sequence. That draft can be used to create a runbook specific to the cluster at hand. Please check the suggested commands and parameters before applying them.
# /root/bin/SAPHanaSR-upgrade-to-angi-demo --upgrade | tee SAPHanaSR-upgrade-draft.txt
######## run f_show-state #########
cs_wait_for_idle -s 3 >/dev/null
crm_mon -1r --include=failcounts,fencing-pending;echo;SAPHanaSR-showAttr;cs_clusterstate -i|grep -v "#"
Cluster Summary:
  * Stack: corosync
  * Current DC: sle12b (version 2.1.2+20211124.ada5c3b36-150400.4.9.2-2.1.2+20211124.ada5c3b36) - partition with quorum
  * Last updated: Mon May 6 17:02:13 2024
  * Last change: Mon May 6 17:01:34 2024 by root via crm_attribute on sle12a
  * 2 nodes configured
  * 6 resource instances configured
Node List:
  * Online: [ sle12a sle12b ]
Full List of Resources:
  * rsc_ip_HA1_HDB00 (ocf::heartbeat:IPaddr2): Started sle12a
  * rsc_stonith_sbd (stonith:external/sbd): Started sle12b
  * Clone Set: msl_SAPHana_HA1_HDB00 [rsc_SAPHana_HA1_HDB00] (promotable):
    * Masters: [ sle12a ]
    * Slaves: [ sle12b ]
  * Clone Set: cln_SAPHanaTop_HA1_HDB00 [rsc_SAPHanaTop_HA1_HDB00]:
    * Started: [ sle12a sle12b ]
...
######## run f_make-backup #########
...
######## run f_maintenance-on-classic #########
...
######## run f_remove-srhook-classic #########
...
######## run f_remove-saphanacon-classic #########
...
######## run f_remove-saphanatop-classic #########
...
######## run f_remove-property #########
...
######## run f_remove-node-attribute #########
...
######## run f_remove-rpm-classic #########
...
######## run f_install-rpm-angi #########
...
######## run f_add-srhook-angi #########
...
######## run f_add-saphanatop-angi #########
...
######## run f_add-saphanacon-angi #########
...
######## run f_add-saphanafil-angi #########
...
######## run f_maintenance-off-angi #########
...
######## run f_show-state #########
...
######## run f_check-final-state #########
...
######## run f_test-secondary #########
...
######## run f_show-state #########
...
Cluster state: S_IDLE
######## end f_show-state #########
Example: SAPHanaSR-upgrade-to-angi-demo suggests a complete command sequence
# less SAPHanaSR-upgrade-draft.txt
...
Example: Reading the script's output as a first draft
You can use this draft to prepare a detailed runbook for the upgrade procedure. Please check the proposed commands; sometimes you will need to adapt something. Of course you should also check the final result.
Note: Test the created commands on a test cluster before using SAPHanaSR-upgrade-to-angi-demo on a production cluster.
If you feel uncomfortable with performing an upgrade to SAPHanaSR-angi, please do not hesitate to contact SUSE services.
Where can I find further information?
– Related blog articles
https://www.suse.com/c/tag/towardszerodowntime
– Product documentation
https://documentation.suse.com/
https://documentation.suse.com/sbp/sap-15/
https://www.suse.com/releasenotes/
– Manual pages
SAPHanaSR-angi(7), SAPHanaSR_upgrade_to_angi(7), SAPHanaSR-upgrade-to-angi-demo(8), SAPHanaSR_maintenance_examples(7), SAPHanaSR-showAttr(8), crm_mon(8), cibadmin(8), cs_wait_for_idle(8), susHanaSR.py(7)