Database : Oracle Database 19.0.0.0 EE
Operating System : HP-UX 11.31 64 Bit
Database Type : Oracle 6 Node RAC DB
Incident Summary: Error encountered while adding a disk to an ASM diskgroup.

SQL> alter diskgroup DG_testdb_DB_XP8 add disk '/dev/dg_testdb_db_xp8_60267/testdb_db_3412';
alter diskgroup DG_testdb_DB_XP8 add disk '/dev/dg_testdb_db_xp8_60267/testdb_db_3412'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15137: The ASM cluster is in rolling patch state.

Changes in the environment: This is a six-node RAC setup where the GRID bug-fix patch "37328497;ASM INSTANCE TERMINATED WITH ORA-00600 [KFCBPING05], [7543]" was recently applied on all of the cluster nodes. The post-patch step failed afterwards, and the issue was overlooked by the DBA.

Observation: A few days later, there was a requirement to add a new disk to the ASM diskgroup. During this activity, the above error was encountered. Based on the error, it appears that the cluster patch level is not consistent across all nodes. Let's proceed with troubleshooting the issue.

Steps performed to troubleshoot the issue:

Step 1: Check the opatch lspatches output on all nodes.

Node1:
grid@testdbnode1:/oracle_19c_grid/app/oracle/product/19.3/OPatch#./opatch lspatches
37328497;ASM INSTANCE TERMINATED WITH ORA-00600 [KFCBPING05], [7543]
37268031;OCW RELEASE UPDATE 19.26.0.0.0 (37268031)
37260974;Database Release Update : 19.26.0.0.250121 (37260974)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
OPatch succeeded.

Node2:
grid@testdbnode2:/oracle_19c_grid/app/oracle/product/19.3/OPatch#./opatch lspatches
37328497;ASM INSTANCE TERMINATED WITH ORA-00600 [KFCBPING05], [7543]
37268031;OCW RELEASE UPDATE 19.26.0.0.0 (37268031)
37260974;Database Release Update : 19.26.0.0.250121 (37260974)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
OPatch succeeded.
Node3:
grid@testdbnode3:/oracle_19c_grid/app/oracle/product/19.3/OPatch#./opatch lspatches
37328497;ASM INSTANCE TERMINATED WITH ORA-00600 [KFCBPING05], [7543]
37268031;OCW RELEASE UPDATE 19.26.0.0.0 (37268031)
37260974;Database Release Update : 19.26.0.0.250121 (37260974)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
OPatch succeeded.

Node4:
grid@testdbnode4:/oracle_19c_grid/app/oracle/product/19.3/OPatch#./opatch lspatches
37328497;ASM INSTANCE TERMINATED WITH ORA-00600 [KFCBPING05], [7543]
37268031;OCW RELEASE UPDATE 19.26.0.0.0 (37268031)
37260974;Database Release Update : 19.26.0.0.250121 (37260974)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
OPatch succeeded.

Node5:
grid@testdbnode5:/oracle_19c_grid/app/oracle/product/19.3/OPatch#./opatch lspatches
37328497;ASM INSTANCE TERMINATED WITH ORA-00600 [KFCBPING05], [7543]
37268031;OCW RELEASE UPDATE 19.26.0.0.0 (37268031)
37260974;Database Release Update : 19.26.0.0.250121 (37260974)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
OPatch succeeded.

Node6:
grid@testdbnode6:/oracle_19c_grid/app/oracle/product/19.3/OPatch#./opatch lspatches
37328497;ASM INSTANCE TERMINATED WITH ORA-00600 [KFCBPING05], [7543]
37268031;OCW RELEASE UPDATE 19.26.0.0.0 (37268031)
37260974;Database Release Update : 19.26.0.0.250121 (37260974)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
OPatch succeeded.

The opatch lspatches output confirms that the bug-fix patch 37328497 is listed in the inventory on all nodes.

Step 2: Check the OPatch logs on all nodes.

Node1:
grid@testdbnode1:/home/grid#cd /oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch
grid@testdbnode1:/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch# grep -i "Patch 37328497 successfully applied" *.log
opatch2025-12-10_02-08-53AM_1.log:[Dec 10, 2025 2:18:30 AM] [INFO] Patch 37328497 successfully applied.
grid@testdbnode1:/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch#egrep -i "error|fail|make" opatch2025-12-10_02-08-53AM_1.log
[Dec 10, 2025 2:08:53 AM] [INFO] Runtime args: [-Xmx1536m, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch, -DCommonLog.LOG_SESSION_ID=, -DCommonLog.COMMAND_NAME=apply, -DOPatch.ORACLE_HOME=/oracle_19c_grid/app/oracle/product/19.3, -DOPatch.DEBUG=false, -DOPatch.MAKE=false, -DOPatch.RUNNING_DIR=., -DOPatch.MW_HOME=, -DOPatch.WL_HOME=, -DOPatch.COMMON_COMPONENTS_HOME=, -DOPatch.OUI_LOCATION=/oracle_19c_grid/app/oracle/product/19.3/oui, -DOPatch.FMW_COMPONENT_HOME=, -DOPatch.OPATCH_CLASSPATH=, -DOPatch.WEBLOGIC_CLASSPATH=, -DOPatch.SKIP_OUI_VERSION_CHECK=, -DOPatch.NEXTGEN_HOME_CHECK=false, -DOPatch.PARALLEL_ON_FMW_OH=]
/usr/ccs/bin/make /oracle_19c_grid/app/oracle/product/19.3/lib/libasmclntsh19.so -f /oracle_19c_grid/app/oracle/product/19.3/rdbms/lib/ins_rdbms.mk
make libasmclntsh19.so returned code 0
[Dec 10, 2025 2:16:24 AM] [INFO] OUI-67050:Running make for target ioracle
[Dec 10, 2025 2:16:24 AM] [INFO] Start invoking 'make' at Wed Dec 10 02:16:24 IST 2025
[Dec 10, 2025 2:16:24 AM] [INFO] Running make command: /usr/ccs/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/oracle_19c_grid/app/oracle/product/19.3 OPATCH_SESSION=apply
[Dec 10, 2025 2:18:29 AM] [INFO] Finish invoking 'make' at Wed Dec 10 02:18:29 IST 2025
[Dec 10, 2025 2:18:29 AM] [INFO] OPatch will clean up 'restore.sh,make.txt' files and 'scratch,backup' directories.
[Dec 10, 2025 2:18:29 AM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_02-08-53AM/backup/inventory/Components21/oracle.crs/19.0.0.0.0/UnixActions/makedeps.xml"
[Dec 10, 2025 2:18:30 AM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_02-08-53AM/backup/inventory/make/makeorder.xml"
[Dec 10, 2025 2:18:30 AM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_02-08-53AM/make.txt"

Node2:
grid@testdbnode2:/home/grid#cd /oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch
grid@testdbnode2:/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch#grep -i "Patch 37328497 successfully applied" *.log
opatch2025-12-10_05-13-56AM_1.log:[Dec 10, 2025 5:22:54 AM] [INFO] Patch 37328497 successfully applied.
grid@testdbnode2:/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch#egrep -i "error|fail|make" opatch2025-12-10_05-13-56AM_1.log
[Dec 10, 2025 5:13:57 AM] [INFO] Runtime args: [-Xmx1536m, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch, -DCommonLog.LOG_SESSION_ID=, -DCommonLog.COMMAND_NAME=apply, -DOPatch.ORACLE_HOME=/oracle_19c_grid/app/oracle/product/19.3, -DOPatch.DEBUG=false, -DOPatch.MAKE=false, -DOPatch.RUNNING_DIR=., -DOPatch.MW_HOME=, -DOPatch.WL_HOME=, -DOPatch.COMMON_COMPONENTS_HOME=, -DOPatch.OUI_LOCATION=/oracle_19c_grid/app/oracle/product/19.3/oui, -DOPatch.FMW_COMPONENT_HOME=, -DOPatch.OPATCH_CLASSPATH=, -DOPatch.WEBLOGIC_CLASSPATH=, -DOPatch.SKIP_OUI_VERSION_CHECK=, -DOPatch.NEXTGEN_HOME_CHECK=false, -DOPatch.PARALLEL_ON_FMW_OH=]
/usr/ccs/bin/make /oracle_19c_grid/app/oracle/product/19.3/lib/libasmclntsh19.so -f /oracle_19c_grid/app/oracle/product/19.3/rdbms/lib/ins_rdbms.mk
make libasmclntsh19.so returned code 0
[Dec 10, 2025 5:20:58 AM] [INFO] OUI-67050:Running make for target ioracle
[Dec 10, 2025 5:20:58 AM] [INFO] Start invoking 'make' at Wed Dec 10 05:20:58 IST 2025
[Dec 10, 2025 5:20:58 AM] [INFO] Running make command: /usr/ccs/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/oracle_19c_grid/app/oracle/product/19.3 OPATCH_SESSION=apply
[Dec 10, 2025 5:22:53 AM] [INFO] Finish invoking 'make' at Wed Dec 10 05:22:53 IST 2025
[Dec 10, 2025 5:22:53 AM] [INFO] OPatch will clean up 'restore.sh,make.txt' files and 'scratch,backup' directories.
[Dec 10, 2025 5:22:53 AM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_05-13-56AM/backup/inventory/Components21/oracle.crs/19.0.0.0.0/UnixActions/makedeps.xml"
[Dec 10, 2025 5:22:54 AM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_05-13-56AM/backup/inventory/make/makeorder.xml"
[Dec 10, 2025 5:22:54 AM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_05-13-56AM/make.txt"

Node3:
grid@testdbnode3:/home/grid#cd /oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch
grid@testdbnode3:/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch#grep -i "Patch 37328497 successfully applied" *.log
opatch2025-12-10_05-20-55AM_1.log:[Dec 10, 2025 5:29:40 AM] [INFO] Patch 37328497 successfully applied.
grid@testdbnode3:/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch#egrep -i "error|fail|make" opatch2025-12-10_05-20-55AM_1.log
[Dec 10, 2025 5:20:56 AM] [INFO] Runtime args: [-Xmx1536m, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch, -DCommonLog.LOG_SESSION_ID=, -DCommonLog.COMMAND_NAME=apply, -DOPatch.ORACLE_HOME=/oracle_19c_grid/app/oracle/product/19.3, -DOPatch.DEBUG=false, -DOPatch.MAKE=false, -DOPatch.RUNNING_DIR=., -DOPatch.MW_HOME=, -DOPatch.WL_HOME=, -DOPatch.COMMON_COMPONENTS_HOME=, -DOPatch.OUI_LOCATION=/oracle_19c_grid/app/oracle/product/19.3/oui, -DOPatch.FMW_COMPONENT_HOME=, -DOPatch.OPATCH_CLASSPATH=, -DOPatch.WEBLOGIC_CLASSPATH=, -DOPatch.SKIP_OUI_VERSION_CHECK=, -DOPatch.NEXTGEN_HOME_CHECK=false, -DOPatch.PARALLEL_ON_FMW_OH=]
/usr/ccs/bin/make /oracle_19c_grid/app/oracle/product/19.3/lib/libasmclntsh19.so -f /oracle_19c_grid/app/oracle/product/19.3/rdbms/lib/ins_rdbms.mk
make libasmclntsh19.so returned code 0
[Dec 10, 2025 5:27:53 AM] [INFO] OUI-67050:Running make for target ioracle
[Dec 10, 2025 5:27:53 AM] [INFO] Start invoking 'make' at Wed Dec 10 05:27:53 IST 2025
[Dec 10, 2025 5:27:53 AM] [INFO] Running make command: /usr/ccs/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/oracle_19c_grid/app/oracle/product/19.3 OPATCH_SESSION=apply
[Dec 10, 2025 5:29:38 AM] [INFO] Finish invoking 'make' at Wed Dec 10 05:29:38 IST 2025
[Dec 10, 2025 5:29:38 AM] [INFO] OPatch will clean up 'restore.sh,make.txt' files and 'scratch,backup' directories.
[Dec 10, 2025 5:29:39 AM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_05-20-55AM/backup/inventory/Components21/oracle.crs/19.0.0.0.0/UnixActions/makedeps.xml"
[Dec 10, 2025 5:29:39 AM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_05-20-55AM/backup/inventory/make/makeorder.xml"
[Dec 10, 2025 5:29:39 AM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_05-20-55AM/make.txt"

Node4:
grid@testdbnode4:/home/grid#cd /oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch
grid@testdbnode4:/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch#grep -i "Patch 37328497 successfully applied" *.log
opatch2025-12-10_05-13-56AM_1.log:[Dec 10, 2025 5:23:02 AM] [INFO] Patch 37328497 successfully applied.
grid@testdbnode4:/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch#egrep -i "error|fail|make" opatch2025-12-10_05-13-56AM_1.log
[Dec 10, 2025 5:13:57 AM] [INFO] Runtime args: [-Xmx1536m, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch, -DCommonLog.LOG_SESSION_ID=, -DCommonLog.COMMAND_NAME=apply, -DOPatch.ORACLE_HOME=/oracle_19c_grid/app/oracle/product/19.3, -DOPatch.DEBUG=false, -DOPatch.MAKE=false, -DOPatch.RUNNING_DIR=., -DOPatch.MW_HOME=, -DOPatch.WL_HOME=, -DOPatch.COMMON_COMPONENTS_HOME=, -DOPatch.OUI_LOCATION=/oracle_19c_grid/app/oracle/product/19.3/oui, -DOPatch.FMW_COMPONENT_HOME=, -DOPatch.OPATCH_CLASSPATH=, -DOPatch.WEBLOGIC_CLASSPATH=, -DOPatch.SKIP_OUI_VERSION_CHECK=, -DOPatch.NEXTGEN_HOME_CHECK=false, -DOPatch.PARALLEL_ON_FMW_OH=]
/usr/ccs/bin/make /oracle_19c_grid/app/oracle/product/19.3/lib/libasmclntsh19.so -f /oracle_19c_grid/app/oracle/product/19.3/rdbms/lib/ins_rdbms.mk
make libasmclntsh19.so returned code 0
[Dec 10, 2025 5:21:08 AM] [INFO] OUI-67050:Running make for target ioracle
[Dec 10, 2025 5:21:08 AM] [INFO] Start invoking 'make' at Wed Dec 10 05:21:08 IST 2025
[Dec 10, 2025 5:21:08 AM] [INFO] Running make command: /usr/ccs/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/oracle_19c_grid/app/oracle/product/19.3 OPATCH_SESSION=apply
[Dec 10, 2025 5:23:01 AM] [INFO] Finish invoking 'make' at Wed Dec 10 05:23:01 IST 2025
[Dec 10, 2025 5:23:01 AM] [INFO] OPatch will clean up 'restore.sh,make.txt' files and 'scratch,backup' directories.
[Dec 10, 2025 5:23:01 AM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_05-13-56AM/backup/inventory/Components21/oracle.crs/19.0.0.0.0/UnixActions/makedeps.xml"
[Dec 10, 2025 5:23:01 AM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_05-13-56AM/backup/inventory/make/makeorder.xml"
[Dec 10, 2025 5:23:01 AM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_05-13-56AM/make.txt"

Node5:
grid@testdbnode5:/home/grid#cd /oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch
grid@testdbnode5:/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch#grep -i "Patch 37328497 successfully applied" *.log
opatch2025-12-10_18-12-24PM_1.log:[Dec 10, 2025 6:21:33 PM] [INFO] Patch 37328497 successfully applied.
grid@testdbnode5:/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch# egrep -i "error|fail|make" opatch2025-12-10_18-12-24PM_1.log
[Dec 10, 2025 6:12:24 PM] [INFO] Runtime args: [-Xmx1536m, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch, -DCommonLog.LOG_SESSION_ID=, -DCommonLog.COMMAND_NAME=apply, -DOPatch.ORACLE_HOME=/oracle_19c_grid/app/oracle/product/19.3, -DOPatch.DEBUG=false, -DOPatch.MAKE=false, -DOPatch.RUNNING_DIR=., -DOPatch.MW_HOME=, -DOPatch.WL_HOME=, -DOPatch.COMMON_COMPONENTS_HOME=, -DOPatch.OUI_LOCATION=/oracle_19c_grid/app/oracle/product/19.3/oui, -DOPatch.FMW_COMPONENT_HOME=, -DOPatch.OPATCH_CLASSPATH=, -DOPatch.WEBLOGIC_CLASSPATH=, -DOPatch.SKIP_OUI_VERSION_CHECK=, -DOPatch.NEXTGEN_HOME_CHECK=false, -DOPatch.PARALLEL_ON_FMW_OH=]
/usr/ccs/bin/make /oracle_19c_grid/app/oracle/product/19.3/lib/libasmclntsh19.so -f /oracle_19c_grid/app/oracle/product/19.3/rdbms/lib/ins_rdbms.mk
make libasmclntsh19.so returned code 0
[Dec 10, 2025 6:19:39 PM] [INFO] OUI-67050:Running make for target ioracle
[Dec 10, 2025 6:19:39 PM] [INFO] Start invoking 'make' at Wed Dec 10 18:19:39 IST 2025
[Dec 10, 2025 6:19:39 PM] [INFO] Running make command: /usr/ccs/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/oracle_19c_grid/app/oracle/product/19.3 OPATCH_SESSION=apply
[Dec 10, 2025 6:21:31 PM] [INFO] Finish invoking 'make' at Wed Dec 10 18:21:31 IST 2025
[Dec 10, 2025 6:21:31 PM] [INFO] OPatch will clean up 'restore.sh,make.txt' files and 'scratch,backup' directories.
[Dec 10, 2025 6:21:31 PM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_18-12-24PM/backup/inventory/Components21/oracle.crs/19.0.0.0.0/UnixActions/makedeps.xml"
[Dec 10, 2025 6:21:31 PM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_18-12-24PM/backup/inventory/make/makeorder.xml"
[Dec 10, 2025 6:21:31 PM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_18-12-24PM/make.txt"

Node6:
grid@testdbnode6:/home/grid#cd /oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch
grid@testdbnode6:/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch#grep -i "Patch 37328497 successfully applied" *.log
opatch2025-12-10_18-26-45PM_1.log:[Dec 10, 2025 6:35:38 PM] [INFO] Patch 37328497 successfully applied.
grid@testdbnode6:/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch#egrep -i "error|fail|make" opatch2025-12-10_18-26-45PM_1.log
[Dec 10, 2025 6:26:46 PM] [INFO] Runtime args: [-Xmx1536m, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/oracle_19c_grid/app/oracle/product/19.3/cfgtoollogs/opatch, -DCommonLog.LOG_SESSION_ID=, -DCommonLog.COMMAND_NAME=apply, -DOPatch.ORACLE_HOME=/oracle_19c_grid/app/oracle/product/19.3, -DOPatch.DEBUG=false, -DOPatch.MAKE=false, -DOPatch.RUNNING_DIR=., -DOPatch.MW_HOME=, -DOPatch.WL_HOME=, -DOPatch.COMMON_COMPONENTS_HOME=, -DOPatch.OUI_LOCATION=/oracle_19c_grid/app/oracle/product/19.3/oui, -DOPatch.FMW_COMPONENT_HOME=, -DOPatch.OPATCH_CLASSPATH=, -DOPatch.WEBLOGIC_CLASSPATH=, -DOPatch.SKIP_OUI_VERSION_CHECK=, -DOPatch.NEXTGEN_HOME_CHECK=false, -DOPatch.PARALLEL_ON_FMW_OH=]
/usr/ccs/bin/make /oracle_19c_grid/app/oracle/product/19.3/lib/libasmclntsh19.so -f /oracle_19c_grid/app/oracle/product/19.3/rdbms/lib/ins_rdbms.mk
make libasmclntsh19.so returned code 0
[Dec 10, 2025 6:33:49 PM] [INFO] OUI-67050:Running make for target ioracle
[Dec 10, 2025 6:33:49 PM] [INFO] Start invoking 'make' at Wed Dec 10 18:33:49 IST 2025
[Dec 10, 2025 6:33:49 PM] [INFO] Running make command: /usr/ccs/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/oracle_19c_grid/app/oracle/product/19.3 OPATCH_SESSION=apply
[Dec 10, 2025 6:35:37 PM] [INFO] Finish invoking 'make' at Wed Dec 10 18:35:37 IST 2025
[Dec 10, 2025 6:35:37 PM] [INFO] OPatch will clean up 'restore.sh,make.txt' files and 'scratch,backup' directories.
[Dec 10, 2025 6:35:37 PM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_18-26-45PM/backup/inventory/Components21/oracle.crs/19.0.0.0.0/UnixActions/makedeps.xml"
[Dec 10, 2025 6:35:38 PM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_18-26-45PM/backup/inventory/make/makeorder.xml"
[Dec 10, 2025 6:35:38 PM] [INFO] Deleted the file "/oracle_19c_grid/app/oracle/product/19.3/.patch_storage/NApply/2025-12-10_18-26-45PM/make.txt"

Based on the patching logs, it is evident that the bug-fix patch was successfully applied on all cluster nodes.

Step 3: Check the kfod patch list and patch level on all nodes.
Node1:
grid@testdbnode1:/home/grid#kfod op=patchlvl
-------------------
Current Patch level
===================
3284843566
grid@testdbnode1:/home/grid#kfod op=patches
---------------
List of Patches
===============
29517247
37260974
37268031
37328497

Node2:
grid@testdbnode2:/home/grid#kfod op=patchlvl
-------------------
Current Patch level
===================
3284843566
grid@testdbnode2:/home/grid#kfod op=patches
---------------
List of Patches
===============
29517247
37260974
37268031
37328497

Node3:
grid@testdbnode3:/home/grid#kfod op=patchlvl
-------------------
Current Patch level
===================
3284843566
grid@testdbnode3:/home/grid#kfod op=patches
---------------
List of Patches
===============
29517247
37260974
37268031
37328497

Node4:
grid@testdbnode4:/home/grid#kfod op=patchlvl
-------------------
Current Patch level
===================
3284843566
grid@testdbnode4:/home/grid#kfod op=patches
---------------
List of Patches
===============
29517247
37260974
37268031
37328497

Node5:
grid@testdbnode5:/home/grid#kfod op=patchlvl
-------------------
Current Patch level
===================
3284843566
grid@testdbnode5:/home/grid#kfod op=patches
---------------
List of Patches
===============
29517247
37260974
37268031
37328497

Node6:
grid@testdbnode6:/home/grid#kfod op=patchlvl
-------------------
Current Patch level
===================
3284843566
grid@testdbnode6:/home/grid#kfod op=patches
---------------
List of Patches
===============
29517247
37260974
37268031
37328497

The kfod patch list and patch level are identical across all nodes. So what's next? What could be causing the issue?

Step 4: Check the cluster active version across all the cluster nodes. The command "crsctl query crs activeversion -f" returns the active version for the entire cluster, so it does not need to be executed on every node. In this case, I executed it on each node only for informational purposes.
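As a side note, the upgrade state and patch level can be extracted from the "crsctl query crs activeversion -f" output programmatically, which is handy when you check many clusters. The sketch below hard-codes a sample line in the same format as the real output (the values here are illustrative); on a live system you would pipe the actual command output into the same sed filters.

```shell
# Sketch: parse "crsctl query crs activeversion -f" output.
# "out" is a hypothetical sample in the documented format; replace it
# with the live command output in practice.
out='Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [1179688039].'

# Pull the bracketed values out with sed back-references.
state=$(echo "$out" | sed 's/.*upgrade state is \[\([^]]*\)\].*/\1/')
level=$(echo "$out" | sed 's/.*active patch level is \[\([^]]*\)\].*/\1/')

echo "state=$state level=$level"
if [ "$state" != "NORMAL" ]; then
  echo "WARNING: cluster is not in NORMAL state; ASM diskgroup changes will fail"
fi
```

A check like this could sit in a post-patch runbook so a lingering [ROLLING PATCH] state is caught immediately instead of days later.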
Node1:
grid@testdbnode1:/home/grid#crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [1179688039].

Node2:
grid@testdbnode2:/home/grid#crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [1179688039].

Node3:
grid@testdbnode3:/home/grid#crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [1179688039].

Node4:
grid@testdbnode4:/home/grid#crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [1179688039].

Node5:
grid@testdbnode5:/home/grid#crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [1179688039].

Node6:
grid@testdbnode6:/home/grid#crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [1179688039].

Although the cluster active patch level [1179688039] is the same across all nodes, it is still the pre-patch level, and the cluster is operating in [ROLLING PATCH] mode. After a completed patching cycle, the active patch level should be [3284843566] and the cluster upgrade state should be back in [NORMAL] mode. This is the root cause of the issue.

Step 5: Check the softwarepatch level on all nodes.

grid@testdbnode1:/home/grid#crsctl query crs softwarepatch
Oracle Clusterware patch level on node testdbnode1 is [3284843566].
grid@testdbnode2:/home/grid#crsctl query crs softwarepatch
Oracle Clusterware patch level on node testdbnode2 is [3284843566].

grid@testdbnode3:/home/grid#crsctl query crs softwarepatch
Oracle Clusterware patch level on node testdbnode3 is [1179688039].

grid@testdbnode4:/home/grid#crsctl query crs softwarepatch
Oracle Clusterware patch level on node testdbnode4 is [1179688039].

grid@testdbnode5:/home/grid#crsctl query crs softwarepatch
Oracle Clusterware patch level on node testdbnode5 is [3284843566].

grid@testdbnode6:/home/grid#crsctl query crs softwarepatch
Oracle Clusterware patch level on node testdbnode6 is [1179688039].

From the above output, the clusterware software patch level on Node1, Node2, and Node5 is 3284843566, which is the expected post-patch level, while Node3, Node4, and Node6 still report the old level 1179688039, matching the cluster active patch level. After a successful rolling patch, the softwarepatch level on every node and the cluster active patch level should all be [3284843566]. This indicates that the clusterware patch level was not updated properly on all nodes; as a result, the cluster is stuck in the [ROLLING PATCH] state instead of returning to [NORMAL] mode. In the [ROLLING PATCH] state, the cluster continues to function normally; however, certain CRS-level operations, such as adding a new disk to an ASM diskgroup, are not permitted and will fail.

What would be the solution to fix this issue?

Step 6: Use the commands below to update the patch level in the OLR and OCR.

clscfg -localpatch : Updates the Oracle Local Registry (OLR) on the local node where it is run with the new software patch level. CRS must be down to execute this command. When to use: only one node (the local node) has an incorrect patch level.

clscfg -patch : Updates the Oracle Cluster Registry (OCR), which stores the clusterware configuration, with the new clusterware patch level. CRS must be up to execute this command. When to use: multiple nodes differ, or there is a complete mismatch across the cluster.

In this case, we will use the second command, "clscfg -patch", because the clusterware patch level is not consistent across all nodes. On the problematic node, log in as the root user and execute the command below.

Node3:
root@testdbnode3:/#/oracle_19c_grid/app/oracle/product/19.3/bin/clscfg -patch

Now execute the following command to transition the cluster upgrade state from [ROLLING PATCH] to [NORMAL]. Running it once from any one node is sufficient, as it updates the cluster upgrade state across the entire cluster.

root@testdbnode1:/#/oracle_19c_grid/app/oracle/product/19.3/bin/crsctl stop rollingpatch
CRS-1161: The cluster was successfully patched to patch level [3284843566]

Step 7: Check the clusterware activeversion and softwarepatch level across all the nodes.

Node1:
grid@testdbnode1:/home/grid#crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0].
The cluster upgrade state is [NORMAL].
The cluster active patch level is [3284843566].

Node2:
grid@testdbnode2:/home/grid#crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0].
The cluster upgrade state is [NORMAL].
The cluster active patch level is [3284843566].

Node3:
grid@testdbnode3:/home/grid#crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0].
The cluster upgrade state is [NORMAL].
The cluster active patch level is [3284843566].

Node4:
grid@testdbnode4:/home/grid#crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0].
The cluster upgrade state is [NORMAL].
The cluster active patch level is [3284843566].

Node5:
grid@testdbnode5:/home/grid#crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0].
The cluster upgrade state is [NORMAL].
The cluster active patch level is [3284843566].

Node6:
grid@testdbnode6:/home/grid#crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0].
The cluster upgrade state is [NORMAL].
The cluster active patch level is [3284843566].

Node1:
grid@testdbnode1:/home/grid#crsctl query crs softwarepatch
Oracle Clusterware patch level on node testdbnode1 is [3284843566].

Node2:
grid@testdbnode2:/home/grid#crsctl query crs softwarepatch
Oracle Clusterware patch level on node testdbnode2 is [3284843566].

Node3:
grid@testdbnode3:/home/grid#crsctl query crs softwarepatch
Oracle Clusterware patch level on node testdbnode3 is [3284843566].

Node4:
grid@testdbnode4:/home/grid#crsctl query crs softwarepatch
Oracle Clusterware patch level on node testdbnode4 is [3284843566].

Node5:
grid@testdbnode5:/home/grid#crsctl query crs softwarepatch
Oracle Clusterware patch level on node testdbnode5 is [3284843566].

Node6:
grid@testdbnode6:/home/grid#crsctl query crs softwarepatch
Oracle Clusterware patch level on node testdbnode6 is [3284843566].

You can now see that the cluster active patch level and the per-node software patch level match. Additionally, the cluster upgrade state has successfully transitioned from [ROLLING PATCH] to [NORMAL]. The issue was resolved by updating the OCR with the "clscfg -patch" command, and the ASM disk was subsequently added to the diskgroup without any further errors.
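As a preventive takeaway, the softwarepatch-versus-activeversion comparison that exposed this problem can be scripted as a routine post-patch check. The sketch below hard-codes the pre-fix sample outputs from the earlier steps (node names and levels as shown above); on a real cluster the heredoc would be replaced by "crsctl query crs softwarepatch" output collected from each node, and "active" by the live activeversion value.

```shell
# Sketch: flag nodes whose softwarepatch level differs from the cluster
# active patch level. Sample data is copied from the pre-fix outputs
# above; collect it from crsctl on a live cluster instead.
active='1179688039'   # from: crsctl query crs activeversion -f

cat > /tmp/softwarepatch.txt <<'EOF'
Oracle Clusterware patch level on node testdbnode1 is [3284843566].
Oracle Clusterware patch level on node testdbnode2 is [3284843566].
Oracle Clusterware patch level on node testdbnode3 is [1179688039].
Oracle Clusterware patch level on node testdbnode4 is [1179688039].
Oracle Clusterware patch level on node testdbnode5 is [3284843566].
Oracle Clusterware patch level on node testdbnode6 is [1179688039].
EOF

# For each node line, extract the node name and the bracketed level,
# then report any node that disagrees with the active patch level.
while read -r line; do
  node=$(echo "$line" | sed 's/.*node \([^ ]*\) is.*/\1/')
  level=$(echo "$line" | sed 's/.*\[\([^]]*\)\].*/\1/')
  if [ "$level" != "$active" ]; then
    echo "$node: softwarepatch [$level] differs from active patch level [$active]"
  fi
done < /tmp/softwarepatch.txt > /tmp/patch_mismatch.txt

cat /tmp/patch_mismatch.txt
```

With this sample data the script flags testdbnode1, testdbnode2, and testdbnode5, i.e. the nodes whose software level has moved ahead of the cluster active level, which is exactly the signature of a rolling patch that was never completed.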
Thanks for reading this post! Please comment if you liked it, and click FOLLOW to get updates on future posts!
