Environment:

Current Database: Oracle RAC 19.19
Operating System: HP-UX 11.31

Error:

CLSRSC-507: The root script cannot proceed on this node Node1 because either the first-node operations have not completed on node Node3 or there was an error in obtaining the status of the first-node operations.

Change in environment:

Node1 was in an unresponsive state: basic commands such as bdf, ls, and glance were hanging and returning no output. The System Team therefore decided to reboot the server. Post-reboot, however, ASM failed to start. To resolve the issue, the System Team proceeded to deconfigure and reconfigure Oracle Clusterware (CRS). During the CRS reconfiguration phase, the root.sh script failed with the error message shown above.

Cause:

During CRS reconfiguration, the root.sh script failed at step 8 with the following error messages:

root@rac1:/oracle_19c_grid/app/oracle/product/19.3#sh root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /oracle_19c_grid/app/oracle/product/19.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Smartmatch is deprecated at /oracle_19c_grid/app/oracle/product/19.3/crs/install/crsupgrade.pm line 6512.
Using configuration parameter file: /oracle_19c_grid/app/oracle/product/19.3/crs/install/crsconfig_params
The log of current session can be found at:
  /oracle_19c_grid/app/orabase/crsdata/rac1/crsconfig/rootcrs_rac1_2025-08-02_12-18-42AM.log
2025/08/02 12:19:00 CLSRSC-594: Executing installation step 1 of 19: 'ValidateEnv'.
2025/08/02 12:19:00 CLSRSC-594: Executing installation step 2 of 19: 'CheckFirstNode'.
2025/08/02 12:19:04 CLSRSC-594: Executing installation step 3 of 19: 'GenSiteGUIDs'.
2025/08/02 12:19:05 CLSRSC-594: Executing installation step 4 of 19: 'SetupOSD'.
2025/08/02 12:19:05 CLSRSC-594: Executing installation step 5 of 19: 'CheckCRSConfig'.
2025/08/02 12:19:08 CLSRSC-594: Executing installation step 6 of 19: 'SetupLocalGPNP'.
2025/08/02 12:19:11 CLSRSC-594: Executing installation step 7 of 19: 'CreateRootCert'.
2025/08/02 12:19:12 CLSRSC-594: Executing installation step 8 of 19: 'ConfigOLR'.
2025/08/02 12:19:15 CLSRSC-507: The root script cannot proceed on this node rac1 because either the first-node operations have not completed on node rac3 or there was an error in obtaining the status of the first-node operations.
Died at /oracle_19c_grid/app/oracle/product/19.3/crs/install/oraocr.pm line 271.

Troubleshooting Steps:

Step 1:

Based on the error message, it appeared that the first-node (root script) operations had previously been run on Node3 (rac3) and had not completed successfully there. To verify this, the root script log on rac3 was reviewed.

Upon investigation, it was found that this Oracle 19c RAC cluster had been upgraded from 12c in the past. During that upgrade, the rootupgrade.sh script completed successfully without any errors. Since it was an upgrade and not a fresh installation, rootupgrade.sh was used instead of root.sh.
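Before reading the rac3 log shown below, here is a quick way to locate and scan the root script logs on the first node. This is a sketch, not part of the original session; the paths are taken from this environment and the exact log file names will differ.

#On rac3, as the grid owner: list the root script logs kept under the Grid home
cd /oracle_19c_grid/app/oracle/product/19.3/install
ls -ltr root*.log
#Scan the rootcrs/crsconfig session logs for reported errors
grep "CLSRSC-" /oracle_19c_grid/app/orabase/crsdata/rac3/crsconfig/rootcrs_rac3_*.log | tail -20
grep "Died at" /oracle_19c_grid/app/orabase/crsdata/rac3/crsconfig/rootcrs_rac3_*.log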
rac3:/home/oracle#cd /oracle_19c_grid/app/oracle/product/19.3/install
rac3:/oracle_19c_grid/app/oracle/product/19.3/install#cat root_rac_2021-05-14_21-49-16-20.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /oracle_19c_grid/app/oracle/product/19.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /oracle_19c_grid/app/oracle/product/19.3/crs/install/crsconfig_params
The log of current session can be found at:
  /oracle_19c_grid/app/orabase/crsdata/rac3/crsconfig/rootcrs_rac3_2021-05-14_09-50-49PM.log
2021/05/14 21:53:26 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2021/05/14 21:53:27 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2021/05/14 21:53:27 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2021/05/14 21:53:27 CLSRSC-4005: Failed to patch Oracle Trace File Analyzer (TFA) Collector. Grid Infrastructure operations will continue.
2021/05/14 21:54:08 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2021/05/14 21:54:08 CLSRSC-464: Starting retrieval of the cluster configuration data
PRCT-1470 : failed to reset the Rapid Home Provisioning (RHP) repository
PRCR-1172 : Failed to execute "srvmhelper" for -getmgmtdbnodename
2021/05/14 21:55:29 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2021/05/14 21:58:15 CLSRSC-693: CRS entities validation completed successfully.
2021/05/14 21:58:59 CLSRSC-515: Starting OCR manual backup.
2021/05/14 22:01:35 CLSRSC-516: OCR manual backup successful.
2021/05/14 22:05:56 CLSRSC-486: At this stage of upgrade, the OCR has changed. Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2021/05/14 22:05:57 CLSRSC-541: To downgrade the cluster: 1. All nodes that have been upgraded must be downgraded.
2021/05/14 22:05:57 CLSRSC-542: 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2021/05/14 22:07:16 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2021/05/14 22:07:16 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2021/05/14 22:07:29 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2021/05/14 22:09:06 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2021/05/14 22:09:07 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.
2021/05/14 22:13:39 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2021/05/14 22:13:40 CLSRSC-482: Running command: '/oracle_12C_grid/app/oracle/product/12.2.0/bin/crsctl start rollingupgrade 19.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2021/05/14 22:13:45 CLSRSC-482: Running command: '/oracle_19c_grid/app/oracle/product/19.3/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /oracle_12C_grid/app/oracle/product/12.2.0 -oldCRSVersion 12.2.0.1.0 -firstNode true -startRolling false '
ASM configuration upgraded in local node successfully.
2021/05/14 22:13:53 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2021/05/14 22:14:27 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2021/05/14 22:14:53 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2021/05/14 22:15:47 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2021/05/14 22:15:52 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2021/05/14 22:17:06 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2021/05/14 22:17:06 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2021/05/14 22:18:19 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2021/05/14 22:19:38 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2021/05/14 22:19:39 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'
2021/05/14 22:22:31 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2021/05/14 22:23:48 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2021/05/14 22:24:56 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2021/05/14 22:27:49 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'sys'..
Operation successful.
2021/05/14 22:29:18 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
2021/05/14 22:30:11 CLSRSC-474: Initiating upgrade of resource types
2021/05/14 22:32:14 CLSRSC-475: Upgrade of resource types successfully initiated.
2021/05/14 22:34:32 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
2021/05/14 22:35:16 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

The rootupgrade.sh run on rac3 had completed successfully, so the question remains: why did root.sh fail on Node1? What next?

Step 2:

Looking at the error message again, the failure could be due to one of two reasons: either the root script had previously failed on Node3 (rac3), or there was a problem retrieving the status of the first-node operations during the current execution.

CLSRSC-507: The root script cannot proceed on this node rac1 because either the first-node operations have not completed on node rac3 or there was an error in obtaining the status of the first-node operations.
Died at /oracle_19c_grid/app/oracle/product/19.3/crs/install/oraocr.pm line 271.

Let's look at line number 271 in the script /oracle_19c_grid/app/oracle/product/19.3/crs/install/oraocr.pm on Node1.

oracle@rac1:/oracle_19c_grid/app/oracle/product/19.3/crs/install#ls -ltr oraocr.pm
-rwxr-xr-x 1 root oinstall 89044 May 3 2024 oraocr.pm

Open the file in the vi editor (vi <filename>), press ESC, type :set nu to enable line numbers, and navigate to line 271. You will find the following code at that line:

oracle@rac1:/oracle_19c_grid/app/oracle/product/19.3/crs/install#view oraocr.pm
    263   if ($CFG->ASM_STORAGE_USED)
    264   {
    265     trace("Retrieve OCR locations from the sync file");
    266     my $syncFile = getOCRLocSyncFile();
    267     if (! -e $syncFile)
    268     {
    269       my $localNode = tolower_host();
    270       my $installNode = getInstallNode();
    271       die(dieformat(507, $localNode, $installNode));
    272     }
    273
    274     my @ocr_locations = read_file(getOCRLocSyncFile());
    275     $OCR = $ocr_locations[0];
    276     chomp($OCR);
    277   }

From this code it is evident that the script looks for a "sync file" listing the OCR locations and dies with CLSRSC-507 when that file does not exist. To see exactly which file it is trying to read, search for the function getOCRLocSyncFile() within the same file (in vi, type /getOCRLocSyncFile) and review the code at the corresponding line number:

   2136 sub getOCRLocSyncFile
   2137 #------------------------------------------------------------------------------
   2138 # Function : Populates the sync filename to store the list of OCR locations
   2139 # Args     : None
   2140 # Returns  : OCR Sync filename
   2141 #------------------------------------------------------------------------------
   2142 {
   2143   my $sync_file = catfile($CFG->params('GPNPGCONFIGDIR'),
   2144                           'gpnp', 'seed', 'asm', 'ocrloc.conf');
   2145   trace("Sync file: $sync_file");
   2146   return $sync_file;

So the script attempts to read the file located at:

$CRS_HOME/gpnp/seed/asm/ocrloc.conf

To validate this, the ocrloc.conf file was checked on all nodes of the cluster. It was missing on every node, including the one where root.sh was being executed. Because ocrloc.conf was not present, root.sh failed when it tried to read this file during the Clusterware configuration.

oracle@rac1:/oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm#ls -ltr
-rw-r--r-- 1 oracle oinstall 3746 May 14 2021 credentials.xml

oracle@rac2:/oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm#ls -ltr
-rw-r--r-- 1 oracle oinstall 3746 May 14 2021 credentials.xml

oracle@rac3:/oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm#ls -ltr
-rw-r--r-- 1 oracle oinstall 3746 May 14 2021 credentials.xml

oracle@rac4:/oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm#ls -ltr
-rw-r--r-- 1 oracle oinstall 3746 May 14 2021 credentials.xml

What is the next plan of action?

Step 3:

For comparison, the same file was checked on the nodes of another 19c cluster, where ocrloc.conf was present.

oracle@testdb1:/oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm#ls -tlr
-rw-r--r-- 1 oracle oinstall 3779 Oct 21 2022 credentials.xml
-rwxr-x--- 1 oracle oinstall 59 Oct 21 2022 ocrloc.conf

oracle@testdb2:/oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm#ls -tlr
-rw-r--r-- 1 oracle oinstall 3779 Oct 21 2022 credentials.xml
-rwxr-x--- 1 oracle oinstall 59 Oct 21 2022 ocrloc.conf

oracle@testdb3:/oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm#ls -tlr
-rw-r--r-- 1 oracle oinstall 3779 Oct 21 2022 credentials.xml
-rwxr-x--- 1 oracle oinstall 59 Oct 21 2022 ocrloc.conf

oracle@testdb4:/oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm#ls -tlr
-rw-r--r-- 1 oracle oinstall 3779 Oct 21 2022 credentials.xml
-rwxr-x--- 1 oracle oinstall 59 Oct 21 2022 ocrloc.conf

Let's check the content of this file.

oracle@testdb1:/oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm#cat ocrloc.conf
+OCR_VOTE/testdb-cluster/OCRFILE/registry.255.1118667903

The file simply contains the OCR location. Let's find the current OCR location of the existing cluster and use it to recreate the file on the problematic node; a scripted sketch follows, and the manual steps are shown after it.
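Recreating the missing sync file can also be scripted. A minimal sketch, assuming the standard HP-UX ocr.loc location used in this post, run as the grid owner on the problematic node only:

#Read the OCR location registered on this node
OCRLOC=`grep "^ocrconfig_loc=" /var/opt/oracle/ocr.loc | cut -d= -f2`
echo "$OCRLOC"
#Write it into the sync file that root.sh expects to find
echo "$OCRLOC" > /oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm/ocrloc.conf
cat /oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm/ocrloc.conf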
#To check OCR location
oracle@rac1:/home/oracle#cd /var/opt/oracle
oracle@rac1:/var/opt/oracle#ls -ltr *.loc
-rw-r--r-- 1 root sys 68 Oct 21 2022 oraInst.loc
-rw-r--r-- 1 root oinstall 132 Oct 21 2022 olr.loc
-rw-r--r-- 1 root oinstall 262 Jun 2 19:28 ocr.loc

oracle@rac1:/var/opt/oracle#cat ocr.loc
ocrconfig_loc=+DG_OCR_VOTE_XP8/rac-luster/OCRFILE/registry.255.1198761621
local_only=false

Let's create the same ocrloc.conf file under $CRS_HOME/gpnp/seed/asm/, add the OCR location of the existing cluster, and save it. Do this on the problematic node only for now.

oracle@rac1:/home/oracle#cd /oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm
oracle@rac1:/oracle_19c_grid/app/oracle/product/19.3/gpnp/seed/asm#vi ocrloc.conf
+DG_OCR_VOTE_XP8/rac-luster/OCRFILE/registry.255.1198761621

Step 4:

Now run the root.sh script again.

root@rac1:/oracle_19c_grid/app/oracle/product/19.3#sh root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /oracle_19c_grid/app/oracle/product/19.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Smartmatch is deprecated at /oracle_19c_grid/app/oracle/product/19.3/crs/install/crsupgrade.pm line 6512.
Using configuration parameter file: /oracle_19c_grid/app/oracle/product/19.3/crs/install/crsconfig_params
The log of current session can be found at:
  /oracle_19c_grid/app/orabase/crsdata/rac1/crsconfig/rootcrs_rac1_2025-08-02_06-01-20PM.log
2025/08/02 18:01:41 CLSRSC-594: Executing installation step 1 of 19: 'ValidateEnv'.
2025/08/02 18:01:42 CLSRSC-594: Executing installation step 2 of 19: 'CheckFirstNode'.
2025/08/02 18:01:46 CLSRSC-594: Executing installation step 3 of 19: 'GenSiteGUIDs'.
2025/08/02 18:01:46 CLSRSC-594: Executing installation step 4 of 19: 'SetupOSD'.
2025/08/02 18:01:47 CLSRSC-594: Executing installation step 5 of 19: 'CheckCRSConfig'.
2025/08/02 18:01:48 CLSRSC-594: Executing installation step 6 of 19: 'SetupLocalGPNP'.
2025/08/02 18:01:51 CLSRSC-594: Executing installation step 7 of 19: 'CreateRootCert'.
2025/08/02 18:01:52 CLSRSC-594: Executing installation step 8 of 19: 'ConfigOLR'.
2025/08/02 18:01:54 CLSRSC-594: Executing installation step 9 of 19: 'ConfigCHMOS'.
2025/08/02 18:01:54 CLSRSC-594: Executing installation step 10 of 19: 'CreateOHASD'.
2025/08/02 18:01:57 CLSRSC-594: Executing installation step 11 of 19: 'ConfigOHASD'.
2025/08/02 18:02:14 CLSRSC-594: Executing installation step 12 of 19: 'SetupTFA'.
2025/08/02 18:02:14 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2025/08/02 18:02:15 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2025/08/02 18:02:20 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2025/08/02 18:02:23 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2025/08/02 18:02:30 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2025/08/02 18:03:27 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-1705: Found 0 configured voting files but 1 voting files are required, terminating to ensure data integrity; details at (:CSSNM00065:) in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ocssd.trc
CRS-2883: Resource 'ora.cssd' failed during Clusterware stack start.
CRS-4406: Oracle High Availability Services synchronous start failed.
CRS-41053: checking Oracle Grid Infrastructure for file permission issues
PRVG-2032 : Group of file "/oracle_19c_grid/app/orabase/crsdata/rac1/ocr/day_.ocr" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "sys(3)"]
PRVG-2032 : Group of file "/oracle_19c_grid/app/orabase/crsdata/rac1/ocr/day.ocr" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "sys(3)"]
PRVG-2032 : Group of file "/oracle_19c_grid/app/orabase/crsdata/rac1/ocr/backup00.ocr" did not match the expected value on node "rac1". [Expected = "root(0)" ; Found = "sys(3)"]
PRVH-0102 : File "/oracle_19c_grid/app/oracle/product/19.3/bin/oraipsecsetup" does not exist on node "rac1".
PRVH-0102 : File "/oracle_19c_grid/app/oracle/product/19.3/bin/orald" does not exist on node "rac1".
PRVH-0102 : File "/oracle_19c_grid/app/oracle/product/19.3/gpnp/wallets/peer/cwallet.sso" does not exist on node "rac1".
PRVH-0102 : File "/oracle_19c_grid/app/oracle/product/19.3/gpnp/wallets/root/ewallet.p12" does not exist on node "rac1".
PRVH-0102 : File "/oracle_19c_grid/app/oracle/product/19.3/gpnp/profiles/peer/profile.xml" does not exist on node "rac1".
PRVH-0154 : File "/oracle_19c_grid/app/oracle/product/19.3/lib/libcrf19.so" is empty on node "rac1".
PRVG-2032 : Group of file "/oracle_19c_grid/app/oracle/product/19.3/bin/tfactl" did not match the expected value on node "rac1". [Expected = "oinstall(107)|root(0)" ; Found = "sys(3)"]
CRS-4000: Command Start failed, or completed with errors.
2025/08/02 18:13:24 CLSRSC-117: (Bad argc for has:clsrsc-117)
Died at /oracle_19c_grid/app/oracle/product/19.3/crs/install/crsinstall.pm line 2114.

After creating the missing ocrloc.conf file, step 8 of the root.sh execution completed successfully and the script continued up to step 17. It now fails while attempting to start the cluster, which is a different problem, unrelated to the earlier missing-file issue. Let's troubleshoot this new error.

Step 5:

From the error messages, it is clear that the CSSD daemon is failing to start.
Error message:

CRS-1705: Found 0 configured voting files but 1 voting files are required, terminating to ensure data integrity; details at (:CSSNM00065:) in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ocssd.trc
CRS-2883: Resource 'ora.cssd' failed during Clusterware stack start.
CRS-4406: Oracle High Availability Services synchronous start failed.
CRS-41053: checking Oracle Grid Infrastructure for file permission issues
(followed by the same PRVG-2032/PRVH-0102/PRVH-0154 file checks, CRS-4000 and CLSRSC-117 messages shown above)

First, check whether the files reported in those messages actually exist and verify their permissions.
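The per-file checks are shown below; as a convenience, a loop like the following (a sketch built from the file list in the error output) runs them in one pass:

for f in /oracle_19c_grid/app/orabase/crsdata/rac1/ocr/day_.ocr \
         /oracle_19c_grid/app/orabase/crsdata/rac1/ocr/day.ocr \
         /oracle_19c_grid/app/orabase/crsdata/rac1/ocr/backup00.ocr \
         /oracle_19c_grid/app/oracle/product/19.3/bin/oraipsecsetup \
         /oracle_19c_grid/app/oracle/product/19.3/bin/orald \
         /oracle_19c_grid/app/oracle/product/19.3/gpnp/wallets/peer/cwallet.sso \
         /oracle_19c_grid/app/oracle/product/19.3/gpnp/wallets/root/ewallet.p12 \
         /oracle_19c_grid/app/oracle/product/19.3/gpnp/profiles/peer/profile.xml \
         /oracle_19c_grid/app/oracle/product/19.3/lib/libcrf19.so \
         /oracle_19c_grid/app/oracle/product/19.3/bin/tfactl
do
  #Print the listing if the file exists, otherwise flag it as missing
  ls -l "$f" 2>/dev/null || echo "$f : not found"
done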
oracle@rac1:/home/oracle#ls -ltr /oracle_19c_grid/app/orabase/crsdata/rac1/ocr/day_.ocr
-rw------- 1 root sys 3207168 Feb 10 15:18 /oracle_19c_grid/app/orabase/crsdata/rac1/ocr/day_.ocr
oracle@rac1:/home/oracle#ls -ltr /oracle_19c_grid/app/orabase/crsdata/rac1/ocr/day.ocr
-rw------- 1 root sys 3207168 Feb 9 15:16 /oracle_19c_grid/app/orabase/crsdata/rac1/ocr/day.ocr
oracle@rac1:/home/oracle#ls -ltr /oracle_19c_grid/app/orabase/crsdata/rac1/ocr/backup00.ocr
-rw------- 1 root sys 3227648 Apr 29 00:27 /oracle_19c_grid/app/orabase/crsdata/rac1/ocr/backup00.ocr
oracle@rac1:/home/oracle#ls -ltr /oracle_19c_grid/app/oracle/product/19.3/bin/oraipsecsetup
/oracle_19c_grid/app/oracle/product/19.3/bin/oraipsecsetup not found
oracle@rac1:/home/oracle#ls -ltr /oracle_19c_grid/app/oracle/product/19.3/bin/orald
/oracle_19c_grid/app/oracle/product/19.3/bin/orald not found
oracle@rac1:/home/oracle#ls -ltr /oracle_19c_grid/app/oracle/product/19.3/gpnp/wallets/peer/cwallet.sso
/oracle_19c_grid/app/oracle/product/19.3/gpnp/wallets/peer/cwallet.sso not found
oracle@rac1:/home/oracle#ls -ltr /oracle_19c_grid/app/oracle/product/19.3/gpnp/wallets/root/ewallet.p12
/oracle_19c_grid/app/oracle/product/19.3/gpnp/wallets/root/ewallet.p12 not found
oracle@rac1:/home/oracle#ls -ltr /oracle_19c_grid/app/oracle/product/19.3/gpnp/profiles/peer/profile.xml
/oracle_19c_grid/app/oracle/product/19.3/gpnp/profiles/peer/profile.xml not found
oracle@rac1:/home/oracle#ls -ltr /oracle_19c_grid/app/oracle/product/19.3/lib/libcrf19.so
-rw-r--r-- 1 root oinstall 0 May 14 2021 /oracle_19c_grid/app/oracle/product/19.3/lib/libcrf19.so
oracle@rac1:/home/oracle#ls -ltr /oracle_19c_grid/app/oracle/product/19.3/bin/tfactl
lrwxr-xr-- 1 root sys 30 Aug 2 17:53 /oracle_19c_grid/app/oracle/product/19.3/bin/tfactl -> /opt/oracle.ahf/tfa/bin/tfactl

Several of the flagged files are indeed missing or empty, and the ones that do exist have the reported ownership. However, these file and permission findings come from the check the installer runs after the startup failure; they look like secondary symptoms rather than the cause, and the actual issue appears to be different. Let's check the referenced trace files for error messages.

Checking the cluster alert log file:

$view /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/alert.log
2025-08-02 17:41:08.020 [OHASD(53911)]CRS-8500: Oracle Clusterware OHASD process is starting with operating system process ID 53911
2025-08-02 17:41:08.151 [OHASD(53911)]CRS-0714: Oracle Clusterware Release 19.0.0.0.0.
2025-08-02 17:41:08.181 [OHASD(53911)]CRS-2112: The OLR service started on node rac1.
2025-08-02 17:41:10.247 [OHASD(53911)]CRS-1301: Oracle High Availability Service started on node rac1.
2025-08-02 17:41:13.863 [ORAAGENT(53963)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 53963
2025-08-02 17:41:14.067 [ORAROOTAGENT(53974)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 53974
2025-08-02 17:41:14.712 [CSSDAGENT(53984)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 53984
2025-08-02 17:41:14.713 [CSSDMONITOR(53986)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 53986
2025-08-02 17:41:16.280 [ORAAGENT(53994)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 53994
2025-08-02 17:41:16.749 [MDNSD(54004)]CRS-8500: Oracle Clusterware MDNSD process is starting with operating system process ID 54004
2025-08-02 17:41:16.826 [EVMD(54006)]CRS-8500: Oracle Clusterware EVMD process is starting with operating system process ID 54006
2025-08-02 17:41:18.006 [GPNPD(54034)]CRS-8500: Oracle Clusterware GPNPD process is starting with operating system process ID 54034
2025-08-02 17:41:19.294 [GPNPD(54034)]CRS-2328: Grid Plug and Play Daemon(GPNPD) started on node rac1.
2025-08-02 17:41:19.284 [GIPCD(54090)]CRS-8500: Oracle Clusterware GIPCD process is starting with operating system process ID 54090
2025-08-02 17:41:20.123 [GIPCD(54090)]CRS-7517: The Oracle Grid Interprocess Communication (GIPC) failed to identify the Fast Node Death Detection (FNDD).
2025-08-02 17:41:21.806 [CSSDMONITOR(54112)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 54112
2025-08-02 17:41:22.373 [CSSDAGENT(54118)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 54118
2025-08-02 17:41:27.533 [OCSSD(54120)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 54120
2025-08-02 17:41:28.732 [OCSSD(54120)]CRS-1713: CSSD daemon is started in hub mode
2025-08-02 17:51:22.683 [CSSDAGENT(54118)]CRS-5818: Aborted command 'start' for resource 'ora.cssd'. Details at (:CRSAGF00113:) {0:5:3} in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ohasd_cssdagent_root.trc.
2025-08-02 17:51:23.664 [OHASD(53911)]CRS-2757: Command 'Start' timed out waiting for response from the resource 'ora.cssd'. Details at (:CRSPE00221:) {0:5:3} in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ohasd.trc.
2025-08-02 17:51:23.686 [OCSSD(54120)]CRS-1656: The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ocssd.trc
2025-08-02 17:51:23.689 [OCSSD(54120)]CRS-1652: Starting clean up of CRSD resources.
2025-08-02 17:51:23.699 [OCSSD(54120)]CRS-1653: The clean up of the CRSD resources failed.
2025-08-02 17:51:23.716 [OCSSD(54120)]CRS-8503: Oracle Clusterware process OCSSD with operating system process ID 54120 experienced fatal signal or exception code 6.
2025-08-02T17:51:23.979297+05:30
Errors in file /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ocssd.trc (incident=297):
CRS-8503 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/incident/incdir_297/ocssd_i297.trc
2025-08-02 17:51:25.453 [CSSDMONITOR(56528)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 56528
2025-08-02 17:51:31.559 [CRSCTL(55654)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/crsctl_55654.trc.
2025-08-02 17:51:31.578 [CRSCTL(54408)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/crsctl_54408.trc.
2025-08-02 17:51:34.551 [CSSDAGENT(56571)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 56571
2025-08-02 17:51:53.503 [CLSECHO(56978)]Can't find message for prod[usm] fac[ACFS] & msg id [9125]
2025-08-02 17:53:16.571 [CRSCTL(58721)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/crsctl_58721.trc.
2025-08-02 17:53:28.290 [CRSCTL(59265)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/crsctl_59265.trc.
2025-08-02 17:57:47.063 [CRSCTL(5189)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/crsctl_5189.trc.
2025-08-02 17:58:19.386 [CRSCTL(5865)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/crsctl_5865.trc.
2025-08-02 17:59:20.530 [CRSCTL(6952)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/crsctl_6952.trc.
2025-08-02 17:59:40.770 [CRSCTL(7255)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/crsctl_7255.trc.
2025-08-02 17:59:41.355 [CRSCTL(7277)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/crsctl_7277.trc.
2025-08-02 17:59:49.181 [CRSCTL(7319)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/crsctl_7319.trc.
2025-08-02 17:59:49.290 [OCRCONFIG(7356)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ocrconfig_7356.trc.
2025-08-02 18:00:42.088 [OCRDUMP(8807)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ocrdump_8807.trc.
2025-08-02 18:00:47.708 [CRSCTL(8835)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/crsctl_8835.trc.
2025-08-02 18:01:26.529 [CLSECHO(9688)]CLSRSC-0567: Beginning Oracle Grid Infrastructure configuration.
2025-08-02 18:01:54.284 [CRSCTL(9935)]CRS-1013: The OCR location in an ASM disk group is inaccessible. Details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/crsctl_9935.trc.
2025-08-02 18:02:24.594 [OHASD(53911)]CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
2025-08-02 18:02:26.448 [MDNSD(54004)]CRS-5602: mDNS service stopping by request.
2025-08-02 18:02:26.471 [GPNPD(54034)]CRS-2329: Grid Plug and Play Daemon(GPNPD) on node rac1 shut down.
2025-08-02 18:02:26.523 [MDNSD(54004)]CRS-8504: Oracle Clusterware MDNSD process with operating system process ID 54004 is exiting
2025-08-02 18:02:27.499 [OHASD(53911)]CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
2025-08-02 18:02:27.511 [ORAROOTAGENT(53974)]CRS-5822: Agent '/oracle_19c_grid/app/oracle/product/19.3/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) {0:1:14} in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ohasd_orarootagent_root.trc.

CSSD trace log file:

$view /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ocssd.trc
2025-08-02 17:41:28.853 : CSSD:1: [ INFO] clssscGetParameterProfile: buffer passed for parameter ASM discovery (3) is too short, required 24, passed 20
2025-08-02 17:41:28.853 : CSSD:1: [ INFO] clssnmReadDiscoveryProfile: voting file discovery string(/dev/dg_*/*,/dev/DG_*/*)
2025-08-02 17:51:22.647 : SKGFD:28: running stat on disk:/dev/dg_night_archival_xp8/dg_night_archival_0001
2025-08-02 17:51:22.685 : CSSD:5: [ INFO] clssscagProcAgReq: shutdown abort requested by the agent
2025-08-02 17:51:22.685 : CSSD:5: [ INFO] clssnmCheckForVfFailure: no voting file found
2025-08-02 17:51:23.688 : CSSD:5: [ INFO] clssnmCheckVFStatus: configured Sites = 0, Incative sites = 0, Mininum Sites required = 1
2025-08-02 17:51:23.688 : CSSD:5: [ INFO] Insufficient voting files available, !Error!
2025-08-02 17:51:23.688 : CSSD:5: [ INFO] Local node , number 1, state is clssnmNodeStateINACTIVE

From these messages it is evident that the node cannot locate its voting disk (VD) and OCR devices, and as a result the CSSD (Cluster Synchronization Services daemon) terminates. To investigate further, the next step is to rule out any network-related issue that might be contributing to the CSSD failure.

#Check Private IP Ping response

From Node1:
ping -i Node1_IP Node2_IP 9000 -n 10
ping -i Node1_IP Node3_IP 9000 -n 10
ping -i Node1_IP Node4_IP 9000 -n 10

From Node2:
ping -i Node2_IP Node1_IP 9000 -n 10
ping -i Node2_IP Node3_IP 9000 -n 10
ping -i Node2_IP Node4_IP 9000 -n 10

From Node3:
ping -i Node3_IP Node1_IP 9000 -n 10
ping -i Node3_IP Node2_IP 9000 -n 10
ping -i Node3_IP Node4_IP 9000 -n 10

From Node4:
ping -i Node4_IP Node1_IP 9000 -n 10
ping -i Node4_IP Node2_IP 9000 -n 10
ping -i Node4_IP Node3_IP 9000 -n 10

Here is the ping output:

From Node1:
root@rac1:/home/oracle#ping -i Node1_IP Node2_IP 9000 -n 10
PING Node2_IP: 9000 byte packets
9000 bytes from Node2_IP: icmp_seq=0. time=1. ms
9000 bytes from Node2_IP: icmp_seq=1. time=0. ms
9000 bytes from Node2_IP: icmp_seq=2. time=0. ms
9000 bytes from Node2_IP: icmp_seq=3. time=0. ms
9000 bytes from Node2_IP: icmp_seq=4. time=0. ms
9000 bytes from Node2_IP: icmp_seq=5. time=0. ms
9000 bytes from Node2_IP: icmp_seq=6. time=0. ms
9000 bytes from Node2_IP: icmp_seq=7. time=0. ms
9000 bytes from Node2_IP: icmp_seq=8. time=0. ms
9000 bytes from Node2_IP: icmp_seq=9. time=0. ms
----Node2_IP PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/1

root@rac1:/home/oracle#ping -i Node1_IP Node3_IP 9000 -n 10
PING Node3_IP: 9000 byte packets
9000 bytes from Node3_IP: icmp_seq=0. time=1. ms
9000 bytes from Node3_IP: icmp_seq=1. time=0. ms
9000 bytes from Node3_IP: icmp_seq=2. time=0. ms
9000 bytes from Node3_IP: icmp_seq=3. time=0. ms
9000 bytes from Node3_IP: icmp_seq=4. time=0. ms
9000 bytes from Node3_IP: icmp_seq=5. time=0. ms
9000 bytes from Node3_IP: icmp_seq=6. time=0. ms
9000 bytes from Node3_IP: icmp_seq=7. time=0. ms
9000 bytes from Node3_IP: icmp_seq=8. time=0. ms
9000 bytes from Node3_IP: icmp_seq=9. time=0. ms
----Node3_IP PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/1

root@rac1:/home/oracle#ping -i Node1_IP Node4_IP 9000 -n 10
PING Node4_IP: 9000 byte packets
9000 bytes from Node4_IP: icmp_seq=0. time=0. ms
9000 bytes from Node4_IP: icmp_seq=1. time=0. ms
9000 bytes from Node4_IP: icmp_seq=2. time=0. ms
9000 bytes from Node4_IP: icmp_seq=3. time=0. ms
9000 bytes from Node4_IP: icmp_seq=4. time=0. ms
9000 bytes from Node4_IP: icmp_seq=5. time=0. ms
9000 bytes from Node4_IP: icmp_seq=6. time=0. ms
9000 bytes from Node4_IP: icmp_seq=7. time=0. ms
9000 bytes from Node4_IP: icmp_seq=8. time=0. ms
9000 bytes from Node4_IP: icmp_seq=9. time=0. ms
----Node4_IP PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/0

From Node2:
oracle@rac2:/home/oracle#ping -i Node2_IP Node1_IP 9000 -n 10
PING Node1_IP: 9000 byte packets
9000 bytes from Node1_IP: icmp_seq=0. time=0. ms
9000 bytes from Node1_IP: icmp_seq=1. time=0. ms
9000 bytes from Node1_IP: icmp_seq=2. time=0. ms
9000 bytes from Node1_IP: icmp_seq=3. time=0. ms
9000 bytes from Node1_IP: icmp_seq=4. time=0. ms
9000 bytes from Node1_IP: icmp_seq=5. time=0. ms
9000 bytes from Node1_IP: icmp_seq=6. time=0. ms
9000 bytes from Node1_IP: icmp_seq=7. time=0. ms
9000 bytes from Node1_IP: icmp_seq=8. time=0. ms
9000 bytes from Node1_IP: icmp_seq=9. time=0. ms
----Node1_IP PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/0

oracle@rac2:/home/oracle#ping -i Node2_IP Node3_IP 9000 -n 10
PING Node3_IP: 9000 byte packets
9000 bytes from Node3_IP: icmp_seq=0. time=0. ms
9000 bytes from Node3_IP: icmp_seq=1. time=0. ms
9000 bytes from Node3_IP: icmp_seq=2. time=0. ms
9000 bytes from Node3_IP: icmp_seq=3. time=0. ms
9000 bytes from Node3_IP: icmp_seq=4. time=0. ms
9000 bytes from Node3_IP: icmp_seq=5. time=0. ms
9000 bytes from Node3_IP: icmp_seq=6. time=0. ms
9000 bytes from Node3_IP: icmp_seq=7. time=0. ms
9000 bytes from Node3_IP: icmp_seq=8. time=0. ms
9000 bytes from Node3_IP: icmp_seq=9. time=0. ms
----Node3_IP PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/0

oracle@rac2:/home/oracle#ping -i Node2_IP Node4_IP 9000 -n 10
PING Node4_IP: 9000 byte packets
9000 bytes from Node4_IP: icmp_seq=0. time=0. ms
9000 bytes from Node4_IP: icmp_seq=1. time=0. ms
9000 bytes from Node4_IP: icmp_seq=2. time=0. ms
9000 bytes from Node4_IP: icmp_seq=3. time=0. ms
9000 bytes from Node4_IP: icmp_seq=4. time=0. ms
9000 bytes from Node4_IP: icmp_seq=5. time=0. ms
9000 bytes from Node4_IP: icmp_seq=6. time=0. ms
9000 bytes from Node4_IP: icmp_seq=7. time=0. ms
9000 bytes from Node4_IP: icmp_seq=8. time=0. ms
9000 bytes from Node4_IP: icmp_seq=9. time=0. ms
----Node4_IP PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/0

From Node3:
oracle@rac3:/home/oracle#ping -i Node3_IP Node1_IP 9000 -n 10
PING Node1_IP: 9000 byte packets
9000 bytes from Node1_IP: icmp_seq=0. time=0. ms
9000 bytes from Node1_IP: icmp_seq=1. time=0. ms
9000 bytes from Node1_IP: icmp_seq=2. time=0. ms
9000 bytes from Node1_IP: icmp_seq=3. time=0. ms
9000 bytes from Node1_IP: icmp_seq=4. time=0. ms
9000 bytes from Node1_IP: icmp_seq=5. time=0. ms
9000 bytes from Node1_IP: icmp_seq=6. time=0. ms
9000 bytes from Node1_IP: icmp_seq=7. time=0. ms
9000 bytes from Node1_IP: icmp_seq=8. time=0. ms
9000 bytes from Node1_IP: icmp_seq=9. time=0. ms
----Node1_IP PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/0

oracle@rac3:/home/oracle#ping -i Node3_IP Node2_IP 9000 -n 10
PING Node2_IP: 9000 byte packets
9000 bytes from Node2_IP: icmp_seq=0. time=1. ms
9000 bytes from Node2_IP: icmp_seq=1. time=0. ms
9000 bytes from Node2_IP: icmp_seq=2. time=0. ms
9000 bytes from Node2_IP: icmp_seq=3. time=0. ms
9000 bytes from Node2_IP: icmp_seq=4. time=0. ms
9000 bytes from Node2_IP: icmp_seq=5. time=0. ms
9000 bytes from Node2_IP: icmp_seq=6. time=0. ms
9000 bytes from Node2_IP: icmp_seq=7. time=0. ms
9000 bytes from Node2_IP: icmp_seq=8. time=0. ms
9000 bytes from Node2_IP: icmp_seq=9. time=0. ms
----Node2_IP PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/1

oracle@rac3:/home/oracle#ping -i Node3_IP Node4_IP 9000 -n 10
PING Node4_IP: 9000 byte packets
9000 bytes from Node4_IP: icmp_seq=0. time=0. ms
9000 bytes from Node4_IP: icmp_seq=1. time=0. ms
9000 bytes from Node4_IP: icmp_seq=2. time=0. ms
9000 bytes from Node4_IP: icmp_seq=3. time=0. ms
9000 bytes from Node4_IP: icmp_seq=4. time=0. ms
9000 bytes from Node4_IP: icmp_seq=5. time=0. ms
9000 bytes from Node4_IP: icmp_seq=6. time=0. ms
9000 bytes from Node4_IP: icmp_seq=7. time=0. ms
9000 bytes from Node4_IP: icmp_seq=8. time=0. ms
9000 bytes from Node4_IP: icmp_seq=9. time=0. ms
----Node4_IP PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/0

From Node4:
root@rac4:/#ping -i Node4_IP Node1_IP 9000 -n 10
PING Node1_IP: 9000 byte packets
9000 bytes from Node1_IP: icmp_seq=0. time=0. ms
9000 bytes from Node1_IP: icmp_seq=1. time=0. ms
9000 bytes from Node1_IP: icmp_seq=2. time=0. ms
9000 bytes from Node1_IP: icmp_seq=3. time=0. ms
9000 bytes from Node1_IP: icmp_seq=4. time=0. ms
9000 bytes from Node1_IP: icmp_seq=5. time=0. ms
9000 bytes from Node1_IP: icmp_seq=6. time=0. ms
9000 bytes from Node1_IP: icmp_seq=7. time=0. ms
9000 bytes from Node1_IP: icmp_seq=8. time=0. ms
9000 bytes from Node1_IP: icmp_seq=9. time=0. ms
----Node1_IP PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/0

root@rac4:/#ping -i Node4_IP Node2_IP 9000 -n 10
PING Node2_IP: 9000 byte packets
9000 bytes from Node2_IP: icmp_seq=0. time=0. ms
9000 bytes from Node2_IP: icmp_seq=1. time=0. ms
9000 bytes from Node2_IP: icmp_seq=2. time=0. ms
9000 bytes from Node2_IP: icmp_seq=3. time=0. ms
9000 bytes from Node2_IP: icmp_seq=4. time=0. ms
9000 bytes from Node2_IP: icmp_seq=5. time=0. ms
9000 bytes from Node2_IP: icmp_seq=6. time=0. ms
9000 bytes from Node2_IP: icmp_seq=7. time=0. ms
9000 bytes from Node2_IP: icmp_seq=8. time=0. ms
9000 bytes from Node2_IP: icmp_seq=9. time=0. ms
----Node2_IP PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/0

root@rac4:/#ping -i Node4_IP Node3_IP 9000 -n 10
PING Node3_IP: 9000 byte packets
9000 bytes from Node3_IP: icmp_seq=0. time=0. ms
9000 bytes from Node3_IP: icmp_seq=1. time=0. ms
9000 bytes from Node3_IP: icmp_seq=2. time=0. ms
9000 bytes from Node3_IP: icmp_seq=3. time=0. ms
9000 bytes from Node3_IP: icmp_seq=4. time=0. ms
9000 bytes from Node3_IP: icmp_seq=5. time=0. ms
9000 bytes from Node3_IP: icmp_seq=6. time=0. ms
9000 bytes from Node3_IP: icmp_seq=7. time=0. ms
9000 bytes from Node3_IP: icmp_seq=8. time=0. ms
9000 bytes from Node3_IP: icmp_seq=9. time=0. ms
----Node3_IP PING Statistics----
10 packets transmitted, 10 packets received, 0% packet loss
round-trip (ms) min/avg/max = 0/0/0

#Check Traceroute response

From Node1:
traceroute -s Node1_IP -r -F Node2_IP
traceroute -s Node1_IP -r -F Node3_IP
traceroute -s Node1_IP -r -F Node4_IP

From Node2:
traceroute -s Node2_IP -r -F Node1_IP
traceroute -s Node2_IP -r -F Node3_IP
traceroute -s Node2_IP -r -F Node4_IP

From Node3:
traceroute -s Node3_IP -r -F Node1_IP
traceroute -s Node3_IP -r -F Node2_IP
traceroute -s Node3_IP -r -F Node4_IP

From Node4:
traceroute -s Node4_IP -r -F Node1_IP
traceroute -s Node4_IP -r -F Node2_IP
traceroute -s Node4_IP -r -F Node3_IP

Here is the traceroute output:

From Node1:
root@rac1:/#traceroute -s Node1_IP -r -F Node2_IP
traceroute to Node2_IP (Node2_IP) from Node1_IP, 30 hops max, 40 byte packets
 1  rac2-priv (Node2_IP)  0.240 ms  0.087 ms  0.106 ms
root@rac1:/#traceroute -s Node1_IP -r -F Node3_IP
traceroute to Node3_IP (Node3_IP) from Node1_IP, 30 hops max, 40 byte packets
 1  rac3-priv (Node3_IP)  0.508 ms  0.159 ms  0.079 ms
root@rac1:/#traceroute -s Node1_IP -r -F Node4_IP
traceroute to Node4_IP (Node4_IP) from Node1_IP, 30 hops max, 40 byte packets
 1  rac4-priv (Node4_IP)  0.199 ms  0.098 ms  0.101 ms

From Node2:
root@rac2:/#traceroute -s Node2_IP -r -F Node1_IP
traceroute to Node1_IP (Node1_IP) from Node2_IP, 30 hops max, 40 byte packets
 1  rac1-priv (Node1_IP)  0.232 ms  0.100 ms  0.095 ms
root@rac2:/#traceroute -s Node2_IP -r -F Node3_IP
traceroute to Node3_IP (Node3_IP) from Node2_IP, 30 hops max, 40 byte packets
 1  rac3-priv (Node3_IP)  0.221 ms  0.119 ms  0.089 ms
root@rac2:/#traceroute -s Node2_IP -r -F Node4_IP
traceroute to Node4_IP (Node4_IP) from Node2_IP, 30 hops max, 40 byte packets
 1  rac4-priv (Node4_IP)  0.305 ms  0.203 ms  0.122 ms

From Node3:
root@rac3:/#traceroute -s Node3_IP -r -F Node1_IP
traceroute to Node1_IP (Node1_IP) from Node3_IP, 30 hops max, 40 byte packets
 1  Node1_IP (Node1_IP)  0.267 ms  0.127 ms  0.100 ms
root@rac3:/#traceroute -s Node3_IP -r -F Node2_IP
traceroute to Node2_IP (Node2_IP) from Node3_IP, 30 hops max, 40 byte packets
 1  Node2_IP (Node2_IP)  0.197 ms  0.148 ms  0.110 ms
root@rac3:/#traceroute -s Node3_IP -r -F Node4_IP
traceroute to Node4_IP (Node4_IP) from Node3_IP, 30 hops max, 40 byte packets
 1  Node4_IP (Node4_IP)  0.228 ms  0.110 ms  0.091 ms

From Node4:
root@rac4:/#traceroute -s Node4_IP -r -F Node1_IP
traceroute to Node1_IP (Node1_IP) from Node4_IP, 30 hops max, 40 byte packets
 1  Node1_IP (Node1_IP)  0.305 ms  0.121 ms  0.081 ms
root@rac4:/#traceroute -s Node4_IP -r -F Node2_IP
traceroute to Node2_IP (Node2_IP) from Node4_IP, 30 hops max, 40 byte packets
 1  Node2_IP (Node2_IP)  0.177 ms  0.138 ms  0.115 ms
root@rac4:/#traceroute -s Node4_IP -r -F Node3_IP
traceroute to Node3_IP (Node3_IP) from Node4_IP, 30 hops max, 40 byte packets
 1  Node3_IP (Node3_IP)  0.242 ms  0.213 ms  0.109 ms

After reviewing the output of these network diagnostics, it was confirmed that there were no network-related issues. The interconnect and public networks were functioning as expected, so a network problem is not the cause of the CSSD termination. What next? What could the issue be?
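Given that CRS-1705 reported zero voting files, one additional check worth capturing at this point (not part of the original session) is whether the devices matching the CSS discovery string /dev/dg_*/*,/dev/DG_*/* are visible and readable by the grid owner after the reboot. A sketch, using the voting-disk device names that appear later in this post:

#As the grid owner (oracle) on rac1: confirm the OCR/voting disk group devices exist
ls -l /dev/dg_OCR_VOTE_XP8/
#Confirm the devices are actually readable (sample read of one voting disk)
dd if=/dev/dg_OCR_VOTE_XP8/ocr_vote_001 of=/dev/null bs=8192 count=10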
Step 6:

Let's try clearing the socket files and starting the cluster.

#Stop CRS by root user
#$CRS_HOME/bin/crsctl stop crs -f

#Remove all the socket files under the directories below.
#cd /usr/tmp/.oracle/
#rm -rf *
#cd /var/tmp/.oracle/
#rm -rf *
#cd /tmp/.oracle/
#rm -rf *

#Start CRS by root user
$CRS_HOME/bin/crsctl start crs -wait

Even after clearing the socket files, CRS failed to start with the same error.

Step 7:

Let's try a server reboot now.

#Stop CRS by root user
#$CRS_HOME/bin/crsctl stop crs -f

#Disable the CRS service as root so that it does not start automatically after the server reboot.
$CRS_HOME/bin/crsctl disable crs

Step 8:

Coordinate with the System Admin team to reboot the Node1 server. After the reboot completed, run the root.sh script again from Node1 and check for any error messages.

root@rac1:/oracle_19c_grid/app/oracle/product/19.3#sh root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /oracle_19c_grid/app/oracle/product/19.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Smartmatch is deprecated at /oracle_19c_grid/app/oracle/product/19.3/crs/install/crsupgrade.pm line 6512.
Using configuration parameter file: /oracle_19c_grid/app/oracle/product/19.3/crs/install/crsconfig_params
The log of current session can be found at:
  /oracle_19c_grid/app/orabase/crsdata/rac1/crsconfig/rootcrs_rac1_2025-08-03_00-10-34AM.log
2025/08/03 00:10:53 CLSRSC-594: Executing installation step 1 of 19: 'ValidateEnv'.
2025/08/03 00:10:54 CLSRSC-594: Executing installation step 2 of 19: 'CheckFirstNode'.
2025/08/03 00:10:58 CLSRSC-594: Executing installation step 3 of 19: 'GenSiteGUIDs'.
2025/08/03 00:10:59 CLSRSC-594: Executing installation step 4 of 19: 'SetupOSD'.
2025/08/03 00:10:59 CLSRSC-594: Executing installation step 5 of 19: 'CheckCRSConfig'.
2025/08/03 00:11:02 CLSRSC-594: Executing installation step 6 of 19: 'SetupLocalGPNP'.
2025/08/03 00:11:05 CLSRSC-594: Executing installation step 7 of 19: 'CreateRootCert'.
2025/08/03 00:11:05 CLSRSC-594: Executing installation step 8 of 19: 'ConfigOLR'.
2025/08/03 00:11:08 CLSRSC-594: Executing installation step 9 of 19: 'ConfigCHMOS'.
2025/08/03 00:11:08 CLSRSC-594: Executing installation step 10 of 19: 'CreateOHASD'.
2025/08/03 00:11:11 CLSRSC-594: Executing installation step 11 of 19: 'ConfigOHASD'.
2025/08/03 00:11:12 CLSRSC-330: Adding Clusterware entries to file '/etc/inittab'
2025/08/03 00:12:08 CLSRSC-594: Executing installation step 12 of 19: 'SetupTFA'.
2025/08/03 00:12:08 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2025/08/03 00:12:08 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2025/08/03 00:12:15 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2025/08/03 00:12:18 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2025/08/03 00:12:22 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2025/08/03 00:13:04 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2025/08/03 00:22:59 CLSRSC-343: Successfully started Oracle Clusterware stack
2025/08/03 00:23:00 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2025/08/03 00:23:27 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2025/08/03 00:23:50 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

The root.sh script completed successfully after the server reboot. Let's check the cluster alert log for any error messages.

Cluster alert log file:

2025-08-03 00:12:24.822 [OHASD(30334)]CRS-8500: Oracle Clusterware OHASD process is starting with operating system process ID 30334
2025-08-03 00:12:24.960 [OHASD(30334)]CRS-0714: Oracle Clusterware Release 19.0.0.0.0.
2025-08-03 00:12:25.382 [OHASD(30334)]CRS-2112: The OLR service started on node rac1.
2025-08-03 00:12:31.290 [OHASD(30334)]CRS-1301: Oracle High Availability Service started on node rac1.
2025-08-03 00:12:31.291 [OHASD(30334)]CRS-8017: location: /var/opt/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
2025-08-03 00:12:38.667 [ORAAGENT(30841)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 30841
2025-08-03 00:12:38.863 [ORAROOTAGENT(30859)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 30859
2025-08-03 00:12:39.837 [CSSDMONITOR(30872)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 30872
2025-08-03 00:12:39.875 [CSSDAGENT(30869)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 30869
2025-08-03 00:12:42.001 [ORAAGENT(30934)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 30934
2025-08-03 00:12:42.657 [MDNSD(30949)]CRS-8500: Oracle Clusterware MDNSD process is starting with operating system process ID 30949
2025-08-03 00:12:42.835 [EVMD(30954)]CRS-8500: Oracle Clusterware EVMD process is starting with operating system process ID 30954
2025-08-03 00:12:43.972 [GPNPD(31016)]CRS-8500: Oracle Clusterware GPNPD process is starting with operating system process ID 31016
2025-08-03 00:12:45.039 [GIPCD(31098)]CRS-8500: Oracle Clusterware GIPCD process is starting with operating system process ID 31098
2025-08-03 00:12:45.300 [GPNPD(31016)]CRS-2328: Grid Plug and Play Daemon(GPNPD) started on node rac1.
2025-08-03 00:12:45.877 [GIPCD(31098)]CRS-7517: The Oracle Grid Interprocess Communication (GIPC) failed to identify the Fast Node Death Detection (FNDD).
2025-08-03 00:12:48.973 [CSSDMONITOR(31211)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 31211
2025-08-03 00:12:49.931 [CSSDAGENT(31230)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 31230
2025-08-03 00:12:55.672 [OCSSD(31288)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 31288
2025-08-03 00:12:57.072 [OCSSD(31288)]CRS-1713: CSSD daemon is started in hub mode
2025-08-03 00:22:26.797 [OCSSD(31288)]CRS-1707: Lease acquisition for node rac1 number 1 completed
2025-08-03 00:22:28.001 [OCSSD(31288)]CRS-1605: CSSD voting file is online: /dev/dg_OCR_VOTE_XP8/ocr_vote_001; details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ocssd.trc.
2025-08-03 00:22:28.018 [OCSSD(31288)]CRS-1605: CSSD voting file is online: /dev/dg_OCR_VOTE_XP8/ocr_vote_002; details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ocssd.trc.
2025-08-03 00:22:28.035 [OCSSD(31288)]CRS-1605: CSSD voting file is online: /dev/dg_OCR_VOTE_XP8/ocr_vote_003; details in /oracle_19c_grid/app/orabase/diag/crs/rac1/crs/trace/ocssd.trc.
2025-08-03 00:22:29.839 [OCSSD(31288)]CRS-1601: CSSD Reconfiguration complete. Active nodes are rac1 rac2 rac3 rac4 .
2025-08-03 00:22:31.490 [OCSSD(31288)]CRS-1720: Cluster Synchronization Services daemon (CSSD) is ready for operation.
2025-08-03 00:22:31.712 [OCTSSD(46237)]CRS-8500: Oracle Clusterware OCTSSD process is starting with operating system process ID 46237
2025-08-03 00:22:32.420 [OCTSSD(46237)]CRS-2403: The Cluster Time Synchronization Service on host rac1 is in observer mode.
2025-08-03 00:22:34.162 [OCTSSD(46237)]CRS-2407: The new Cluster Time Synchronization Service reference node is host rac4.
2025-08-03 00:22:34.163 [OCTSSD(46237)]CRS-2401: The Cluster Time Synchronization Service started on host rac1.
2025-08-03 00:22:44.578 [CRSD(46548)]CRS-8500: Oracle Clusterware CRSD process is starting with operating system process ID 46548
2025-08-03 00:22:49.127 [CRSD(46548)]CRS-1012: The OCR service started on node rac1.
2025-08-03 00:22:49.332 [CRSD(46548)]CRS-1201: CRSD started on node rac1.
2025-08-03 00:22:51.794 [ORAAGENT(46818)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 46818
2025-08-03 00:22:51.995 [ORAROOTAGENT(46855)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 46855
2025-08-03 00:23:00.510 [ORAAGENT(47173)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 47173

✅ The cluster alert log is clean and shows no error messages.
✅ The issue was resolved by rebooting the server and re-executing the root.sh script.
✅ The cluster stack is now up and running as expected.

🎉 Enjoy the troubleshooting journey!!!
📝 Stay tuned for a detailed blog post on this case!!!
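One follow-up worth noting: automatic CRS startup was disabled in Step 7, so it should be re-enabled now that the stack is healthy, and the cluster state can be verified across all nodes. A short sketch (run as root on rac1):

#Re-enable automatic CRS startup (it was disabled in Step 7)
$CRS_HOME/bin/crsctl enable crs

#Verify the stack and resources cluster-wide
$CRS_HOME/bin/crsctl check cluster -all
$CRS_HOME/bin/crsctl stat res -t
$CRS_HOME/bin/crsctl query css votedisk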
Thanks for reading this post ! Please comment if you like this post ! Click FOLLOW to get future blog updates !