Environment Configuration Details:
- Operating System: Oracle Linux 8.10 64-bit
- Oracle and Grid Software version: 26ai
- RAC: YES

✅ Points to be Checked Before Starting Oracle RAC Installation
1) Verify that the correct version of Grid Infrastructure and RDBMS software is downloaded for the intended Operating System.
2) Confirm that the selected Oracle Grid and Database versions are certified on the installed Operating System (check the certification matrix on My Oracle Support).
3) Verify whether the Grid and Database software architecture is 32-bit or 64-bit.
4) Confirm whether the Operating System architecture is 32-bit or 64-bit and ensure it matches the Oracle software architecture.
5) Ensure the Operating System kernel version is compatible with the Oracle software version to be installed.
6) Verify that the server runlevel is configured correctly (runlevel 3 or runlevel 5, as required).
7) Ensure the Oracle Clusterware version is equal to or greater than the Oracle RAC version planned for installation.
8) For installing Oracle Database 26ai RAC, first install Oracle Grid Infrastructure 26ai (Oracle Clusterware and ASM) on all cluster nodes.
9) Confirm that identical timezone environment variable settings (TZ) are configured on all cluster nodes.
10) Ensure identical server hardware configuration on each cluster node to simplify maintenance and ensure consistency.

High Level Steps to Configure Oracle 26ai RAC on Virtual Box Linux:
1) Virtual Box Configuration Step by Step
2) Linux Operating System Installation Step by Step
3) Oracle GRID 26ai Software Installation Step by Step
   - Create Logical Partitions
   - Configure ASM devices by the udev rules method
   - SSH Configuration
   - Download Software - GRID | RDBMS
4) Oracle 26ai Database Software Installation and DB Creation
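Checklist items 3-5 can be partly scripted. A minimal sketch, assuming a plain Linux shell; the helper name check_arch is mine, not an Oracle utility:

```shell
# Sketch for checklist items 3-5: confirm the OS is 64-bit and print the
# running kernel so it can be compared against the certified kernel list.
check_arch() {
  if [ "$1" = "x86_64" ]; then
    echo "arch OK"
  else
    echo "arch MISMATCH: $1"
  fi
}

check_arch "$(uname -m)"   # 64-bit Oracle software requires a 64-bit OS
uname -r                   # compare with the minimum kernels listed in Step 4
```

Run this on every cluster node; the architecture and kernel must match across nodes.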
Note: The requirements above are for my test environment only. For an actual production server, refer to the requirements below; size the DATA and ARCH disk groups according to your production requirements.
Important Tips before installing any Oracle RAC software:
1) Cluster IPs (VIPs and SCAN IPs) must not be in use or respond to ping before installation; only the Public and Private IPs should be up.
2) Stop any third-party antivirus tools on all cluster nodes before installation, and ask the third-party vendor to exclude the private interconnect IPs from scanning to avoid RAC node evictions and resource utilisation issues.
3) Public and private ethernet card names must be the same across all cluster nodes, i.e. if the public ethernet card name on Node1 is eth1 and the private ethernet card name on Node1 is eth2, then the public ethernet card name on Node2 must be eth1 and the private ethernet card name on Node2 must be eth2.
4) Ensure that the /dev/shm mount area is of type tmpfs and is mounted with the options below:
   - rw and exec permissions
   - without noexec or nosuid
5) Oracle strongly recommends disabling THP (Transparent HugePages) and using standard HugePages to avoid memory allocation delays and performance issues.
6) For a 2-node RAC configuration, you need:
   - 2 public IPs (one for each node)
   - 2 private IPs (one for each node)
   - 2 virtual IPs (one for each node)
   - 1 or 3 SCAN IPs (if 1, define it in the /etc/hosts file; if 3, use DNS for round-robin resolution).
7) The public, VIP, and SCAN IPs should be in the same series, and the private IP series should be different from them. The public, VIP, and SCAN series can differ if the same IP range is not available, but they must be on the same subnet as the public IP addresses.
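Tips 1 and 4 can be verified before starting the installer. A sketch, assuming the VIP/SCAN addresses used later in this guide's /etc/hosts; check_ip_free is my helper name:

```shell
# Tip 1: every VIP and SCAN IP must be silent before installation. A ping
# reply means the address is already in use somewhere on the network.
check_ip_free() {
  if ping -c 1 -W 1 "$1" > /dev/null 2>&1; then
    echo "$1 IN USE - fix before install"
  else
    echo "$1 free"
  fi
}

for ip in 10.20.30.103 10.20.30.104 10.20.30.105; do
  check_ip_free "$ip"
done

# Tip 4: /dev/shm must be tmpfs and must not carry noexec or nosuid.
if grep -E '^tmpfs /dev/shm ' /proc/mounts | grep -qEv 'noexec|nosuid'; then
  echo "/dev/shm OK"
else
  echo "/dev/shm needs attention"
fi
```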
Step 1: Downloading Software - VirtualBox - Linux Operating System - Oracle 26ai GRID and RDBMS Software
1) Downloading Virtual Box
   Refer link : Downloading Virtual Box
2) Downloading Linux Operating System
   Refer link : Downloading Linux OS
3) Downloading Oracle 26ai Setups
   Refer link : Oracle 26ai GRID
   Refer link : Oracle 26ai Database

Step 2: Virtual Box Configuration Step by Step
Refer link : Setting up Virtual Box Step by Step

Step 4: Operating System Configuration
Here, the installed operating system is OEL 8.10 64-bit.
Minimum operating system requirements:
1) Oracle Linux 9.2 with the Unbreakable Enterprise Kernel 7: 5.15.0-201.135.6.el9uek.x86_64 or later
   Oracle Linux 9.2 with the Red Hat Compatible Kernel: 5.14.0-284.30.1.el9_2.x86_64 or later
2) Oracle Linux 8.8 with the Unbreakable Enterprise Kernel 7: 5.15.0-202.135.2.el8uek.x86_64 or later
3) Oracle Linux 8.6 with the Unbreakable Enterprise Kernel 6: 5.4.17-2136.312.3.4.el8uek.x86_64 or later
   Oracle Linux 8.6 with the Red Hat Compatible Kernel: 4.18.0-372.26.1.0.1.el8_6.x86_64 or later
4) Red Hat Enterprise Linux 9.2: 5.14.0-284.30.1.el9_2.x86_64 or later
5) Red Hat Enterprise Linux 8.6: 4.18.0-372.26.1.0.1.el8_6.x86_64 or later
6) SUSE Linux Enterprise Server 15 SP5 (x86_64): 5.14.21-150500.53-default or later

Now it's time to make changes in the OS. Perform the changes below after installation:
- Update the /etc/hosts file
- Stop and disable the firewall
- Disable SELinux
- Create the directory structure
- Create users and groups with permissions
- Add limits and kernel parameters in the configuration files
- Install the required RPM packages

Please make the above changes on the server before cloning the machine; otherwise, you will need to apply the same changes again on the cloned server.

1) Updating /etc/hosts (applies to both nodes).
[root@dbnode1 ~]# vi /etc/hosts
#Public IP
10.20.30.101 dbnode1.localdomain dbnode1
10.20.30.102 dbnode2.localdomain dbnode2
#Private IP
10.1.2.201 dbnode1-priv.localdomain dbnode1-priv
10.1.2.202 dbnode2-priv.localdomain dbnode2-priv
#VIP IP
10.20.30.103 dbnode1-vip.localdomain dbnode1-vip
10.20.30.104 dbnode2-vip.localdomain dbnode2-vip
#scan IP
10.20.30.105 dbnode-scan.localdomain dbnode-scan

2) Commands to view/stop/disable the firewall (applies to both nodes).
[root@dbnode1 ~]# systemctl status firewalld
[root@dbnode1 ~]# systemctl stop firewalld
[root@dbnode1 ~]# systemctl disable firewalld
[root@dbnode1 ~]# systemctl status firewalld

3) Disable the SELinux configuration (applies to both nodes).
[root@dbnode1 ~]# vi /etc/selinux/config
Set the value "SELINUX=disabled" (a reboot is required for this to take effect).
[root@dbnode1 ~]# cat /etc/selinux/config
Check the value "SELINUX=disabled".

4) User and group creation with permissions (applies to both nodes).
#Create groups with customized group IDs.
[root@dbnode1 ~]# groupadd -g 2000 oinstall
[root@dbnode1 ~]# groupadd -g 2100 asmadmin
[root@dbnode1 ~]# groupadd -g 2200 dba
[root@dbnode1 ~]# groupadd -g 2300 oper
[root@dbnode1 ~]# groupadd -g 2400 asmdba
[root@dbnode1 ~]# groupadd -g 2500 asmoper
#Create users for both GRID and RDBMS owners and set their passwords.
[root@dbnode1 ~]# useradd grid
[root@dbnode1 ~]# useradd oracle
[root@dbnode1 ~]# passwd grid
[root@dbnode1 ~]# passwd oracle
#Assign primary and secondary groups to the users (all nodes).
[root@dbnode1 ~]# usermod -g oinstall -G asmadmin,dba,oper,asmdba,asmoper grid
[root@dbnode1 ~]# usermod -g oinstall -G asmadmin,dba,oper,asmdba,asmoper oracle
#Verify group ownership for the users.
[root@dbnode1 ~]# id grid
[root@dbnode1 ~]# id oracle

5) Create directory structure (applies to both nodes).
grid:
ORACLE_BASE (GRID_BASE) : /u01/app/grid
ORACLE_HOME (GRID_HOME) : /u01/app/23.0.0/grid
oracle:
ORACLE_BASE : /u01/app/oracle
ORACLE_HOME : /u01/app/oracle/product/23.0.0/dbhome_1

Note: The ORACLE_HOME for the oracle user does not need to be created manually, since it is created automatically under ORACLE_BASE; for the RDBMS user (oracle), ORACLE_HOME always resides under ORACLE_BASE. For the grid user the case is different: ORACLE_HOME (GRID_HOME) always resides outside ORACLE_BASE (GRID_BASE). This is because some cluster files under GRID_HOME are owned by the root user, while the files under GRID_BASE are owned by the grid user, so GRID_BASE and GRID_HOME must be located in separate directories.

#Create the GRID_BASE, GRID_HOME, ORACLE_BASE, and oraInventory directories.
[root@dbnode1 ~]# id
uid=0(root) gid=0(root) groups=0(root)
[root@dbnode1 ~]# mkdir -p /u01/app/grid
[root@dbnode1 ~]# mkdir -p /u01/app/23.0.0/grid
[root@dbnode1 ~]# mkdir -p /u01/app/oraInventory
[root@dbnode1 ~]# mkdir -p /u01/app/oracle
[root@dbnode2 ~]# id
uid=0(root) gid=0(root) groups=0(root)
[root@dbnode2 ~]# mkdir -p /u01/app/grid
[root@dbnode2 ~]# mkdir -p /u01/app/23.0.0/grid
[root@dbnode2 ~]# mkdir -p /u01/app/oraInventory
[root@dbnode2 ~]# mkdir -p /u01/app/oracle

#Grant user and group ownership on the above directories.
[root@dbnode1 ~]# chown -R grid:oinstall /u01/app/grid
[root@dbnode1 ~]# chown -R grid:oinstall /u01/app/23.0.0/grid
[root@dbnode1 ~]# chown -R grid:oinstall /u01/app/oraInventory
[root@dbnode1 ~]# chown -R oracle:oinstall /u01/app/oracle
[root@dbnode2 ~]# chown -R grid:oinstall /u01/app/grid
[root@dbnode2 ~]# chown -R grid:oinstall /u01/app/23.0.0/grid
[root@dbnode2 ~]# chown -R grid:oinstall /u01/app/oraInventory
[root@dbnode2 ~]# chown -R oracle:oinstall /u01/app/oracle

#Grant read, write, and execute permissions on the above directories.
[root@dbnode1 ~]# chmod -R 755 /u01/app/grid
[root@dbnode1 ~]# chmod -R 755 /u01/app/23.0.0/grid
[root@dbnode1 ~]# chmod -R 755 /u01/app/oraInventory
[root@dbnode1 ~]# chmod -R 755 /u01/app/oracle
[root@dbnode2 ~]# chmod -R 755 /u01/app/grid
[root@dbnode2 ~]# chmod -R 755 /u01/app/23.0.0/grid
[root@dbnode2 ~]# chmod -R 755 /u01/app/oraInventory
[root@dbnode2 ~]# chmod -R 755 /u01/app/oracle

#Verify the directory permissions.
[root@dbnode1 ~]# ls -ld /u01/app/grid
drwxr-xr-x 2 grid oinstall 6 Feb 16 19:58 /u01/app/grid
[root@dbnode1 ~]# ls -ld /u01/app/23.0.0/grid
drwxr-xr-x 2 grid oinstall 6 Feb 16 19:58 /u01/app/23.0.0/grid
[root@dbnode1 ~]# ls -ld /u01/app/oraInventory
drwxr-xr-x 2 grid oinstall 6 Feb 16 19:58 /u01/app/oraInventory
[root@dbnode1 ~]# ls -ld /u01/app/oracle
drwxr-xr-x 2 oracle oinstall 6 Feb 16 19:58 /u01/app/oracle
[root@dbnode2 ~]# ls -ld /u01/app/grid
drwxr-xr-x 2 grid oinstall 6 Feb 16 20:00 /u01/app/grid
[root@dbnode2 ~]# ls -ld /u01/app/23.0.0/grid
drwxr-xr-x 2 grid oinstall 6 Feb 16 20:00 /u01/app/23.0.0/grid
[root@dbnode2 ~]# ls -ld /u01/app/oraInventory
drwxr-xr-x 2 grid oinstall 6 Feb 16 20:00 /u01/app/oraInventory
[root@dbnode2 ~]# ls -ld /u01/app/oracle
drwxr-xr-x 2 oracle oinstall 6 Feb 16 20:00 /u01/app/oracle

6) Adding kernel and limits configuration parameters (all nodes).
[root@dbnode1 ~]# cat /etc/sysctl.conf | grep -v "#"
[root@dbnode1 ~]# vi /etc/sysctl.conf
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
[root@dbnode1 ~]# sysctl -p

[root@dbnode1 ~]# cat /etc/security/limits.conf | grep -v "#"
[root@dbnode1 ~]# vi /etc/security/limits.conf
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle hard memlock 134217728
oracle soft memlock 134217728
oracle soft data unlimited
oracle hard data unlimited
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728
grid soft data unlimited
grid hard data unlimited

7) Install required RPM packages.
bc, binutils, compat-openssl10, elfutils-libelf, fontconfig, glibc, glibc-devel, ksh, libaio, libXrender, libX11, libXau, libXi, libXtst, libgcc, libstdc++, libxcb, libibverbs, libasan, liblsan, librdmacm, make, policycoreutils, policycoreutils-python-utils, smartmontools, sysstat

Download required RPM packages for Oracle Linux 8:
1) ksh - part of your Yum repository.
2) sysstat - part of your Yum repository.
3) libnsl-2.28-211.0.1.el8.x86_64.rpm - part of your Yum repository (avoids the installation error "error while loading shared libraries: libnsl.so.1: cannot open shared object file: No such file or directory").
4) cvu - part of the Grid Infrastructure software.

Note: Please note that I am not using ASMLib to configure logical disk partitions.
Instead, I am using udev rules to configure the ASM devices; therefore, I will not be installing the oracleasmlib and oracleasm-support packages.

Step 5 : Clone the VDI machine to create the 2nd RAC node.
Refer : Cloning VDI machine

Step 6 : Make Network Changes.
Refer : Network Changes

Step 7 : Configure shared storage for the RAC database.
Now it's time to create logical partitions on the given storage devices.

Node1:
Execute the commands below to partition the disks. Run the fdisk commands on Node1 only (or on any one node of the cluster). There is no need to execute them on every node of the cluster, because these are shared disks: once the partitions are created from one node, they are visible to all nodes.

fdisk </dev/device_name>
n - Create a new partition.
p - Create a primary partition so that it cannot be further extended.
ENTER - Press ENTER to accept the default partition number.
ENTER - Press ENTER to accept the default first sector.
ENTER - Press ENTER to accept the default last sector.
w - Save the created partition.

[root@dbnode1 ~]# fdisk -l
Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 80 GiB, 85899345920 bytes, 167772160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x23400f55
Device     Boot   Start       End   Sectors Size Id Type
/dev/sda1  *       2048   2099199   2097152   1G 83 Linux
/dev/sda2       2099200 167772159 165672960  79G 8e Linux LVM

Disk /dev/sdb: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdg: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdf: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdh: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdi: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ol-root: 47.9 GiB, 51367641088 bytes, 100327424 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ol-swap: 7.8 GiB, 8376025088 bytes, 16359424 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ol-home: 23.4 GiB, 25077743616 bytes, 48979968 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@dbnode1 ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x11bf7f9a.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-20971519, default 20971519):

Created a new partition 1 of type 'Linux' and of size 10 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@dbnode1 ~]# fdisk /dev/sdd
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x7a564374.
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-20971519, default 20971519):

Created a new partition 1 of type 'Linux' and of size 10 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@dbnode1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xe3a0f016.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-10485759, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-10485759, default 10485759):

Created a new partition 1 of type 'Linux' and of size 5 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@dbnode1 ~]# fdisk /dev/sde
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xa7776818.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-20971519, default 20971519):

Created a new partition 1 of type 'Linux' and of size 10 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@dbnode1 ~]# fdisk /dev/sdg
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xd2b18fec.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-20971519, default 20971519):

Created a new partition 1 of type 'Linux' and of size 10 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@dbnode1 ~]# fdisk /dev/sdf
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xbfa30aa1.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-20971519, default 20971519):

Created a new partition 1 of type 'Linux' and of size 10 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@dbnode1 ~]# fdisk /dev/sdh
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xb6ba3021.
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-10485759, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-10485759, default 10485759):

Created a new partition 1 of type 'Linux' and of size 5 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@dbnode1 ~]# fdisk /dev/sdi
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x64f9204a.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-10485759, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-10485759, default 10485759):

Created a new partition 1 of type 'Linux' and of size 5 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
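The interactive fdisk dialogue above can also be scripted. Below is a dry-run sketch using parted as a non-interactive alternative (my suggestion; the guide itself uses interactive fdisk). It only prints the commands; remove the echo to actually execute them, and do so from one node only, since these are shared disks and the operation is destructive:

```shell
# Dry run: print a non-interactive parted equivalent of the fdisk sessions
# above, one full-disk primary partition per shared disk. Destructive if
# executed, so this sketch only echoes the commands.
gen_part_cmds() {
  for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi; do
    echo "parted -s $dev mklabel msdos mkpart primary 2048s 100%"
  done
}

gen_part_cmds
```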
[root@dbnode1 ~]# fdisk -l
Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x11bf7f9a
Device     Boot Start      End  Sectors Size Id Type
/dev/sdc1        2048 20971519 20969472  10G 83 Linux

Disk /dev/sdd: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x7a564374
Device     Boot Start      End  Sectors Size Id Type
/dev/sdd1        2048 20971519 20969472  10G 83 Linux

Disk /dev/sda: 80 GiB, 85899345920 bytes, 167772160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x23400f55
Device     Boot   Start       End   Sectors Size Id Type
/dev/sda1  *       2048   2099199   2097152   1G 83 Linux
/dev/sda2       2099200 167772159 165672960  79G 8e Linux LVM

Disk /dev/sdb: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe3a0f016
Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1        2048 10485759 10483712   5G 83 Linux

Disk /dev/sde: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa7776818
Device     Boot Start      End  Sectors Size Id Type
/dev/sde1        2048 20971519 20969472  10G 83 Linux

Disk /dev/sdg: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd2b18fec
Device     Boot Start      End  Sectors Size Id Type
/dev/sdg1        2048 20971519 20969472  10G 83 Linux

Disk /dev/sdf: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbfa30aa1
Device     Boot Start      End  Sectors Size Id Type
/dev/sdf1        2048 20971519 20969472  10G 83 Linux

Disk /dev/sdh: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb6ba3021
Device     Boot Start      End  Sectors Size Id Type
/dev/sdh1        2048 10485759 10483712   5G 83 Linux

Disk /dev/sdi: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x64f9204a
Device     Boot Start      End  Sectors Size Id Type
/dev/sdi1        2048 10485759 10483712   5G 83 Linux

Disk /dev/mapper/ol-root: 47.9 GiB, 51367641088 bytes, 100327424 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ol-swap: 7.8 GiB, 8376025088 bytes, 16359424 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ol-home: 23.4 GiB, 25077743616 bytes, 48979968 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

You can see that the original device names are /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde, /dev/sdf, /dev/sdg, /dev/sdh, and /dev/sdi, and the logical partition names are /dev/sdb1, /dev/sdc1, /dev/sdd1, /dev/sde1, /dev/sdf1, /dev/sdg1, /dev/sdh1, and /dev/sdi1.
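The next step maps each of these partitions to an ASM disk name via a udev rule. The eight rule lines differ only in the RESULT (SCSI ID) and SYMLINK (ASM name) fields, so they can be generated from a small name-to-ID map rather than hand-edited. A sketch (gen_udev_rules is my helper name; the IDs are the ones reported by scsi_id on Node1 in the next step):

```shell
# Generate udev rule lines from "ASM-name scsi_id" pairs, one per line on
# stdin. Redirect the output to the rules file instead of typing each rule.
gen_udev_rules() {
  while read -r name id; do
    printf 'KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="%s", SYMLINK+="oracleasm/disks/%s", OWNER="grid", GROUP="oinstall", MODE="0660"\n' "$id" "$name"
  done
}

gen_udev_rules <<'EOF'
OCR1 1ATA_VBOX_HARDDISK_VB7672a4c6-d3ad69f4
OCR2 1ATA_VBOX_HARDDISK_VBd8d4c9cc-06011151
OCR3 1ATA_VBOX_HARDDISK_VBb77e8f6b-8092b190
DATA1 1ATA_VBOX_HARDDISK_VBea72fb48-068b25e0
DATA2 1ATA_VBOX_HARDDISK_VB62dce973-4e15410a
REDO1 1ATA_VBOX_HARDDISK_VBb44e4d80-8fb7b024
REDO2 1ATA_VBOX_HARDDISK_VB01f08ef4-8a1ff6ca
ARCH1 1ATA_VBOX_HARDDISK_VBbfc511bc-7f3422a6
EOF
```

To write the file directly: append `> /etc/udev/rules.d/asm_devices.rules` to the gen_udev_rules call.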
Let's create and configure udev rules for these devices.

Node1:
[root@dbnode1 ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0   80G  0 disk
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   79G  0 part
  ├─ol-root   252:0    0 47.9G  0 lvm  /
  ├─ol-swap   252:1    0  7.8G  0 lvm  [SWAP]
  └─ol-home   252:2    0 23.4G  0 lvm  /home
sdb             8:16   0    5G  0 disk
└─sdb1          8:17   0    5G  0 part
sdc             8:32   0   10G  0 disk
└─sdc1          8:33   0   10G  0 part
sdd             8:48   0   10G  0 disk
└─sdd1          8:49   0   10G  0 part
sde             8:64   0   10G  0 disk
└─sde1          8:65   0   10G  0 part
sdf             8:80   0   10G  0 disk
└─sdf1          8:81   0   10G  0 part
sdg             8:96   0   10G  0 disk
└─sdg1          8:97   0   10G  0 part
sdh             8:112  0    5G  0 disk
└─sdh1          8:113  0    5G  0 part
sdi             8:128  0    5G  0 disk
└─sdi1          8:129  0    5G  0 part

#Execute below commands to load the updated block device partition tables.
[root@dbnode1 ~]# partx -u /dev/sdb1
[root@dbnode1 ~]# partx -u /dev/sdc1
[root@dbnode1 ~]# partx -u /dev/sdd1
[root@dbnode1 ~]# partx -u /dev/sde1
[root@dbnode1 ~]# partx -u /dev/sdf1
[root@dbnode1 ~]# partx -u /dev/sdg1
[root@dbnode1 ~]# partx -u /dev/sdh1
[root@dbnode1 ~]# partx -u /dev/sdi1

#Find SCSI ID
[root@dbnode1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb
1ATA_VBOX_HARDDISK_VBb44e4d80-8fb7b024
[root@dbnode1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdc
1ATA_VBOX_HARDDISK_VB7672a4c6-d3ad69f4
[root@dbnode1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdd
1ATA_VBOX_HARDDISK_VBd8d4c9cc-06011151
[root@dbnode1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sde
1ATA_VBOX_HARDDISK_VBb77e8f6b-8092b190
[root@dbnode1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdf
1ATA_VBOX_HARDDISK_VBea72fb48-068b25e0
[root@dbnode1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdg
1ATA_VBOX_HARDDISK_VB62dce973-4e15410a
[root@dbnode1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdh
1ATA_VBOX_HARDDISK_VB01f08ef4-8a1ff6ca
[root@dbnode1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdi
1ATA_VBOX_HARDDISK_VBbfc511bc-7f3422a6

Disk-to-ASM mapping by SCSI ID:
sdc 8:32  0 10G 0 disk - OCR1  - 1ATA_VBOX_HARDDISK_VB7672a4c6-d3ad69f4
sdd 8:48  0 10G 0 disk - OCR2  - 1ATA_VBOX_HARDDISK_VBd8d4c9cc-06011151
sde 8:64  0 10G 0 disk - OCR3  - 1ATA_VBOX_HARDDISK_VBb77e8f6b-8092b190
sdf 8:80  0 10G 0 disk - DATA1 - 1ATA_VBOX_HARDDISK_VBea72fb48-068b25e0
sdg 8:96  0 10G 0 disk - DATA2 - 1ATA_VBOX_HARDDISK_VB62dce973-4e15410a
sdb 8:16  0  5G 0 disk - REDO1 - 1ATA_VBOX_HARDDISK_VBb44e4d80-8fb7b024
sdh 8:112 0  5G 0 disk - REDO2 - 1ATA_VBOX_HARDDISK_VB01f08ef4-8a1ff6ca
sdi 8:128 0  5G 0 disk - ARCH1 - 1ATA_VBOX_HARDDISK_VBbfc511bc-7f3422a6

#Create the below file and add the below entries.
[root@dbnode1 ~]# vi /etc/udev/rules.d/asm_devices.rules
[root@dbnode1 ~]# cat /etc/udev/rules.d/asm_devices.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB7672a4c6-d3ad69f4", SYMLINK+="oracleasm/disks/OCR1", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBd8d4c9cc-06011151", SYMLINK+="oracleasm/disks/OCR2", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBb77e8f6b-8092b190", SYMLINK+="oracleasm/disks/OCR3", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBea72fb48-068b25e0", SYMLINK+="oracleasm/disks/DATA1", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB62dce973-4e15410a", SYMLINK+="oracleasm/disks/DATA2", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBb44e4d80-8fb7b024", SYMLINK+="oracleasm/disks/REDO1", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB01f08ef4-8a1ff6ca", SYMLINK+="oracleasm/disks/REDO2", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBbfc511bc-7f3422a6", SYMLINK+="oracleasm/disks/ARCH1", OWNER="grid", GROUP="oinstall", MODE="0660"

#The commands below reload the complete udev configuration and trigger all udev rules. On a critical production server this can interrupt ongoing operations and impact business applications, so run them only during a downtime window.
[root@dbnode1 ~]# udevadm control --reload-rules
[root@dbnode1 ~]# ls -ld /dev/sd*1
brw-rw---- 1 root disk 8,   1 Feb 15 23:33 /dev/sda1
brw-rw---- 1 root disk 8,  17 Feb 16 00:32 /dev/sdb1
brw-rw---- 1 root disk 8,  33 Feb 16 00:32 /dev/sdc1
brw-rw---- 1 root disk 8,  49 Feb 16 00:32 /dev/sdd1
brw-rw---- 1 root disk 8,  65 Feb 16 00:32 /dev/sde1
brw-rw---- 1 root disk 8,  81 Feb 16 00:32 /dev/sdf1
brw-rw---- 1 root disk 8,  97 Feb 16 00:32 /dev/sdg1
brw-rw---- 1 root disk 8, 113 Feb 16 00:32 /dev/sdh1
brw-rw---- 1 root disk 8, 129 Feb 16 00:32 /dev/sdi1
[root@dbnode1 ~]# udevadm trigger
[root@dbnode1 ~]# ls -ld /dev/sd*1
brw-rw---- 1 root disk     8,   1 Feb 16 00:43 /dev/sda1
brw-rw---- 1 grid oinstall 8,  17 Feb 16 00:43 /dev/sdb1
brw-rw---- 1 grid oinstall 8,  33 Feb 16 00:43 /dev/sdc1
brw-rw---- 1 grid oinstall 8,  49 Feb 16 00:43 /dev/sdd1
brw-rw---- 1 grid oinstall 8,  65 Feb 16 00:43 /dev/sde1
brw-rw---- 1 grid oinstall 8,  81 Feb 16 00:43 /dev/sdf1
brw-rw---- 1 grid oinstall 8,  97 Feb 16 00:43 /dev/sdg1
brw-rw---- 1 grid oinstall 8, 113 Feb 16 00:43 /dev/sdh1
brw-rw---- 1 grid oinstall 8, 129 Feb 16 00:43 /dev/sdi1

#Execute the below commands to list the oracleasm disks. You can see that the symbolic links are owned by root:root, but the logical partition devices are owned by grid:oinstall.
[root@dbnode1 ~]# ls -ltra /dev/oracleasm/disks/*
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/OCR3 -> ../../sde1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/ARCH1 -> ../../sdi1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/REDO1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/DATA1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/DATA2 -> ../../sdg1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/OCR2 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/OCR1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/REDO2 -> ../../sdh1
[root@dbnode1 ~]# ls -ld /dev/sd*1
brw-rw---- 1 root disk     8,   1 Feb 16 00:43 /dev/sda1
brw-rw---- 1 grid oinstall 8,  17 Feb 16 00:43 /dev/sdb1
brw-rw---- 1 grid oinstall 8,  33 Feb 16 00:43 /dev/sdc1
brw-rw---- 1 grid oinstall 8,  49 Feb 16 00:43 /dev/sdd1
brw-rw---- 1 grid oinstall 8,  65 Feb 16 00:43 /dev/sde1
brw-rw---- 1 grid oinstall 8,  81 Feb 16 00:43 /dev/sdf1
brw-rw---- 1 grid oinstall 8,  97 Feb 16 00:43 /dev/sdg1
brw-rw---- 1 grid oinstall 8, 113 Feb 16 00:43 /dev/sdh1
brw-rw---- 1 grid oinstall 8, 129 Feb 16 00:43 /dev/sdi1

Node2:
[root@dbnode2 ~]# lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda        8:0    0   80G  0 disk
├─sda1     8:1    0    1G  0 part /boot
└─sda2     8:2    0   79G  0 part
  ├─ol-root 252:0  0 47.9G  0 lvm  /
  ├─ol-swap 252:1  0  7.8G  0 lvm  [SWAP]
  └─ol-home 252:2  0 23.4G  0 lvm  /home
sdb        8:16   0   10G  0 disk
sdc        8:32   0    5G  0 disk
sdd        8:48   0   10G  0 disk
sde        8:64   0   10G  0 disk
sdf        8:80   0   10G  0 disk
sdg        8:96   0   10G  0 disk
sdh        8:112  0    5G  0 disk
sdi        8:128  0    5G  0 disk

You don't have to create the partitions again on Node2, because they were already created from Node1 on the shared disks. You only need to run the partprobe command so that the changes are reflected. The partprobe command forces the operating system kernel to re-read the partition table of a specific storage device, updating it without requiring a system reboot.
[root@dbnode2 ~]# partprobe

You can see that all the partitions are visible now.

[root@dbnode2 ~]# lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda        8:0    0   80G  0 disk
├─sda1     8:1    0    1G  0 part /boot
└─sda2     8:2    0   79G  0 part
  ├─ol-root 252:0  0 47.9G  0 lvm  /
  ├─ol-swap 252:1  0  7.8G  0 lvm  [SWAP]
  └─ol-home 252:2  0 23.4G  0 lvm  /home
sdb        8:16   0   10G  0 disk
└─sdb1     8:17   0   10G  0 part
sdc        8:32   0    5G  0 disk
└─sdc1     8:33   0    5G  0 part
sdd        8:48   0   10G  0 disk
└─sdd1     8:49   0   10G  0 part
sde        8:64   0   10G  0 disk
└─sde1     8:65   0   10G  0 part
sdf        8:80   0   10G  0 disk
└─sdf1     8:81   0   10G  0 part
sdg        8:96   0   10G  0 disk
└─sdg1     8:97   0   10G  0 part
sdh        8:112  0    5G  0 disk
└─sdh1     8:113  0    5G  0 part
sdi        8:128  0    5G  0 disk
└─sdi1     8:129  0    5G  0 part

#Execute the below commands to load the updated block device partition tables.
[root@dbnode2 ~]# partx -u /dev/sdb1
[root@dbnode2 ~]# partx -u /dev/sdc1
[root@dbnode2 ~]# partx -u /dev/sdd1
[root@dbnode2 ~]# partx -u /dev/sde1
[root@dbnode2 ~]# partx -u /dev/sdf1
[root@dbnode2 ~]# partx -u /dev/sdg1
[root@dbnode2 ~]# partx -u /dev/sdh1
[root@dbnode2 ~]# partx -u /dev/sdi1

#Find the SCSI IDs.
[root@dbnode2 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb
1ATA_VBOX_HARDDISK_VB7672a4c6-d3ad69f4
[root@dbnode2 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdc
1ATA_VBOX_HARDDISK_VBb44e4d80-8fb7b024
[root@dbnode2 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdd
1ATA_VBOX_HARDDISK_VB62dce973-4e15410a
[root@dbnode2 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sde
1ATA_VBOX_HARDDISK_VBd8d4c9cc-06011151
[root@dbnode2 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdf
1ATA_VBOX_HARDDISK_VBea72fb48-068b25e0
[root@dbnode2 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdg
1ATA_VBOX_HARDDISK_VBb77e8f6b-8092b190
[root@dbnode2 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdh
1ATA_VBOX_HARDDISK_VBbfc511bc-7f3422a6
[root@dbnode2 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdi
1ATA_VBOX_HARDDISK_VB01f08ef4-8a1ff6ca

#Create the below file and add the following entries.
[root@dbnode2 ~]# ls -ltr /etc/udev/rules.d
-rw-r--r--. 1 root root  67 Oct 15  2023 69-vdo-start-by-dev.rules
-rw-r--r--. 1 root root 148 Apr 12  2024 99-vmware-scsi-timeout.rules
-rw-r--r--. 1 root root  99 Feb 15 23:59 60-vboxadd.rules
[root@dbnode2 ~]# vi /etc/udev/rules.d/asm_devices.rules
[root@dbnode2 ~]# cat /etc/udev/rules.d/asm_devices.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB7672a4c6-d3ad69f4", SYMLINK+="oracleasm/disks/OCR1", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBd8d4c9cc-06011151", SYMLINK+="oracleasm/disks/OCR2", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBb77e8f6b-8092b190", SYMLINK+="oracleasm/disks/OCR3", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBea72fb48-068b25e0", SYMLINK+="oracleasm/disks/DATA1", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB62dce973-4e15410a", SYMLINK+="oracleasm/disks/DATA2", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBb44e4d80-8fb7b024", SYMLINK+="oracleasm/disks/REDO1", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB01f08ef4-8a1ff6ca", SYMLINK+="oracleasm/disks/REDO2", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBbfc511bc-7f3422a6", SYMLINK+="oracleasm/disks/ARCH1", OWNER="grid", GROUP="oinstall", MODE="0660"
[root@dbnode2 ~]# ls -ltr /etc/udev/rules.d
-rw-r--r--. 1 root root   67 Oct 15  2023 69-vdo-start-by-dev.rules
-rw-r--r--. 1 root root  148 Apr 12  2024 99-vmware-scsi-timeout.rules
-rw-r--r--. 1 root root   99 Feb 15 23:59 60-vboxadd.rules
-rw-r--r--  1 root root 1757 Feb 16 00:52 asm_devices.rules

#The below commands reload the complete udev configuration and trigger all the udev rules. On a critical production server this can interrupt ongoing operations and impact business applications, so run these commands only during a downtime window.
[root@dbnode2 ~]# udevadm control --reload-rules
[root@dbnode2 ~]# udevadm trigger
[root@dbnode2 ~]# ls -ld /dev/sd*1
brw-rw---- 1 root disk     8,   1 Feb 16 00:53 /dev/sda1
brw-rw---- 1 grid oinstall 8,  17 Feb 16 00:53 /dev/sdb1
brw-rw---- 1 grid oinstall 8,  33 Feb 16 00:53 /dev/sdc1
brw-rw---- 1 grid oinstall 8,  49 Feb 16 00:53 /dev/sdd1
brw-rw---- 1 grid oinstall 8,  65 Feb 16 00:53 /dev/sde1
brw-rw---- 1 grid oinstall 8,  81 Feb 16 00:53 /dev/sdf1
brw-rw---- 1 grid oinstall 8,  97 Feb 16 00:53 /dev/sdg1
brw-rw---- 1 grid oinstall 8, 113 Feb 16 00:53 /dev/sdh1
brw-rw---- 1 grid oinstall 8, 129 Feb 16 00:53 /dev/sdi1

#Execute the below command to list the oracleasm disks. Notice that the symbolic links are owned by root:root, while the underlying partition devices are owned by grid:oinstall.
[root@dbnode2 ~]# ls -ltra /dev/oracleasm/disks/*
lrwxrwxrwx 1 root root 10 Feb 16 00:53 /dev/oracleasm/disks/OCR1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Feb 16 00:53 /dev/oracleasm/disks/REDO2 -> ../../sdi1
lrwxrwxrwx 1 root root 10 Feb 16 00:53 /dev/oracleasm/disks/DATA2 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Feb 16 00:53 /dev/oracleasm/disks/ARCH1 -> ../../sdh1
lrwxrwxrwx 1 root root 10 Feb 16 00:53 /dev/oracleasm/disks/DATA1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Feb 16 00:53 /dev/oracleasm/disks/OCR2 -> ../../sde1
lrwxrwxrwx 1 root root 10 Feb 16 00:53 /dev/oracleasm/disks/REDO1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Feb 16 00:53 /dev/oracleasm/disks/OCR3 -> ../../sdg1
[root@dbnode2 ~]# ls -ld /dev/sd*1
brw-rw---- 1 root disk     8,   1 Feb 16 00:53 /dev/sda1
brw-rw---- 1 grid oinstall 8,  17 Feb 16 00:53 /dev/sdb1
brw-rw---- 1 grid oinstall 8,  33 Feb 16 00:53 /dev/sdc1
brw-rw---- 1 grid oinstall 8,  49 Feb 16 00:53 /dev/sdd1
brw-rw---- 1 grid oinstall 8,  65 Feb 16 00:53 /dev/sde1
brw-rw---- 1 grid oinstall 8,  81 Feb 16 00:53 /dev/sdf1
brw-rw---- 1 grid oinstall 8,  97 Feb 16 00:53 /dev/sdg1
brw-rw---- 1 grid oinstall 8, 113 Feb 16 00:53 /dev/sdh1
brw-rw---- 1 grid oinstall 8, 129 Feb 16 00:53 /dev/sdi1

Step 8 : Configure SSH authentication (also called passwordless configuration or user equivalence).
Let's configure SSH user equivalence for the grid user, which is required for the Grid Infrastructure installation.
1) Delete the "/home/grid/.ssh" directory on both nodes and create it again.
Node1:
[grid@dbnode1 ~]$ id
uid=1001(grid) gid=2000(oinstall) groups=2000(oinstall),2100(asmadmin),2200(dba),2300(oper),2400(asmdba),2500(asmoper)
[grid@dbnode1 ~]$ rm -rf .ssh
[grid@dbnode1 ~]$ mkdir .ssh
[grid@dbnode1 ~]$ chmod 700 .ssh
[grid@dbnode1 ~]$ cd /home/grid/.ssh
[grid@dbnode1 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/grid/.ssh/id_rsa. Your public key has been saved in /home/grid/.ssh/id_rsa.pub. The key fingerprint is: SHA256:Q//cHGoAO6D6u0oU6nQgP3Y+hiESuOWAcM4o8wW0Aa8 grid@dbnode1.localdomain The key's randomart image is: +---[RSA 3072]----+ |oo= | |+* + | |X.B . . o | |o%.o . o + | |E.O.o S o . | |++.B o + + . | | .+ + = o | | . o . . | | ..+o | +----[SHA256]-----+ [grid@dbnode1 .ssh]$ ssh-keygen -t dsa Generating public/private dsa key pair. Enter file in which to save the key (/home/grid/.ssh/id_dsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/grid/.ssh/id_dsa. Your public key has been saved in /home/grid/.ssh/id_dsa.pub. The key fingerprint is: SHA256:w5jXOknpZ0ft1DQ6QbNtAeJ7W1slEaXKkxLlns5tGu0 grid@dbnode1.localdomain The key's randomart image is: +---[DSA 1024]----+ | ..+++.| | .oo +o.| | ...oo+o| | + o +.=++o| | o S o.O+o.o| | + + =.*+ o| | = o =.=. | | + . = | | . E | +----[SHA256]-----+ [grid@dbnode1 .ssh]$ ls -ltr -rw------- 1 grid oinstall 2610 Feb 16 00:55 id_rsa -rw-r--r-- 1 grid oinstall 578 Feb 16 00:55 id_rsa.pub -rw-r--r-- 1 grid oinstall 614 Feb 16 00:56 id_dsa.pub -rw------- 1 grid oinstall 1393 Feb 16 00:56 id_dsa [grid@dbnode1 .ssh]$ cat *.pub >> authorized_keys.dbnode1 [grid@dbnode1 .ssh]$ ls -ltr -rw------- 1 grid oinstall 2610 Feb 16 00:55 id_rsa -rw-r--r-- 1 grid oinstall 578 Feb 16 00:55 id_rsa.pub -rw-r--r-- 1 grid oinstall 614 Feb 16 00:56 id_dsa.pub -rw------- 1 grid oinstall 1393 Feb 16 00:56 id_dsa -rw-r--r-- 1 grid oinstall 1192 Feb 16 00:57 authorized_keys.dbnode1 [grid@dbnode1 .ssh]$ scp authorized_keys.dbnode1 grid@dbnode2:/home/grid/.ssh/ The authenticity of host 'dbnode2 (10.20.30.102)' can't be established. ECDSA key fingerprint is SHA256:B44IDpeHxOemZ4B2OCUeQD+LaNr+AVSyhrbzScTZApM. 
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added 'dbnode2,10.20.30.102' (ECDSA) to the list of known hosts. grid@dbnode2's password: authorized_keys.dbnode1 100% 1192 226.7KB/s 00:00 [grid@dbnode1 .ssh]$ ls -ltr -rw------- 1 grid oinstall 2610 Feb 16 00:55 id_rsa -rw-r--r-- 1 grid oinstall 578 Feb 16 00:55 id_rsa.pub -rw-r--r-- 1 grid oinstall 614 Feb 16 00:56 id_dsa.pub -rw------- 1 grid oinstall 1393 Feb 16 00:56 id_dsa -rw-r--r-- 1 grid oinstall 1192 Feb 16 00:57 authorized_keys.dbnode1 -rw-r--r-- 1 grid oinstall 182 Feb 16 00:58 known_hosts -rw-r--r-- 1 grid oinstall 1192 Feb 16 00:59 authorized_keys.dbnode2 [grid@dbnode1 .ssh]$ cat *.dbnode* >> authorized_keys [grid@dbnode1 .ssh]$ chmod 600 authorized_keys [grid@dbnode1 .ssh]$ ls -ltr -rw------- 1 grid oinstall 2610 Feb 16 00:55 id_rsa -rw-r--r-- 1 grid oinstall 578 Feb 16 00:55 id_rsa.pub -rw-r--r-- 1 grid oinstall 614 Feb 16 00:56 id_dsa.pub -rw------- 1 grid oinstall 1393 Feb 16 00:56 id_dsa -rw-r--r-- 1 grid oinstall 1192 Feb 16 00:57 authorized_keys.dbnode1 -rw-r--r-- 1 grid oinstall 182 Feb 16 00:58 known_hosts -rw-r--r-- 1 grid oinstall 1192 Feb 16 00:59 authorized_keys.dbnode2 -rw------- 1 grid oinstall 2384 Feb 16 00:59 authorized_keys Node2: [grid@dbnode2 ~]$ id uid=1001(grid) gid=2000(oinstall) groups=2000(oinstall),2100(asmadmin),2200(dba),2300(oper),2400(asmdba),2500(asmoper) [grid@dbnode2 ~]$ rm -rf .ssh [grid@dbnode2 ~]$ mkdir .ssh [grid@dbnode2 ~]$ chmod 700 .ssh [grid@dbnode2 ~]$ cd /home/grid/.ssh [grid@dbnode2 .ssh]$ ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/grid/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/grid/.ssh/id_rsa. Your public key has been saved in /home/grid/.ssh/id_rsa.pub. 
The key fingerprint is: SHA256:FnPoXFrd85eKUoqyqv0pEkEhmdQUHG80tgp+JGRnBuw grid@dbnode2.localdomain The key's randomart image is: +---[RSA 3072]----+ |+O*Bo+ | |*o+o+ o . . . | |oo . + + + . o | |.E+ o o B o.| | ..o S . .o| | .. .. o . . .| | . . . o . . | | ... + . | | .oo++ | +----[SHA256]-----+ [grid@dbnode2 .ssh]$ ssh-keygen -t dsa Generating public/private dsa key pair. Enter file in which to save the key (/home/grid/.ssh/id_dsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/grid/.ssh/id_dsa. Your public key has been saved in /home/grid/.ssh/id_dsa.pub. The key fingerprint is: SHA256:9LHgDhOuUT6hOkSutVSXurDuK14eu0hChkbFQdfYUKc grid@dbnode2.localdomain The key's randomart image is: +---[DSA 1024]----+ | o+.o*. . | | .... +o | | .. . *Eo . | |oo . B = o o | |.+* + * S o | |+= = + = | |o.=oo . | |+.+.o | |.=+=. | +----[SHA256]-----+ [grid@dbnode2 .ssh]$ ls -ltr -rw-r--r-- 1 grid oinstall 578 Feb 16 00:56 id_rsa.pub -rw------- 1 grid oinstall 2610 Feb 16 00:56 id_rsa -rw-r--r-- 1 grid oinstall 614 Feb 16 00:56 id_dsa.pub -rw------- 1 grid oinstall 1393 Feb 16 00:56 id_dsa [grid@dbnode2 .ssh]$ cat *.pub >> authorized_keys.dbnode2 [grid@dbnode2 .ssh]$ ls -ltr -rw-r--r-- 1 grid oinstall 578 Feb 16 00:56 id_rsa.pub -rw------- 1 grid oinstall 2610 Feb 16 00:56 id_rsa -rw-r--r-- 1 grid oinstall 614 Feb 16 00:56 id_dsa.pub -rw------- 1 grid oinstall 1393 Feb 16 00:56 id_dsa -rw-r--r-- 1 grid oinstall 1192 Feb 16 00:58 authorized_keys.dbnode1 -rw-r--r-- 1 grid oinstall 1192 Feb 16 00:58 authorized_keys.dbnode2 [grid@dbnode2 .ssh]$ scp authorized_keys.dbnode2 grid@dbnode1:/home/grid/.ssh/ The authenticity of host 'dbnode1 (10.20.30.101)' can't be established. ECDSA key fingerprint is SHA256:B44IDpeHxOemZ4B2OCUeQD+LaNr+AVSyhrbzScTZApM. Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added 'dbnode1,10.20.30.101' (ECDSA) to the list of known hosts. 
grid@dbnode1's password:
authorized_keys.dbnode2 100% 1192 214.9KB/s 00:00
[grid@dbnode2 .ssh]$ ls -ltr
-rw-r--r-- 1 grid oinstall  578 Feb 16 00:56 id_rsa.pub
-rw------- 1 grid oinstall 2610 Feb 16 00:56 id_rsa
-rw-r--r-- 1 grid oinstall  614 Feb 16 00:56 id_dsa.pub
-rw------- 1 grid oinstall 1393 Feb 16 00:56 id_dsa
-rw-r--r-- 1 grid oinstall 1192 Feb 16 00:58 authorized_keys.dbnode1
-rw-r--r-- 1 grid oinstall 1192 Feb 16 00:58 authorized_keys.dbnode2
-rw-r--r-- 1 grid oinstall  182 Feb 16 00:58 known_hosts
[grid@dbnode2 .ssh]$ cat *.dbnode* >> authorized_keys
[grid@dbnode2 .ssh]$ chmod 600 authorized_keys
[grid@dbnode2 .ssh]$ ls -ltr
-rw-r--r-- 1 grid oinstall  578 Feb 16 00:56 id_rsa.pub
-rw------- 1 grid oinstall 2610 Feb 16 00:56 id_rsa
-rw-r--r-- 1 grid oinstall  614 Feb 16 00:56 id_dsa.pub
-rw------- 1 grid oinstall 1393 Feb 16 00:56 id_dsa
-rw-r--r-- 1 grid oinstall 1192 Feb 16 00:58 authorized_keys.dbnode1
-rw-r--r-- 1 grid oinstall 1192 Feb 16 00:58 authorized_keys.dbnode2
-rw-r--r-- 1 grid oinstall  182 Feb 16 00:58 known_hosts
-rw------- 1 grid oinstall 2384 Feb 16 01:00 authorized_keys

Now we can test the SSH authentication with the commands below. Ensure that self-authentication also works, i.e. all four directions succeed (Node1-Node1, Node1-Node2, Node2-Node2, Node2-Node1); without this, the installation cannot proceed.
[grid@dbnode1 .ssh]$ ssh dbnode1
The authenticity of host 'dbnode1 (10.20.30.101)' can't be established.
ECDSA key fingerprint is SHA256:B44IDpeHxOemZ4B2OCUeQD+LaNr+AVSyhrbzScTZApM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'dbnode1,10.20.30.101' (ECDSA) to the list of known hosts.
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Mon Feb 16 00:55:08 2026
[grid@dbnode1 ~]$ ssh dbnode2
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Mon Feb 16 00:56:14 2026
[grid@dbnode2 ~]$ ssh dbnode2
The authenticity of host 'dbnode2 (10.20.30.102)' can't be established.
ECDSA key fingerprint is SHA256:B44IDpeHxOemZ4B2OCUeQD+LaNr+AVSyhrbzScTZApM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'dbnode2,10.20.30.102' (ECDSA) to the list of known hosts.
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Mon Feb 16 01:00:44 2026 from 10.20.30.101
[grid@dbnode2 ~]$ ssh dbnode1
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Mon Feb 16 01:00:33 2026 from 10.20.30.101

How to attach a shared drive from the local desktop to VirtualBox:
All setup files are available at D:\RUPESH\Setups. The same location has been shared as a network drive for your reference and access.
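The four-way equivalence test above (Node1-Node1, Node1-Node2, Node2-Node2, Node2-Node1) can be driven by a small loop instead of typing each ssh by hand. A minimal sketch, assuming the dbnode1/dbnode2 node names used in this guide; it only prints the checks to run from each node:

```shell
# Print the four ssh equivalence checks for a two-node cluster.
# dbnode1/dbnode2 are the node names used in this setup. With
# -o BatchMode=yes, ssh exits non-zero instead of prompting for a
# password, so a not-yet-working direction is easy to spot.
for src in dbnode1 dbnode2; do
  for dst in dbnode1 dbnode2; do
    echo "run from $src: ssh -o BatchMode=yes $dst date"
  done
done
```

Running each printed command as the grid user from the indicated node should return the remote date with no password prompt before you proceed with the installation.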
[root@dbnode1 ~]# id
uid=0(root) gid=0(root) groups=0(root)
[root@dbnode1 ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  3.7G     0  3.7G   0% /dev
tmpfs               tmpfs     3.7G     0  3.7G   0% /dev/shm
tmpfs               tmpfs     3.7G  9.4M  3.7G   1% /run
tmpfs               tmpfs     3.7G     0  3.7G   0% /sys/fs/cgroup
/dev/mapper/ol-root xfs        48G  8.1G   40G  17% /
/dev/mapper/ol-home xfs        24G  226M   24G   1% /home
/dev/sda1           xfs      1014M  364M  651M  36% /boot
Setups              vboxsf    363G  142G  222G  40% /setup
tmpfs               tmpfs     756M   20K  756M   1% /run/user/1002
[root@dbnode1 ~]# cd /setup/
[root@dbnode1 setup]# ls -ltr
-rwxrwx--- 1 root vboxsf    5253120 Sep 19  2021 oswbb840.tar
-rwxrwx--- 1 root vboxsf     334707 Nov 10  2021 preupgrade_19_cbuild_12_lf.zip
-rwxrwx--- 1 root vboxsf  101177744 Apr 26  2023 V1020893-01.iso
-rwxrwx--- 1 root vboxsf    3579880 Jan 26  2024 winrar-x64-624.exe
-rwxrwx--- 1 root vboxsf  221852512 Jan 26  2024 jdk-8u202-windows-x64.exe
-rwxrwx--- 1 root vboxsf   11332480 Jun 27  2025 ChromeSetup.exe
-rwxrwx--- 1 root vboxsf    6879312 Aug 14  2025 npp.8.8.5.Installer.x64.exe
drwxrwx--- 1 root vboxsf          0 Sep 11 13:16 WINDOWS.X64_239000_free
-rwxrwx--- 1 root vboxsf  176488552 Dec 13 02:04 VirtualBox-7.2.4-170995-Win.exe
drwxrwx--- 1 root vboxsf          0 Dec 13 03:04 OEL8.10
drwxrwx--- 1 root vboxsf          0 Feb 16 19:34 Oracle_26ai_RAC
[root@dbnode1 setup]# cd Oracle_26ai_RAC/
[root@dbnode1 Oracle_26ai_RAC]# ls -ltr
-rwxrwx--- 1 root vboxsf 1089544451 Feb 11 19:28 LINUX.X64_2326100_grid_home.zip
-rwxrwx--- 1 root vboxsf 2406058543 Feb 11 19:30 LINUX.X64_2326100_db_home.zip

GRID Installation and configuration
1) Copy the Grid Infrastructure software to the target server into the GRID_HOME location, since the software ships as a Gold Image copy. Log in as the grid user and unzip the GRID setup files; you will then have the complete HOME binaries in GRID_HOME.
2) Start the GRID installation.
#Create a directory on the Linux server to copy the software into.
[root@dbnode1 ~]# id uid=0(root) gid=0(root) groups=0(root) [root@dbnode1 ~]# df -Th Filesystem Type Size Used Avail Use% Mounted on devtmpfs devtmpfs 3.7G 0 3.7G 0% /dev tmpfs tmpfs 3.7G 0 3.7G 0% /dev/shm tmpfs tmpfs 3.7G 9.3M 3.7G 1% /run tmpfs tmpfs 3.7G 0 3.7G 0% /sys/fs/cgroup /dev/mapper/ol-root xfs 48G 8.1G 40G 17% / /dev/mapper/ol-home xfs 24G 226M 24G 1% /home /dev/sda1 xfs 1014M 364M 651M 36% /boot Setups vboxsf 363G 143G 221G 40% /setup tmpfs tmpfs 756M 20K 756M 1% /run/user/1002 [root@dbnode1 ~]# cd /setup/Oracle_26ai_RAC/ [root@dbnode1 Oracle_26ai_RAC]# ls -ltr -rwxrwx--- 1 root vboxsf 1089544451 Feb 11 19:28 LINUX.X64_2326100_grid_home.zip -rwxrwx--- 1 root vboxsf 2406058543 Feb 11 19:30 LINUX.X64_2326100_db_home.zip [root@dbnode1 Oracle_26ai_RAC]# cp LINUX.X64_2326100_grid_home.zip /u01/app/23.0.0/grid/ [root@dbnode1 Oracle_26ai_RAC]# cd /u01/app/23.0.0/grid/ [root@dbnode1 grid]# ll -rwxr-x--- 1 root root 1089544451 Feb 16 20:05 LINUX.X64_2326100_grid_home.zip [root@dbnode1 grid]# chmod 777 LINUX.X64_2326100_grid_home.zip [root@dbnode1 grid]# su - grid [grid@dbnode1 ~]$ cd /u01/app/23.0.0/grid/ [grid@dbnode1 grid]$ ll -rwxrwxrwx 1 root root 1089544451 Feb 16 20:05 LINUX.X64_2326100_grid_home.zip #Now login by grid user and extract the GRID Infrastructure software, GI patch, and OPatch. [grid@dbnode1 grid]$ pwd /u01/app/23.0.0/grid [grid@dbnode1 grid]$ unzip LINUX.X64_2326100_grid_home.zip Archive: LINUX.X64_2326100_grid_home.zip inflating: META-INF/MANIFEST.MF inflating: META-INF/ORACLE_C.SF inflating: META-INF/ORACLE_C.RSA creating: OPatch/ inflating: OPatch/README.txt creating: OPatch/auto/ .......... 
rdbms/mesg/ocis.msb -> oras.msb rdbms/mesg/ocisf.msb -> orasf.msb rdbms/mesg/ocisk.msb -> orask.msb rdbms/mesg/ocith.msb -> orath.msb rdbms/mesg/ocitr.msb -> oratr.msb rdbms/mesg/ocius.msb -> oraus.msb rdbms/mesg/ocius.msg -> ./oraus.msg rdbms/mesg/ocizhs.msb -> orazhs.msb rdbms/mesg/ocizht.msb -> orazht.msb [grid@dbnode1 grid]$ #Install cvuqdisk RPM on both the nodes. This RPM is part of your GI software located under below directory. [root@dbnode1 rpm]# id uid=0(root) gid=0(root) groups=0(root) [root@dbnode1 ~]# cd /u01/app/23.0.0/grid/cv/rpm/ [root@dbnode1 rpm]# ll -rw-r--r-- 1 grid oinstall 24520 Jan 9 23:29 cvuqdisk-1.0.10-1.rpm [root@dbnode1 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm warning: cvuqdisk-1.0.10-1.rpm: Header V3 RSA/SHA256 Signature, key ID ad986da3: NOKEY Verifying... ################################# [100%] Preparing... ################################# [100%] Using default group oinstall to install package Updating / installing... 1:cvuqdisk-1.0.10-1 ################################# [100%] [root@dbnode1 rpm]# scp cvuqdisk-1.0.10-1.rpm dbnode2:/tmp The authenticity of host 'dbnode2 (10.20.30.102)' can't be established. ECDSA key fingerprint is SHA256:B44IDpeHxOemZ4B2OCUeQD+LaNr+AVSyhrbzScTZApM. Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added 'dbnode2,10.20.30.102' (ECDSA) to the list of known hosts. root@dbnode2's password: cvuqdisk-1.0.10-1.rpm 100% 24KB 6.4MB/s 00:00 [root@dbnode1 rpm]# ssh dbnode2 root@dbnode2's password: Activate the web console with: systemctl enable --now cockpit.socket Last login: Mon Feb 16 19:57:26 2026 [root@dbnode2 tmp]# id uid=0(root) gid=0(root) groups=0(root) [root@dbnode2 ~]# cd /tmp [root@dbnode2 tmp]# ll cvuqdisk-1.0.10-1.rpm -rw-r--r-- 1 root root 24520 Feb 16 20:17 cvuqdisk-1.0.10-1.rpm [root@dbnode2 tmp]# rpm -ivh cvuqdisk-1.0.10-1.rpm warning: cvuqdisk-1.0.10-1.rpm: Header V3 RSA/SHA256 Signature, key ID ad986da3: NOKEY Verifying... 
################################# [100%]
Preparing...                          ################################# [100%]
Using default group oinstall to install package
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]

Step 9 : Oracle 26ai GRID Installation Step by Step
All our settings are now configured and we are ready to begin the installation. Log in as the grid user and execute the below command to start the 26ai Grid Infrastructure installation. It is strongly recommended to start the Grid Infrastructure or RDBMS software installation with the -applyRU option, which applies the latest Release Update first and then starts the GUI console. Here, I am not going to apply any patch during installation since this is my testing environment.

[grid@dbnode1 ~]$ id
uid=1001(grid) gid=2000(oinstall) groups=2000(oinstall),2100(asmadmin),2200(dba),2300(oper),2400(asmdba),2500(asmoper)
[grid@dbnode1 ~]$ cd /u01/app/23.0.0/grid/
[grid@dbnode1 grid]$ ll gridSetup.sh
-rwxr-x--- 1 grid oinstall 3505 Jan 19  2022 gridSetup.sh

Add the SCAN name that you added in the "/etc/hosts" file, and the port. By default, you will find the public and virtual node details of the first node on the screen. You have to manually add the public and virtual hostnames for the 2nd node as well. Please note that SSH authentication (passwordless configuration / user equivalence) can also be configured here by clicking the "SSH connectivity" option. Here, I am not going to configure SSH since I have already configured it manually.

Specify Network Interface Usage: For each interface, in the Interface Name column, identify the interface using one of the following options:
Create ASM Disk Group:
Provide the name of the initial disk group you want to configure in the Disk Group Name field. The Add Disks table displays disks that are configured as candidate disks. Select the number of candidate or provisioned disks (or partitions on a file system) required for the level of redundancy that you want for your first disk group.

For standard disk groups:
- High redundancy requires a minimum of three disks.
- Normal redundancy requires a minimum of two disks.
- External redundancy requires a minimum of one disk.
- Flex redundancy requires a minimum of three disks.

Oracle Cluster Registry and voting files for Oracle Grid Infrastructure for a cluster are configured on Oracle ASM, hence the minimum number of disks required for the disk group is higher:
- High redundancy requires a minimum of five disks.
- Normal redundancy requires a minimum of three disks.
- External redundancy requires a minimum of one disk.
- Flex redundancy requires a minimum of three disks.

If you are configuring an Oracle Extended Cluster installation, then you can also choose an additional Extended redundancy option. The number of supported sites for extended redundancy is three. For extended redundancy with three sites (for example, two data sites and one quorum failure group), the minimum number of disks is seven. For an Oracle Extended Cluster, you also need to select the site for each failure group.

Voting files require a higher minimum number of disks to provide the separate physical devices needed for quorum failure groups, so that a quorum of voting files remains available even if one failure group becomes unavailable. You must place voting files on Oracle ASM, therefore ensure that you have enough disks available for the redundancy level you require. If you selected redundancy as Flex, Normal, or High, then you can click Specify Failure Groups and provide details of the failure groups to use for the Oracle ASM disks. Select the quorum failure group for voting files.
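The minimum-disk rules above can be condensed into a small lookup for the case that matters in this guide: a disk group that also stores OCR and voting files. A sketch only; the function name is illustrative:

```shell
# Minimum number of disks for an ASM disk group that stores OCR and
# voting files, per redundancy level (values from the list above).
min_disks() {
  case "$1" in
    EXTERNAL) echo 1 ;;
    NORMAL)   echo 3 ;;
    FLEX)     echo 3 ;;
    HIGH)     echo 5 ;;
    *) echo "unknown redundancy: $1" >&2; return 1 ;;
  esac
}

min_disks NORMAL   # prints 3 - matching the three OCR disks used in this guide
```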
If you do not see candidate disks displayed, then click Change Discovery Path and enter a path where Oracle Universal Installer (OUI) can find candidate disks. Ensure that you specify the Oracle ASM discovery path for Oracle ASM disks. Select Configure Oracle ASM Filter Driver to use Oracle Automatic Storage Management Filter Driver (Oracle ASMFD) for configuring and managing your Oracle ASM disk devices. Oracle ASMFD simplifies the configuration and management of disk devices by eliminating the need to rebind the disk devices used with Oracle ASM each time the system is restarted.

Here, the ASM disks are configured by udev rules, not by oracleasm, which is why the grid:oinstall permissions are assigned directly to the /dev/sd*1 devices. You don't have to change the discovery path. Also, I am using NORMAL redundancy and have therefore configured 3 disks.

[root@dbnode1 ~]# ls -ltra /dev/oracleasm/disks/*
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/OCR3 -> ../../sde1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/ARCH1 -> ../../sdi1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/REDO1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/DATA1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/DATA2 -> ../../sdg1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/OCR2 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/OCR1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/REDO2 -> ../../sdh1
[root@dbnode1 ~]# ls -ld /dev/sd*1
brw-rw---- 1 root disk     8,   1 Feb 16 00:43 /dev/sda1
brw-rw---- 1 grid oinstall 8,  17 Feb 16 00:43 /dev/sdb1
brw-rw---- 1 grid oinstall 8,  33 Feb 16 00:43 /dev/sdc1
brw-rw---- 1 grid oinstall 8,  49 Feb 16 00:43 /dev/sdd1
brw-rw---- 1 grid oinstall 8,  65 Feb 16 00:43 /dev/sde1
brw-rw---- 1 grid oinstall 8,  81 Feb 16 00:43 /dev/sdf1
brw-rw---- 1 grid oinstall 8,  97 Feb 16 00:43 /dev/sdg1
brw-rw---- 1 grid oinstall 8, 113 Feb 16 00:43 /dev/sdh1
brw-rw---- 1 grid oinstall 8, 129 Feb 16 00:43 /dev/sdi1

A question remains: if NORMAL redundancy requires a minimum of 2 disks, why do we configure 3? This is because voting disks should exist in odd numbers, so that a majority can always be formed and node eviction issues are avoided.

During installation, you are prompted to specify an Oracle base location, which is owned by the user performing the installation. The Oracle base directory is where log files specific to the user are placed. You can choose a directory location that does not already have the structure of an Oracle base directory. The Oracle base directory for the Oracle Grid Infrastructure installation is the location where diagnostic and administrative logs, and other logs associated with Oracle ASM and Oracle Clusterware, are stored. For Oracle installations other than Oracle Grid Infrastructure for a cluster, it is also the location under which an Oracle home is placed. Oracle recommends that you install Oracle Grid Infrastructure binaries on local homes, rather than using a shared home on shared storage.

Root Script Execution Configuration:
1) If you want to run the scripts manually as root on each cluster member node, then click Next to proceed to the next screen.
2) If you want to delegate the privilege to run the scripts with administration privileges, then select Automatically run configuration scripts, and select one of the following delegation options:
- Use the root password: Provide the password to the installer along with the other configuration information. The root user password must be identical on each cluster member node.
- Use Sudo: Sudo is a UNIX and Linux utility that allows members of the sudoers list the privilege to run individual commands as root.
Provide the user name and password of an operating system user that is a member of sudoers and is authorized to run sudo on each cluster member node.

The following prerequisite checks failed during verification:

1) Physical Memory - This is a prerequisite condition to test whether the system has at least 8GB (8388608.0KB) of total physical memory.
Check failed on nodes: [dbnode2, dbnode1]
Verification result of failed node: dbnode2
Expected value : 8GB (8388608.0KB)
Actual value : 7.3808GB (7739324.0KB)
Details:
- PRVF-7530 : Sufficient physical memory is not available on node "dbnode2" [Required physical memory = 8GB (8388608.0KB)]
- Cause: Amount of physical memory (RAM) found does not meet minimum memory requirements.
- Action: Add physical memory (RAM) to the node specified.
Verification result of failed node: dbnode1
Expected value : 8GB (8388608.0KB)
Actual value : 7.3808GB (7739324.0KB)
Details:
- PRVF-7530 : Sufficient physical memory is not available on node "dbnode1" [Required physical memory = 8GB (8388608.0KB)]
- Cause: Amount of physical memory (RAM) found does not meet minimum memory requirements.
- Action: Add physical memory (RAM) to the node specified.

2) OS Kernel Parameter: kernel.panic - This is a prerequisite condition to test whether the OS kernel parameter "kernel.panic" is properly set.
Check failed on nodes: [dbnode2, dbnode1]
Verification result of failed node: dbnode2
Expected value : at least 1
Actual value : Current=0; Configured=undefined
Details:
- PRVG-1205 : OS kernel parameter "kernel.panic" does not have expected current value on node "dbnode2" [Expected = "at least 1" ; Current = "0"; Configured = "undefined"].
- Cause: A check of the current value for an OS kernel parameter did not find the expected value.
- Action: Modify the kernel parameter current value to meet the requirement.
Verification result of failed node: dbnode1
Expected value : at least 1
Actual value : Current=0; Configured=undefined
Details:
- PRVG-1205 : OS kernel parameter "kernel.panic" does not have expected current value on node "dbnode1" [Expected = "at least 1" ; Current = "0"; Configured = "undefined"].
- Cause: A check of the current value for an OS kernel parameter did not find the expected value.
- Action: Modify the kernel parameter current value to meet the requirement.

Solution (on both dbnode1 and dbnode2, as the root user): add the following entry to /etc/sysctl.conf, then apply it with sysctl -p.
vi /etc/sysctl.conf
kernel.panic = 1
sysctl -p

3) Package: compat-openssl10-1.0.2 (x86_64) - This is a prerequisite condition to test whether the package "compat-openssl10-1.0.2 (x86_64)" is available on the system.
Check Failed on Nodes: [dbnode2, dbnode1]
Verification result of failed node: dbnode2
Expected value : compat-openssl10(x86_64)-1.0.2
Actual value : missing
Details:
- PRVF-7532 : Package "compat-openssl10(x86_64)-1.0.2" is missing on node "dbnode2"
- Cause: A required package is either not installed or, if the package is a kernel module, is not loaded on the specified node.
- Action: Ensure that the required package is installed and available.
Verification result of failed node: dbnode1
Expected value : compat-openssl10(x86_64)-1.0.2
Actual value : missing
Details:
- PRVF-7532 : Package "compat-openssl10(x86_64)-1.0.2" is missing on node "dbnode1"
- Cause: A required package is either not installed or, if the package is a kernel module, is not loaded on the specified node.
- Action: Ensure that the required package is installed and available.
Solution:
[root@dbnode1 ~]# cd /setup/Oracle_26ai_RAC/
[root@dbnode1 Oracle_26ai_RAC]# ll
-rwxrwx--- 1 root vboxsf 1183708 Jul 1 2022 compat-openssl10-1.0.2o-4.el8_6.x86_64.rpm
-rwxrwx--- 1 root vboxsf 2406058543 Feb 11 19:30 LINUX.X64_2326100_db_home.zip
-rwxrwx--- 1 root vboxsf 1089544451 Feb 11 19:28 LINUX.X64_2326100_grid_home.zip
[root@dbnode1 Oracle_26ai_RAC]# rpm -ivh compat-openssl10-1.0.2o-4.el8_6.x86_64.rpm
warning: compat-openssl10-1.0.2o-4.el8_6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ad986da3: NOKEY
Verifying... ################################# [100%]
Preparing... ################################# [100%]
Updating / installing...
1:compat-openssl10-1:1.0.2o-4.el8_6################################# [100%]

[root@dbnode2 boot]# cd /setup/Oracle_26ai_RAC/
[root@dbnode2 Oracle_26ai_RAC]# ll
-rwxrwx--- 1 root vboxsf 1183708 Jul 1 2022 compat-openssl10-1.0.2o-4.el8_6.x86_64.rpm
-rwxrwx--- 1 root vboxsf 2406058543 Feb 11 19:30 LINUX.X64_2326100_db_home.zip
-rwxrwx--- 1 root vboxsf 1089544451 Feb 11 19:28 LINUX.X64_2326100_grid_home.zip
[root@dbnode2 Oracle_26ai_RAC]# rpm -ivh compat-openssl10-1.0.2o-4.el8_6.x86_64.rpm
warning: compat-openssl10-1.0.2o-4.el8_6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ad986da3: NOKEY
Verifying... ################################# [100%]
Preparing... ################################# [100%]
Updating / installing...
1:compat-openssl10-1:1.0.2o-4.el8_6################################# [100%]

4) resolv.conf Integrity - This task checks the consistency of the /etc/resolv.conf file across nodes.
Check Failed on Nodes: [dbnode2, dbnode1]
Verification result of failed node: dbnode2
Details:
- PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: dbnode1,dbnode2
- Cause: The DNS response time for an unreachable node exceeded the value specified on nodes specified.
- Action: Make sure that ''options timeout'', ''options attempts'' and ''nameserver'' entries in file resolv.conf are proper. On HPUX these entries will be ''retrans'', ''retry'' and ''nameserver''. On Solaris these will be ''options retrans'', ''options retry'' and ''nameserver''. Make sure that the DNS server responds back to name lookup request within the specified time when looking up an unknown host name.
Verification result of failed node: dbnode1
Details:
- PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: dbnode1,dbnode2
- Cause: The DNS response time for an unreachable node exceeded the value specified on nodes specified.
- Action: Make sure that ''options timeout'', ''options attempts'' and ''nameserver'' entries in file resolv.conf are proper. On HPUX these entries will be ''retrans'', ''retry'' and ''nameserver''. On Solaris these will be ''options retrans'', ''options retry'' and ''nameserver''. Make sure that the DNS server responds back to name lookup request within the specified time when looking up an unknown host name.

Solution: Ignore this message related to resolv.conf since we are not using DNS here.

5) Single Client Access Name (SCAN) - This test verifies the Single Client Access Name configuration.
Verification WARNING result on node: dbnode1
Details:
- PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP addresses, but SCAN "dbnode-scan" resolves to only "10.20.30.105"
- Cause: An insufficient number of SCAN IP addresses were defined for the specified SCAN.
- Action: Define at least specified number of SCAN IP addresses in DNS for the SCAN.
Verification WARNING result on node: dbnode2
Details:
- PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP addresses, but SCAN "dbnode-scan" resolves to only "10.20.30.105"
- Cause: An insufficient number of SCAN IP addresses were defined for the specified SCAN.
- Action: Define at least specified number of SCAN IP addresses in DNS for the SCAN.

Solution: Ignore this message since we are using a single SCAN IP. It is recommended to use 3 or more SCAN IPs in DNS, which are served in round-robin fashion to distribute the application connection load across the nodes.

6) Daemon "avahi-daemon" not configured and running - This test checks that the "avahi-daemon" daemon is not configured and running on the cluster nodes. Refer to My Oracle Support note "2625498.1" for more details regarding error "PRVG-1359".
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available
Check Failed on Nodes: [dbnode2, dbnode1]
Verification result of failed node: dbnode2
Details:
- PRVG-1359 : Daemon process "avahi-daemon" is configured on node "dbnode2"
- Cause: The identified daemon process was found configured on the indicated node.
- Action: Ensure that the identified daemon process is not configured on the indicated node.
- PRVG-1360 : Daemon process "avahi-daemon" is running on node "dbnode2"
- Cause: The identified daemon process was found running on the indicated node.
- Action: Ensure that the identified daemon process is stopped and not running on the indicated node.
Verification result of failed node: dbnode1
Details:
- PRVG-1359 : Daemon process "avahi-daemon" is configured on node "dbnode1"
- Cause: The identified daemon process was found configured on the indicated node.
- Action: Ensure that the identified daemon process is not configured on the indicated node.
- PRVG-1360 : Daemon process "avahi-daemon" is running on node "dbnode1"
- Cause: The identified daemon process was found running on the indicated node.
- Action: Ensure that the identified daemon process is stopped and not running on the indicated node.
Solution: [root@dbnode1 ~]# id uid=0(root) gid=0(root) groups=0(root) [root@dbnode1 ~]# systemctl status avahi-daemon ● avahi-daemon.service - Avahi mDNS/DNS-SD Stack Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2026-02-16 21:37:47 IST; 1h 41min ago Main PID: 1214 (avahi-daemon) Status: "avahi-daemon 0.7 starting up." Tasks: 2 (limit: 47964) Memory: 1.7M CGroup: /system.slice/avahi-daemon.service ├─1214 avahi-daemon: running [dbnode1.local] └─1273 avahi-daemon: chroot helper Feb 16 21:37:48 dbnode1.localdomain avahi-daemon[1214]: Registering new address record for 10.1.2.201 on enp0s8.IPv4. Feb 16 21:37:48 dbnode1.localdomain avahi-daemon[1214]: Joining mDNS multicast group on interface enp0s3.IPv6 with address fe80::a00:27ff:fef5:8fe6. Feb 16 21:37:48 dbnode1.localdomain avahi-daemon[1214]: New relevant interface enp0s3.IPv6 for mDNS. Feb 16 21:37:48 dbnode1.localdomain avahi-daemon[1214]: Registering new address record for fe80::a00:27ff:fef5:8fe6 on enp0s3.*. Feb 16 21:37:48 dbnode1.localdomain avahi-daemon[1214]: Joining mDNS multicast group on interface enp0s8.IPv6 with address fe80::11e3:d8b5:da1a:dc34. Feb 16 21:37:48 dbnode1.localdomain avahi-daemon[1214]: New relevant interface enp0s8.IPv6 for mDNS. Feb 16 21:37:48 dbnode1.localdomain avahi-daemon[1214]: Registering new address record for fe80::11e3:d8b5:da1a:dc34 on enp0s8.*. Feb 16 21:37:49 dbnode1.localdomain avahi-daemon[1214]: Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1. Feb 16 21:37:49 dbnode1.localdomain avahi-daemon[1214]: New relevant interface virbr0.IPv4 for mDNS. Feb 16 21:37:49 dbnode1.localdomain avahi-daemon[1214]: Registering new address record for 192.168.122.1 on virbr0.IPv4. 
[root@dbnode1 ~]# systemctl stop avahi-daemon Warning: Stopping avahi-daemon.service, but it can still be activated by: avahi-daemon.socket [root@dbnode1 ~]# systemctl disable avahi-daemon Removed /etc/systemd/system/sockets.target.wants/avahi-daemon.socket. Removed /etc/systemd/system/multi-user.target.wants/avahi-daemon.service. Removed /etc/systemd/system/dbus-org.freedesktop.Avahi.service. [root@dbnode1 ~]# systemctl status avahi-daemon ● avahi-daemon.service - Avahi mDNS/DNS-SD Stack Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; disabled; vendor preset: enabled) Active: inactive (dead) since Mon 2026-02-16 23:19:52 IST; 9s ago Main PID: 1214 (code=exited, status=0/SUCCESS) Status: "avahi-daemon 0.7 starting up." Feb 16 23:19:51 dbnode1.localdomain avahi-daemon[1214]: Got SIGTERM, quitting. Feb 16 23:19:51 dbnode1.localdomain avahi-daemon[1214]: Leaving mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1. Feb 16 23:19:51 dbnode1.localdomain avahi-daemon[1214]: Leaving mDNS multicast group on interface enp0s8.IPv6 with address fe80::11e3:d8b5:da1a:dc34. Feb 16 23:19:51 dbnode1.localdomain systemd[1]: Stopping Avahi mDNS/DNS-SD Stack... Feb 16 23:19:51 dbnode1.localdomain avahi-daemon[1214]: Leaving mDNS multicast group on interface enp0s8.IPv4 with address 10.1.2.201. Feb 16 23:19:51 dbnode1.localdomain avahi-daemon[1214]: Leaving mDNS multicast group on interface enp0s3.IPv6 with address fe80::a00:27ff:fef5:8fe6. Feb 16 23:19:51 dbnode1.localdomain avahi-daemon[1214]: Leaving mDNS multicast group on interface enp0s3.IPv4 with address 10.20.30.101. Feb 16 23:19:52 dbnode1.localdomain avahi-daemon[1214]: avahi-daemon 0.7 exiting. Feb 16 23:19:52 dbnode1.localdomain systemd[1]: avahi-daemon.service: Succeeded. Feb 16 23:19:52 dbnode1.localdomain systemd[1]: Stopped Avahi mDNS/DNS-SD Stack. 
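The same stop-and-disable sequence must be repeated on every remaining node. On larger clusters it can be scripted; the sketch below defaults to a dry run that only prints the per-node commands, and the node list plus the assumption of root ssh access are specific to this walkthrough. Disabling the socket unit along with the service matters because, as the systemd warning above notes, the stopped service could otherwise be re-activated through avahi-daemon.socket.

```shell
# Disable avahi-daemon (socket and service) on every cluster node.
# Dry run by default: prints the commands. Set RUN=1 to execute them over ssh.
nodes="dbnode1 dbnode2"
cmd='systemctl disable --now avahi-daemon.socket avahi-daemon.service'
for n in $nodes; do
  if [ "${RUN:-0}" = 1 ]; then
    ssh root@"$n" "$cmd"           # requires root ssh access to each node
  else
    echo "ssh root@$n \"$cmd\""    # dry run: show what would be executed
  fi
done
```

Note that `systemctl disable --now` combines `stop` and `disable` in one step, which is equivalent to the two commands run manually above.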
[root@dbnode1 ~]# ssh dbnode2 root@dbnode2's password: Activate the web console with: systemctl enable --now cockpit.socket Last login: Mon Feb 16 23:19:27 2026 [root@dbnode2 ~]# systemctl status avahi-daemon ● avahi-daemon.service - Avahi mDNS/DNS-SD Stack Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2026-02-16 21:39:10 IST; 1h 41min ago Main PID: 1258 (avahi-daemon) Status: "avahi-daemon 0.7 starting up." Tasks: 2 (limit: 47964) Memory: 1.6M CGroup: /system.slice/avahi-daemon.service ├─1258 avahi-daemon: running [dbnode2.local] └─1294 avahi-daemon: chroot helper Feb 16 21:39:11 dbnode2.localdomain avahi-daemon[1258]: Registering new address record for 10.1.2.202 on enp0s8.IPv4. Feb 16 21:39:11 dbnode2.localdomain avahi-daemon[1258]: Joining mDNS multicast group on interface enp0s3.IPv6 with address fe80::a00:27ff:fe14:a847. Feb 16 21:39:11 dbnode2.localdomain avahi-daemon[1258]: New relevant interface enp0s3.IPv6 for mDNS. Feb 16 21:39:11 dbnode2.localdomain avahi-daemon[1258]: Registering new address record for fe80::a00:27ff:fe14:a847 on enp0s3.*. Feb 16 21:39:11 dbnode2.localdomain avahi-daemon[1258]: Joining mDNS multicast group on interface enp0s8.IPv6 with address fe80::772c:c2ed:d17:ca20. Feb 16 21:39:11 dbnode2.localdomain avahi-daemon[1258]: New relevant interface enp0s8.IPv6 for mDNS. Feb 16 21:39:11 dbnode2.localdomain avahi-daemon[1258]: Registering new address record for fe80::772c:c2ed:d17:ca20 on enp0s8.*. Feb 16 21:39:12 dbnode2.localdomain avahi-daemon[1258]: Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1. Feb 16 21:39:12 dbnode2.localdomain avahi-daemon[1258]: New relevant interface virbr0.IPv4 for mDNS. Feb 16 21:39:12 dbnode2.localdomain avahi-daemon[1258]: Registering new address record for 192.168.122.1 on virbr0.IPv4. 
[root@dbnode2 ~]# systemctl stop avahi-daemon
Warning: Stopping avahi-daemon.service, but it can still be activated by: avahi-daemon.socket
[root@dbnode2 ~]# systemctl disable avahi-daemon
Removed /etc/systemd/system/sockets.target.wants/avahi-daemon.socket.
Removed /etc/systemd/system/multi-user.target.wants/avahi-daemon.service.
Removed /etc/systemd/system/dbus-org.freedesktop.Avahi.service.
[root@dbnode2 ~]# systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; disabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2026-02-16 23:20:51 IST; 8s ago
Main PID: 1258 (code=exited, status=0/SUCCESS)
Status: "avahi-daemon 0.7 starting up."
Feb 16 23:20:51 dbnode2.localdomain systemd[1]: Stopping Avahi mDNS/DNS-SD Stack...
Feb 16 23:20:51 dbnode2.localdomain avahi-daemon[1258]: Got SIGTERM, quitting.
Feb 16 23:20:51 dbnode2.localdomain avahi-daemon[1258]: Leaving mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
Feb 16 23:20:51 dbnode2.localdomain avahi-daemon[1258]: Leaving mDNS multicast group on interface enp0s8.IPv6 with address fe80::772c:c2ed:d17:ca20.
Feb 16 23:20:51 dbnode2.localdomain avahi-daemon[1258]: Leaving mDNS multicast group on interface enp0s8.IPv4 with address 10.1.2.202.
Feb 16 23:20:51 dbnode2.localdomain avahi-daemon[1258]: Leaving mDNS multicast group on interface enp0s3.IPv6 with address fe80::a00:27ff:fe14:a847.
Feb 16 23:20:51 dbnode2.localdomain avahi-daemon[1258]: Leaving mDNS multicast group on interface enp0s3.IPv4 with address 10.20.30.102.
Feb 16 23:20:51 dbnode2.localdomain avahi-daemon[1258]: avahi-daemon 0.7 exiting.
Feb 16 23:20:51 dbnode2.localdomain systemd[1]: avahi-daemon.service: Succeeded.
Feb 16 23:20:51 dbnode2.localdomain systemd[1]: Stopped Avahi mDNS/DNS-SD Stack.

7) RPM Package Manager database - Verifies the RPM Package Manager database files.
Error:
PRVG-11250 : The check "RPM Package Manager database" was not performed because it needs 'root' user privileges.
- Cause: In running the pre-requisite test suite for a planned system management operation, the indicated check was not performed because it requires root user privileges, and the root user credentials had not been supplied.
- Action: To include the check, reissue the request providing the required credentials for the root user.
- Refer to My Oracle Support note "2548970.1" for more details regarding error "PRVG-11250".
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available

Solution: Ignore this message, since this check requires root privileges to run and the root credentials were not supplied to the installer.

8) cgroup OS compatibility - Verifies the cgroup configuration for OS compatibility. This check performs its operation as the root user, and hence it failed.
PRVG-11250 : The check "cgroup OS compatibility" was not performed because it needs 'root' user privileges.
- Cause: In running the pre-requisite test suite for a planned system management operation, the indicated check was not performed because it requires root user privileges, and the root user credentials had not been supplied.
- Action: To include the check, reissue the request providing the required credentials for the root user.
- Refer to My Oracle Support note "2548970.1" for more details regarding error "PRVG-11250".
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available

Solution: Ignore this message since this check requires root privileges to run.

9) Device Checks for ASM - This is a prerequisite check to verify that the specified devices meet the requirements for ASM.
PRVF-9992 : Group of device "/dev/sdi1" did not match the expected group. [Expected = "asmdba"; Found = "oinstall"] on nodes: [dbnode1, dbnode2]
- Cause: Group of the device listed was different than required group.
- Action: Change the group of the device listed or specify a different device. - PRVF-9992 : Group of device "/dev/sdf1" did not match the expected group. [Expected = "asmdba"; Found = "oinstall"] on nodes: [dbnode1, dbnode2] - Cause: Group of the device listed was different than required group. - Action: Change the group of the device listed or specify a different device. - PRVF-9992 : Group of device "/dev/sdg1" did not match the expected group. [Expected = "asmdba"; Found = "oinstall"] on nodes: [dbnode1, dbnode2] - Cause: Group of the device listed was different than required group. - Action: Change the group of the device listed or specify a different device. Check Failed on Nodes: [dbnode2, dbnode1] Verification result of failed node: dbnode2 Verification result of failed node: dbnode1 Solution: This happened because, during installation, I selected the device group ownership as asmdba. The installer checks for the same group at the operating system level, but it was not found.
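The oinstall group that the OS check found is the one our udev rules assigned to the devices earlier (the "Configure ASM devices by udev.rules method" step). For reference, one such rule line might look like the sketch below; the rules file name and the serial ID are placeholders, the real serial must be queried per disk (for example with /usr/lib/udev/scsi_id -g -u -d /dev/sdf), and the rules are reloaded with udevadm control --reload-rules followed by udevadm trigger:

```
# Illustrative entry from /etc/udev/rules.d/99-oracle-asmdevices.rules
# (placeholder serial; match keys vary by environment)
KERNEL=="sd?1", SUBSYSTEM=="block", ENV{ID_SERIAL}=="VBOX_HARDDISK_VB_PLACEHOLDER", SYMLINK+="oracleasm/disks/DATA1", OWNER="grid", GROUP="oinstall", MODE="0660"
```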
Please go back to the installation screen and change the group to match the one configured at the OS level for the disk devices, which in our case is oinstall.
[root@dbnode1 ~]# ls -ltra /dev/oracleasm/disks/*
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/OCR3 -> ../../sde1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/ARCH1 -> ../../sdi1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/REDO1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/DATA1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/DATA2 -> ../../sdg1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/OCR2 -> ../../sdd1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/OCR1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Feb 16 00:43 /dev/oracleasm/disks/REDO2 -> ../../sdh1
[root@dbnode1 ~]# ls -ld /dev/sd*1
brw-rw---- 1 root disk 8, 1 Feb 16 00:43 /dev/sda1
brw-rw---- 1 grid oinstall 8, 17 Feb 16 00:43 /dev/sdb1
brw-rw---- 1 grid oinstall 8, 33 Feb 16 00:43 /dev/sdc1
brw-rw---- 1 grid oinstall 8, 49 Feb 16 00:43 /dev/sdd1
brw-rw---- 1 grid oinstall 8, 65 Feb 16 00:43 /dev/sde1
brw-rw---- 1 grid oinstall 8, 81 Feb 16 00:43 /dev/sdf1
brw-rw---- 1 grid oinstall 8, 97 Feb 16 00:43 /dev/sdg1
brw-rw---- 1 grid oinstall 8, 113 Feb 16 00:43 /dev/sdh1
brw-rw---- 1 grid oinstall 8, 129 Feb 16 00:43 /dev/sdi1

10) Access Control List check - This check verifies that the ownership and permissions are correct and consistent for the devices across nodes.
Error:
- PRVF-9992 : Group of device "/dev/sdi1" did not match the expected group. [Expected = "asmdba"; Found = "oinstall"] on nodes: [dbnode1, dbnode2]
- Cause: Group of the device listed was different than required group.
- Action: Change the group of the device listed or specify a different device.
- PRVF-9992 : Group of device "/dev/sdf1" did not match the expected group.
[Expected = "asmdba"; Found = "oinstall"] on nodes: [dbnode1, dbnode2]
- Cause: Group of the device listed was different than required group.
- Action: Change the group of the device listed or specify a different device.
- PRVF-9992 : Group of device "/dev/sdg1" did not match the expected group. [Expected = "asmdba"; Found = "oinstall"] on nodes: [dbnode1, dbnode2]
- Cause: Group of the device listed was different than required group.
- Action: Change the group of the device listed or specify a different device.
Check Failed on Nodes: [dbnode2, dbnode1]
Verification result of failed node: dbnode2
Verification result of failed node: dbnode1

Solution: Refer to the same solution described for point 9.

11) DNS/NIS name service - This test verifies that the Name Service lookups for the Distributed Name Server (DNS) and the Network Information Service (NIS) match for the SCAN name entries.
Error:
- PRVG-1101 : SCAN name "dbnode-scan" failed to resolve
- Cause: An attempt to resolve specified SCAN name to a list of IP addresses failed because SCAN could not be resolved in DNS or GNS using ''nslookup''.
- Action: Check whether the specified SCAN name is correct. If SCAN name should be resolved in DNS, check the configuration of SCAN name in DNS. If it should be resolved in GNS make sure that GNS resource is online.
Check Failed on Nodes: [dbnode1]
Verification result of failed node: dbnode1

Solution: Ignore this since we are not using DNS here.

Once all of the above solutions are applied, click the "Check Again" option to re-run the prerequisite checks.

During installation, you will see a pop-up box prompting you to run the orainstRoot.sh and root.sh scripts on both nodes, one by one. Log in as the root user on both nodes and execute the scripts in this order:
1) orainstRoot.sh - Node1
2) orainstRoot.sh - Node2
3) root.sh - Node1
4) root.sh - Node2
Note: Run each script on the local node first.
After it completes successfully on the local node, the script can be started in parallel on all remaining nodes in a multi-node cluster environment.
[root@dbnode1 ~]# id
uid=0(root) gid=0(root) groups=0(root)
[root@dbnode1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@dbnode2 ~]# id
uid=0(root) gid=0(root) groups=0(root)
[root@dbnode2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@dbnode1 ~]# /u01/app/23.0.0/grid/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/23.0.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script. Now product-specific root actions will be performed.
RAC option enabled on: Linux
Executing command '/u01/app/23.0.0/grid/perl/bin/perl -I/u01/app/23.0.0/grid/perl/lib -I/u01/app/23.0.0/grid/crs/install /u01/app/23.0.0/grid/crs/install/rootcrs.pl '
Using configuration parameter file: /u01/app/23.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at: /u01/app/grid/crsdata/dbnode1/crsconfig/rootcrs_dbnode1_2026-02-16_11-48-07PM.log
2026/02/16 23:48:30 CLSRSC-594: Executing installation step 1 of 18: 'ValidateEnv'.
2026/02/16 23:48:30 CLSRSC-594: Executing installation step 2 of 18: 'CheckRootCert'. 2026/02/16 23:48:32 CLSRSC-594: Executing installation step 3 of 18: 'GenSiteGUIDs'. 2026/02/16 23:48:36 CLSRSC-594: Executing installation step 4 of 18: 'SetupOSD'. Redirecting to /bin/systemctl restart rsyslog.service 2026/02/16 23:48:37 CLSRSC-594: Executing installation step 5 of 18: 'CheckCRSConfig'. 2026/02/16 23:48:37 CLSRSC-594: Executing installation step 6 of 18: 'SetupLocalGPNP'. 2026/02/16 23:48:51 CLSRSC-594: Executing installation step 7 of 18: 'CreateRootCert'. 2026/02/16 23:49:37 CLSRSC-594: Executing installation step 8 of 18: 'ConfigOLR'. 2026/02/16 23:49:54 CLSRSC-594: Executing installation step 9 of 18: 'ConfigCHMOS'. 2026/02/16 23:49:55 CLSRSC-594: Executing installation step 10 of 18: 'CreateOHASD'. 2026/02/16 23:50:00 CLSRSC-594: Executing installation step 11 of 18: 'ConfigOHASD'. 2026/02/16 23:50:01 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service' 2026/02/16 23:50:38 CLSRSC-594: Executing installation step 12 of 18: 'SetupTFA'. 2026/02/16 23:50:38 CLSRSC-594: Executing installation step 13 of 18: 'InstallACFS'. Message from syslogd@dbnode1 at Feb 16 23:52:09 ... kernel:watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [swapper/0:0] Message from syslogd@dbnode1 at Feb 16 23:52:09 ... kernel:watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [swapper/1:0] Message from syslogd@dbnode1 at Feb 16 23:52:09 ... kernel:watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [swapper/2:0] Message from syslogd@dbnode1 at Feb 16 23:52:09 ... kernel:watchdog: BUG: soft lockup - CPU#3 stuck for 23s! [kworker/3:1H:140] 2026/02/16 23:52:12 CLSRSC-594: Executing installation step 14 of 18: 'CheckFirstNode'. 2026/02/16 23:52:14 CLSRSC-594: Executing installation step 15 of 18: 'InitConfig'. 2026/02/16 23:54:17 CLSRSC-4002: Successfully installed Oracle Autonomous Health Framework (AHF). 
CRS-4256: Updating the profile Successful addition of voting disk b7fb6ab1c3ef4fa8bfd8adfcf2bf3a5f. Successful addition of voting disk 211ca331f8774fa4bf372b8364912322. Successful addition of voting disk cd95130fe6334f6abf5b22335b0f5f88. Successfully replaced voting disk group with +DG_OCR. CRS-4256: Updating the profile CRS-4266: Voting file(s) successfully replaced ## STATE File Universal Id File Name Disk group -- ----- ----------------- --------- --------- 1. ONLINE b7fb6ab1c3ef4fa8bfd8adfcf2bf3a5f (/dev/sdf1) [DG_OCR] 2. ONLINE 211ca331f8774fa4bf372b8364912322 (/dev/sdg1) [DG_OCR] 3. ONLINE cd95130fe6334f6abf5b22335b0f5f88 (/dev/sdi1) [DG_OCR] Located 3 voting disk(s). 2026/02/16 23:57:02 CLSRSC-594: Executing installation step 16 of 18: 'StartCluster'. 2026/02/16 23:57:58 CLSRSC-343: Successfully started Oracle Clusterware stack 2026/02/16 23:58:11 CLSRSC-594: Executing installation step 17 of 18: 'ConfigNode'. clscfg: EXISTING configuration version 23 detected. Successfully accumulated necessary OCR keys. Creating OCR keys for user 'root', privgrp 'root'.. Operation successful. 2026/02/17 00:00:27 CLSRSC-594: Executing installation step 18 of 18: 'PostConfig'. 2026/02/17 00:01:47 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded [root@dbnode2 ~]# /u01/app/23.0.0/grid/root.sh Performing root user operation. The following environment variables are set as: ORACLE_OWNER= grid ORACLE_HOME= /u01/app/23.0.0/grid Enter the full pathname of the local bin directory: [/usr/local/bin]: Copying dbhome to /usr/local/bin ... Copying oraenv to /usr/local/bin ... Copying coraenv to /usr/local/bin ... Creating /etc/oratab file... Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created Finished running generic part of root script. Now product-specific root actions will be performed. 
RAC option enabled on: Linux Executing command '/u01/app/23.0.0/grid/perl/bin/perl -I/u01/app/23.0.0/grid/perl/lib -I/u01/app/23.0.0/grid/crs/install /u01/app/23.0.0/grid/crs/install/rootcrs.pl ' Using configuration parameter file: /u01/app/23.0.0/grid/crs/install/crsconfig_params The log of current session can be found at: /u01/app/grid/crsdata/dbnode2/crsconfig/rootcrs_dbnode2_2026-02-17_00-02-32AM.log 2026/02/17 00:02:40 CLSRSC-594: Executing installation step 1 of 18: 'ValidateEnv'. 2026/02/17 00:02:40 CLSRSC-594: Executing installation step 2 of 18: 'CheckRootCert'. 2026/02/17 00:02:40 CLSRSC-594: Executing installation step 3 of 18: 'GenSiteGUIDs'. 2026/02/17 00:02:40 CLSRSC-594: Executing installation step 4 of 18: 'SetupOSD'. Redirecting to /bin/systemctl restart rsyslog.service 2026/02/17 00:02:41 CLSRSC-594: Executing installation step 5 of 18: 'CheckCRSConfig'. 2026/02/17 00:02:41 CLSRSC-594: Executing installation step 6 of 18: 'SetupLocalGPNP'. 2026/02/17 00:02:41 CLSRSC-594: Executing installation step 7 of 18: 'CreateRootCert'. 2026/02/17 00:02:41 CLSRSC-594: Executing installation step 8 of 18: 'ConfigOLR'. 2026/02/17 00:02:48 CLSRSC-594: Executing installation step 9 of 18: 'ConfigCHMOS'. 2026/02/17 00:02:48 CLSRSC-594: Executing installation step 10 of 18: 'CreateOHASD'. 2026/02/17 00:02:50 CLSRSC-594: Executing installation step 11 of 18: 'ConfigOHASD'. 2026/02/17 00:02:50 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service' 2026/02/17 00:03:00 CLSRSC-594: Executing installation step 12 of 18: 'SetupTFA'. 2026/02/17 00:03:00 CLSRSC-594: Executing installation step 13 of 18: 'InstallACFS'. 2026/02/17 00:03:18 CLSRSC-594: Executing installation step 14 of 18: 'CheckFirstNode'. 2026/02/17 00:03:19 CLSRSC-594: Executing installation step 15 of 18: 'InitConfig'. 2026/02/17 00:03:42 CLSRSC-594: Executing installation step 16 of 18: 'StartCluster'. 
2026/02/17 00:04:13 CLSRSC-4002: Successfully installed Oracle Autonomous Health Framework (AHF).
2026/02/17 00:04:21 CLSRSC-343: Successfully started Oracle Clusterware stack
2026/02/17 00:04:21 CLSRSC-594: Executing installation step 17 of 18: 'ConfigNode'.
2026/02/17 00:04:21 CLSRSC-594: Executing installation step 18 of 18: 'PostConfig'.
2026/02/17 00:04:24 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Click "Skip" and "Yes" to continue. Click "Next" to continue, or you will be taken directly to the final screen. Click "Close" to finish the installation. The message "The configuration of Oracle Grid Infrastructure for a cluster was successful, but some configuration assistants failed, were cancelled or skipped" indicates that the installation itself completed successfully but the post-install cluvfy verification failed, which can be ignored here.

Let's verify the CRS services after the installation.
[grid@dbnode1 grid]$ ps -ef | grep pmon
grid 47901 1 0 Feb16 ? 00:00:00 asm_pmon_+ASM1
grid 100346 6773 0 00:16 pts/0 00:00:00 grep --color=auto pmon
[grid@dbnode1 ~]$ ps -ef | grep d.bin
root 46605 1 3 Feb16 ? 00:00:37 /u01/app/23.0.0/grid/bin/ohasd.bin reboot BLOCKING_STACK_LOCALE_OHAS=AMERICAN_AMERICA.AL32UTF8;CRS_AUX_DATA=CRS_AUXD_FASTCSS=yes
root 46693 1 0 Feb16 ? 00:00:10 /u01/app/23.0.0/grid/bin/orarootagent.bin
grid 46833 1 1 Feb16 ? 00:00:14 /u01/app/23.0.0/grid/bin/oraagent.bin
grid 46855 1 0 Feb16 ? 00:00:05 /u01/app/23.0.0/grid/bin/mdnsd.bin
grid 46857 1 0 Feb16 ? 00:00:08 /u01/app/23.0.0/grid/bin/evmd.bin
grid 46893 1 0 Feb16 ? 00:00:05 /u01/app/23.0.0/grid/bin/gpnpd.bin
grid 46948 1 1 Feb16 ? 00:00:12 /u01/app/23.0.0/grid/bin/gipcd.bin
grid 46969 46857 0 Feb16 ? 00:00:05 /u01/app/23.0.0/grid/bin/evmlogger.bin
root 47007 1 0 Feb16 ? 00:00:10 /u01/app/23.0.0/grid/bin/cssdmonitor
root 47036 1 0 Feb16 ? 00:00:11 /u01/app/23.0.0/grid/bin/cssdagent
grid 47064 1 3 Feb16 ? 00:00:37 /u01/app/23.0.0/grid/bin/onmd.bin -S 1 -F
grid 47068 1 2 Feb16 ?
00:00:25 /u01/app/23.0.0/grid/bin/ocssd.bin -S 1 -F root 47361 1 2 Feb16 ? 00:00:26 /u01/app/23.0.0/grid/bin/osysmond.bin root 47569 47414 0 Feb16 ? 00:00:00 /u01/app/23.0.0/grid/bin/crfelsnr -n dbnode1 root 48222 1 5 Feb16 ? 00:01:04 /u01/app/23.0.0/grid/bin/crsd.bin reboot root 50887 1 2 00:00 ? 00:00:25 /u01/app/23.0.0/grid/bin/orarootagent.bin grid 50932 1 2 00:00 ? 00:00:22 /u01/app/23.0.0/grid/bin/oraagent.bin grid 51010 1 0 00:00 ? 00:00:00 /u01/app/23.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit grid 51085 1 0 00:00 ? 00:00:00 /u01/app/23.0.0/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit grid 51126 1 0 00:00 ? 00:00:00 /u01/app/23.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit grid 91250 5160 0 00:12 pts/0 00:00:00 /bin/sh /u01/app/23.0.0/grid/bin/cluvfy comp baseline -collect cluster -installer -homename grid -n all grid 100955 6773 0 00:16 pts/0 00:00:00 grep --color=auto d.bin [grid@dbnode1 ~]$ /u01/app/23.0.0/grid/bin/crsctl check crs CRS-4638: Oracle High Availability Services is online CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online [grid@dbnode1 ~]$ /u01/app/23.0.0/grid/bin/crsctl check cluster -all ************************************************************** dbnode1: CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online ************************************************************** dbnode2: CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online ************************************************************** [grid@dbnode1 ~]$ /u01/app/23.0.0/grid/bin/crsctl stat res -t ------------------------------------------------------------- Name Target State Server State details ------------------------------------------------------------- Local Resources 
------------------------------------------------------------- ora.LISTENER.lsnr ONLINE ONLINE dbnode1 STABLE ONLINE ONLINE dbnode2 STABLE ora.chad ONLINE ONLINE dbnode1 STABLE ONLINE ONLINE dbnode2 STABLE ora.helper OFFLINE OFFLINE dbnode1 STABLE OFFLINE OFFLINE dbnode2 IDLE,STABLE ora.net1.network ONLINE ONLINE dbnode1 STABLE ONLINE ONLINE dbnode2 STABLE ora.ons ONLINE ONLINE dbnode1 STABLE ONLINE ONLINE dbnode2 STABLE ------------------------------------------------------------- Cluster Resources ------------------------------------------------------------- ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup) 1 ONLINE ONLINE dbnode1 STABLE 2 ONLINE ONLINE dbnode2 STABLE ora.DG_OCR.dg(ora.asmgroup) 1 ONLINE ONLINE dbnode1 STABLE 2 ONLINE ONLINE dbnode2 STABLE ora.LISTENER_SCAN1.lsnr 1 ONLINE ONLINE dbnode1 STABLE ora.asm(ora.asmgroup) 1 ONLINE ONLINE dbnode1 Started,STABLE 2 ONLINE ONLINE dbnode2 Started,STABLE ora.asmnet1.asmnetwork(ora.asmgroup) 1 ONLINE ONLINE dbnode1 STABLE 2 ONLINE ONLINE dbnode2 STABLE ora.cdp1.cdp 1 OFFLINE OFFLINE STABLE ora.cvu 1 ONLINE ONLINE dbnode1 STABLE ora.cvuhelper 1 OFFLINE OFFLINE STABLE ora.dbnode1.vip 1 ONLINE ONLINE dbnode1 STABLE ora.dbnode2.vip 1 ONLINE ONLINE dbnode2 STABLE ora.rhpserver 1 OFFLINE OFFLINE STABLE ora.scan1.vip 1 ONLINE ONLINE dbnode1 STABLE -------------------------------------------------------------- Step 10 : Oracle 26ai RDBMS Installation and DB Creation Step by Step Let's first configure ssh authentication for oracle user. Earlier, we configured it for grid user. Node1: [oracle@dbnode1 ~]$ id uid=1002(oracle) gid=2000(oinstall) groups=2000(oinstall),2100(asmadmin),2200(dba),2300(oper),2400(asmdba),2500(asmoper) [oracle@dbnode1 ~]$ rm -rf .ssh [oracle@dbnode1 ~]$ mkdir .ssh [oracle@dbnode1 ~]$ chmod 700 .ssh [oracle@dbnode1 ~]$ cd /home/oracle/.ssh [oracle@dbnode1 .ssh]$ ssh-keygen -t rsa Generating public/private rsa key pair. 
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:h8wpdBJ5sU1qmXUgn/XnwqtzZEAIpcB7ghhCBr6zqqk oracle@dbnode1.localdomain
The key's randomart image is:
+---[RSA 3072]----+
|+o .o.++=oo |
|+ . oo.%.+.. |
| o o .ooO +. . .|
| o ..o*.o .. o |
| o .oS . .o .|
| o . . oo |
| . o. |
|.. ... |
|E .o |
+----[SHA256]-----+
[oracle@dbnode1 .ssh]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:sIwNH3FLuKcHCSuAxz4jt1Bq76JYDiI7pG99CUN+AaI oracle@dbnode1.localdomain
The key's randomart image is:
+---[DSA 1024]----+
|.. ..o |
|o = o .+ . |
| B ..+oo. |
|E.* o*=+. |
|.+.B. ==S |
| ...+ o . |
|* o. + o |
|**... o |
|=++ . |
+----[SHA256]-----+
[oracle@dbnode1 .ssh]$ ls -ltr
-rw-r--r-- 1 oracle oinstall 580 Feb 17 16:40 id_rsa.pub
-rw------- 1 oracle oinstall 2610 Feb 17 16:40 id_rsa
-rw-r--r-- 1 oracle oinstall 616 Feb 17 16:40 id_dsa.pub
-rw------- 1 oracle oinstall 1405 Feb 17 16:40 id_dsa
[oracle@dbnode1 .ssh]$ cat *.pub >> authorized_keys.dbnode1
[oracle@dbnode1 .ssh]$ scp authorized_keys.dbnode1 oracle@dbnode2:/home/oracle/.ssh/
oracle@dbnode2's password:
authorized_keys.dbnode1 100% 1196 102.8KB/s 00:00
[oracle@dbnode1 .ssh]$ ls -ltr
-rw-r--r-- 1 oracle oinstall 580 Feb 17 16:40 id_rsa.pub
-rw------- 1 oracle oinstall 2610 Feb 17 16:40 id_rsa
-rw-r--r-- 1 oracle oinstall 616 Feb 17 16:40 id_dsa.pub
-rw------- 1 oracle oinstall 1405 Feb 17 16:40 id_dsa
-rw-r--r-- 1 oracle oinstall 1196 Feb 17 16:41 authorized_keys.dbnode1
-rw-r--r-- 1 oracle oinstall 182 Feb 17 16:42 known_hosts
-rw-r--r-- 1 oracle oinstall 1196 Feb 17 16:42 authorized_keys.dbnode2
[oracle@dbnode1 .ssh]$ cat *.dbnode* >> authorized_keys
[oracle@dbnode1 .ssh]$ chmod 600 authorized_keys
[oracle@dbnode1 .ssh]$ ls -ltr
-rw-r--r-- 1 oracle oinstall 580 Feb 17 16:40 id_rsa.pub
-rw------- 1 oracle oinstall 2610 Feb 17 16:40 id_rsa
-rw-r--r-- 1 oracle oinstall 616 Feb 17 16:40 id_dsa.pub
-rw------- 1 oracle oinstall 1405 Feb 17 16:40 id_dsa
-rw-r--r-- 1 oracle oinstall 1196 Feb 17 16:41 authorized_keys.dbnode1
-rw-r--r-- 1 oracle oinstall 182 Feb 17 16:42 known_hosts
-rw-r--r-- 1 oracle oinstall 1196 Feb 17 16:42 authorized_keys.dbnode2
-rw------- 1 oracle oinstall 2392 Feb 17 16:43 authorized_keys

Node2:
[oracle@dbnode2 ~]$ id
uid=1002(oracle) gid=2000(oinstall) groups=2000(oinstall),2100(asmadmin),2200(dba),2300(oper),2400(asmdba),2500(asmoper)
[oracle@dbnode2 ~]$ rm -rf .ssh
[oracle@dbnode2 ~]$ mkdir .ssh
[oracle@dbnode2 ~]$ chmod 700 .ssh
[oracle@dbnode2 ~]$ cd /home/oracle/.ssh
[oracle@dbnode2 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:i5OxHyvNwq3HUxkRyZmAbHd2Qxo8QDMOsCze66l8iH4 oracle@dbnode2.localdomain
The key's randomart image is:
+---[RSA 3072]----+
| .o.o*=o*. |
| . .+o.o@oo |
| . o. ..o.+ . |
| . o . |
| . . . S o |
| . = .o |
| . ...==o. |
|...E..+oBo |
|..+oo .=o. |
+----[SHA256]-----+
[oracle@dbnode2 .ssh]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:TWR7c2gMleQaRJT8R1CusNJ25cQ3LFiKTj9rAovVSqU oracle@dbnode2.localdomain
The key's randomart image is:
+---[DSA 1024]----+
| +B++=. |
| ++==+o |
| +=+Bo*o.|
| *+.B.Bo..|
| ESo*oo.. |
| + +o .o |
| . o . o |
| o |
| |
+----[SHA256]-----+
[oracle@dbnode2 .ssh]$ ls -ltr
-rw------- 1 oracle oinstall 2610 Feb 17 16:40 id_rsa
-rw-r--r-- 1 oracle oinstall 580 Feb 17 16:40 id_rsa.pub
-rw-r--r-- 1 oracle oinstall 616 Feb 17 16:41 id_dsa.pub
-rw------- 1 oracle oinstall 1393 Feb 17 16:41 id_dsa
[oracle@dbnode2 .ssh]$ cat *.pub >> authorized_keys.dbnode2
[oracle@dbnode2 .ssh]$ scp authorized_keys.dbnode2 oracle@dbnode1:/home/oracle/.ssh/
The authenticity of host 'dbnode1 (10.20.30.101)' can't be established.
ECDSA key fingerprint is SHA256:B44IDpeHxOemZ4B2OCUeQD+LaNr+AVSyhrbzScTZApM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'dbnode1,10.20.30.101' (ECDSA) to the list of known hosts.
oracle@dbnode1's password:
authorized_keys.dbnode2 100% 1196 56.5KB/s 00:00
[oracle@dbnode2 .ssh]$ ls -ltr
-rw------- 1 oracle oinstall 2610 Feb 17 16:40 id_rsa
-rw-r--r-- 1 oracle oinstall 580 Feb 17 16:40 id_rsa.pub
-rw-r--r-- 1 oracle oinstall 616 Feb 17 16:41 id_dsa.pub
-rw------- 1 oracle oinstall 1393 Feb 17 16:41 id_dsa
-rw-r--r-- 1 oracle oinstall 1196 Feb 17 16:41 authorized_keys.dbnode2
-rw-r--r-- 1 oracle oinstall 1196 Feb 17 16:42 authorized_keys.dbnode1
-rw-r--r-- 1 oracle oinstall 182 Feb 17 16:42 known_hosts
[oracle@dbnode2 .ssh]$ cat *.dbnode* >> authorized_keys
[oracle@dbnode2 .ssh]$ chmod 600 authorized_keys
[oracle@dbnode2 .ssh]$ ls -ltr
-rw------- 1 oracle oinstall 2610 Feb 17 16:40 id_rsa
-rw-r--r-- 1 oracle oinstall 580 Feb 17 16:40 id_rsa.pub
-rw-r--r-- 1 oracle oinstall 616 Feb 17 16:41 id_dsa.pub
-rw------- 1 oracle oinstall 1393 Feb 17 16:41 id_dsa
-rw-r--r-- 1 oracle oinstall 1196 Feb 17 16:41 authorized_keys.dbnode2
-rw-r--r-- 1 oracle oinstall 1196 Feb 17 16:42 authorized_keys.dbnode1
-rw-r--r-- 1 oracle oinstall 182 Feb 17 16:42 known_hosts
-rw------- 1 oracle oinstall 2392 Feb 17 16:43 authorized_keys

Now we can test the ssh authentication with the commands below. Note that self-authentication is also required (Node1 to Node1, Node1 to Node2, Node2 to Node2, Node2 to Node1); without it, the installation cannot proceed.

[oracle@dbnode1 ~]$ id
uid=1002(oracle) gid=2000(oinstall) groups=2000(oinstall),2100(asmadmin),2200(dba),2300(oper),2400(asmdba),2500(asmoper)
[oracle@dbnode1 ~]$ ssh dbnode1
The authenticity of host 'dbnode1 (10.20.30.101)' can't be established.
ECDSA key fingerprint is SHA256:B44IDpeHxOemZ4B2OCUeQD+LaNr+AVSyhrbzScTZApM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'dbnode1,10.20.30.101' (ECDSA) to the list of known hosts.
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Tue Feb 17 16:34:58 2026
[oracle@dbnode1 ~]$ ssh dbnode2
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Tue Feb 17 16:38:10 2026
[oracle@dbnode2 ~]$ ssh dbnode2
The authenticity of host 'dbnode2 (10.20.30.102)' can't be established.
ECDSA key fingerprint is SHA256:B44IDpeHxOemZ4B2OCUeQD+LaNr+AVSyhrbzScTZApM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'dbnode2,10.20.30.102' (ECDSA) to the list of known hosts.
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Tue Feb 17 16:54:31 2026 from 10.20.30.101
[oracle@dbnode2 ~]$ ssh dbnode1
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Tue Feb 17 16:54:23 2026 from 10.20.30.101

Let's create the below ASM disk groups for the database.
- DG_DATA
- DG_REDO1
- DG_REDO2
- DG_ARCH

Login as the grid user and execute "asmca" to start the ASM Disk Group Configuration Assistant.

[grid@dbnode1 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
ORACLE_HOME = [/home/oracle] ? /u01/app/23.0.0/grid
The Oracle base has been set to /u01/app/grid
[grid@dbnode1 ~]$ asmca

[grid@dbnode1 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
ORACLE_HOME = [/home/oracle] ? /u01/app/23.0.0/grid
The Oracle base has been set to /u01/app/grid
[grid@dbnode1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block  AU       Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N      512     512             4096   4194304  5116      4976     0                4976            0              N             DG_ARCH/
MOUNTED  EXTERN  N      512     512             4096   4194304  20472     20320    0                20320           0              N             DG_DATA/
MOUNTED  NORMAL  N      512     512             4096   4194304  30708     29696    10236            9730            0              Y             DG_OCR/
MOUNTED  EXTERN  N      512     512             4096   4194304  5116      4976     0                4976            0              N             DG_REDO1/
MOUNTED  EXTERN  N      512     512             4096   4194304  5116      4976     0                4976            0              N             DG_REDO2/

Create the ORACLE_HOME directory on both DB nodes.
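The per-node directory creation that follows can be sketched with a couple of commands. In this sketch the BASE path is a hypothetical scratch location so the commands can be tried anywhere; on the actual cluster nodes it would be /u01/app/oracle, owned by oracle:oinstall.

```shell
# Sketch: build the ORACLE_HOME path layout used in this walkthrough.
# BASE is a hypothetical scratch directory for trying this out; on the
# real nodes it is /u01/app/oracle.
BASE="${BASE:-/tmp/orahome_demo}"
OH="$BASE/product/23.0.0/dbhome_1"
mkdir -p "$OH"      # -p creates the whole product/.../dbhome_1 chain
chmod 755 "$OH"     # matches the drwxr-xr-x shown in the listing below
ls -ld "$OH"
```

On a real cluster this has to be repeated on every node (or driven from node1 over the passwordless SSH configured above), exactly as the transcripts below do for dbnode1 and dbnode2.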
[oracle@dbnode1 ~]$ cd /u01/app/oracle
[oracle@dbnode1 oracle]$ mkdir -p product/23.0.0/dbhome_1
[oracle@dbnode1 oracle]$ ls -ld /u01/app/oracle/product/23.0.0/dbhome_1
drwxr-xr-x 2 oracle oinstall 6 Feb 17 17:33 /u01/app/oracle/product/23.0.0/dbhome_1

[oracle@dbnode2 ~]$ cd /u01/app/oracle
[oracle@dbnode2 oracle]$ mkdir -p product/23.0.0/dbhome_1
[oracle@dbnode2 oracle]$ ls -ld /u01/app/oracle/product/23.0.0/dbhome_1
drwxr-xr-x 2 oracle oinstall 6 Feb 17 17:33 /u01/app/oracle/product/23.0.0/dbhome_1

Let's copy the RDBMS software to ORACLE_HOME and start the Oracle 26ai database software installation and DB creation.

[oracle@dbnode1 ~]$ su -
Password:
[root@dbnode1 ~]# cd /setup/Oracle_26ai_RAC/
[root@dbnode1 Oracle_26ai_RAC]# ll
-rwxrwx--- 1 root vboxsf 1183708 Jul 1 2022 compat-openssl10-1.0.2o-4.el8_6.x86_64.rpm
-rwxrwx--- 1 root vboxsf 2406058543 Feb 11 19:30 LINUX.X64_2326100_db_home.zip
-rwxrwx--- 1 root vboxsf 1089544451 Feb 11 19:28 LINUX.X64_2326100_grid_home.zip
[root@dbnode1 Oracle_26ai_RAC]# cp LINUX.X64_2326100_db_home.zip /u01/app/oracle/product/23.0.0/dbhome_1/
[root@dbnode1 Oracle_26ai_RAC]# cd /u01/app/oracle/product/23.0.0/dbhome_1/
[root@dbnode1 dbhome_1]# ll
-rwxr-x--- 1 root root 2406058543 Feb 17 17:40 LINUX.X64_2326100_db_home.zip
[root@dbnode1 dbhome_1]# chmod 777 LINUX.X64_2326100_db_home.zip
[root@dbnode1 dbhome_1]# exit
logout
[oracle@dbnode1 ~]$ cd /u01/app/oracle/product/23.0.0/dbhome_1/
[oracle@dbnode1 dbhome_1]$ id
uid=1002(oracle) gid=2000(oinstall) groups=2000(oinstall),2100(asmadmin),2200(dba),2300(oper),2400(asmdba),2500(asmoper)
[oracle@dbnode1 dbhome_1]$ unzip LINUX.X64_2326100_db_home.zip
Archive: LINUX.X64_2326100_db_home.zip
inflating: META-INF/MANIFEST.MF
inflating: META-INF/ORACLE_C.SF
inflating: META-INF/ORACLE_C.RSA
creating: OPatch/
.....
rdbms/mesg/ocisf.msb -> orasf.msb
rdbms/mesg/ocisk.msb -> orask.msb
rdbms/mesg/ocith.msb -> orath.msb
rdbms/mesg/ocitr.msb -> oratr.msb
rdbms/mesg/ocius.msb -> oraus.msb
rdbms/mesg/ocius.msg -> ./oraus.msg
rdbms/mesg/ocizhs.msb -> orazhs.msb
rdbms/mesg/ocizht.msb -> orazht.msb
[oracle@dbnode1 dbhome_1]$
[oracle@dbnode1 dbhome_1]$ pwd
/u01/app/oracle/product/23.0.0/dbhome_1
[oracle@dbnode1 dbhome_1]$ ll *.zip
-rwxrwxrwx 1 root root 2406058543 Feb 17 17:40 LINUX.X64_2326100_db_home.zip

Let's remove the copied zip file to release the space from the ORACLE_HOME mount point; otherwise, this file will also get copied to the other node during the software installation.

[oracle@dbnode1 dbhome_1]$ rm -rf LINUX.X64_2326100_db_home.zip

Now start the Oracle 26ai Database Software Installation by executing "runInstaller" from the ORACLE_HOME directory.

[oracle@dbnode1 ~]$ id
uid=1002(oracle) gid=2000(oinstall) groups=2000(oinstall),2100(asmadmin),2200(dba),2300(oper),2400(asmdba),2500(asmoper)
[oracle@dbnode1 ~]$ cd /u01/app/oracle/product/23.0.0/dbhome_1/
[oracle@dbnode1 dbhome_1]$ ll runInstaller
-rwxr-x--- 1 oracle oinstall 2957 Jun 7 2024 runInstaller
[oracle@dbnode1 dbhome_1]$ ./runInstaller

The prerequisite checks reported the failures and warnings below.

1) resolv.conf Integrity:
resolv.conf Integrity - This task checks consistency of the /etc/resolv.conf file across nodes
Check Failed on Nodes: [dbnode2, dbnode1]
Verification result of failed node: dbnode2
Details:
- PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: dbnode1,dbnode2
- Cause: The DNS response time for an unreachable node exceeded the value specified on nodes specified.
- Action: Make sure that ''options timeout'', ''options attempts'' and ''nameserver'' entries in file resolv.conf are proper. On HPUX these entries will be ''retrans'', ''retry'' and ''nameserver''. On Solaris these will be ''options retrans'', ''options retry'' and ''nameserver''. Make sure that the DNS server responds back to name lookup request within the specified time when looking up an unknown host name.
Verification result of failed node: dbnode1
Details:
- PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: dbnode1,dbnode2
- Cause: The DNS response time for an unreachable node exceeded the value specified on nodes specified.
- Action: Make sure that ''options timeout'', ''options attempts'' and ''nameserver'' entries in file resolv.conf are proper. On HPUX these entries will be ''retrans'', ''retry'' and ''nameserver''. On Solaris these will be ''options retrans'', ''options retry'' and ''nameserver''. Make sure that the DNS server responds back to name lookup request within the specified time when looking up an unknown host name.

Solution: Ignore this error since we are not using DNS here.

2) HugePages Existence:
HugePages Existence - Checks HugePages existence
Error:
- Refer to My Oracle Support notes "361323.1" for more details regarding errors "PRVE-0021".
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available
Verification WARNING result on node: dbnode1
Expected value : true
Actual value : false
Details:
- PRVE-0021 : HugePages feature is not enabled on nodes "dbnode1.localdomain"
- Cause: Available memory is greater than 4GB, but OS HugePages feature is not enabled.
- Action: If available memory is greater than 4GB, Oracle recommends configuring HugePages. Refer to OS documentation to configure HugePages.
Verification WARNING result on node: dbnode2
Expected value : true
Actual value : false
Details:
- PRVE-0021 : HugePages feature is not enabled on nodes "dbnode2.localdomain"
- Cause: Available memory is greater than 4GB, but OS HugePages feature is not enabled.
- Action: If available memory is greater than 4GB, Oracle recommends configuring HugePages. Refer to OS documentation to configure HugePages.

Solution: Ignore this warning message. This can be configured later after DB creation.

3) Clock Synchronization:
Clock Synchronization - This test checks the Oracle Cluster Time Synchronization Services across the cluster nodes.
Error:
- PRVG-1024 : The NTP daemon or Service was not running on any of the cluster nodes.
- Cause: The NTP daemon was not running on any of the cluster nodes.
- Action: Look at the accompanying error messages and respond accordingly.
Check Failed on Nodes: [dbnode2, dbnode1]
Verification result of failed node: dbnode2
Details:
- PRVF-7590 : "ntpd" is not running on node "dbnode2"
- Cause: The process identified is not running on the specified node.
- Action: Ensure that the identified process is started and running on the specified node. If it is one of the Clusterware daemons then you can use ''crsctl check'' command to check status.
- Liveness check failed for "ntpd"
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available
- PRVF-7590 : "chronyd" is not running on node "dbnode2"
- Cause: The process identified is not running on the specified node.
- Action: Ensure that the identified process is started and running on the specified node. If it is one of the Clusterware daemons then you can use ''crsctl check'' command to check status.
- Liveness check failed for "chronyd"
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available
Verification result of failed node: dbnode1
Details:
- PRVF-7590 : "ntpd" is not running on node "dbnode1"
- Cause: The process identified is not running on the specified node.
- Action: Ensure that the identified process is started and running on the specified node. If it is one of the Clusterware daemons then you can use ''crsctl check'' command to check status.
- Liveness check failed for "ntpd"
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available
- PRVF-7590 : "chronyd" is not running on node "dbnode1"
- Cause: The process identified is not running on the specified node.
- Action: Ensure that the identified process is started and running on the specified node. If it is one of the Clusterware daemons then you can use ''crsctl check'' command to check status.
- Liveness check failed for "chronyd"
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available

Solution: Ignore these messages since the Oracle CTSS process will take care of time synchronization, as the chronyd service has been disabled on all nodes.

4) Network Time Protocol (NTP):
Network Time Protocol (NTP) - This task verifies cluster time synchronization on clusters that use Network Time Protocol (NTP).
Error:
- PRVG-1024 : The NTP daemon or Service was not running on any of the cluster nodes.
- Cause: The NTP daemon was not running on any of the cluster nodes.
- Action: Look at the accompanying error messages and respond accordingly.
Check Failed on Nodes: [dbnode2, dbnode1]
Verification result of failed node: dbnode2
Details:
- PRVF-7590 : "ntpd" is not running on node "dbnode2"
- Cause: The process identified is not running on the specified node.
- Action: Ensure that the identified process is started and running on the specified node. If it is one of the Clusterware daemons then you can use ''crsctl check'' command to check status.
- Liveness check failed for "ntpd"
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available
- PRVF-7590 : "chronyd" is not running on node "dbnode2"
- Cause: The process identified is not running on the specified node.
- Action: Ensure that the identified process is started and running on the specified node. If it is one of the Clusterware daemons then you can use ''crsctl check'' command to check status.
- Liveness check failed for "chronyd"
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available
Verification result of failed node: dbnode1
Details:
- PRVF-7590 : "ntpd" is not running on node "dbnode1"
- Cause: The process identified is not running on the specified node.
- Action: Ensure that the identified process is started and running on the specified node. If it is one of the Clusterware daemons then you can use ''crsctl check'' command to check status.
- Liveness check failed for "ntpd"
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available
- PRVF-7590 : "chronyd" is not running on node "dbnode1"
- Cause: The process identified is not running on the specified node.
- Action: Ensure that the identified process is started and running on the specified node. If it is one of the Clusterware daemons then you can use ''crsctl check'' command to check status.
- Liveness check failed for "chronyd"
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available

Solution: Ignore these messages since the Oracle CTSS process will take care of time synchronization.

5) Single Client Access Name (SCAN):
Single Client Access Name (SCAN) - This test verifies the Single Client Access Name configuration.
Error:
- PRVH-1558 : The IP addresses "" that the SCAN name "dbnode-scan" resolved to do not match the IP addresses "10.20.30.105" configured for SCAN VIP resources.
- Cause: The Configuration Verification Utility (CVU) has determined that the indicated IP addresses that the SCAN name resolved to have been modified after the SCAN was created.
- Action: Ensure that the IP addresses that the SCAN name resolves to match the IP addresses configured for SCAN VIP resources by modifying the SCAN configuration using commands ''srvctl modify scan -scanname <scan_name>'' and ''srvctl modify scan_listener -update''.
- PRVG-1101 : SCAN name "dbnode-scan" failed to resolve
- Cause: An attempt to resolve specified SCAN name to a list of IP addresses failed because SCAN could not be resolved in DNS or GNS using ''nslookup''.
- Action: Check whether the specified SCAN name is correct. If SCAN name should be resolved in DNS, check the configuration of SCAN name in DNS.
If it should be resolved in GNS make sure that GNS resource is online.
Check Failed on Nodes: [dbnode2, dbnode1]
Verification result of failed node: dbnode2
Verification result of failed node: dbnode1

Solution: Ignore these messages since we are not using DNS here.

6) DNS/NIS name service 'dbnode-scan':
DNS/NIS name service 'dbnode-scan' - This test verifies that the Name Service lookups for the Distributed Name Server (DNS) and the Network Information Service (NIS) match for the SCAN name entries.
Error:
- PRVG-1101 : SCAN name "dbnode-scan" failed to resolve
- Cause: An attempt to resolve specified SCAN name to a list of IP addresses failed because SCAN could not be resolved in DNS or GNS using ''nslookup''.
- Action: Check whether the specified SCAN name is correct. If SCAN name should be resolved in DNS, check the configuration of SCAN name in DNS. If it should be resolved in GNS make sure that GNS resource is online.
Check Failed on Nodes: [dbnode1]
Verification result of failed node: dbnode1

Solution: Ignore these messages since we are not using DNS here.

[root@dbnode1 ~]# id
uid=0(root) gid=0(root) groups=0(root)
[root@dbnode1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@dbnode1 ~]# /u01/app/oracle/product/23.0.0/dbhome_1/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/23.0.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

[root@dbnode2 ~]# id
uid=0(root) gid=0(root) groups=0(root)
[root@dbnode2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@dbnode2 ~]# /u01/app/oracle/product/23.0.0/dbhome_1/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/23.0.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

Let's create the database now. Login as the oracle user and run dbca (Database Configuration Assistant) to start the DB creation.

[oracle@dbnode1 ~]$ id
uid=1002(oracle) gid=2000(oinstall) groups=2000(oinstall),2100(asmadmin),2200(dba),2300(oper),2400(asmdba),2500(asmoper)
[oracle@dbnode1 ~]$ cd /u01/app/oracle/product/23.0.0/dbhome_1/bin/
[oracle@dbnode1 bin]$ ll dbca
-rwxr-x--- 1 oracle oinstall 12723 Feb 17 19:24 dbca
[oracle@dbnode1 bin]$ ./dbca
[oracle@dbnode1 bin]$ ps -ef | grep pmon
grid 8416 1 0 12:53 ? 00:00:00 asm_pmon_+ASM1
oracle 76087 1 0 13:56 ? 00:00:00 ora_pmon_PR1
oracle 81389 11676 0 14:02 pts/0 00:00:00 grep --color=auto pmon

Oracle 26ai RAC installation completed successfully! 📝 Stay tuned for a detailed blog post on this case!
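The pmon check above can be reduced to a short pipeline that prints just the instance names running on a node. The ps_output variable below merely mimics the listing from this environment so the filter can be tried anywhere; on a live node you would feed a real process listing into the same awk step.

```shell
# Extract instance names (+ASM1, PR1, ...) from a pmon process listing.
# ps_output is sample data copied from the listing above; it stands in
# for the output of a real `ps -ef | grep pmon` on the node.
ps_output='grid 8416 1 0 12:53 ? 00:00:00 asm_pmon_+ASM1
oracle 76087 1 0 13:56 ? 00:00:00 ora_pmon_PR1'
echo "$ps_output" | awk '{n=$NF; sub(/.*pmon_/, "", n); print n}'
# prints: +ASM1 then PR1, one per line
```

One running asm_pmon and one ora_pmon per node is the expected steady state here: the ASM instance (+ASM1/+ASM2) from the Grid installation, plus the new database instance (PR1/PR2) created by dbca.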
Thanks for reading this post! Please comment if you liked it, and click FOLLOW to get future blog updates!