Oracle 19c 2-Node RAC Configuration Pre-requisites

Environment:
Database: Oracle 19c (2-node RAC)
Operating System: AIX 7.3
Step 1: Check operating system details.

#Command to check the minimum Technology Level (TL) and Service Pack (SP)
# oslevel -s
7300-03-00-2446

#To determine if the system architecture can run the Oracle software
# /usr/bin/getconf HARDWARE_BITMODE
64

#To determine if the system is started in 64-bit mode
# bootinfo -K
64

Step 2: Check that the required filesets are installed.

The following filesets must be installed:
bos.adt.base
bos.adt.lib
bos.adt.libm
bos.perf.libperfstat
bos.perf.perfstat
bos.perf.proctools
xlC.aix61.rte.13.1.2.0 or later
xlC.rte.13.1.2.0 or later
xlfrte
rsct.basic.rte
rsct.compat.clients.rte

Note: If you are using the minimum operating system TL level for AIX 7.3, then install the APAR fix IJ38518.

#To check whether the APAR is installed:
# /usr/sbin/instfix -ik IJ38518
There was no data for IJ38518 in the fix database.

#To determine if the required filesets are installed and committed
# lslpp -l bos.adt.base
  Fileset               Level    State      Description
  bos.adt.base          7.3.3.0  COMMITTED  Base Application Development Toolkit

# lslpp -l bos.adt.libm       (repeat for bos.adt.lib)
  Fileset               Level    State      Description
  bos.adt.libm          7.3.2.0  COMMITTED  Base Application Development Math Library

# lslpp -l bos.perf.libperfstat
  Fileset               Level    State      Description
  bos.perf.libperfstat  7.3.3.0  COMMITTED  Performance Statistics Library Interface

# lslpp -l bos.perf.perfstat
  Fileset               Level    State      Description
  bos.perf.perfstat     7.3.3.0  COMMITTED  Performance Statistics Interface

# lslpp -l bos.perf.proctools
  Fileset               Level    State      Description
  bos.perf.proctools    7.3.3.0  COMMITTED  Proc Filesystem Tools

# lslpp -l xlC.aix61.rte*
  Fileset               Level     State      Description
  xlC.aix61.rte         16.1.0.10 COMMITTED  IBM XL C++ Runtime for AIX 6.1 and later

# lslpp -l xlC.rte*
  Fileset               Level     State      Description
  xlC.rte               16.1.0.10 COMMITTED  IBM XL C++ Runtime for AIX

# lslpp -l rsct.basic.rte
  Fileset                  Level    State      Description
  rsct.basic.rte           3.3.3.0  COMMITTED  RSCT Basic Function

# lslpp -l rsct.compat.clients.rte
  Fileset                  Level    State      Description
  rsct.compat.clients.rte  3.3.3.0  COMMITTED  RSCT Event Management Client Function

Step 3: Make sure that the Oracle filesystems are NOT mounted with the CIO (Concurrent I/O) option.

#To check whether a filesystem is mounted with the CIO option:
1. Open /etc/filesystems.
2. Locate the stanza for the filesystem in question (e.g. /u01).
3. Under that stanza, check the 'options' line.

Example /etc/filesystems stanza (this /u01 IS mounted with cio and must be corrected):
...
/u01:
        options = cio,rw
...

Step 4: umask

The umask setting for the oracle and grid users must be 022. Setting the mask to 022 ensures that the user performing the software installation creates files with 644 permissions. Verify that the umask command displays a value of 22, 022, or 0022, and review the environment variables:

$ su - grid
$ umask
$ env
$ su - oracle
$ umask
$ env

Step 5: Hostname

The hostname command should return the fully qualified hostname (for example, racnode1.localdomain.com). Ensure that the computer host name is resolvable through a Domain Name System (DNS), a network information service (NIS), or a centrally maintained TCP/IP hosts file such as /etc/hosts. The IPs below are sample IPs only; add your actual IPs to the /etc/hosts file on all cluster nodes.

# cat /etc/hosts
#Public IP
10.20.30.101 racnode1.localdomain.com racnode1
10.20.30.102 racnode2.localdomain.com racnode2
#Private IP
10.1.2.201 racnode1-priv.localdomain.com racnode1-priv
10.1.2.202 racnode2-priv.localdomain.com racnode2-priv
#VIP
10.20.30.103 racnode1-vip.localdomain.com racnode1-vip
10.20.30.104 racnode2-vip.localdomain.com racnode2-vip
#SCAN
10.20.30.105 rac-scan.localdomain.com rac-scan
10.20.30.106 rac-scan.localdomain.com rac-scan
10.20.30.107 rac-scan.localdomain.com rac-scan

From both Node1 and Node2:
# ping racnode1        and  ping 10.20.30.101
# ping racnode2        and  ping 10.20.30.102
# ping racnode1-priv   and  ping 10.1.2.201
# ping racnode2-priv   and  ping 10.1.2.202
# ping racnode1-vip    and  ping 10.20.30.103
# ping racnode2-vip    and  ping 10.20.30.104
# ping rac-scan        and  ping 10.20.30.105/106/107

Note: At this stage, only the Public and Private IPs should respond to ping. The VIP and SCAN IPs must NOT respond. If a VIP or SCAN IP does respond, contact the sysadmin or network team immediately, because that address is already in use somewhere; otherwise root.sh will fail with an "IP is in use" error and you will have to scrap the entire installation.

Step 6: Shell Limits

Note: The shell limit values below are minimum values only. For production database systems, tune these values to optimize the performance of the system.
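The limit settings in this step can also be checked with a small script rather than eyeballing each stanza. A minimal sketch, assuming the stanza format shown later in this step: it parses an /etc/security/limits-style stanza and flags any oracle attribute that is not -1 (unlimited). The stanza below is sample data; on a real AIX node feed in /etc/security/limits itself.

```shell
# Flag any oracle limit attribute that is not -1 (unlimited).
# LIMITS_SAMPLE is illustrative; on AIX read /etc/security/limits.
LIMITS_SAMPLE='oracle:
        fsize = -1
        data = -1
        stack = -1
        rss = -1
        nofiles = 2000'
bad=$(printf '%s\n' "$LIMITS_SAMPLE" | awk '
    /^[a-z]/ { user = $1; sub(":", "", user) }            # stanza header line
    user == "oracle" && /=/ && $3 != "-1" { print $1 }')  # non-unlimited attrs
if [ -z "$bad" ]; then
    echo "oracle limits OK"
else
    echo "review these oracle limits: $bad"   # prints: review these oracle limits: nofiles
fi
```

The same loop can be pointed at the grid stanza by changing the user name in the awk condition.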
Shell Limit                  Minimum Value
---------------------------  -----------------------------------
Soft File Descriptors        at least 1024
Hard File Descriptors        at least 65536
Number of processes (Soft)   at least 2047
Number of processes (Hard)   at least 16384
Soft STACK size              at least 10240 KB
Hard STACK size              at least 10240 KB; at most 32768 KB
Soft FILE size               unlimited
Soft CPU time                unlimited (this is the default value)
Soft DATA segment            unlimited
Soft Real Memory size        unlimited

#To display the current values:
root@racnode1:/ > ulimit -a
core file size (blocks, -c)     unlimited
data seg size (kbytes, -d)      unlimited
file size (blocks, -f)          unlimited
max memory size (kbytes, -m)    unlimited
open files (-n)                 unlimited
pipe size (512 bytes, -p)       64
stack size (kbytes, -s)         unlimited
cpu time (seconds, -t)          unlimited
max user processes (-u)         unlimited
virtual memory (kbytes, -v)     unlimited

root@racnode1:/ > cat /etc/security/limits
* Sizes are in multiples of 512 byte blocks, CPU time is in seconds
*
* fsize        - soft file size in blocks
* core         - soft core file size in blocks
* cpu          - soft per process CPU time limit in seconds
* data         - soft data segment size in blocks
* stack        - soft stack segment size in blocks
* rss          - soft real memory usage in blocks
* nofiles      - soft file descriptor limit
* fsize_hard   - hard file size in blocks
* core_hard    - hard core file size in blocks
* cpu_hard     - hard per process CPU time limit in seconds
* data_hard    - hard data segment size in blocks
* stack_hard   - hard stack segment size in blocks
* rss_hard     - hard real memory usage in blocks
* nofiles_hard - hard file descriptor limit
*
* The following table contains the default hard values if the
* hard values are not explicitly defined:
*
* Attribute     Value
* ==========    ============
* fsize_hard    set to fsize
* cpu_hard      set to cpu
* core_hard     -1
* data_hard     -1
* stack_hard    8388608
* rss_hard      -1
* nofiles_hard  -1
*
* NOTE: A value of -1 implies "unlimited"

default:
        fsize = 2097151
        core = 2097151
        cpu = -1
        data = 262144
        rss = 65536
        stack = 65536
        nofiles = 2000

oracle:
        fsize = -1
        data = -1
        stack = -1
        rss = -1
        nofiles = -1
        threads = -1
        nproc = -1
        stack_hard = -1
        nofiles_hard = -1
        nproc_hard = -1

grid:
        fsize = -1
        data = -1
        stack = -1
        rss = -1
        nofiles = -1
        threads = -1
        nproc = -1
        stack_hard = -1
        nofiles_hard = -1
        nproc_hard = -1

Step 7: I/O Completion Ports

On IBM AIX on POWER Systems (64-bit), I/O completion ports (IOCP) must be enabled to ensure a successful database installation.

#Command to check whether IOCP is enabled:
$ lsdev | grep iocp
iocp0 Available I/O Completion Ports

Step 8: System Configuration Parameters

Verify that the kernel parameters shown below are set to values greater than or equal to the minimum values shown:

Parameter   Minimum Value
---------   -------------
maxuproc    16384
ncargs      128

Note: For production systems, maxuproc should be at least 128 plus the sum of the PROCESSES and PARALLEL_MAX_SERVERS initialization parameters for each database running on the system.

#To check the current values:
# lsattr -El sys0 -a maxuproc
# lsattr -El sys0 -a ncargs

Step 9: Asynchronous Input Output Processes

On AIX 6 and AIX 7, the Asynchronous Input Output (AIO) device drivers are enabled by default. Increase the number of aioserver processes from the default value. The recommended value for aio_maxreqs is 64K (65536).

#Check the aio_maxreqs value
# ioo -o aio_maxreqs
aio_maxreqs = 65536

Step 10: Tuning the AIX System Environment

A) Tuning the Virtual Memory Manager (VMM):

Parameter         Recommended Value
----------------  -----------------
minperm%          3
maxperm%          90
maxclient%        90
lru_file_repage   0
strict_maxclient  1
strict_maxperm    0

#Use the vmo command to set the virtual memory parameters:
vmo -p -o minperm%=3
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
vmo -p -o lru_file_repage=0
vmo -p -o strict_maxclient=1
vmo -p -o strict_maxperm=0

Note: Restart the server for the above parameter changes to take effect. The lru_file_repage parameter does not apply to IBM AIX 7.1 and later.
The strict_maxclient and strict_maxperm tunable parameters are deprecated in IBM AIX 7.2 and later.

#Command to check all VM parameter values at once:
# vmstat -v
OR
# vmo -a

B) Tuning the Virtual Processor Manager (VPM):

When CPU folding is enabled and the system folds down to only one CPU, the system may hang and suffer performance problems. Use the command below to check whether CPU folding is enabled.

# schedo -L | grep "vpm_xvcpus" | cut -d " " -f 17
0

Note: The above command returns 0 if CPU folding is enabled.

#Command to set the vpm_xvcpus parameter to at least 2:
# schedo -o vpm_xvcpus=2

C) Increasing System Block Size Allocation:

#To check the current value:
# lsattr -EH -l sys0 | grep ncargs

#Increase the space allocated for the ARG/ENV list to at least 128. The size is specified as a number of 4 KB blocks; this example sets 1024 blocks.
# /usr/sbin/chdev -l sys0 -a ncargs='1024'

D) Configuring the SSH LoginGraceTime Parameter for AIX:

On AIX systems, the OpenSSH LoginGraceTime parameter is commented out by default, and OpenSSH on AIX can sometimes produce timeout errors. To avoid these timeout errors:

- Log in as the root user.
- Open the file /etc/ssh/sshd_config (e.g. vi /etc/ssh/sshd_config).
- Locate the commented line #LoginGraceTime 2m.
- Uncomment the line and change the value to 0, i.e. LoginGraceTime 0. Here, 0 means unlimited.
- Save the file.
- Restart the SSH service.

E) Configuring User Process Parameters:

Ensure the maximum number of processes allowed for each user is set to 16384 or greater.

- Run the command: # smit chgsys
- Verify that the value shown for "Maximum number of PROCESSES allowed per user" is greater than or equal to 16384.
- Once the change is made, press Enter, then Esc+0 (Exit) to exit.

Note: For production systems, this value should be at least 128 plus the sum of the PROCESSES and PARALLEL_MAX_SERVERS initialization parameters for each database running on the system.
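The smit check in E) can also be performed non-interactively. A minimal sketch that validates maxuproc against the 16384 minimum; the `line` variable holds sample output mimicking `lsattr -El sys0 -a maxuproc`, so substitute the real command on an AIX node.

```shell
# Validate maxuproc against the 16384 minimum.
# The sample line mimics `lsattr -El sys0 -a maxuproc` output on AIX.
line='maxuproc 16384 Maximum number of PROCESSES allowed per user True'
maxuproc=$(printf '%s\n' "$line" | awk '{ print $2 }')   # second field is the value
if [ "$maxuproc" -ge 16384 ]; then
    echo "maxuproc OK ($maxuproc)"
else
    echo "maxuproc too low ($maxuproc); raise it via smit chgsys"
fi
```

The same pattern (parse the lsattr line, compare against the minimum) works for ncargs as well.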
F) Configuring Network Tuning Parameters:

#To check the current values of the network tuning parameters:
# no -a | more

#Command to check whether the system is running in compatibility mode:
# lsattr -E -l sys0 -a pre520tune
pre520tune enable Pre-520 tuning compatibility mode True

Here, "enable" means the system is running in compatibility mode.

Case 1: If the system is running in compatibility mode, change the network parameter values with:
# /usr/sbin/no -o udp_sendspace=65536
# /usr/sbin/no -o udp_recvspace=655360
# /usr/sbin/no -o tcp_sendspace=65536
# /usr/sbin/no -o tcp_recvspace=65536
# /usr/sbin/no -o rfc1323=1
# /usr/sbin/no -o sb_max=4194304
# /usr/sbin/no -o ipqmaxlen=512

If you want these values to persist across system restarts, add the following entries to the /etc/rc.net file for each parameter that you changed:
if [ -f /usr/sbin/no ] ; then
    /usr/sbin/no -o udp_sendspace=65536
    /usr/sbin/no -o udp_recvspace=655360
    /usr/sbin/no -o tcp_sendspace=65536
    /usr/sbin/no -o tcp_recvspace=65536
    /usr/sbin/no -o rfc1323=1
    /usr/sbin/no -o sb_max=4194304
    /usr/sbin/no -o ipqmaxlen=512
fi

#Use the chdev command to change the characteristics of a network interface without restarting the system, for example:
# chdev -l en5 -a rfc1323=1

Case 2: If the system is not running in compatibility mode, set each value persistently with:
# /usr/sbin/no -p -o parameter=value
For ipqmaxlen, which requires a restart to take effect, use:
# /usr/sbin/no -r -o ipqmaxlen=512

Note: Reboot the server after the above parameter changes.

G) Using Automatic SSH Configuration During Installation:

Oracle Clusterware installation can fail during the AttachHome operation when the remote node closes the SSH connection. To fix this problem, set the timeout wait to unlimited in the SSH daemon configuration file /etc/ssh/sshd_config on all cluster nodes:

# vi /etc/ssh/sshd_config
LoginGraceTime 0

Restart the SSH service after this change.

H) Running the rootpre.sh Script:

Run the rootpre.sh script as the root user, and only the first time you install Oracle Database on IBM AIX on POWER Systems (64-bit). You do not need to run rootpre.sh on an AIX server that already has Oracle Database software installed.

Steps to run the rootpre.sh script:
1) Download the Oracle Database installation software (db_home.zip) and extract the files into the Oracle home directory.
2) Switch to the root user:
$ su - root
password:
#
3) Run the rootpre.sh script:
# $ORACLE_HOME/clone/rootpre.sh

Step 11: User and Group Creation

# mkgroup -A id=54321 oinstall
# mkgroup -A id=54322 dba
# mkgroup -A id=54323 oper
# mkgroup -A id=54324 backupdba
# mkgroup -A id=54325 dgdba
# mkgroup -A id=54326 kmdba
# mkgroup -A id=54327 asmdba
# mkgroup -A id=54328 asmoper
# mkgroup -A id=54329 asmadmin
# mkgroup -A id=54330 racdba

#To create the Oracle Grid Infrastructure (grid) user:
# mkuser id=54331 pgrp=oinstall groups=dba,asmdba,asmoper,asmadmin,racdba grid

Here, pgrp is the primary group and groups are the secondary groups.

$ id grid
uid=54331(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54327(asmdba),54328(asmoper),54329(asmadmin),54330(racdba)

$ id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54327(asmdba),54330(racdba)

Note: Oracle Database software installation using an Active Directory (AD) account is not supported on IBM AIX on POWER Systems (64-bit).
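The group creation above can be driven from a small name/GID table to avoid typos in the IDs. This sketch only prints the mkgroup commands (a dry run, runnable anywhere); on AIX, run the printed commands as root, for example by piping the output to sh. The GIDs mirror the ones used in this step.

```shell
# Print the mkgroup command for every required group (dry run).
# On AIX, execute the printed commands as root to actually create them.
group_table='oinstall 54321
dba 54322
oper 54323
backupdba 54324
dgdba 54325
kmdba 54326
asmdba 54327
asmoper 54328
asmadmin 54329
racdba 54330'
cmds=$(printf '%s\n' "$group_table" | while read -r grp gid; do
    echo "mkgroup -A id=$gid $grp"
done)
printf '%s\n' "$cmds"
# first line printed: mkgroup -A id=54321 oinstall
```

Keeping the table in one place also makes it easy to replay the same GIDs identically on the second RAC node.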
Step 12: Directory Creation with Required Permissions

For GRID:
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/19.0.0/grid
# chown -R grid:oinstall /u01
# chmod -R 775 /u01

For RDBMS:
# mkdir -p /u01/app/oracle
# mkdir -p /u01/app/oraInventory
# chown -R oracle:oinstall /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oraInventory
# chmod -R 775 /u01/app

Note: ORACLE_HOME and ORACLE_BASE cannot be symlinks.

Step 13: User Capabilities

Ensure that the Oracle software owner users (oracle and grid) have the capabilities CAP_NUMA_ATTACH, CAP_BYPASS_RAC_VMM, and CAP_PROPAGATE.

#To check the existing capabilities:
# /usr/bin/lsuser -a capabilities grid
# /usr/bin/lsuser -a capabilities oracle

#To add the capabilities:
# /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
# /usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

Step 14: Network Configuration

An Oracle Clusterware configuration requires at least two interfaces:
1) Public Network
2) Private Network
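To see which adapters are candidates for the public and private roles on each node, the interface list can be summarized with a short script. A sketch that parses `netstat -in`-style output; the sample lines below are hypothetical, so on an AIX node pipe in the real `netstat -in` (or inspect `ifconfig -a`) instead.

```shell
# Summarize non-loopback interface names and addresses from
# `netstat -in`-style output. NETSTAT_SAMPLE is hypothetical sample data.
NETSTAT_SAMPLE='Name  Mtu   Network     Address
en0   1500  10.20.30    10.20.30.101
en1   1500  10.1.2      10.1.2.201
lo0   16896 127         127.0.0.1'
ifaces=$(printf '%s\n' "$NETSTAT_SAMPLE" | awk '
    NR > 1 && $1 != "lo0" { print $1 ": " $4 }')   # skip header and loopback
printf '%s\n' "$ifaces"
# prints:
# en0: 10.20.30.101
# en1: 10.1.2.201
```

In this sample, en0 (on the 10.20.30 subnet) would be the public interface and en1 (on the 10.1.2 subnet) the private interconnect, matching the /etc/hosts layout in Step 5.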
Note:
- You can use either the IPv4 protocol or the IPv6 protocol on a given network. If you are using interfaces with network bonding or teaming, do not configure one interface with IPv4 addresses and the other with IPv6 addresses; this is not supported. Both redundant interfaces must use the same IP protocol, i.e. both IPv4 or both IPv6.
- All the nodes in the cluster must use the same IP protocol configuration: either all nodes use only IPv4, or all nodes use only IPv6. You cannot have some nodes in the cluster supporting only IPv6 and other nodes supporting only IPv4 addresses.

#Command to check ethernet card properties:
# lsattr -El en3

A) Network Requirements for a Single Private Network Interface (without net bonding, single private Ethernet):
B) Network Requirements for a Redundant Interconnect (with net bonding, multiple private IPs):
Multicast Requirements for Networks Used by Oracle Grid Infrastructure:

Multicasting is required on the private interconnect, so at a minimum you must enable multicasting for the cluster:
- Across the broadcast domain as defined for the private interconnect
- On the IP address subnet ranges 224.0.0.0/24 and, optionally, 230.0.1.0/24

On each cluster member node, the Oracle mDNS daemon uses multicasting on all interfaces to communicate with the other nodes in the cluster. You do not need to enable multicast communications across routers.

Step 15: Storage Configuration

#Command to list the disks available on the DB node
root@racnode1:/ > cd /dev
root@racnode1:/dev > ls -ltr
root@racnode1:/ > lsdev -Cc disk
hdisk0  Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk1  Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk2  Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk3  Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk4  Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk5  Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk6  Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk7  Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk8  Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk9  Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk10 Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk11 Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk12 Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk13 Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk14 Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk15 Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk16 Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk17 Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk18 Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk19 Available C6-T0-01 MPIO IBM 2076 FC Disk
hdisk20 Available C6-T0-01 MPIO IBM 2076 FC Disk

Note: Ask the system admin team to rename the disks as shown below, so that the naming sequence is maintained when you add disks in the future.
For Example:
root@racnode1:/dev > ls -ltr *DISK*
crw-rw---- 1 grid asmadmin 13,  3 Mar 03 12:25 rASMDISKOCR1
brw-rw---- 1 root system   13,  3 Mar 03 12:25 ASMDISKOCR1
crw-rw---- 1 grid asmadmin 13,  4 Mar 03 12:25 rASMDISKOCR2
brw-rw---- 1 root system   13,  4 Mar 03 12:25 ASMDISKOCR2
crw-rw---- 1 grid asmadmin 13,  2 Mar 03 12:25 rASMDISKOCR3
brw-rw---- 1 root system   13,  2 Mar 03 12:25 ASMDISKOCR3
crw-rw---- 1 grid asmadmin 13, 16 Mar 03 12:25 rASMDISKREDO1
brw-rw---- 1 root system   13, 16 Mar 03 12:25 ASMDISKREDO1
crw-rw---- 1 grid asmadmin 13, 15 Mar 03 12:25 rASMDISKREDO2
brw-rw---- 1 root system   13, 15 Mar 03 12:25 ASMDISKREDO2
crw-rw---- 1 grid asmadmin 13, 18 Mar 03 12:25 rASMDISKARCH1
brw-rw---- 1 root system   13, 18 Mar 03 12:25 ASMDISKARCH1
crw-rw---- 1 grid asmadmin 13, 20 Mar 03 12:25 rASMDISKARCH2
brw-rw---- 1 root system   13, 20 Mar 03 12:25 ASMDISKARCH2
crw-rw---- 1 grid asmadmin 13,  5 Mar 03 12:25 rASMDISKDATA1
brw-rw---- 1 root system   13,  5 Mar 03 12:25 ASMDISKDATA1
crw-rw---- 1 grid asmadmin 13, 13 Mar 03 12:25 rASMDISKDATA2
brw-rw---- 1 root system   13, 13 Mar 03 12:25 ASMDISKDATA2
crw-rw---- 1 grid asmadmin 13, 10 Mar 03 12:25 rASMDISKDATA3
brw-rw---- 1 root system   13, 10 Mar 03 12:25 ASMDISKDATA3
crw-rw---- 1 grid asmadmin 13,  9 Mar 03 12:25 rASMDISKDATA4
brw-rw---- 1 root system   13,  9 Mar 03 12:25 ASMDISKDATA4
crw-rw---- 1 grid asmadmin 13, 12 Mar 03 12:25 rASMDISKDATA5
brw-rw---- 1 root system   13, 12 Mar 03 12:25 ASMDISKDATA5
crw-rw---- 1 grid asmadmin 13,  7 Mar 03 12:25 rASMDISKDATA6
brw-rw---- 1 root system   13,  7 Mar 03 12:25 ASMDISKDATA6
crw-rw---- 1 grid asmadmin 13,  8 Mar 03 12:25 rASMDISKDATA7
brw-rw---- 1 root system   13,  8 Mar 03 12:25 ASMDISKDATA7
crw-rw---- 1 grid asmadmin 13,  6 Mar 03 12:25 rASMDISKDATA8
brw-rw---- 1 root system   13,  6 Mar 03 12:25 ASMDISKDATA8
crw-rw---- 1 grid asmadmin 13, 11 Mar 03 12:25 rASMDISKDATA9
brw-rw---- 1 root system   13, 11 Mar 03 12:25 ASMDISKDATA9
crw-rw---- 1 grid asmadmin 13, 14 Mar 03 12:25 rASMDISKDATA10
brw-rw---- 1 root system   13, 14 Mar 03 12:25 ASMDISKDATA10

Note:
- You can check the size of either the ASMDISK* or the rASMDISK* devices.
- Ownership of the disks should look like this:
  For ASMDISK* (block devices):      brw-rw---- 1 root system   13, 3 Mar 03 12:25 ASMDISKOCR1
  For rASMDISK* (character devices): crw-rw---- 1 grid asmadmin 13, 3 Mar 03 12:25 rASMDISKOCR1

#Command to check the size (in MB) of each ASM disk
root@racnode1:/dev > bootinfo -s ASMDISKOCR1
51200
root@racnode1:/dev > bootinfo -s ASMDISKOCR2
51200
root@racnode1:/dev > bootinfo -s ASMDISKOCR3
51200
root@racnode1:/dev > bootinfo -s ASMDISKREDO1
204800
root@racnode1:/dev > bootinfo -s ASMDISKREDO2
204800
root@racnode1:/dev > bootinfo -s ASMDISKARCH1
524288
root@racnode1:/dev > bootinfo -s ASMDISKARCH2
524288
root@racnode1:/dev > bootinfo -s ASMDISKDATA1
512000
root@racnode1:/dev > bootinfo -s ASMDISKDATA2
512000
root@racnode1:/dev > bootinfo -s ASMDISKDATA3
512000
root@racnode1:/dev > bootinfo -s ASMDISKDATA4
512000
root@racnode1:/dev > bootinfo -s ASMDISKDATA5
512000
root@racnode1:/dev > bootinfo -s ASMDISKDATA6
512000
root@racnode1:/dev > bootinfo -s ASMDISKDATA7
512000
root@racnode1:/dev > bootinfo -s ASMDISKDATA8
512000
root@racnode1:/dev > bootinfo -s ASMDISKDATA9
512000
root@racnode1:/dev > bootinfo -s ASMDISKDATA10
512000

OR

root@racnode1:/dev > bootinfo -s rASMDISKOCR1
51200
root@racnode1:/dev > bootinfo -s rASMDISKOCR2
51200
root@racnode1:/dev > bootinfo -s rASMDISKOCR3
51200
root@racnode1:/dev > bootinfo -s rASMDISKREDO1
204800
root@racnode1:/dev > bootinfo -s rASMDISKREDO2
204800
root@racnode1:/dev > bootinfo -s rASMDISKARCH1
524288
root@racnode1:/dev > bootinfo -s rASMDISKARCH2
524288
root@racnode1:/dev > bootinfo -s rASMDISKDATA1
512000
root@racnode1:/dev > bootinfo -s rASMDISKDATA2
512000
root@racnode1:/dev > bootinfo -s rASMDISKDATA3
512000
root@racnode1:/dev > bootinfo -s rASMDISKDATA4
512000
root@racnode1:/dev > bootinfo -s rASMDISKDATA5
512000
root@racnode1:/dev > bootinfo -s rASMDISKDATA6
512000
root@racnode1:/dev > bootinfo -s rASMDISKDATA7
512000
root@racnode1:/dev > bootinfo -s rASMDISKDATA8
512000
root@racnode1:/dev > bootinfo -s rASMDISKDATA9
512000
root@racnode1:/dev > bootinfo -s rASMDISKDATA10
512000

#Command to list the disks which are not in use
root@racnode1:/ > lspv | grep -i none
hdisk1  none None
hdisk2  none None
hdisk3  none None
hdisk4  none None
hdisk5  none None
hdisk6  none None
hdisk7  none None
hdisk8  none None
hdisk9  none None
hdisk10 none None
hdisk11 none None
hdisk12 none None
hdisk13 none None
hdisk14 none None
hdisk15 none None
hdisk16 none None
hdisk17 none None

#Command to check the disk size
root@racnode1:/ > bootinfo -s hdisk1
51200
root@racnode1:/ > bootinfo -s hdisk2
51200
root@racnode1:/ > bootinfo -s hdisk3
512000
root@racnode1:/ > bootinfo -s hdisk4
512000
root@racnode1:/ > bootinfo -s hdisk5
512000
root@racnode1:/ > bootinfo -s hdisk6
512000
root@racnode1:/ > bootinfo -s hdisk7
512000
root@racnode1:/ > bootinfo -s hdisk8
512000
root@racnode1:/ > bootinfo -s hdisk9
512000
root@racnode1:/ > bootinfo -s hdisk10
512000
root@racnode1:/ > bootinfo -s hdisk11
512000
root@racnode1:/ > bootinfo -s hdisk12
512000
root@racnode1:/ > bootinfo -s hdisk13
204800
root@racnode1:/ > bootinfo -s hdisk14
204800
root@racnode1:/ > bootinfo -s hdisk15
524288
root@racnode1:/ > bootinfo -s hdisk16
524288

Step 16: Once the above pre-requisites are completed, proceed with the following verification checks and start the RAC installation activity:
1) Configure SSH (user equivalence) for both the grid and oracle users.
2) Run the cluvfy pre-installation verification.
3) Download the GRID and RDBMS setups.
4) Download the latest RU and the recommended patches from MOS Doc 555.1.
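Before running cluvfy, the /etc/hosts entries from Step 5 can be sanity-checked with a one-liner, for example confirming that the SCAN name has three address entries as recommended. A sketch using a sample hosts fragment; on a cluster node, read /etc/hosts itself.

```shell
# Count how many addresses map to the SCAN name in an /etc/hosts-style
# fragment. HOSTS_SAMPLE is sample data; on a node use /etc/hosts.
HOSTS_SAMPLE='10.20.30.105 rac-scan.localdomain.com rac-scan
10.20.30.106 rac-scan.localdomain.com rac-scan
10.20.30.107 rac-scan.localdomain.com rac-scan'
scan_count=$(printf '%s\n' "$HOSTS_SAMPLE" | awk '$3 == "rac-scan" { n++ } END { print n+0 }')
echo "SCAN address entries: $scan_count"   # prints: SCAN address entries: 3
```

The same awk filter, with the short name changed, can verify that each public, private, and VIP name appears exactly once per node.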
Thanks for reading this post! Please comment if you liked it, and click Follow to get future blog updates.