Source DB Server:
Operating System: Solaris (x86-64) 11.4
Database: PR
DB Unique Name: PR
Compatible: 12.1.0.2.0
Single Instance/RAC/Oracle Restart: Oracle Restart
DB Version: Oracle 19.0.0.0 EE
Target DB Server:
Operating System: Redhat Linux 8.8 64 Bit
Database: PR
DB Unique Name: PR_LINUX
Compatible: 19.0.0
Single Instance/RAC/Oracle Restart: Oracle Restart
DB Version: Oracle 19.0.0.0 EE
Note: The COMPATIBLE parameter value in the standby database must always be higher than or equal to the value in the primary database. Here, I am not changing the target Linux DB's COMPATIBLE parameter since this is a testing environment and rollback is not required. For a production environment, I suggest changing it, because it may be needed for reverse replication or a rollback activity, where your Linux DB (new primary) will be running with 19.0.0 after the switchover while the Solaris DB (new standby) will still be running with 12.1.0.2.0.
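As a quick reference, the current value can be checked on both sides, and raised on the standby if you decide to match the primary. A minimal sketch; note that raising COMPATIBLE is irreversible and only takes effect after a restart:

```sql
-- Check the current value on both primary and standby
show parameter compatible

-- Raise it on the Linux standby if you decide to match 19.0.0
-- (irreversible; requires a database restart to take effect)
alter system set compatible='19.0.0' scope=spfile;
shutdown immediate
startup
```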
If you want to migrate an Oracle database from one operating system to a different operating system platform, you must first check whether the endian formats match. If they do not match, you have to use Oracle's XTT convert method to perform the cross-platform migration. If they do match, you can simply create a Data Guard standby and keep DC-DR in sync until the final Go Live stage. Once Go Live is reached, you can perform a switchover operation and the application can access the new database on the new server.
Refer to the below steps to perform the entire migration activity from Solaris to Linux.
Step 1: First, check the ENDIAN_FORMAT for both the source and target server operating systems.
Execute the below query to check the ENDIAN format.

set lines 300 pages 3000
col platform_name for a40
SELECT platform_name, endian_format, platform_id
FROM v$transportable_platform
ORDER BY platform_name;

PLATFORM_NAME                            ENDIAN_FORMAT  PLATFORM_ID
---------------------------------------- -------------- -----------
AIX-Based Systems (64-bit)               Big                      6
Apple Mac OS                             Big                     16
Apple Mac OS (x86-64)                    Little                  21
HP IA Open VMS                           Little                  19
HP Open VMS                              Little                  15
HP Tru64 UNIX                            Little                   5
HP-UX (64-bit)                           Big                      3
HP-UX IA (64-bit)                        Big                      4
IBM Power Based Linux                    Big                     18
IBM zSeries Based Linux                  Big                      9
Linux IA (32-bit)                        Little                  10
Linux IA (64-bit)                        Little                  11
Linux OS (S64)                           Big                     22
Linux x86 64-bit                         Little                  13
Microsoft Windows IA (32-bit)            Little                   7
Microsoft Windows IA (64-bit)            Little                   8
Microsoft Windows x86 64-bit             Little                  12
Solaris Operating System (x86)           Little                  17
Solaris Operating System (x86-64)        Little                  20
Solaris[tm] OE (32-bit)                  Big                      1
Solaris[tm] OE (64-bit)                  Big                      2

21 rows selected.
Here, the source operating system is Solaris x86-64 and the target operating system is Linux x86-64. The endian format of both source and target operating systems is the same, i.e. Little.
Step 2: Now check the Data Guard matrix for both the source and target operating systems.
If you have an Oracle Support license, you can refer to the below doc for certification matrix checks.
As per the above doc, a heterogeneous Data Guard configuration is supported from the current operating system "Solaris x86-64 Bit" to the target operating system "Linux x86-64 Bit".
Short Plan of action to perform Migration activity from Solaris to Linux:
Here, the user may ask to retain the existing DB IPs or to use new IPs. If the user asks to retain the existing DB IPs, then you need to configure the new Linux servers with new IPs and swap the IPs during the downtime window. If the user does not ask to retain the existing DB IPs, then there is no need to touch the existing IPs.
Activity Plan:
Phase 1: Raise request with user to work on server readiness which matches existing server configuration.
Phase 2: Oracle Restart installation and configuration with latest patches and recommended proactive plus bug patches.
Phase 3: Configure DR and enable sync between Solaris DB to Linux DB and keep the DBs in sync till Go Live.
Phase 4: During Go Live phase, perform switchover operation.
Phase 5: Point the application to the new Linux DB servers.
Rollback Plan:
Phase 1: Enable reverse replication from Linux(New Primary) to Solaris(New Standby).
Phase 2: If the application is not working on the new Linux server, then simply perform a switchback from Linux to Solaris.
Phase 3: Check the sync between DC-DR and route the application to Solaris Server.
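For reference, the switchover itself (Phase 4 of the activity plan) can be done with the classic SQL*Plus commands. A sketch, assuming no Data Guard Broker is configured and using the DB_UNIQUE_NAME values from this setup:

```sql
-- On the Solaris primary: verify readiness, then switch over
alter database switchover to PR_LINUX verify;
alter database switchover to PR_LINUX;

-- On the new Linux primary
alter database open;

-- On the Solaris server, now the new standby
startup mount
alter database recover managed standby database disconnect from session;
```

The VERIFY clause (available from 12c onward) checks redo transport and MRP status on the target before committing to the role change.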
Step 3: First capture details from the existing Solaris DB server; based on these, the new Linux DB server will be prepared.
1) ASM Disk group Total Size:

set lines 300 pages 3000
col ASM_COMPA for a10
col DB_COMPA for a10
col name for a8
col path for a30
select a.NAME, a.STATE, a.TYPE, a.TOTAL_MB/1024, a.FREE_MB/1024,
       a.COMPATIBILITY ASM_COMPA, a.DATABASE_COMPATIBILITY DB_COMPA, b.path
from v$asm_diskgroup a, v$asm_disk b
where a.group_number = b.group_number
order by 1;

By grid user:
ASMCMD> lsdg
2) Hardware Requirement:
Database - PR

Disk Group/File System | Size (GB) | Description
---------------------- | --------- | ------------------------------------------------------
DATA                   | 4096      | 8 disks of 500 GB each (ASM Disk)
REDO1                  | 50        | 1 disk of 50 GB (ASM Disk)
REDO2                  | 50        | 1 disk of 50 GB (ASM Disk)
/oracle                | 100       | 1 disk of 100 GB (Local FS)
/grid                  | 100       | 1 disk of 100 GB (Local FS)
/arch                  | 500       | Considering current arch growth and future requirement
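To validate the /arch sizing against actual archive generation on the source, the daily redo volume can be measured. A sketch against the standard v$archived_log view (the 30-day window is an assumption; adjust to your retention needs):

```sql
-- Daily archive log generation in GB on the source (Solaris) DB
select trunc(completion_time) day,
       round(sum(blocks * block_size) / 1024 / 1024 / 1024, 2) gb_generated
from v$archived_log
where completion_time > sysdate - 30
group by trunc(completion_time)
order by 1;
```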
3) For configuring Oracle software on RHEL 8.x, ensure the kernel is at or above the minimum level below.
Minimum kernel version: 4.18.0-80.el8.x86_64 or later
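A minimal shell sketch to verify that the running kernel meets this minimum (assumes GNU sort with the -V version-comparison option, standard on RHEL):

```shell
#!/bin/sh
# Returns success if the supplied kernel version string is >= the RHEL 8 minimum.
kernel_ok() {
  required="4.18.0-80.el8.x86_64"
  # sort -V orders version strings; if the minimum sorts first, we are at or above it
  [ "$(printf '%s\n%s\n' "$required" "$1" | sort -V | head -n1)" = "$required" ]
}

if kernel_ok "$(uname -r)"; then
  echo "Kernel $(uname -r) meets the minimum"
else
  echo "Kernel $(uname -r) is below the required minimum"
fi
```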
4) For CPU, memory, and swap requirements, refer to the existing servers; these can be increased later considering DB load.
Step 4: Once the target Linux operating system is ready, download the Oracle Grid and Oracle Database software. Click the below links to download the GRID+DB software from the Oracle sites.
You can download the latest patches if you have a support license. Here, I am sharing links to the 19.17 patches; you will be able to download them after entering your username/password, provided you have a support license.
RDBMS Recommended Patches for 19.17 DB Home
Transfer these patches and setup files to target Linux DB server.
Step 5: Start the Oracle Restart and RDBMS software Installation and Configuration.
Now it's time to configure DR on the new Linux servers. I have used the "rman duplicate for standby" method to configure DR.
Step 6: Capture below parameters from existing database hosted on Solaris system.
Parameter Name          | Parameter Value (Solaris) | Parameter Value (Linux)
----------------------- | ------------------------- | -----------------------
COMPATIBLE              | 12.1.0.2.0                | 19.0.0
NLS_CHARACTERSET        | AL32UTF8                  |
NLS_NCHAR_CHARACTERSET  | AL16UTF16                 |
NLS_DATE_FORMAT         | DD-MON-RR                 |
NLS_SORT                | BINARY                    |
NLS_LENGTH_SEMANTICS    | BYTE                      |
SGA_TARGET              | 300 GB                    |
SGA_MAX_SIZE            | 400 GB                    |
PGA_AGGREGATE_TARGET    | 60 GB                     |
PGA_AGGREGATE_LIMIT     | 64 GB                     |
DB_FILES                | 3000                      |
PROCESSES               | 3000                      |
SESSIONS                | 3776                      | 4560
TRANSACTIONS            | 5016                      |
OPEN_CURSORS            | 1000                      |
SHARED_POOL_SIZE        | 10737418240               |
DB_CACHE_SIZE           |                           |
LOG_BUFFER              |                           |
JOB_QUEUE_PROCESSES     | 1140                      |
CURSOR_SHARING          | FORCE                     |
MEMORY_TARGET           | 0                         |
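The above values can be captured from the source database in one pass. A sketch using the standard v$parameter view (the IN list mirrors the table above; extend it as needed):

```sql
set lines 300 pages 3000
col name for a30
col value for a40
select name, value
from v$parameter
where upper(name) in ('COMPATIBLE','SGA_TARGET','SGA_MAX_SIZE',
                      'PGA_AGGREGATE_TARGET','PGA_AGGREGATE_LIMIT',
                      'DB_FILES','PROCESSES','SESSIONS','TRANSACTIONS',
                      'OPEN_CURSORS','SHARED_POOL_SIZE','DB_CACHE_SIZE',
                      'LOG_BUFFER','JOB_QUEUE_PROCESSES','CURSOR_SHARING',
                      'MEMORY_TARGET')
order by name;
```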
Capture online & standby redo log and temp tablespace details:
#Query to check the default temporary tablespace in the database
col property_name for a35
col property_value for a35
select property_name, property_value
from database_properties
where property_name = 'DEFAULT_TEMP_TABLESPACE';

#Query to check temporary tablespace and temp file details
set lines 300 pages 3000
col file_name for a60
col tablespace_name for a25
col status for a18
select tablespace_name, file_id, file_name, status,
       bytes/1024/1024/1024 SIZE_GB,
       maxbytes/1024/1024/1024 MAX_SIZE_GB,
       autoextensible
from dba_temp_files
order by 2;

#Query to check all NLS parameters set in the source Solaris database. Ensure these are the same in the target Linux DB server as well.
col property_name for a35
col property_value for a35
select property_name, property_value
from database_properties
where property_name like '%NLS%';
Copy the below files from the Solaris server to the Linux DB server into their respective locations.
- parameter file
- password file
- tnsnames.ora
- listener.ora
- sqlnet.ora
- wallet files
Modify the above files as per the new environment.
Pfile (Solaris):

$ cat initPR.ora
*.aq_tm_processes=0
*.audit_file_dest='/oracle/admin/PR/adump'
*.audit_sys_operations=TRUE
*.audit_trail='DB'
*.compatible='12.1.0.2.0'
*.control_files='+DATA/PR/control01.ctl','+REDO1/PR/control02.ctl'
*.cursor_sharing='FORCE'
*.db_block_size=8192
*.db_cache_size=64424509440
*.db_domain=''
*.db_files=300
*.db_keep_cache_size=6442450944
*.db_name='PR'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=3g
*.db_securefile='ALWAYS'
*.db_writer_processes=10
*.diagnostic_dest='/oracle/app/oracle/diag'
*.dispatchers=''
*.enable_goldengate_replication=TRUE
*.fast_start_mttr_target=200
*.filesystemio_options='setall'
*.log_archive_config='DG_CONFIG=(PR,PRDG)'
*.log_archive_dest_1='LOCATION=+ARCH'
*.log_archive_dest_2='SERVICE=PRDG LGWR ASYNC NOAFFIRM VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRDG'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='PR_%t_%s_%r.arc'
*.log_archive_max_processes=4
*.log_archive_trace=0
*.open_cursors=1000
*.optimizer_capture_sql_plan_baselines=FALSE
*.optimizer_index_caching=0
*.optimizer_index_cost_adj=100
*.pga_aggregate_target=42949672960
*.processes=2500
*.recyclebin='OFF'
*.redo_transport_user=''
*.remote_login_passwordfile='EXCLUSIVE'
*.resource_limit=TRUE
*.resource_manager_plan=''
*.sec_max_failed_login_attempts=3
*.sec_protocol_error_further_action='DELAY,3'
*.sec_protocol_error_trace_action='LOG'
*.session_cached_cursors=500
*.sessions=1655
*.sga_max_size=163745628160
*.sga_target=163745628160
*.shared_pool_size=21474836480
*.standby_file_management='AUTO'
*.star_transformation_enabled='FALSE'
*.streams_pool_size=4831838208
*.undo_management='AUTO'
*.undo_retention=3600
*.undo_tablespace='UNDOTBS1'

Updated Pfile (Linux):

$ cat initPR.ora
*.audit_file_dest='/oracle/app/oracle/adump'
*.audit_trail='DB'
*.compatible='19.0.0'
*.control_files='+DATA'
*.core_dump_dest='/oracle/app/oracle/diag/rdbms/PR_linux/PR/cdump'
*.cursor_sharing='FORCE'
*.db_block_size=8192
*.db_cache_size=234G
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+REDO1'
*.db_create_online_log_dest_2='+REDO2'
*.db_files=2000
*.db_name='PR'
*.db_unique_name='PR_LINUX'
*.diagnostic_dest='/oracle/app/oracle_base'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=PRXDB)'
*.filesystemio_options='SETALL'
*.java_jit_enabled=TRUE
*.java_pool_size=512M
*.job_queue_processes=576
*.log_archive_config='DG_CONFIG=(PR,PRDG,PR_LINUX)'
*.log_archive_dest_1='LOCATION=+ARCH'
*.log_buffer=472992K
*.log_checkpoint_timeout=0
*.nls_date_format='YYYYMMDD'
*.nls_language='AMERICAN'
*.nls_sort='BINARY'
*.nls_territory='AMERICA'
*.open_cursors=1000
*.optimizer_adaptive_plans=FALSE
*.optimizer_index_cost_adj=10
*.optimizer_mode='ALL_ROWS'
*.parallel_degree_policy='MANUAL'
*.pga_aggregate_target=60G
*.plsql_warnings='DISABLE:ALL'
*.processes=1500
*.query_rewrite_enabled='TRUE'
*.recyclebin='OFF'
*.remote_login_passwordfile='EXCLUSIVE'
*.result_cache_max_size=83904K
*.session_cached_cursors=300
*.sga_max_size=400G
*.sga_target=300G
*.shared_pool_size=10737418240
*.shared_servers=0
*.streams_pool_size=1G
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
tnsnames.ora (Solaris):

Note: Use your actual server IPs.

PR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = PR_SERVER_IP)(PORT = 1530))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PR)
    )
  )

PR_LINUX =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = LINUX_SERVER_IP)(PORT = 1535))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PR)
    )
  )

Updated tnsnames.ora (Linux):

PR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = PR_SERVER_IP)(PORT = 1530))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PR)
    )
  )

PR_LINUX =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = LINUX_SERVER_IP)(PORT = 1535))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PR)
    )
  )
listener.ora (Solaris):

$ cat listener.ora
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = PR)
      (ORACLE_HOME = /oracle/app/oracle/product/19.0.0/dbhome_1)
      (SID_NAME = PR)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = PR_SERVER_IP)(PORT = 1530))
    )
  )

Updated listener.ora (Linux):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = PR)
      (ORACLE_HOME = /oracle/app/oracle/product/19.0.0/dbhome_1)
      (SID_NAME = PR)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = LINUX_SERVER_IP)(PORT = 1535))
    )
  )
sqlnet.ora (Solaris):

$ cat sqlnet.ora
ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)(METHOD_DATA=
    (DIRECTORY=/oracle/app/oracle/admin/PR/wallet)))
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8

Updated sqlnet.ora (Linux):

$ cat sqlnet.ora
ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)(METHOD_DATA=
    (DIRECTORY=/oracle/app/oracle/admin/PR_LINUX/wallet)))
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
Ensure the respective PORTs are open. Start the listener on Linux DB Server and check tnsping and sqlplus connectivity from both Solaris and Linux servers.
$lsnrctl start
$lsnrctl status
$tnsping PR
$tnsping PR_LINUX
$sqlplus sys/sys123@PR as sysdba
$sqlplus sys/sys123@PR_LINUX as sysdba
Step 7: Create and configure script to execute RMAN duplicate.
First check the connectivity from Solaris to Linux and vice versa.

On Solaris Server:
sqlplus sys/sys123@PR as sysdba
sqlplus sys/sys123@PR_LINUX as sysdba

On Linux Server:
sqlplus sys/sys123@PR as sysdba
sqlplus sys/sys123@PR_LINUX as sysdba
If connectivity is fine between Solaris and Linux and vice versa, then go ahead with the RMAN duplicate command.
RMAN Duplicate Script Configuration:

On Linux Server:

$ vi /home/oracle/rman_dup.sh
export ORACLE_SID=PR
export ORACLE_BASE=/oracle/app/oracle
export ORACLE_HOME=/oracle/app/oracle/product/19.0.0/dbhome_1
export PATH=/oracle/app/oracle/product/19.0.0/dbhome_1/bin:$PATH
rman target sys/India#1234@PR auxiliary sys/sys123@PR_LINUX msglog /home/oracle/RMAN_DUP.log cmdfile=/home/oracle/rman_dup.rcv

$ vi rman_dup.rcv
run
{
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
allocate channel d5 type disk;
allocate channel d6 type disk;
allocate channel d7 type disk;
allocate channel d8 type disk;
allocate channel d9 type disk;
allocate channel d10 type disk;
allocate auxiliary channel a1 type DISK;
allocate auxiliary channel a2 type DISK;
allocate auxiliary channel a3 type DISK;
allocate auxiliary channel a4 type DISK;
allocate auxiliary channel a5 type DISK;
allocate auxiliary channel a6 type DISK;
allocate auxiliary channel a7 type DISK;
allocate auxiliary channel a8 type DISK;
allocate auxiliary channel a9 type DISK;
allocate auxiliary channel a10 type DISK;
duplicate target database for standby from active database nofilenamecheck dorecover;
}
Login to the Linux DB server as the oracle user and set the environment variables:

$ hostname
$ export ORACLE_SID=PR
$ export ORACLE_BASE=/oracle/app/oracle
$ export ORACLE_HOME=/oracle/app/oracle/product/19.0.0/dbhome_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ env | grep ORA
$ sqlplus / as sysdba
SQL> startup nomount

#Execute the below RMAN duplicate script in nohup to start the duplicate.
$ nohup sh rman_dup.sh &

#Execute the jobs command to ensure the nohup command is running.
$ jobs
$ tail -400f RMAN_DUP.log

#Take another SQLPLUS session and check the RMAN restoration status.

set lines 300 pages 3000
col OPNAME for a30
select sid, serial#, opname, sofar, totalwork,
       round(sofar/totalwork*100,2) "Completed%",
       elapsed_seconds/60 ELT_Minutes, time_remaining
from v$session_longops
where opname like '%RMAN%'
  and opname not like '%aggregate%'
  and totalwork != 0
  and sofar <> totalwork
order by 6 desc;

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';
set lines 300 pages 3000
set long 99999
col COMPLETED_PCT for a15
select a.sid, a.opname,
       to_char(100*(sofar/totalwork), '990.9')||'%' COMPLETED_PCT,
       sysdate+(time_remaining/60/60/24) COMPLETED_BY
from v$session_longops a, v$session b
where a.sid = b.sid
  and a.serial# = b.serial#
  and a.sid in (select sid from v$session
                where module like 'backup%' or module like '%restore%' or module like '%rman%')
  and sofar != totalwork
  and totalwork > 0;
The complete logfile of the RMAN duplicate command: $ cat RMAN_DUP.log Recovery Manager: Release 19.0.0.0.0 - Production on Thu Jul 20 02:01:59 2023 Version 19.17.0.0.0 Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved. PL/SQL package SYS.DBMS_BACKUP_RESTORE version 19.13.00.00 in TARGET database is not current PL/SQL package SYS.DBMS_RCVMAN version 19.13.00.00 in TARGET database is not current connected to target database: PR (DBID=459681837) connected to auxiliary database: PR (not mounted) RMAN> run { 2> allocate channel d1 type disk; 3> allocate channel d2 type disk; 4> allocate channel d3 type disk; 5> allocate channel d4 type disk; 6> allocate channel d5 type disk; 7> allocate channel d6 type disk; 8> allocate channel d7 type disk; 9> allocate channel d8 type disk; 10> allocate channel d9 type disk; 11> allocate channel d10 type disk; 12> allocate auxiliary channel a1 type DISK; 13> allocate auxiliary channel a2 type DISK; 14> allocate auxiliary channel a3 type DISK; 15> allocate auxiliary channel a4 type DISK; 16> allocate auxiliary channel a5 type DISK; 17> allocate auxiliary channel a6 type DISK; 18> allocate auxiliary channel a7 type DISK; 19> allocate auxiliary channel a8 type DISK; 20> allocate auxiliary channel a9 type DISK; 21> allocate auxiliary channel a10 type DISK; 22> duplicate target database for standby from active database nofilenamecheck dorecover; 23> } 24> using target database control file instead of recovery catalog allocated channel: d1 channel d1: SID=628 device type=DISK allocated channel: d2 channel d2: SID=748 device type=DISK allocated channel: d3 channel d3: SID=767 device type=DISK allocated channel: d4 channel d4: SID=7 device type=DISK allocated channel: d5 channel d5: SID=63 device type=DISK allocated channel: d6 channel d6: SID=205 device type=DISK allocated channel: d7 channel d7: SID=244 device type=DISK allocated channel: d8 channel d8: SID=270 device type=DISK allocated channel: d9 channel d9: 
SID=284 device type=DISK allocated channel: d10 channel d10: SID=308 device type=DISK allocated channel: a1 channel a1: SID=1297 device type=DISK allocated channel: a2 channel a2: SID=1321 device type=DISK allocated channel: a3 channel a3: SID=1345 device type=DISK allocated channel: a4 channel a4: SID=1369 device type=DISK allocated channel: a5 channel a5: SID=1393 device type=DISK allocated channel: a6 channel a6: SID=1417 device type=DISK allocated channel: a7 channel a7: SID=1441 device type=DISK allocated channel: a8 channel a8: SID=1465 device type=DISK allocated channel: a9 channel a9: SID=1489 device type=DISK allocated channel: a10 channel a10: SID=1513 device type=DISK Starting Duplicate Db at 20-JUL-23 current log archived contents of Memory Script: { backup as copy reuse passwordfile auxiliary format '/oracle/app/oracle/product/19.0.0/dbhome_1/dbs/orapwPR' ; } executing Memory Script Starting backup at 20-JUL-23 Finished backup at 20-JUL-23 duplicating Online logs to Oracle Managed File (OMF) location duplicating Datafiles to Oracle Managed File (OMF) location contents of Memory Script: { restore clone from service 'PR' standby controlfile; } executing Memory Script Starting restore at 20-JUL-23 channel a1: starting datafile backup set restore channel a1: using network backup set from service PR channel a1: restoring control file channel a1: restore complete, elapsed time: 00:00:01 output file name=+DATA/PR_LINUX/con01.ctl output file name=+DATA/PR_LINUX/con02.ctl Finished restore at 20-JUL-23 contents of Memory Script: { sql clone 'alter database mount standby database'; } executing Memory Script sql statement: alter database mount standby database contents of Memory Script: { set newname for clone tempfile 1 to new; set newname for clone tempfile 2 to new; set newname for clone tempfile 3 to new; switch clone tempfile all; set newname for clone datafile 1 to new; set newname for clone datafile 2 to new; set newname for clone datafile 3 to new; set 
newname for clone datafile 4 to new; set newname for clone datafile 5 to new; set newname for clone datafile 6 to new; set newname for clone datafile 7 to new; set newname for clone datafile 8 to new; set newname for clone datafile 9 to new; set newname for clone datafile 10 to new; set newname for clone datafile 11 to new; set newname for clone datafile 12 to new; set newname for clone datafile 13 to new; set newname for clone datafile 14 to new; set newname for clone datafile 15 to new; set newname for clone datafile 16 to new; set newname for clone datafile 17 to new; set newname for clone datafile 18 to new; set newname for clone datafile 19 to new; set newname for clone datafile 20 to new; set newname for clone datafile 21 to new; set newname for clone datafile 22 to new; set newname for clone datafile 23 to new; set newname for clone datafile 24 to new; restore from service 'PR' clone database ; sql 'alter system archive log current'; } executing Memory Script executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME renamed tempfile 1 to +DATA in control file renamed tempfile 2 to +DATA in control file renamed tempfile 3 to +DATA in control file executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME Starting restore at 20-JUL-23 channel a1: 
starting datafile backup set restore channel a1: using network backup set from service PR channel a1: specifying datafile(s) to restore from backup set channel a1: restoring datafile 00001 to +DATA channel a2: starting datafile backup set restore channel a2: using network backup set from service PR channel a2: specifying datafile(s) to restore from backup set channel a2: restoring datafile 00002 to +DATA channel a3: starting datafile backup set restore channel a3: using network backup set from service PR channel a3: specifying datafile(s) to restore from backup set channel a3: restoring datafile 00003 to +DATA channel a4: starting datafile backup set restore channel a4: using network backup set from service PR channel a4: specifying datafile(s) to restore from backup set channel a4: restoring datafile 00004 to +DATA channel a5: starting datafile backup set restore channel a5: using network backup set from service PR channel a5: specifying datafile(s) to restore from backup set channel a5: restoring datafile 00005 to +DATA channel a6: starting datafile backup set restore channel a6: using network backup set from service PR channel a6: specifying datafile(s) to restore from backup set channel a6: restoring datafile 00006 to +DATA channel a7: starting datafile backup set restore channel a7: using network backup set from service PR channel a7: specifying datafile(s) to restore from backup set channel a7: restoring datafile 00007 to +DATA channel a8: starting datafile backup set restore channel a8: using network backup set from service PR channel a8: specifying datafile(s) to restore from backup set channel a8: restoring datafile 00008 to +DATA channel a9: starting datafile backup set restore channel a9: using network backup set from service PR channel a9: specifying datafile(s) to restore from backup set channel a9: restoring datafile 00009 to +DATA channel a10: starting datafile backup set restore channel a10: using network backup set from service PR channel a10: 
specifying datafile(s) to restore from backup set channel a10: restoring datafile 00010 to +DATA channel a1: restore complete, elapsed time: 00:00:06 channel a1: starting datafile backup set restore channel a1: using network backup set from service PR channel a1: specifying datafile(s) to restore from backup set channel a1: restoring datafile 00011 to +DATA channel a2: restore complete, elapsed time: 00:00:07 channel a2: starting datafile backup set restore channel a2: using network backup set from service PR channel a2: specifying datafile(s) to restore from backup set channel a2: restoring datafile 00012 to +DATA channel a4: restore complete, elapsed time: 00:00:07 channel a4: starting datafile backup set restore channel a4: using network backup set from service PR channel a4: specifying datafile(s) to restore from backup set channel a4: restoring datafile 00013 to +DATA channel a3: restore complete, elapsed time: 00:00:10 channel a3: starting datafile backup set restore channel a3: using network backup set from service PR channel a3: specifying datafile(s) to restore from backup set channel a3: restoring datafile 00014 to +DATA channel a6: restore complete, elapsed time: 00:01:44 channel a6: starting datafile backup set restore channel a6: using network backup set from service PR channel a6: specifying datafile(s) to restore from backup set channel a6: restoring datafile 00015 to +DATA channel a7: restore complete, elapsed time: 00:01:46 channel a7: starting datafile backup set restore channel a7: using network backup set from service PR channel a7: specifying datafile(s) to restore from backup set channel a7: restoring datafile 00016 to +DATA channel a8: restore complete, elapsed time: 00:01:46 channel a8: starting datafile backup set restore channel a8: using network backup set from service PR channel a8: specifying datafile(s) to restore from backup set channel a8: restoring datafile 00017 to +DATA channel a1: restore complete, elapsed time: 00:01:45 channel 
a1: starting datafile backup set restore channel a1: using network backup set from service PR channel a1: specifying datafile(s) to restore from backup set channel a1: restoring datafile 00018 to +DATA channel a5: restore complete, elapsed time: 00:01:50 channel a5: starting datafile backup set restore channel a5: using network backup set from service PR channel a5: specifying datafile(s) to restore from backup set channel a5: restoring datafile 00019 to +DATA channel a9: restore complete, elapsed time: 00:01:48 channel a9: starting datafile backup set restore channel a9: using network backup set from service PR channel a9: specifying datafile(s) to restore from backup set channel a9: restoring datafile 00020 to +DATA channel a10: restore complete, elapsed time: 00:01:49 channel a10: starting datafile backup set restore channel a10: using network backup set from service PR channel a10: specifying datafile(s) to restore from backup set channel a10: restoring datafile 00021 to +DATA channel a2: restore complete, elapsed time: 00:01:48 channel a2: starting datafile backup set restore channel a2: using network backup set from service PR channel a2: specifying datafile(s) to restore from backup set channel a2: restoring datafile 00022 to +DATA channel a3: restore complete, elapsed time: 00:01:44 channel a3: starting datafile backup set restore channel a3: using network backup set from service PR channel a3: specifying datafile(s) to restore from backup set channel a3: restoring datafile 00023 to +DATA channel a6: restore complete, elapsed time: 00:00:09 channel a6: starting datafile backup set restore channel a6: using network backup set from service PR channel a6: specifying datafile(s) to restore from backup set channel a6: restoring datafile 00024 to +DATA channel a4: restore complete, elapsed time: 00:01:49 channel a7: restore complete, elapsed time: 00:00:08 channel a8: restore complete, elapsed time: 00:00:07 channel a1: restore complete, elapsed time: 00:00:07 
channel a10: restore complete, elapsed time: 00:00:04 channel a5: restore complete, elapsed time: 00:01:10 channel a2: restore complete, elapsed time: 00:01:17 channel a3: restore complete, elapsed time: 00:01:16 channel a6: restore complete, elapsed time: 00:01:16 channel a9: restore complete, elapsed time: 00:01:29 Finished restore at 20-JUL-23 sql statement: alter system archive log current current log archived contents of Memory Script: { restore clone force from service 'PR' archivelog from scn 6290041177477; switch clone datafile all; } executing Memory Script Starting restore at 20-JUL-23 channel a1: starting archived log restore to default destination channel a1: using network backup set from service PR channel a1: restoring archived log archived log thread=1 sequence=39235 channel a2: starting archived log restore to default destination channel a2: using network backup set from service PR channel a2: restoring archived log archived log thread=1 sequence=39236 channel a1: restore complete, elapsed time: 00:00:01 channel a2: restore complete, elapsed time: 00:00:01 Finished restore at 20-JUL-23 datafile 1 switched to datafile copy input datafile copy RECID=30 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/system.282.1142647345 datafile 2 switched to datafile copy input datafile copy RECID=31 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/sysaux.283.1142647345 datafile 3 switched to datafile copy input datafile copy RECID=32 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/undotbs1.284.1142647347 datafile 4 switched to datafile copy input datafile copy RECID=33 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/users.259.1142647347 datafile 5 switched to datafile copy input datafile copy RECID=34 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.261.1142647349 datafile 6 switched to datafile copy input datafile copy RECID=35 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.262.1142647349 datafile 7 switched to datafile 
copy
input datafile copy RECID=36 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.258.1142647349
datafile 8 switched to datafile copy
input datafile copy RECID=37 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.260.1142647351
datafile 9 switched to datafile copy
input datafile copy RECID=38 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.263.1142647351
datafile 10 switched to datafile copy
input datafile copy RECID=39 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.264.1142647353
datafile 11 switched to datafile copy
input datafile copy RECID=40 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.266.1142647353
datafile 12 switched to datafile copy
input datafile copy RECID=41 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.265.1142647353
datafile 13 switched to datafile copy
input datafile copy RECID=42 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.267.1142647361
datafile 14 switched to datafile copy
input datafile copy RECID=43 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.268.1142647363
datafile 15 switched to datafile copy
input datafile copy RECID=44 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/encrypted_ts.269.1142647455
datafile 16 switched to datafile copy
input datafile copy RECID=45 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/system.270.1142647457
datafile 17 switched to datafile copy
input datafile copy RECID=46 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/sysaux.271.1142647457
datafile 18 switched to datafile copy
input datafile copy RECID=47 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/system.272.1142647459
datafile 19 switched to datafile copy
input datafile copy RECID=48 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.273.1142647459
datafile 20 switched to datafile copy
input datafile copy RECID=49 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.274.1142647461
datafile 21 switched to datafile copy
input datafile copy RECID=50 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/apex.275.1142647461
datafile 22 switched to datafile copy
input datafile copy RECID=51 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.276.1142647461
datafile 23 switched to datafile copy
input datafile copy RECID=52 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.277.1142647461
datafile 24 switched to datafile copy
input datafile copy RECID=53 STAMP=1142647551 file name=+DATA/PR_LINUX/DATAFILE/test_pr_dev2.278.1142647463

contents of Memory Script:
{
   set until scn 6290041178168;
   recover
   standby
   clone database
   delete archivelog;
}
executing Memory Script

executing command: SET until clause

Starting recover at 20-JUL-23

starting media recovery

archived log for thread 1 with sequence 39235 is already on disk as file +ARCH/PR_LINUX/ARCHIVELOG/2023_07_20/1_39235_1005901785.dbf
archived log for thread 1 with sequence 39236 is already on disk as file +ARCH/PR_LINUX/ARCHIVELOG/2023_07_20/1_39236_1005901785.dbf
archived log file name=+ARCH/PR_LINUX/ARCHIVELOG/2023_07_20/1_39235_1005901785.dbf thread=1 sequence=39235
archived log file name=+ARCH/PR_LINUX/ARCHIVELOG/2023_07_20/1_39236_1005901785.dbf thread=1 sequence=39236
media recovery complete, elapsed time: 00:00:01
Finished recover at 20-JUL-23

contents of Memory Script:
{
   delete clone force archivelog all;
}
executing Memory Script

deleted archived log
archived log file name=+ARCH/PR_LINUX/ARCHIVELOG/2023_07_20/1_39236_1005901785.dbf RECID=2 STAMP=1142647550
Deleted 1 objects

deleted archived log
archived log file name=+ARCH/PR_LINUX/ARCHIVELOG/2023_07_20/1_39235_1005901785.dbf RECID=1 STAMP=1142647550
Deleted 1 objects

Finished Duplicate Db at 20-JUL-23
released channel: d1
released channel: d2
released channel: d3
released channel: d4
released channel: d5
released channel: d6
released channel: d7
released channel: d8
released channel: d9
released channel: d10
released channel: a1
released channel: a2
released channel: a3
released channel: a4
released channel: a5
released channel: a6
released channel: a7
released channel: a8
released channel: a9
released channel: a10
Recovery Manager complete.
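Before moving on to the post checks, it is worth confirming that the duplicate produced a healthy standby. A minimal verification sketch using standard Data Guard views (run on the new Linux database; adjust to your environment):

```sql
-- Confirm the role and open mode of the new Linux database
SELECT name, db_unique_name, database_role, open_mode FROM v$database;

-- Confirm all datafiles were cataloged into +DATA and check the total size
SELECT COUNT(*) datafiles, SUM(bytes)/1024/1024/1024 total_gb FROM v$datafile;

-- Check the recovery-related processes on the standby
SELECT process, status, thread#, sequence# FROM v$managed_standby;
```

Note that V$MANAGED_STANDBY is deprecated in favor of V$DATAGUARD_PROCESS in recent releases, but it still works in 19c and is the view most DBAs are used to.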
Step 8: Now it's time to perform the post checks.
On Linux DB Server:

#Add space to the temporary tablespace as per your business requirement.
SQL> alter tablespace TEMP add tempfile '+DATA' size 31g;
SQL> alter tablespace TEMP add tempfile '+DATA' size 31g;
SQL> alter tablespace TEMP add tempfile '+DATA' size 31g;
SQL> alter tablespace TEMP add tempfile '+DATA' size 31g;
SQL> alter tablespace TEMP add tempfile '+DATA' size 31g;
SQL> alter tablespace TEMP add tempfile '+DATA' size 31g;
SQL> alter tablespace TEMP add tempfile '+DATA' size 31g;

#Query to check total and used space in the temporary tablespace.
col tablespace for a26
SELECT s.tablespace_name tablespace,
       d.total_gb,
       SUM(s.used_blocks * d.block_size) / 1024 / 1024 / 1024 used_gb,
       d.total_gb - SUM(s.used_blocks * d.block_size) / 1024 / 1024 / 1024 free_gb
FROM   v$sort_segment s,
       (SELECT t.name, p.block_size, SUM(p.bytes) / 1024 / 1024 / 1024 total_gb
        FROM   v$tablespace t, v$tempfile p
        WHERE  t.ts# = p.ts#
        GROUP BY t.name, p.block_size) d
WHERE  s.tablespace_name = d.name
GROUP BY s.tablespace_name, d.total_gb;

#Query to check the size of the redo logs with member details.
set lines 300 pages 3000
col name for a70
SELECT a.group#, member name, a.members, a.bytes/1024/1024/1024, a.status
FROM   v$log a, v$logfile b
WHERE  a.group# = b.group#
ORDER BY 1;

#Commands to drop the existing online and standby redo log groups from the Linux DB.
SQL> alter database drop logfile group 1;
SQL> alter database drop logfile group 2;
SQL> alter database drop logfile group 3;
SQL> alter database drop logfile group 4;
SQL> alter database drop standby logfile group 11;
SQL> alter database drop standby logfile group 12;
SQL> alter database drop standby logfile group 13;
SQL> alter database drop standby logfile group 14;
SQL> alter database drop standby logfile group 15;

#Commands to add online and standby redo log groups in the correct locations.
Note: Use the redo log size required for your environment; the size below is from my testing environment.
SQL> alter database add logfile group 1 ('+REDO1','+REDO2') size 200m;
SQL> alter database add logfile group 2 ('+REDO1','+REDO2') size 200m;
SQL> alter database add logfile group 3 ('+REDO1','+REDO2') size 200m;
SQL> alter database add logfile group 4 ('+REDO1','+REDO2') size 200m;
SQL> alter database add standby logfile group 5 ('+REDO1','+REDO2') size 200m;
SQL> alter database add standby logfile group 6 ('+REDO1','+REDO2') size 200m;
SQL> alter database add standby logfile group 7 ('+REDO1','+REDO2') size 200m;
SQL> alter database add standby logfile group 8 ('+REDO1','+REDO2') size 200m;
SQL> alter database add standby logfile group 9 ('+REDO1','+REDO2') size 200m;

On Solaris DB Server:

SQL> alter system set log_archive_config='DG_CONFIG=(PR,PRDG,PR_LINUX)';
SQL> alter system set log_archive_dest_3='SERVICE=PR_LINUX LGWR ASYNC NOAFFIRM VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PR_LINUX';
SQL> alter system set log_archive_dest_state_3=defer;
SQL> alter system set log_archive_dest_state_3=enable;
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> select max(sequence#) Primary from v$archived_log;

On Linux DB Server:

SQL> recover managed standby database disconnect from session;
SQL> select max(sequence#) Standby from v$archived_log where applied='YES';
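Once redo shipping is enabled, the DC-DR gap can also be watched from the standby side instead of comparing MAX(SEQUENCE#) by hand on each server. A small monitoring sketch using standard Data Guard views (run on the Linux standby):

```sql
-- Any detected redo gap; no rows returned means there is no gap
SELECT * FROM v$archive_gap;

-- Highest sequence received per thread on the standby
SELECT thread#, MAX(sequence#) last_received
FROM   v$archived_log
GROUP BY thread#;

-- Highest sequence applied per thread on the standby
SELECT thread#, MAX(sequence#) last_applied
FROM   v$archived_log
WHERE  applied = 'YES'
GROUP BY thread#;
```

If last_received and last_applied stay close together, managed recovery is keeping up with the primary.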
Now our setup is ready: the existing Solaris DB and the new Linux DB are in sync. Keep them in sync until the Go Live stage. Once the Go Live phase of your migration is planned and started, you can perform a switchover to complete the migration activity.
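While waiting for Go Live, the lag between the two sites can be tracked with V$DATAGUARD_STATS rather than comparing sequence numbers manually. A sketch to run on the Linux standby:

```sql
-- Transport lag: redo generated on the primary but not yet received here
-- Apply lag: redo received here but not yet applied
col name  for a20
col value for a20
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```

Both values are interval strings such as +00 00:00:05; values that keep growing indicate a shipping or apply problem worth investigating before the switchover window.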
Steps to perform switchover from Solaris DB to Linux DB.

On Solaris DB:

SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> select max(sequence#) Primary from v$archived_log;

On Linux DB:

SQL> recover managed standby database disconnect from session;
SQL> select max(sequence#) Standby from v$archived_log where applied='YES';

#Execute below commands to initiate the switchover.

On Solaris DB:

SQL> alter database switchover to PR_LINUX;
SQL> startup mount;
SQL> select name,db_unique_name,open_mode,database_role from v$database;

On Linux DB:

SQL> alter database open;
SQL> select name,db_unique_name,open_mode,database_role from v$database;

-: Rollback Plan :-

Note: You can enable reverse sync from Linux to Solaris. This makes rollback easier if the application does not work correctly on the new Linux server for any reason.

#To enable reverse sync from Linux DB to Solaris DB.

On Linux DB (Current Primary):

SQL> alter system set log_archive_dest_3='SERVICE=PR LGWR ASYNC NOAFFIRM VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PR';
SQL> alter system set log_archive_dest_state_3=defer;
SQL> alter system set log_archive_dest_state_3=enable;
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> select max(sequence#) Primary from v$archived_log;

On Solaris DB (Current Standby):

SQL> recover managed standby database disconnect from session;
SQL> select max(sequence#) Standby from v$archived_log where applied='YES';

#Commands to initiate switchback on the Linux DB if the application is not working on the new Linux server.
On Linux DB:

SQL> alter database switchover to PR;
SQL> startup mount;
SQL> select name,db_unique_name,open_mode,database_role from v$database;

On Solaris DB:

SQL> alter database open;
SQL> select name,db_unique_name,open_mode,database_role from v$database;
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> select max(sequence#) Primary from v$archived_log;

On Linux DB:

SQL> recover managed standby database disconnect from session;
SQL> select max(sequence#) Standby from v$archived_log where applied='YES';
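Whether you are switching over to Linux or switching back to Solaris, readiness can be validated before issuing the actual role change. A sketch of the pre-checks to run on the current primary; note that the VERIFY clause (available from 12c onward) only reports readiness and does not change roles:

```sql
-- Expect TO STANDBY (or SESSIONS ACTIVE) on the primary before switching over
SELECT switchover_status FROM v$database;

-- Dry-run validation of the switchover target; check the alert log for warnings
ALTER DATABASE SWITCHOVER TO PR_LINUX VERIFY;
```

Use the appropriate target DB_UNIQUE_NAME in the VERIFY command: PR_LINUX when switching over from Solaris, PR when switching back from Linux.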
Thanks for reading this post! Please comment if you liked it! Click on FOLLOW to get updates on upcoming posts!
Very useful, thanks Rupesh for sharing this.
What a great document for DB migration, Rupesh! Thanks for sharing this.