Reduced Downtime Cross Platform Migration of Oracle 19c Database from AIX to Linux using Transportable Tablespace


This procedure covers the steps needed to use Oracle V4 Cross Platform Transportable Tablespaces (XTTS) with RMAN incremental backups to migrate data between systems that have different endian formats, with the least amount of application downtime.

The first step is to copy a full backup from the source to the destination. Then, by using a series of incremental backups, each smaller than the last, the data at the destination system can be brought nearly current with the source system before any downtime is required. This procedure requires downtime only during the final incremental backup and the metadata export/import.

 

Prerequisites

The following prerequisites should be in place to ensure the success of the database migration.

  • The current version does NOT support Windows as either source or destination.
  • The source database must be in ARCHIVELOG mode.
  • Before running the XTTS scripts, set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1.  Other NLS_LANG settings may cause errors.
  • RMAN on the source system must not have DEVICE TYPE DISK configured with COMPRESSED.
  • RMAN on the source system must not have a BACKUP TYPE TO COPY.  The source must have a BACKUP TYPE TO BACKUPSET.
  • RMAN on the source system must not have a default channel configured to type SBT.   I.e., this procedure can only be used with DISK channels.  
  • RMAN on the source system must not have ANY channel configuration limitations.  For example, MAXSETSIZE, MAXPIECESIZE, etc.
  • The set of tablespaces being moved must all be online and contain no offline data files.  Tablespaces must be READ WRITE.
  • The Oracle version of the source must be lower than or equal to the destination.  Therefore, this procedure can also be used as an upgrade method.  Transportable tablespace restrictions WILL apply.
  • The minimum version for source and destination is 11.2.0.3. 
  • ASM can only be used for the final location of datafiles in the destination, backups cannot be placed on ASM with this version.  
  • The backup location of the destination MUST be a device with read/write privileges.  I.e., cannot be a READONLY device.  This can cause ORA-19624 on the backup set conversion. 
  • The source and target databases must use the same character set and national character set.  Verify with:
SQL> select * from nls_database_parameters;
  • Source and target time zone file versions must match.
SQL> select * from v$timezone_file;
  • The source database’s COMPATIBLE parameter must not be greater than the destination database’s COMPATIBLE parameter. Execute the below queries to validate the version and compatible parameters on the source and target.
SQL> select version from v$instance;  
SQL> show parameter compatible

For example, if the source database compatible parameter is 12.0.0 and the target is 19.0.0, then execute the following on the source database.

SQL> ALTER SYSTEM SET COMPATIBLE= '19.0.0.0' SCOPE=SPFILE;
Then restart the database for the change to take effect.
  • Tablespace endianness is converted with the xttdriver.pl script from Oracle support.
  • Tablespaces on the source database must not be encrypted with Transparent Data Encryption (TDE).
  • Check the endianness of both source and target:
SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_ID = d.PLATFORM_ID;
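For the AIX-to-Linux case specifically, the query above should report Big endian on the source and Little endian on the destination. As a quick illustration, here is a hypothetical shell helper (not part of the XTTS scripts) that hard-codes the two expected values, using the platform names as reported by V$TRANSPORTABLE_PLATFORM:

```shell
#!/bin/bash
# Hypothetical helper: map a platform name, as reported by
# V$TRANSPORTABLE_PLATFORM, to its endian format for the two
# platforms involved in this migration.
endian_of() {
    case "$1" in
        "AIX-Based Systems (64-bit)") echo "Big" ;;
        "Linux x86 64-bit")           echo "Little" ;;
        *)                            echo "unknown" ;;
    esac
}
```

Because the formats differ, the datafile conversion step performed by xttdriver.pl is mandatory for this pair of platforms.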

 

Phase 1 – Initial Setup phase

Perform the following steps to configure the environment to use Cross Platform Incremental Backups:

  1. Install the desired Oracle Database software on the destination system that will run the destination database.  
  2. Create a database on the destination system to transport the tablespace(s) into and create the schema users required for the tablespace transport.
  3. The users/schemas that are not maintained by Oracle should be created on the destination database. The list can be found using the query below.
SQL> select username, default_tablespace from dba_users where oracle_maintained='N';
  4. Exclude all Oracle-maintained users from that list. Compare the output against the query below.
SQL> select username from dba_users where oracle_maintained='Y';
Note that the UNDO tablespace is not migrated by this procedure.
  5. Verify the tablespaces to be migrated are self-contained.
SQL> select distinct 'EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('''||default_tablespace||''', TRUE);' from dba_users where oracle_maintained='N';

Below is a sample script that will be generated above.

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('USERS', TRUE);
SELECT * FROM TRANSPORT_SET_VIOLATIONS;
  6. It is recommended to check the count of objects on the source database before migration.
SQL> select object_type, count (*) from dba_objects where owner not in ('SYS','SYSTEM') group by object_type order by 1;
SQL> select segment_type,round(sum(bytes)/1024/1024) as SIZE_MB from dba_segments where owner not in ('SYS','SYSTEM') group by segment_type order by 2 desc;
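The before/after comparison is easiest when the query output on each side is spooled to a file and diffed. Below is a minimal hypothetical sketch, assuming each spool file holds plain "OBJECT_TYPE COUNT" lines; the file names and format are illustrative and not produced by the XTTS scripts:

```shell
#!/bin/bash
# Hypothetical sketch: compare object counts spooled before and after
# the migration. Each input file is assumed to hold "OBJECT_TYPE COUNT"
# lines produced by the dba_objects query above.
compare_counts() {
    src_file=$1
    dst_file=$2
    # Sort both lists so the object types line up regardless of spool order.
    sort "$src_file" > "${src_file}.sorted"
    sort "$dst_file" > "${dst_file}.sorted"
    if cmp -s "${src_file}.sorted" "${dst_file}.sorted"; then
        echo "object counts match"
    else
        echo "object counts differ - investigate before cutover"
    fi
    rm -f "${src_file}.sorted" "${dst_file}.sorted"
}
```

Run the same queries on the destination after Phase 5 and compare the spool files with this helper before declaring the migration complete.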
  7. Create a new tablespace and drop the USERS tablespace on the target database.
SQL> create tablespace migtab datafile '+DATADG' size 2G;
SQL> alter database default tablespace migtab;
SQL> drop tablespace users including contents and datafiles;
  8. Install the xttconvert scripts on the source system

On the source system, as the oracle software owner, download the rman_xttconvert_VER4.3.zip file containing the migration scripts from My Oracle Support document ID 2471245.1.

Copy the zip file to the source server and unzip

[oracle@source]$ pwd
/home/oracle/xtt

[oracle@source]$ unzip rman_xttconvert_VER4.3.zip
    Archive: rman_xttconvert_VER4.3.zip
    inflating: xtt.properties
    inflating: xttcnvrtbkupdest.sql
    inflating: xttdbopen.sql
    inflating: xttdriver.pl
    inflating: xttprep.tmpl
    extracting: xttstartupnomount.sql
  9. Configure the xtt.properties file on the source system by editing it and setting the following parameters.
[oracle@source]$ cd /home/oracle/xtt
[oracle@source]$ vi xtt.properties

tablespaces=<comma-separated list of tablespaces to migrate>
platformid=6
dest_datafile_location=+DATADG
src_scratch_location=/rman/backup
dest_scratch_location=/rman/restore
parallel=8
usermantransport=1  
  • On source: Location of backups as defined by src_scratch_location parameter in the xtt.properties file.  
  • On destination: Location of backups as defined by the dest_scratch_location parameter in the xtt.properties file.  
  • Location for datafiles on the destination is defined by dest_datafile_location parameter in the xtt.properties file. For the destination database using ASM, specify the ASM disk group.
  • The source platform ID can be found with the query below.
SQL> select platform_id from v$database;
  • For source databases running 12c or higher, it is recommended to set the usermantransport parameter to 1. This enables the newer 12c (and higher) functionality in the scripts.
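A typo in xtt.properties typically surfaces only when xttdriver.pl fails mid-run, so it can be worth sanity-checking the file first. Below is a hypothetical sketch (not part of the Oracle scripts) that simply confirms the keys used in this article are present:

```shell
#!/bin/bash
# Hypothetical sketch: sanity-check xtt.properties before the first
# backup run. Only verifies that the keys used in this article are set.
check_xtt_properties() {
    file=$1
    rc=0
    for key in tablespaces platformid dest_datafile_location \
               src_scratch_location dest_scratch_location parallel; do
        if ! grep -q "^${key}=" "$file"; then
            echo "missing: ${key}"
            rc=1
        fi
    done
    [ "$rc" -eq 0 ] && echo "xtt.properties looks complete"
    return $rc
}
```

Run it against the edited file (for example, `check_xtt_properties /home/oracle/xtt/xtt.properties`) before starting Phase 2.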

 

  10. Copy the xttconvert scripts and xtt.properties to the destination system

As the oracle software owner, copy all xttconvert scripts and the modified xtt.properties file to the destination system.

[oracle@source]$ scp -r /home/oracle/xtt oracle@dest:/home/oracle/xtt

 

  11. Set the TMPDIR environment variable

In the shell environment on both source and destination systems, set the environment variable TMPDIR to the location where the supporting scripts exist.  Use this shell to run the Perl script xttdriver.pl as shown in the steps below.  If TMPDIR is not set, output files are created and input files are expected to be in /tmp.

On source

[oracle@source]$ export TMPDIR=/home/oracle/xtt

On destination

[oracle@dest]$ export TMPDIR=/home/oracle/xtt

 

Phase 2 – Prepare Phase

  12. Run the backup on the source system.

During the Prepare phase, datafiles of the tablespaces to be transported are backed up on the source, transferred to the destination system, and restored by the xttdriver.pl script.

[oracle@source]$  export ORACLE_SID=<source_db_sid>
[oracle@source]$  export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
[oracle@source]$  export TMPDIR=/home/oracle/xtt
[oracle@source]$  nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup --debug 3 &
  13. Transfer the following files to the destination system.
  • If the src_scratch_location on the source system and the dest_scratch_location on the destination system refer to the same NFS storage location, then the backups do not need to be copied as they are available in the expected location on the destination system.  
  • Backups created from source src_scratch_location to destination dest_scratch_location
  • The res.txt file from source $TMPDIR to destination $TMPDIR:

In the example below, scp is used to transfer the level=0 backup created by the previous step from the source system to the destination system.

[oracle@source]$ scp /src_scratch_location/* oracle@dest:/dest_scratch_location
[oracle@source]$ scp res.txt oracle@dest:/home/oracle/xtt
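A truncated transfer shows up later as an RMAN restore failure, so it can save a full retry to verify the copied pieces first. Below is a hypothetical sketch for the case where both scratch areas are reachable from a single host (for example over NFS); paths are illustrative:

```shell
#!/bin/bash
# Hypothetical sketch: confirm every backup piece in the source scratch
# area has a byte-identical copy in the destination scratch area.
verify_backups() {
    src_dir=$1
    dst_dir=$2
    rc=0
    for f in "$src_dir"/*; do
        b=$(basename "$f")
        # cmp -s is silent and returns non-zero on any difference
        # (including a missing destination file).
        if ! cmp -s "$f" "$dst_dir/$b"; then
            echo "mismatch: $b"
            rc=1
        fi
    done
    [ "$rc" -eq 0 ] && echo "all backup pieces verified"
    return $rc
}
```

When the scratch areas live on different hosts, the same idea can be applied by comparing checksums (for example md5sum output) collected on each side.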
  14. Restore the datafiles on the destination

On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the restore as follows:

[oracle@dest]$ cd /home/oracle/xtt
[oracle@dest]$ export ORACLE_SID=<target_db_sid>
[oracle@dest]$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
[oracle@dest]$ export TMPDIR=/home/oracle/xtt
[oracle@dest]$ nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore --debug 3 &

Data files will be placed on the destination system in the defined dest_datafile_location.  From asmcmd, confirm the data files have been restored.

At this point you can either continue with Phase 3 (roll forward) or proceed directly to Phase 4 (final incremental backup).

 

Phase 3 – Roll Forward Phase

 During this phase, an incremental backup is created from the source database, transferred to the destination system, converted to the destination system endian format, then applied to the converted destination datafile copies to roll them forward.  This phase may be run multiple times. Each successive incremental backup should take less time than the prior incremental backup and will bring the destination datafile copies more current with the source database.  The data being transported (source) is fully accessible during this phase.  

  15. Create an incremental backup of the tablespaces being transported on the source system. This step will create an incremental backup for all tablespaces listed in xtt.properties.
[oracle@source]$  export ORACLE_SID=<source_db_sid>
[oracle@source]$  export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
[oracle@source]$  export TMPDIR=/home/oracle/xtt
[oracle@source]$  $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup --debug 3
  16. Transfer the incremental backup(s) (between src_scratch_location and dest_scratch_location) and the res.txt (between the $TMPDIRs) from the source to the destination.  The list of incremental backup files from the current backup can be found in the incrbackups.txt file on the source system.

If the src_scratch_location on the source system and the dest_scratch_location on the destination system refer to the same NFS storage location, then the backups do not need to be copied, as they are already available in the expected location on the destination system.  However, the res.txt file MUST be copied after the LAST incremental backup before it can be applied to the destination.

[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/dest_scratch_location
[oracle@source]$ scp res.txt oracle@dest:/home/oracle/xtt
  17. Apply the incremental backup to the datafile copies on the destination system
[oracle@dest]$ cd /home/oracle/xtt
[oracle@dest]$ export ORACLE_SID=<target_db_sid>
[oracle@dest]$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
[oracle@dest]$ export TMPDIR=/home/oracle/xtt
[oracle@dest]$ nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore --debug 3 &

 

Phase 4 – Final Incremental Backup (requires downtime on the source)

During this phase, the source data is made READ ONLY and the destination data files are made consistent with the source database by creating and applying a final incremental backup. After the destination data files are made consistent, the normal transportable tablespace steps are performed to export object metadata from the source database and import it into the destination database.  The data being transported is accessible only in READ ONLY mode until the end of this phase.  Place the tablespaces in read-only mode in the source database.

You can generate the required statements for the selected tablespaces with the query below.

select 'ALTER TABLESPACE '||tablespace_name||' READ ONLY;' from dba_tablespaces where tablespace_name in ('<list of tablespaces to be migrated>');
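If you prefer to keep the cutover steps in a shell script rather than generating SQL from the data dictionary, the same statements can be emitted from a fixed tablespace list. A hypothetical helper (tablespace names are passed as arguments):

```shell
#!/bin/bash
# Hypothetical sketch: emit the READ ONLY statement for each tablespace
# passed on the command line, mirroring the SQL generator above.
gen_read_only() {
    for ts in "$@"; do
        echo "ALTER TABLESPACE ${ts} READ ONLY;"
    done
}
```

For example, `gen_read_only USERS DATA` prints the two ALTER TABLESPACE statements, which can then be piped into a SQL*Plus session at cutover time.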

 

18. Option 1

Take the incremental backup on the source system using the new 12c feature, which also runs the metadata export. The --bkpexport option is used with the xttdriver.pl script.

[oracle@source]$ cd /home/oracle/xtt
[oracle@source]$ export ORACLE_SID=<source_db_sid>
[oracle@source]$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
[oracle@source]$ export TMPDIR=/home/oracle/xtt
[oracle@source]$ nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --bkpexport &

You might see a message about the tablespaces being in read-only mode; this is expected, since they were just placed in read-only mode for the final backup.

Transfer the final incremental backup and the res.txt file to the destination.

Apply the last incremental backup to destination datafiles

The final incremental backup must be applied to the destination data files using the --resincrdmp option.

[oracle@dest]$ cd /home/oracle/xtt
[oracle@dest]$ export ORACLE_SID=<target_db_sid>
[oracle@dest]$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
[oracle@dest]$ export TMPDIR=/home/oracle/xtt
[oracle@dest]$ nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --resincrdmp &

This step will apply the last incremental backup to the data files. In addition, it will restore the export dump file from the backup set and generate a script file, xttplugin.txt.

 

Option 2

Take the incremental backup on the source system with the --backup and --restore options.

[oracle@source]$ cd /home/oracle/xtt
[oracle@source]$ export ORACLE_SID=<source_db_sid>
[oracle@source]$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
[oracle@source]$ export TMPDIR=/home/oracle/xtt
[oracle@source]$ nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup &

You might see a message about the tablespaces being in read-only mode; this is expected at this stage.

Transfer the final incremental backup and the res.txt file to the destination.

Apply the last incremental backup to destination datafiles

[oracle@dest]$ cd /home/oracle/xtt
[oracle@dest]$ export ORACLE_SID=<target_db_sid>
[oracle@dest]$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
[oracle@dest]$ export TMPDIR=/home/oracle/xtt
[oracle@dest]$ nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore &

 

Phase 5 – Transport Phase:  Export Metadata and Plug-in Tablespaces into Destination Database

  19. Run the metadata export and import using the export file

Create a directory object in the source database

SQL> create directory migdir as '/rman/backup';

vi exp_testmigration.par

dumpfile=EXP_SOURCE_MIG_09Mar23.dmp
directory=migdir
exclude=statistics
transport_tablespaces=USERS
transport_full_check=yes
logfile=EXP_SOURCE_MIG_09Mar23.log
metrics=y

[oracle@source]$ nohup expdp system/manager parfile=exp_testmigration.par &

Transfer the dump files to the target server

 

  20. Import the metadata on the target and plug in the datafiles
sqlplus / as sysdba

Create a directory object on the destination database:

SQL> create directory impdir as '/rman/restore';
SQL> grant read, write on directory impdir to system;

Generate the list of tablespaces and data files to be migrated using the xttdriver.pl script with the -e option.

[oracle@source]$  $ORACLE_HOME/perl/bin/perl xttdriver.pl -e

This will generate a sample Data Pump network_link transportable import command in the file xttplugin.txt with the transportable tablespace parameters TRANSPORT_TABLESPACES and TRANSPORT_DATAFILES correctly set. In addition, a Data Pump export file will also be created.

Edit the import par file below and replace the tablespaces and data files to be transported.

[oracle@dest]$ vi impmigration.par

dumpfile=EXP_SOURCE_MIG_09Mar23.dmp
directory=impdir
transport_datafiles='/dest_datafile_location/TS1.dbf','/dest_datafile_location/TS2.dbf'
logfile=IMP_SOURCE_MIG_09Mar23.log

[oracle@dest]$ nohup impdp system/oracle parfile=impmigration.par &
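With many datafiles, hand-editing the transport_datafiles line is error-prone. Below is a hypothetical helper (not part of the XTTS scripts; paths are examples) that builds the quoted, comma-separated parfile line from the restored datafile paths:

```shell
#!/bin/bash
# Hypothetical sketch: build the transport_datafiles parfile line from
# the restored datafile paths, quoting and comma-separating each entry.
gen_transport_datafiles() {
    line="transport_datafiles="
    sep=""
    for f in "$@"; do
        line="${line}${sep}'${f}'"
        sep=","
    done
    echo "$line"
}
```

For example, `gen_transport_datafiles /dest_datafile_location/TS1.dbf /dest_datafile_location/TS2.dbf` prints a line that can be pasted directly into impmigration.par.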

Alternatively, the import can be run across the network using a database link, which avoids transferring the dump file.

 

Phase 6 – Validate the Transported Data

Check the destination database for objects that are not transported by TTS (PL/SQL objects, sequences, views, etc.), and compare segment counts and sizes with the source.

SQL> select segment_type,round(sum(bytes)/1024/1024) as SIZE_MB from dba_segments where owner not in ('SYS','SYSTEM') group by segment_type order by 2 desc;

 

  21. Check tablespaces on the target for corruption. Generate the commands with the query below and run the output in an RMAN session.
select 'VALIDATE TABLESPACE '||tablespace_name||' CHECK LOGICAL;' from dba_tablespaces;

 

  22. Make destination tablespaces read/write

Connect to the destination database and execute the below command to generate the script to alter the tablespaces.

select distinct 'ALTER TABLESPACE '||default_tablespace||' READ WRITE;' from dba_users where oracle_maintained='N';

Compare the output with the list of tablespaces in dba_tablespaces and be sure to include all tablespaces that have been migrated.

 

Reference

  • V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)