XTTS Series One: Using XTTS as a U2L Migration Solution

Posted by sargenle on Sun, 28 Jul 2019 12:51:33 +0200

This series is positioned as an in-depth study of XTTS and related technologies. To begin, and in the spirit of practicality, I first distill and simplify the actual U2L migration plan implemented in a production environment. The aim is to explain clearly how to use XTTS as a U2L migration solution: first reaching the primary goal of a procedure that can be followed, then leaving the finer details to further study for those interested.

1. Overview of XTTS

The XTTS (Cross Platform Transportable Tablespaces) migration method used here refers specifically to its enhanced version, which supports cross-platform byte-order conversion, an initial full backup, and multiple incremental roll-forwards. This effectively shortens production downtime during the formal migration phase and allows the U2L migration to complete successfully. The environment for this example is as follows:

Item                       Source end     Target end
IP address                 10.6.xx.xx     10.5.xx.xx
Operating system           AIX 5.3        RHEL 6.7
RAC                        no             yes
Database name              sourcedb       targetdb
Business user to migrate   JINGYU         JINGYU

2. Migration preparation phase

2.1 Check Self-Containment
Only the JINGYU user is migrated this time, so verify that the tablespaces holding JINGYU's segments are self-contained.

SQL> select distinct tablespace_name from dba_segments where owner='JINGYU' order by 1; 

TABLESPACE_NAME 
------------------------------ 
DBS_D_JINGYU 
DBS_I_JINGYU

SQL> execute dbms_tts.transport_set_check('DBS_D_JINGYU, DBS_I_JINGYU');
PL/SQL procedure successfully completed. 

SQL> select * from TRANSPORT_SET_VIOLATIONS; 
no rows selected

If the query above returns no rows, the self-contained check passes.
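For a stricter validation, DBMS_TTS.TRANSPORT_SET_CHECK also accepts flags to include referential constraints in the check and to run the full (two-way) containment check; enabling them is a reasonable extra precaution before a one-shot migration:

```sql
-- Optional stricter check: the second argument includes referential
-- constraints, the third runs the full (two-way) containment check
SQL> execute dbms_tts.transport_set_check('DBS_D_JINGYU,DBS_I_JINGYU', TRUE, TRUE);
SQL> select * from transport_set_violations;
```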

2.2 Create the XTTS Working Directory
The XTTS working directory for this migration is /exp/newxx. Create the relevant directories on both the source and the target, then upload and unzip the xttconvert scripts provided by MOS (Doc ID 1389592.1).

--Source end AIX Create relevant directories
mkdir -p  /exp/newxx
mkdir -p  /exp/newxx/src_backup
mkdir -p  /exp/newxx/tmp
mkdir -p  /exp/newxx/dump
mkdir -p  /exp/newxx/backup_incre
chown -R ora103:dba /exp/newxx

--Source end AIX: upload rman-xttconvert_2.0.zip to /exp/newxx
cd /exp/newxx
unzip rman-xttconvert_2.0.zip

--Target end Linux Create relevant directories
mkdir -p  /exp/newxx
mkdir -p  /exp/newxx/src_backup
mkdir -p  /exp/newxx/tmp
mkdir -p  /exp/newxx/dump
mkdir -p  /exp/newxx/backup_incre
chown -R ora11g:dba /exp/newxx

--Target end Linux: upload rman-xttconvert_2.0.zip to /exp/newxx
cd /exp/newxx
unzip rman-xttconvert_2.0.zip

2.3 Enable BCT at the Source
Enable block change tracking (BCT) at the source:

SQL> alter database enable block change tracking using file '/exp/newxx/bct2'; 
Database altered.
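To confirm that BCT is actually enabled (and where the tracking file lives), query the standard v$block_change_tracking view:

```sql
-- STATUS should show ENABLED, FILENAME should point to the file above
SQL> select status, filename from v$block_change_tracking;
```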

If BCT appears ineffective during the test phase (i.e. the incremental backup still takes a long time), consider manually taking a level 0 backup of the tablespaces:

--Manually take a level 0 backup of the tablespaces to be migrated (only so that BCT takes effect for subsequent incrementals; do not use it for any recovery operations)
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 16 BACKUP TYPE TO BACKUPSET;
RMAN> backup incremental level 0 tablespace DBS_D_JINGYU, DBS_I_JINGYU format '/exp/test/%U.bck';

Note: This is a special case that you may not encounter; decide based on your own test results. Here, the level 0 backup of roughly 2 TB of tablespaces took about 2 hours.

2.4 Configure xtt.properties

Configure the xtt.properties file on the source AIX host:

cd /exp/newxx
vi xtt.properties

#Add the following configuration information:
tablespaces=DBS_D_JINGYU,DBS_I_JINGYU
platformid=6
dfcopydir=/exp/newxx/src_backup
backupformat=/exp/newxx/backup_incre
backupondest=/exp/newxx/backup_incre
stageondest=/exp/newxx/src_backup
storageondest=+DG_DATA/targetdb/datafile
parallel=16
rollparallel=16
getfileparallel=6

Configure the xtt.properties file on the target Linux host:

cd /exp/newxx
vi xtt.properties

#Add the following configuration information:
tablespaces=DBS_D_JINGYU,DBS_I_JINGYU
platformid=6
dfcopydir=/exp/newxx/src_backup
backupformat=/exp/newxx/backup_incre
backupondest=/exp/newxx/backup_incre
stageondest=/exp/newxx/backup_incre
storageondest=+DG_DATA/targetdb/datafile
parallel=16
rollparallel=16
getfileparallel=6
asm_home=/opt/app/11.2.0/grid
asm_sid=+ASM1

Note: platformid=6 here is determined by the source OS platform. It can be confirmed by querying the platform_id column of v$database, or by looking up the corresponding platform in v$transportable_platform.
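The lookup can be done with the standard views (output depends on your platforms; "AIX-Based Systems (64-bit)" is id 6 with big-endian format):

```sql
-- On the source: the platform id of this database
SQL> select platform_id, platform_name from v$database;

-- Full list of supported platforms and their endian formats
SQL> select platform_id, platform_name, endian_format
       from v$transportable_platform
      order by platform_id;
```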

2.5 Create Users and Roles on the Target in Advance
The target end creates the JINGYU user; its default tablespace can only be changed after the metadata import is completed.
Run the following statements on the source to generate scripts for creating the user and the corresponding roles and grants, then run those scripts on the target (if you already know the business user's password and privileges, you can also simply create them directly):

--Source-side execution:
--create user
sqlplus -S / as sysdba
set pages 0
set feedback off
spool /exp/newxx/scripts/create_user.sql
select 'create user '||name||' identified by values '''||password||''';' from user$ where name = 'JINGYU' and type#=1;
spool off
exit

--create role
sqlplus -S / as sysdba
set pages 0
set feedback off
spool /exp/newxx/scripts/create_role.sql
select 'grant '||GRANTED_ROLE||' to '||grantee||';' from dba_role_privs where grantee = 'JINGYU';
spool off
exit

--Grants on SYS-owned tables must be generated manually
sqlplus -S / as sysdba
set pages 0
set feedback off
spool /exp/newxx/scripts/grant_sys_privs.sql
select 'grant '||PRIVILEGE||' on '||owner||'.'||table_name||' to '||GRANTEE||';' from dba_tab_privs where owner='SYS'  and GRANTEE = 'JINGYU';
spool off
exit

--On the source, verify that the generated SQL is correct:
cat /exp/newxx/scripts/create_user.sql
cat /exp/newxx/scripts/create_role.sql
cat /exp/newxx/scripts/grant_sys_privs.sql

--Target end execution:
@/exp/newxx/scripts/create_user.sql
@/exp/newxx/scripts/create_role.sql
@/exp/newxx/scripts/grant_sys_privs.sql

2.6 Full Backup of Tablespaces
On the source AIX host, run the xttdriver script to take a full backup of the business tablespaces being transported. The configuration files generated during this run drive the data file conversion and the incremental backup and recovery; their contents (mainly the SCNs) change with each incremental backup and are used for the next incremental recovery.
Increase RMAN backup parallelism:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 16 BACKUP TYPE TO BACKUPSET;

Edit the backup script. Note that each failed run leaves a failure marker file in /exp/newxx/tmp that must be deleted before the script can be run again.

cd /exp/newxx
--The full_backup.sh script is as follows
export ORACLE_SID=sourcedb
export TMPDIR=/exp/newxx/tmp
export PERL5LIB=/opt/app/ora103/10.2.0/product/perl/lib
/opt/app/ora103/10.2.0/product/perl/bin/perl /exp/newxx/xttdriver.pl -p -d

Perform full backup in the background:

cd /exp/newxx
nohup sh full_backup.sh > full_backup.log &

Check the size of the full backup generated in /exp/newxx/src_backup (2 TB in this test; the backup took 4 hours 34 minutes).

2.7 Full Restore and Conversion of Tablespaces
Transfer the files to the target:

cd /exp/newxx/src_backup
scp * ora11g@10.5.xx.xx:/exp/newxx/src_backup
--The scp copy took about 10 hours
cd /exp/newxx/tmp
scp * ora11g@10.5.xx.xx:/exp/newxx/tmp
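Not part of the original procedure, but given a multi-hour scp of roughly 2 TB, it may be worth verifying the transfer with a checksum manifest before starting the restore. A minimal sketch of the pattern (temporary directories stand in for the two src_backup directories; md5sum is assumed to be available on both ends):

```shell
#!/bin/sh
# Sketch: verify copied backup pieces with an md5 manifest.
# The temp directories stand in for the source and target src_backup
# dirs; with real hosts, scp the manifest alongside the backup pieces.
set -e
SRC_DIR=$(mktemp -d); DST_DIR=$(mktemp -d)
echo "backup piece" > "$SRC_DIR/01abc_1_1"          # a stand-in backup piece
cp "$SRC_DIR/01abc_1_1" "$DST_DIR/"                 # stand-in for the scp step
( cd "$SRC_DIR" && md5sum ./* ) > "$DST_DIR/manifest.md5"
# md5sum -c exits non-zero if any listed file is missing or altered
( cd "$DST_DIR" && md5sum -c manifest.md5 ) && echo "transfer OK"
```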

On the target Linux host, perform the tablespace restore and convert the data files into the ASM disk group. Here too, each failed run leaves a failure marker file in /exp/newxx/tmp that must be deleted before rerunning (see the special notes in section 3.2 below).

cd /exp/newxx
--The full_restore.sh script is as follows
export TMPDIR=/exp/newxx/tmp
export ORACLE_SID=targetdb1
/opt/app/ora11g/product/11.2.0/perl/bin/perl /exp/newxx/xttdriver.pl -c -d

Perform full recovery and conversion in the background:

nohup sh full_restore.sh > full_restore.log &

Recovery and conversion took 4 hours 15 minutes.

3. Incremental rollforward stage

3.1 Incremental Backup of Tablespaces
Take the incremental backup at the source:

cd /exp/newxx
--The incremental backup script incre_backup.sh is as follows
export ORACLE_SID=sourcedb
export TMPDIR=/exp/newxx/tmp
export PERL5LIB=/opt/app/ora103/10.2.0/product/perl/lib
/opt/app/ora103/10.2.0/product/perl/bin/perl /exp/newxx/xttdriver.pl -i -d

Perform incremental backups in the background:

cd /exp/newxx
nohup sh incre_backup.sh > incre_backup.log &

Make sure xtt.properties is configured correctly before the incremental backup. The incremental backup took only a few minutes, which indicates BCT is working.

--(Optional) Create a test table to validate the next incremental round:
SQL> create table JINGYU.xttstest tablespace DBS_D_JINGYU as select * from dba_objects;
SQL> select count(1) from JINGYU.xttstest;

Transfer files to the target end:

cd /exp/newxx/backup_incre
scp *_1_1 ora11g@10.5.xx.xx:/exp/newxx/backup_incre
cd /exp/newxx/tmp
scp * ora11g@10.5.xx.xx:/exp/newxx/tmp

3.2 Table Space Incremental Recovery
Incremental recovery at the target end:

cd /exp/newxx
--The incre_recover.sh script is as follows
export TMPDIR=/exp/newxx/tmp
export ORACLE_SID=targetdb1
/opt/app/ora11g/product/11.2.0/perl/bin/perl /exp/newxx/xttdriver.pl -r -d

Run the incremental recovery in the background:

nohup sh incre_recover.sh > incre_recover.log &

Special notes:
1. The incremental roll-forward steps above can be repeated any number of times before the formal migration, applying multiple incremental recoveries to the target database. This keeps the target nearly identical to production before the cutover and greatly reduces migration downtime.
2. After each successful backup (full and incremental), files are generated in the source /exp/newxx/tmp directory; all files in this directory must be transferred to the target /exp/newxx/tmp (overwriting each time is fine).
3. Each backup (full and incremental) also generates a new xttplan.txt.new file in /exp/newxx/tmp, which records the latest SCN of each tablespace. Before each incremental recovery, rename the old xttplan.txt on the Linux side as follows:
cd /exp/newxx/tmp
mv xttplan.txt xttplan.old1.txt
mv xttplan.txt.new xttplan.txt
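The rename steps above can be sketched as a small helper that keeps numbered copies of each superseded plan file (the demo directory below stands in for /exp/newxx/tmp; the file contents are placeholders):

```shell
#!/bin/sh
# Sketch: rotate xttplan.txt before each incremental recovery, keeping
# numbered copies of the old plan. TMP is a demo directory standing in
# for /exp/newxx/tmp.
set -e
TMP=$(mktemp -d)
echo "old scn list" > "$TMP/xttplan.txt"        # plan used by the last roll-forward
echo "new scn list" > "$TMP/xttplan.txt.new"    # plan produced by the latest backup
n=1
while [ -f "$TMP/xttplan.old$n.txt" ]; do n=$((n+1)); done
mv "$TMP/xttplan.txt" "$TMP/xttplan.old$n.txt"
mv "$TMP/xttplan.txt.new" "$TMP/xttplan.txt"
cat "$TMP/xttplan.txt"    # now holds the latest SCNs
```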

4. Formal migration phase

4.1 Set Tablespaces Read Only
After the application side stops business, review sessions at the database level to confirm no user connections remain.
On the source AIX host, set the business tablespaces being transported to READ ONLY:

sqlplus -S / as sysdba
set pages 0
set feedback off
spool /exp/newxx/scripts/read_only.sql
select 'alter tablespace '||name||' read only;' from v$tablespace where name in ('DBS_D_JINGYU','DBS_I_JINGYU') order by 1;
spool off
exit

cat /exp/newxx/scripts/read_only.sql 
@/exp/newxx/scripts/read_only.sql

4.2 Last Incremental Round
Following the incremental roll-forward procedure above, complete one last incremental backup and recovery.
In this test, the last incremental backup took 21 minutes.

4.3 Enable Flashback on the Target
On the target Linux host, enable flashback before importing the metadata:

SQL> alter system set db_recovery_file_dest_size=100g scope=both;
System altered.

SQL> alter system set db_recovery_file_dest='+DG_DATA' scope=both;
System altered.

SQL> alter database flashback on;
Database altered.

SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
YES

SQL> create restore point before_imp_xtts guarantee flashback database;
Restore point created.

SQL> select name from v$restore_point;
//Verify that the restore point just created exists.

4.4 Importing XTTS metadata
4.4.1 Export XTTS metadata from the AIX source:

create directory dump as '/exp/newxx/dump';

Export the tablespace and user metadata:

--Export tablespace metadata (vi expdp_xtts.sh)
expdp system/oracle parfile=expdp_xtts.par
--The contents of expdp_xtts.par are as follows:
directory=dump
dumpfile=tbs_xtts.dmp
logfile=expdp_xtts.log
transport_tablespaces=('DBS_D_JINGYU','DBS_I_JINGYU')
transport_full_check=y
metrics=yes

--Export user metadata (vi expdp_xtts_other.sh)
expdp system/oracle parfile=expdp_xtts_other.par
--The contents of expdp_xtts_other.par are as follows:
directory=dump
dumpfile=tbs_xtts_other.dmp
logfile=expdp_xtts_other.log
content=metadata_only
schemas=JINGYU
metrics=yes

Execute the scripts that export the tablespace and user metadata:

cd /exp/newxx/dump
./expdp_xtts.sh
./expdp_xtts_other.sh

After the export completes, transfer the dump files to the target /exp/newxx/dump directory:

cd /exp/newxx/dump
scp *.dmp ora11g@10.5.xx.xx:/exp/newxx/dump

4.4.2 Import XTTS metadata on the LINUX target:
Create the directory:

create or replace directory dump as '/exp/newxx/dump';

Import XTTS metadata:

--Import XTTS metadata (vi impdp_xtts.sh)
impdp system/oracle parfile=impdp_xtts.par
--The contents of impdp_xtts.par are as follows:
directory=dump
logfile=impdp_xtts.log
dumpfile=tbs_xtts.dmp
cluster=n
metrics=yes
transport_datafiles='+DG_DATA/targetdb/DATAFILE/DBS_D_JINGYU.290.976290433',
'+DG_DATA/targetdb/DATAFILE/DBS_I_JINGYU.286.976290433'

Note: The above data file path needs to be changed according to the actual import situation.

Execute scripts that import XTTS metadata:

cd /exp/newxx/dump
./impdp_xtts.sh
SQL> select count(1) from JINGYU.xttstest;
//Returns the expected count, confirming the rolled-forward data is visible.

When the import completes and this query succeeds, the validation passes.

4.5 Set Tablespaces Read Write
On the LINUX target, set the tablespaces back to read write:

sqlplus -S / as sysdba
set pages 0
set feedback off
spool /exp/newxx/scripts/read_write.sql
select 'alter tablespace '||name||' read write;' from v$tablespace where name in ('DBS_D_JINGYU','DBS_I_JINGYU') order by 1;
spool off
exit 

cat /exp/newxx/scripts/read_write.sql
@/exp/newxx/scripts/read_write.sql

4.6 Create a Second Restore Point
On the target Linux host, create another guaranteed restore point before importing the remaining metadata:

sqlplus / as sysdba
select flashback_on from v$database;
create restore point before_imp_other guarantee flashback database;
select name from v$restore_point;

4.7 Import Remaining Metadata
Import the remaining (schema) metadata:

--Import the remaining metadata (vi impdp_xtts_other.sh)
impdp system/oracle parfile=impdp_xtts_other.par
--The contents of impdp_xtts_other.par are as follows:
directory=dump 
dumpfile=tbs_xtts_other.dmp
logfile=impdp_xtts_other.log 
content=metadata_only
schemas=JINGYU
cluster=n
metrics=yes

Execute scripts that import other metadata:

cd /exp/newxx/dump
./impdp_xtts_other.sh

4.8 Check Public dblinks
Query the dblinks in the original production environment; if any exist, recreate them in the new environment:

--Inquiry of original production environment:
select * from dba_db_links;
OWNER                          DB_LINK                        USERNAME                       HOST                           CREATED
------------------------------ ------------------------------ ------------------------------ ------------------------------ -------------------
JINGYU                          XXX_TMP                        JINGYU                          xxdb                          2008-05-20 09:51:14

select dbms_metadata.get_ddl('DB_LINK',DB_LINK,'JINGYU') FROM DBA_DB_LINKS where owner='JINGYU';

CREATE DATABASE LINK "XXX_TMP"
   CONNECT TO "JINGYU" IDENTIFIED BY VALUES '056414CFC01C4F42E2E496B913FDC0212A'
   USING 'xxdb';

--Connected as the JINGYU user, the dblink can then be created; grant the privilege first if it is missing:
grant create database link to JINGYU;

4.9 Check Public Synonyms
Query the public synonyms in the original production environment; if any exist, recreate them in the new environment:

select owner,SYNONYM_NAME,TABLE_OWNER,TABLE_NAME from dba_synonyms where owner='PUBLIC' and table_owner in ('JINGYU');

There was no result this time.

4.10 Check External Tables
Query the external table information in the original production environment; if any exist, recreate the external tables in the new environment:

SQL> select * from dba_external_tables;

There was no result this time.

4.11 Data Comparison
Run the same query in the source and target environments and compare the results:

set linesize 200
set pagesize 9999
col owner format a15
col object_type format a15
select owner, object_type, count(*)
  from dba_objects
 where object_name not like 'BIN%'
   and owner in ('JINGYU')
 group by owner, object_type
 order by 1,2 desc;

OWNER           OBJECT_TYPE       COUNT(*)
--------------- --------------- ----------
JINGYU           VIEW                     2
JINGYU           TABLE PARTITION         25
JINGYU           TABLE                   49
JINGYU           SEQUENCE                 4
JINGYU           PROCEDURE                5
JINGYU           INDEX PARTITION        225
JINGYU           INDEX                   55
JINGYU           FUNCTION                 3
JINGYU           DATABASE LINK            1

9 rows selected.

4.12 Compile Invalid Objects
Query the number of invalid objects, grouped by object type:

sqlplus / as sysdba
set timing on
select owner, object_type, count(*)
  from dba_objects
 where status <> 'VALID'
   and owner in ('JINGYU')
 group by owner, object_type
 order by 1, 2 desc;

OWNER           OBJECT_TYPE       COUNT(*)
--------------- --------------- ----------
JINGYU           PROCEDURE                1
JINGYU           FUNCTION                 1

--View the specific invalid objects and compare source and target:
set linesize 200
set pagesize 9999
col owner format a15
col object_type format a15
col OBJECT_NAME for a32
select owner,object_name, object_type, status
  from dba_objects
 where status <> 'VALID'
   and owner in ('JINGYU') order by 2;

OWNER           OBJECT_NAME                      OBJECT_TYPE     STATUS
--------------- -------------------------------- --------------- -------
JINGYU           XXXXCITY                         FUNCTION        INVALID
JINGYU           TAB_MAINTAIN_XXX                 PROCEDURE       INVALID

//Compile invalid objects (skip this if the invalid objects are identical on both sides)
exec utl_recomp.recomp_parallel(64);

4.13 Change the User's Default Tablespace
Because the tablespace was created after the user, the default tablespace must be changed manually:

alter user JINGYU default tablespace DBS_D_JINGYU;
select username,default_tablespace,temporary_tablespace from dba_users where username='JINGYU';

4.14 Drop the Restore Points
After confirming the migration succeeded, manually drop the restore points.

--(Optional) Drop the earlier test table at this point:
drop table JINGYU.xttstest;

--Manually drop the restore points:
drop restore point BEFORE_IMP_XTTS;
drop restore point before_imp_other;
select name from v$restore_point;

--Turn off the flashback database feature:
SQL> alter database flashback off;

Database altered.

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
NO

SQL> alter system set db_recovery_file_dest='' sid='*' scope=both;

System altered.

4.15 Modify Service Parameters
Manually add the service name matching the user, keeping service_names consistent with the source:

--Make the service_names parameter consistent with the source:
show parameter service_names
alter system set service_names ='dbaas','targetdb','jingyursv', 'jingyu' sid='*' scope=both;
alter system register;

Check the listener registration:

--This environment uses a non-default listener named targetdb:
lsnrctl status targetdb

5. Other considerations

Drawing on colleagues' experience with XTTS migrations in other projects, combined with issues encountered during this implementation, here are other points to note related to XTTS (you are welcome to add the pitfalls you have hit with XTTS and share your own experience):

  • 1. The time required for an XTTS migration varies with the environment; the downtime must be evaluated for each migration.
  • 2. XTTS migration speed is mainly limited by the metadata export/import time (which cannot be parallelized); the more objects, the slower the export/import.
  • 3. Check whether the exported tablespaces contain objects (e.g. indexes) owned by other users. Such objects are not always caught by the self-contained check, so check before the full backup and handle them in advance. None were encountered this time:
  /*Query the details of objects in the tablespaces*/
  SELECT OWNER                  AS OWNER
       ,SEGMENT_NAME           AS SEGMENT_NAME
       ,SEGMENT_TYPE           AS SEGMENT_TYPE
       ,SUM(BYTES)/1024/1024   AS SEGMENT_SIZE
  FROM DBA_SEGMENTS
  WHERE TABLESPACE_NAME in ('DBS_D_JINGYU','DBS_I_JINGYU') and owner <> 'JINGYU'
  GROUP BY OWNER,SEGMENT_NAME,SEGMENT_TYPE
  ORDER BY 4;
  • 4. An XTTS migration must be tested against each specific database and documented in a detailed runbook: the relationships between user objects differ per database, and interdependencies can be difficult to untangle when importing the metadata, so tests must be done in advance to sort them out.
  • 5. The target side of XTTS must have 11.2.0.4 software for the data file conversion; the source database must be at least version 10g.
  • 6. When XTTS performs the full and incremental recovery on the target, running the recovery script restarts the instance (take special care if the target database already hosts other production users).
  • 7. In this test, the tablespace full backup silently missed files because a source data file name contained special characters, and the logs reported no errors; the missing files were only discovered during recovery. It is therefore recommended to add a data file count check to the preparation steps:
  select count(1) from dba_data_files where tablespace_name in ('DBS_D_JINGYU','DBS_I_JINGYU');
  --135 data files to migrate, consistent with the number of files backed up from the tablespaces.

Regarding item 7: I actually encountered and reproduced this issue in a test environment; see my earlier posts for details.
