
Before 11g, if you added a new column with a default value to a table, Oracle had to lock the table and physically update all of its rows. Possibly a painful process 🙂 11g helped with that a little by making default values for NOT NULL columns metadata-only, so the change was done only in the data dictionary, not in the table data.

SQL> create table default_values (
  2    id number primary key
  3  );

Table created.

SQL> insert into default_values values (1);

1 row created.

SQL> alter table default_values add def_w_not_null varchar2(20) default 'THIS IS DEFAULT' not null;

Table altered.

SQL> select * from default_values;

        ID DEF_W_NOT_NULL
---------- --------------------
         1 THIS IS DEFAULT

SQL> select * from table(dbms_xplan.display_cursor(null,null,'projection'));
...
Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "DEFAULT_VALUES"."ID"[NUMBER,22],
       NVL("DEFAULT_VALUES"."DEF_W_NOT_NULL",'THIS IS DEFAULT')[20]
...

So internally Oracle just rewrites the column reference with the following expression: NVL("DEFAULT_VALUES"."DEF_W_NOT_NULL",'THIS IS DEFAULT'). Makes sense 🙂

12c now adds the possibility of metadata-only default values for nullable columns as well. Let's see what happens:

SQL> alter table default_values add def_w_null varchar2(20) default 'THIS IS DEFAULT2';

Table altered.

SQL> select id, def_w_null from default_values;

        ID DEF_W_NULL
---------- --------------------
         1 THIS IS DEFAULT2

SQL> select * from table(dbms_xplan.display_cursor(null,null,'projection'));
...
Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - "ID"[NUMBER,22], DECODE(TO_CHAR(SYS_OP_VECBIT("SYS_NC00003$",0)),NULL
       ,NVL("DEF_W_NULL",'THIS IS DEFAULT2'),'0',NVL("DEF_W_NULL",'THIS IS
       DEFAULT2'),'1',"DEF_W_NULL")[20]

We have a much more complex expression now: DECODE(TO_CHAR(SYS_OP_VECBIT("SYS_NC00003$",0)),NULL,NVL("DEF_W_NULL",'THIS IS DEFAULT2'),'0',NVL("DEF_W_NULL",'THIS IS DEFAULT2'),'1',"DEF_W_NULL"). And also a new hidden column, SYS_NC00003$. What is in there?

SQL> select id, DEF_W_NULL, nvl2(SYS_NC00003$,'is not null','is null') is_hidden_col_null, SYS_NC00003$ FROM default_values;

        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00003$
---------- -------------------- ----------- --------------------
         1 THIS IS DEFAULT2     is null

SQL> insert into default_values (id, DEF_W_NULL) values (2, null);

1 row created.

SQL> commit;

Commit complete.

SQL> select id, DEF_W_NULL, nvl2(SYS_NC00003$,'is not null','is null') is_hidden_col_null, SYS_NC00003$ FROM default_values;

        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00003$
---------- -------------------- ----------- --------------------
         1 THIS IS DEFAULT2     is null
         2                      is not null 01

SQL> insert into default_values (id, DEF_W_NULL) values (3, 'some value');

1 row created.

SQL> select id, DEF_W_NULL, nvl2(SYS_NC00003$,'is not null','is null') is_hidden_col_null, SYS_NC00003$ FROM default_values;

        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00003$
---------- -------------------- ----------- --------------------
         1 THIS IS DEFAULT2     is null
         2                      is not null 01
         3 some value           is not null 01

SQL> insert into default_values (id) values (4);

1 row created.

SQL> select id, DEF_W_NULL, nvl2(SYS_NC00003$,'is not null','is null') is_hidden_col_null, SYS_NC00003$ FROM default_values;

        ID DEF_W_NULL           IS_HIDDEN_C SYS_NC00003$
---------- -------------------- ----------- --------------------
         1 THIS IS DEFAULT2     is null
         2                      is not null 01
         3 some value           is not null 01
         4 THIS IS DEFAULT2     is not null 01

This new column seems to hold a bit that says whether the value in the visible column should be trusted or replaced with the default. All the rows that existed before the new column was added have NULL in this hidden column, but every newly modified row sets a specific bit to 1. Looking at the original expression, there is also a branch for this bit being 0, so maybe this hidden column can carry other "row status bits" as well? Formatted, the expression reads:

DECODE (TO_CHAR (sys_op_vecbit ("SYS_NC00003$", 0)),
    NULL, NVL ("DEF_W_NULL", 'THIS IS DEFAULT2'),
    '0', NVL ("DEF_W_NULL", 'THIS IS DEFAULT2'),
    '1', "DEF_W_NULL"
)
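
The same bookkeeping is visible in the data dictionary. A quick check against USER_TAB_COLS (which, unlike USER_TAB_COLUMNS, also lists hidden columns):

SQL> select column_name, hidden_column, user_generated
  2  from user_tab_cols
  3  where table_name = 'DEFAULT_VALUES';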

When cloning pluggable databases, Oracle also gives you the SNAPSHOT COPY clause to utilize storage system snapshot capabilities and save on storage space. But it only works when the datafiles are placed on ACFS, ZFS Storage Appliance over dNFS, or NetApp (Oracle Multitenant Pluggable Database Snapshot Cloning: Use Cases and Supported Platforms, Doc ID 1597027.1), and you cannot unplug or drop the source PDB, nor can you unplug the clone PDB.
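
For reference, the built-in clause is a one-liner (a sketch reusing the PDB names from this post; it requires one of the supported storage options listed above):

SQL> create pluggable database db3 from db2 snapshot copy;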

Recently I've started to like BTRFS (only on test systems!), which also has built-in snapshot capabilities. In this article I'll try to use BTRFS snapshots to create pluggable database clones. It is not as automated as Oracle's built-in snapshots, but it is possible.

What I have before cloning:

SQL> select name, open_mode, guid from v$pdbs;

NAME                           OPEN_MODE  GUID
------------------------------ ---------- --------------------------------
PDB$SEED                       READ ONLY  FFCDD773320B2767E0432700FA0A06D0
DB1                            MOUNTED    FFCC6C3F473E124BE0432700FA0A429E
DB2                            READ WRITE 000A7D190B3C15A1E0532700FA0A3C10

SQL> alter session set container=db2;

Session altered.

SQL> select round(bytes/1024/1024) mb, file_name from dba_data_files;

        MB FILE_NAME
---------- -------------
       280 /u01/db2/system01.dbf
       690 /u01/db2/sysaux01.dbf
         5 /u01/db2/users01.dbf
       100 /u01/db2/tdata01.dbf
      2048 /u01/db2/tlarge01.dbf

My goal is to clone DB2 to a new PDB called DB3. Currently DB2 is stored under /u01/db2/, which is a dedicated subvolume under the BTRFS filesystem /u01. The filesystem is currently using 21.23GB.

[root@cdbtest u01]# btrfs subvolume list -a -t /u01
ID      gen     top level       path
--      ---     ---------       ----
256     1543    5               app
281     736     5               app_bak
285     1545    5               data
286     1297    5               data_bak
287     1544    5               db2
289     1333    5               db2_bak

[root@cdbtest u01]# ls -l /u01/db2
total 3198048
-rw-r----- 1 oracle oinstall  723525632 Aug  7 17:40 sysaux01.dbf
-rw-r----- 1 oracle oinstall  293609472 Aug  7 17:42 system01.dbf
-rw-r----- 1 oracle oinstall  104865792 Aug  7 17:21 tdata01.dbf
-rw-r----- 1 oracle oinstall   20979712 Aug  7 16:56 temp01.dbf
-rw-r----- 1 oracle oinstall 2147491840 Aug  7 17:21 tlarge01.dbf
-rw-r----- 1 oracle oinstall    5251072 Aug  7 17:21 users01.dbf

[root@cdbtest u01]# btrfs filesystem df /u01
Data: total=29.01GB, used=21.23GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=124.73MB
Metadata: total=8.00MB, used=0.00

Now, let's start. First I need to open DB2 read-only and then create a description XML file from it.
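
Getting DB2 into read-only mode is just:

SQL> alter pluggable database db2 close immediate;
SQL> alter pluggable database db2 open read only;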

SQL> exec dbms_pdb.describe(pdb_descr_file=>'/u01/db2_pdb.xml');

PL/SQL procedure successfully completed.

Then create a BTRFS snapshot under the directory /u01/db3:

[root@cdbtest u01]# btrfs subvolume snapshot db2 db3
Create a snapshot of 'db2' in './db3'

[root@cdbtest u01]# btrfs filesystem df /u01
Data: total=29.01GB, used=21.23GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=124.75MB
Metadata: total=8.00MB, used=0.00

[root@cdbtest u01]# ls -l
total 8
drwxr-xr-x 1 oracle oinstall   50 Aug  4 11:55 app
drwxr-xr-x 1 oracle oinstall   50 Aug  4 11:55 app_bak
drwxrwxr-x 1 oracle oinstall   60 Aug  5 15:16 data
drwxrwxr-x 1 oracle oinstall   60 Aug  5 15:16 data_bak
drwxr-xr-x 1 oracle oinstall  136 Aug  7 17:53 db2
drwxr-xr-x 1 oracle oinstall  112 Aug  7 15:47 db2_bak
-rw-r--r-- 1 oracle oinstall 6370 Aug  7 17:53 db2_pdb.xml
drwxr-xr-x 1 oracle oinstall  136 Aug  7 17:53 db3

[root@cdbtest u01]# ls -l db3
total 3198048
-rw-r----- 1 oracle oinstall  723525632 Aug  7 17:49 sysaux01.dbf
-rw-r----- 1 oracle oinstall  293609472 Aug  7 17:49 system01.dbf
-rw-r----- 1 oracle oinstall  104865792 Aug  7 17:49 tdata01.dbf
-rw-r----- 1 oracle oinstall   20979712 Aug  7 16:56 temp01.dbf
-rw-r----- 1 oracle oinstall 2147491840 Aug  7 17:49 tlarge01.dbf
-rw-r----- 1 oracle oinstall    5251072 Aug  7 17:49 users01.dbf

I have a snapshot now and no additional space used! 🙂 Let's try to plug it in as a new PDB.

SQL> alter session set container=cdb$root;

Session altered.

SQL> create pluggable database db3 using '/u01/db2_pdb.xml' source_file_name_convert=('/u01/db2/','/u01/db3/') nocopy tempfile reuse;
create pluggable database db3 using '/u01/db2_pdb.xml' source_file_name_convert=('/u01/db2/','/u01/db3/') nocopy tempfile reuse
*
ERROR at line 1:
ORA-65122: Pluggable database GUID conflicts with the GUID of an existing container.

A simple plug-in does not work, since the CDB already has a PDB with the same GUID. The CREATE PLUGGABLE DATABASE command has an AS CLONE clause to generate new GUID and DBID values.

SQL> create pluggable database db3 AS CLONE using '/u01/db2_pdb.xml' source_file_name_convert=('/u01/db2/','/u01/db3/') nocopy tempfile reuse;

Pluggable database created.

SQL> select name, open_mode, guid from v$pdbs;

NAME                           OPEN_MODE  GUID
------------------------------ ---------- --------------------------------
PDB$SEED                       READ ONLY  FFCDD773320B2767E0432700FA0A06D0
DB1                            MOUNTED    FFCC6C3F473E124BE0432700FA0A429E
DB2                            READ ONLY  000A7D190B3C15A1E0532700FA0A3C10
DB3                            MOUNTED    000C54AE44352331E0532700FA0AFA69

SQL> alter session set container=db3;

Session altered.

SQL> select name from v$datafile;

NAME
----------------------------------------------------
/u01/data/CDB2/datafile/o1_mf_undotbs1_9xyyw5dc_.dbf
/u01/db3/system01.dbf
/u01/db3/sysaux01.dbf
/u01/db3/users01.dbf
/u01/db3/tdata01.dbf
/u01/db3/tlarge01.dbf

6 rows selected.

Looks good from the DB side (ignore the UNDO datafile; it is the CDB undo datafile, since a PDB does not have its own). How does the storage usage look?

[root@cdbtest u01]# btrfs filesystem df /u01
Data: total=29.01GB, used=21.23GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=124.87MB
Metadata: total=8.00MB, used=0.00

Still 21.23GB. Let's open both DB2 and DB3 and see what happens.

SQL> alter session set container=cdb$root;

Session altered.

SQL> alter pluggable database db2 close;

Pluggable database altered.

SQL> alter pluggable database db2 open;

Pluggable database altered.

SQL> alter pluggable database db3 open;

Pluggable database altered.

SQL> select name, open_mode, guid from v$pdbs;

NAME                           OPEN_MODE  GUID
------------------------------ ---------- --------------------------------
PDB$SEED                       READ ONLY  FFCDD773320B2767E0432700FA0A06D0
DB1                            MOUNTED    FFCC6C3F473E124BE0432700FA0A429E
DB2                            READ WRITE 000A7D190B3C15A1E0532700FA0A3C10
DB3                            READ WRITE 000C54AE44352331E0532700FA0AFA69
[root@cdbtest u01]# btrfs filesystem df /u01
Data: total=29.01GB, used=21.23GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=124.90MB
Metadata: total=8.00MB, used=0.00

Still no additional storage used by the snapshot. I'll modify some data in DB3 tables; then it should consume additional space.

SQL> alter session set container=db3;

Session altered.

SQL> select count(*) from t1.large;

  COUNT(*)
----------
   1897686

SQL> desc t1.large
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ------------------------------------
 OWNER                                                 NOT NULL VARCHAR2(128)
 OBJECT_NAME                                           NOT NULL VARCHAR2(128)
 SUBOBJECT_NAME                                                 VARCHAR2(128)
 OBJECT_ID                                             NOT NULL NUMBER
 DATA_OBJECT_ID                                                 NUMBER
 OBJECT_TYPE                                                    VARCHAR2(23)
 CREATED                                               NOT NULL DATE
 LAST_DDL_TIME                                         NOT NULL DATE
 TIMESTAMP                                                      VARCHAR2(19)
 STATUS                                                         VARCHAR2(7)
 TEMPORARY                                                      VARCHAR2(1)
 GENERATED                                                      VARCHAR2(1)
 SECONDARY                                                      VARCHAR2(1)
 NAMESPACE                                             NOT NULL NUMBER
 EDITION_NAME                                                   VARCHAR2(128)
 SHARING                                                        VARCHAR2(13)
 EDITIONABLE                                                    VARCHAR2(1)
 ORACLE_MAINTAINED                                              VARCHAR2(1)

SQL> update t1.large set owner = substr(owner, 1, 2);

1897686 rows updated.

SQL> commit;

Commit complete.

SQL> alter system checkpoint;

System altered.

SQL> select round(bytes/1024/1024) mb from dba_segments where owner='T1' and segment_name = 'LARGE';

        MB
----------
       256
[root@cdbtest u01]# btrfs filesystem df /u01
Data: total=29.01GB, used=21.78GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=130.33MB
Metadata: total=8.00MB, used=0.00

I tried to change every block in that table. The table is 256MB, and 21.78-21.23=0.55GB changed on disk; the extra is presumably undo, which also lives on /u01 (see the CDB undo datafile under /u01/data above). Just for a test, let's run the update again (to see a little more of how BTRFS snapshots behave).

SQL> update t1.large set owner = 'AX';

1897686 rows updated.

SQL> commit;

Commit complete.

SQL> alter system checkpoint;

System altered.
[root@cdbtest u01]# btrfs filesystem df /u01
Data: total=29.01GB, used=22.00GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=132.34MB
Metadata: total=8.00MB, used=0.00

I should have modified the same blocks, but BTRFS used another 0.22GB of disk space. I need to look more into how BTRFS snapshots work. But here you have it: PDB snapshot cloning without restrictions 🙂
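
Finally, when a clone like this is no longer needed, cleanup is a two-step job (a sketch, not tested here): drop the PDB but keep the files, then let BTRFS remove the whole subvolume.

SQL> alter pluggable database db3 close immediate;
SQL> drop pluggable database db3 keep datafiles;

[root@cdbtest u01]# btrfs subvolume delete /u01/db3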

I started writing this blog post about how 12c gave us the new multitenant architecture (pluggable databases) and, with that, also a new way to do database upgrades. But while writing it I discovered how catctl.pl can fail the upgrade if you have modified the SQL*Plus prompt with a custom login.sql.

I’m starting with these already created databases:

SID   Home                                 Description
cdb1  /u01/app/oracle/product/12.1.0.1/db  The initial container, version 12.1.0.1. It hosts the pluggable database DB1 that contains my application data.
cdb2  /u01/app/oracle/product/12.1.0.2/db  The target container, already pre-created with version 12.1.0.2.

All of them use Oracle Managed Files (OMF).
The goal of this blog post is to upgrade database DB1 from version 12.1.0.1 to 12.1.0.2 by unplugging it from the CDB1 container and plugging it into the CDB2 container, which was created with version 12.1.0.2.

CDB2 already exists as an empty container database built with the new Oracle version 12.1.0.2. The upgrade itself is simply unplugging DB1 from the old container, plugging it into the new one, and running the database upgrade scripts against the plugged-in PDB.

sys@cdb1(cdbtest.aws.ee.dom)> alter pluggable database db1 close immediate;

Pluggable database altered.

Elapsed: 00:00:02.05

sys@cdb1(cdbtest.aws.ee.dom)> alter pluggable database db1 unplug into '/u01/data/db1_upgrade.xml';

Pluggable database altered.

Elapsed: 00:00:01.41

[oracle@cdbtest data]$ . oraenv
ORACLE_SID = [cdb1] ? cdb2
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@cdbtest data]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0.2/db

sys@cdb1(cdbtest.aws.ee.dom)> CREATE PLUGGABLE DATABASE db1 USING '/u01/data/db1_upgrade.xml' nocopy tempfile reuse;

Pluggable database created.

Elapsed: 00:00:10.16

sys@cdb1(cdbtest.aws.ee.dom)> alter pluggable database db1 open upgrade;

Warning: PDB altered with errors.


[oracle@cdbtest data]$ cd $ORACLE_HOME/rdbms/admin
[oracle@cdbtest admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'DB1' catupgrd.sql

... cut
[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
DB1
PDB Inclusion:[DB1] Exclusion:[]

     Time: 2s For PDB(s)

Grand Total Time: 2s

LOG FILES: (catupgrd*.log)

Grand Total Upgrade Time:    [0d:0h:0m:2s]

Something is wrong; 2 seconds is too fast. The logfile catupgrd0.log reveals a weird error:

sys@cdb2(cdbtest.aws.ee.dom)> sys@cdb2(cdbtest.aws.ee.dom)> ALTER SESSION SET CONTAINER = "0";
ERROR:
ORA-65011: Pluggable database 0 does not exist.


sys@cdb2(cdbtest.aws.ee.dom)> INSERT INTO sys.registry$error VALUES ('SYS', SYSDATE, 'catctl.pl', 'CATPROC', 'Invalid Upgrade on [0] Check catupgrd*.log', 'Invalid Upgrade');

1 row created.

Now that is a weird error, since I specified DB1 as the PDB name. Let's try again with serial execution, cleaning up the failed upgrade from last time first:

$ sqlplus / as sysdba

SQL> DELETE FROM sys.registry$error;

1 row deleted.

SQL> commit;

Commit complete.

And run catctl.pl again, with the -S flag to run in serial.

[oracle@cdbtest ~]$ cd $ORACLE_HOME/rdbms/admin
[oracle@cdbtest admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'DB1' -S catupgrd.sql

... cut
[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
DB1
PDB Inclusion:[DB1] Exclusion:[]
PDB Serial Inclusion:[DB1] Exclusion:[0]
Running File In Serial Order FileName is catupgrd.sql
...

So, serial execution works. Let's try one more thing. "Bug 17810688 PDB upgrade error running catcon.pl or catctl.pl" describes a PDB upgrade failing when sqlplus is not in PATH, so maybe this is some kind of parsing error of the sqlplus output that only shows itself when PDBs are upgraded? As you saw above, I'm using a custom SQL*Plus prompt 🙂 So let's remove my custom login.sql and try running the upgrade again, in parallel.

[oracle@cdbtest admin]$ echo $SQLPATH
/home/oracle/dbscripts/oracle
[oracle@cdbtest admin]$ mv $SQLPATH/login.sql $SQLPATH/login.sql.bak
[oracle@cdbtest admin]$ pwd
/u01/app/oracle/product/12.1.0.2/db/rdbms/admin
[oracle@cdbtest admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'DB1' catupgrd.sql

... cut
[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
DB1
PDB Inclusion:[DB1] Exclusion:[]

------------------------------------------------------
Phases [0-73]
Container Lists Inclusion:[DB1] Exclusion:[]
Serial   Phase #: 0 Files: 1     Time: 16s   DB1
Serial   Phase #: 1 Files: 5

So the upgrade started to run this time. I'll cancel it and debug further. Could this be because of the custom prompt? I'll put login.sql back and this time only comment out the line that sets the custom prompt (set sqlprompt):

[oracle@cdbtest admin]$ mv $SQLPATH/login.sql.bak $SQLPATH/login.sql
[oracle@cdbtest admin]$ vi $SQLPATH/login.sql

define _editor=vi
set serveroutput on size 1000000
set linesize 100
set pagesize 9999
column plan_plus_exp format a80
column global_name new_value gname
set termout off
define gname=idle
column global_name new_value gname
select lower(user)||'@'||instance_name||'('||host_name||')' global_name from v$instance;
--set sqlprompt '&gname> '
set termout on

column db_unique_name format a20
column database_role format a16
column db_version format a10
column db_status format a8
column db_compatible format a10
column db_version heading VERSION
column db_status heading STATUS
column db_compatible heading COMPATIBLE
select d.db_unique_name, d.database_role, v.version db_version, p.value db_compatible, v.status db_status
  from v$database d, v$instance v, v$parameter p where p.name = 'compatible';

[oracle@cdbtest admin]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Aug 5 15:24:41 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


DB_UNIQUE_NAME       DATABASE_ROLE    VERSION    COMPATIBLE STATUS
-------------------- ---------------- ---------- ---------- --------
cdb2                 PRIMARY          12.1.0.2.0 12.1.0.2.0 OPEN

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@cdbtest admin]$ pwd
/u01/app/oracle/product/12.1.0.2/db/rdbms/admin
[oracle@cdbtest admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'DB1' catupgrd.sql

... cut
[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
DB1
PDB Inclusion:[DB1] Exclusion:[]

     Time: 2s For PDB(s)

Grand Total Time: 2s

LOG FILES: (catupgrd*.log)

Grand Total Upgrade Time:    [0d:0h:0m:2s]

Still doesn't work. Maybe it doesn't like the database information output; I'll put the custom prompt back and comment out the DB information query instead.

[oracle@cdbtest admin]$ cat $SQLPATH/login.sql
define _editor=vi
set serveroutput on size 1000000
set linesize 100
set pagesize 9999
column plan_plus_exp format a80
column global_name new_value gname
set termout off
define gname=idle
column global_name new_value gname
select lower(user)||'@'||instance_name||'('||host_name||')' global_name from v$instance;
set sqlprompt '&gname> '
set termout on

column db_unique_name format a20
column database_role format a16
column db_version format a10
column db_status format a8
column db_compatible format a10
column db_version heading VERSION
column db_status heading STATUS
column db_compatible heading COMPATIBLE
--select d.db_unique_name, d.database_role, v.version db_version, p.value db_compatible, v.status db_status
--  from v$database d, v$instance v, v$parameter p where p.name = 'compatible';

[oracle@cdbtest admin]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Aug 5 15:28:42 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

sys@cdb2(cdbtest.aws.ee.dom)> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@cdbtest admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'DB1' catupgrd.sql

... cut
[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
DB1
PDB Inclusion:[DB1] Exclusion:[]

------------------------------------------------------
Phases [0-73]
Container Lists Inclusion:[DB1] Exclusion:[]
Serial   Phase #: 0 Files: 1     Time: 19s   DB1
Serial   Phase #: 1 Files: 5

... cut

Grand Total Time: 1404s

LOG FILES: (catupgrd*.log)

Grand Total Upgrade Time:    [0d:0h:23m:24s]

And it starts upgrading! So the extra query output in login.sql was the culprit, not the custom prompt itself. If you have a custom login.sql for SQL*Plus, keep an eye out for this during upgrades. This is a rather strange bug, because I had already used catctl.pl with the same login.sql setup to upgrade a non-CDB database to 12.1.0.2.
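
A simple way to keep login.sql out of the picture during upgrades is to hide it from SQL*Plus for that shell session; in my setup it is only found via SQLPATH, so unsetting that should be enough:

[oracle@cdbtest admin]$ unset SQLPATH
[oracle@cdbtest admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'DB1' catupgrd.sql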

But back to the DB upgrade… Now the pluggable database upgrade is finished and we can open it:

sys@cdb2(cdbtest.aws.ee.dom)> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
DB1                            MOUNTED

sys@cdb2(cdbtest.aws.ee.dom)> alter pluggable database db1 open;

Pluggable database altered.

NB! You need to take special care with physical standby databases when going for this upgrade path!

I was looking for PL/SQL examples of using metadata and data filtering in the Data Pump API, but I didn't find any. So here is one example. It uses the table reload_dev_tables to specify which schemas/tables should be exported using Data Pump and which WHERE clause should be applied.

Structure for reload_dev_tables:

 Name                                      Null?    Type                        
 ----------------------------------------- -------- -------------
 OWNER                                     NOT NULL VARCHAR2(40)                
 TABLE_NAME                                NOT NULL VARCHAR2(40)                
 IS_FULL                                   NOT NULL NUMBER(1)                   
 FILTER_PREDICATE                                   VARCHAR2(250)   

The Data Pump code itself was tested on 11.2.0.3. It is just a demonstration of how to use the Data Pump API, specifically the metadata and data filters.
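
A minimal sketch of such a job (not the full original script) is below: it drives a table-mode DBMS_DATAPUMP export from reload_dev_tables and attaches a SUBQUERY data filter for the tables that should be exported only partially. The directory object name DMPDIR is a placeholder.

DECLARE
    h         NUMBER;
    job_state VARCHAR2(30);
BEGIN
    -- Table-mode export job
    h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE');
    -- Dump and log files (DMPDIR is a placeholder directory object)
    dbms_datapump.add_file(handle => h, filename => 'reload_dev.dmp',
        directory => 'DMPDIR', filetype => dbms_datapump.ku$_file_type_dump_file);
    dbms_datapump.add_file(handle => h, filename => 'reload_dev.log',
        directory => 'DMPDIR', filetype => dbms_datapump.ku$_file_type_log_file);
    -- Metadata filters: export only the schemas/tables registered in reload_dev_tables
    dbms_datapump.metadata_filter(handle => h, name => 'SCHEMA_EXPR',
        value => 'IN (SELECT owner FROM reload_dev_tables)');
    dbms_datapump.metadata_filter(handle => h, name => 'NAME_EXPR',
        value => 'IN (SELECT table_name FROM reload_dev_tables)');
    -- Data filters: add the WHERE clause for every table not exported in full
    FOR r IN (SELECT owner, table_name, filter_predicate
                FROM reload_dev_tables
               WHERE is_full = 0 AND filter_predicate IS NOT NULL)
    LOOP
        dbms_datapump.data_filter(handle => h, name => 'SUBQUERY',
            value => 'WHERE ' || r.filter_predicate,
            table_name => r.table_name, schema_name => r.owner);
    END LOOP;
    dbms_datapump.start_job(h);
    dbms_datapump.wait_for_job(h, job_state);
    dbms_output.put_line('Job finished: ' || job_state);
END;
/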

There are a few Oracle database replication solutions on the market:

  • Oracle Streams (powerful and included with the RDBMS license: Oracle SE has trigger-based capture, EE mines the redo logs and log buffer. But it is deprecated, no longer developed, and complex to manage)
  • Oracle GoldenGate (powerful, but very expensive)
  • Tungsten (heterogeneous, but on the Oracle side it requires the deprecated CDC feature and is complex to set up. One interesting feature: you can write data modification plugins that run before data is applied on the target)
  • DBVisit (pretty inexpensive compared to GoldenGate, but powerful)

In this blog post I'll give a short overview of DBVisit Replicate, which can be used to replicate data in real time between two Oracle databases, or from Oracle to MySQL/MSSQL. I am not connected to the DBVisit company in any way; I was testing their replication solution for a client of mine.

A few interesting key concepts behind DBVisit Replicate:

  • It uses optimistic apply on the target side, meaning that data changes are replicated and applied (but not committed) on the target even before the transaction is committed on the source. In case of a rollback, the target database needs to roll back all the changes too. The upside is that committed transactions reach the target faster, even when the transaction is large.
  • DBVisit uses its own change capture process to mine online redo logs, so it does not depend on triggers to log the changes and does not impact end user sessions. The potential downside: Oracle can change the internal structure of the redo logs at any time, so before upgrading the database, check DBVisit compatibility first.
  • DBVisit can run its CPU-intensive processing on a different server, so it does not waste expensive CPU cycles on the Oracle DB server. This is called the 3-tier architecture in DBVisit. In this architecture the source database server only needs to run a small FETCHER process, which sends redo log changes over the network to a dedicated MINE process/server that does the actual log processing. MINE filters out the required database changes and sends the processed information over the network to the APPLY process. APPLY then connects to the target database over OCI (so it does not need to run on the target database server) and executes the DML statements. (Note: the FETCHER process is optional; by default DBVisit runs the MINE process on the source database server.)

DBVisit is very easy to install and it supports RAC and ASM. My setup is an 11.2.0.3 3-node RAC with ASM running on Oracle Linux 5.8. For Grid Infrastructure (ASM), role separation is in use (GI runs under a different OS account than the RDBMS). I'm using DBVisit Replicate 2.4.21 (unreleased at the moment, but it contains many bug fixes needed for my environment).

My simple test setup looks like this:

  • I'm using the default 2-tier architecture, so no FETCHER process. APPLY also runs on the same host as the target database.
  • I'm using the TAR version of the DBVisit software (not RPM), so I can have a single shared copy of the software for all servers in the configuration. If you use RPM, the same RPM package needs to be installed on all servers (and you need root privileges). In my case I'm using an OCFS2 filesystem and the DBVisit software is extracted to /u02/app/oracle/dbvisit.
  • As the processing area for each DBVisit process I'm using /u03/dbvisit/pte in this example. In my current case it is also on an OCFS2 filesystem and shared between all servers, but it does not have to be; when I move this setup to production, I'll also use the 3-tier architecture and local disks.
  • Grid Infrastructure and ASM run under OS account grid.
  • I’m using IP 10.0.0.1 as the server address where MINE is running.
  • I’m using IP 10.0.0.2 as the server address where APPLY is running.

First run dbvrep, the only executable file in the DBVisit Replicate installation package, and complete the initialization wizard.

[oracle@jfadboc1n02.jfa.unibet.com pte]$ /u02/app/oracle/dbvisit/replicate/dbvrep
Initializing......done
Dbvisit Replicate version 2.4.21.2746
Copyright (C) Dbvisit Software Limited.  All rights reserved.
No DDC file loaded.
Run "setup wizard" to start the configuration wizard or try "help" to see all commands available.

dbvrep> setup wizard
This wizard configures Dbvisit Replicate to start a replication process.

The setup wizard creates configuration scripts, which need to be run after the wizard ends. No changes to the databases are made before that.

The progress is saved every time a list of databases, replications, etc. is shown. It will be re-read if wizard is restarted and the same DDC name and script path is
selected.

           Run the wizard now? [yes] yes

           Accept end-user license agreement? (view/yes/no) [view] yes

Before starting the actual configuration, some basic information is needed. The DDC name and script path determines where all files created by the wizard go (and where
to reread them if wizard is rerun) and the license key determines which options are available for this configuration.

           (DDC_NAME) - Please enter a name for this replication (suggestion: use the name of the source database): [] pte

           (LICENSE_KEY) - Please enter your license key (or just enter "(trial)"): [(trial)] trial

           (SETUP_SCRIPT_PATH) - Please enter a directory for location of configuration scripts on this machine: [/home/oracle/pte] /u03/dbvisit/pte

… and so on. At the end, the wizard executes a script on both the source and target databases that creates a DBVREP schema and grants it all necessary privileges. If you enabled DDL replication, it will also enable database-wide supplemental logging on the source database, so check the DBA_2PC_PENDING view beforehand to make sure you don't have any pending two-phase-commit transactions open; otherwise, adding supplemental logging will hang until the 2PC transactions are resolved.
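
The pre-check is a one-liner; if it returns no rows, you are safe to proceed:

SQL> select local_tran_id, global_tran_id, state from dba_2pc_pending;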

The MINE process (or FETCHER in the 3-tier architecture) needs to run directly on the source database server (in case of RAC, pick any one of the database nodes) and under the same OS account as ASM, so in my case grid. The setup wizard creates a script *-run-10.0.0.1.sh to start MINE.

[grid@xxxxxx pte]$ ./pte-run-10.0.0.1.sh
Initializing......done
DDC loaded from database (234 variables).
Dbvisit Replicate version 2.4.21.2746
Copyright (C) Dbvisit Software Limited.  All rights reserved.
DDC file /u03/dbvisit/pte/pte-MINE.ddc loaded.
Starting process MINE...started

The APPLY process shouldn't need installed Oracle client software, because DBVisit Replicate comes with an embedded Oracle Instant Client. In the version I'm currently using this did not work for me, so I needed to add the following line to the *-APPLY.ddc file to set the correct ORACLE_HOME. This bug should be fixed in the next released version.

memory_set ORACLE_HOME /u01/app/oracle/product/11.2.0.3/db

Also open *-run-10.0.0.2.sh (the script that starts the APPLY process) and set NLS_LANG on the first line. NLS_LANG needs to be AMERICAN_AMERICA.<source database character set>, in my case:

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
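
If you are not sure about the source database character set, look it up first:

SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';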

Now start the APPLY process:

[oracle@xxxxxxxxxx pte]$ ./pte-run-10.0.0.2.sh
Initializing......done
DDC loaded from database (234 variables).
Dbvisit Replicate version 2.4.21.2746
Copyright (C) Dbvisit Software Limited.  All rights reserved.
DDC file /u03/dbvisit/pte/pte-APPLY.ddc loaded.
Starting process APPLY...started

Monitoring and configuring the replication process is done through the replication console, which can be started using the start-console.sh script. It displays the status of all DBVisit processes and a limited list of tables that have had changes replicated recently. From this command line you can control the replication process:

/MINE IS running. Currently at plog 13 (redo sequence 1201 [1] 1395 [3] 1086 [2]) and SCN 96447934933 (04/04/2013 16:49:25).
APPLY IS running. Currently at plog 13 and SCN 96447934644 (04/04/2013 16:49:25).
Progress of replication pte:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
DBAUSER.DBVISIT_PING:         100%  Mine:21/21           Unrecov:0/0         Applied:21/21       Conflicts:0/0       Last:04/04/2013 18:24:36/OK
--------------------------------------------------------------------------------------------------------------------------------------------
1 tables listed.

dbvrep>

Some useful commands: LIST PREPARE, PREPARE SCHEMA, PREPARE TABLE, UNPREPARE SCHEMA, UNPREPARE TABLE, SHUTDOWN MINE, SHUTDOWN APPLY, SHUTDOWN ALL, LIST CONFLICT. Before you add (prepare) new tables/schemas with existing data to the replication configuration, take a look at the user's guide for the proper procedure. If you just execute PREPARE TABLE/SCHEMA and then export the existing data, you will get ORA-01466.

For my current project it was very important to find a replication solution that could exclude some transactions from replication, for example when you need to purge data from the source database but want to keep the same data on the target DB. This is possible with DBVisit Replicate:

  • Partition-level DDL is not replicated by default, so on the source database you can just drop/truncate a partition and it will not be replicated to the target side.
  • If you need to exclude specific transactions from replication, execute SET TRANSACTION NAME as the first command in that transaction:
    SET TRANSACTION NAME 'DBREPL_DB_%s_XID_%s'
    The first %s: the name of the target database (as configured in the setup wizard).
    The second %s: not relevant.

    COMMIT; -- just to be sure that the next command is the first in that transaction
    SET TRANSACTION NAME 'DBREPL_DB_archpte_XID_XXXXX';
    DELETE FROM dbauser.dbvisit_ping;
    COMMIT;
    

I think this is enough for a first post. You can do a lot of complex configurations with DBVisit; it is a flexible product. Test your setup properly, as there can be issues depending on your database setup that DBVisit has not yet tested for. If you find an issue, report it to DBVisit support (this can also be done with a trial license); DBVisit has an excellent and fast support team. So far I have created 7 tickets with DBVisit support and all of them have been resolved within hours or a day.

DBVisit also has some helpful videos on YouTube.